

How many qualitative interviews do you need?

The traditional answer is somewhere between 8 and 15 per study, and for most teams that ceiling is set by moderator availability and budget rather than by what the research question actually deserves. That constraint is real, but it is not a property of qualitative research as a method. It is a property of how qualitative research has always been executed. When the execution model changes, so does the answer.


The methodology answer: data saturation

Data saturation is the point at which additional interviews stop producing new information. New participants are confirming patterns already visible in the data. Nothing substantively new is emerging. That is when a study has enough sessions.

For a well-scoped, specific research question with a relatively homogeneous participant group, saturation typically arrives between 6 and 12 interviews. For broader questions, more diverse groups, or studies spanning multiple segments, the range extends to 15 to 20. Academic research on the topic has consistently found that most major themes in a qualitative dataset are visible by the 12th session, and often earlier.

This is the methodologically correct answer to how many interviews you need: enough to reach saturation on your research question, and no more.

The problem is that it assumes saturation is the only constraint. For most teams, it is not even the primary one.


The operational answer: as many as you can get

In practice, most qualitative studies stop not when saturation is reached but when the budget runs out, the moderator's calendar fills up, or the project deadline arrives. Teams run 8 interviews because that is what fits in the timeline, not because 8 is what the question deserves. They run 12 when they can stretch. They rarely run 25 because 25 moderator-led sessions is a three-week field period with a two-week synthesis backlog on the other side.

The ceiling on qualitative sample sizes has never been methodological. It has been operational. The two have gotten conflated because the ceiling was so consistent that it started to feel like a rule of the method rather than a limitation of the execution model.

This matters because the operational ceiling is not neutral. It shapes what gets studied, how confidently findings get acted on, and how often research gets done at all. A team that can run 10 interviews per study will run fewer studies, ask narrower questions, and hold findings with less confidence than a team that can run 30 without meaningfully increasing the cost or timeline.


What changes when the operational ceiling is removed

Fieldwork's Growth plan includes 100 completed interviews per month. That number was not chosen arbitrarily. It is enough to run qualitative research at a scale and cadence that most teams have never had access to before, while remaining well within the budget of a single researcher's tool spend.

The question shifts from "how many interviews can we afford to run?" to "how should we allocate 100 interviews across everything we need to learn this month?"

That is a genuinely different question. It implies portfolio thinking rather than project thinking. Instead of one study per quarter, you are allocating interview capacity across a programme. Some of it goes to always-on research that runs continuously in the background. Some goes to specific questions feeding sprint cycles. Some gets concentrated into a larger study when a high-stakes decision needs the confidence that scale provides.


Three ways to allocate 100 interviews a month

Always-on continuous research (30-40 interviews)

A live study running permanently, collecting sessions as participants arrive. Low volume per week, but consistent. Over a month it builds a meaningful dataset on a recurring question: how are new users experiencing onboarding? What's driving support contacts this week? What are churned customers saying about why they left?

This is the qualitative equivalent of always-on analytics. It doesn't replace periodic deep-dive studies. It fills the space between them with continuous signal.

Sprint-cycle research (20-30 interviews per sprint)

A focused study designed around a specific product question with a two-week fieldwork window. Enough participants to reach saturation with confidence, enough diversity to stress-test early patterns, and fast enough to deliver findings while the sprint they are meant to inform is still open.

At 20 to 30 interviews, you're running studies that are larger and more confident than most teams manage in a quarter. At a monthly cadence, you're running more research than most teams run in a year.

Qual at the scale of quant (50-100 interviews in a single study)

For the questions that matter most, you can now run qualitative research at a scale that was previously available only to teams with large budgets and long timelines. Fifty in-depth interviews on a strategic question produce findings with a confidence and richness that no survey can match. The method is still qualitative, still adaptive, still probing for the reasoning and experience behind behaviour.

The difference between 12 qualitative interviews and 50 is not just statistical confidence. It is the ability to see patterns within segments, to identify the exceptions that complicate the main finding, and to arrive at conclusions that stakeholders can't dismiss as a small sample. That study used to take six to eight weeks of moderator time. It now takes as long as it takes participants to complete it.


What this looks like in practice

A research lead at a B2B SaaS company has historically run one major qualitative study per quarter, roughly 12 interviews per study, with synthesis taking a week after fieldwork closes. Four studies a year. Forty-eight interviews. A meaningful programme by most standards.

With 100 interviews available per month, she restructures the programme.

Twenty interviews run continuously across two always-on studies: one on new user activation, one on churn signals. These run every month without a formal project kick-off, accumulating signal on the two questions her team is asked about most often.

Thirty interviews go to sprint-cycle research: one focused study per sprint, fast enough to feed the design and engineering teams on their actual timeline rather than a retrospective one.

The remaining fifty go into a quarterly deep-dive study: a 50-session programme on a strategic question that previously would have taken the full quarter's budget. In a month of fieldwork, she has more high-quality qualitative evidence on that question than the company has produced in the past two years combined.

The research programme is not faster at the expense of depth. It is deeper, more continuous, and more responsive than it has ever been: run by the same person, on the same budget line, using the same methodology.


You can still run small studies

Nothing about having 100 interviews available means every study needs to use them all. A quick pulse check on a narrow question still saturates at 6 to 8 sessions. A concept test before a design review might need 10. A fast feasibility check before a larger study might need 5.

The point is not that small studies are no longer valid. They are still the right tool for the right question. The point is that small studies are now a choice rather than a constraint. You run 8 interviews because 8 is what the question needs, not because 8 is all you can manage.

That distinction matters more than it might seem. When sample size is a constraint, researchers unconsciously scope their questions down to fit it. When sample size is a choice, research questions can be as large as they need to be. Teams that have always asked small questions because they could only afford small answers can now ask the big ones.


Frequently asked questions

Is there still a minimum number of interviews for qualitative research to be valid?

Yes. With fewer than 5 or 6 interviews on a well-scoped question, it is difficult to distinguish a genuine pattern from coincidence. That floor doesn't change regardless of how many interviews you have available. The minimum is set by methodology. The ceiling is where things change.

Does running more interviews mean spending more time on synthesis?

With AI-conducted interviews, synthesis can begin as sessions complete rather than waiting for fieldwork to close. A 50-session study doesn't produce a 50-session synthesis backlog if the analysis is running in parallel with the fieldwork. The coverage data, automated quality checks, and emerging themes are visible throughout the field period, not only at the end.

How do we decide how to allocate interviews across multiple studies?

Start with the decisions your organisation needs to make in the next month. Which ones are being made on incomplete understanding? Which ones could be improved by qualitative evidence? Allocate interviews proportionally to decision importance and urgency. Always-on studies should anchor the allocation first, then sprint-cycle research, then any large strategic study.

What is the difference between running 50 qualitative interviews and a quantitative survey?

The method is fundamentally different. A 50-interview qualitative study still involves adaptive, in-depth conversations that probe for reasoning and experience. A survey collects responses to fixed questions from a larger sample. The qualitative study tells you why. The survey tells you how many. At 50 interviews, the qualitative study gains the ability to see patterns within segments and identify exceptions, but it remains a qualitative study in methodology and output.

Can always-on research replace periodic deep-dive studies?

No, and it shouldn't try to. Always-on research surfaces patterns and signals continuously, but typically at shallower depth than a purpose-built study. Deep-dive studies are better for complex questions that require a dedicated design, controlled recruitment criteria, and focused synthesis. The two are complementary: always-on research tells you where a deep dive is worth running, and deep-dive studies answer the questions that always-on research surfaces.

What happens to unused interviews at the end of the month?

Interview counts reset monthly on paid plans. Unused capacity doesn't roll over. This is a reason to structure a continuous research programme rather than saving capacity for a single project: a programme that uses interviews consistently every month builds knowledge continuously rather than in periodic bursts with gaps in between.


Last updated: 2026-04-21
