In-product interview SDK — capture research at the moment it happens

The Fieldwork Ambient SDK embeds Sofi directly in your product. When a user completes onboarding, uses a key feature, or shows signs of dropping off, Sofi can interview them right there, in context, without anyone scheduling a session. You get qualitative insight at the moment that matters—not only weeks later in a separate study.

What does always-on qualitative research actually mean?

Traditional qualitative programmes are project-shaped: define scope, recruit, schedule, interview, then analyse. That rhythm is powerful for big questions, but it leaves quiet weeks where product reality keeps moving.

Always-on qualitative research means interviews can be triggered from what users do in the product, so learning continues between those larger projects. Product teams hear language and reasoning in context; research teams still own methodology, consent, and study quality.

How does implementation work?

  1. Install the SDK in your product (npm package for web; React Native support on the Scale plan).
  2. Create a project in Fieldwork and connect it to one or more studies.
  3. Define trigger rules: which events or traits launch an interview—for example, first successful export, onboarding completion, or a risk score threshold.
  4. Sofi runs a conversational interview inside your product UI.
  5. Transcripts, themes, and topic coverage data sync to your Fieldwork workspace; webhooks can push outcomes to internal systems.
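Step 3 above can be sketched in miniature. Everything here, from the `TriggerRule` shape to the `matchTrigger` helper and the event names, is a hypothetical illustration of how trigger rules might be modelled, not the SDK's actual API:

```typescript
// Hypothetical shapes for illustration only; the real SDK API may differ.
type ProductEvent = { name: string; traits: Record<string, unknown> };

type TriggerRule = {
  studyId: string;                                        // study to launch
  event: string;                                          // event name that can trigger it
  where?: (traits: Record<string, unknown>) => boolean;   // optional trait filter
};

// Step 3 in miniature: decide whether an incoming event should launch an interview.
function matchTrigger(rules: TriggerRule[], ev: ProductEvent): TriggerRule | null {
  for (const rule of rules) {
    if (rule.event === ev.name && (!rule.where || rule.where(ev.traits))) {
      return rule;
    }
  }
  return null;
}

const rules: TriggerRule[] = [
  { studyId: "first-export-study", event: "export.completed", where: t => t.firstTime === true },
  { studyId: "onboarding-study", event: "onboarding.completed" },
];

const hit = matchTrigger(rules, { name: "export.completed", traits: { firstTime: true } });
// hit?.studyId === "first-export-study"
```

The trait filter is what lets "first successful export" behave differently from every later export of the same file type.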

How do resolver rules route participants to studies?

Resolver rules map incoming signals—plan tier, feature usage, lifecycle stage—to the study that should run. When multiple rules match, priority ordering determines which study fires first.

Frequency caps and cooldown windows protect the participant experience. On Growth plans and above, you can rotate studies for A/B style programmes without rebuilding your instrumentation each time.
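The priority behaviour can be sketched as follows; the rule shapes and field names are assumptions for illustration, not Fieldwork's real schema:

```typescript
// Illustrative shapes; the real Fieldwork rule schema may differ.
type Signal = { planTier: string; lifecycleStage: string; featureUsed?: string };

type ResolverRule = {
  studyId: string;
  priority: number;                      // lower number wins when several rules match
  match: (signal: Signal) => boolean;
};

// When multiple rules match, priority ordering decides which study fires.
function resolveStudy(rules: ResolverRule[], signal: Signal): string | null {
  const matches = rules.filter(r => r.match(signal));
  if (matches.length === 0) return null;
  matches.sort((a, b) => a.priority - b.priority);
  return matches[0].studyId;
}

const rules: ResolverRule[] = [
  { studyId: "pro-feature-adoption", priority: 1, match: s => s.planTier === "pro" && s.featureUsed === "export" },
  { studyId: "general-feedback", priority: 5, match: s => s.lifecycleStage === "active" },
];

// Both rules match this signal; the higher-priority study wins.
const chosen = resolveStudy(rules, { planTier: "pro", lifecycleStage: "active", featureUsed: "export" });
// chosen === "pro-feature-adoption"
```

The design point is that routing lives in configuration, not instrumentation: swapping which study a signal resolves to should not require re-shipping the product.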

What is Fieldwork Ambient SDK not meant to replace?

It does not replace moderated sessions when you need highly open-ended exploration, sensitive facilitation, or relationship-heavy contexts. It complements analytics by adding the why, not by substituting for behavioural measurement.

It is strongest for post-onboarding depth, feature feedback loops, adoption mysteries, and churn investigation—places where timing and context matter as much as the questions you ask.

What does an in-product interview look like in practice?

A B2B SaaS team wants to understand first exports: what users thought they were doing, whether the file matched expectations, and what happens next in their workflow. The SDK triggers a short interview right after the first successful export event.

The brief

"Understand what users were trying to accomplish with their first export, whether the output matched expectations, and what they will do with the data."

Example conversation · not live data

In-product prompt: You just exported for the first time. Got two minutes to tell us whether this matched what you needed?

Sofi: What were you trying to accomplish with that export—and did the file match what you expected?

Participant: I needed to send totals to finance. The CSV opened fine, but the column names did not match our template.

Sofi · following the thread: What would have to be true for you to trust this export every month without manual cleanup?

Transcripts and topic coverage land in the workspace automatically; webhooks can notify your data stack the same hour.
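A webhook consumer might look something like the sketch below. The payload fields (`studyId`, `themes`, `topicCoverage`, and so on) are assumptions about what an outcome could carry, not Fieldwork's documented schema:

```typescript
// Hypothetical webhook payload; field names are assumptions, not the real schema.
type InterviewOutcome = {
  studyId: string;
  sessionId: string;
  completedAt: string;                      // ISO 8601 timestamp
  themes: string[];                         // themes tagged on the transcript
  topicCoverage: Record<string, boolean>;   // brief topic -> was it covered?
};

// One thing a receiving service might compute before forwarding downstream:
// the share of brief topics the interview actually covered.
function coverageRatio(outcome: InterviewOutcome): number {
  const covered = Object.values(outcome.topicCoverage);
  if (covered.length === 0) return 0;
  return covered.filter(Boolean).length / covered.length;
}

const sample: InterviewOutcome = {
  studyId: "first-export-study",
  sessionId: "sess_123",
  completedAt: "2024-05-01T14:03:00Z",
  themes: ["column naming", "finance handoff"],
  topicCoverage: { intent: true, expectations: true, nextSteps: false },
};
// coverageRatio(sample) === 2 / 3
```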

Frequently asked questions

What does "ambient" mean in Fieldwork Ambient SDK?

Ambient means interviews surface in the flow of normal product use, triggered by behaviour rather than scheduled by a researcher. A user who just finished onboarding might see a short invitation to talk through their first experience. Someone showing churn risk might be invited to explain what is not working. The research programme keeps running between major study cycles without manual coordination for each session.

Do we need to replace our existing analytics tools?

No. The Ambient SDK adds qualitative depth—the language and reasons behind behaviour—alongside what analytics already measures. It tells you why users took an action, not only that they did. Most teams run it next to Mixpanel, Amplitude, PostHog, or an internal warehouse pipeline.

Is this for product teams or research teams?

Both. Product managers get in-context signal that helps interpret funnels and feature adoption. Researchers keep control of study design, consent, topic coverage standards, and how interviews are routed. The execution layer scales without turning product experimentation into an unmanaged mess of one-off conversations.

How is participant consent handled for in-product interviews?

Every participant sees a consent prompt before the interview begins, with a link to your privacy policy. Fieldwork records consent for each session—including timestamp, study, and locale—so you have an audit trail per conversation rather than a one-time banner click that no longer matches what was actually fielded.

What frameworks and platforms does the SDK support?

Growth plans support web products through the JavaScript SDK with webhook delivery of outcomes. Scale adds React Native support and headless mode for custom interview surfaces. Enterprise plans add custom domain configuration and single sign-on for larger deployments.

How do trigger rules and study routing work?

You define resolver rules that map user events and traits to specific studies—for example, Pro-plan customers who just used a feature for the first time route to a feature-adoption study. Priority ordering decides which study runs when multiple rules match. Frequency caps and cooldown periods reduce the risk of over-interviewing the same people, and Growth-and-above plans can rotate studies for experimentation.
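A sketch of how caps and cooldowns might compose into a single eligibility check; the 90-day quarter and the exact cap and cooldown semantics here are assumptions for illustration, not Fieldwork's documented behaviour:

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Illustrative guardrail check; cap and cooldown semantics are assumptions,
// not Fieldwork's documented behaviour.
function isEligible(
  pastInterviews: number[],   // epoch-ms timestamps of this participant's interviews
  now: number,
  cooldownDays: number,
  maxPerQuarter: number,
): boolean {
  const last = Math.max(0, ...pastInterviews);
  if (now - last < cooldownDays * DAY_MS) return false;       // still in cooldown
  const quarterAgo = now - 90 * DAY_MS;
  const recent = pastInterviews.filter(t => t >= quarterAgo).length;
  return recent < maxPerQuarter;                              // under the frequency cap
}
```

Either guardrail alone can block an invitation, which is why a matching resolver rule does not guarantee an interview fires.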

Related: Fieldwork Interviews, ResearchOps, and pricing.