Hello Hacker News!

Development got 10x faster. Research didn't. We're fixing that.

AI coding tools have made development dramatically faster. A solo founder can ship a product in a weekend. A small team can build and deploy features in hours.

But knowing what to build hasn't changed. Customer interviews still take weeks. Surveys still get ignored. Research firms still charge $15,000 and take six weeks. So most product teams skip research and guess. Teams are now building the wrong things faster than ever.

We built Inqvey to close this gap. It's a market research engine that returns quantitative data equivalent to surveying 1,000 people in about 15 minutes.

How it works

You create a Market Twin: a simulated model of your target audience equivalent to 1,000 respondents. Describe them in plain language (“B2B product managers at SaaS companies, US, 50-500 employees”) or paste your website URL and our AI figures it out. Takes two minutes.

Then ask questions. Plain English or a structured research mode: pricing, messaging, purchase intent, or feature prioritization. The engine returns probability distributions. Real percentages, not paragraphs. “Purchase intent: 62%. Top barrier: free alternatives at 38%.” Results in 15-20 minutes. Your twin is persistent, so you can come back anytime.

How the engine works

Most approaches in this space generate thousands of individual simulated personas, ask each one your question, and aggregate the answers. We think that's backwards. You're creating artificial noise and then averaging it out. The noise doesn't add information.

We skip the personas entirely and estimate the aggregate directly. The engine runs multiple independent AI evaluations for each question. Each evaluation approaches the question from a different angle, using different reasoning paths. Think of it as polling multiple independent evaluations rather than running one evaluation multiple times.

Results are combined using geometric pooling in log-space. Geometric pooling naturally dampens outlier evaluations rather than letting them pull the average. After pooling, we apply calibration and debiasing specifically built for survey research patterns. AI models have known biases when making survey-like predictions: agreement bias (overpredict agreement with positively framed statements) and extreme response avoidance (compress distributions toward the center). Our calibration layer corrects for these using adjustments derived from real human panel comparisons.
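
To make the pooling step concrete, here is a minimal sketch of a logarithmic opinion pool (a weighted geometric mean, renormalized). The function name, weights, and example numbers are ours for illustration, not Inqvey's actual implementation:

```python
import numpy as np

def geometric_pool(distributions, weights=None):
    """Combine probability distributions over the same response options
    by averaging in log-space (a weighted geometric mean, renormalized)."""
    dists = np.asarray(distributions, dtype=float)
    if weights is None:
        weights = np.full(dists.shape[0], 1.0 / dists.shape[0])
    weights = np.asarray(weights, dtype=float)
    # Weighted sum of log-probabilities; the epsilon guards against log(0).
    log_pool = np.sum(weights[:, None] * np.log(dists + 1e-12), axis=0)
    pooled = np.exp(log_pool)
    return pooled / pooled.sum()  # renormalize to a valid distribution

# Three independent evaluations of one three-option question.
evals = [
    [0.60, 0.25, 0.15],
    [0.65, 0.20, 0.15],
    [0.40, 0.45, 0.15],
]
print(geometric_pool(evals))    # pooled distribution
print(np.mean(evals, axis=0))   # linear average, for comparison
```

Working in log-space turns the geometric mean into a plain weighted sum, which keeps the combination numerically stable.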

Consistency checks use Jensen-Shannon divergence to measure agreement between evaluations. High divergence flags unstable outputs that should be treated with more caution. Everything is logged and auditable.
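
A check along these lines can be sketched with SciPy, which exposes the Jensen-Shannon distance (the square root of the divergence). The threshold below is an illustrative placeholder, not a product value:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def stability_check(distributions, threshold=0.1):
    """Return the maximum pairwise Jensen-Shannon divergence across
    evaluations, plus a flag when it exceeds the threshold."""
    dists = [np.asarray(d, dtype=float) for d in distributions]
    max_jsd = 0.0
    for i in range(len(dists)):
        for j in range(i + 1, len(dists)):
            # SciPy returns the JS distance; square it for the divergence.
            # With base=2 the divergence lies in [0, 1].
            max_jsd = max(max_jsd, jensenshannon(dists[i], dists[j], base=2) ** 2)
    return max_jsd, max_jsd > threshold

evals = [[0.60, 0.25, 0.15], [0.65, 0.20, 0.15], [0.30, 0.40, 0.30]]
jsd, unstable = stability_check(evals)
print(f"max pairwise JSD: {jsd:.3f}, unstable: {unstable}")
```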

The output is a calibrated probability distribution across response options with accuracy indicators. Not text. Numbers.

Accuracy

Benchmarked against Ipsos human panel studies with over 1,000 real respondents each.

Consumer spending study (Ipsos, N=1,116): ±2.2pp
Consumer influence study (Ipsos, N=1,116): ±2.9pp

Traditional human panels carry a ±3-4pp margin of error at N=1,000, so our accuracy is in the same range. Full question-by-question comparisons are at inqvey.com/benchmarks, including where we were off. Additional benchmarks available.
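
For context, the ±3-4pp figure matches the textbook 95% margin of error for a proportion near 50% at N=1,000:

```python
import math

n, p = 1000, 0.5                  # p = 0.5 is the worst case for a proportion
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"±{moe * 100:.1f}pp")      # ±3.1pp at 95% confidence
```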

What this is and isn't

Good for directional quantitative decisions, where being roughly right in 15 minutes beats being precisely right in six weeks. Not a replacement for real user conversations or high-precision studies where ±1pp matters. This is for teams who currently have zero data and need something better than guessing.

We used it on ourselves

Every market stat we cite came from our own product: 89% of product teams make decisions without market data, and only 14% have reliable access to it. We eat our own dog food.

Pricing

Starter: $49/mo. 20 research runs, 60 questions, 3 audiences.
Pro: $99/mo. 60 research runs, 180 questions, 10 audiences.
Free trial: 3 research runs. No card.

Inqvey: Predictive Market Intelligence