Vibe coding is revolutionizing software development. We think "vibe analysis"—think "Cursor for data & analytics"—has the potential to be even bigger. There are about 2 million software engineers in the US, while 4 million data professionals currently support business roles (sales, marketing, operations, etc.) with ad-hoc data requests and building reports.

In the US alone, we’re spending 10+ billion hours a year building and reviewing these reports. Our thesis is that AI will greatly increase that number. Many more people will do analysis, much faster and much more often (much like what is happening with coding tools).

What Cursor, Claude Code, Lovable, etc. are doing for code, we're doing for data. Our two goals are:

  1. equip data engineers with AI agents to get more done, faster
  2. equip business users with AI agents to explore data on their own

“text-to-sql” is solved

We define “text-to-sql” as: converting an unambiguous natural language question into a SQL query, given all required context.
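To make the definition concrete, here's a toy sketch (the schema, question, and query are all hypothetical): given an unambiguous question and the table context it needs, translating to SQL is mechanical.

```python
import sqlite3

# Hypothetical schema and data, standing in for "all required context".
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, created_at TEXT);
    INSERT INTO orders (amount, created_at) VALUES
        (120.0, '2025-01-15'), (80.0, '2025-01-20'), (200.0, '2025-02-03');
""")

# Question: "What was total order revenue in January 2025?"
# With the schema above as context, the question is unambiguous,
# and the translation to SQL is direct:
sql = """
    SELECT SUM(amount) AS revenue
    FROM orders
    WHERE created_at BETWEEN '2025-01-01' AND '2025-01-31'
"""
revenue = conn.execute(sql).fetchone()[0]
print(revenue)  # 200.0
```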

Based on this definition, text-to-sql is more or less solved.

To prove the point, check out this eval from Rishabh in Feb 2025. It shows top models hitting 97%+ accuracy at generating correct SQL when given adequate instructions (even prior to the release of models like Sonnet 4, Gemini 2.5 Pro, Grok 4, etc.).


“self-serve” still isn’t solved

So, if text-to-sql is solved… why aren't companies everywhere rolling out AI data analysts? Shouldn’t every user have an AI to query, visualize, and find insights?

The issue isn’t writing good SQL. The issue is a lack of context and documentation. To help people understand this, I like to make this analogy:

Imagine you hire a data guy with 10+ years of experience. He's a SQL whiz. Day one, he gets access to your company’s data warehouse, dbt repo, BI tools, etc., and then he’s immediately cut off from the rest of the data team. He’s on his own, with only the context that is found within those few tools.

Will he be able to answer 100 stakeholder data requests with 100% accuracy? Definitively, no. There is no way in hell he knows all the tribal context the rest of the team carries in their heads.

Without clean models and docs, he is forced to make assumptions at every turn. An AI analyst is virtually the same: excellent at writing SQL, making high-quality assumptions, and even validating those assumptions (IMO, Buster is actually better at this than most human analysts). But it just can’t know what it doesn’t know.
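Here's a toy illustration of why missing context, not SQL skill, is the failure mode (the schema and conventions are hypothetical): the same question produces different answers depending on undocumented conventions in the data.

```python
import sqlite3

# Hypothetical table. "How many customers do we have?" sounds unambiguous,
# but the correct answer depends on conventions no tool documents.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT, is_deleted INTEGER);
    INSERT INTO customers (email, is_deleted) VALUES
        ('a@acme.com', 0), ('b@acme.com', 0),
        ('qa+test@internal.co', 0),  -- internal test account
        ('c@acme.com', 1);           -- soft-deleted row
""")

# Naive count: flawless SQL, wrong answer (includes test and deleted rows).
naive = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

# Count with tribal knowledge: exclude soft deletes and internal accounts.
informed = conn.execute("""
    SELECT COUNT(*) FROM customers
    WHERE is_deleted = 0 AND email NOT LIKE '%internal.co'
""").fetchone()[0]

print(naive, informed)  # 4 2
```

Both queries are syntactically and semantically valid SQL; only documentation (or a teammate) tells you which one the stakeholder actually wants.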

the three bottlenecks keeping companies from “self-serve”