DoorDash's v1 was 8 PDF menus on a static HTML website
Revisiting the Minimum Viable Product strategy and its strengths/weaknesses
The DoorDash product, version 1.0
Many of you may know I’ve been busy organizing a16z speedrun, our 12-week program based in SF/LA where we invest up to $1M in seed/pre-seed startups. As part of this, I enlisted a number of great founders to come speak to the startups, including the founders of Figma, Supercell, Zynga, Carta, Twilio, and many others, and had the opportunity to do a Q&A with Tony Xu about the early days of DoorDash. And there was a funny and informative story about how it started.
Here was the “version 1.0” of DoorDash:
static HTML page
8 restaurant menus in PDF
a Google Voice number that rang one of the founders
originally called "Palo Alto Delivery"
took all of 45 min to build
Incredible that something so simple could eventually blossom into a product that millions of people use every day.
This essay could just be a parable about launching a new product, and the benefits of the Minimum Viable Product. But I’m not here to bore you, so we won’t recount all the benefits of MVPs; the idea has been part of the startup vernacular ever since Eric Ries popularized it in his book, The Lean Startup.
We don’t need to rehash things — instead, I want to talk through the lessons learned in the past decade from actually trying to put MVPs into action, and further, what it means in the era of AI. After all, isn’t it confusing that many of the AI products that are launching today had multiple years of heads-down academic research and weren’t shipped as MVPs? Is the idea of an MVP still applicable?
The problems that plague MVPs
These are important questions, alongside a number of common problems that constantly trip up teams when MVPs are employed during a zero-to-one period of product development. Product teams often encounter issues like:
constant testing with inconclusive results (and thus, unclear direction)
false negatives because of incomplete products
works for “single player” products but hard to test within communities/networks
hard to justify breakthroughs/research that inherently require a long build cycle
overreliance on data versus true customer insight
inability to compete against pre-existing products in an existing market
local optimization into a mediocre product that is never great
… and much more.
By now many of these probably seem familiar. And so you see the real topic of this essay: all the endless problems and debates that arise when you try to implement the MVP as part of your product development strategy, and why it’s not a silver bullet.
Let’s start with a few observations on testing and interpreting results:
you should expect your tests to be mostly inconclusive. The zero-to-one process of building new products is brutal - it's mostly just repeated failure, particularly if you are in a new category. One thing you’ll notice about v1s of products is they tend to end in ambiguous (and negatively skewed) results. Did your product fail because you didn’t build enough features? Or maybe the branding was off? Or maybe you just needed better onboarding? Expect your product team to spin their wheels each time there’s a failed experiment. To make these failures useful, you need to actually run a clean test that leads to a conclusion. If the most likely outcome is failure, then you need to at least be able to cross off a few things and maybe get a glimmer of success. This naturally leads to strategies like testing one thing at a time, and making the One Main Feature the core of the product experience. If it’s buried underneath a bunch of other things, then the results will be inconclusive.
false negatives cause false restarts. The most dangerous outcome in product testing is getting false information, which most often arrives in the form of false negatives, since (as discussed above) building new products mostly means repeated failure. MVPs that are too minimal often fail because they look weak compared to existing products - they're bare-bones in features, branding, and UX. The resulting engagement metrics will likely hit rock bottom compared to what you'd see after proper iteration and refinement. And here's the real problem: you might wrongly conclude that the entire product direction is worthless. This is doubly damaging - not only might you abandon a potentially promising path, but after accumulating several of these false negatives, you'll likely get discouraged and give up. This triggers a complete product reset where you pivot to an entirely new direction and start over. It's a cycle of experimentation that generates very little actual market insight.
expect to launch, then re-launch, then re-launch again. What’s your strategy? Of course, every test of an MVP requires a follow-up test. That way you can double-check an insight by building on it, refining it, and seeing if there is indeed a causal link as you surmised. But running a follow-up test is hard, because your existing users are now tainted by the previous test — so what do you do? Perhaps they’ll reject the v2 of the product because they already tried v1. Thus, you’ll need some kind of experimentation process in which you can pull new users off of a waitlist and put them into a new experience (a minimal sketch follows below). Or if you have a social product, you’ll want to onboard various self-contained teams onto your collaboration tool, or different high schools onto your communications app. If you don’t have a clear strategy that allows you to test across dozens of cohorts, then you will be limited in the number of tests you can run, and how quickly you can learn.
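To make the cohort mechanic concrete, here’s a minimal sketch of what pulling fresh users off a waitlist might look like. The function and names are purely illustrative assumptions on my part, not any team’s actual process:

```python
import random

def assign_cohorts(waitlist, experiments, cohort_size, seed=42):
    """Assign fresh waitlist users to isolated cohorts, one per experiment,
    so no user's reaction is tainted by exposure to a previous test."""
    rng = random.Random(seed)
    pool = list(waitlist)
    rng.shuffle(pool)
    cohorts = {}
    for experiment in experiments:
        # Each experiment only sees users who haven't been tested before.
        take = min(cohort_size, len(pool))
        cohorts[experiment] = [pool.pop() for _ in range(take)]
    return cohorts

# Example: three onboarding variants, 50 untainted users each.
cohorts = assign_cohorts(
    waitlist=[f"user_{i}" for i in range(1000)],
    experiments=["onboarding_v1", "onboarding_v2", "onboarding_v3"],
    cohort_size=50,
)
```

The point of the structure is simply that each new test draws from users who have never seen the product, which is what makes the next result interpretable.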
While testing helps startup teams navigate the Idea Maze from MVP to market-winning product, this view overlooks something crucial: you can learn immensely from studying the successes and failures already in your market, rather than trying to recreate all that knowledge from scratch. This is why building encyclopedic market knowledge is so valuable when entering an existing market versus creating a new category. In an established market, you start with clear signals about customer needs and how different products position themselves. Yes, you'll need to build more functionality to compete with existing players, but you benefit from the accumulated domain expertise in the space.
I strongly prefer this approach over pioneering entirely new product categories. With a new category, you have no idea if there's actually a "there there." Even if you iterate to a seemingly viable product, you can't be certain it will have the business characteristics you want. Take travel products, for instance - their inherently low usage frequency and high customer acquisition costs are fundamental to the category. You could learn this the hard way through iteration, or you could recognize these characteristics upfront by understanding the market dynamics.
MVPs may or may not apply to your product category and stage
As I mentioned earlier in the essay, it’s a weird time for AI products and the MVP concept. What does it mean to MVP a foundation model product, when the initial steps of creating it might entail many hundreds of millions of dollars of training costs and R&D? Maybe it’s just to say that the “M” in MVP in this case is rather large, but I think this idea crops up in a number of places because the MVP is market-dependent.
Here’s what I mean by that:
MVPs are often okay-but-not-great products, and okay products lose in mature markets. Building a mobile app now is different than it was in 2010 when apps were brand new — today, your customers expect strong design and polish, and you are likely competing against a large number of pre-existing apps. Contrast this to the earliest days of mobile apps, when competition was low and even scrappy small apps could get big (ex: all the flashlight and fart apps). MVPs tend to compete well only in the early phase of an S-curve, when the “it works” feature is all it takes and minimal can actually mean minimal. But I am not surprised that a product like Figma or Notion took 4+ years to build the v1, because expectations are much higher in their respective categories. Further, great products often require that extra polish that almost feels low ROI — but loyal, repeat users can often see and appreciate all of that polish, and the little bits of ROI accumulate over a long time. This has led to the observation that many products following the MVP theory end up with shitty UXes, iteratively bolted on, unless they take a real beat to polish everything out as they converge.
some categories just require big upfront investment. These might sound like extreme examples, but consider startups aiming to build new nuclear reactors, supersonic aircraft, cancer treatments, or humanoid robots. While there would clearly be customers if you succeeded in building these products, it would take hundreds of millions or even billions of dollars just to create the first working version that someone would actually buy. This isn't to say startups in these categories can't become extremely valuable - they absolutely can. But the reality is they require massive upfront capital, R&D resources, and time. You simply can't create a truly minimal V1 in these spaces. And that's okay. These are obviously extreme examples, but we're seeing similar patterns emerge in other categories. The AI foundation model startups we discussed earlier share some of these characteristics. This is just to say, an MVP might not be possible and may not be a great tool in these cases. Yet these new product ideas might still be amazing opportunities.
it’s easy to create an overreliance on data versus true customer insight. Over the past decade in tech, we've seen metrics come to dominate product strategy over qualitative insights. This is natural given our access to A/B testing, analytics, and systems like OKRs that drive rigorous execution across product organizations. The problem is that the data that tends to dominate is what's easily measurable - not necessarily what drives meaningful product outcomes. While these incremental metrics can help scale an already-successful product, they're simply not enough for zero-to-one products that need to multiply by orders of magnitude to become relevant. Product leaders need to understand the true market opportunity, make the hard calls about direction, and then leverage these quantitative tools to optimize once that bigger strategic goal has been set.
A corollary to all this — in today's tech landscape, where the product culture has turned so metrics-driven, the biggest opportunities might actually lie in areas that require intuition to discover. Taking an intuitive, qualitative approach can be faster than relying on A/B testing - especially for new products where low user numbers make data collection painfully slow.
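To see why the data trickles in so slowly at low volume, consider a rough power calculation using the standard two-proportion sample size formula. The baseline rate, lift, and traffic numbers below are illustrative assumptions, not figures from any real product:

```python
from math import sqrt
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Users needed per variant to detect a conversion change from p1 to p2
    (standard two-proportion z-test sample size formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96
    z_power = NormalDist().inv_cdf(power)          # ~0.84
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return n

# Assumed: 5% baseline conversion, hoping to detect a lift to 6%.
n = sample_size_per_arm(0.05, 0.06)
print(f"~{n:,.0f} users per variant needed")  # roughly 8,000 per arm
```

At, say, 100 new signups a week, a simple two-arm test at these rates would need over three years of traffic to reach significance, which is exactly why early-stage teams lean on qualitative judgment instead.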
Your initial product direction, after all, requires exceptional judgment. Pick the right starting point, and you'll be miles ahead of someone who chose poorly and tried to iterate frantically toward success. The ubiquity of metrics-oriented thinking means that breakthrough opportunities often exist precisely where data-driven product leaders won't look. This typically means entering mature markets or categories requiring significant upfront investment. After all, if it were simple and immediately measurable, big tech companies would have already pursued it.
What does this mean about MVPs in today’s age?
Let me be clear: my concerns about MVPs shouldn't be interpreted as a wholesale rejection of the concept. I'm often the first person telling companies who've been building products in isolation for 12+ months to "just ship something already" or to choose product areas where shipping a V1 is actually feasible.
However, I've grown increasingly skeptical of certain product validation approaches I previously championed: landing pages with email capture, social media traction metrics, or viral preview videos - while seemingly indicative of market interest - rarely translate to actual product stickiness. The key is solving existing customer problems, ones that likely have precedent and current solutions in the market, rather than attempting to create entirely new categories from scratch. The romantic notion of ideating cleverly, shipping an MVP, zooming in on a promising feature, shipping another MVP, and repeating ad infinitum often leads teams to iterate endlessly without direction. Instead, products need a strategic starting point in an attractive market - typically validated by the presence of other players in the space.
Further, the startup journey isn't really about shipping a single MVP. Instead, we need to recognize it for what it actually is: shipping MVP after MVP in a long series of potential failures, punctuated by occasional glimmers of customer interest. You'll need to repeat this cycle many times - drawing conclusions, raising money, keeping your team aligned, and maintaining customer relationships throughout. This constant iteration is what makes the startup experience so challenging. When you view it as a long-term hill-climbing exercise, you quickly realize that reducing your product vision to a single minimal version only gets you through the first few steps of what is ultimately a very long journey.
I started this essay by highlighting DoorDash's elegant MVP experiment, and while my short 40-second post resonated with many readers, it only scratches the surface. The full interview with Tony ran nearly an hour, revealing the countless iterations and tests they ran before finding product-market fit. It took several years before DoorDash's trajectory became clear - a reminder that the startup journey is a long, challenging road. I'm grateful to Tony for sharing these insights during our a16z speedrun session, offering a rare glimpse into the methodical process behind what's now a household name. You can find the longer version of the DoorDash interview here.