Revenge of the GPT Wrappers: Defensibility in a world of commoditized AI models
Why network effects and distribution will be king, once more
Growth and network effects soon to return as a dominant force
The AI landscape has evolved a ton in the past year, with many new entrants, booming traction for AI-first products, and existential questions for foundation model startups. These questions include:
What if there’s low defensibility for AI model startups, and there continue to be open source alternatives and new entrants that erode advantage over time? Who ends up winning instead?
New AI-first apps benefit from a novelty effect and are seeing stunning growth. But imagine that, with time, this goes away as AI becomes an expectation, not a novelty. In a world of millions of new products, who wins the distribution game? How will products grow and reach their customers in a crowded market?
Imagine it becomes truly trivial to copycat another product — something as simple as, “hey AI, build me an app that does what productxyz.com does, and host it at productabc.com!” In the past, a new product might have taken a few months to copy, and enjoyed a bit of time to build its lead. But soon, perhaps it will be fast-followed nearly instantly. How will products hold onto their users?
In recent years, innovative AI products that didn’t build their own models were derided as low-tech “GPT wrappers.” Yet consumer products for the past few decades have been low-tech with seemingly small moats, and they have still generated huge value. Will the future follow the past?
I’ll argue that it’s in this environment of a massive war between “GPT wrappers” that the traditional defensibility strategies — particularly sustained advantages in distribution and network effects — will return to the forefront. They won’t manifest in exactly the same ways, but instead, hybridize with AI features to create something new. In that way, the next gen of AI products will ride some of the forces that have driven the last few waves of computing, whether in Web 2.0 or crypto or the on-demand economy.
To explain why I believe this, let’s start with the prior theory for AI defensibility.
A failed theory of AI defensibility?
The popular theory for AI defensibility was meant to be simple, and it pervaded discourse for the past few years: First was the observation that to build each successive generation of AI model, the amount of data/compute/energy required would exponentially increase. In 2024 it might stand at $100M+, but in future years it might be a billion or billions, creating a “scale effect” moat against new entrants. (See the graph above, and note the log scale for the cost!) Further, as the AI models get more powerful, they can do anything that an app built on top of them might want to do, so the vast majority of apps become merely “GPT wrappers” — commoditized bits of mobile and web UI that interface with the more powerful underlying models. In this view, the world will consist of a few large model companies who create all the value and tax the world of GPT wrapper apps above them.
As I write this in Feb 2025, this theory seems to be facing major complications: State-of-the-art models only seem to stay ~6 months ahead of their open source cousins, and new entrants seem to reach near-peer capabilities on a regular basis (Grok, DeepSeek, etc). Also, the amount of data available to train on — which initially provided a big advantage to scaled players that got access early — is nearing natural limits. And even if SOTA models take a ton of money/energy/compute to train, competitors are able to achieve similar performance via model distillation. All the while, the ecosystem is teeming with new app-layer startups that specialize in specific niches — creative tools, customer service, legal, and many other sectors — and show $0 to $5M+ ARR growth in under a year.
And in most cases, these startups don’t specify the underlying AI model they integrate with, nor do we care as users/customers. Is it time to cheer for the GPT wrappers? And what should be our new theory of defensibility for this new generation of AI-first apps? In a world of many, many AI-first apps, which ones will stick?
And of course, network effects. Network effects played perhaps the most critical role in the defensibility of the last generation of workplace collaboration tools, marketplaces, social networks, and so on (as I wrote about in my book, The Cold Start Problem) — and I think they could play a big role in the AI age too.
Database wrappers and CRUD apps
Hints to these questions can be found in the 1990s-2010 S-curve cycle of building web apps and how it might apply to today; the metaphor isn’t perfect, of course, but it’s still informative. I’ve written about these concepts in my recent essay, The mobile S-curve ends, and the AI S-curve begins. In the 1990s dotcom era, startups would initially raise millions of dollars simply to build the v1 of their websites, because there was so little infrastructure available — you had to put an actual physical server into a datacenter, build on a proprietary software stack with very expensive products at each layer, and marketing/growth was derived from faulty lessons from the CPG industry. Products were successful because they had the “it works” feature, and no wonder the first generation of web companies was built by Stanford Computer Science PhDs.
Two decades later, everything had changed: Websites became trivial to build as we got open source, cloud computing, and cost-per-click advertising to drive growth. Many of the most popular web apps could in fact be called “database wrappers” (at the time, often called CRUD apps) — dead simple products with minimal technology whose core ability was to Create/Read/Update/Delete data. Blogging platforms, Twitter, and Flickr were exactly that. So were a lot of marketplace startups, where you could post a listing and other people could view it, plus ecommerce websites. Web frameworks like Ruby on Rails and the entire genre of CMS software were created to make this easy. In fact, it got so easy that I remember venture capitalists asking, at the time, how products like Facebook might be defensible at all.
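To make the “database wrapper” point concrete, here’s a minimal sketch of what a CRUD app amounts to: four operations over one table. It’s illustrative only, with Python/Flask standing in for the Rails stacks of that era, and the listings endpoints and in-memory store are made up for the example.

```python
# A minimal "database wrapper": four routes over one table of listings.
# Illustrative only; a real app would use a database, auth, and validation.
from flask import Flask, request, jsonify

app = Flask(__name__)
listings = {}   # in-memory stand-in for the database table
next_id = 1

@app.post("/listings")                      # Create
def create_listing():
    global next_id
    listings[next_id] = request.get_json()
    next_id += 1
    return jsonify(id=next_id - 1), 201

@app.get("/listings/<int:listing_id>")      # Read
def read_listing(listing_id):
    return jsonify(listings[listing_id])

@app.put("/listings/<int:listing_id>")      # Update
def update_listing(listing_id):
    listings[listing_id] = request.get_json()
    return jsonify(listings[listing_id])

@app.delete("/listings/<int:listing_id>")   # Delete
def delete_listing(listing_id):
    listings.pop(listing_id, None)
    return "", 204

if __name__ == "__main__":
    app.run()
```

That handful of routes, plus a database, was essentially the technical core of a listing site or an early social product; the hard part was everything around it.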
Of course, Web 2.0 was part of the answer to these questions — the big innovation was to not just allow an individual to do CRUD operations on their own data, but to allow entire communities/networks of people to do it with shared data. And if you kept those networks alive over time, that’s what was defensible, not the product itself. That was the essence of Web 2.0, which re-ignited consumer tech starting in ~2005 after the dotcom era had subsided. (I’ve also been told that other waves of tech, like the Windows/Mac-led GUI desktop boom of the early 90s, were also propelled by “form-based applications” created in Visual Basic.)
In other words, we saw the internet make the same transition from an expensive, closed-source stack in the dotcom era to a much more ubiquitous, cheaper (but commoditized) stack in the Web 2.0 era. And as millions of new websites emerged, the axis of competition changed from “can you build it? can you raise the money to build it?” to “you can build it, but will consumers come? And will they stick?” I think the same wave is now coming for AI products. It won’t look the same; instead, it will fuse network effects and AI into something new.
Growth and network effects in a GPT Wrapper-dominated world
People are often familiar with the definition of a network effect as “a product where the more people that use it, the more valuable it becomes.” Products like marketplaces, social networks, workplace collaboration tools, etc. are all classic examples.
In the next phase of AI products, either these new products will have to add network features, or the incumbent networked products will add AI. The question is: which one will get there first?
In some cases it’s obvious how AI products will add network features. Products in the B2B/SMB sector will naturally add support for teams and collaboration workflows (commenting, tagging, etc.), and allow for sharing inside the enterprise. But in other cases, it’s less clear. Will the way that AI ultimately reinvents social networking be that the other people you interact with on the network are actually AIs? Perhaps it’s old-fashioned, but I still think that people love to watch and interact with other humans, which is why we want to watch Magnus play chess, not two supercomputers. Or we want to watch Usain Bolt race other people, not a car. Will the various flavors of AI companionship be a full replacement for human-to-human interaction, or will they instead be something that augments it? Perhaps AI-first social apps will simply let you communicate with your loved ones by sending not just image-based memes, but entirely custom interactive products to make your joke? (Imagine sending/sharing an interactive Trump companion, rather than just a funny photo.) It’s hard to know how it’ll happen, but we know a lot of companies are trying.
Part of the difficulty is that we actually haven’t yet seen a fully consumer-focused win within AI. Of course we’ve seen glimmers, like Character.ai, but a very fast-growing, sticky, AI-first consumer app is still up for grabs. There are a lot of reasons for this — API costs need to get much lower to support an ads-supported or low-dollar subscription model, and the incumbents are damn good. But maybe the mechanics also simply don’t yet work, and the ability for AI to create engaging human-level companions isn’t there yet.
Either way, let’s say all these network+AI features get added, and we see a generation of these hybridized products. Under my hypothesis, these products could still be easily copied, but they would at least benefit from the defensibility offered by their network. The question is “how?”
I eventually came to break down the general idea of a network effect into three underlying pillars that can be defined, put onto a roadmap, and otherwise optimized:
First, there is an Acquisition network effect that allows products to tap into their network of users to invite, share, and otherwise gain more users. A traditional “solo” app has to buy its users with advertising, whereas a networked product can get its users to bring more users onto the platform. An AI-first product might generate really compelling or useful content that is often shared with other people, which then acquires them into the product (a back-of-the-envelope sketch of how this compounds follows after these three pillars).
Second, there is a Retention/Engagement effect that allows networked products to use their active users to reactivate dormant ones — this might happen because of an interaction like a comment, a shared file, or getting tagged, or through something more passive like a personalized email that shows activity in your network. Contrast that with a solo product that has to rely on rapidly decaying email/push notifications to get you back.
Finally, there is a Monetization effect that allows products to take advantage of stronger business models. If a collaboration product goes “wall to wall” inside of a workplace and grows virally, it is more likely to convert to high monetization tiers. If a social gaming experience charges you for decorative goods to dress up your avatar, you’ll be more compelled by that value prop if your friends are there to see it.
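To make the acquisition effect concrete, here’s a back-of-the-envelope sketch of how a viral loop compounds a seed of users. The viral factor k, the seed size, and the number of cycles are all assumed numbers for illustration, not data from any real product.

```python
# Back-of-the-envelope viral-loop arithmetic (all numbers are assumptions).
# Each cohort of new users brings in roughly k times its size via invites/shares.
def total_acquired(seed_users: int, k: float, cycles: int) -> int:
    """Cumulative users after `cycles` rounds of invites with viral factor k."""
    total, cohort = seed_users, seed_users
    for _ in range(cycles):
        cohort = int(cohort * k)  # users acquired by the previous cohort's sharing
        total += cohort
    return total

# A "solo" app that only buys users stays at its seed; a networked product compounds.
print(total_acquired(seed_users=1000, k=0.0, cycles=6))  # 1000 users (no network effect)
print(total_acquired(seed_users=1000, k=0.6, cycles=6))  # 2428 users, ~2.4x the seed
```

The point isn’t the specific numbers; it’s that every share, invite, or piece of generated content that pulls in another user lowers blended acquisition cost in a way a purely paid product can’t match.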
Done well, you might see an AI product initially enter the market by simply providing a novel interaction — the ability to generate a new kind of content, or going deeper into some set of workflows. But then it might add viral sharing features that help it spread among friends or within a team. You might see it start to integrate “multiplayer” use cases more broadly, and ultimately bake all of this into an enhanced business model.
Of course, there will also be B2B SaaS products that succeed the old-fashioned way. Rather than building a few simple network effect-driven features, they will deeply understand customer workflows; work with compliance, IT, and security; and otherwise build a huge company via lead bullets, not one silver bullet.
Network effects are what defended consumer products in particular, but we will also see moats develop from the same places they came from in past decades: B2B-specific moats (workflow, compliance, security, etc.), brand/UX, growth/distribution advantages, proprietary data, and so on. The big difference is that instead of taking two decades to sort this all out, as it took between the dotcom and Web 2.0 cycles, we’re speedrunning the whole thing in just a few years.
Will the current generation of AI products win, or a new generation altogether?
Or perhaps this is giving too much credit. Perhaps one generation of startups will prove out the novelty value, and then another will successfully add networked features and eat the first-movers. This is not the only possibility. We are also seeing incumbent products rapidly embrace AI, so I wouldn’t write them off either.
There are two historical precedents worth mentioning:
In prior computing revolutions, from mainframe to desktop to GUI to web to mobile, all the new startups had the advantage that apps would have to be rewritten for the new UX. Incumbents were often caught flat-footed in the transition, and created poor next-gen apps when their expertise was on a different platform (just think about WhatsApp vs AIM). This generation of AI is unusual, though, in that it doesn’t come with a big reinvention of the UX. We still interact with all of these products as mobile apps, websites, and so on, rather than through a completely new modality. Maybe incumbents that already control network effects and distribution will have an advantage, and people would rather interact with their LLMs via the WhatsApp search box than download a whole new app for it.
The other question I have is whether the first-movers will actually be the ones to make it. After all, in the first generation of mobile apps after the launch of the iPhone, we saw the massive growth of early movers like Flipboard, Foursquare, Kik, and others. Yet these were not the products that ultimately emerged as the mobile winners — instead, it took 5+ years for others, like Uber and DoorDash, that used the technology in novel ways, to define the deca-billion-dollar outcomes.
Interesting times ahead.