Thoughts on AI-native applications
Why the entire software market feels wide open and how AI-native applications can win.
Even before the public release of OpenAI's o3 model, it's becoming clear that the more powerful the foundational layer becomes, the more opportunities arise to build AI-native products that reimagine entire workflows and craft new user experiences in every software category. No incumbent should feel safe. Microsoft's race to make GitHub Copilot free in VS Code underscores just how fiercely new AI-native startups like Cursor can compete.
This is quite a narrative shift from a year ago, when the common assumption was that building a successful AI application required finding new “hard” use cases. In the emerging co-pilot market, SaaS leaders and incumbent platforms, with vast user bases, huge datasets, and entrenched workflows, seemed poised to win the AI gold rush. Meanwhile, new applications built around the OpenAI API were supposedly just one ChatGPT release away from obsolescence.
If anything, the rapid pace of development and intense competition at the foundational layer have only heightened the competitive dynamics at the application layer. The leading AI research labs keep pushing the frontier to win consumers, enterprise customers, and developers, while open-source alternatives quickly catch up to state-of-the-art performance at a fraction of the cost. The playing field is being raised and then leveled in swift succession. As one CTO in our portfolio described the AI landscape: “It’s chaos.”
For entrepreneurs and investors, chaos often spells opportunity. There are several vectors of differentiation that new AI entrants can leverage to their advantage.
Design (UI/UX): New, powerful technology calls for new native experiences that are superior to old solutions. In the mobile era, companies like Citymapper, Instagram, Snap, and Tinder cut through crowded categories by designing better native experiences that leveraged core smartphone features (GPS, multi-touch screen, camera). The same will be true for LLM-powered UX. Cursor, again, is praised for its friendly UI and intelligent features that act directly within the interface. Granola is loved for its simplicity and for seamlessly merging rough notes and transcripts. It’s still early, but I suspect the best tools will make users feel like they’re collaborating with the AI rather than handing everything off to it. Letting people see and understand what the AI is doing will be crucial. As LLM software starts tackling more complex interfaces and workflows, there will be vast room for native experiences where design is king.
Workflow: Co-pilots built by incumbents assume that the existing workflow remains largely intact, “simply” dropping an AI agent on top of it. But as models grow more powerful, there’s a huge opportunity to reinvent workflows entirely. For instance, Tessl’s AI-native development platform “bypasses the IDE,” operating on the premise that most code will be machine-generated and thus focusing the product on specifications. This ties back to designing native experiences: if you deeply understand the user’s problem and the capabilities of the new technology, you can rebuild the workflow from scratch. This is even more relevant given recent advances in reasoning capabilities (so-called “test-time compute”), where the software’s last-mile custom logic can generate more complex, specialised responses rather than relying solely on what’s “baked in” during pre-training. Though this comes at a cost (in terms of compute and latency), it also widens the scope of what software can do and opens the door to entirely new workflows.
Business model: This has been widely discussed already, but the core SaaS model is poised to be challenged by a shift toward outcome-based pricing. Paying per seat seems less relevant when AI software becomes abundant and competes directly with human labor. If the number of AI agents grows much faster than the number of FTEs inside an organisation, software companies will need a new way to price. AI breakouts that reach meaningful scale and tie their business model to a fresh value proposition will trigger a classic innovator’s dilemma for established SaaS players.
New Data (and data stores): There is a decoupling happening between where software logic runs and where data resides. Everyone likes to pick on Salesforce for its poor UX, but the truth is that it feels outdated not only as a front-end application but also as a relational database. This recent piece about Rox, a “native-AI CRM,” notes that 40% of all data in data warehouses is customer data. But if autonomous agents need to respond to how the software is performing, it’s no longer just static relational data that matters. As an investor in Axiom, we see the value event data can unlock beyond observability use cases. A data lakehouse that can ingest all your time-series data (not just a sample) will quickly become a critical foundation for AI-driven business logic, especially in scenarios where both humans and autonomous agents are operating side by side in the application.
We should, of course, not underestimate the moats that certain incumbents will continue to enjoy, but history has shown that, in an age of disruption, the cards get reshuffled. Success comes down to teams that deeply understand the problems they’re solving, craft elegant native solutions that earn a distribution advantage, execute quickly, and, of course, catch some luck. But luck is “hard work meets opportunity,” and right now there are plenty of opportunities out there!
If you think you’re one of these teams, please get in touch, especially if you believe your product will create new abundance!
Bring on 2025 :)