The rise of AI assistants poses a grave threat to companies that have built their business models around customer experience

“How did you go bankrupt?” Bill asked.
“Two ways,” Mike said. “Gradually, then suddenly.”
– Ernest Hemingway, The Sun Also Rises
For nearly three decades, digital businesses have delivered value through a familiar construct: the experience. You know it well. It’s the elegant scroll of a curated product page, the dopamine hit of a one-click checkout, the almost imperceptible nudge of a recommendation engine encouraging you to “add one more item.”
From marketplaces to media, web and app experiences have been the primary theater of value exchange. And for good reason: it’s where customers show up, where intent becomes action, and where businesses extract insight, revenue, and – on a good day – loyalty. It’s why the likes of Amazon, Airbnb, and Uber invest so much in pixel-perfect design.
Indeed, billions of dollars have been poured into designing experiences that convert. Entire business models have been built around the idea that if you control and optimize the experience, you can deliver better outcomes. So, what happens when no one shows up to the experience at all?
Enter the assistants
That is the uncomfortable possibility presented by the rise of AI agents and personal assistants such as ChatGPT, Gemini and Claude. These systems already act as discovery engines. They summarize, surface and suggest. With generative engine optimization (GEO), they are becoming a viable source of high-intent visits and brand awareness, offering concise, personalized and sometimes more satisfying answers than a page of blue links ever could.
Early evidence already points to a change in customer behaviors. Google’s search share is softening. Users are asking AI assistants instead. They’re skipping the scroll and trusting the synthesis. As trust grows, a flywheel forms – better answers lead to more usage, which leads to better training data, which leads to better answers.
In domains where value is information – news media, comparison tools, FAQ-heavy platforms – the implications are immediate. If the assistant can answer your question, why visit the source? If you’re not behind a paywall (and sometimes even if you are), you’ve been commoditized. The interface is no longer needed.
But this is just the thin end of the wedge. The deeper disruption comes when assistants can act as well as advise. Imagine this: you ask Gemini to recommend a book. You like its suggestion. It offers to order it. You say yes. A few days later, the book is at your door.
You never opened Amazon. Never saw a homepage. Never interacted with a nudge, a recommendation engine, or a loyalty prompt. Amazon may still fulfill the transaction and get the revenue, but the experience has been reduced to a background API, with no context, no control and no brand memory.
The implications of indirect engagement
Without cooperation between personal assistants and experiences, indirect engagement of this kind strikes at three key foundations of many business models, destroying long-term value in the process.
1. Context stops. Indirect usage means experiences can’t gather more information: your preferences, intent, where you lingered and what caught your eye. Without this context, businesses lose key insights that drive conversion and lifetime value: no more recommendations or personalized marketing.
2. The relationship breaks. Once-loyal customers will see the AI assistant as the principal source of value, rather than the experience. The assistant can ‘own’ the customer, hiding identity, payment details and contact information from the underlying service provider.
3. Decision architecture collapses. Scarcity cues, social proof and default selections all shape outcomes. But assistants don’t scroll or deviate. They just do. Which means that revenue from upsell pathways is likely to become increasingly unreliable. Chocolate might still be positioned next to the checkout, but there’s no one to be tempted.
A strategic dilemma
Not all industries are equally exposed to these risks. Some categories may prove resistant to the advance of AI assistants – like healthcare and luxury, where people often want the friction. Here, trust lives in the journey. Recall that when price comparison sites first emerged, they promised to reduce many complex sectors to just a few metrics – yet the model only took off in a handful of verticals like utilities and insurance. It turns out people care about more than just price and efficiency. That’s unlikely to stop the trend, but it will shape where and how fast it lands.
Accordingly, experience-oriented businesses need to have a theory on how to respond to these scenarios. The situation can be loosely framed as a prisoner’s dilemma-style problem, in which experiences have two options: cooperate with AI assistants by exposing information, simplifying integration, and maximizing efficiency for indirect consumption – or defect, by building walled gardens, enforcing logins, obfuscating data and withholding structured content.
Before defining your strategy, consider the strategies being pursued by the AI companies. For them, the question is arguably simpler. They know it’s almost impossible for fragmented and competitive industries to shape a coordinated response to their advances, so a dominant strategy usually emerges: defect. That means keeping tight control over fees, data, and the user relationship. For the AI firms, cooperation with experiences may introduce risk and offer little upside; defection is not only easier but safer, more profitable, and strategically aligned with privacy expectations.
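To see why defection tends to dominate, it can help to make the game explicit. The short Python sketch below uses invented payoff values (an assumption for illustration only, not real market data) to show that, under those assumptions, an AI firm that keeps control of fees, data and the customer relationship comes out ahead whatever a fragmented set of experiences chooses to do.

# Illustrative payoff matrix for the assistant-vs-experience dilemma described above.
# Payoff values are assumptions chosen only to show how 'defect' can become a
# dominant strategy for an AI firm when experiences cannot coordinate a response.

# Payoffs are (ai_firm, experience) for each (ai_strategy, experience_strategy) pair.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # shared data, shared upside
    ("cooperate", "defect"):    (1, 4),  # AI firm grants access, gets little back
    ("defect",    "cooperate"): (4, 1),  # AI firm keeps fees, data and the relationship
    ("defect",    "defect"):    (2, 2),  # walled gardens on both sides
}

def best_response(experience_strategy: str) -> str:
    """Return the AI firm's best strategy given what the experience does."""
    return max(
        ("cooperate", "defect"),
        key=lambda ai_strategy: PAYOFFS[(ai_strategy, experience_strategy)][0],
    )

if __name__ == "__main__":
    for experience_strategy in ("cooperate", "defect"):
        print(f"If experiences {experience_strategy}, "
              f"the AI firm's best response is {best_response(experience_strategy)}")
    # With these assumed payoffs, 'defect' is the best response either way:
    # a dominant strategy for the AI firm.

With these (assumed) numbers, defection wins for the AI firm regardless of what experiences do, which is the essence of the dominant-strategy argument above.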
In forming a view on how your organization and industry may be impacted, start by reflecting on your strategic context.
1. How distinct is the value we create from the experience layer itself?
2. To what extent are our customers sensitive to efficiency gains?
3. How visible and important are the emotional needs we solve for customers?
4. What are our primary ways of differentiating value today, and do they still hold if accessed through an AI assistant?
5. How important is trust to the value exchange we deliver?
Plausible but not yet proximate
Whilst the threat from AI assistants is plausible, we have some way to go before both social norms and AI capabilities evolve to a point where agents are adopted at scale. Yet while you may not need to launch a transformation program today, leaders of experience-centered models shouldn’t delay in defining a ‘clear-enough’ posture towards AI assistants and agents. However far you opt to cooperate or defect, the implications for your value proposition, competitive strategy, technologies and architecture cannot be overstated.
It’s worth defining your position sooner rather than later, because the future of experience is set to change in a way that will feel a lot like Hemingway’s bankruptcy. Gradually. Then suddenly.
Tom Sykes is a senior strategy executive, non-executive director and author