From hype to usefulness

Avoid being distracted by overinflated claims about AI’s potential and focus on these practical steps to make AI work for you

Writing Bryan Reimer & Magnus Lindkvist

There’s a specter haunting the business world – the ghost of artificial intelligence (AI). Like one of those 1980s TV shopping infomercials, AI promises everything: it will accelerate valuations, boost revenue, and blow away boring tasks in a gust of air, all while making you dramatically more productive – if it doesn’t steal your job, that is.

And why stop there? It promises to discover new chemical compounds, compose music that makes you cry, and cure every disease known to humankind – if it doesn’t destroy the world outright, of course.

The hyperbole seems unlimited. Yes, it’s a stock market bubble, and you’ve likely already missed the best part of Nvidia’s stock surge. And no, there aren’t any innovative AI thought leaders or Python programmers whom you can poach to transform your company into an “AI-first” enterprise overnight. But here’s the thing: AI is not just a bubble. It’s a balloon.

The AI balloon

A bubble pops and disappears completely. A balloon can deflate, but it can also be reinflated in new shapes. Seventeenth-century tulips and this century’s subprime mortgages were pure bubbles. When they burst, vast sums were wiped out and nothing useful remained.

But other manias leave behind infrastructure. The railway mania of the 1800s and the dot-com boom of the 1990s destroyed fortunes when they crashed, but left train tracks, fiber-optic cables, and a new way of imagining the future. Companies vanished, but the ideas of train travel and internet surfing remained.

The current electricity-guzzling, capex-swallowing iteration of AI is likely to end in tears for many investors. But the underlying capability – automating routine tasks, processing text and data at lightning speed – is here to stay. The challenge for leaders is to look past the hype and think about the long-term potential of AI, and the steps they can take to realize it.

Technology is not a utility

One of the most important points to grasp about AI’s spread is that it is not a utility. Utility emerges slowly, unevenly, and often by accident. But technology evolves at the pace of Moore’s Law – exponential, feverish, out of sync with nearly everything human, from learning to lawmaking. 

Think of the New York Electrical Show a century ago. Electricity was the new thing, showcased in gadgets like vacuum cleaners, lamps (clunkily dubbed “electrical vase light attachments”), and speculative oddities like bathtubs full of bulbs and radio direction finders. In the early phases, new technologies are always gadgets – before the real value becomes clear.

One of electricity’s most transformative uses wasn’t obvious at all: the electric guitar. The Rickenbacker ‘Frying Pan’ arrived 15 years after electricity was mainstream, but for its first decade, it was strummed like an acoustic guitar. Then Sister Rosetta Tharpe cranked her amplifier too far and the screech of feedback was born. That sound changed music forever – not because of engineers, but because of an outsider with no rulebook.

And that’s the pattern. Utility comes later, often by accident, and often through outsiders. Think of the World Wide Web’s journey from stock market bubble to transformational utility.

The outside edge

In November 2000, amid the wreckage of the dot-com crash, Amazon hosted a recruitment event in Stockholm’s Vasa Museum – home to a giant warship that sank 1,400 yards into its maiden voyage. The symbolism was potent.

Most people thought Amazon was just a bookstore. Jeff Bezos, a nerdy hedge-fund programmer, had no retail pedigree. Pets.com and Boo.com had collapsed into punchlines. Yet Amazon survived – because it wasn’t really about books. It was about customer obsession. What if anything you wanted could arrive at your doorstep at the click of a button – or better, before you even asked?

The same pattern repeats. Apple wasn’t a phone company. Spotify wasn’t a record label. Shein’s founder, Xu Yangtian – unlike the founders of H&M or Inditex – had no experience in clothing or tailoring. He was a specialist in search engine optimization. Outsiders without legacy constraints redefined industries.

AI will follow the same path, with change led not by those bound by dogma – “This is how we do things around here” – but by outsiders who ask new questions.

So, what exactly are the questions we should be asking ourselves about AI’s new tricks?

What AI can and cannot do

AI is not magic: it processes vast amounts of data, ranks patterns, and makes probabilistic guesses about what comes next. Everything it does, a human can do too – but AI does it faster. Modeling a single protein might take a human researcher months, but AlphaFold, the Nobel Prize-winning AI system, can do it in minutes. 

Think of AI as a cognitive power loader. Just as machines let us lift heavy objects quickly, AI helps us accomplish far more with less. An architect can generate dozens of blueprints in an afternoon. A lawyer can scan thousands of contracts in minutes. A doctor can compare a scan against millions of images in seconds.

This forces us to confront a hard truth: many professional barriers are social constructs. Becoming a doctor takes decades – a function of the regulatory process more than of innate ability. A 17-year-old can drive a Formula 1 car; a 12-year-old can land a 747 in Flight Simulator. Avicii made global hits with nothing but a laptop. Soon, a teenager will create the world’s most-watched movie using free AI tools.

We confuse societal habits – and the legacy thinking and regulatory frameworks they create – with reality. What we call disruption is merely the observation that technology can cut through and expose these gaps in our assumptions, drastically changing the landscape for law firms and movie studios alike. That doesn’t necessarily mean we will roll out the red carpet for all things AI.

From “Wow!” to “Whoa!”

In the early 2010s, we thought social media would end wars and erase borders. YouTube was supposed to deliver world-class teachers to every child from Botswana to Bradford. A decade later, schools are banning smartphones, and countries are restricting social media for kids. What happened?

Technologies follow what Gartner calls the Hype Cycle: from the Peak of Inflated Expectations to the Trough of Disillusionment. Or, as we put it: from Wow! to Whoa!

AI’s Wow phase is dazzling. Summarize 10,000-word articles instantly! Write code in seconds! Draft essays, contracts, cover letters!

But the Whoa phase is sobering. AI threatens the very systems it enhances. Students can generate essays as easily as teachers can grade them; job applicants fire AI-written CVs at AI-driven recruiters. And regulators will not sit idly by. Technology doesn’t beat jurisdiction – it collides with it. When governments realize that AI undermines education and credentialing – and the labor pipelines they were designed to protect – a backlash is inevitable.

Universities that rely on degrees as gatekeepers, unions that defend long training paths, and industries built on billable hours will all push back – not necessarily because the technology doesn’t work, but because it works too well. We’ve seen this before: taxi lobbies fought Uber, record labels resisted Napster, and governments have tried to regulate and shut down social media. The pattern repeats. AI’s Whoa phase will not be about whether it can summarize, code, or diagnose – it will be about whether society is willing to let it. 

But the AI backlash isn’t a verdict on the technology. It’s the design brief for what comes next.

The next phase: Grow

In 1888, Bertha Benz set out on a 100-kilometer journey in her husband Karl’s prototype “horseless carriage.” There were no petrol stations, so she bought ligroin – a petroleum-based solvent – from a small-town pharmacy. The car broke down repeatedly, forcing her to improvise roadside repairs. Hills proved nearly impossible, inspiring the later addition of a third gear. What looked like a stunt was really a stress test. By enduring a messy, unglamorous trip, Benz helped turn her husband’s fragile invention into something practical.

Her journey shows us something essential: invention isn’t enough. Technology has to prove itself in the real world. The automobile wasn’t ready for the world, and the world wasn’t ready for the automobile – until both evolved together.

AI is now in its own Bertha Benz moment. The Wow phase dazzled us. The Whoa phase exposed cracks: lawsuits over data scraping, environmental strain, unreliable outputs, digital pollution and performance decay. These flaws invite criticism and resistance – but backlash is not the end of the story. Automobiles needed roads and rules. Electricity needed grids and standards. AI now needs safeguards, governance and better fits with human systems.

This is the Grow phase: the patching, iterating, duct-tape stage where messy trials lead to lasting improvements. AI doesn’t yet replace professions wholesale; instead, it slips tactically into tasks where risks are low and benefits are clear. Step by step, like Benz’s improvised journey, those patches accumulate, building trust, legitimacy and momentum.

What does this mean for business leaders? It means your role is not to wait for AI to arrive fully formed, but to create the conditions where it can be tested, patched and improved inside your own organization. Just as Bertha Benz exposed the missing gear, your pilots and experiments will reveal the gaps where workflows, governance and expectations need to adapt. The lesson is not to adopt everything – but to adopt thoughtfully and strategically, finding the places where AI can act as an equalizer (raising the floor) or amplifier (raising the ceiling).

The leaders who succeed in this phase will not be those chasing hype, but those willing to endure the unglamorous, Bertha Benz-style road tests that turn fragile prototypes into transformative systems.

Becoming an unlearning organization

Today, leaders aren’t driven only by fear of missing out (Fomo). They’re equally driven by fear of f***ing up (Fofu). AI is not a bubble that will vanish; it’s a balloon that will deflate, reinflate, and take new shapes. The question for leaders is not “How fast can I learn the new tools?” but “What am I willing to unlearn so that AI can become useful here?”

Throughout history, progress has depended less on embracing the new and more on discarding the old. Electricity only transformed the world when people unlearned their reliance on old appliances and reimagined what power could do – like creating the electric guitar. Amazon didn’t succeed because it mastered bookstores, but because Bezos unlearned retail dogma and redefined what shopping could mean. Demis Hassabis, the co-creator of AlphaFold, won the Nobel Prize in chemistry despite – or perhaps thanks to – not being a chemist. Bertha Benz didn’t wait for perfect conditions; she unlearned the idea that roads, fuel and repairs had to already exist. By testing the car in the real world, she revealed the missing pieces that made automobiles practical.

AI today is forcing us to unlearn assumptions about professions, credentialing, and workflows – showing that regulation and habit, not just capability, shape what counts as work. To lead in the Grow phase requires building an unlearning organization. Such organizations do four critical things.

  • Question inherited assumptions about education, careers, regulation or “the way things are done”
  • Embrace messy, imperfect trials, patching and iterating rather than waiting for flawless solutions
  • Treat AI as both equalizer and amplifier – raising the floor for novices while giving experts new leverage
  • See backlash as feedback, not a verdict, and use it as a design brief for adaptation

The leaders who thrive will not be those who adopt AI fastest, nor those who resist it longest, but those who unlearn and relearn with courage. They will take prototypes out of the lab, expose the flaws and adapt. In the end, usefulness does not come from hype or fear – it comes from the willingness to let go of what no longer serves, and discover what might. 

Dr Bryan Reimer is a research scientist at MIT. Magnus Lindkvist is a futurologist. They are the authors of How to Make AI Useful: Moving Beyond the Hype to Real Progress in Business, Society and Life (LID Publishing)