Why leaders are failing on AI

High failure rates for AI projects aren’t inevitable. Here’s how leaders can make change stick

Writing: Andrés Saint-Jean & Oseas Ramírez Assad

Across boardrooms worldwide, an uncomfortable truth is emerging: despite vast investment, many AI transformations fail. MIT research shows that 95% of AI pilots never progress beyond the experimental stage. Boston Consulting Group reports that 74% of companies fail to extract meaningful value from AI even after two years of effort. 

Yet companies continue to pour vast sums of money into AI. Executives know the technology is essential. What they don’t fully grasp is why adoption keeps stalling.

The paradox is clear. Algorithms are advancing rapidly: technical capability is no longer the bottleneck. The real issue is human. Research consistently indicates that the majority of failures stem from people and organizational factors, including culture, leadership, trust and ways of working – not from code.

Meanwhile, the workforce is under intense pressure. IBM’s 2025 CEO Study suggests that 70% of employees will need retraining within the next three years. The World Economic Forum forecasts that by 2030, more than a third of job skills will have changed. AI adoption is not, therefore, just a technology rollout. It is a wholesale reconfiguration of how people work, learn and collaborate.

Satya Nadella, Microsoft’s chairman and chief executive, captures the essence of the challenge: “At the end of the day, companies will have to take a process, simplify it, automate it, and apply these solutions. And so, that requires not just technology, but companies to do the hard work of culturally changing how they adopt technology.”

This is the crisis many leaders are unprepared for. AI adoption is not a technology project. It is a leadership and cultural challenge.

What organizations are getting wrong

If investment and ambition are so high, why are results so poor? Across industries, the same patterns recur. Integration and alignment are often neglected. One of the most high-profile examples came from IBM’s Watson for Oncology project with MD Anderson Cancer Center. Despite bold promises – and years of investment – the initiative achieved little clinical impact and was eventually shelved. The problem was not the algorithms, but the lack of integration with medical workflows, limited trust among frontline clinicians, and an absence of organizational alignment.

Many organizations also fall into the trap of pilots that never scale. Promising experiments run in isolated pockets of the business – but without changing workflows, incentives and decision-making authority, these remain proofs of concept. As McKinsey describes it, there is a “GenAI paradox” at play: AI is widely tried, but rarely embedded into enterprise-wide systems that deliver material value.

Another stumbling block is confusing access with adoption. Companies may invest heavily in tools and run surface-level training on how to prompt AI, but fail to shift behaviors, values and leadership mindsets. AI is introduced as a bolt-on, rather than being integrated into the fabric of how work gets done.

Fear and resistance compound these missteps. Leaders may feel unprepared or overwhelmed, or fear reputational risk if AI systems go wrong. The result is often indecision. Managers fear they will be made obsolete and employees fear redundancy or surveillance. KPMG’s global survey of 48,000 people revealed that a majority of employees either hide their AI use from managers or fail to validate AI outputs, a clear sign of low trust and weak governance.

Leadership misalignment further fragments progress. Boards and executive teams often pull in different directions. Deloitte reports that two-thirds of board members admit to having limited or no knowledge of AI, while only a minority address AI at every meeting. Without clear ownership of strategy, remit and resources, even the most sophisticated tools struggle to gain traction.

The outcome is what some call “AI theatre”: visible prototypes, PR announcements and innovation showcases that generate headlines but ultimately fail to transform the organization.

The change journey: from fear to co-intelligence

If failures are primarily cultural, then cultural transformation must be the path to success. This is not an abstract idea but a practical shift in how organizations build trust, frame change, and embed new ways of working.

Start by addressing people’s fears – often the most pervasive barrier. Companies that reframe AI as a complement to their people, not a threat, see very different results. Fashion retailer H&M introduced AI in inventory planning and design by calling it “amplified intelligence.” The company communicated that AI would augment human creativity rather than replace it. Employees retained decision-making authority, were trained to experiment, and began to see AI as a tool for exploration. Resistance gave way to enthusiasm, and the business saw measurable improvements in optimization and speed.

Trust is the second critical ingredient. Adoption requires confidence not only in the technology, but also in the ethics and intentions of the organization deploying it. Morgan Stanley provides a compelling example. When the firm launched a GPT-4-powered assistant for financial advisers, initial reactions included skepticism and concern. To address this, the system was restricted to drawing only from verified internal research, and advisers were trained in its capabilities and limitations. Within months, usage rates rose sharply, and what began as a tentative trial became a trusted daily collaborator.

Finally, organizations must move from pilots to systems. This means embedding AI not just in a lab environment but across incentives, workflows and governance structures. Airbus shows how this can work at scale. The company has invested in widespread AI literacy, ensuring its 130,000 employees can collaborate with AI while keeping human decision-making at the center of its processes. By making AI a support system, not a substitute, Airbus has fostered confidence and accelerated adoption across engineering, manufacturing and customer service.

Similarly, Telefónica recognized that scaling AI required a responsible foundation. The telecoms giant rolled out AI-literacy training across levels, embedded ethical principles into development processes, and appointed ethics champions throughout business units. As a result, AI has scaled across core operations while reinforcing a culture of transparency and accountability.

In each of these cases, success came not from the algorithms themselves but from deliberate cultural design: reframing fear as curiosity, embedding trust, and systematizing adoption across the organization. This is the essence of moving from fear to co-intelligence: creating environments where humans and machines collaborate confidently and productively.

The leadership agenda

What distinguishes successful adopters is not technology alone but leadership across six dimensions – the critical areas where organizations must invest if they want AI to stick.

The hybrid workforce
Leaders must prepare for new forms of collaboration between humans and machines. This means reskilling employees, clarifying roles and designing workflows where AI augments rather than replaces human work.

Trust and ethics
AI adoption will stall without trust. Organizations must build governance frameworks that ensure explainability, accountability and responsible use, and then communicate those standards clearly to their workforce.

The culture factor
Culture is often miscast as either rigid or soft. In reality, it is the set of behaviors and norms that determine how strategy gets executed. Embedding AI into culture requires tangible levers: incentives, rituals and role-modeling from senior leaders.

AI-driven decision-making
Executives must evolve from instinct-driven leadership to decisions informed by data and AI insights. This requires not only access to tools, but also the discipline to integrate them into boardroom and frontline choices alike.

Scaling adoption
Pilots should be designed with scale in mind from the outset, with clear runbooks for integration, measurement and governance. Without this, experiments remain isolated wins.

Future-proofing leadership
Today’s pace of change requires leaders to anticipate emerging trends and guide hybrid human–AI teams through uncertainty. That means cultivating adaptability, curiosity and resilience: being open to experimentation, willing to adjust strategies quickly, and confident in leading when the path is not fully clear. Leaders must balance performance with responsibility, ensuring that as AI capabilities expand, their organizations remain both competitive and trusted.

Each of these dimensions represents an essential leadership challenge. Mastering them is what separates those who experiment with AI from those who truly embed it. 

To accelerate this shift, Duke CE has partnered with Axialent, a global consultancy specializing in cultural transformation, leadership mindset and high-performance teams, to design custom learning journeys that help organizations harness the power of AI as a driver of cultural and business transformation.

These are not optional skills: they are imperatives for any leader serious about AI transformation.

Six practical lessons for leaders

The current failure rate of AI transformation projects isn’t inevitable – far from it. Building on the above dimensions, leaders should take six practical steps to improve their chances of success.

1. Reframe AI as cultural transformation
Technology investment will not deliver value unless adoption is prioritized. AI should be treated as a cultural shift, not simply a technical rollout. Leaders who understand this set the stage for enterprise-wide change rather than isolated pilots.

2. Build trust and transparency
Employees will only adopt AI if they trust it. That means ensuring clear governance and transparent communication. Trust grows when organizations are explicit about what AI can and cannot do, and when human oversight is guaranteed.

3. Align leadership priorities
AI initiatives fail when executives pull in different directions. Boards and leadership teams must align around a shared AI strategy and governance framework. Without this unity of purpose, adoption fragments and progress stalls.

4. Empower experimentation
Organizations must reward curiosity and create safe spaces for employees to learn and experiment. Broad access to AI tools, reduced procurement friction, and public recognition of wins can help non-technical teams become strong adopters.

5. Balance urgency with responsibility
The race to adopt AI is fast-moving, but reckless implementation risks a backlash. Leaders must balance speed with responsibility, setting guardrails that encourage experimentation while protecting the organization from costly missteps.

6. Develop new leadership capabilities
Leading a hybrid workforce of humans and AI agents requires new skills: navigating complexity, fostering collaboration and balancing performance with human well-being. Companies that succeed here show how deliberate investment in these capabilities enables adoption to spread responsibly and sustainably.

Leadership in the age of co-intelligence

The age of AI will not be defined by algorithms but by leaders. Technology may enable new possibilities, but leadership determines whether adoption sticks. Executives today face a choice. They can continue to invest in technology-first initiatives and risk joining the majority that never realize value. Or they can reframe AI as a cultural transformation: one that demands curiosity, trust, alignment and responsibility.

The organizations that thrive will be those whose leaders consciously design cultures where humans and AI work together – not in conflict, but in co-intelligence.

Andrés Saint-Jean is global head, strategic partnerships, sustainability and digital, at Duke Corporate Education. Oseas Ramírez Assad is CEO at Stoic and former CEO of Axialent