Leadership, trust and transformation

Trust is a critical foundation for successfully implementing AI. Here’s how leaders can accelerate its adoption and reap the benefits of AI’s power – while cultivating trust among key stakeholders.

Executives scanning the innumerable articles on AI may have noticed the transformative view of the technology espoused by many experts.

“This may be the biggest change that any of us have ever seen,” says Marc Andreessen, general partner of Silicon Valley venture capital firm Andreessen Horowitz. 

“We’re talking about a level of scale that is candidly unprecedented,” says David Solomon, chief executive of Goldman Sachs, in Fortune.

The boldness of these statements reflects the transformative promise of AI. That promise stems from entirely new computational capabilities, and it will drive the next platform shift in the evolution of the digital economy. For corporate executives, a shift of this magnitude requires new thinking. It demands a nuanced understanding of, and a proactive engagement with, AI capabilities as a matter of overall strategy and organizational design. It is equally vital to pay close attention to developing trust with multiple stakeholder groups during the deployment of this game-changing technology.

To lead their organizations effectively into an AI-powered economy, leaders should reflect upon the keys to success in the digital economy, and consider leadership behaviors and investments through a broad lens that spans embodiment, empowerment, experimentation, and expansion. This article explores these four critical behaviors, and offers insights into how they can be applied in practice, both to build trust and to harness the full potential of AI.

Embodiment

Because AI is so transformative, executives should embody an AI-first mindset to lead successfully. This includes developing a deep understanding of AI’s capabilities, limitations, risks, and transformative value-creation potential, and a commitment to integrating AI into the company’s strategy. Leaders can serve as advocates for the exploration of AI use cases, emphasizing transparency, accountability, and fairness to build trust among all stakeholders.

JPMorgan Chase chief executive Jamie Dimon shows the way. He serves as the bank’s leading spokesperson on AI’s transformative impact on the company, publicly describing developments such as its new LLM Suite – a generative AI tool designed to support employees via writing assistance, idea generation and document summarization. In doing so, he demonstrates his trust in AI tools and fosters a culture of AI adoption.

If leaders fail to actively showcase their thinking about both the opportunities and the risks of AI, they may appear disconnected from the changing market and create internal resistance to adoption. Matthew Shorts, assistant vice president of AI at a Fortune 500 company, remarks: “Leaders can actively encourage their teams to experiment and report back findings. Showing their organization that they are willing to change gives the teams space to find new opportunities.”

Beyond encouraging their teams, leaders should also find their own uses for AI. Igor Jablokov, founder and chief executive of Pryon, says leaders “have to think of AI as a force multiplier, an extension of themselves, as a center of innovation and creativity.” The benefits are significant. “AI allows leaders to take more shots on goal and to pressure-test their ideas for the future. They can ingest lessons learned from the past from across all signals and develop future scenarios for the organization.”

Empowerment

Enabling the organization to leverage AI effectively requires leaders to ensure that the necessary infrastructure, tools and talent are in place. This includes investing in the right technology platforms to support AI initiatives, and providing ongoing education and training programs for employees to develop AI literacy and skills.

Leaders also need to balance two potentially competing constituencies within their talent base: those who want access to leading-edge AI tools and those concerned about job displacement. The key is to design an organizational strategy that delivers AI capabilities equitably, providing upskilling opportunities across the organization; fosters a culture that supports risk-taking and innovation; restructures traditional roles or teams to better align with agile, AI-driven workflows; and ensures cross-functional collaboration to integrate AI across the organization. “AI is not only for a select few, not just for the data science team,” says Igor Jablokov. “Companies should look for vendors that have made their interfaces easy to use so that more people can use the tools, and contribute to how an organization uses the technology.”

“People feel threatened until they understand what AI can be,” points out Matthew Shorts. “The industry makes AI look like it will replace jobs and people, but once they understand the capabilities and limitations that exist today, most people come around to feeling empowered, and see an opportunity to upskill and improve performance.”

Most executives also agree about the importance of upskilling talent, he adds. “Companies want 10x productivity from their people in places where populations are diminishing. The companies that will survive and thrive are those that harness cognitive AI technologies. Give your people the tech and start figuring out the crawl, walk, run scenarios. It doesn’t have to be about big solutions on day one.”

Experimentation

In a field as dynamic as AI, experimentation to learn what new value is available is vital. This includes setting up pilot projects to test new AI-driven products or processes and learning from both successes and failures. In this context, leaders need to manage expectations and accept that not all AI initiatives will succeed. Experimentation should be viewed as a learning tool, an investment in discovering what works best for the organization.

Thor Ernstsson, founder and chief executive of ArcticBlue AI, suggests that leaders should guide their teams to “focus on a specific business outcome and organize experiments around that outcome. They can start with an aspirational vision and align multiple experiments to make progress towards it.” 

Additionally, Ernstsson recommends that leaders design incentives for this experimentation. “You need to define measurable outcomes and align the incentives, because unless a team is properly bought in, the efforts won’t go anywhere.” 

Deb Dunie, a corporate director serving on multiple boards across several industries, describes her ambition for AI experimentation. “We all hope to see new and innovative capabilities develop as a result of AI. As processing capacity grows we can approach multi-dimensional problems using AI,” she says. The potential is colossal. “Concepts that we have yet to even imagine may become a reality.” 

Indeed, AI allows for new ways of experimenting without the burden of preconceived ideas. “Tools can be adopted for rapid prototyping by people who have grown up with the technologies, people who don’t have a preconceived notion about what should work or worked in the past,” says Jablokov. “We have seen companies where younger people were handed technologies which leaders weren’t sure how to leverage, and they revealed solutions never seen before.” 

Experimentation also helps to de-risk an organization’s strategy – vital as the market increasingly challenges the return on investment (ROI) from AI. “In the very early days of AI, you are almost in a phase of constant experimentation to understand the limits, capabilities and risks AI has for your organization,” says Matthew Shorts. “It is only after that exploration that you know how to bring it into your organization.”

Yet that cannot mark the end of experimentation – far from it. As the pace of AI development continues to accelerate, not experimenting represents an existential risk. “Companies won’t be in business if their competitors experiment better than you; you won’t be able to catch up because of how fast these things are evolving,” says Ernstsson.

Expansion

As AI capabilities move beyond initial enablement and experimentation, leaders must ensure that solutions are integrated smoothly into existing business processes. Effective execution also means being proactive in understanding the legal and ethical implications of AI, particularly in areas like data privacy and intellectual property. 

The speed with which AI has captured global headlines has also spurred an important conversation about the trusted use of the technology. Policies, guidance and laws are already on the books in the European Union and in the United States, at both federal and state level – while multiple lawsuits are in process, especially concerning copyright and discrimination. Leaders must design their AI strategies with these considerations in mind, but also consider multiple other levels of trust.

AI creates both new opportunities and new risk-management concerns. “It’s very hard right now to layer on your culture or ethical structure to have AI behave like a person who shares your values,” says Matthew Shorts. “You can train people to behave in a certain way for risk management, but there are no guardrails for AI on this.”

Once AI initiatives prove successful, the next step is to expand solutions across the organization. This requires strategic scaling, thoughtful consideration of where AI can create additional value, and continuous improvement of AI capabilities.

AI can correlate massive amounts of information at speeds that were not formerly available. That has significant implications, points out Deb Dunie. “This means that subject/topic conclusions can be based on assessing side-by-side comparisons of different queries and assumptions, choosing the most logical endpoint – even one that is an anomaly. So businesses can use these capabilities to find answers to elusive questions, and optimize approaches.”

Leaders must ensure that systems are in place to continuously monitor AI performance and make adjustments as needed, including attention to ethical use. Without careful planning, the speed and scale of AI tools pose new challenges to understanding what data was employed and whether its use complies with legal requirements, such as copyright. “People are never quite sure if the results of a question might be stomping on others’ intellectual property or personal information,” adds Dunie.

More broadly, leaders must ensure that quality data is used to train their AI systems, and monitor accuracy and reliability. Bad data creates bad results. As a last resort, leaders must also be able to turn the system off.

Trust underpins everything

Developing an organizational approach to AI which considers embodiment, empowerment, experimentation and expansion helps leaders to approach AI holistically – and to build trust with multiple stakeholder groups. 

Indeed, trust runs through each of the four areas. Considered engagement with the organization’s talent will build trust that they are active participants in and beneficiaries of change, encouraging them to embrace upskilling opportunities and enhance their productivity. A strategy which embraces experimentation and ongoing expansion will build trust among talent, boards and stakeholders, giving them confidence that leaders have carefully considered ROI, de-risked AI investments, and are actively exploring new competitive threats. Attention to ethical and regulatory considerations builds trust across all stakeholders, including groups such as corporate watchdogs, the media, and governmental and regulatory agencies.

For corporate executives, leading in the age of generative AI is not just about adopting new technologies but also about embodying, empowering, experimenting with and expanding these capabilities across their organizations. By cultivating these behaviors, leaders can ensure that their businesses not only adapt to the digital age but thrive in it, leveraging AI to drive innovation, efficiency, and sustainable competitive advantage.

Ryan McManus is an educator for Duke Corporate Education, a board director, and the founder of Techtonic.io