Humans plus AI: the new value equation

As financial services firms ramp up their investment in artificial intelligence, the crucial role of people in an organization’s future value proposition is becoming clearer

“Artificial intelligence is on the way in, people are on the way out.” If you’ve been living under a rock for the last several years, that sentence would probably provide a pretty good summary of the tenor of recent discussions about the future of the financial services sector. 

Yet the reality, of course, is rather more nuanced. If anything, two years of implementing AI is teaching us the intrinsic value of people and their place in an organization’s value proposition. Senior leaders should be paying careful attention. 

First principles: leaders’ ongoing role 

Senior executives play a crucial role in contextualizing and interpreting the results generated by AI, ensuring that these insights are integrated into broader business strategies ethically and responsibly. To maximize AI’s potential while minimizing risks, executives should approach AI results with a combination of technical understanding, ethical consideration and human judgment, keeping in mind the company’s strategic goals, societal impact, and the need for transparency and fairness.

First, understand AI’s current limits from the perspective of how it will be implemented.

Data dependency AI results are only as good as the data they are trained on, so executives need to ask questions about the quality, relevance, and diversity of that data. If it is biased or incomplete, the results will be skewed. For example, banks extend loans to small ‘mom and pop’ grocery stores based on their cash flow and other internal business factors. Are these factors the same for all similar stores in a city? Do external factors exist that might skew the data, such as access to distributors, transportation infrastructure, rent increases, or social preference changes, causing this group to be more or less risky? Are these factors uniform across all small businesses, or all locations?
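To make those questions concrete, a minimal sketch – with entirely hypothetical file and column names, and an arbitrary threshold – might group historical loan outcomes by location and flag districts whose default rates diverge from the portfolio average:

```python
import pandas as pd

# Hypothetical loan-book extract: one row per small-business loan.
# Assumed columns: 'district', 'monthly_cash_flow', 'defaulted' (0/1).
loans = pd.read_csv("small_business_loans.csv")

portfolio_rate = loans["defaulted"].mean()

# Compare each district against the portfolio-wide default rate.
by_district = loans.groupby("district").agg(
    n_loans=("defaulted", "size"),
    default_rate=("defaulted", "mean"),
    median_cash_flow=("monthly_cash_flow", "median"),
)

# Flag districts diverging by more than 50% (an arbitrary threshold):
# a hint that external factors - distributors, transport, rents -
# may be skewing the data for that location.
by_district["flagged"] = (
    (by_district["default_rate"] - portfolio_rate).abs() > 0.5 * portfolio_rate
)
print(by_district.sort_values("default_rate", ascending=False))
```

A flagged district does not prove the data is biased, but it tells executives exactly where to ask the questions above before trusting a single model across all locations.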

Algorithmic bias This can present in many different ways. For example, if an AI system is generating hiring recommendations based on historical employee data, it may perpetuate past biases. Or it could show up in credit assessments. Banks assess customers’ creditworthiness by validating their sources of income and bill-payment history. But AI algorithms now compare a customer’s financial behavior with that of people with similar profiles to inform lending decisions.

To what degree are behaviors predictable, and what factors will alter behavior over time? If bias occurs within a grouping of data, an individual may not qualify for a loan even though their own data indicates they are a good credit risk. On the other hand, one bank realized it could alter behavior simply by interpreting its data in a new way. Customers who habitually pay their credit card bills late often ultimately default. Yet AI analysis revealed that the majority of customers in this category use their cards to acquire bonus points for travel or products. Traditionally, late payers are fined, which adds to their financial problems – but the bank changed the rules, so that customers who pay on time receive bonus points. The number of late payers decreased 40% in two months. Executives should ensure that data is carefully curated to spot opportunities and prevent the possibility of perpetuating discrimination.
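The cohort effect described above can be reproduced with purely synthetic numbers. In the Python sketch below – every feature, coefficient and figure is invented for illustration – the same strong applicant is scored twice: once by a model that also sees the default rate of ‘people with similar profiles’, and once on their own record alone:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: an individual repayment score, plus the default
# rate of the cohort of 'people with similar profiles'. Both invented.
individual_score = rng.normal(0.0, 1.0, n)
cohort_default_rate = rng.uniform(0.0, 0.4, n)

# True outcomes driven mostly by individual behavior (sigmoid link).
logit = -2.5 * individual_score + 4.0 * cohort_default_rate
defaulted = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([individual_score, cohort_default_rate])
cohort_model = LogisticRegression().fit(X, defaulted)
solo_model = LogisticRegression().fit(X[:, :1], defaulted)

# A strong individual applicant who happens to sit in a high-risk cohort.
applicant = np.array([[1.8, 0.35]])
print("P(default), cohort-aware:", cohort_model.predict_proba(applicant)[0, 1])
print("P(default), own record: ", solo_model.predict_proba(applicant[:, :1])[0, 1])
```

Because the applicant sits in a cohort with an above-average default rate, the cohort-aware model assigns them higher risk than their individual record alone would justify – precisely the bias executives should probe for.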

Model transparency Many AI systems operate as ‘black boxes,’ making it difficult to understand how they reached specific conclusions. Executives should push for models that are explainable and transparent, especially in critical decision-making areas like finance, hiring, or product recommendations. Imagine that a robo-advisor – an AI-based financial advisor – presents a ‘buy’ recommendation for a given company on the stock market. A customer makes the purchase, but the stock price crashes. Does the company that wrote the robo-advisor algorithm have any liability? Or the financial institution that provided the service? Transparency may reduce exposure to such risks. 
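One practical route to such transparency is a model-agnostic explanation technique. The sketch below uses permutation importance – one illustrative option among many, run here on synthetic data with hypothetical feature names – to summarize which inputs drive a credit model’s decisions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for a credit-decision model: synthetic data, hypothetical
# feature names. The point here is the explanation step, not the model.
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
features = ["income", "payment_history", "utilization", "tenure", "inquiries"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much held-out accuracy degrades. Model-agnostic, so it works even when
# the underlying model is a black box.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean, std in zip(features, result.importances_mean,
                           result.importances_std):
    print(f"{name:>16}: {mean:.3f} +/- {std:.3f}")
```

The resulting ranking is something a risk committee or regulator can interrogate, even when the model’s internals cannot be inspected directly.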

Precision versus generalization AI can excel at specific, narrow tasks, but it may struggle with generalizing outside its training scope. Results should not be interpreted as universally applicable without considering the context in which the AI was trained. In recent months, a client experimented with AI to produce the first of six planned training videos for staff, each five minutes long. With just a few words and ideas, AI was able to generate a script, select images, compile video clips and narrate them, producing a polished video. The team was amazed, and set about creating the remaining five videos. But by video three, the structure seemed repetitive; by video five, it seemed mechanical and boring. Human intervention in script development and editing was required.

Prioritizing AI deployment 

When establishing how AI is to be used – as when interpreting the results it produces – senior executives should draw on human judgment, domain expertise and insight.

First, AI results must be placed into the context of the organization’s overall strategy. While AI may offer new insights, executives need to ensure that the results align with long-term business objectives and do not contradict established business practices or ethical frameworks. Implementing AI is not an end unto itself: it is the beginning of reassessing how the organization adds value to customers.

Second, leaders need to prioritize what they act on. Not all AI implementations are created equal. An AI system might suggest aggressive cost-cutting measures that could improve short-term profitability, but damage customer satisfaction or employee morale. The executive team needs to weigh these AI-generated insights against the broader strategic focus.

Leaders also need to think about risk management: AI results should be only one part of any decision-making process. Executives need to assess the risks of relying too heavily on AI – potential legal, ethical, and reputational exposure – which means understanding where AI has the potential to fail and planning for human oversight to mitigate such failures. In regulated financial services, in particular, this is crucial.

In short: senior executives need to bring prioritization, ethics and logical deduction to bear on the analysis and contextualization of AI’s output.

Leadership and domain expertise 

As senior executives think about how to deploy AI in their organizations, the question is this: how much AI is needed? And how much human interaction – to optimize the resources of the organization – is needed to add substantive value to customers, suppliers and investors? A few guiding parameters can help shape how AI is deployed. 

One is human-AI collaboration. AI-generated results should be viewed as supportive tools, not definitive answers that circumvent the need for human interpretation and decisive actions. Senior executives must apply their own judgment, experience, and understanding of the business landscape. An AI system may recommend pricing adjustments based on market data, but executives need to consider other factors, such as customer sentiment, competitive dynamics, or upcoming regulatory changes, which may not be reflected in the AI’s recommendations. As the old maxim goes: to err is human, but to derail a strategy takes an organization. AI could accelerate the process. 

Domain expertise is vital. Wherever AI results will be used, experts must bring context to assess whether results are relevant, feasible, or ethical. AI may miss subtle industry-specific factors that a seasoned professional would notice.

From consumers to ethics

Financial services leaders also need to pay close attention to the ethical dimension of how AI is deployed. While the sector is extensively regulated, AI itself is not – for now – actively regulated in most global markets. Most regulators are waiting to see how business implements AI. Will it adopt a proactive self-regulating process for dealing with ethical issues, or is intervention required?

The question is, how should businesses and regulators ensure that AI-driven marketing and product recommendations are fair and ethical, and not exploitative? Should there be limits on how much AI can influence customer choices? Businesses would be wise to demonstrate that they can be self-governing, or policymakers may feel they have little choice but to act aggressively.

AI-generated results should be evaluated not just for their business impact, but for their broader societal and ethical consequences. Executives need to ask whether the use of AI contributes to positive social outcomes, or whether it exacerbates inequality, invades privacy or causes social harm. For instance, if AI is monitoring consumers’ behavior and customizing their customer journeys, will it push consumers into an ever-smaller selection of goods? When it is used to offer personalized financial products or targeted marketing, AI could lead to manipulative practices where vulnerable customers are pushed toward products that benefit the bank, but may not be in the customer’s best interest, such as high-interest loans or unnecessary insurance products. 

AI can undoubtedly enhance the customer experience: when it is used well, customers will be empowered and happy. But if consumers believe they are being manipulated, mistrust will follow. For instance, when AI is used in loan approvals, results must be assessed for how they impact underserved communities. Do they unintentionally exclude certain groups from accessing financial services? Executives must weigh societal impacts carefully against business benefits.

AI adaptation

Assuming that leaders can define suitable ways of using AI, a consequent question is how to monitor its performance and its adaptability to a changing business environment. Early experience suggests disappointment among some senior executives about the productivity and cost savings delivered by AI. Yet where these fall short of expectations, other factors are often at play. One is the organization’s inability to incorporate AI into the existing business model, or to adopt a new one.

The key is to manage expectations, and understand the cause and effect of how AI implementation changes business processes. One approach is to initiate a process of continuous improvement of the business around the outputs of the AI engine. AI systems evolve and improve over time as they are exposed to new data, so their performance needs to be monitored to ensure that outputs remain relevant to changing business conditions. Leaders should also establish feedback loops to identify when AI results diverge from expectations or from human expertise. For example, if a bank is trying to identify fraud, an AI system might initially provide strong results, but it could become less effective if fraud patterns evolve faster than the system can learn: after all, AI can also be used to generate synthetic identities and counterfeit plausible personal profiles to apply for loans or credit cards. Regular performance reviews and updates are necessary to keep AI aligned with business needs.
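Such a feedback loop can start very simply: track the model’s monthly detection rate against its own trailing baseline and alert when it slips. The sketch below assumes a hypothetical log of scored transactions with confirmed outcomes; the 20% drift threshold is arbitrary:

```python
import pandas as pd

# Hypothetical log of scored transactions: the model's call vs. the
# eventually confirmed outcome. Assumed columns: 'date',
# 'predicted_fraud' (0/1), 'confirmed_fraud' (0/1).
log = pd.read_csv("fraud_decisions.csv", parse_dates=["date"])

def recall(frame: pd.DataFrame) -> float:
    """Share of confirmed fraud cases the model actually flagged."""
    caught = ((frame["predicted_fraud"] == 1) & (frame["confirmed_fraud"] == 1)).sum()
    total = (frame["confirmed_fraud"] == 1).sum()
    return caught / total if total else float("nan")

monthly_recall = log.groupby(log["date"].dt.to_period("M")).apply(recall)

# Alert when detection slips well below its own trailing six-month
# baseline: the signal that fraud patterns may be evolving faster than
# the system is learning. The 20% threshold is arbitrary.
baseline = monthly_recall.rolling(6, min_periods=3).mean().shift(1)
alerts = monthly_recall < 0.8 * baseline
print(pd.DataFrame({"recall": monthly_recall, "baseline": baseline,
                    "drift_alert": alerts}))
```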

AI and managing risks

Finally, using AI in the active process of scenario planning and risk mitigation is key to aligning the organization strategically and sustaining its ability to add value. As the landscape of the global economy continues to redefine itself, scenario planning has become an important tool for senior executives in determining their next strategic moves. AI’s ability to interpret data and generate results will continually improve as the engines get smarter and machine learning enables AI to incorporate a wider array of logic.

AI outputs can be used to model different potential future scenarios and assess risks, but AI-generated results should always be cross-checked with ‘what if’ scenario planning and a dose of common sense. Insights need to be tested against adverse conditions or outlier events to ensure they hold up under scrutiny. In the context of financial planning, an AI model may predict strong revenue growth, but executives should also model scenarios where market conditions deteriorate or unforeseen crises arise. This helps ensure that AI recommendations are robust across various potential outcomes. 
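As a sketch of what that cross-checking can look like in practice, the Monte Carlo exercise below perturbs a hypothetical AI point forecast with executive-supplied ‘what if’ inputs – every figure is invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths = 100_000

# Hypothetical AI point forecast for next year's revenue ($m). Invented.
base_forecast = 120.0

# Baseline uncertainty around the forecast, plus an executive-supplied
# adverse scenario: a downturn hits with 15% probability and cuts
# revenue by a further 10-30%. These are 'what if' inputs, not outputs
# of the AI model.
growth_noise = rng.normal(0.0, 0.05, n_paths)
downturn = rng.random(n_paths) < 0.15
shock = np.where(downturn, rng.uniform(0.10, 0.30, n_paths), 0.0)

revenue = base_forecast * (1 + growth_noise) * (1 - shock)

print(f"Mean outcome:      {revenue.mean():6.1f}")
print(f"5th percentile:    {np.percentile(revenue, 5):6.1f}")
print(f"P(revenue < 100):  {(revenue < 100).mean():.1%}")
```

It is the tail percentiles, not the mean, that reveal whether the AI’s recommendation holds up under adverse conditions.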

The C-level agenda

AI is not an IT solution looking for a problem: how, when, where and why an organization employs it will alter how businesses grow revenues, reduce costs and improve the quality of services. The opportunities are vast. Yes, there are risks, including the potential for misuse or misinterpretation of AI’s outputs. But any technological innovation poses some risk.

The challenge for senior executives is to align AI with the business strategy, carefully weighing its pros and cons, and playing an active and ongoing role in understanding the output of AI algorithms. Leaders cannot automate the application of ethical constructs. Sometimes, a dose of human insight is exactly what is required.

The underlying question, then, is this: when, where and how can we use AI to enhance the organization’s value proposition – its products and services – and deliver value to customers, in both the near and the distant future?

Joe DiVanna is a Duke CE educator, an associate with The Møller Institute, University of Cambridge, a visiting professor at Nazarbayev University Graduate School of Business, and author of Thinking Beyond Technology: Creating New Value in Business