Weak AI ethics are bad for business

The Millennial West was a wild place. By 2000, a nascent thing called the internet had reached millions of homes and offices. But it was barely being used. In the UK, for example, 26% of adults had access to the web, yet just 3% were comfortable with shopping online. There was a general distrust of ecommerce: almost four in ten told the UK’s National Consumer Council that they were wary of giving their credit card details over the internet. Among those already using the web, the concern was even more pronounced: more than half – 54% – were reluctant to make transactions online. A lack of trust was killing trade.
A quarter-century later, ecommerce has overcome the trust deficit that threatened to derail it before it started. Through a combination of enhanced cybersecurity, improved regulation and – crucially – ongoing communication with consumers, online retail is now trusted to the point that more than 30% of all UK retail sales are made online. Growth in the US has been similar, with around a quarter of all retail sales now made over the internet. The moral of the story is that trust must be earned and actively managed. Without clear and decisive steps to foster faith in technology, even the smartest digitalization will fail.
Today, AI faces a challenge similar to the one ecommerce encountered two decades ago. The technology isn’t “coming” – as I hear many C-suite leaders say – it is already here. What is slow to arrive is the confidence that it can, or should, be trusted. Some of that fear is irrational, yet much of it is founded on valid concerns. The first problem is a lack of understanding of how AI reaches its conclusions. Without technical knowledge of how a machine operates, we are unable to verify its output. Many organizations recognize that they are insufficiently practiced in managing AI, and are therefore reluctant to implement it.
The second problem is bias. Most AI systems are created and developed by teams that are predominantly male, and unintended prejudices lurk in the systems they build. Just as an online image search for ‘secretary’ returns hundreds of pictures of young women, AI is only as equitable as its source material. This column has noted in the past that when women were asked to design a car, it looked very different from one designed by men. Organizations looking to use AI to scope new products and services must be wary of it proposing ideas that suit only the profile of its creators. AIs left unchecked by human ethics will cause flawed decision-making.
Yet, like the troubles of the ecommerce pioneers, these challenges are not insurmountable. With targeted business education, leaders can learn the skills – and harness the ethics – to work with AI as a partner. Humans can gain the acumen to lead the technology, rather than follow it.
AI promises to transform corporate governance at a blistering pace. It can enhance decision-making, streamline processes, and provide data insights that were previously unobtainable. Yet it must not replace human decision-makers. Rather, it should empower them. Through effective education, boards can use the technology to identify risk earlier, optimize routine tasks, highlight trends and improve compliance. With upskilled leaders in charge, AI is the tool that helps companies see around corners.
Organizations must focus now on enhancing skills and underpinning ethical standards. AI is racing ahead. It must not be rudderless. Leaders with the know-how can mitigate the risks – and turn the ghoul in the machine to gold.
Dr Sharmla Chetty is chief executive of Duke Corporate Education