Can leaders establish a coherent approach to ethics in today’s digitally-powered business environment?
Today’s executives urgently need to provide clarity on how information is to be used. Artificial intelligence (AI) coupled with big data and machine learning is creating a new dilemma for senior leaders: what should be done with information and insight gained through inference? And how can they make sure the parameters of AI, machine learning and data-gathering tools are not biased or skewed towards the world view and understanding of those who program the system?
Technology and business’s new dilemmas
Lines of computer programming code are the building blocks of how AI and machine learning function. Armies of programmers are producing algorithms that are being applied to predict the behaviour of people, companies, governments and any number of non-numerical circumstances. For senior executives, this raises strategic questions: how do we make sense of new data, and what is our responsibility when we discover unexpected information patterns? What is our moral and ethical responsibility to customers, stakeholders and shareholders?
How business answers these questions could have profound implications. Using derived data (that about 50% of US marriages end in divorce, let’s say) and statistical data (the average American carries a credit card debt of $6,000), filtered into an inference engine (based on monitoring of consumers’ purchasing behaviours), a bank might predict that a couple is set to divorce within 24 months – well before they themselves consider the possibility. The bank could use this information to reduce the credit limit on the couple’s cards to minimize the risk on its loan portfolio. Should it also tell the couple? Among the issues would be the risk of unsettling a happy marriage by mistake. Or should it say nothing, and plan for the event, perhaps by selling them divorce insurance?
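A toy sketch suggests how such an inference engine might stack behavioural signals on top of a statistical base rate. Everything here – the signal names, the weights, the additive scoring – is invented for illustration, not a description of any real bank’s model:

```python
# Hypothetical sketch of an inference engine: a population base rate is
# adjusted upwards by observed behavioural signals. All signal names and
# weights are invented for illustration.

BASE_RATE = 0.50  # derived statistic: roughly 50% of US marriages end in divorce

# Invented behavioural signals a bank might observe, with made-up weights.
SIGNAL_WEIGHTS = {
    "separate_accounts_opened": 0.15,
    "joint_purchases_declining": 0.10,
    "legal_services_payment": 0.20,
}

def divorce_risk(signals: dict) -> float:
    """Naively add the weights of observed signals to the base rate, capped at 1.0."""
    score = BASE_RATE + sum(SIGNAL_WEIGHTS[name]
                            for name, observed in signals.items() if observed)
    return min(score, 1.0)

couple = {
    "separate_accounts_opened": True,
    "joint_purchases_declining": True,
    "legal_services_payment": False,
}
print(round(divorce_risk(couple), 2))  # 0.75
```

The point of the sketch is not the arithmetic but the ethical asymmetry it exposes: a crude score like this is trivially easy to compute from data the bank already holds, yet acting on it raises all the questions above.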
Such dilemmas are often made more acute by historical data creating trends that perpetuate bias, whereby socio-cultural prejudices and other factors are built into systematic processes. Information bias in AI and machine learning results when certain elements of a dataset are more heavily weighted and/or represented than others. Consider healthcare as another example, and what we are learning about the influence of our genome on health outcomes. If your parents and grandparents were susceptible to a genetic abnormality such as cystic fibrosis, sickle cell anaemia, or Huntington’s disease, a medical insurance firm might want to know. Should it be able to elect not to insure you, or charge you a higher premium? No – at least not according to US lawmakers. In 2008, the US passed the Genetic Information Nondiscrimination Act (GINA) to prohibit insurance providers from using genetic information in decisions about a person’s health insurance eligibility or coverage.
In this case, as in many, the potential bias in data was not created intentionally; it was the by-product of researchers aggregating new information to discover patterns of health events. The underlying issue is that the use of predictive analytics, statistical algorithms and machine learning techniques to identify the likelihood of future outcomes based on historical data can lead to analysis and outcomes that impinge on privacy and individual freedom – and often impact business outcomes.
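The mechanics of representation bias can be shown in a few lines. In this invented example, one group contributes 90% of the records, so the pooled statistic quietly drifts towards that group and misdescribes the minority:

```python
# Hypothetical sketch of representation bias: when one group dominates a
# dataset, an aggregate statistic drifts toward that group. Group labels
# and outcome values are invented for illustration.

group_a = [0.9] * 90   # over-represented group: 90 records
group_b = [0.3] * 10   # under-represented group: 10 records

pooled = group_a + group_b
pooled_mean = sum(pooled) / len(pooled)  # dominated by group A

# Weighting each group equally recovers a balanced view.
balanced_mean = (sum(group_a) / len(group_a)
                 + sum(group_b) / len(group_b)) / 2

print(round(pooled_mean, 2))    # 0.84 -- reflects group A almost exclusively
print(round(balanced_mean, 2))  # 0.60 -- each group counted equally
```

No one set out to disadvantage group B here; the skew is a by-product of who happened to be in the data – exactly the unintentional bias described above.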
Corporate integrity can also be compromised when collected data is used for purposes beyond the expectation of customers. For example, one million US consumers submitted their DNA, via saliva samples, to the genealogy site GEDmatch to discover their roots and learn about their ethnicity. They did not envision that their DNA would be shared with law enforcement officials – yet it was. One could argue that disclaimers and disclosures at the bottom of long contracts often give greater permissions than users might intend. But given the extreme potential impact of a misunderstanding of the uses of data, is it not the responsibility of business to err on the side of greater clarity?
There are other issues, of course. As more and more information is gathered, data may be taken out of context. A number of companies are now offering social media checks which screen online behaviour to assess an employment candidate’s potential risk to an employer, for instance. But of course, one employer’s risk could be another employer’s asset. For example, a candidate who is part of an animal rights group could be an asset to an animal welfare organization, but less desirable to a pharmaceutical company that conducts animal testing. Predictive analysis can contribute to racial, cultural, religious and political bias; association fallacies can easily lead us into making misinterpretations.
There are other problems with the use of big data and the extrapolation of inferences to inform business decisions. Between 1975 and 2017, the consumption of milk in the United States declined by 41%. The marriage rate declined by the same percentage. Can we conclude that the less milk you drink, the less likely you are to marry? There are six possible interpretations: milk drinking causes marriages; drinking other things causes divorce; milk drinking and marriage partly cause each other; milk drinking and marriage are influenced by a common external factor; drinking other things is driven by an external factor that also causes more divorces; or there is no causal relationship whatsoever.
The data can be interpreted in ways that appear to support conceptual interpretations, but correlation does not prove causation. Senior leaders need to provide not just guidelines but insight on how to interpret information. This raises another strategic question: how involved should senior management be in scrutinizing (or supervising) how formulas to manipulate data are composed? More importantly, what skills do senior leaders need? What competencies need to be established to ensure operational integrity in the implementation of AI and machine learning?
The new corporate competency, understanding information, requires the C-suite to provide clarity on what the company wants from data, ensure ethics and compliance in how it is gathered and interpreted, and consider the moral and ethical implications of the handling, distribution and dissemination of data beyond simple regulatory requirements. To address these issues, senior executives will want to ensure they communicate corporate responsibilities as the science continues to mature.
Commercial implications and key questions
How an organization is perceived to use data can impact its brand, stock price, revenues, customer loyalty, employee morale, and ability to recruit talent. This is no small matter.
When data is misused, it often begins unintentionally, but becomes normalized over time as exceptions are weighted against a desired outcome. The erosion of ethical behaviour often requires tacit cooperation from the organizational hierarchy. What happens when a client wants to perform a transaction that is legal and within policy, yet may be immoral or unethical? What happens if a company claims it is green, but has structured its pension fund to invest in non-green equities to secure higher returns?
Senior leaders need to project their values, attitudes and beliefs to form an ethical operating culture. Ethical conduct of the organization is a shared responsibility between leaders and subordinates. As the complexity and volume of information increases, the need for clarity increases. It is up to senior leaders to set the standard. What’s more, in many countries, failing to address the ethical parameters of information use could also expose senior executives and board members to personal and corporate liabilities.
The key is to create a culture of integrity by emphasizing the role of ethical behaviour in the treatment of data. It starts with asking several key questions:
- How will we benefit from the output of this information – and who might it harm?
- What is the social cost of a wrong decision based on the correlation of specific items of data?
- What is the probability of bias in the use of an algorithm?
- Has data been collected and analysed in the most diverse and ethical ways to avoid bias?
- Is there a risk of exploiting individuals financially when we use behavioural data to determine profitability?
- Do we use data that would enable us to capitalize on negative events or outcomes to ensure higher profits?
Do no harm, or do good?
The principle that binds medical doctors is a guiding ethical consideration for most healing cultures across the world. But is it right for business? Senior leaders need to consider whether their organization is motivated by the fear of doing the wrong thing (which may result in penalties or a reduction in profitability) – or by the potential to do the right thing for the greater good.
Certainly, organizations that engage in unethical behaviours face negative consequences. In the UK, for instance, section 172 of the Companies Act 2006 makes clear the obligation of directors to promote the success of the company for the benefit of its shareholders. But increasingly, higher profits are associated with ethical behaviour that strengthens brands, enhances customer loyalty, and ultimately drives value and share price.
The best way to initiate an ethical agenda is by conducting an honest dialogue at C-suite level to achieve clarity on strategic intent and the appetite for use of data, and then set clear policies and procedures. After that, it is critical to practise what you preach: to reconcile your ethics and morals with profitability. For example, numerous cryptocurrencies are speculative: for me to get a return, someone else has to lose money. Here, ethics become a matter of perception, and we start to see how our emotions may lead us to distort the facts to satisfy an inner moral conflict. If an employee asks whether he or she should invest their life savings in cryptocurrencies, what would your answer be?
That potential conflict plays out at scale at corporate level. Several major online platforms in effect acted as regulatory bodies during the pandemic, when sellers of items such as hand sanitizers, face masks, toilet rolls or dry food inflated prices to perhaps 20 times the pre-pandemic retail price. These retail platforms had to be warned by attorneys general in several US states about the consequences of price gouging. Why did these major platform businesses need such reminders? Did they lose their moral compass in the quest for higher profits? At what level does free market capitalism transform into profiteering and exploitation?
Moral dilemmas are not restricted to the corporate agenda. Customers look for the cheapest deal, often regardless of how a product is sourced or how the company that sells it compensates its employees. This is the nature of capitalism, and the freedom of individual choice is central to it. Can that freedom be deployed to elevate standards for all, rather than pushing down prices and therefore costs, which has the (unintended) consequence of perpetuating inequalities?
As technology advances, creating new avenues of information to better understand human behaviour, deploying this understanding for business purposes becomes ever-more important – and a powerful tool for improving the quality of life in our globally-connected world. Business leaders would do well to devote time to thinking through the ethics of how they will act in this new world.
Regulators are looking at embedding ethics into school curricula in many countries around the world. Perhaps they should speed up.
Joe DiVanna is a Duke CE educator, an associate with The Møller Institute, University of Cambridge, a visiting professor at Nazarbayev University Graduate School of Business, and author of People: the New Asset on the Balance Sheet.