Inside the black box: AI must be explainable to be ethical


Students attending associate professor Yves-Alexandre de Montjoye’s AI ethics class at Imperial College London could be forgiven for not knowing whether to laugh or cry. The lecture begins with pictures on a screen that were used by AI researchers in China who claimed to have trained an algorithm to spot who is a criminal and who is not. 

The law breakers look understandably unhappy in police mug shots while the contrasting shots of non-criminals show happy-looking professionals taken from online resources, typically corporate websites. Far from predicting criminality, the algorithm had simply been trained to tell the difference between police mug shots and corporate headshots.

“When I show the pictures to my students, they can tell straight away who the criminal is and who isn’t – it’s very obvious to any human,” de Montjoye explains. “AI companies need to be regulated so they don’t make ridiculous claims about what their technology can do. It serves as a warning about developing systems doing something seemingly ‘magical’ and a very clear example of why AI needs to be robust, ethical and, for that, it needs strong regulation.”

The lesson underlines the difference between the bold promises made by AI evangelists and the results their systems deliver. Organisations are being told they can be given a magic wand to take today’s analytics and turn them into more accurate predictions. How much stock will we sell next year? Whose loan applications should be approved? Which person can we trust to be a worthy tenant? AI systems have promised to resolve questions such as these, and many more, but when tested the results have often been off the mark.

In finance, there have been so many examples of AI tools discriminating against non-Caucasian loan and mortgage applicants that the Bank of England’s Deputy Governor, David Ramsden, was concerned enough to call an industry meeting at the end of last year. The attending financial services companies were warned that the UK’s monetary regulator expected responsible lenders to ensure there was no bias in decisions. To do this, the regulator insisted any roll out of AI must be ethically developed and trained with human oversight.

Bias in, bias out

For Pete Trainor, the bias shown in algorithms is not a surprise. The author of Hippo: The Human Focused Digital Book believes that computers are programmed by people and so are just as liable to be coded with our faults as our virtues.

“AI is littered with examples of bias and racism, and ageism, and ableism, and all of the ‘isms’, because they’re trained on us, on our previous data, so they’re like a big mirror of society,” he says.

“There was a very well-known bank in America that had an automated tool for approving people for mortgages, which was rejecting people from minorities, but if you’d reapplied for the same mortgage using a name like Pete Smith, you would get straight through. The problem wasn’t the algorithm, the problem was that bank managers had found a way of codifying their racism in the data that informed the algorithm. 

“The better headline will always be ‘bank’s AI is racist’ but actually it’s more a case of racist bank managers codifying their unacceptable past behaviour.”
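The name-swap test Trainor describes can be sketched as a counterfactual check: score the same application twice, changing only a proxy attribute, and see whether the outcome changes. The scoring function below is a hypothetical stand-in for a real lending model, with the bias deliberately baked in to show what the check catches.

```python
# A minimal sketch of a counterfactual bias test. The model and its
# proxy penalty are illustrative assumptions, not any real bank's system.

def approve(application):
    """Hypothetical loan model that has learned a biased proxy rule."""
    score = application["income"] / 10_000 + application["years_employed"]
    # Bias inherited from historical data: a penalty keyed to a proxy
    # attribute (here a postcode group standing in for a name or ethnicity).
    if application["postcode_group"] == "B":
        score -= 5
    return score >= 8

def counterfactual_check(application, attribute, alternatives):
    """Score the same application with only one attribute swapped."""
    outcomes = {}
    for value in alternatives:
        variant = {**application, attribute: value}
        outcomes[value] = approve(variant)
    return outcomes

applicant = {"income": 60_000, "years_employed": 3, "postcode_group": "B"}
print(counterfactual_check(applicant, "postcode_group", ["A", "B"]))
# Differing outcomes for identical finances signal a discriminatory proxy.
```

If two applications that differ only in a proxy attribute receive different decisions, the model is using that proxy, whatever the intent of its developers.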

Why AI’s ‘dirty secret’ matters

This reliance on previously collected data is at the heart of the issues that have sprung up around AI and bias. It is frequently seen in projects that do not make headlines but disappoint businesses by providing unreliable decisions based on flawed data. According to Cameron Turner, VP of data science at Kin + Carta, the problems mostly lie in people not understanding that algorithms are only as good as their data allows them to be.

“AI has a dirty secret – it can’t predict well outside of historical observation,” he says. 

“The classic example of where AI can go wrong is demand forecasting. If a company is opening up a new market they will populate the platform with lookalike and surrogate customers to reflect what they see in other markets. This leads to both data omission and prediction errors. The model is not biased, it can only work on history and if data is wrong or does not match the variables in a new market, it’s going to repeat what it has seen before.”
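One common guard against the failure mode Turner describes is a simple range check: before scoring a new market, verify that its feature values fall inside the range the model saw in training. The field names and figures below are illustrative assumptions, not from any real forecasting platform.

```python
# A minimal sketch of an out-of-range check for new-market data.
# Fields and values are illustrative.

def out_of_range_features(training_rows, new_rows):
    """Return fields in new_rows whose values fall outside the
    min/max observed in training_rows."""
    fields = training_rows[0].keys()
    bounds = {f: (min(r[f] for r in training_rows),
                  max(r[f] for r in training_rows)) for f in fields}
    flagged = set()
    for row in new_rows:
        for f, (lo, hi) in bounds.items():
            if not lo <= row[f] <= hi:
                flagged.add(f)
    return sorted(flagged)

training = [{"basket_size": 20, "visits": 3}, {"basket_size": 80, "visits": 9}]
new_market = [{"basket_size": 150, "visits": 5}]
print(out_of_range_features(training, new_market))  # ["basket_size"]
```

A flagged field does not prove the forecast is wrong, but it marks exactly where the model is being asked to predict outside historical observation.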

This is bad news for the company concerned, but AI’s mistakes can have far more serious implications for people if algorithms are left to run without checks on whether they remain accurate or have ‘drifted’ into giving skewed answers. It is for this reason that Turner’s guiding principle when reviewing AI results is to look constantly for accuracy and ethical drift.

Companies must constantly analyse results to check whether a small part of the data input is now responsible for most of the output decisions. An example could be a financial tool being trained to approve loan applicants. If it sees some very good risk profiles from a wide variety of people from, say, Ohio, the algorithm might then presume people from Ohio are good risks. The result could be that it keeps on approving applicants from that state, possibly to the detriment of those based elsewhere, rather than focusing on each individual’s financial status. In this case, an unusually high proportion of loans being approved in Ohio, compared with other states, would serve as a red flag that the model needs rectification.
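That red-flag check can be sketched in a few lines: compare each state’s approval rate with the overall rate and flag any state that is far out of line. The decisions list and the margin threshold below are illustrative, not taken from a real system.

```python
from collections import defaultdict

# A minimal sketch of a per-segment drift red flag on loan decisions.
# Data and threshold are illustrative assumptions.

def approval_rates(decisions):
    """decisions: list of (state, approved) pairs -> rate per state."""
    counts = defaultdict(lambda: [0, 0])  # state -> [approved, total]
    for state, approved in decisions:
        counts[state][0] += int(approved)
        counts[state][1] += 1
    return {s: a / t for s, (a, t) in counts.items()}

def flag_drift(decisions, margin=0.25):
    """Flag states whose approval rate exceeds the overall rate by `margin`."""
    overall = sum(int(a) for _, a in decisions) / len(decisions)
    return [s for s, r in approval_rates(decisions).items() if r - overall > margin]

decisions = ([("OH", True)] * 9 + [("OH", False)] * 1 +
             [("TX", True)] * 5 + [("TX", False)] * 5 +
             [("NY", True)] * 4 + [("NY", False)] * 6)
print(flag_drift(decisions))  # ["OH"] — Ohio approvals are out of line
```

In practice the comparison would control for genuine differences between segments, but even this crude monitor surfaces the Ohio pattern described above.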

If left unchecked, this drift can cause serious problems for organisations relying on an algorithm to make a decision that is important in a person’s life, whether that be a loan, a college application or a medical diagnosis.

How do we make AI ethical?

There is wide consensus among both insiders and observers that with AI being used to make big decisions that affect people’s lives, unintended bias needs to be removed. This means AI needs to be transparent and that means it has to be explainable, so the public can know how a decision was made and with what likely range of accuracy. “If it’s explainable, it has a shot at being ethical, and therefore sustainable,” sums up Turner.
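For a simple linear scoring model, “explainable” can be as concrete as reporting each feature’s signed contribution alongside the decision, so an applicant can see what counted for and against them. The weights, threshold and feature names below are illustrative assumptions, not any production model.

```python
# A minimal sketch of an explainable decision for a linear scoring model.
# Weights and threshold are illustrative.

WEIGHTS = {"income_k": 0.05, "debt_k": -0.08, "years_employed": 0.4}
THRESHOLD = 3.0

def explain_decision(features):
    """Return the decision plus each feature's signed contribution."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 2),
        "contributions": {k: round(v, 2) for k, v in contributions.items()},
    }

print(explain_decision({"income_k": 60, "debt_k": 10, "years_employed": 4}))
```

Deep models need heavier machinery (surrogate models, feature-attribution methods) to produce an equivalent breakdown, but the output a regulator or customer needs is the same: which inputs drove the decision, and by how much.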

Part of this ethical drive must be for companies to ensure AI only works with high-quality data with informed permission given by people whenever their identifiable information is gathered and processed. These privacy and ethical considerations are leading some experts, such as AI author Trainor, to back a scheme through which software companies would take responsibility for their algorithms and the decisions they make, as well as the privacy protections they have in place. 

“It may seem a little bit twee but one way to get ethical AI might be to ask software companies to take a kind of Hippocratic Oath,” Trainor says.

“In the same way that all doctors swear an oath, why can’t we create that kind of ethics framework? It would be a little bit like a B Corp stamp with companies committing to testing their algorithms and services to ensure they will not cause harm. You would effectively be putting it back on the consumer to choose, like you do with products marked with something like a Fairtrade label.”


Regulation, oaths and kitemarks

Back in London, in de Montjoye’s Imperial College lecture room, students will hear that much of what we need to make AI ethical is coming through a new EU law for which the European Commission published an outline in April this year. The AI Act will, he believes, ensure the future development of algorithms is always founded on ethical principles.

“It is in an early stage and there’s going to be a lot of lobbying before it becomes law, but it’s essentially saying AI needs to be developed ethically with human oversight from development through to testing and then continuously monitored once it’s deployed,” he says.

“The law’s there to protect against drift and bias. I think it will become like aviation security. Everything will have to be documented at every stage with AI developers having to account for what they did, what decisions they made. So if there’s ever a problem, you can go back and find the bit of code responsible; just like a faulty part that caused a problem with an aeroplane. This is what makes a complex piece of engineering, such as an aeroplane, so safe and what we need for high-risk AI systems.” 

It is highly likely the Act will become a standard-bearer for ethical AI in much the same way the GDPR has set the benchmark for data privacy. It is worth noting, though, that as with any proposed EU law, it will likely take several years to be voted through and then ratified by each member state. This means it will lag well behind the current fast pace of AI implementation, which is trending towards machines teaching themselves new models without human intervention.

However, shortly after the outline for the AI Act was published, the International Organization for Standardization (ISO) launched the early stages of what will become a framework for ethical AI development within a shorter time frame. So, whether it is an oath, a product label or an ISO number, end-users will soon be able to pick an AI developer based on their adherence to ethical guidelines. Within a handful of years, the AI Act will likely become law and customer choice will be backed up by large fines for companies that do not toe the line on ethical, transparent practices. 

The troubling examples of bias that regularly hit the headlines are now firmly set on a path from being considered unacceptable to finally being illegal.

Key takeaways

  • Black box AI – neither understandable nor explainable – presents an ethical challenge. If we can’t understand why an AI is making a suggestion, we can’t tell if it’s biased or exhibiting signs of drift. Explainable AI offers the opportunity for AI to be transparent, sustainable and ethical.
  • Organisations are responsible for ensuring their AI is explainable and so can be both ethical and sustainable. They will soon be able to demonstrate this through an ISO framework, which will certify development is ethical and undertaken with constant human oversight.
  • Frameworks and guidelines will guard against bias by ensuring AI is both understandable and explainable. We are already seeing this with the EU’s AI Act which could become law in the latter half of 2024.

This article originally appeared in Thread, Edition 2. Thread is Kin + Carta’s quarterly magazine that cuts through the complexity of digital transformation, making sustainable change real, achievable and attainable.
