The lawbreakers look understandably unhappy in their police mug shots, while the contrasting shots of non-criminals show happy-looking professionals taken from online sources, typically corporate websites. Far from predicting criminality, the algorithm had simply been trained to tell the difference between police mug shots and corporate headshots.
“When I show the pictures to my students, they can tell straight away who the criminal is and who isn’t – it’s very obvious to any human,” de Montjoye explains. “AI companies need to be regulated so they don’t make ridiculous claims about what their technology can do. It serves as a warning about developing systems doing something seemingly ‘magical’ and a very clear example of why AI needs to be robust, ethical and, for that, it needs strong regulation.”
In finance, there have been so many examples of AI tools discriminating against non-Caucasian loan and mortgage applicants that the Bank of England’s Deputy Governor, David Ramsden, was concerned enough to call an industry meeting at the end of last year. The attending financial services companies were warned that the UK’s monetary regulator expected responsible lenders to ensure there was no bias in their decisions. To do this, the regulator insisted, any rollout of AI must be ethically developed and trained with human oversight.
Bias in, bias out
“AI is littered with examples of bias and racism, and ageism, and ableism, and all of the ‘isms’, because they’re trained on us, on our previous data, so they’re like a big mirror of society,” he says.
“There was a very well-known bank in America that had an automated tool for approving people for mortgages, which was rejecting people from minorities, but if you reapplied for the same mortgage using a name like Pete Smith, you would get straight through. The problem wasn’t the algorithm, the problem was that bank managers had found a way of codifying their racism in the data that informed the algorithm.
“The better headline will always be ‘bank’s AI is racist’ but actually it’s more a case of racist bank managers codifying their unacceptable past behaviour.”
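The anecdote above can be sketched in a few lines of code. This is a deliberately crude, hypothetical illustration (the names and decisions are invented): a model that simply “learns” approval rates from past decisions will reproduce whatever bias those decisions encoded, including bias keyed on applicant names.

```python
# Hypothetical illustration of "bias in, bias out": a model that learns
# approval rates from historical (biased) decisions reproduces those
# decisions exactly, bias included.
from collections import defaultdict

def train(decisions):
    """'Learn' an approval rate per applicant name from past decisions."""
    totals = defaultdict(lambda: [0, 0])  # name -> [approved, seen]
    for name, approved in decisions:
        totals[name][0] += int(approved)
        totals[name][1] += 1
    return {name: a / seen for name, (a, seen) in totals.items()}

# Historical data encoding the managers' bias, not the applicants' merit.
history = [("Pete Smith", True), ("Pete Smith", True),
           ("Amara Okafor", False), ("Amara Okafor", False)]
model = train(history)
print(model["Pete Smith"])    # 1.0 - always approved
print(model["Amara Okafor"])  # 0.0 - always rejected
```

The algorithm itself contains no explicit rule about race or names; the discrimination lives entirely in the training data, which is exactly the point being made.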
Why AI’s ‘dirty secret’ matters
“AI has a dirty secret – it can’t predict well outside of historical observation,” he says.
“The classic example of where AI can go wrong is demand forecasting. If a company is opening up a new market, they will populate the platform with lookalike and surrogate customers to reflect what they see in other markets. This leads to both data omission and prediction errors. The model is not biased, it can only work on history and if data is wrong or does not match the variables in a new market, it’s going to repeat what it has seen before.”
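A toy sketch makes the point concrete (all figures here are invented for illustration): a forecaster fitted on one market’s history can only replay that history, so in a new market with different demand it simply repeats what it has seen before.

```python
# Toy illustration (hypothetical data): a forecaster fitted on one
# market's history replays that history in a new market.

def fit_mean_forecaster(history):
    """Return a forecaster that always predicts the historical mean."""
    mean = sum(history) / len(history)
    return lambda: mean

# Established market: weekly demand around 100 units.
market_a = [98, 102, 101, 99, 100]
forecast = fit_mean_forecaster(market_a)

# New market: actual demand turns out to be around 40 units.
market_b_actual = [41, 39, 42, 38, 40]
errors = [abs(forecast() - actual) for actual in market_b_actual]
print(sum(errors) / len(errors))  # mean absolute error of 60.0 units
```

The model is not “biased” in any moral sense; it is doing exactly what it was trained to do, on data that no longer matches reality.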
This is bad news for the company concerned, but AI’s mistakes can have far more serious implications for mankind if algorithms are left to run without checks on whether they remain accurate or have ‘drifted’ into giving skewed answers. It is for this reason that Turner’s guiding principle when reviewing AI results is to check constantly for accuracy and ethical drift.
If left unchecked, this drift can cause serious problems for organisations relying on an algorithm to make a decision that is important in a person’s life, whether that be a loan, a college application or a medical diagnosis.
How do we make AI ethical?
There is wide consensus among insiders and observers alike that, with AI being used to make big decisions that affect people’s lives, unintended bias needs to be removed. This means AI needs to be transparent, and that means it has to be explainable, so the public can know how a decision was made and with what likely range of accuracy. “If it’s explainable, it has a shot at being ethical, and therefore sustainable,” sums up Turner.
Part of this ethical drive must be for companies to ensure AI only works with high-quality data with informed permission given by people whenever their identifiable information is gathered and processed. These privacy and ethical considerations are leading some experts, such as AI author Trainor, to back a scheme through which software companies would take responsibility for their algorithms and the decisions they make, as well as the privacy protections they have in place.
“It may seem a little bit twee but one way to get ethical AI might be to ask software companies to take a kind of Hippocratic Oath,” Trainor says.
“In the same way that all doctors swear an oath, why can’t we create that kind of ethics framework? It would be a little bit like a B Corp stamp with companies committing to testing their algorithms and services to ensure they will not cause harm. You would effectively be putting it back on the consumer to choose, like you do with products marked with something like a Fairtrade label.”
Regulation, oaths and kitemarks
“It is in an early stage and there’s going to be a lot of lobbying before it becomes law, but it’s essentially saying AI needs to be developed ethically with human oversight from development through to testing and then continuously monitored once it’s deployed,” he says.
It is highly likely the Act will become a standard-bearer for ethical AI in much the same way the GDPR has set the benchmark for data privacy. It is worth noting, though, that as with any proposed EU law, it will likely take several years to be voted through and then ratified by each member state. This means it will fall way behind the current fast pace of AI implementation, which is trending towards machines teaching themselves new models, without human interaction.
The troubling examples of bias that regularly hit the headlines are now firmly set on a path from being considered unacceptable to finally being illegal.
- Black box AI – neither understandable nor explainable – presents an ethical challenge. If we can’t understand why an AI is making a suggestion, we can’t tell if it’s biased or exhibiting signs of drift. Explainable AI offers the opportunity for AI to be transparent, sustainable and ethical.
- Organisations are responsible for ensuring their AI is explainable and so can be both ethical and sustainable. They will soon be able to demonstrate this through an ISO framework, which will certify development is ethical and undertaken with constant human oversight.
- Frameworks and guidelines will guard against bias by ensuring AI is both understandable and explainable. We are already seeing this with the EU’s AI Act, which could become law in the latter half of 2024.