Show Notes
“Alexa, play Whitney Houston as loud as possible.”
Voice assistants are a great way to demonstrate how an algorithm works. In its simplest form, an algorithm is just a sequence of steps designed to accomplish a task.
Alexa uses a voice recognition algorithm to understand that I want music, that I want that music to be Whitney Houston’s music, and that I want it played at maximum volume. It moves through a carefully designed sequence of rules and arrives at “I Wanna Dance with Somebody” playing loud enough to wake up my neighbors. As requested.
So let’s say, hypothetically, that’s how I start every Friday morning. Except, this Friday, I just say, “Alexa, play some music.” Alexa will then be more likely to play Whitney Houston, or something like it, because it’s learned my preferences and can now better predict what I want to hear.
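To make the “sequence of steps” idea concrete, here’s a toy sketch in Python. It’s purely illustrative, and to be clear, it is not how Alexa actually works: a fixed set of rules handles the request, and a simple play count stands in for the learned preferences that kick in when no artist is named.

```python
# Toy rule-based "assistant" (illustrative only; not Alexa's actual logic).
# A fixed sequence of steps handles the request; a simple play count stands in
# for learned preferences when the request doesn't name an artist.

from collections import Counter

play_history = Counter()  # artist -> number of past requests

def handle_request(text: str) -> str:
    text = text.lower()

    # Step 1: recognize the intent.
    if "play" not in text:
        return "Sorry, I didn't catch that."

    # Step 2: extract the artist, if one was named.
    artist = "whitney houston" if "whitney houston" in text else None

    # Step 3: no artist named? Fall back to the most-requested one so far.
    if artist is None and play_history:
        artist = play_history.most_common(1)[0][0]
    artist = artist or "something random"

    # Step 4: extract the volume.
    volume = "max" if "loud" in text else "normal"

    play_history[artist] += 1
    return f"Playing {artist} at {volume} volume."

print(handle_request("Alexa, play Whitney Houston as loud as possible."))
print(handle_request("Alexa, play some music."))  # falls back to the learned favorite
```

The first request walks the rules exactly as written; the second shows the small “learning” twist, since with no artist named, the sketch falls back to whatever it has been asked for most often.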
(00:48) The Pervasiveness of Algorithms
That’s just one example of how algorithms, machine learning, and AI are used in everyday life. It’s also a fairly harmless example. Which is not always the case. Algorithms are used to predict the things you might buy, the fastest route to the grocery store, your qualifications for a job, how likely you are to pay back a loan, which Pokemon you are based on your grocery list, and more.
But, whatever their purpose, algorithms all have one thing in common: they’re designed by people.
In case you haven’t caught a headline for the last 5,000 years or so, people are far from perfect. So when the stuff that goes into these algorithms is designed by humans and modeled after human behavior, the output can be just as flawed. Bias in the form of racism, sexism, and other discrimination becomes solidified in code, embedded into everyday products, and affects people’s lives in very real ways.
So today we’re going to shed light on the dangers of bias in AI, why it’s so hard to fix, and what we can do to overcome it and help create more representative, equitable, and accountable AI.
(02:10) The Building Blocks of AI
First a little algorithm and AI 101.
Let’s say an algorithm is a building. Data points and lines of code are like brick, mortar, and concrete–raw material used in different ways for different purposes. Some become apartment buildings. Some become museums. And, thankfully, some become Wendy’s restaurants.
Artificial intelligence, then, is sort of like a city–a collection of different buildings, all designed to interact, depend on, and benefit from one another. Today–we’re going to talk a lot about algorithms, the buildings designed by people, which can accomplish extraordinary things–but can also cause harm in all sorts of ways.
Deena: “When you would go wash your hands and you put your hand under the sink, would it work automatically?”
Maxx: Yes, typically, yeah.
Deena McKay, a delivery consultant here at Kin + Carta, was talking with our producer Maxx (who is white). Maxx thought Deena might just be checking up on his COVID hygiene, but she was actually illustrating just how widespread this issue is, even with a fairly low-tech example:
Deena: “So, me being a person of color, it doesn't work automatically. Sometimes I have to move my hand around. Or sometimes I have to maybe even go to an entirely different sink because of the way that these things were created. Was it with a diverse thought? And sometimes people who are Brown/Black minorities, our hands don't automatically get recognized, even just for washing our hands, which is crazy because we obviously need to wash our hands.”
Yes, we do. And with that type of fundamental failure, it doesn’t take much to imagine how it could lead to much more severe consequences. As Deena explained, "If you have that concept of we can barely wash our hands, imagine what would happen if it was a self-driving car, and it didn't recognize me walking across the street. It's going to hit me.”
Deena is also the host of another podcast that we highly encourage you to check out called Black Tech Unplugged. It’s an amazing show where Deena talks with other Black people currently working in tech about how they got started, and encourages other people of color to pursue careers in the industry.
(04:12) Joy Buolamwini – The Coded Gaze
If you’ve heard anything recently about racial bias in AI, you may have heard about the remarkable work of Joy Buolamwini. In her own words, Joy is a poet of code who uses art and research to illuminate the social implications of artificial intelligence. Joy was working at the MIT Media Lab when she made a startling discovery. Joy explains, via a talk at the 2019 World Economic Forum:
“I was working on a project that used computer vision, didn't work on my face, until I did something. I pulled out a white mask, and then I was detected.”
In the talk, Joy shows a video of herself sitting in front of a computer vision system. In this system, white male faces are recognized immediately, but when she sits down, nothing–until she puts on an expressionless, seemingly plastic white mask. Joy set out to determine why this was happening, to uncover the biases within widely used facial recognition systems, and help build solutions to correct the issue.
Joy’s story is the subject of a new documentary called Coded Bias, which premiered at the Sundance Film Festival earlier this year. Joy is also the founder of the Algorithmic Justice League, an organization aiming to illuminate the social implications and dangers of artificial intelligence. As Joy says, if Black faces are harder for AI to detect accurately, it means there’s a much higher chance they’ll be misidentified.
(05:32) Wrongfully Accused
Take the story of Robert Williams, a man from Detroit wrongfully arrested at his home for a crime he didn’t commit. In a piece produced by the ACLU, Robert describes his conversation with police after he was first detained.
“The detective turns over a picture and says, 'That’s not you?' I look, and I say 'No, that’s not me.' He turns another paper over and says 'I guess that’s not you either.' I pick that paper up and hold it next to my face, and I say 'That’s not me. I hope you don’t think all Black people look alike.' And he says, 'The computer says it’s you.'”
It wasn’t.
Although companies including Amazon and IBM have announced they are halting the development of facial recognition programs for police use, Robert’s story is, unfortunately, becoming all too common.
However, the dangers of bias in AI aren’t always so easily seen and demonstrated. They’re not always as tangible as a computer seeing a white face, but not a Black face, or a soap dispenser recognizing white hands more than Black hands.
One study found that a language processing algorithm was more likely to rate white names as “more pleasant” than Black names.
In 2016, an algorithm judged a virtual beauty contest with roughly 6,000 entrants from around the world–and almost exclusively chose white finalists.
There are well-documented cases in healthcare, financial services, the justice system–the list goes on.
(06:58) How does bias in AI happen?
So how do these things happen?
The most obvious place to start is with the data being fed into an algorithm.
(07:05) Bad Data
For image recognition models–the algorithms used in things like soap dispensers or facial recognition software–if the model is trained on data made up mostly of white faces or white hands, it’s going to learn to recognize white skin more easily. Because many of these systems were trained on such a disproportionate sample of white men, Joy gave the phenomenon a name:
“I ran into a problem, a problem I call the pale male data issue. So, in machine learning, which includes techniques being used for computer vision–hence finding the pattern of the face–data is destiny. And right now if we look at many of the training sets or even the benchmarks by which we judge progress, we find that there's an over-representation of men with 75 percent male for this National Benchmark from the US government, and 80 percent lighter-skinned individuals. So pale male data sets are destined to fail the rest of the world, which is why we have to be intentional about being inclusive.” – Joy Buolamwini
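To see why “data is destiny,” here’s a hedged sketch using synthetic numbers. The features, proportions, and groups are all invented, and real face detection relies on deep models trained on images rather than a two-feature classifier, but it illustrates the point Joy makes: a detector fit to a sample dominated by one group can work well for that group while failing the under-represented one.

```python
# Synthetic illustration of skewed training data (all numbers invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Two made-up "image feature" dimensions. Faces from the two groups sit in
# different regions of feature space; non-faces sit in between.
non_faces     = rng.normal(loc=(0, 0),   scale=1.0, size=(500, 2))
group_a_faces = rng.normal(loc=(3, 3),   scale=1.0, size=(400, 2))   # ~80% of training faces
group_b_faces = rng.normal(loc=(-3, -3), scale=1.0, size=(100, 2))   # ~20% of training faces

X = np.vstack([non_faces, group_a_faces, group_b_faces])
y = np.concatenate([np.zeros(500), np.ones(400), np.ones(100)])  # 1 = "face detected"

detector = LogisticRegression().fit(X, y)

# Fresh test faces from each group: how often does the detector actually see them?
test_a = rng.normal(loc=(3, 3),   scale=1.0, size=(1000, 2))
test_b = rng.normal(loc=(-3, -3), scale=1.0, size=(1000, 2))
print("Detection rate, group A faces:", detector.predict(test_a).mean())
print("Detection rate, group B faces:", detector.predict(test_b).mean())
```

On this toy data, the majority group is detected nearly every time and the minority group almost never, not because anyone wrote a biased rule, but because the skewed training sample taught the model that one kind of face is what a face looks like.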
In 2015, Amazon experienced a similar situation. The company had built an experimental AI model to help streamline its search for top talent. The tool took thousands of candidates' resumes and would quickly identify top prospects, saving hiring managers countless hours. Yet even though the algorithm was designed to weigh gender neutrally, Amazon found it was heavily favoring men.
Why? The benchmark for top talent was developed by observing patterns in resumes Amazon had received over the previous 10 years, which belonged to, you guessed it, mostly men. The system learned to penalize resumes containing words like “women’s” as in “women’s college” or “women’s debate team” because they weren’t phrases likely to show up in previous applicants' resumes.
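Here’s a hypothetical, heavily simplified reconstruction of that failure mode. The resumes, labels, and model below are invented for illustration and are not Amazon’s actual system or data; the sketch just trains a text model on a made-up set of “historical hiring decisions” that already skewed male, then inspects the weight it learned for a gendered word.

```python
# Hypothetical mini-example of a resume screener learning from skewed history.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented history: 1 = advanced to interview, 0 = rejected. The outcomes
# reflect a past that skewed male, which is exactly what the model will learn.
resumes = [
    "captain chess club software engineering intern",
    "software engineering intern hackathon winner",
    "captain womens debate team software engineering intern",
    "womens college graduate software engineering intern",
    "hackathon winner chess club software engineering",
    "womens chess club software engineering intern",
]
historical_decision = [1, 1, 0, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, historical_decision)

# The learned weight for "womens" comes out negative: the model has turned the
# skew in its training history into a penalty for resumes that mention it.
idx = vectorizer.vocabulary_["womens"]
print("Learned weight for 'womens':", model.coef_[0][idx])
```

No one tells the model that gender matters; it simply finds that the word is a reliable signal in the biased history it was given, which is the same dynamic reported in Amazon’s case.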
(08:40) Diversity of Perspective
It really comes down to the fact that you need more multidisciplinary people making these decisions. "Twitter was invented by a bunch of white guys at a table, and they never thought of any problems that wouldn't affect them as white guys." – Max Young
That's Max Young, a UX designer from the Kin + Carta UX team. Max says that often the simplest place to start is by looking at who is in the room. Deena agrees: "I would always like to see more people who look like me, in the workplace, doing tech work."