It seems that white beauty standards have been passed on from humans to robots, after the AI judges of a beauty pageant rated dark-skinned contestants as less attractive.
Though it is strange to judge beauty – an extremely subjective attribute – with supposedly objective algorithms, the organisers gave it a go. Of the 44 winners, nearly all were white, a handful were Asian, and only one had dark skin, according to the Guardian.
Beauty.AI, the first international beauty contest judged by ‘machines’, was supposed to employ complex algorithms to determine human beauty in terms of objective factors such as facial symmetry and wrinkles.
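Beauty.AI has not published its scoring code, but a 'facial symmetry' factor of the kind described could, in its crudest form, amount to comparing a face-aligned photo with its own mirror image. The sketch below is purely illustrative – the function name and scoring scale are invented for this example, not taken from Beauty.AI.

```python
import numpy as np

def symmetry_score(face: np.ndarray) -> float:
    """Crude symmetry metric for a grayscale, face-aligned image.

    Compares the image with its left-right mirror and returns a value
    in [0, 1], where 1.0 means perfectly symmetric. Illustrative
    only -- not Beauty.AI's actual algorithm.
    """
    mirrored = face[:, ::-1]                          # flip left-right
    diff = np.abs(face.astype(float) - mirrored.astype(float))
    return 1.0 - diff.mean() / 255.0                  # normalise 8-bit pixels

# A perfectly mirror-symmetric image scores 1.0:
img = np.tile(np.array([10, 20, 10], dtype=np.uint8), (3, 1))
print(symmetry_score(img))  # 1.0
```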
Roughly 6,000 people from more than 100 countries submitted photos for the robot, created by Youth Laboratories, to judge.
Although the group did not build the algorithm to treat light skin as a sign of beauty, the input data effectively led the robot judges to reach that conclusion.
Alex Zhavoronkov, Beauty.AI’s chief science officer, said:
If you have not that many people of color within the dataset, then you might actually have biased results.
When you’re training an algorithm to recognize certain patterns … you might not have enough data, or the data might be biased.
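His point is easy to reproduce on toy data. The sketch below is not Beauty.AI's actual pipeline – the features, counts, and labels are invented for illustration – but it shows a standard classifier, trained on a sample in which dark-skinned faces are scarce, learning skin tone as a proxy for the label even though no one programmed it to.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: features = [skin_lightness, facial_symmetry].
# 95 light-skinned faces but only 5 dark-skinned ones -- the kind of
# skew Zhavoronkov describes. In this skewed sample, the human-curated
# "attractive" labels (1) land almost entirely on light-skinned faces.
light = np.column_stack([np.full(95, 1.0), rng.uniform(0.4, 1.0, 95)])
dark  = np.column_stack([np.full(5, 0.0),  rng.uniform(0.4, 1.0, 5)])
X = np.vstack([light, dark])
y = np.concatenate([rng.binomial(1, 0.5, 95),  # light faces: ~half labelled 1
                    np.zeros(5, dtype=int)])   # too few dark faces to learn from

model = LogisticRegression().fit(X, y)

# Two hypothetical contestants with identical symmetry scores:
candidates = np.array([[1.0, 0.9],   # light-skinned
                       [0.0, 0.9]])  # dark-skinned
print(model.predict_proba(candidates)[:, 1])
# The model scores the dark-skinned face lower despite identical
# symmetry: it has learned skin tone as a proxy for the label.
```

Because the handful of dark-skinned examples carries so little signal, the classifier leans on the skin-tone feature that happens to separate the labels in the skewed sample – nobody has to write "prefer light skin" for that preference to emerge.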
The Guardian reports: “The simplest explanation for biased algorithms is that the humans who create them have their own deeply entrenched biases. That means that despite perceptions that algorithms are somehow neutral and uniquely objective, they can often reproduce and amplify existing prejudices.”
The consequences can be serious: a ProPublica investigation found that a computer-based crime-forecasting tool, designed to predict future offenders, was biased against black people.
Last year, Google’s photo app mistakenly labelled photos of black people as gorillas because of an error in its AI software.
Is it coincidence that these AI gaffes never seem to work against white people? Bernard Harcourt, professor of law and political science at Columbia University, said the Beauty.AI results offer ‘the perfect illustration of the problem’.
Harcourt said:
The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling.
Humans are really doing the thinking, even when it’s couched as algorithms and we think it’s neutral and scientific.
Only a few months ago, Microsoft’s Tay chatbot turned incredibly racist on Twitter. Though these bouts of bot bigotry have prompted jokes and mockery, computer science experts and social justice advocates stress that prejudiced algorithms could have devastating effects for people of colour across many industries.