Robot racism: AI beauty judges preferred white contestants over those with dark skin

by WorldTribune Staff, September 9, 2016

The creators of the first international beauty contest judged by artificial intelligence were likely left scratching their heads after the robots chose mostly white contestants as the winners.

The robot judges of the Beauty.AI contest were “supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants,” according to a report by the UK’s Guardian.

The contest, created by a “deep learning” group called Youth Laboratories and supported by Microsoft, “relied on large datasets of photos to build an algorithm that assessed beauty.”

About 6,000 people from more than 100 countries submitted photos in the hope that artificial intelligence, supported by complex algorithms, would determine that their faces most closely resembled “human beauty.”

Out of 44 winners, nearly all were white, a handful were Asian, and only one had dark skin. While the majority of contestants were white, many people of color submitted photos, including large groups from India and Africa, the Guardian report said.

Beauty.AI did not build the algorithm to treat light skin as a sign of beauty, but the input data effectively led the robot judges to reach that conclusion, the contest’s creators said.
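The mechanism the creators describe — bias entering through the training data rather than the algorithm's rules — can be illustrated with a toy sketch. This is not Beauty.AI's actual code; the dataset, group labels, and frequency-based "model" below are hypothetical, chosen only to show how a model trained on skewed labels reproduces that skew:

```python
# Toy illustration (hypothetical data, not Beauty.AI's system): a naive
# model that "learns" beauty purely from label frequencies in its
# training set. If the positive labels skew toward one group, so do
# the model's scores -- bias in, bias out.
from collections import Counter

# Hypothetical training set of (skin_tone, labeled_attractive) pairs,
# with positive labels heavily concentrated on light-skinned faces.
training_data = (
    [("light", True)] * 80 + [("light", False)] * 20
    + [("dark", True)] * 5 + [("dark", False)] * 15
)

def learn_scores(data):
    """Score each group by the fraction of its examples labeled attractive."""
    positives = Counter()
    totals = Counter()
    for tone, attractive in data:
        totals[tone] += 1
        if attractive:
            positives[tone] += 1
    return {tone: positives[tone] / totals[tone] for tone in totals}

scores = learn_scores(training_data)
# No rule in learn_scores() mentions skin tone as a beauty criterion,
# yet the learned scores mirror the imbalance of the input data.
print(scores)
```

Nothing in the scoring function itself treats skin tone as a sign of beauty; the disparity comes entirely from the distribution of the labels it was trained on, which is the dynamic the contest's creators point to.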

“The idea that you could come up with a culturally neutral, racially neutral conception of beauty is simply mind-boggling,” said Bernard Harcourt, Columbia University professor of law and political science.

The case is a reminder that “humans are really doing the thinking, even when it’s couched as algorithms and we think it’s neutral and scientific,” he said.

The beauty contest’s results and the ensuing criticism have “sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results,” the Guardian report said.

The Guardian cited Microsoft’s release of the “millennial” chatbot named Tay in March. “It quickly began using racist language and promoting neo-Nazi views on Twitter. And after Facebook eliminated human editors who had curated ‘trending’ news stories last month, the algorithm immediately promoted fake and vulgar stories on news feeds.”
