AI art project exposes questionable selfie app bias

Peering down into the light of a phone screen is now an essential part of most people’s daily routine. But what does it see when it looks back up at you?

An artificial-intelligence art project is revealing the racist and sexist bias of selfie apps and the troubling history of the data sets behind them.

Curated by the artificial-intelligence researcher Kate Crawford and the artist Trevor Paglen, a new exhibition, Training Humans, and its accompanying online app aim to lift the curtain on facial recognition technology. “We want to shed light on what happens when technical systems are trained on problematic training data,” the pair explained.

Training Humans, at the Fondazione Prada in Milan, analyses the use of data sets going all the way back to the 1960s. An entire room in the exhibition is dedicated to exploring their web app, ImageNet Roulette, which matches your selfies to closely resembling images.

The website uses an existing database called ImageNet, an online library of 14 million photos. Originally created in 2009 by scientists at Stanford and Princeton, the database is described by Crawford as “one of the most significant training sets in the history of AI.”

Uploading a selfie to the website might seem like a harmless act. But once uploaded, your selfie is assigned categories that help the software match it.
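To make that labelling step concrete, below is a minimal sketch of what assigning ImageNet categories to a photo can look like, using an off-the-shelf pretrained classifier from the torchvision library. This is an illustration only, not the model behind ImageNet Roulette, and the input file name is a placeholder.

```python
# Minimal sketch: labelling a photo with ImageNet categories using an
# off-the-shelf pretrained classifier (torchvision). This is NOT the model
# used by ImageNet Roulette; it only illustrates the general step of
# mapping an uploaded image onto ImageNet labels.
import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing: resize, crop, normalise.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

img = Image.open("selfie.jpg").convert("RGB")   # hypothetical input file
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

# Print the five most likely ImageNet labels with their confidence scores.
categories = weights.meta["categories"]
values, indices = probs.topk(5)
for p, idx in zip(values, indices):
    print(f"{categories[idx.item()]}: {p.item():.1%}")
```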

“Cheerleader”, “heroine” and “zoo keeper” are amongst the most innocent categories on ImageNet. Yet the database also includes racial slurs and other offensive terms such as “first offender”, “rape suspect” and “spree killer”.

ImageNet also uses problematic socio-historical categories such as “workers” and “leaders”, which can look drastically different across cultures.

“AI classifications of people are rarely made visible to the people being classified,” Paglen and Crawford explained. “ImageNet Roulette provides a glimpse into that process – and shows the ways things can go wrong.”

Guardian technology reporter Julia Carrie Wong was derogatorily labelled “gook, slant-eye” by the database. Speaking about her experience with the app, Wong said: “This is exactly the outcome that Crawford and Paglen were aiming for. ImageNet Roulette is not based on a magical intelligence that shows us who we are; it’s based on a severely flawed dataset labeled by fallible and underpaid humans that shows us its limitations.”

In fact, the categories are assigned by human workers recruited through Amazon’s Mechanical Turk platform. These workers use a lexical database called WordNet to match images to their supposedly associated categories.
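The link between ImageNet and WordNet can be illustrated with a short sketch: each ImageNet category is identified by a WordNet noun synset, encoded as the letter “n” followed by the synset’s zero-padded offset. The snippet below uses the NLTK library, with a couple of category names quoted in this article purely as examples; it is not drawn from the Training Humans project itself.

```python
# Minimal sketch: how ImageNet's category IDs relate to WordNet, using NLTK.
# Each ImageNet category is a WordNet noun synset; its ID is "n" plus the
# synset's zero-padded offset. The words below are examples mentioned in
# this article, not a dump of the actual ImageNet label set.
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # fetch the WordNet data on first run

def imagenet_style_id(synset) -> str:
    """Format a WordNet noun synset as an ImageNet-style ID, e.g. 'n' + offset."""
    return f"{synset.pos()}{synset.offset():08d}"

for word in ["cheerleader", "heroine"]:
    for syn in wn.synsets(word, pos=wn.NOUN):
        print(imagenet_style_id(syn), syn.name(), "-", syn.definition())
```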

Crawford, who is the co-founder of the AI Now Institute at NYU, concluded that “no matter how you construct a system that says this is what a man, what a woman, what a child, what a black person, what an Asian person looks like, you’ve come up with a type of taxonomic organization that is always going to be political. It always has subjectivity to it and a way of seeing built into it.”

Training Humans is open from 12 September 2019 to 24 February 2020.
