At a time charged with racial awakening, a Twitter experiment that has gone viral over the past couple of days once again sheds light on how racism is prevalent not only in our social structures but also in Artificial Intelligence (AI). Going beyond overt human behaviour and human structures, racial discrimination is also evident in the technology we create.
On Saturday, a user tested for racial discrimination in Twitter’s AI photo tools using two strips of photos. Twitter crops pictures attached to a tweet, showing the entire picture only after a user clicks on it, to keep the tweet concise. Taking two photos—one of Mitch McConnell, a white US senator, and the other of former US president Barack Obama, who is Black—placed on a strip of white, the user tried to see which photo the microblogging platform would display in the tweet. In all his attempts, the user found that the AI displayed McConnell’s photo over Obama’s, even after he changed every secondary feature that could affect the order.
The only time the AI picked Obama was when the photos had a negative filter applied, making the two images essentially similar in tone.
This comes after a white user noticed that Twitter’s AI had cropped out his Black colleague from a photo in which both of them were present, displaying only him. Ironically, the user’s tweets were about racial discrimination on Zoom.
Several other Twitter users tried the same experiment with different photos and got similar results.
Twitter later apologised for its “racist” image-cropping algorithm, saying it had tested the service for bias before deploying it but admitted the testing didn’t go far enough. “Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing,” the company said in a statement. “But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open-source our analysis so others can review and replicate.”
However, this bias in Twitter’s AI affects not only photos of people of different races but also photos of women. A user in 2019 pointed out that Twitter’s preview feature automatically cropped the heads off images of women, but not men. Instead of focusing on the faces, as it does for men, the AI chose to focus on the women’s bodies.
Twitter said in a 2018 blog post that it previously used face detection to figure out how to crop images for previews, but the face-detecting software was prone to errors. The company switched to homing in on what’s known as “saliency” in pictures: the area considered most interesting to a person looking at the overall image. Human bias has unknowingly crept into such artificial systems, and multiple groups of researchers have found that these technologies, which usually rely on artificial intelligence, are prone to reflecting sociological biases.
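To make the mechanism concrete, here is a minimal sketch of saliency-based cropping. This is not Twitter’s production model; it stands in OpenCV’s off-the-shelf spectral-residual saliency detector and simply centres the crop on the highest-scoring pixel. The function name saliency_crop and the file names are illustrative.

```python
# Illustrative sketch of saliency-based cropping (not Twitter's model).
# Requires opencv-contrib-python for the cv2.saliency module.
import cv2
import numpy as np

def saliency_crop(image, crop_w, crop_h):
    """Crop to crop_w x crop_h, centred on the most salient point.
    Assumes the image is at least crop_w x crop_h."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = saliency.computeSaliency(image)
    if not ok:
        # Fall back to a centre crop if saliency detection fails.
        cy, cx = image.shape[0] // 2, image.shape[1] // 2
    else:
        # Use the single most salient pixel as the crop centre.
        cy, cx = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    # Clamp the crop window to the image bounds.
    x0 = int(np.clip(cx - crop_w // 2, 0, image.shape[1] - crop_w))
    y0 = int(np.clip(cy - crop_h // 2, 0, image.shape[0] - crop_h))
    return image[y0:y0 + crop_h, x0:x0 + crop_w]

if __name__ == "__main__":
    img = cv2.imread("photo.jpg")
    preview = saliency_crop(img, 600, 335)  # wide, Twitter-style preview
    cv2.imwrite("preview.jpg", preview)
```

Whatever produces the saliency map, the crop keeps only the highest-scoring region and discards the rest, which is exactly where any skew in what the model rates as “interesting” turns into who gets cropped out of the preview.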
Follow Satviki on Instagram.