Twitter users say the platform crops out Black faces

Social media giant Twitter said it will test its photo cropping tool after some users complained last week that the feature prefers White faces.

Doctoral student Colin Madland tweeted last week that he had posted a photo of himself alongside a Black colleague, and that Twitter cropped the photo preview to show only his face (Madland is White).

Over the weekend, programmer Tony Arcieri reported a similar experience after posting an image that included both former President Barack Obama and U.S. Senator Mitch McConnell. Arcieri said Twitter’s photo preview tool featured McConnell and cropped out Obama no matter how he altered the original image.

When a user posts a picture, Twitter automatically generates a cropped version of the image so people can view it quickly as they scroll through their timelines. Clicking the image reveals the full picture.
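Twitter has said its preview crops are driven by a neural network trained to predict saliency, the parts of an image a viewer’s eye lands on first. The sketch below is only a rough illustration of that general idea, not Twitter’s implementation: the fixed-height sliding window, the vertical-only crop, and the saliency_score callback are all assumptions made for the example.

from typing import Callable, Tuple

def choose_preview_crop(
    image_height: int,
    crop_height: int,
    saliency_score: Callable[[int], float],
) -> Tuple[int, int]:
    """Slide a fixed-height window down the image and return the (top, bottom)
    rows of the window whose total predicted saliency is highest."""
    best_top, best_score = 0, float("-inf")
    for top in range(image_height - crop_height + 1):
        score = sum(saliency_score(row) for row in range(top, top + crop_height))
        if score > best_score:
            best_top, best_score = top, score
    return best_top, best_top + crop_height

# Toy usage: a saliency map that grows toward the bottom of a 100-row image.
print(choose_preview_crop(100, 40, saliency_score=lambda row: float(row)))
# -> (60, 100): the preview gravitates to wherever the model scores highest.

Whatever the real model looks like, the complaints amount to a claim about this selection step: if the saliency scores run systematically higher for White faces, the highest-scoring window will systematically exclude Black ones.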

Twitter said in a statement that it tested the picture preview algorithm before its launch and didn’t find preferential treatment. Still, “it’s clear that we’ve got more analysis to do,” the company acknowledged.

The dispute over Twitter’s photo preview function is the latest incident involving claims of bias in AI, which is now used across a wide range of industries. Companies like Unilever and Ikea, for example, use AI programs to sort through job applications: employers ask applicants to answer interview questions on video, and an AI tool analyzes the footage, from facial gestures to vocabulary. But critics say the algorithms unfairly evaluate women and applicants of color.

Black and Hispanic borrowers also paid higher interest rates and more in refinancing fees when they applied for home loans through AI-powered websites or apps, a University of California, Berkeley study of 7 million mortgages found.

AI systems aren’t themselves biased, but they end up “reflecting and amplifying historical patterns of discrimination” created by humans, said Sarah Myers West, who studies artificial intelligence bias at New York University.
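A toy sketch, not from the article and deliberately oversimplified, makes West’s point concrete: a “model” that merely learns which group was favored in past decisions, and whose own outputs are then fed back in as new training data, doesn’t just mirror the historical skew, it deepens it with every round.

from collections import Counter

# Hypothetical, mildly skewed historical decisions (a 60/40 split).
decisions = ["group_a"] * 60 + ["group_b"] * 40

for generation in range(3):
    counts = Counter(decisions)
    majority = counts.most_common(1)[0][0]   # the pattern the "model" learns
    share = counts[majority] / sum(counts.values())
    print(f"generation {generation}: majority share {share:.0%}")
    decisions += [majority] * 50             # model output becomes new data

# Prints 60%, then 73%, then 80%: the feedback loop amplifies the imbalance.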

AI systems used for facial recognition have also come under scrutiny as law enforcement agencies have employed them to help identify potential suspects. Facial-recognition software is plagued with inaccuracies, and its error rates are especially high for women and people with darker skin, researchers at MIT have found.