The San Francisco tech firm said this would be “the industry’s first algorithmic bias bounty competition,” with prizes up to $3,500.
The competition is based on the “bug bounty” programmes some websites and platforms offer to find security holes and vulnerabilities, according to Twitter executives Rumman Chowdhury and Jutta Williams.
“Finding bias in machine learning models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public,” Chowdhury and Williams wrote in a blog post. “We want to change that.”
They said the hacker bounty model offers promise in finding algorithmic bias.
“We’re inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public,” they wrote. “We want to cultivate a similar community… for proactive and collective identification of algorithmic harms.”
The move comes amid growing concern about automated algorithmic systems, which, despite efforts to make them neutral, can incorporate racial or other forms of bias.
Twitter, which earlier this year launched an algorithmic fairness initiative, said in May it was scrapping an automated image-cropping system after its review found bias in the algorithm controlling the function.
The messaging platform said it found the algorithm delivered “unequal treatment based on demographic differences,” with white people and males favored over Black people and females, and “objectification” bias that focused on a woman’s chest or legs, described as “male gaze.”