The questions surrounding bias in artificial intelligence are urgent, and the answers lie in diversifying tech workforces, researchers say.
A yearlong look at the issue, which included poring over 150 previous studies, found that “bias in AI systems reflects historical patterns of discrimination,” says a new report being released Wednesday. The report finds that such technology is being created by large tech companies and a few universities, mostly by wealthy white men who benefit from such systems, which can harm people of color, gender minorities and other underrepresented groups.
Only 15 percent of AI research employees at Facebook and 10 percent at Google are women, according to researchers at the AI Now Institute at New York University, which published the report. The overall share of black workers at tech companies such as Google, Facebook and Microsoft ranges from 2.5 percent to 4 percent. Taken together, the researchers say, that constitutes a crisis, especially as AI is used in predictive policing and to determine loan and insurance approvals, who gets interviewed for a job, who gets bail, and more.
The report also expresses “deep concern” about, and urges a rethinking of, AI systems that classify, detect and predict race and gender. Among the examples it cites: Uber’s facial recognition system, which is made by Microsoft, failed to recognize a transgender driver’s face last year, locking her out of the ride-hailing app and causing her to miss three days of work. A few years ago, Google Photos identified black people as gorillas, the report noted.
“There is an intersection between discriminatory workforces and discriminatory technology,” Sarah Myers West, a postdoctoral researcher at AI Now and lead author of the study, said on a Tuesday call with reporters. She also noted that there needs to be “a greater level of transparency” around AI, which she said is “largely obscured by trade secrets.”
It’s important to “look beyond technical fixes for social problems,” said Meredith Whittaker, co-founder and co-director of AI Now. She is also founder and lead of Google’s Open Research group.
The researchers recommend changes that tech companies have long been pushed to make: fix wage and opportunity inequality by race and gender, provide more transparency about hiring practices and wages, and bring more members of underrepresented groups into positions of leadership.
“Existing methods have failed to contend with uneven distribution of power,” said Kate Crawford, co-founder and co-director of AI Now and research professor at New York University, on Tuesday. Crawford, who also is a principal researcher at Microsoft Research, added that “fixing the so-called pipeline problem (in tech) is not going to fix AI’s diversity problem.”
Crawford said focusing on the pipeline, or the supply of available tech workers, ignores deeper issues: workplace culture, harassment, exclusionary hiring practices and tokenization, which can drive employees to leave companies or avoid the AI field altogether. The report cited instances of harassment, discrimination and the downplaying of diversity problems at big tech companies investing in AI, including Microsoft, Uber, Apple, Google, Facebook and Tesla.
Crawford also addressed internal backlash against the push to diversify tech workplaces. She referred to the memo by former Google engineer James Damore, who attributed the low numbers of women in tech to biological gender differences.
“It’s going to be important that people making those arguments aren’t making AI systems,” she said.
Is there incentive for tech companies to take the researchers’ recommendations to heart?
“Frankly, we’ve now reached a moment of serious reckoning,” Crawford said. “The call for accountability is coming from in the house.” She pointed to recent worker protests at Microsoft and Google over issues of harassment and discrimination, saying how the companies deal with those issues will determine how they retain and attract talent.
Whittaker mentioned other types of pressure, including the introduction of legislation to end forced arbitration in workplaces, which was helped by efforts of Google employees.
The researchers extended similar calls for change to the academic world, because research on bias in AI would also benefit from different perspectives and should keep in mind intersectionality, the idea that people can face discrimination based on more than one factor. They called for more transparency, plus rigorous testing of AI systems, including pre-release trials, independent auditing and continued monitoring.
The AI Now Institute is a nonprofit organization that studies issues surrounding artificial intelligence. Its partners and funders include Google, Microsoft, the Ford Foundation, MacArthur Foundation and the ACLU.