Are human beings teaching artificial intelligence (AI) to discriminate against women? Unfortunately, yes they are. How is that possible? Artificial intelligence is programmed by human beings, often using biased datasets that don’t include accurate data for women, people of color, and non-binary individuals.

For example, in 2019, a woman named Jamie Hansson applied for a credit increase on her Apple Card. The application was denied. But when Ms. Hansson compared her Apple Card credit limit with that of her husband, David, the couple found that David’s limit was 20 times higher, despite the fact that Jamie had a higher credit score than her husband, among other factors in her favor. While Apple insists that it does not base credit decisions on a customer’s gender, other men and women took to Twitter to share their own experiences with apparent gender bias in Apple’s credit assessment processes. In the early days of credit processing, factors like gender and marital status were taken into account, and those factors can still influence AI credit processing algorithms today. AI gender bias like this holds women back and threatens to undo decades of progress for women’s rights. People of color and non-binary individuals are affected by bias in AI, too.

What Causes Gender Bias In AI?


If AI is biased against women, non-binary individuals, and people of color, it’s because the human beings creating the program wrote in their own biases. That’s not to say that the software developers creating AI and machine learning (ML) systems are a bunch of sexist, racist pigs. Developers aren’t necessarily making AI and ML algorithms biased on purpose. But they’re working within a system that is inherently biased towards men – the World Economic Forum (WEF) reports that women hold only 26 percent of AI and data science positions.

That’s not the only problem contributing to AI bias. Developers are also working with biased datasets that represent only a narrow portion of the population. For example, research has found that speech-to-text, text-to-speech, and voice recognition technology has historically failed to perform well for women, because the datasets used to build it consisted largely of white, male voices – so the technology was only accurate for white men. The problem doesn’t just affect voice recognition. An algorithm intended to find the best candidates for a software developer job may be biased in favor of men because men spend more time writing and sharing code online through forums like GitHub – and the fact that many women developers pose as men online to avoid facing gender bias in the field doesn’t help. Developers have struggled to create facial recognition technology that can accurately detect the faces of women, non-binary people, and people of color. And, of course, in the medical field, most research is performed on male bodies, meaning that women don’t receive the same quality of care as men.

Solving Gender (And Other) Bias In AI


What can we do to solve the problem of gender (and other) bias in AI and ML? We can start by using more diverse training samples. Data that contains a truly diverse representation of men, women, non-binary and transgender individuals, and people of color gives AI and ML algorithms the foundation they need to avoid learning bias in the first place. There should be roughly as many samples from women, people of color, and transgender or non-binary people as there are from white cis men. Leading women in AI also believe that hiring more women, non-binary people, transgender people, and people of color to work in AI can help reverse gender bias in the algorithms we depend on, because a more diverse team of developers will be more aware of diversity issues in AI and better equipped to create systems free of bias.
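What does "roughly as many samples" look like in practice? One simple, if simplified, starting point is auditing a training set for how each demographic group is represented before any model is trained. Here is a minimal sketch in Python – the field name `group` and the underrepresentation threshold are illustrative assumptions, not part of any particular pipeline:

```python
from collections import Counter

def representation_report(samples, group_key="group"):
    """Return each demographic group's share of the training samples.

    `samples` is a list of dicts; `group_key` is a hypothetical label
    field recorded during data collection.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Toy dataset, heavily skewed toward one group.
data = (
    [{"group": "male"}] * 80
    + [{"group": "female"}] * 15
    + [{"group": "non-binary"}] * 5
)

shares = representation_report(data)

# A simple guardrail: flag any group whose share falls below half
# of an equal split across all groups.
num_groups = len(shares)
underrepresented = [g for g, s in shares.items() if s < 0.5 / num_groups]

print(shares)
print("Underrepresented:", underrepresented)
```

A report like this won’t fix bias on its own, but it makes the skew visible early, when the cheapest fix – collecting more samples from the flagged groups – is still on the table.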


ML developer teams also need to test the accuracy of their algorithms separately for different groups of people, including women and minorities. If a bias is found, developers can address that unfairness by collecting more training data from marginalized groups. And modern de-biasing strategies go further, penalizing ML systems during training not just for errors on their primary prediction task, but also for producing unfair outcomes for women and minorities.
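The first step described above – testing accuracy per group rather than in aggregate – can be sketched in a few lines of Python. The data here is toy data, and the accuracy-gap metric is just one simple fairness measure a team might track alongside overall accuracy:

```python
def group_accuracy(preds, labels, groups):
    """Accuracy of `preds` against `labels`, broken out by demographic group."""
    totals, correct = {}, {}
    for pred, label, group in zip(preds, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == label)
    return {g: correct[g] / totals[g] for g in totals}

def fairness_gap(acc_by_group):
    """Worst-case accuracy gap between any two groups.

    A model with high overall accuracy can still have a large gap,
    which is exactly the failure mode aggregate metrics hide.
    """
    values = list(acc_by_group.values())
    return max(values) - min(values)

# Toy predictions for a binary classifier, tagged by group.
preds  = [1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 1, 0, 0, 1, 0, 1, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]

acc = group_accuracy(preds, labels, groups)
gap = fairness_gap(acc)

print(acc)
print("Fairness gap:", gap)
```

The de-biasing strategies the paragraph mentions build on the same idea: a gap metric like this is added as a penalty term to the training loss, so the optimizer is pushed to shrink the disparity, not just the overall error.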

Despite all the progress we’ve made towards gender equality over the past several decades, gender bias remains in AI and ML algorithms – and it can have a real effect on how women and minorities are able to live their lives. Solving gender bias in AI will require more diversity in the industry, and more care in collecting training data and labeling that data, but it’s worth it to create a virtual world that’s fair and balanced for everyone.
