According to Hollywood, we are all just one short step away from living in a world run by computers. Artificial intelligence (AI) is advancing rapidly into territory that was previously in the realm of science fiction. Some see this future as a welcome utopia while others live in fear of life with Terminator- or Matrix-style outcomes. No matter which outcome you believe is most likely, the truth probably lies somewhere in between.
You see, the biggest problem with computers is also their greatest advantage – they are programmed by humans.
One way many companies hope to use AI is to make hiring more diverse and less biased. A recent Deloitte survey found that more than 30 percent of respondents were already using some form of AI in their recruitment and hiring process.
While this seems like a boon for gender and racial diversity in general, there are still some problems.
Discrimination in the workplace
Removing bias in the workplace is a complicated issue. In part, this is because a person's perception of discrimination is largely determined by their own subjective experiences.
According to studies conducted by the Pew Research Center, an individual's perception and experience of discrimination can be influenced by their gender, race, and age. These studies demonstrate that even when people broadly agree that discrimination is occurring, the ways in which it is defined can be dramatically different.
Further complicating the issue for humans is the environmental context of the potential discrimination. A recent study showed that women working in male-dominated workplaces reported higher rates of gender-based discrimination.
For humans, defining and addressing discrimination involves so many variables that, even with the best of intentions, it is difficult to get right.
The issue becomes even more complicated when the teams writing the code that governs hiring AI are made up primarily of white men. Even with the best of intentions, the subjective experiences of these developers may make it difficult for them to create products that are free from their own unconscious biases.
Joy Buolamwini brought a lot of attention to unintended bias, which can be perpetrated rapidly via software and AI. In her TED talk she explained, "Algorithmic bias, like human bias, results in unfairness. However, algorithms, like viruses, can spread bias on a massive scale at a rapid pace. Algorithmic bias can also lead to exclusionary experiences and discriminatory practices."
In her work as a graduate student at MIT, she discovered that facial recognition software was unable to recognize dark-skinned faces as effectively as those of lighter-skinned individuals.
She expanded her testing to include AI-powered systems from larger companies, including IBM and Microsoft, and found that while those systems were adept at identifying white male faces, they performed markedly worse on darker-skinned faces. The results were worst when the darker-skinned faces belonged to women, showing two biases at play.
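The kind of audit described above boils down to comparing error rates across demographic groups rather than looking only at overall accuracy. Here is a minimal sketch of that idea in Python; the group names and prediction data are entirely hypothetical, chosen only to reproduce the pattern of disparate error rates.

```python
def group_error_rates(records):
    """records: list of (group, y_true, y_pred) tuples.
    Returns a dict mapping each group to its error rate."""
    totals, errors = {}, {}
    for group, y_true, y_pred in records:
        totals[group] = totals.get(group, 0) + 1
        if y_true != y_pred:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Hypothetical predictions: the classifier does well on one group and
# much worse on another -- exactly the pattern an audit should surface,
# which an aggregate accuracy number (here 80%) would hide.
records = (
    [("group_a", 1, 1)] * 95 + [("group_a", 1, 0)] * 5 +   # 5% error
    [("group_b", 1, 1)] * 65 + [("group_b", 1, 0)] * 35    # 35% error
)
print(group_error_rates(records))  # {'group_a': 0.05, 'group_b': 0.35}
```

The point of breaking results out by group is that a single headline accuracy figure can mask large disparities between subpopulations.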
The results of her work were independently verified by at least one of the companies and then rectified. Once code changes were implemented, the AI performed better when presented with racial and gender diversity.
So, can AI help with hiring bias?
The answer is yes, with a small qualification.
During his speech at the Leverhulme Centre for the Future of Intelligence (CFI), Stephen Hawking said, "Success in creating AI could be the biggest event in the history of our civilization. But it could also be the last – unless we learn how to avoid the risks. Alongside the benefits, AI will bring dangers, like powerful autonomous weapons or new ways for the few to oppress the many."
If those creating the technology are not vigilant in testing and seeking feedback from outside their own gender and racial grouping, they could rapidly evolve AI into an oppressive weapon rather than a tool for liberation.
One way scientists at DeepMind are attempting to reduce human bias in new AI systems is through their framework, the Generative Query Network (GQN). With this approach, researchers hope to reduce the bias that is introduced when humans teach computers how to think.
The new process allows machines to learn in a way similar to humans. They are placed in environments and learn from their own observations of the world they inhabit. This gives the AI the opportunity to learn without a human filter on those observations.
Will this prevent all bias from forming? Probably not. The environments are still created by humans and just as humans develop their own biases based on their personal experiences, it can be expected that the same behavior will naturally occur as AI progresses.
As more companies integrate AI into their hiring pipelines, it is important to test and verify that these tools are genuinely unbiased. Only when this is accomplished can AI truly help eliminate gender and racial bias in the workplace.
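One concrete way to "test and verify" a hiring tool is to compare selection rates across applicant groups. A widely used benchmark here is the EEOC "four-fifths rule": a screening process shows potential adverse impact if any group's selection rate falls below 80 percent of the highest group's rate. The sketch below applies that check to hypothetical screening outcomes; the group labels and numbers are invented for illustration.

```python
def selection_rates(decisions):
    """decisions: list of (group, hired) pairs, hired being a bool.
    Returns a dict mapping each group to its selection rate."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / totals[g] for g in totals}

def passes_four_fifths(decisions):
    """Adverse-impact check: every group's selection rate must be at
    least 80% of the highest group's rate (the 'four-fifths rule')."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return all(r / top >= 0.8 for r in rates.values()), rates

# Hypothetical screening outcomes for two applicant groups
decisions = ([("group_a", True)] * 60 + [("group_a", False)] * 40 +
             [("group_b", True)] * 30 + [("group_b", False)] * 70)
ok, rates = passes_four_fifths(decisions)
print(ok, rates)  # False {'group_a': 0.6, 'group_b': 0.3}
```

Here group_b's 30 percent selection rate is only half of group_a's 60 percent, well below the four-fifths threshold, so the check flags the process for closer review. A check like this says nothing about why the disparity exists, but it gives companies a simple, auditable first test to run on any AI-driven screening stage.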