Hiring Bias Gone Wrong: Amazon Recruiting Case Study

We may fear that artificial intelligence (AI) is sentient, but it is far from it.

A dystopian sci-fi universe in which computers rise up is quite removed from our present reality. AI is only as smart as the humans who program it, though it is far more efficient at analytical processing.

This may seem hard to believe as we use AI daily:

  • In our professional writing: Grammarly
  • While texting: smartphone sentence completion
  • When driving: parking assistance

In these everyday tools, the AI is easy to overlook. Or consider the spam folder in your email inbox: how does Gmail know what is junk and what is not? AI.

Amazon’s Biased Artificial Intelligence

It only makes sense that AI’s reach should extend further. And Amazon acted upon this in 2015. The basis of their technology was simple and seemingly practical. 

It started with the question: “How do you identify high-fit candidates?” and their answer was, “You look at your existing thriving employees.” 

This approach is the basis for many machine learning problems in the hiring industry, so it seemed like standard protocol. Not quite. Amazon should have assessed whether they were falling prey to the hiring trends of Silicon Valley (then dubbed “Brotopia” by Emily Chang) and tech at large.

In 2015, Amazon was among the tech titans whose workforces were disproportionately male.

What happens when you feed a biased dataset to an algorithm? Scaled bias, mostly. The data they used (resumes of current employees) inadvertently suggested that male candidates were the better picks, instilling hiring bias in their talent acquisition process. 

This pipeline of bad data input resulting in bad data output is commonly referred to as Garbage In, Garbage Out (GIGO).
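To see how GIGO plays out, here is a minimal, hypothetical sketch of a resume screener. This is not Amazon’s actual system: the resumes, labels, and model choice below are invented purely for illustration, using scikit-learn to train a toy logistic regression on four synthetic resumes whose historical hiring labels already skew male.

```python
# A minimal sketch of "Garbage In, Garbage Out" in resume screening.
# All data is synthetic and hypothetical; it only shows how a model trained
# on a skewed workforce can learn a gendered proxy feature.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "training set": the past hires (label 1) skew male, and the
# rejections (label 0) are the resumes containing the token "women's".
resumes = [
    "captain of the chess club, python developer",     # hired
    "java engineer, rowing team member",               # hired
    "python developer, women's chess club captain",    # rejected
    "java engineer, women's college graduate",         # rejected
]
hired = [1, 1, 0, 0]

# Bag-of-words features + logistic regression: a standard, simple screener.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model has no concept of merit; it simply learns that "women" predicted
# rejection in the historical data and weights the token negatively.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"learned weight for 'women': {weights['women']:.2f}")  # negative
```

Nothing here is malicious. The algorithm faithfully reproduces whatever pattern the historical data contains, and that is precisely the problem.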

Can Artificial Intelligence Be Trusted?

In practice, this means that Amazon’s shiny new recruiting tool (read: biased AI) penalized resumes that mentioned the words “Women” or “Women’s,” biasing their hiring process.

Thus, a person on the “Women’s Rugby team” or who went to a “Women’s College” was penalized. 

The penalty was more pronounced if a candidate had multiple affiliations with organizations or universities that included the word “Women’s” in their names. Consequently, male candidates benefited disproportionately from the AI’s flawed training set.
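To get a feel for why multiple affiliations made it worse: in a bag-of-words linear model like the sketch above, a resume’s score is just the sum of a learned weight for every token occurrence, so a penalized token subtracts its weight each time it appears. The weights below are hand-picked and purely illustrative.

```python
# Hypothetical, hand-picked weights for a bag-of-words linear scorer.
# A negative weight is subtracted once per occurrence, so repeated
# "Women's" affiliations compound the penalty.
weights = {"python": 0.8, "engineer": 0.6, "women": -0.9}

def score(resume: str) -> float:
    """Sum the weight of every known token in the resume."""
    tokens = resume.lower().replace("'s", "").split()
    return sum(weights.get(token, 0.0) for token in tokens)

for resume in [
    "python engineer",                                # -> 1.4
    "python engineer women's rugby",                  # -> 0.5
    "women's college python engineer women's rugby",  # -> -0.4
]:
    print(f"{resume!r}: {score(resume):.1f}")
```

The same skills earn a lower score every time the penalized token reappears, which is how one flawed feature can outweigh genuine qualifications.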

Does that mean AI is biased and can’t be trusted? I mean, Amazon couldn’t get it right. Surely this is a lost cause. 

Not at all. This was almost 7 years ago. And a lot happens in 7 years, especially in a fast-paced field like AI. 

Bias-free AI is a possibility. Cangrade holds patent 11429859 for our innovative process of mitigating and removing bias from AI.

Cangrade’s Bias-Free Artificial Intelligence

Our AI is not only ethical, it is also now ADA-compliant. And while most organizations cover only the current list of EEOC-protected groups, we also protect against adverse impact for two more groups: marital status and whether applicants or candidates have children.

Hiring bias and discrimination are rampant, and they aren’t necessarily intentional. When AI’s power goes unchecked, it can scale hiring bias and disproportionately affect minority groups. But there are countless measures we can take to mitigate them; consider adopting responsible AI to build a stronger, more diverse workforce.

However, when AI is designed ethically, it can elevate voices that aren’t typically heard. Cangrade offers a patented, science-backed ethical AI solution.

Contact us today for your demo.