Race, Responsibility & Hiring Algorithms: Our Stance on Bias in AI

Frankenstein’s monster or Dr. Frankenstein?

Algorithms provide a cloak of objectivity – but that doesn’t make them infallible. Bias in AI does exist. Just like humans, algorithms may rely on stereotypes that reinforce discriminatory hiring practices.

Why is this? Because that’s what they’re designed to do. The backbone of many of these potentially biased algorithms is something data scientists call “satisficing.” Satisficing is a decision-making strategy that aims for a satisfactory or adequate result rather than the optimal solution.

So why does this lead to stereotyping? Stereotypes are shortcuts that help you draw conclusions more quickly, sacrificing accuracy for speed.
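
To make this concrete, here is a minimal sketch in Python of a satisficing rule next to an optimizing one. The candidate pool, scores, and threshold are hypothetical and are not any vendor’s actual screening logic:

```python
# A minimal sketch contrasting "satisficing" with optimizing when screening
# candidates. The pool, scores, and threshold are hypothetical examples.

def satisfice(candidates, threshold):
    """Return the first candidate whose score clears the threshold, or None."""
    for candidate in candidates:
        if candidate["score"] >= threshold:
            return candidate  # "good enough" -- stop searching
    return None

def optimize(candidates):
    """Return the single highest-scoring candidate, or None if the pool is empty."""
    return max(candidates, key=lambda c: c["score"], default=None)

pool = [
    {"name": "Avery", "score": 0.72},
    {"name": "Blake", "score": 0.91},
    {"name": "Casey", "score": 0.85},
]

print(satisfice(pool, threshold=0.70))  # Avery: adequate, found quickly
print(optimize(pool))                   # Blake: the best match, found by checking everyone
```

The satisficing rule returns the first candidate who clears the bar, so it is faster but settles for adequate rather than best. Whatever shortcut feeds that bar is exactly where stereotype-style judgments can creep in.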

AI is not inherently biased; humans devise and perpetuate the discriminatory generalizations that these systems employ. This means that if we want algorithms to avoid morally reprehensible decisions, it is up to the programmers to set a moral code. Blaming machine learning instead of the coder is like blaming Frankenstein’s monster instead of Dr. Frankenstein himself.

Watch the video below to see how an algorithm, if not designed carefully, can be quite literally blind:

https://youtu.be/KB9sI9rY3cA

What will it take to get this right?

Science isn’t just about innovation; it’s about responsible innovation.

Design AI thoughtfully, and you’ll have a potent weapon to fight discrimination. Research has shown that even relatively simple algorithms can outperform humans by roughly 50% when it comes to spotting genuinely qualified candidates.

Yes, technological change is anxiety-provoking, and rightfully so. But the simple truth is this: the status quo is unacceptable. In today’s hiring market, a black candidate is 36% less likely to get a call-back for an interview than an equally qualified white counterpart – a figure that has remained unchanged since the 1980s.

Reprogramming a human to avoid biases in decision-making is unlikely to work; studies show that diversity training is largely counterproductive. But programming an algorithm to avoid bias? That is workable at scale and proven effective. After all, that’s what machine learning was built for.
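
What might programming an algorithm to avoid bias look like in practice? One common starting point is auditing selection rates across demographic groups, in the spirit of the EEOC’s four-fifths rule. The sketch below uses hypothetical counts and illustrates the idea only; it is not Cangrade’s production fairness check:

```python
# A hedged sketch of a selection-rate audit in the spirit of the EEOC
# "four-fifths rule". The counts are hypothetical, and this is not
# Cangrade's production fairness check.

def selection_rate(selected, applied):
    """Fraction of applicants from a group who were advanced by the model."""
    return selected / applied

rates = {
    "group_a": selection_rate(selected=45, applied=100),
    "group_b": selection_rate(selected=30, applied=100),
}

reference = max(rates.values())  # compare every group to the highest selection rate

for group, rate in rates.items():
    impact_ratio = rate / reference
    flag = "review for potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

An impact ratio below 0.8 does not prove discrimination on its own, but it is a widely used signal that a model needs closer review before it screens real candidates.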

Where does Cangrade stand on avoiding bias in AI?

At Cangrade, our goal is to harness this power to create a future where hiring is based on merit, not personal bias. And, as we do this, it will be our responsibility to ensure that our algorithms screen candidates in ways consistent with our commitment to workplace equality.

The data scientists crafting these algorithms hold the future of hiring in their hands. Choose carefully when deciding who will craft yours.