
Race, Responsibility & Hiring Algorithms

Frankenstein’s monster or Dr. Frankenstein?

Algorithms provide a cloak of objectivity – but that doesn’t make them infallible. Just like humans, hiring algorithms may rely on stereotypes that reinforce discriminatory hiring practices.

Why is this? Because that’s what they’re designed to do. The backbone of many of these potentially discriminatory algorithms is something data scientists call “satisficing”: a decision-making strategy that settles for a satisfactory or adequate result rather than the optimal one.

So why does this lead to stereotyping? Because stereotypes are shortcuts too: they help you reach conclusions faster, sacrificing accuracy for the sake of a quick judgment.
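
To make the distinction concrete, here is a minimal sketch (with a made-up candidate pool and threshold, not any real screening system) of what satisficing looks like next to an exhaustive search:

```python
# Hypothetical candidate pool and threshold, for illustration only.
candidates = [
    {"name": "A", "score": 72},
    {"name": "B", "score": 91},
    {"name": "C", "score": 68},
    {"name": "D", "score": 95},
]

def satisfice(pool, threshold=70):
    """Return the first candidate who looks 'good enough'.

    Fast, but the result depends on ordering and on whatever proxy the
    threshold encodes -- exactly where shortcuts (and stereotypes) creep in.
    """
    for candidate in pool:
        if candidate["score"] >= threshold:
            return candidate
    return None

def optimize(pool):
    """Evaluate everyone and return the strongest candidate."""
    return max(pool, key=lambda c: c["score"])

print(satisfice(candidates))  # {'name': 'A', 'score': 72} -- first 'adequate' hit
print(optimize(candidates))   # {'name': 'D', 'score': 95} -- the actual best
```

The satisficing screen stops at the first adequate answer; the exhaustive search pays the full cost of evaluating everyone but finds the genuinely best candidate.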

Algorithms are not inherently biased; the discriminatory generalizations they employ are devised and perpetuated by humans. This means that if we want hiring algorithms to avoid decisions that are morally reprehensible, it is up to our programmers to set a moral code. Blaming the machine instead of the coder is like blaming Frankenstein’s monster instead of Dr. Frankenstein himself.

Watch this video to see how an algorithm, if not designed carefully, can be quite literally blind:

What will it take to get hiring algorithms right?

Science isn’t just about innovation – it’s about responsible innovation.

Design hiring algorithms thoughtfully, and you’ll have a potent weapon to fight discrimination. Research has shown that even relatively simple algorithms can outperform humans by roughly 50% when it comes to spotting genuinely qualified candidates.

Yes, technological change is anxiety-provoking, and rightfully so. But the simple truth is this: the status quo is unacceptable. In today’s hiring market, a black candidate is 36% less likely to get a call-back for an interview than an equally qualified white counterpart – a figure that has remained unchanged since the 1980s.

Reprogramming a human to avoid biases in decision-making is unlikely to work – studies show that diversity training is largely counterproductive. But programming a machine to avoid biases? Entirely workable and demonstrably effective. After all, that’s what machine learning was built for.
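
As one illustration of what “programming a machine to avoid biases” can involve in practice, here is a minimal sketch (using made-up hiring outcomes, not any real model) of an adverse-impact check on selection rates, applying the conventional four-fifths rule of thumb. A designer can run this kind of audit automatically on every revision of an algorithm:

```python
from collections import defaultdict

# Hypothetical hiring outcomes, for illustration only.
outcomes = [
    {"group": "group_a", "hired": True},
    {"group": "group_a", "hired": False},
    {"group": "group_a", "hired": True},
    {"group": "group_b", "hired": False},
    {"group": "group_b", "hired": True},
    {"group": "group_b", "hired": False},
]

def selection_rates(records):
    """Compute the hire rate for each group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hires[r["group"]] += r["hired"]
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest selection rate to the highest.

    A ratio below 0.8 is the conventional red flag for adverse impact.
    """
    return min(rates.values()) / max(rates.values())

rates = selection_rates(outcomes)
print(rates)                        # approx. {'group_a': 0.67, 'group_b': 0.33}
print(adverse_impact_ratio(rates))  # 0.5 -> below 0.8, worth investigating
```

Unlike a human interviewer, a model that fails a check like this can be retrained, re-weighted, or rejected before it ever screens a single candidate.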

Where does Cangrade stand on this?

At Cangrade, our goal is to harness this power to create a future where hiring is based on merit, not personal bias. As we do so, it is our responsibility to ensure that our hiring algorithms screen candidates in ways consistent with our commitment to workplace equality. The future of hiring lies in the hands of the data scientists who craft these algorithms. Choose carefully when deciding who crafts yours.