Finding the Intersection of AI and DEI

Mateo Cavasotto · Mar 29

The conversation around diversity, equity, and inclusion grew exponentially louder over the last year – and for good reason. For too long, these efforts went unsupported, with most of the work falling on the shoulders of DEI advocates and marginalized populations. As organizations woke up to the value of and need for increased DEI, spurred on by increasing civil unrest and social actions, these initiatives swelled in size and scope. Employers are now working to understand what DEI means in practice, starting with hiring.

Recruiting diverse candidates represents an early but essential step on the road to diversity maturity. As such, many organizations are eager to get diversity recruitment right, which often leads to questions about where technology fits into the process. Here is where things get tricky because when it comes to supporting DEI initiatives, the relationship with certain technologies, like artificial intelligence and machine learning, is … complicated.

While popular for automation and efficiency, research indicates that AI- and ML-powered solutions sometimes come with bias baked in. As a result of this and a few high-profile incidents, many still see this as a buyer-beware market. But here's the thing: AI and ML need diversity to overcome the possibility of bias because, by design, these technologies need data to get smarter. That's why it's time to set the record straight, learn what AI and ML mean for hiring, and debunk the belief that a single solution will either create – or remove – bias from recruiting.

How does bias happen?

This is a bit of a loaded question. Karen Hao for MIT Technology Review reminds us, “It’s not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place.” More than that, Hao says, “We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process.”

Translation: bias is systemic. It can appear in how your organization frames a problem, how you collect your data, and how you prepare that data – in the technologies you use and in your interactions with other humans. Huma Abidi, senior director of AI software products and engineering at Intel, explains, “Despite great intentions to build technology that works for all and serves all, if the group that’s responsible for creating the technology itself is homogenous, then it will likely only work for that particular group. Companies need to understand that if your AI solution is not implemented in a responsible, ethical manner, then the results can cause, at best, embarrassment, but it could also lead to potential legal consequences if you’re not doing it the right way.”

Supporting DEI

The last part of Abidi’s statement is critical, especially for DEI. It acknowledges that while there is the possibility of bias in AI and ML, there are ethical solutions proactively working to support workforce diversity. “Bias is all of our responsibility,” says Harvard Business Review (HBR), and leveraging AI- and ML-powered tools can help fight distorted results and mistrust in business and society. While there’s no easy way to eradicate bias from individual humans, AI and ML are malleable: they can be revised, rewritten, and reworked over time to deliver more objective outcomes. HBR details one such advancement, known as “counterfactual fairness,” a technique that ensures an AI model’s decisions remain the same in a “counterfactual” world, wherein sensitive attributes, such as race, gender, or sexual orientation, change.
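To make the counterfactual fairness idea concrete, here is a minimal, illustrative sketch in Python. Everything in it (the toy `score_candidate` model, the feature names, the check function) is hypothetical and not drawn from any specific vendor's product: the check simply flips the sensitive attribute and verifies the model's score does not move.

```python
def score_candidate(candidate: dict) -> float:
    """Toy screening model: scores only on job-relevant features.
    Purely illustrative; real models are far more complex."""
    return 0.6 * candidate["years_experience"] + 0.4 * candidate["skills_matched"]


def counterfactual_fairness_check(model, candidate, attribute, values) -> bool:
    """Return True if the model's score is identical across every
    counterfactual value of the sensitive attribute."""
    scores = set()
    for value in values:
        # Build the "counterfactual world": same candidate, one attribute changed.
        counterfactual = dict(candidate, **{attribute: value})
        scores.add(model(counterfactual))
    return len(scores) == 1


candidate = {"years_experience": 5, "skills_matched": 8, "gender": "F"}
is_fair = counterfactual_fairness_check(
    score_candidate, candidate, "gender", ["F", "M", "X"]
)
print(is_fair)  # True: this toy model ignores the sensitive attribute
```

In practice the hard part is that sensitive attributes can leak in through correlated proxy features (for example, gaps in employment history), so a real counterfactual fairness analysis has to model those causal pathways rather than just flipping one field.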

While only one example, this speaks to the ability of AI and ML to relearn, or as DEI Strategist Jackye Clayton says, “It’s machine learning, not machine-learned.” So, what does this mean for diversity recruitment? Simply put, yes, there are critical factors to consider about how technology fits into your strategy. At the same time, there are also critical factors to consider about how humans fit into your strategy.

HBR offers, “AI can help humans with bias — but only if humans are working together to tackle bias in AI.” To find the intersection of AI and DEI, you need to accept the truth in that statement. No one solution will fix bias if it is already ingrained in your organization’s hiring processes. Instead, look to leverage a few tools in tandem to develop a deeper understanding of what’s happening and loop humans in to review recommendations along the way. Increase your use of AI and ML to learn about candidates and collect more data. AI needs DEI – and vice versa.