Unbiased AI in Hiring

At Extract, the main focus of our redaction software has always been on shielding personally identifiable information (PII). The increased prevalence of identity theft has underscored this decision as we eliminate PII from public records. But recent conversations we've had in the marketplace, along with new reporting, show why other identifying traits not commonly associated with PII may need to be removed as well.

These discussions have revolved around topics like race and gender, and how unconscious biases can creep into the hiring process. It's all well and good when a small company can view redacted versions of the resumes they're sorting through, but things get more complicated with big multinational companies that are turning to artificial intelligence to cull the hundreds of thousands of resumes they receive.

AI is no doubt a powerful tool when you're facing large volumes of decisions that need to be made. Extract uses AI to refine our redaction software with real-world examples, so we're able to read documents that come to us even if we haven't seen them before.

We know, though, that if good, unbiased data isn't used to train AI, the result is, well, garbage.

This is why the Equal Employment Opportunity Commission (EEOC) has launched an initiative to help both public and private sector employers avoid automating systemic biases. When AI does the initial screening of applicants, the pool of potential hires can be unnaturally narrowed because certain identifiers carry an unjustified negative weight.

Currently, the scope of the project centers on education, alerting employers to the fact that the third-party tools they use to screen resumes can reinforce bias rather than reduce it. The initiative has five initial goals, all focused on education rather than enforcement:

  1. Establish an internal working group to coordinate the agency’s work on the initiative.

  2. Launch a series of listening sessions with key stakeholders about algorithmic tools and their employment ramifications.

  3. Gather information about the adoption, design, and impact of hiring and other employment-related technologies.

  4. Identify promising practices.

  5. Issue technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.

It's worth noting that the goal of the initiative isn't to reduce the use of AI in the hiring process, but to ensure that it's used in a responsible way. Existing guidelines for responsible use can be drawn upon, but employers need to understand how they apply when using automated screening tools.

For us, removing information that could lead to identity theft or cause bias is a snap. We both use and offer our proprietary ID Shield software, which OCRs documents, identifies information to be removed, and then permanently redacts it. If you're interested in learning more, please send us a note.
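To give a feel for the identify-and-redact step, here is a deliberately simplified sketch in Python. This is an illustration only, not ID Shield: real redaction pipelines pair OCR with trained entity recognition and permanently remove the underlying text, while this toy version just matches a few common PII patterns (SSN, email, phone) with regular expressions and substitutes labeled placeholders.

```python
import re

# Illustrative only -- a toy pattern-based redactor, not the ID Shield product.
# Each entry maps a label to a regex for one common PII format.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."))
# Reach Jane at [EMAIL REDACTED] or [PHONE REDACTED]; SSN [SSN REDACTED].
```

Pattern matching like this catches only rigidly formatted identifiers; traits such as names that signal race or gender require entity recognition trained on real documents, which is exactly where good, unbiased training data matters.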


About the Author: Chris Mack

Chris is a Marketing Manager at Extract with experience in product development, data analysis, and both traditional and digital marketing. Chris received his bachelor’s degree in English from Bucknell University and has an MBA from the University of Notre Dame. A passionate marketer, Chris strives to make complex ideas more accessible to those around him in a compelling way.