Large, popular companies today can wade through millions of job applications each year. Screening them all could fill several full-time jobs, which is why many have trained algorithms to winnow at least the first round of candidates, an approach that has spawned a thriving side economy known as "work tech." But research shows this automated screening can reject qualified workers who don't instantly meet the machine's programmed-in criteria, criteria that tend to be modeled on past strong applicants, who were often white, American, and male. Today, some of America's largest companies are vowing to implement new safeguards designed to eliminate this type of bias.
It's the first initiative by a new group called the Data & Trust Alliance, and it aims to offer companies that rely on AI hiring a tool kit for identifying and eliminating unfair bias. The business partners include 21 large corporations, among them Walmart, Nike, Meta, IBM, American Express, Mastercard, CVS, Deloitte, General Motors, Humana, Nielsen, and Under Armour. The alliance was formed last year by former American Express CEO Ken Chenault and former IBM CEO Sam Palmisano, who saw growing risks in letting AI solve all of business's problems.
This initiative's corporate partners employ almost 4 million people and have a combined market value of more than $3 trillion. Its "Algorithmic Bias Safeguards" include a set of 55 questions for evaluating AI hiring software. The alliance says this tool can be used to spot AI's unintended discrimination in everything from a company's training data and hiring-model design to its bias remediation methods, commitments to diversity, and transparency about the whole process. The criteria were reportedly developed by a working group of professionals from HR, AI, IT, law, and diversity, equity, and inclusion, then refined with input from hundreds of outside academic experts and business leaders.
Critics of AI hiring bias will likely counter that, while this sounds great, such oversight belongs in the hands of an independent authority rather than the businesses themselves. The Data & Trust Alliance is addressing that concern in part by stressing that the group has no plans to become a think tank or influence policy, and that it will share its tools and best practices with anyone trying to advance the responsible use of data and algorithms.