When it comes to finding the right hire, gone are the days when an army of secretaries thumbed through paper applications. Now, algorithms can help us zip through resumes and backgrounds to find the perfect candidate.
But hidden in the promise of technology-aided efficiency is also the danger of creating a new class of technology-aided messes. Last week, Data & Society Research Institute technologists Alex Rosenblat, Tamara Kneese, and danah boyd published a paper looking at how popular hiring technology can actually perpetuate discrimination, iterate on mistakes until they become impossible to spot, and keep the resumes of qualified people from ever reaching an employer’s inbox.
Here are four ways, outlined in the paper, that too much reliance on this sort of software could make your organization sloppy:
The lives we lead online can be goldmines of personal information. That’s part of the reason why some companies make their bread-and-butter on social media background checks. But when algorithms assess a person based on what he or she posts online, those algorithms can also insert errors that, in turn, perpetuate harmful biases.
Take a look at the list of red flags that a company called Social Intelligence uses to assess potential employees:
Notice the sexually explicit material tab? Now consider this: You’re in a relationship. You upload a photo of you and your partner kissing next to Niagara Falls. And let’s say you’re in a same-sex relationship. Sometimes, social media platforms flag images as inappropriate even when they’re perfectly beautiful. The paper’s authors cite one example in which Facebook flagged a photo of two men kissing as “graphic sexual content” until other users petitioned to have the flag removed. But would that flag have shown up on your social media background check? Would it indirectly agree with whatever homophobe flagged the image, and assert that you’re a flawed candidate?
That kind of subtle discrimination can persist well beyond homophobia. Federal Trade Commission chief technologist Latanya Sweeney ran a study in which she found that “black-sounding names” pulled up Google ads for criminal background checks, but names like “Geoffrey” didn’t. If you’re an employer considering Geoffrey over, say, Tyrone, and Google searches routinely pull up criminal background check ads for one candidate, it might push you to hire someone with a cleaner-seeming record.
Algorithmic discrimination can extend to your health, too. It can be illegal to discriminate against a candidate because of a medical condition, but algorithms can quietly make it easier to favor candidates who look cheaper to insure. Data broker Acxiom, for example, tagged a man named Dan Abate as “of diabetes interest,” which led to his name and address being added to a diabetes mailing list. He didn’t, as it happened, have diabetes. Getting spam seems harmless, but the process by which all this data is chewed up and interpreted is also incredibly obscure. If you are being discriminated against, there’s hardly any way to tell.
Sometimes, companies attempt to use Big Data to right wrongs. The authors cite the example of a company called Entelo that actually offers a hiring algorithm to encourage more diversity. Which sounds great! But it could also be dangerous, the authors write, if the algorithm shifts the conversation away from why the company was implicitly or explicitly rejecting diversity before. “In other words, erasing an embarrassing absence of diversity through a ‘hack’ will not automatically generate the conditions that breed equality in the workforce,” the authors write. “A quick fix is potentially a disincentive to examine more closely the issues that create an environment that is hostile to diversity in the first place.”
Not all employees have to be great at the Internet to do their jobs well. If you’re a custodian, a line cook, or a maid, you could have lots of experience in the service industry, but little experience working with computers. If hiring managers use applicant tracking systems (ATS) to process job applications, preferences will skew to people who do have more Internet expertise. You could be throwing out all that experience to favor primarily young employees who have less job know-how and more knowledge of how to slap together an irrelevant PowerPoint.
In addition, some ATS programs rule out formatted text. If part of your resume is in italics, you might get passed over for jobs constantly and never find out why.
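To see why formatting can sink an application, consider a toy sketch of the kind of keyword scan an ATS might run (this is purely illustrative, not any real vendor’s code; the resume text and skill list are invented). If markup from a formatted resume leaks into the text layer, an exact-token match can miss a word the candidate clearly has:

```python
import re

def keywords_found(resume_text, keywords):
    """Naive ATS-style scan: split the text into lowercase word
    tokens and report which required keywords appear."""
    tokens = set(re.findall(r"[a-z]+", resume_text.lower()))
    return {kw for kw in keywords if kw in tokens}

plain = "Five years of line cook experience in high-volume kitchens."

# The same sentence with italics markup left in the extracted text,
# splitting a word in two:
formatted = "Five years of <i>line</i> co<i>ok</i> experience in high-volume kitchens."

skills = {"cook", "experience"}
print(keywords_found(plain, skills))      # both keywords match
print(keywords_found(formatted, skills))  # "cook" is lost to the markup
```

A scanner like this would silently rank the formatted resume below the plain one, and the candidate would never learn why.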
Small mistakes in recruiting databases can become magnified over time, leaving a permanent “black mark” on a person’s record, the paper’s authors write. But sometimes, databases include information that might not even be relevant, or worse, is false. The authors cite a program subscribed to by retailers like CVS and Target that includes allegations of theft.
These are serious charges, but the records don’t necessarily include whether the accusations were found to be true in the end. Plus, there’s no way for employees to know these kinds of records exist, or how to contest them. “The accuracy and implications of these records can be harder to contest than the veracity of information in one’s credit file because there are no legally-mandated procedures for doing so,” the authors write.
There’s little doubt that humans make bad decisions all the time. But even though Big Data practices aim to help us make wildly more efficient and informed ones, technology built by humans is often subject to bad decision-making, too.