The Problem with Hiring Algorithms

 

Originally published in EthicalSystems.org, December 1, 2019

In 2004, when a “webcam” was relatively unheard-of tech, Mark Newman knew that it would be the future of hiring. One of the first things the 20-year-old did, after getting his degree in international business, was to co-found HireVue, a company offering a digital interviewing platform. Business trickled in. While Newman lived at his parents’ house in Salt Lake City, the company made just $100,000 in revenue in its first five years. HireVue later received some outside capital, expanded, and, in 2012, boasted some 200 clients—including Nike, Starbucks, and Walmart—which would pay HireVue, depending on project volume, between $5,000 and $1 million. Recently, HireVue, which was bought earlier this year by the Carlyle Group, has become the source of some alarm, or at least trepidation, for its foray into the application of artificial intelligence in the hiring process. No longer does the company merely offer clients an “asynchronous” interviewing service, a way for hiring managers to screen thousands of applicants quickly by reviewing their video interviews—HireVue can now give companies the option of letting machine-learning algorithms choose the “best” candidates for them, based on, among other things, applicants’ tone, facial expressions, and sentence construction.

If that gives you the creeps, you’re not alone. A 2017 Pew Research Center report found few Americans to be enthused, and many worried, by the prospect of companies using hiring algorithms. More recently, around a dozen interviewees assessed by HireVue’s AI told the Washington Post that it felt “alienating and dehumanizing to have to wow a computer before being deemed worthy of a company’s time.” They also wondered how their recording might be used without their knowledge. Several applicants mentioned passing on the opportunity because thinking about the AI interview, as one of them told the paper, “made my skin crawl.” Had these applicants sat for a standard 30-minute interview, composed of a half-dozen questions, the AI could have analyzed up to 500,000 data points. Nathan Mondragon, HireVue’s chief industrial-organizational psychologist, told the Washington Post that each one of those points “become ingredients in the person’s calculated score,” between 1 and 100, on which hiring decisions can depend. New scores are ranked against a store of traits—mostly having to do with language use and verbal skills—from previous candidates for a similar position, who went on to thrive on the job.
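To make that mechanism concrete, here is a minimal sketch of the kind of scoring scheme the Post describes: features extracted from an interview are compared against the profile of prior hires who thrived in the role, and the similarity is mapped to a score between 1 and 100. HireVue’s actual model is proprietary; every feature name, function, and formula below is an assumption for illustration only.

```python
# Hypothetical sketch, NOT HireVue's method: compare a candidate's extracted
# features with the average feature profile of prior successful hires, then
# rescale the similarity to a 1-100 score.
import numpy as np

def score_candidate(candidate_features: np.ndarray,
                    successful_hire_profiles: np.ndarray) -> float:
    """Map cosine similarity to the mean profile of prior hires onto 1-100."""
    benchmark = successful_hire_profiles.mean(axis=0)
    cos = np.dot(candidate_features, benchmark) / (
        np.linalg.norm(candidate_features) * np.linalg.norm(benchmark))
    # Rescale similarity from [-1, 1] to [1, 100].
    return 1 + 99 * (cos + 1) / 2

# Toy usage: three made-up features (e.g., speech rate, word choice, tone).
prior_hires = np.array([[0.8, 0.6, 0.7], [0.9, 0.5, 0.6]])
print(round(score_candidate(np.array([0.7, 0.4, 0.8]), prior_hires), 1))
```

Even in this toy version, the design choice is visible: whoever most resembles past top performers scores highest, which is exactly the property critics worry about below.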

HireVue wants you to believe that this is a good thing. After all, their pitch goes, humans are biased. If something like hunger can affect a hiring manager’s decision—let alone classism, sexism, lookism, and other “isms”—then why not rely on the less capricious, more objective decisions of machine-learning algorithms? No doubt some job seekers agree with the sentiment Loren Larsen, HireVue’s Chief Technology Officer, shared recently with the Telegraph: “I would much prefer having my first screening with an algorithm that treats me fairly rather than one that depends on how tired the recruiter is that day.” Of course, the appeal of AI hiring isn’t just about doing right by the applicants. As a 2019 white paper, from the Society for Industrial and Organizational Psychology, notes, “AI applied to assessing and selecting talent offers some exciting promises for making hiring decisions less costly and more accurate for organizations while also being less burdensome and (potentially) fairer for job seekers.”

Do HireVue’s algorithms treat potential employees fairly? Some researchers in machine learning and human-computer interaction doubt it. Luke Stark, a postdoc at Microsoft Research Montreal who studies how AI, ethics, and emotion interact, told the Washington Post that HireVue’s claims—that its automated software can glean a worker’s personality and predict their performance from such things as tone—should make us skeptical:

Systems like HireVue, he said, have become quite skilled at spitting out data points that seem convincing, even when they’re not backed by science. And he finds this “charisma of numbers” really troubling because of the overconfidence employers might lend them while seeking to decide the path of applicants’ careers.

The best AI systems today, he said, are notoriously prone to misunderstanding meaning and intent. But he worried that even their perceived success at divining a person’s true worth could help perpetuate a “homogenous” corporate monoculture of automatons, each new hire modeled after the last.

Eric Siegel, an expert in machine learning and author of Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, echoed Stark’s remarks. In an email, Siegel told me, “Companies that buy into HireVue are inevitably, to a great degree, falling for that feeling of wonderment and speculation that a kid has when playing with a Magic Eight Ball.” That, in itself, doesn’t mean HireVue’s algorithms are completely unhelpful. “Driving decisions with data has the potential to overcome human bias in some situations, but also, if not managed correctly, could easily instill, perpetuate, magnify, and automate human biases,” he said.
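Siegel’s caveat can be made concrete with a small, entirely synthetic example: a model trained on past hiring decisions that penalized one group will reproduce that penalty through correlated proxy features, even when the sensitive attribute itself is withheld from the model. None of the data or variable names below refer to any real system.

```python
# Minimal, synthetic illustration of how data-driven hiring can automate bias:
# historical decisions encode a penalty against group 1, and the model learns
# it via a correlated proxy feature (e.g., zip code or word choice).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)                      # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)              # sensitive attribute (0 or 1)
proxy = group + rng.normal(scale=0.3, size=n)   # correlates with the attribute
# Historical decisions: driven by skill, but penalizing group 1.
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

# Two equally skilled candidates, differing only in the proxy feature,
# receive different predicted chances of being hired.
test = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(test)[:, 1])
```

The point of the sketch is Siegel’s “if not managed correctly”: nothing in the code mentions the group directly, yet the learned model still scores identical skill differently, which is the perpetuation and automation of bias he warns about.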

To continue reading this article, click here.