An Algorithm That ‘Predicts’ Criminality Based on a Face Sparks a Furor

Originally published in Wired.com, June 24, 2020

Its creators said they could use facial analysis to determine if someone would become a criminal. Critics said the work recalled debunked “race science.”

In early May, a press release from Harrisburg University claimed that two professors and a graduate student had developed a facial-recognition program that could predict whether someone would be a criminal. The release said the paper would be published in a collection by Springer Nature, a big academic publisher.

With “80 percent accuracy and with no racial bias,” the paper, “A Deep Neural Network Model to Predict Criminality Using Image Processing,” claimed its algorithm could predict “if someone is a criminal based solely on a picture of their face.” The press release has since been deleted from the university website.

On Tuesday, more than 1,000 machine-learning researchers, sociologists, historians, and ethicists released a public letter condemning the paper, and Springer Nature confirmed on Twitter that it would not publish the research.

But the researchers say the problem doesn’t stop there. Signers of the letter, collectively calling themselves the Coalition for Critical Technology (CCT), said the paper’s claims “are based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The letter argues it is impossible to predict criminality without racial bias, “because the category of ‘criminality’ itself is racially biased.”

Advances in data science and machine learning have led to numerous algorithms in recent years that purport to predict crimes or criminality. But if the data used to build those algorithms is biased, the algorithms’ predictions will also be biased. Because of the racially skewed nature of policing in the US, the letter argues, any predictive algorithm modeling criminality will only reproduce the biases already reflected in the criminal justice system.
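
The bias-propagation point can be illustrated with a toy simulation. The sketch below is not from the article or the paper; the data is synthetic and every variable name is hypothetical. It trains a classifier on labels generated under skewed "enforcement" of one group and shows that the model reproduces that skew even though group membership is never supplied as a feature, only a correlated proxy.

```python
# Hypothetical sketch: biased labels produce biased predictions,
# even when the protected attribute is excluded from the features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group membership and a seemingly "neutral" feature correlated with it
# (e.g. a neighborhood proxy).
group = rng.integers(0, 2, n)
feature = group + rng.normal(0, 0.5, n)

# Underlying behavior is identical across groups...
behavior = rng.normal(0, 1, n)

# ...but the recorded label reflects skewed enforcement: group 1 is labeled
# positive far more often for the same behavior.
label = (behavior + 1.5 * group + rng.normal(0, 0.5, n) > 1.0).astype(int)

# Train only on the proxy feature; the model never sees group membership.
model = LogisticRegression().fit(feature.reshape(-1, 1), label)
pred = model.predict(feature.reshape(-1, 1))

# The classifier still flags group 1 far more often, reproducing the label bias.
for g in (0, 1):
    print(f"group {g}: predicted-positive rate = {pred[group == g].mean():.2f}")
```

In this toy setup the skew in the training labels, not any difference in behavior, is what the model learns, which is the mechanism the letter's signatories describe for crime-prediction systems trained on arrest records.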

To continue reading this article, click here.
