This excerpt is from TechCrunch. To view the whole article click here.  

Machine Learning And Human Bias: An Uneasy Pair

 

“We’re watching you.” This was the warning that the Chicago Police Department gave to more than 400 people on its “Heat List.” The list, an attempt to identify the people most likely to commit violent crime in the city, was created with a predictive algorithm that focused on factors including, per the Chicago Tribune, “his or her acquaintances and their arrest histories – and whether any of those associates have been shot in the past.”


Algorithms like this obviously raise some uncomfortable questions. Who is on this list and why? Does it take race, gender, education and other personal factors into account? When the prison population of America is overwhelmingly Black and Latino males, would an algorithm based on relationships disproportionately target young men of color?

There are many reasons why such algorithms are of interest, but the rewards are inseparable from the risks. Humans are biased, and the biases we encode into machines are then scaled and automated. This is not inherently bad (or good), but it raises a question: How do we operate in a world increasingly consumed with “personal analytics” that can predict race, religion, gender, age, sexual orientation, health status and much more?

I’d wager that most readers feel a little uneasy about how the Chicago PD Heat List was implemented – even if they agree that the intention behind the algorithm was good. To use machine learning and public data responsibly, we need to have an uncomfortable discussion about what we teach machines and how we use the output.

What We Teach Machines

Most people have an intuitive understanding of categories concerning race, religion and gender, yet when asked to define them precisely, they quickly find themselves hard-pressed to do so. Human beings cannot agree objectively on what race a given person is. As Sen and Wasow (2014) argue, race is a social construct based on a mixture of both mutable and immutable traits, including skin color, religion, location and diet.

As a result, the definition of who falls into which racial category varies over time (e.g. Italians were once considered to be black in the American South), and a given individual may identify with one race at one time and with another race a decade later. This inability to precisely define a concept such as race represents a risk for personal analytics.

Any program designed to predict, manipulate and display racial categories must operationalize them both for internal processing and for human consumption. Machine learning is one of the most effective frameworks for doing so because machine learning programs learn from human-provided examples rather than explicit rules and heuristics.
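
To make that concrete, here is a minimal sketch of what “learning from human-provided examples” looks like in code. The dataset, file name, column names and labels below are hypothetical, and this is not the system behind any program mentioned in this article; the point is structural: the model has no notion of race beyond whatever its annotators supplied.

```python
# Minimal sketch, assuming scikit-learn and a hypothetical human-labeled
# dataset ("annotated_profiles.csv" with "profile_text" and "annotator_label"
# columns). Purely illustrative: the classifier reproduces whatever category
# judgments the human annotators made.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled = pd.read_csv("annotated_profiles.csv")  # hypothetical file

# No explicit rules about race are written anywhere; the model simply fits
# the mapping from profile text to the labels humans chose.
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(labeled["profile_text"], labeled["annotator_label"])

# Predictions generalize the annotators' judgments -- and their biases --
# to people the annotators never saw.
print(model.predict(["teaches capoeira in Chicago, follows local news"]))
```

If the annotators disagree about who belongs in which category, that disagreement does not disappear; it is averaged into the model’s decision boundary.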

So let’s say a programmer builds an algorithm that makes perfect racial predictions based on the categories known to an average American (what is called a “common-knowledge test”). Many of its outputs will still look strange from other perspectives: many Brazilians who are considered white in their home country would be recognized as black in the United States.

Biracial Americans and individuals from places such as India, Turkey and Israel often challenge racial categorization, at least as Americans understand it. The algorithm will thus necessarily operationalize the biases of its creators, and these biases will conflict with those of others.

The result is a machine learning program that treats race as its creators do — not necessarily as the individuals see themselves or as the users of the program conceive of race. This may be relatively unproblematic in use cases like marketing and social science research, but with the Chicago PD Heat List, ‘No Fly Lists’ and other public safety applications, biases and misperceptions could have severe ramifications at scale.

How We Use The Data

On an individual scale, any algorithm for personal analytics will make errors. People are multi-faceted and complex, and we rarely fit neatly into clearly delineated groups. Nonetheless, when individual-level predictions are aggregated, they can help us understand groups of people at scale, identify disparities, and make better-informed decisions about how to improve our society.
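
To illustrate that aggregation step (with entirely made-up numbers), a few lines of Python show how predictions that are unreliable person-by-person can still surface a group-level pattern worth scrutinizing:

```python
# Illustrative sketch only: the neighborhoods, risk scores and column names
# are invented for this example, not drawn from any real system.
import pandas as pd

predictions = pd.DataFrame({
    "neighborhood":   ["A", "A", "A", "B", "B", "B"],
    "predicted_risk": [0.82, 0.45, 0.67, 0.21, 0.33, 0.18],
})

# Any single row may badly mischaracterize the person behind it, but the
# group averages can reveal how differently a model treats two areas.
print(predictions.groupby("neighborhood")["predicted_risk"].mean())
```

Whether such a gap reflects a real difference or a biased model is exactly the kind of question the aggregate view lets us ask.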

So if knocking on the doors of potential criminals seems wrong, do we have alternatives?

With the Chicago PD’s algorithm, one option is to generate a ‘Heat Map’ based on the locations of high-risk populations and activities. Los Angeles, Atlanta, Santa Cruz and many other police jurisdictions already do something similar using a predictive policing tool called PredPol. It allows police departments to increase their presence in crime-prone areas, at the right times, without using any personal data. It strictly looks at type, place and time of crimes.
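
For a sense of what “type, place and time” means mechanically, here is a rough sketch of that kind of spatio-temporal binning. The incident records and grid size are invented for illustration, and this is not PredPol’s actual model, which is not detailed here:

```python
# Rough sketch of place-and-time hotspot counting: bin incidents by grid
# cell and hour, with no personal data involved.
from collections import Counter

# (crime_type, latitude, longitude, hour_of_day) -- made-up records.
incidents = [
    ("burglary", 41.881, -87.623, 22),
    ("burglary", 41.882, -87.624, 23),
    ("assault",  41.900, -87.650, 1),
]

def grid_cell(lat, lon, size=0.01):
    """Snap coordinates to a coarse grid cell (roughly 1 km here)."""
    return (round(lat / size) * size, round(lon / size) * size)

# Count incidents per (cell, hour); the largest counts are the hotspots
# where extra patrol time would be scheduled.
heat = Counter((grid_cell(lat, lon), hour) for _, lat, lon, hour in incidents)
for (cell, hour), count in heat.most_common(3):
    print(cell, f"{hour:02d}:00", count)
```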

But is profiling by location another form of discrimination? Would police inevitably stop and ticket more people in heat map areas? If I can only afford to live in an economically depressed area, will I be stopped and questioned by police more often than individuals living in a wealthy area? Could a targeted, predictable police presence drive crime into locations where police are unprepared, and thus expand the geography of crime in a city?

There is a huge opportunity to help rather than harm people.

Perhaps, instead, there is a net good. With police strategically located and working with communities, there is an opportunity to reduce crime and create greater opportunity for residents. An algorithm also has the potential to discriminate less than human analysts do. PredPol reports double-digit crime reductions in cities that have implemented the software; the Chicago PD has not yet released any data on the Heat List’s effectiveness.

The Chicago PD and PredPol models are important reminders that personal analytics aren’t the only option. Before we operationalize identity – and certainly before we target individuals and knock on doors – we have to consider the ethics of our approach, not just the elegance of the solution.

By: Jason Baldridge, co-founder, People Pattern
Originally published at http://techcrunch.com
