AI Success Depends On How You Choose This One Number


Originally published in Forbes, March 25, 2024.

To do its job, AI needs your help. It has the potential to drive millions of operational decisions—such as whom to contact, approve, investigate, incarcerate, set up on a date or medicate. But there’s a critical project step that’s often overlooked: Choosing the decision threshold, a single number that differentiates the yeses from the nos. For example, market only to those customers more than 30% likely to buy. Or approve only those credit applicants more than 95% likely to repay.
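To make that concrete, here is a minimal sketch of what a decision threshold does. The names and likelihood scores below are invented for illustration, not taken from any real project:

```python
# A decision threshold is one number that turns each predicted likelihood
# into a yes-or-no decision. All names and scores below are made up.
p_buy   = {"Ana": 0.42, "Ben": 0.17, "Cho": 0.31}   # P(customer will buy)
p_repay = {"Kit": 0.97, "Lou": 0.91, "Max": 0.99}   # P(applicant will repay)

market_to = [c for c, p in p_buy.items() if p > 0.30]    # market above 30%
approve   = [a for a, p in p_repay.items() if p > 0.95]  # approve above 95%

print(market_to)  # ['Ana', 'Cho']
print(approve)    # ['Kit', 'Max']
```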

Setting the decision threshold represents a strategic business choice, not a technical one. It isn’t the “rocket science” part—it’s the part that aims the rocket. In making this choice, every predictive AI project must wrestle with striking practical tradeoffs between competing objectives. Let me explain.

Why AI Boils Down To A ‘Likelihood Calculator’

“AI” can mean many things, but for organizations using artificial intelligence to improve existing, large-scale operations, the applicable technology is machine learning, which is a central basis for—and what many people mean by—AI. ML learns from data to predict outcomes—whether each individual will click, buy, lie or die—and these predictions drive all kinds of operational decisions.

The type of AI that’s dominated headlines for the past year and a half is called generative AI. That’s not what we’re talking about here. The ML use cases I’m referring to are also called predictive AI, or predictive analytics. By improving an enterprise’s largest-scale operations, predictive AI has the potential to deliver the greatest impact on enterprise efficiencies—often more so than generative AI.

Let’s be real. Nothing can predict perfectly—there’s no magic crystal ball—but ML offers the next best thing: likelihoods. Each predictive model that ML generates from data can be thought of as a “likelihood calculator.” It predictively scores each individual case, telling you the chances of the outcome that you’ve enlisted ML to predict.

To turn these likelihood scores (aka probabilities) into actions, many projects need only choose a single decision threshold (aka confidence threshold). For example, a marketing project might set the threshold at 30%: If a customer is more than 30% likely to buy, then send them a marketing brochure.
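In practice, those likelihood scores come from a trained model. The sketch below assumes scikit-learn and uses synthetic stand-in data; the part that matters is the final line, where the 30% threshold converts probabilities into actions:

```python
# Sketch: score customers with a trained classifier, then apply the
# 30% threshold to decide who receives the brochure. The features and
# outcomes here are synthetic stand-ins, not real customer data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))                                # historical features
y_train = (X_train[:, 0] + rng.normal(size=1000) > 1).astype(int)   # bought or not

model = LogisticRegression().fit(X_train, y_train)   # the "likelihood calculator"

X_customers = rng.normal(size=(10, 5))                    # customers to score
buy_likelihood = model.predict_proba(X_customers)[:, 1]   # P(buy) per customer

THRESHOLD = 0.30
send_brochure = buy_likelihood > THRESHOLD                # True = contact this customer
```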

Voila—with this one choice, you’ve empowered the computer to drive a million decisions.

The Decision Threshold: Where You Draw The Line

Picking the decision threshold makes all the difference. It determines exactly how a predictive model will drive decisions, and it exerts a major influence on the bottom-line improvements to decision making that the model delivers. In fact, the choice of threshold often has a greater potential effect on project value than changes to model development itself.

Take payment card fraud detection, which uses a predictive model to decide whether to authorize or hold each transaction. Clearly, it’s the transactions predicted most likely to be fraudulent that should be held. But exactly how likely—where do you draw the line?

For starters, a medium-sized bank might choose a threshold that blocks, say, the 3% most risky transactions, that is, the 3% most likely to be fraudulent. This could successfully block 220,000 cases of fraud per month—but it would simultaneously inconvenience customers 77,000 times by erroneously blocking legitimate transactions.

Alternatively, if the bank went with a more lenient threshold and blocked only the riskiest 2%, there’d be fewer such inconveniences—only 17,600—but it would also block a bit less fraud, around 180,000 cases. (These are approximate results from using XGBoost, a popular ML solution, on a public dataset of credit card transactions, scaled to an assumed 10 million transactions per month; see a more detailed breakdown of fraud detection’s value here.)
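The arithmetic behind comparing those two thresholds can be sketched as follows. This assumes a month of transactions has already been scored for fraud risk (for instance by an XGBoost model, as in the example above); the synthetic scores below are stand-ins, so the printed counts will not match the figures quoted here:

```python
# Sketch: compare holding the riskiest 3% vs. 2% of transactions.
# `fraud_score` and `is_fraud` are synthetic stand-ins for a month of
# scored transactions (scaled down from the ~10M/month in the example).
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
is_fraud = rng.random(n) < 0.02                       # stand-in fraud labels
# Stand-in scores: fraudulent transactions tend to score higher.
fraud_score = np.where(is_fraud, rng.beta(5, 2, n), rng.beta(2, 5, n))

for top_pct in (3, 2):
    cutoff = np.percentile(fraud_score, 100 - top_pct)   # score cutoff for top X%
    held = fraud_score > cutoff
    fraud_blocked = int(np.sum(held & is_fraud))          # fraud stopped
    false_alarms  = int(np.sum(held & ~is_fraud))         # customers inconvenienced
    print(f"Hold top {top_pct}%: {fraud_blocked:,} fraud blocked, "
          f"{false_alarms:,} legitimate transactions held")
```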

Each predictive AI project offers this kind of optionality, a range of available trade-offs. You choose which trade-off to strike by setting the decision threshold. Navigating this choice demands strategic thinking. Regardless of how well the data scientists develop a predictive model and how well the ML software works, before operationalizing the predictions and (hopefully) capturing value, business leaders must weigh in on the pivotal choice of setting the decision threshold.

One Model, Many Options

By tuning the decision threshold, a business may use each predictive model developed by its data scientists in many different ways. When a company sets the threshold, it is essentially determining how it will triage. How many transactions are worth interrupting as possible fraud? How many customers are worth spending a marketing budget to contact, keeping in mind that lowering the threshold means contacting more prospects but also spending money to market to some who are less likely to buy? How many credit applications should a bank approve, knowing that the more lenient it is, the riskier the debtors it accepts?

By turning the confidence-threshold knob, the business makes a strategic choice as to how aggressively to deploy. Navigating these tradeoffs hinges on business pragmatics and strategic priorities.
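One way to ground that strategic choice is to sweep candidate thresholds and attach rough dollar values to each side of the trade-off. The per-decision figures below are hypothetical assumptions for illustration, not numbers from this article:

```python
# Sketch: turn the confidence-threshold knob and compare net business value.
# The dollar amounts are hypothetical assumptions; substitute your own.
import numpy as np

AVG_FRAUD_LOSS_AVOIDED = 500.0   # assumed value of each fraud blocked
CUSTOMER_FRICTION_COST = 15.0    # assumed cost of each false alarm

def value_at_threshold(fraud_score, is_fraud, top_pct):
    """Net value of holding the riskiest `top_pct` percent of transactions."""
    cutoff = np.percentile(fraud_score, 100 - top_pct)
    held = fraud_score > cutoff
    fraud_blocked = np.sum(held & is_fraud)
    false_alarms = np.sum(held & ~is_fraud)
    return fraud_blocked * AVG_FRAUD_LOSS_AVOIDED - false_alarms * CUSTOMER_FRICTION_COST

# Example sweep (reuses the scored transactions from the previous sketch):
# for pct in (1, 2, 3, 5, 10):
#     print(pct, value_at_threshold(fraud_score, is_fraud, pct))
```

A simple value formula like this only frames the decision; business pragmatics and strategic priorities still determine where the knob finally lands.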

Understanding the impact of the confidence threshold is one of several areas where business professionals must ramp up on a semi-technical understanding of how ML drives operations. To collaborate effectively with data professionals, they must gain a concrete understanding of how ML projects work from end to end: What will the model predict? Precisely how will its predictions affect operations? Which metrics meaningfully track how well it predicts? And what kind of data is needed? These essentials make or break each initiative—getting them right paves the way for ML’s value-driven deployment. This know-how does not demand a fully technical understanding of the core number crunching, but it’s still a book’s worth.


About the author
Eric Siegel is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series and its new sister, Generative AI Applications Summit, the instructor of the acclaimed online course “Machine Learning Leadership and Practice – End-to-End Mastery,” executive editor of The Machine Learning Times and a frequent keynote speaker. He wrote the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at hundreds of universities, as well as The AI Playbook: Mastering the Rare Art of Machine Learning Deployment. Eric’s interdisciplinary work bridges the stubborn technology/business gap. At Columbia, he won the Distinguished Faculty award when teaching the graduate computer science courses in ML and AI. Later, he served as a business school professor at UVA Darden. Eric also publishes op-eds on analytics and social justice. You can follow him on LinkedIn.
