Machine Learning Times
A University Curriculum Supplement to Teach a Business Framework for ML Deployment
In 2023, as a visiting analytics professor at UVA Darden School of Business, I developed and “field tested” a curriculum supplement designed to augment introductory data science courses so that they cover the business-side execution of machine learning projects – the know-how needed to ensure successful deployment. In this article, I motivate and outline the curriculum supplement, and provide access to the course materials. 

This curriculum supplement corresponds with my new book, The AI Playbook: Mastering the Rare Art of Machine Learning Deployment, and the business process presented therein, bizML.

“I believe the greatest challenge of the 21st century is to make an increasingly fast-moving technical world ‘legible’ to a large number of people. My belief is that by solving these information asymmetries, we will naturally build the infrastructures necessary to maintain stability in an era of great change and possibility.”

–Jack Clark, former Policy Director of OpenAI

While there are so many how-to courses for hands-on techies, there are practically none that also serve the business leadership side of machine learning – a striking omission, since success with machine learning relies on a very particular business leadership practice just as much as it relies on adept number crunching. Without that leadership, most machine learning projects fail.

In most cases, fledgling data scientists are effectively taught that a project revolves around the number crunching rather than the intended business deployment. They almost invariably jump into hands-on courses and books that presume the training data is already prepared – and that the preproduction business steps that would flesh out a viable deployment plan and associated prediction goal were completed elsewhere; the prediction goal (dependent variable) manifests only by way of preparing that data.

Moreover, the vast majority of instructors and authors egg them on: The first step in the coursework is to load data into the modeling software. This supports a false narrative that condones skipping right past the earlier steps of a project. As a result, jumping straight into the core ML itself before establishing a path to operational deployment is the most common mistake that derails ML projects.

Every machine learning project potentially faces deployment challenges that might derail the entire project. To deploy means endeavoring to introduce probabilities – namely the scores output by a predictive model – into an existing, mission-critical business process. This is often a qualitative unknown to business stakeholders. It requires a new kind of socialization and education so that key personnel across the enterprise can provide informed input and authorization.

To be specific, business stakeholders must upskill on three semi-technical aspects that drive the deployment of a predictive model:

1) What does it predict? The prediction goal – technically known as the dependent variable – must be defined in great detail, and those details are informed by business-side considerations. For example, for response modeling, it is rarely sufficient to stipulate “Will the customer buy if contacted?” as the goal. Rather, the goal must stipulate qualifiers and caveats, such as, “If sent a brochure, will the customer buy within 13 business days with a purchase value of at least $125 after shipping and not return the product for a refund within 45 days?”
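As a minimal sketch, here is how a goal definition at that level of detail becomes labeling logic during data prep. The column names (brochure_sent_date, purchase_date, value_after_shipping, refunded_within_45_days) are hypothetical stand-ins for whatever the real data source provides:

```python
import pandas as pd

def label_dependent_variable(df: pd.DataFrame) -> pd.Series:
    """Label a case 1 only when every qualifier in the prediction goal holds."""
    bought_in_window = (
        df["purchase_date"].notna()
        # purchase occurred within 13 business days of the brochure being sent
        & (df["purchase_date"] <= df["brochure_sent_date"] + pd.offsets.BDay(13))
    )
    big_enough = df["value_after_shipping"] >= 125     # at least $125 after shipping
    kept_it = ~df["refunded_within_45_days"]           # assumed boolean column
    return (bought_in_window & big_enough & kept_it).astype(int)
```

Every clause in the code corresponds to one business-side qualifier; drop or change a qualifier and the model learns to predict something else.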

2) How well does it predict? Business stakeholders must work with data scientists to agree on the metrics. This involves some ramp-up. Accuracy is generally the wrong measure. There is usually both a technical performance measure (such as lift) and a business metric (such as profit or customer attrition rate) – and forecasting the expected beneficial effect on the latter based on an offline assessment of the former is not always straightforward.
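To make lift concrete, here is a minimal sketch of computing it for the top-scored decile, assuming only an array of model scores and an array of actual 0/1 outcomes:

```python
import numpy as np

def lift_at_fraction(scores: np.ndarray, outcomes: np.ndarray,
                     fraction: float = 0.1) -> float:
    """Response rate among the top-scored fraction, relative to the overall rate."""
    n_top = max(1, int(len(scores) * fraction))
    top = np.argsort(scores)[::-1][:n_top]   # indices of the highest-scoring cases
    return outcomes[top].mean() / outcomes.mean()
```

A lift of 3.0 at the top decile means those customers respond at three times the overall rate – a technical result that still must be translated into the business metric stakeholders care about.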

3) How are those predictions acted upon? This means understanding the precise way in which probabilities affect existing operations in order to improve them. For example, “If the model scores an individual as more than 40% likely to churn, send a retention discount offer.” More than determining a threshold on probabilistic scores, all aspects of how deployment will effect change – including change to daily processes conducted by operational staff – must be well understood.
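The decision rule in that example reduces to a simple threshold test. A sketch follows, where the 40% cutoff comes from the example above and the action names are hypothetical:

```python
def retention_action(churn_probability: float, threshold: float = 0.40) -> str:
    """Map a model score to an operational action per the deployment plan."""
    return "send_retention_discount" if churn_probability > threshold else "no_action"
```

The one-liner is trivial on purpose: the hard part is not the code but agreeing on the threshold and on who changes their daily routine when the offer is triggered.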

In practice, business stakeholders neither learn about nor involve themselves in these details, relegating them to data scientists. With that silo in place, the project is viewed as a technical endeavor rather than a business endeavor with a technical component. Further down the line, when it comes to deploying a model, data professionals find that decision makers have gotten cold feet.

A Curriculum Supplement to Address ML’s Deployment Challenge 

“What are some options for making sure data scientists are well-prepared to deploy models on the job? Traditional education has (so far) shown some shortcomings, leaving employers and data scientists largely to rely on themselves. But as the field matures, savvy educators who are aware of the ‘deployment gap’ can address it in curricula.” 

–Thomas Davenport and Katie Malone, “Deployment as a Critical Business Data Science Discipline,” https://hdsr.mitpress.mit.edu/pub/2fu65ujf/release/3

All professionals who will be involved in ML projects – both data professionals and business professionals – must learn a business paradigm for running ML projects. This is the “whole other half” to an ML project besides its technical execution.

In the curriculum supplement, I present a six-step business practice for ML projects that I call bizML. As important as the steps themselves is the requirement that business stakeholders deeply collaborate throughout – and upskill on pertinent semi-technical knowledge beforehand.

Here are the six steps of bizML:

1) Value: Establish the deployment goal. This step defines the business value proposition: how ML will affect operations in order to improve them by way of the final step, model deployment.

2) Target: Establish the prediction goal. This step defines exactly what the model will predict for each individual case. Each detail of this matters from a business perspective.

3) Performance: Establish the evaluation metrics. This step defines which measures matter the most and what performance level must be achieved – how well the model must predict – for project success.

4) Fuel: Prepare the data. This step defines what the data must look like and gets it into that form.

5) Algorithm: Train the model. This step generates a predictive model from the data. The model is the thing that’s “learned.”

6) Launch: Deploy the model. This step uses the model to render predictions (probabilities) – thereby applying what’s been learned to new cases – and then uses those predictions to improve business operations.
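For technical readers, one hypothetical way to make the six steps concrete is to capture them as a project-charter data structure. The field names and sample entries below are illustrative only, not part of any official bizML specification:

```python
from dataclasses import dataclass

@dataclass
class BizMLCharter:
    value: str        # 1) deployment goal: how operations will improve
    target: str       # 2) prediction goal: what is predicted for each case
    performance: str  # 3) metrics and the level required for success
    fuel: str         # 4) data requirements and preparation plan
    algorithm: str    # 5) modeling approach that learns from the data
    launch: str       # 6) how predictions will change live operations

charter = BizMLCharter(
    value="Reduce churn by targeting retention offers",
    target="Will the customer cancel within 90 days?",
    performance="Lift of at least 3 at the top decile; projected retention savings",
    fuel="24 months of account history, one row per customer",
    algorithm="Any classifier that outputs calibrated probabilities",
    launch="Scores above 0.40 trigger a discount offer in the CRM workflow",
)
```

The point of such an artifact is that every field except `algorithm` demands business-side input – which is exactly the collaboration bizML requires.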

While bizML is more comprehensively introduced in my book, The AI Playbook: Mastering the Rare Art of Machine Learning Deployment (MIT Press), the curriculum supplement that I share here covers the fundamentals – mostly by way of a series of videos.

This supplement empowers students to generate value with machine learning – whether they’ll do so as a techie, a business leader, or some combination of the two. More technical students, inclined as quants are to jump straight into the hands-on work, should consider one thing: This curriculum provides complementary know-how that all great techies also need to master. It contextualizes the core technology, guiding students through the end-to-end process required to successfully deploy a predictive model so that it delivers a business impact.

The Teaching Philosophy Embodied by This Supplement 

This topic must be fully accessible to non-technical learners – otherwise, it entirely misses the point. Since this supplement covers the business side of ML projects, it is much less technical than typical data science fare. There are neither hands-on technical exercises nor heavy math. For techies, this is an important break from and augmentation to traditional ML curricula.

Yet the content is concrete and can seem very technical from the vantage of business-side learners – I therefore qualify it as semi-technical. For example, it involves the math that translates model lift into profit, sometimes by way of false positive and false negative costs. For the most part, the material covers how a model is used, not how it works on the inside or how an ML algorithm generates it. This isn’t the rocket science – it’s how to capitalize on the rocket science. In the end, this often winds up providing practical material many data scientists – who largely focus on the algorithms and hands-on number crunching – have never learned.
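As one minimal sketch of that lift-to-profit math, consider a targeted campaign under assumed economics – the contact cost and profit per responder below are placeholders, not real business numbers:

```python
def campaign_profit(n_contacted: int, base_rate: float, lift: float,
                    profit_per_responder: float, cost_per_contact: float) -> float:
    """Expected profit from contacting the model's top-scored customers.

    Each contacted non-responder is, in effect, a false positive that costs
    cost_per_contact; lift scales the base response rate among those targeted.
    """
    expected_responders = n_contacted * base_rate * lift
    return expected_responders * profit_per_responder - n_contacted * cost_per_contact

# e.g., 10,000 contacts at a 1% base rate with lift 3, $100 per responder,
# $1 per contact: 300 * $100 - 10,000 * $1 = $20,000
campaign_profit(10_000, 0.01, 3.0, 100.0, 1.0)
```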

To engage business learners in these details, the material must be put into their terms in as fascinating a way as possible – leading with a teaser such as the promise to answer questions like:

  • Why must machines learn in order to predict?
  • How much does ML increase profit?
  • How accurately can ML predict?
  • How valuable is an imperfect model?
  • How can you trust ML? (Train/test split – see the sketch after this list)
  • Why call it “predictive” when, rather than predicting the future, it performs diagnosis, object recognition, or fraud detection?
  • Is true “artificial intelligence” coming or is all that “AI” stuff a myth?
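On that trust question, here is a minimal sketch of the train/test split idea, using scikit-learn and synthetic data purely for illustration (held-out accuracy is printed only to show the mechanics – as noted above, accuracy is generally the wrong business measure):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                          # synthetic features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # synthetic outcome

# Hold out 20% of cases the model never sees during training; performance on
# that held-out portion estimates how it will behave on genuinely new cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```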

In general, curricula have been slow to introduce this level of material. In an article on “the deployment gap,” Thomas Davenport and Katie Malone emphasize the need for data science training to include project experience that involves deployment, such as by way of capstone projects. Their article delves into related pedagogical strategy and example training programs that follow suit.

Implementation 

I tested this supplement twice in the University of Virginia School of Data Science course “Business Analytics for Data Scientists” (DS5400) – with the great help and generous participation of the course’s professor, Marc Ruggiano, Inaugural Director of the UVA Darden Collaboratory for Applied Data Science and Lecturer at the University of Virginia School of Data Science.

His course is part of a dual degree program between the School of Data Science and the Darden School of Business. The sessions were during Summer 2022 and Spring 2023. Student feedback was largely positive.

The Curriculum Supplement – Complete Details 

I first provided a live overview presentation that you can watch in full here. Note that the second half of that video is the most salient – the first half is a brief introduction to ML and its business use cases in general.

That presentation introduces the fundamentals and prompts the students to then view 88 minutes of videos for further details. This was akin to a reading assignment, except presented as video rather than words in print. It consisted of 14 short videos selected from my online course (which has 144 videos in total). If you are an instructor and would like access for evaluation purposes, please reach out to me.

Finally, the students were assigned a group project that involved applying the concepts covered.

This opening lecture (video linked above) is entitled “Most Models Don’t Deploy, But They Could.” It provides a high-level overview meant to compel the business professional, generating enough excitement for her to dive into the semi-technical material. Here is an outline of that lecture:


OPENING VIDEO OUTLINE:

Today’s ML hype is unwieldy and greatly misleading. Fetishizing the tech while paying little attention to its launch is like cultivating excitement about rocket science but forgetting to televise the lift-off.

Most models developed for deployment don’t get there (show survey results)

The remedy is a leadership discipline for ML projects

This is the much-neglected “other half” of what it takes to run an ML project

Most know ML projects must lead with the business objective, but that’s only the bumper-sticker version of what must be a discipline

The practice consists of six steps that are fairly standard and well-known, but are rarely executed effectively

It also requires business-stakeholder understanding of (training for) and collaboration on semi-technical material that spans the gap:

How probabilities will mechanically change an operation/decision

The fully-detailed prediction goal definition (depvar)

Model performance measures and KPIs – and how they relate

Conceptual data requirements (on a relatively non-technical level, but pertinent to both the tech- and business-side considerations)

This “no-man’s-land” of material is eschewed by both the tech and biz sides. It’s not only that these two species fail to connect. It’s that both fail to take ownership of both this gap-spanning material and the proper leadership practice. Nobody is connecting the hose to the faucet.


Here is a list of the 14 short videos the students are then assigned to watch, grouped by topic area.

Machine Learning Business Applications – and Lift

The ingredients of a machine learning application
The Prediction Effect: A little prediction goes a long way
Model lift and response modeling

Leadership Process: How to Manage Machine Learning Projects

Project management overview
The six steps for running an ML project
Running and iterating on the process steps
How long a machine learning project takes
Refining the prediction goal

Project Scoping and Greenlighting

Where to start – picking your first ML project
Strategic objectives and key performance indicators
Greenlighting: Internally selling a machine learning initiative

Data Prep and the Dependent Variable

Data prep for-the-win – why it’s absolutely crucial
Defining the dependent variable
Refining the predictive goal statement in detail


Homework Assignment: Group Project on ML Project Leadership 

The student project included a case-study problem-solving challenge wherein the students would form an elevator pitch for a machine learning project by answering the following questions:

What are we trying to predict and for which customers? 

How will the output (e.g., scores) be used and by whom (marketing, store managers, etc)? 

What is the expected value to the company and what metrics will we use to track it? 

How will we use small experiments to validate our findings prior to recommending CFI implement them at scale? 

For peer review, rubrics such as the following can be used.

What are we trying to predict and for which customers? 

As an example, here is a possible prediction goal: “For each new customer who has conducted their first purchase, will the customer make another purchase within six months?” Note that there are plenty of other viable prediction goals for various operational purposes – this is only one example. Keep an open mind for unexpected, creative ideas.

In your opinion, does the prediction goal submitted appear potentially valuable for its intended operational purpose? For example, if it is meant to target a marketing campaign, is it conceivable that it could be used to do so in a way that helps?

Does the submission convey something that can be predicted for an individual customer, that is, a behavior or outcome that either would or would not apply for each customer? Do not concern yourself with whether the right data may or may not be available, only with whether the submission describes something to be predicted about a customer.

How will the output (e.g., scores) be used and by whom (marketing, store managers, etc)? 

Does the submission convey specifically how operations will make use of model predictions?

What is the expected value to the company and what metrics will we use to track it? 

Does the submission convey some type of value or improvement for the business?

Does the submission provide a reasonable explanation as to why the named KPI is a good choice for the project proposal? Even if you do not think it is the best idea or a “great” idea, does the explanation given sound at all reasonable to you?

How will we use small experiments to validate our findings prior to recommending the company implement them at scale? 

Does the idea submitted seem at least worthy of consideration to you – is there some reasonable logic behind the idea?

Fellow educators, let’s train data scientists so they’ll run ML projects that successfully deploy and deliver business value!

This article is a product of the author’s work while he held a one-year position as the Bodily Bicentennial Professor in Analytics at the UVA Darden School of Business.

About the Author

Eric Siegel, Ph.D., is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series and its new sister, Generative AI World, the instructor of the acclaimed online course “Machine Learning Leadership and Practice – End-to-End Mastery,” executive editor of The Machine Learning Times, and a frequent keynote speaker. He wrote the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at hundreds of universities, as well as The AI Playbook: Mastering the Rare Art of Machine Learning Deployment. Eric’s interdisciplinary work bridges the stubborn technology/business gap. At Columbia, he won the Distinguished Faculty award when teaching the graduate computer science courses in ML and AI. Later, he served as a business school professor at UVA Darden. Eric also publishes op-eds on analytics and social justice.

Eric has appeared on Bloomberg TV and Radio, BNN (Canada), Israel National Radio, National Geographic Breakthrough, NPR Marketplace, Radio National (Australia), and TheStreet. Eric and his books have been featured in Big Think, Businessweek, CBS MoneyWatch, Contagious Magazine, The European Business Review, Fast Company, The Financial Times, Forbes, Fortune, GQ, Harvard Business Review, The Huffington Post, The Los Angeles Times, Luckbox Magazine, MIT Sloan Management Review, The New York Review of Books, The New York Times, Newsweek, Quartz, Salon, The San Francisco Chronicle, Scientific American, The Seattle Post-Intelligencer, Trailblazers with Walter Isaacson, The Wall Street Journal, The Washington Post, and WSJ MarketWatch.
