Machine Learning Times
The Great AI Myth: These 3 Misconceptions Fuel It

Originally published in Forbes, July 29, 2024

The hottest thing in tech isn’t even a thing at all. Artificial intelligence is nothing but an ill-defined buzzword searching for meaning. The term may sound like it must mean something particular, but it does not consistently identify any specific technology, cohesive idea or agreed-upon value proposition.

While some use “AI” to simply refer to machine learning, it’s generally meant to convey something more. What exactly? There are two main camps. One accepts the vagueness of the word “intelligence” in their conception of AI. The other seizes upon a goal that’s unambiguous yet quixotic: They proclaim that AI is meant to be capable of no less than everything.

Enter artificial general intelligence (AGI), software capable of any intellectual task humans can do. When a company takes it on as its stated goal, as OpenAI consistently does, AGI resolves AI’s identity crisis—but only by making a deal with the devil. Pursuing AGI means taking on as unwieldy and speculative a mission as there could be.

This is divisive indeed. Many insiders prefer more modest goals for AI as a field. But since those tamer goals defy clear definition, AI’s identity tends to revert to AGI. AGI is always lurking beneath most conversations about AI, so its feasibility is the perennial elephant in the room.

The belief that AGI will be here soon abounds, even among AI researchers. A 2023 survey showed that, in the aggregate, they believe there’s a 50% chance of achieving AGI by 2047.

But the promise of achieving AGI within mere decades is as dubious as it is unfounded. It leads to poor planning, inflates an AI bubble that threatens to incur great cost, gravely misinforms the public and misguides legislation.

With AGI positioned as the last great hope for AI, it’s no wonder thought leaders place their bets on it. But some common misconceptions also contribute to the mistaken belief that we’re headed toward AGI. Here’s why AGI is such an irresistible and yet unrealistic expectation of technology—and three widespread fallacies that fuel the AGI leap of faith.

The Most Seductive Story About Technology

“The first ultraintelligent machine is the last invention that man need ever make…”

—Irving John Good, a coworker of Alan Turing, 1965

“The aspiration to solve everything instead of something is important to friends of AGI.”

—Richard Heimann

The wish fulfillment promised by AGI is so seductive and theatrical that it’s nearly irresistible. By creating the ultimate power, we achieve the ultimate ego satisfaction as scientists and futurists. By building a system that sets its own goals and autonomously pursues them as effectively as a person, we externalize our proactive volition, transplanting it into a new best friend for humankind, one we hold in the very highest regard and with whom we can potentially empathize. By creating a new life form, we realize every last bit of the as-yet-unrealized potential of our general-purpose machines known as computers. By recreating ourselves, we gain immortality.

By creating a single solution to all problems, we transcend any measure of financial reward to gain infinite wealth. Machine learning thought leader and executive Richard Heimann calls this the single solution fallacy.

Fallacy #1: The Single Solution Fallacy. A single solution could render human problem-solving activities unnecessary and obsolete.

This overzealous narrative says that, rather than solving the world’s multitude of problems one at a time, we could solve them all in one fell swoop with the ultimate silver bullet. This means that we need not fret about global issues like climate change, political instability, poverty or health crises. Instead, once an artificial human comes into existence, it will continue to advance itself to become at least as capable a problem-solver as the human race could ever be.

This fallacy is antithetical to prudent business. It promises the luxury of never facing real problems. As Heimann writes in his book, Doing AI, proponents of AGI “feel the need to decouple problems from problem-solving.” He adds, “Well-defined problems are of little interest to the most ardent supporters of AI… what narrow AI lacks in intelligence it makes up for by being boring. Boring means serving customers with commercially viable solutions.”

The single-solution story sells for the same reason that science-fiction movie tickets sell: It’s compelling as hell. Arthur C. Clarke, the author of 2001: A Space Odyssey, made a great point: “Any sufficiently advanced technology is indistinguishable from magic.” Agreed. But that does not mean any magic that we can imagine—or include in science fiction—could eventually be achieved by technology. AGI evangelists often invoke Clarke’s point, but they’ve got the logic reversed.

Fallacy #2: The Sci-fi AI Fallacy. Since much science fiction has become true, science fiction serves as evidence for what is plausible.

My iPhone seems very “Star Trek” to me, but that’s no reason to believe that advancements will bring everything from that TV show into reality, including teleportation, time travel, faster-than-light spaceflight and AGI.

The Compelling Myth Behind AGI: “Better Means Smarter”

“Thinking that such incremental progress on narrow tasks will eventually ‘solve intelligence’ is like thinking that one can build a ladder to the moon.”

—Daniel Leufer et al., summarizing a point made by Gary Marcus

“Intelligence is not a scalar quantity. The space of problems is gigantic and… any intelligent system will only excel at a tiny subset of them.”

—Yann LeCun

The most compelling fiction operates by exaggerating the truth. The “AGI is nigh” narrative builds on ML’s real, valuable advancements. Its logic runs: since technology is getting better, it must be getting “smarter” overall; it’s progressing along a spectrum of increasing intelligence; therefore, it will eventually meet and then surpass human intelligence.

That line of thinking underlies all the AI hype, both the promises of greatness and the warnings of a robopocalypse. It buys into intelligence as a one-dimensional continuum along which human intelligence lies. And it buys into the presumption that we’re moving along that continuum toward blanket human-level equivalence.

I call this The Great AI Myth: Technological progress is advancing along a continuum of better and better intelligence to eventually surpass all human intellectual abilities, i.e., achieve superintelligence.

The media has retold this myth many times. Kelsey Piper wrote in Vox, “AI experts are increasingly afraid of what they’re creating… make it bigger, spend longer on training it, harness more data—and it does better, and better and better. No one has yet discovered the limits of this principle…” Similarly, OpenAI CEO Sam Altman believes the company’s developments such as ChatGPT are bringing us closer to AGI, possibly this decade. The company teases that its next iteration, GPT-5, could be superintelligent. Meanwhile, many can’t help but anthropomorphize such language models.

It’s a fallacy to interpret advances with ML as evidence that we are proceeding toward AGI:

Fallacy #3: The Intelligence Spectrum Fallacy. Better means smarter; improvements with ML or other advanced computer science represent progress along some sort of spectrum toward AGI. This is also known as the first-step fallacy.

I’m in good company. Other data scientists also vehemently push back on the Myth. Books have been popping up, including The AI Delusion; Evil Robots, Killer Computers and Other Myths; and L’Intelligence Artificielle N’Existe Pas. Eight months after I released an online course that first presented The Great AI Myth, computer scientist and tech entrepreneur Erik Larson published a book with almost the same name, The Myth of Artificial Intelligence, which opens with much the same reasoning: “The myth of AI is that its arrival is inevitable… that we have already embarked on a path that will lead to human-level AI and then superintelligence.”

Despite the hype spread by some powerful companies, it only makes sense that many data scientists would come to the same rebuttal: AGI’s impending birth is a story of wish fulfillment that lacks concrete evidence.


About the author
Eric Siegel is a leading consultant and former Columbia University professor who helps companies deploy machine learning. He is the founder of the long-running Machine Learning Week conference series and its new sister, Generative AI Applications Summit, the instructor of the acclaimed online course “Machine Learning Leadership and Practice – End-to-End Mastery,” executive editor of The Machine Learning Times and a frequent keynote speaker. He wrote the bestselling Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, which has been used in courses at hundreds of universities, as well as The AI Playbook: Mastering the Rare Art of Machine Learning Deployment. Eric’s interdisciplinary work bridges the stubborn technology/business gap. At Columbia, he won the Distinguished Faculty award when teaching the graduate computer science courses in ML and AI. Later, he served as a business school professor at UVA Darden. Eric also publishes op-eds on analytics and social justice. You can follow him on LinkedIn.
