When a Google engineer recently declared Google’s AI chatbot a person, pandemonium ensued. The chatbot, LaMDA, is a large language model (LLM), a system designed to predict the likely next words for whatever lines of text it is given. Because many conversations are somewhat predictable, these systems can infer how to keep a conversation going productively. LaMDA did this so impressively that the engineer, Blake Lemoine, began to wonder whether there was a ghost in the machine.
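For readers who want a concrete picture of what “predicting the likely next words” means, the toy Python sketch below counts which word tends to follow which in a tiny text sample and then guesses the most frequent follower. It is a vastly simplified stand-in, not LaMDA’s actual architecture, and the names in it (sample_text, predict_next) are invented purely for illustration.

```python
# A deliberately toy sketch of next-word prediction: count which word follows
# which in a small text sample, then "predict" the most frequent follower.
# This is nothing like LaMDA's neural network, but it shows the basic task.
from collections import Counter, defaultdict

sample_text = "how are you today how are you feeling how are you doing well"

# Count, for each word, how often each other word immediately follows it.
next_word_counts = defaultdict(Counter)
words = sample_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the sample, if any."""
    followers = next_word_counts[word]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("how"))  # "are" -- the only word ever seen after "how"
print(predict_next("are"))  # "you"
```

A real LLM performs the same basic job, but with a neural network trained on vast amounts of text rather than a handful of word counts, which is what lets it produce fluent, context-sensitive continuations.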
Reactions to Lemoine’s story ran the gamut: some people scoffed at the mere idea that a machine could ever be a person. Others suggested that this LLM isn’t a person, but the next one perhaps might be. Still others pointed out that deceiving humans isn’t very challenging; we see saints in toast, after all.
But the diversity of responses highlights a deeper problem: as these LLMs become more common and powerful, there seems to be less and less agreement over how we should understand them. These systems have bested many “common sense” linguistic reasoning benchmarks over the years, many of which promised to be conquerable only by a machine that “is thinking in the full-bodied sense we usually reserve for people.” Yet these systems rarely seem to display the promised common sense once they defeat a benchmark, and they remain prone to blatant nonsense, non sequiturs and dangerous advice. This leads to a troubling question: how can these systems be so smart, yet also seem so limited?