Originally published on Wired.com, March 15, 2020.
At its core, “artificial intelligence” is a perilous belief that fails to recognize the agency of humans.
A leading anxiety in both the technology and foreign policy worlds today is China’s purported edge in the artificial intelligence race. The usual narrative goes like this: Without the constraints on data collection that liberal democracies impose and with the capacity to centrally direct greater resource allocation, the Chinese will outstrip the West. AI is hungry for more and more data, but the West insists on privacy. This is a luxury we cannot afford, it is said, as whichever world power achieves superhuman intelligence via AI first is likely to become dominant.
If you accept this narrative, the logic of the Chinese advantage is powerful. What if it’s wrong? Perhaps the West’s vulnerability stems not from our ideas about privacy, but from the idea of AI itself.
After all, the term “artificial intelligence” doesn’t delineate specific technological advances. A term like “nanotechnology” classifies technologies by referencing an objective measure of scale, while AI only references a subjective measure: tasks that we classify as intelligent. For instance, the adornment and “deepfake” transformation of the human face, now common on social media platforms like Snapchat and Instagram, were introduced in a startup sold to Google by one of the authors; such capabilities were called image processing 15 years ago, but are routinely termed AI today. The reason is, in part, marketing: software has lately benefited from an air of magic when it is called AI. If “AI” is more than marketing, then it might be best understood as one of a number of competing philosophies that can direct our thinking about the nature and use of computation.
A clear alternative to “AI” is to focus on the people present in the system. If a program is able to distinguish cats from dogs, don’t talk about how a machine is learning to see. Instead, talk about how people contributed examples in order to define the visual qualities distinguishing “cats” from “dogs” in a rigorous way for the first time. There’s always a second way to conceive of any situation that is purported to involve AI. This matters, because the AI way of thinking can distract from the responsibility of humans.
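To make the reframing concrete, here is a minimal, purely illustrative sketch (the feature values, labels, and function names are invented for illustration, not drawn from the article): a toy nearest-neighbor “cat vs. dog” classifier in Python whose only source of knowledge is the set of examples that people have labeled.

```python
# A toy nearest-neighbor classifier. Every bit of its "knowledge" about cats
# and dogs comes from the human-labeled examples below; nothing else.

from math import dist

# Hypothetical feature vectors (e.g., ear pointiness, snout length), each
# paired with a label that a person supplied.
human_labeled_examples = [
    ((0.9, 0.2), "cat"),
    ((0.8, 0.3), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.3, 0.8), "dog"),
]

def classify(features):
    """Return the label of the closest human-labeled example."""
    _, label = min(human_labeled_examples,
                   key=lambda example: dist(example[0], features))
    return label

print(classify((0.85, 0.25)))  # -> "cat", because people labeled nearby examples "cat"
```

Described one way, the program “learns to see”; described the other way, it simply echoes the judgments people encoded in its examples.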