Machine Learning Times
Who Is Responsible When Autonomous Systems Fail?

 
Originally published in CIGIonline.org, June 15, 2020

Elaine Herzberg was killed on the night of March 18, 2018, when she was struck by a self-driving Uber car in Tempe, Arizona. Herzberg was crossing the street with her bike when the vehicle, operating in its autonomous mode, failed to accurately classify her moving body as an object to be avoided. Rafaela Vasquez, the backup safety driver tasked with monitoring the self-driving car, did not see Herzberg crossing the street. Uber resumed testing its vehicles on public roads nine months after the accident and has since been cleared of all criminal wrongdoing. More than two years later, Vasquez, the safety driver of the purportedly autonomous vehicle, still faces the prospect of vehicular manslaughter charges.

As more autonomous and artificial intelligence (AI) systems operate in our world, the need to address issues of responsibility and accountability has become clear. However, if the outcome of the Uber self-driving accident is a harbinger of what lies ahead, there is cause for concern. Is it an appropriate allocation of responsibility for Rafaela Vasquez alone to be held accountable, rather than Uber, the actor that developed and deployed the technology, or the state of Arizona, which allowed the testing to be conducted in the first place?

Both dynamics, human as backup and human as overseer, co-exist within a long history of automation that consistently overestimates the capacities and capabilities of what a machine can do.

Notably, Vasquez was the “human in the loop,” whose role as backup driver was to ensure the safe functioning of a system that, while autonomous, was not necessarily accurate 100 percent of the time. Such roles, in which humans are required to “smooth over the rough edges” of automated technologies, are increasingly common. Scholars continue to document the myriad forms of human labour, from media platforms to online delivery services, required to keep intelligent systems operating “intelligently.”

To continue reading this article, click here.