Machine Learning Times
Who Is Responsible When Autonomous Systems Fail?

Originally published in CIGIonline.org, June 15, 2020

Elaine Herzberg was killed on the night of March 18, 2018, when she was struck by a self-driving Uber car in Tempe, Arizona. Herzberg was crossing the street with her bike when the vehicle, operating in autonomous mode, failed to classify her moving body as an object to be avoided. Rafaela Vasquez, the backup safety driver tasked with monitoring the self-driving car, did not see Herzberg crossing the street. Uber resumed testing its vehicles on public roads nine months after the accident and has since been cleared of all criminal wrongdoing. More than two years later, Vasquez, the safety driver of the purportedly autonomous vehicle, continues to face the prospect of vehicular manslaughter charges.

As more autonomous and artificial intelligence (AI) systems operate in our world, the need to address issues of responsibility and accountability has become clear. However, if the outcome of the Uber self-driving accident is a harbinger of what lies ahead, there is cause for concern. Is it an appropriate allocation of responsibility to hold Rafaela Vasquez alone accountable, rather than Uber, the actor that developed and deployed the technology, or the state of Arizona, which allowed the testing to be conducted in the first place?

Both dynamics, human as backup and human as overseer, co-exist within a long history of automation that consistently overestimates the capacities and capabilities of what a machine can do.

Notably, Vasquez was the “human in the loop,” whose role as backup driver was to ensure the safe functioning of a system that, while autonomous, was not necessarily accurate 100 percent of the time. Such roles are increasingly common: humans are required to “smooth over the rough edges” of automated technologies. Scholars continue to document the myriad forms of human labour, from media platforms to online delivery services, that are required to keep intelligent systems operating “intelligently.”

To continue reading this article click here.