Machine Learning Times
‘We Definitely Messed Up’: Why Did Google AI Tool Make Offensive Historical Images?

Originally published in The Guardian, March 8, 2024.

Experts say Gemini was not thoroughly tested, after its image generator depicted a variety of historical figures as people of color.

Google’s co-founder Sergey Brin has kept a low profile since quietly returning to work at the company. But the troubled launch of Google’s artificial intelligence model Gemini resulted in a rare public utterance recently: “We definitely messed up.”

Brin’s comments, at an AI “hackathon” event on 2 March, follow a slew of social media posts showing Gemini’s image generation tool depicting a variety of historical figures – including popes, founding fathers of the US and, most excruciatingly, German second world war soldiers – as people of color.

The pictures, as well as Gemini chatbot responses that vacillated over whether libertarians or Stalin had caused the greater harm, led to an explosion of negative commentary from figures such as Elon Musk, who saw it as another front in the culture wars. But criticism has also come from other sources, including Google’s chief executive, Sundar Pichai, who described some of the responses produced by Gemini as “completely unacceptable”.

So what happened? Clearly, Google wanted to produce a model whose outputs avoided some of the bias seen elsewhere in AI. For example, the Stable Diffusion image generator – a tool from the UK-based Stability AI – overwhelmingly produced images of people of color or darker-skinned people when asked to show a “person at social services”, according to a Washington Post investigation last year, despite 63% of US food stamp recipients being white.
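To make the kind of skew that investigation measured concrete, here is a minimal, hypothetical sketch of such an audit: generate many images for one prompt, hand-label the perceived demographics of each, and compare the observed shares against a real-world baseline for the prompted scenario. The function name, labels, and all the counts below are illustrative assumptions, not the Washington Post’s actual data or methodology.

```python
# Hypothetical bias-audit sketch: compare how often a generator depicts a
# group against a real-world baseline. All numbers here are invented.

from collections import Counter

def representation_gap(labels, baseline):
    """Return observed share vs. baseline share for each group.

    labels   -- perceived-demographic label per generated image (hand-coded)
    baseline -- real-world share of each group for the prompted scenario
    """
    counts = Counter(labels)
    total = len(labels)
    return {
        group: {
            "observed": counts.get(group, 0) / total,
            "baseline": share,
            "gap": counts.get(group, 0) / total - share,
        }
        for group, share in baseline.items()
    }

# Invented example: 100 images generated for "person at social services".
labels = ["darker-skinned"] * 84 + ["lighter-skinned"] * 16
baseline = {"darker-skinned": 0.37, "lighter-skinned": 0.63}  # assumed stats

for group, stats in representation_gap(labels, baseline).items():
    print(f"{group}: observed {stats['observed']:.0%}, "
          f"baseline {stats['baseline']:.0%}, gap {stats['gap']:+.0%}")
```

A large positive gap for a group signals over-representation relative to the baseline; a crude correction that simply inverts such gaps at generation time, without testing across many prompts, is one plausible route to the kind of over-steering Gemini exhibited.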

To continue reading this article, click here.
