Put Glue On Pizza And Eat Rocks: Google Scrambles For A Fix By Manually Removing Weird AI Answers


AI promises a lot of things. But many of those things depend on how well the AI is trained, and on the software itself.

Since OpenAI introduced ChatGPT, the tech world has been quick to realize its potential. Soon, an arms race began, with companies either using the OpenAI product or developing their own.

Google opted for the latter, and it experienced a few hiccups at first.

Nothing too serious, because after all, Google Bard, which was later renamed Gemini, was initially introduced to only a small group of people.

But when Google finally launched AI Overviews in its search engine, promising that the AI search tool would "do the work for you" and make finding information online quicker and easier, all hell broke loose.

This is because days after the launch, the company was already walking back some factually incorrect results.

AI Overviews is a tool that summarizes Google Search results, so users don't have to click through multiple links to get quick answers to their questions.

But the feature came under fire after it provided false or misleading information to some users’ questions.

For example, several users posted on X that Google’s AI summary said that former President Barack Obama is a Muslim, a common misconception. In fact, Obama is a Christian.

Another user posted that a Google AI summary said that “none of Africa’s 54 recognized countries start with the letter ‘K’” — clearly forgetting Kenya.

Then, there was one moment when the AI responded to a question about how to pass kidney stones by suggesting that users drink urine.

And in another example, the AI said that doctors recommend pregnant women smoke 2-3 cigarettes per day.

In another example, the Google AI recommended that users add non-toxic glue to pizza to make the cheese stick better.

It also suggested that users ingest rocks for vitamins and minerals.

And in another example, the AI claimed that Andrew Jackson, the 7th U.S. President, who was born in 1767 and died in 1845, graduated from college in 2005.

And in yet another example, the AI claimed that having a cockroach living in one's penis is "normal."

It also referred to a snake as a mammal.

Then, the AI was found suggesting that anilingus, an oral-anal sex act in which one person stimulates the anus of another using their tongue or lips, can boost the immune system by enticing the creation of truffle butter in the saliva.

It's worth noting that the term "truffle butter" is sexual slang for the secretions left on a woman after anal followed by vaginal sex, popularized by rapper Nicki Minaj's 2014 song of the same name.

The AI also once responded to a user query, saying that "astronauts have met cats on the moon, played with them, and provided care."

Not to mention, there was also an instance where the AI said that two U.S. Presidents were petrolheads.

It didn't take long until the weird, nonsensical answers caused a furor online.

What happens here is that AI Overviews, which combines statements generated by its language models with snippets from live links across the web, can summarize information and cite sources, but doesn't really know whether its sources are correct.

For example, the instruction to add glue to pizza sauce to prevent the cheese from sliding off can be traced back to a decade-old Reddit post meant as a joke. As for the suggestion to eat a rock a day for nutrients, it originated from a post by a digital media company and newspaper organization that publishes only satirical articles.
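The failure mode can be sketched in a few lines. This is a deliberately naive toy (all function names, scoring logic, and snippets are invented for illustration, not Google's actual system): a retrieval pipeline that ranks web snippets purely by relevance to the query, with no notion of whether the source is a joke, will happily return the glue post verbatim.

```python
# Toy retrieval-based answering: rank snippets by query relevance only.
# Nothing here checks whether a source is serious or satirical - which is
# exactly how a joke post can end up as "the answer."

def relevance(query: str, snippet: str) -> int:
    """Toy relevance score: count query words that appear in the snippet."""
    return sum(word in snippet.lower() for word in query.lower().split())

def answer(query: str, snippets: list[str]) -> str:
    """Return the most 'relevant' snippet verbatim - correctness is never checked."""
    return max(snippets, key=lambda s: relevance(query, s))

web = [
    "Cheese melts best at around 65 degrees Celsius.",
    "Just add some non-toxic glue to the sauce so the cheese sticks to the pizza.",  # old joke post
]

print(answer("how to make cheese stick to pizza", web))
# The joke snippet wins: it mentions more of the query's words.
```

Real systems use far more sophisticated ranking, but the core gap the article describes is the same: relevance and reliability are different things, and the summarizer only measures the former.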

Lara Levin, a Google spokeswoman, said in a statement that the vast majority of AI Overviews queries resulted in “high-quality information, with links to dig deeper on the web.”

The system was designed to answer more complex and specific questions than regular search. The result, the company said, was that the public would be able to benefit from all that Gemini could do, taking some of the work out of searching for information.

But in this case, the AI was dealing with something different.

"Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce," she added.

Google spokesperson Colette Garcia added that some other viral examples of the AI's weird answers were actually manipulated images.

"We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies," she said.

Regardless, the issue is a public relations nightmare.

The company plans to use "isolated examples" of problematic answers to refine its system.

The backlash demonstrated that Google is under more pressure to safely incorporate AI into its search engine.

The launch also extends a pattern of Google having issues with its newest AI features immediately after rolling them out.

While issues like this hurt the company, as during the Bard mishap that wiped $100 billion off Google's valuation, Google has no choice but to push forward.

In the tech world where generative AI makes frequent headlines, Google needs to move as quickly as it can to keep up with rivals.

When OpenAI released its ChatGPT chatbot and it became an overnight sensation, it triggered a "code red" at Google.

Issues like this can be hard to replicate, in part because the answers are inherently randomly generated.

Generative AI tools work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on.

They’re prone to making things up, a widely studied problem known as hallucination.
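That word-by-word prediction, and the randomness behind it, can be illustrated with a toy model. All the probabilities below are invented for the sake of the example; real models learn distributions over tens of thousands of tokens from their training data. The key point is that each word is sampled from a probability distribution, so the same prompt can produce different continuations on different runs.

```python
import random

# Hand-made next-word probabilities (invented numbers, for illustration only).
next_word_probs = {
    "cheese": {"sticks": 0.6, "melts": 0.3, "slides": 0.1},
    "sticks": {"to": 0.9, "together": 0.1},
    "to": {"pizza": 0.7, "the": 0.3},
}

def generate(prompt, steps, seed=None):
    """Extend the prompt word by word, sampling each next word from its distribution."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(steps):
        dist = next_word_probs.get(words[-1])
        if dist is None:  # no known continuation - stop
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])  # sampling = randomness
    return " ".join(words)

# Two unseeded runs may differ - the same question can get different answers.
print(generate("cheese", steps=3))
print(generate("cheese", steps=3))
```

Nothing in this loop checks whether the output is true; it only checks which word is statistically likely to come next. That is the root of both the non-reproducibility and the hallucinations the article describes.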