Google Search AI wrongly says Obama is a Muslim. Now it turns off some results

Google promises its new artificial intelligence search tool will “do the work for you” and make finding information online faster and easier. But just days after launch, the company is already walking back some of the tool’s AI-generated answers that turned out to be flatly incorrect.

Google earlier this month introduced an AI-generated search results overview tool, which summarizes search results so users don’t have to click through multiple links to get quick answers to their questions. But the feature came under fire this week after it provided false or misleading information in response to some user queries.

For example, some users posted on X that Google’s AI summary said that former President Barack Obama is a Muslim, a long-running misconception; Obama is in fact a Christian. Another user posted that a Google AI summary said that “none of the 54 recognized countries in Africa begin with the letter ‘K'” — clearly forgetting Kenya.

Google confirmed to CNN on Friday that the AI overview for both queries had been removed for violating company policy.

“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” Google spokeswoman Colette Garcia said in a statement, adding that several other viral examples of Google AI flubs appear to have been manipulated screenshots. “We conducted extensive testing before launching this new experience, and as with other features we’ve launched in Search, we appreciate the feedback. We’re taking swift action where appropriate under our content policies.”

The bottom of every Google AI search overview acknowledges that “Generative AI is experimental.” And the company says it runs tests designed to mimic potential bad actors in an effort to prevent false or low-quality results from appearing in AI summaries.

Google’s search overview is part of the company’s larger push to incorporate its Gemini AI technology across all of its products as it tries to compete in an AI arms race with rivals like OpenAI and Meta. But this week’s debacle shows the risk that the addition of AI – which has a tendency to confidently state false information – could undermine Google’s reputation as a trusted source for finding information online.

Even on less serious searches, Google’s AI overview seems to sometimes provide incorrect or misleading information.

In one test, CNN asked Google, “how much sodium is in orange juice.” The AI overview answered that an 8-fluid-ounce serving of orange juice contains 342 milligrams of sodium, but that a serving less than half that size (3 fluid ounces) contains more than twice as much sodium (690 milligrams). (Best Maid orange juice, for sale at Walmart, lists 250 milligrams of sodium in just 1 ounce.)

CNN also searched for: “data used for google ai training.” In its response, the AI overview acknowledged that it was “unclear whether Google prevents copyrighted material from being included” in online data scraped to train its AI models, citing key concerns about how AI firms operate.

This isn’t the first time Google has had to scale back the capabilities of its AI tools over an embarrassing problem. In February, the company paused its AI photo generator’s ability to create images of people after it came under fire for producing historically inaccurate images that largely showed people of color replacing White people.

The Google Search Lab webpage allows users in areas where the AI search overview has been launched to turn the feature on and off.
