Google Has Terminated Its AI Feature That Displayed Community-Sourced Medical Guidance

Google has terminated a search feature that used AI to present health guidance sourced from online community discussions. The tool, “What People Suggest,” was designed to give users AI-curated access to peer health experiences. Three individuals with inside knowledge confirmed the removal before Google made an official statement.
The feature was introduced at Google’s annual “The Check Up” health event in New York by then-chief health officer Karen DeSalvo. In a blog post, DeSalvo explained that the feature addressed users’ desire to learn from others in similar health situations. The AI organized relevant discussions from online forums into themes, making community health knowledge more accessible.
Google denied that safety concerns drove the decision to remove the feature, framing it instead as a simplification of the search page. However, the only public communication the company pointed to about the change, a blog post from a Switzerland-based Google employee, made no reference to the discontinued feature. One insider put it simply: “It’s dead.”
The removal is part of a difficult period for Google’s AI health products. An investigation published earlier this year found that Google’s AI Overviews had been presenting false health information to approximately two billion monthly users. While Google removed some medical AI Overviews in response, health professionals called for more systemic changes to address the issue.
As Google plans its next health event, the discontinuation of “What People Suggest” will remain a point of reference for critics who believe the company’s approach to health AI lacks the rigor the domain demands. Genuine progress requires not only bold new products but honest acknowledgment of where previous products have failed.
