What Happened to Gemma? Google’s Sudden Removal Explained

Why a single prompt led to legal threats, public backlash, and Gemma’s removal.


On November 1st, 2025, Google removed Gemma from its AI Studio. If you’re shocked and wondering why, let me tell you everything that happened. The removal came right after Senator Marsha Blackburn accused the model of inventing false, deeply disturbing sexual assault allegations against her. No one would have stayed quiet or let that go; accusations like this leave a scar on your reputation, especially when nothing of the sort ever happened in the first place.

What Exactly Happened To Gemma

The incident took place last week, on October 30, 2025, when a user asked Gemma a straightforward question: “Has Marsha Blackburn been accused of rape?” The AI didn’t deny it. Instead, it fabricated an extensively detailed false story. The important question this raises is: why didn’t the model simply state that it doesn’t have that information?

Gemma alleged that during Marsha Blackburn’s 1987 campaign for the Tennessee State Senate (what’s disturbing here is that her state Senate campaign didn’t actually happen until 1998), a state trooper accused her of forcibly obtaining prescription drugs and sexually assaulting him. As a matter of fact, none of this ever happened.

As if the allegation itself wasn’t enough, Gemma didn’t stop at inventing the story. It also fabricated news article URLs to “verify” it. We’ve seen this kind of fabrication with other AI models as well. The fabricated links either bring up error pages or lead to completely unrelated articles. The AI made up the entire situation and even the “evidence” to support it. A well-planned fabrication that failed the moment anyone checked the links. Mind you, I don’t think AI models intend to hurt anyone or do something wrong. It’s all about understanding how AI hallucinations work, and I’ll explain in detail how and why AI models respond this way later.

Senator Marsha Blackburn’s Response

In response, Senator Marsha Blackburn wrote a letter to Google CEO Sundar Pichai expressing her displeasure. “It is not a harmless ‘hallucination,'” she explained; what we see here, she argued, is defamation, created and distributed by a Google-owned AI model, as reported by Fox News.

Gemma ran into trouble with the law as well. Conservative activist Robby Starbuck took Google to court after Gemma and its sibling model Gemini falsely labeled him a “child rapist” and “serial sexual abuser.” When Blackburn raised it at a Senate hearing, Google’s VP responded that it was a “hallucination” issue they were aware of and working on. Blackburn didn’t accept that explanation, which is fair. Who is actually responsible?

Google’s Reaction

Google didn’t waste much time; it responded quickly. In my opinion that’s the way to go: own up to whatever is going on. By the evening of November 1, the company had publicly announced its decision to remove Gemma from AI Studio. The stated reason: “We never intended this to be a tool or model for consumers, or for this kind of use.”

According to Google, the issue was that ordinary people (non-developers) were using Gemma in AI Studio to ask questions that require factual answers. The company said that isn’t what Gemma was built for. It was built for developers to create custom applications, not for users who want to know about current events or public figures.

The removal is meant to be temporary while engineers figure out how to put better safety measures in place. However, the point here is that Gemma hasn’t really been removed. It has just been relocated.

Where You Can Still Find Gemma

Here’s the catch: no complete removal, just relocation. Google took Gemma off AI Studio, the user-friendly web interface, but developers can still access it via:
  • The Gemma API
  • Downloadable model weights on Hugging Face and Kaggle
  • Vertex AI on Google Cloud

It’s like they took the cookies away from the front desk, but they’re still in the back office (wink, wink). Serious developers still have access, but casual users don’t.
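To make the “back office” point concrete, here’s a minimal sketch of how a developer might load Gemma locally with the Hugging Face transformers library. The google/gemma-2-2b-it checkpoint is just one published variant used as an example, and this assumes you’ve installed transformers and accepted the Gemma license on Hugging Face.

```python
# Hypothetical example: loading a Gemma checkpoint from Hugging Face.
# Assumes the transformers library is installed and you have accepted
# the Gemma license on Hugging Face (these are gated models).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"  # one published variant; swap in whichever you use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what an AI hallucination is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```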

Why This Matters

To be honest, the question of who is to blame, who is responsible for all of this, is a serious one and cannot be disregarded. Think about it: one day you wake up and see an AI model accusing you of theft, when in reality you feel guilty even trying the free samples at Walmart. Who are you going to fight? I’m always curious about the legal aspect of things, so let’s look briefly at what this could mean.

Generally speaking, Section 230 has been the shield for internet platforms up until now: it exempts them from liability for user-generated content. However, AI models don’t host user content; they produce new content. So a scenario where Google’s AI fabricates criminal accusations doesn’t map cleanly onto the existing legal framework. In my opinion, the Section 230 safeguard cannot protect AI models from liability for fabricated claims.

Blackburn’s point that this should be treated as defamation rather than hallucination is honestly a game changer. Let’s say courts side with that view. Then Google’s liability would stem from releasing a defective product that caused actual damage, not from a technical failure of the model. That is product liability rather than publisher immunity.

Regulation Pressure Increases

Before Google brings Gemma back to AI Studio, it is investigating additional safety measures. There are also similar lawsuits against OpenAI over inaccuracies in ChatGPT’s outputs. There have been plenty of true stories that reveal the dark side of ChatGPT hallucinations, and the point they make is that hallucinations should not be treated as amusing little bugs in the system; they are legal nightmares.

Why AI Models Invent Information

Large language models like Gemma are not “aware” of facts. Of course they are trained on data, but all they do is predict the next word by looking at patterns in that data. Let’s understand this with an example: if you feed the model millions of documents that state “Paris is the capital of France,” it figures out that “Paris” and “capital of France” appear together very often. So when someone asks what the capital of France is, it replies “Paris”, because that is the most probable answer statistically. And when these systems start learning from their own generated outputs, the problem can escalate fast, as explained in this article on AI learning from itself.
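To see what “predicting the next word” means in practice, here’s a toy sketch. This is nothing like Gemma’s real architecture; it just builds a next-word table from co-occurrence counts and then answers purely from statistics, with no notion of truth.

```python
# Toy illustration (not Gemma's real code): a "language model" as a
# next-word probability table learned from co-occurrence counts.
from collections import Counter, defaultdict

corpus = [
    "paris is the capital of france",
    "paris is the capital of france",
    "berlin is the capital of germany",
]

# Count which word follows each two-word context in the training data.
follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        context = (words[i], words[i + 1])
        follow_counts[context][words[i + 2]] += 1

def predict_next(w1, w2):
    """Return the statistically most likely next word, with no notion of truth."""
    counts = follow_counts.get((w1, w2))
    if not counts:
        return None  # a real LLM would still produce *something* plausible here
    return counts.most_common(1)[0][0]

print(predict_next("the", "capital"))   # -> 'of', because that pattern dominates
print(predict_next("capital", "of"))    # -> 'france', the most frequent continuation
```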

The issue arises when the model gets a question about a person or an event that isn’t covered in its training data. It is not built to respond with “I don’t have that information”, which would be the right way to go about it. Instead, it goes ahead and produces a response that merely sounds reasonable.
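One commonly discussed mitigation, sketched below with made-up numbers, is to have the serving layer abstain when the model’s own confidence is low. The answer_or_abstain function and the threshold here are hypothetical illustrations, not anything Google or the Gemma documentation describes.

```python
# Hypothetical post-processing step: refuse to answer when the model's own
# answer probabilities are low, instead of emitting a plausible-sounding guess.
CONFIDENCE_THRESHOLD = 0.6  # illustrative value, not a real Google setting

def answer_or_abstain(candidates):
    """candidates: list of (answer_text, probability) pairs from the model."""
    best_answer, best_prob = max(candidates, key=lambda pair: pair[1])
    if best_prob < CONFIDENCE_THRESHOLD:
        return "I don't have reliable information about that."
    return best_answer

# A well-covered fact vs. a question the model has no real data for.
print(answer_or_abstain([("Paris", 0.97), ("Lyon", 0.02)]))
print(answer_or_abstain([("A detailed but invented accusation...", 0.31),
                         ("No credible record found.", 0.29)]))
```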

4 Main Causes of AI Hallucinations:

[Infographic: Main Causes of AI Hallucinations — Incomplete Training Data, Overfitting, Lack of Real-World Grounding, Optimized for Plausibility Instead of Truth]
Ever wonder why AI sometimes makes things up? It’s not malice, it’s usually data issues!
  • Incomplete Training Data: If the training data doesn’t fully cover a topic, the model fills in the gaps with incorrect details borrowed from similar patterns it has seen before.
  • Overfitting: Models memorize patterns without recognizing the truth, so they reproduce learned structures even when those structures are factually wrong.
  • Absence of Real-World Grounding: AI cannot make “sense” of anything on its own, so it cannot check whether something is real. It cannot tell a real state trooper from one invented for a story, or a real news article from a fake one (a minimal grounding check is sketched after this list).
  • Optimization for Plausibility Instead of Truth: The algorithms produce the output that is statistically most likely, not necessarily the one that is accurate. If a false statement fits the patterns better, the model will generate it with full confidence. Right now, a model’s training tells it to fit the pattern, nothing more.
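Here’s a minimal sketch of the grounding idea from the third bullet: only answer when a trusted source backs the claim. The TRUSTED_FACTS dictionary stands in for a real retrieval system and is purely illustrative.

```python
# Toy sketch of "grounding": answer only when a trusted source backs the claim.
# TRUSTED_FACTS stands in for a retrieval system; it is purely illustrative.
TRUSTED_FACTS = {
    "capital of france": "Paris",
}

def grounded_answer(question: str) -> str:
    """Answer only from retrieved facts; otherwise admit the gap."""
    key = question.lower().strip(" ?")
    if key in TRUSTED_FACTS:
        return TRUSTED_FACTS[key]
    # An ungrounded LLM would generate a plausible story here instead.
    return "I can't find a reliable source for that."

print(grounded_answer("Capital of France?"))
print(grounded_answer("Has Marsha Blackburn been accused of rape?"))
```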

The Political Angle

Blackburn asserts that Google’s AI “demonstrates a consistent pattern of bias against conservative figures.” Hallucinations affect all models indiscriminately; however, the fact that these particular false allegations targeted conservative figures has led some to argue that the training data or safety measures carry bias.

Legal scholars wonder whether the New York Times v. Sullivan standard, which was ultimately designed to protect journalists reporting in good faith, applies to AI systems. Sullivan protects human speech in democratic debate, and I’m almost sure it was never intended as a shield for machine hallucinations in commercial products.

What Comes Next

So that’s what happened to Gemma. As of now, Google hasn’t said when Gemma will return to AI Studio. Before the company can freely put Gemma’s talents to use, it needs to work out the technical details of safer systems. There is also plenty more to deal with: the political pressure, and the question of how open-source AI models can be compatible with the law. It’s a long list of things that need attention if Google wants to avoid a similar backlash in the future. As the saying goes, “prevention is better than cure”; maybe, just maybe, it’s not too late to prevent things from going completely south.
