Southern Fried Science

Over 15 years of ocean science and conservation online


Phantom science – how “AI slop” is making environmental policy

Posted on March 30, 2026 by Chris Parsons
Academic life, Policy, Science

There’s a new specter haunting environmental governance, and it doesn’t rattle ghostly chains; it generates phantom science.

Recently, I was reading a government report, trying to find the scientific justifications for its environmental actions, when I ran into some citations that looked interesting. So I tried to look them up. Despite full, official-looking entries in the reference list, complete with links, none of them existed. The links typically did work, but took me to completely unrelated (though real) scientific papers. Having been in academia, I was used to scientists inserting dubious, unrelated citations of their own work to pump up their citation rates. But this was completely different.

So I tried Google Scholar, and then Google’s AI, to find papers on that topic. While there was nothing on Google Scholar, Google’s AI showed me citations and links that looked very similar to the phantom one I had found… and which also didn’t exist. After pointing out the fake, I asked for a real citation. Up came yet another phantom citation. I repeated this three more times, and each time got another phantom citation for a paper that did not exist. Going back to the government report (upon which a major project was based), it looked to be riddled with phantom citations, all providing fake support for what the project developers wanted (and “proving” there would be no environmental damage). It looked like the whole report had been written with AI.

Across academia, law, and now government, generative AI systems are quietly reshaping how reports are written. They promise speed, efficiency, and cost savings. But they also come with a well-documented flaw: they make things up. Not in the obvious, sloppy way of a student padding a bibliography five minutes before a submission deadline, but in a far more insidious fashion: by producing polished, plausible, but entirely fictional scientific references. Increasingly, those phantom citations are haunting official documents.

The rise of the phantom menace

We’ve moved well beyond hypothetical concerns. AI hallucinations (confidently presented false information) are now empirically documented across multiple domains.

  • Studies show AI systems fabricate or corrupt a significant proportion of references, with one analysis finding only ~26% of AI-generated citations fully accurate and nearly 40% outright fabricated.
  • Another study found hallucination rates in references ranging from 14% to over 90% depending on the model and context.
  • Even in elite scientific venues, dozens to hundreds of fake citations have slipped through peer review and into the published record.

This is not a fringe academic issue. It’s now systemic. What’s more, it gets worse when AI is asked to support a predetermined conclusion: models are especially likely to invent sources when prompted to back up a specific point. Sound familiar?

When cheating on a homework essay becomes a policy crisis

If this were confined to student essays, sloppy conference presentations, or papers in low-tier journals, it would be embarrassing. But it isn’t. There are now documented cases of AI-generated hallucinations appearing in government reports and key policy documents:

  • A major U.S. policy report on public health included nonexistent studies, repeated citations, and clear markers of AI-generated references.
  • A government-commissioned consulting report had to be partially refunded after it was found to contain fabricated quotes and fictitious academic sources.

These are not harmless typos. These are structural failures in evidence-based policymaking.

Because once a fabricated citation enters an official report, it gains legitimacy. It gets cited again. It enters the grey literature. It becomes “fact” by repetition.

This is how scientific understanding erodes—not with a bang, but with a bibliography.

“So tell me what you want, what you really, really want.”

The Spice Girls

Let’s be blunt: this isn’t just about technology. It’s about incentives. Government agencies are under pressure to:

  • justify predetermined policy positions;
  • produce reports quickly; and
  • do so with shrinking budgets and staff.

Generative AI is perfectly suited to this environment. Not because it finds truth, but because it produces convincing narratives on demand. Ask it for the state of the science, and you might get something reasonable. Ask it to support a conclusion, and you will almost certainly get something compliant. AI doesn’t “lie” in the human sense. It optimizes for plausibility. What is more, in the current political climate, a plausible lie is often more useful than the facts. Therein lies the danger…

“You want the truth? You can’t handle the truth!”

A Few Good Men

If this feels abstract, consider what’s happening in the legal system.

There are now hundreds of documented cases of lawyers submitting filings containing entirely fabricated case law generated by AI. Courts have issued sanctions, fines, and public reprimands.

Judges have been clear: submitting hallucinated citations is not a technical glitch; it’s professional misconduct. Now translate that standard to environmental governance. What happens when:

  • an environmental impact assessment cites nonexistent studies?
  • a fisheries management plan relies on fabricated population data? or
  • a climate risk report includes invented supporting literature?

At that point, we are no longer dealing with bad science. We are dealing with legally actionable failure.

“You shall not pass!”

Lord of the Rings

Environmental NGOs, political watchdog groups, and investigative journalists should take this both seriously and strategically. They need to get serious about hunting down AI-generated government reports and policy documents, and about blocking decisions based on them in the courts.

“The 600 series had rubber skin. We spotted them easy. But these are new… they look human.”

The Terminator

Here are some suggestions for NGOs to test whether government reports and policy documents or other scientific documents are bona fide or “body snatchers”.

1. Check the references

Take major agency reports and randomly sample citations. Verify the DOIs, the authors, and even the existence of the cited journal. You don’t need AI expertise, just patience and Google Scholar.
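Part of that spot-check can be automated. Here is a minimal sketch (the regex is a rough heuristic, and the example reference and DOI are invented for illustration): it pulls DOI-shaped strings out of a reference list, and a companion helper, if run with network access, asks doi.org whether each one is actually registered.

```python
import re
import urllib.request
import urllib.error

# Rough pattern for modern DOIs; trailing punctuation is trimmed afterwards.
DOI_RE = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(text):
    """Return DOI-shaped strings found in a block of reference text."""
    return [m.rstrip(".,;") for m in DOI_RE.findall(text)]

def doi_resolves(doi, timeout=10):
    """Ask doi.org whether a DOI is actually registered (needs network access)."""
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except urllib.error.HTTPError as err:
        return err.code != 404   # 404 means no such DOI is registered
    except urllib.error.URLError:
        return False             # network failure: inconclusive, recheck later

# Invented example reference (the DOI below is deliberately fake):
refs = "Smith, J. (2021) Ocean governance. Mar. Policy. doi: 10.9999/fake.2021.001"
print(extract_dois(refs))  # → ['10.9999/fake.2021.001']
```

A DOI that fails to resolve isn’t proof of fabrication on its own (typos happen), but a reference list where several DOIs come back unregistered is exactly the pattern described above.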

2. Look for tell-tale AI signatures

There is AI detection software, but its reliability often hinges on whether the report’s human authors actually knew their grammar: detectors are frequently triggered by consistently correct usage, on the assumption that real people don’t know the difference between (or use) an em-dash and an en-dash (both of which AI loves), or the Oxford comma. More reliably, check for repeated citation structures or identical phrasing across sections, and in particular look for references that almost (but don’t quite) exist. These are well-documented artifacts of AI-generated text.
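The “repeated citation structures” check can also be scripted. This is a minimal sketch (the reference strings and the 0.9 threshold are invented examples, not from any real report): it uses Python’s standard-library difflib to flag pairs of references that are suspiciously close to identical, a common artifact when a model hallucinates variations on a single template.

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(refs, threshold=0.9):
    """Return pairs of reference strings whose similarity ratio meets the threshold."""
    return [
        (a, b)
        for a, b in combinations(refs, 2)
        if SequenceMatcher(None, a, b).ratio() >= threshold
    ]

# Invented examples: two near-identical templates and one unrelated reference.
refs = [
    "Smith, J. (2020). Coral reef decline in the Pacific. Marine Policy 12: 1-10.",
    "Smith, J. (2020). Coral reef declines in the Pacific. Marine Policy 12: 1-10.",
    "Garcia, L. (2018). Fisheries bycatch estimation methods. ICES Journal 75: 201-215.",
]
for a, b in near_duplicates(refs):
    print("SUSPICIOUS PAIR:")
    print(" ", a)
    print(" ", b)
```

Human-compiled bibliographies repeat formatting, so a flagged pair is a prompt for manual checking, not a verdict on its own.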

3. Use the Freedom of Information Act (FOIA)

Request information on a document’s drafting processes, internal communications about report preparation and the agency’s AI usage policies. If AI was used without disclosure or verification protocols, that matters.

4. Use the law

By 2027, expect this to hit the courts in a meaningful way. Potential legal angles for environmental NGOs include:

  • Administrative Procedure Act (APA) challenges (i.e., decisions based on flawed evidence);
  • Freedom of Information violations (i.e., failure to disclose methodology); and
  • Scientific integrity policies (many agencies have them but few enforce them).

If a report underpinning a regulatory decision contains fabricated evidence, that decision becomes vulnerable.

The uncomfortable truth is out there

We are at the early stage of a credibility crisis. Right now, AI hallucinations are treated as quirks or bugs to be ironed out. But the evidence suggests they are a structural feature, not a temporary glitch. When those hallucinations enter the scientific record, the legal system, and the policy process, they stop being technical issues and become governance failures. The uncomfortable truth is that we are already making environmental decisions based, in part, on things that do not exist.

“Nobody trusts anybody now… and we’re all very tired.”

The Thing

None of this means AI has no place in science or policy. It can summarize, translate, and assist. But it cannot be treated as a source of truth, because it isn’t one. Until agencies build robust verification pipelines (and until there are consequences for failing to use them), the burden will fall on NGOs, journalists, and scientists willing to check the references and footnotes.

The next environmental lawsuit might not hinge on the presence of a threatened species or a habitat model. It might hinge on cited science that was never real in the first place.


