Wall of Precedent

Documented cases — not anecdotes

These are real, publicly reported cases of human writers falsely accused of using AI. Every case is sourced from credible news outlets or peer-reviewed research. Nothing here is fabricated, and no case is drawn from anonymous or unverifiable sources.

Systematic findings

Individual cases are symptoms. These are the underlying numbers — from independent evaluations and peer-reviewed studies.

50%+

Misclassification in journalistic test

The Washington Post tested Turnitin's AI detector and found it misclassified over 50% of the samples tested. Note: this was a small-sample journalistic test, not a peer-reviewed study. Turnitin's own published claim is a false positive rate of under 1%.

Washington Post, June 2023

6.8%

Average false positive rate across detectors

A peer-reviewed study testing six AI detectors against known human-written text found an average false positive rate of 6.8% — with individual detectors ranging from 1% to 32%. Formal, edited academic prose showed the highest risk.

Weber-Wulff et al., 2023

3–4×

Higher risk for non-native writers

Peer-reviewed research found non-native English speakers are flagged at 3–4 times the rate of native speakers, even when all writing is human-authored.

Liang et al., 2023 (arXiv / Patterns)

32%

Worst-case false positive rate

The worst-performing AI detector tested in a systematic peer-reviewed evaluation flagged 32% of human-written academic essays as AI-generated.

Weber-Wulff et al., 2023

Documented cases

8 cases. All sourced. Outcomes reported where known.

Texas A&M — Entire class accused using ChatGPT as detector

Texas A&M University · 2023

Who

Multiple students, agriculture course

What happened

Instructor Jared Mumm emailed the entire class claiming they had all used AI on their essays. His detection method: he pasted student writing into ChatGPT and asked whether it had written the text. ChatGPT is not a detection tool and is known to produce unreliable claims about authorship.

Evidence / defence

Students disputed the accusations directly. The university assigned temporary "X" (incomplete) grades pending investigation.

Outcome

All but one student were exonerated. The one sanctioned student had admitted to using ChatGPT. No students were prevented from graduating.

Why this matters

Widely cited as an early example of unvalidated AI detection methods causing mass false accusations. Reported in the Washington Post, NBC News, Rolling Stone, and others.

UC Davis — History major accused via GPTZero, exonerated

UC Davis · 2023

Who

William Quarterman, senior history major

What happened

A professor ran Quarterman's exam answers through GPTZero, which returned a positive result. The professor gave him a failing grade on that basis.

Evidence / defence

Quarterman contested the accusation. University officials conducted an investigation.

Outcome

The case was dropped approximately one month after the accusation. Quarterman was exonerated.

Why this matters

One of the first widely reported cases of GPTZero producing a false positive that led to a failing grade. Reported in Rolling Stone.

UC Davis — Student flagged by Turnitin, cleared but investigation remained on record

UC Davis · 2023

Who

Louise Stivers, UC Davis student

What happened

Hours after uploading a brief summarising a Supreme Court case, Stivers received an email from a professor stating Turnitin had flagged a portion of her work as AI-written.

Evidence / defence

Stivers contested the finding. UC Davis administration investigated.

Outcome

Cleared by UC Davis. However, the investigation remained on her academic record, which she stated she would need to self-report to law schools and state bar associations — a lasting consequence despite exoneration.

Why this matters

Illustrates that exoneration does not always prevent ongoing harm. The record of investigation persisted even after the accusation was found unsubstantiated.

University of North Georgia — Student on academic probation for using Grammarly

University of North Georgia · 2023

Who

Marley Stevens, student

What happened

Stevens was falsely accused of AI use and placed on academic probation after using Grammarly — a grammar and spelling checker — to fix spelling and punctuation. The features she used correct existing text; they do not generate it.

Evidence / defence

Stevens explained she had used only Grammarly. The case was documented in Bloomberg's October 2024 investigation.

Outcome

Academic probation was imposed. Specific resolution not reported.

Why this matters

Demonstrates that AI detectors cannot reliably distinguish AI-generated text from text edited with basic grammar tools. Documented in Bloomberg's major investigation of AI detection failures.

Central Methodist University — Student accused while seven months pregnant

Central Methodist University · 2023

Who

Moira Olmsted, online student studying to become a teacher; seven months pregnant at the time

What happened

Olmsted submitted a reading summary assignment and received a grade of zero. Her professor told her an AI detection tool had flagged her work as likely AI-generated.

Evidence / defence

Olmsted disputed the accusation. Her case was featured in Bloomberg's investigation.

Outcome

Resolution not fully reported in available sources.

Why this matters

Featured as a central case in Bloomberg's October 2024 investigation concluding that AI detection systems "regularly falsely accuse students of cheating, resulting in anxiety, paranoia and time-wasting."

Liberty University — Student received failing grades on three assignments

Liberty University · 2024

Who

Brittany Carr, student

What happened

Carr received failing grades on three separate assignments after they were flagged by an AI detector.

Evidence / defence

She showed her revision history and demonstrated that one assignment had been drafted by hand in a notebook before being typed.

Outcome

Resolution not fully specified in available reports.

Why this matters

Illustrates how revision history and physical drafts can constitute strong evidence of authentic authorship. Documented in Bloomberg's investigation.

Purdue — Autistic professor accused of being an AI bot

Purdue University · 2023

Who

Rua Williams (they/them), professor of user experience and design

What happened

Williams received an email from a fellow researcher accusing them of being an AI bot. The researcher stated their email "lacked warmth." Williams responded: "It's not an AI. I'm just autistic."

Evidence / defence

Williams publicly explained that their direct, precise writing style is a characteristic of being autistic, not a marker of AI generation.

Outcome

No formal disciplinary action. The incident went viral in July 2023.

Why this matters

Widely cited as evidence that autistic and neurodivergent people's natural writing patterns are mistakenly flagged as AI-generated. Williams explicitly noted that neurodivergent people and non-native English speakers face disproportionate risk from AI detection — a concern backed by subsequent peer-reviewed research.

Yale School of Management — Student sues Yale over GPTZero-based suspension

Yale University (School of Management) · 2025

Who

"John Doe" (pseudonym), Executive MBA student

What happened

A student in Yale's Executive MBA program was accused of violating Yale SOM's Honor Code by improperly using AI on a final exam. Yale used GPTZero to flag the exam. Following disciplinary proceedings, the student was suspended.

Evidence / defence

The student filed a lawsuit alleging that GPTZero is unreliable and exhibits implicit bias. He also alleged that a university official made multiple attempts to coerce a false confession, and that he was discriminated against on the basis of national origin in violation of the Civil Rights Act.

Outcome

Lawsuit filed February 2025. Outcome not yet reported as of March 2026.

Why this matters

Described in reporting as potentially the first lawsuit of its kind — a student suing a university specifically over an AI detection tool's unreliability. Illustrates that the false positive problem has now entered the legal system.

About this page

This page documents publicly reported cases only — from credible news outlets (Washington Post, Bloomberg, NPR, Rolling Stone, Yale Daily News, Purdue Exponent) or peer-reviewed academic research. No case is included without a verifiable source.

Outcomes are reported as described in source articles. Where outcomes are unknown or unresolved, that is stated explicitly. Nothing is fabricated or inferred beyond what sources report.

Know of a publicly reported case not listed here? Send us the source.

Prevent this from happening to you

Scripli records your writing session and issues a certificate before you submit. A verified writing record is what these students didn't have.