The premise is flawed from the start
AI content detectors work by analysing the statistical characteristics of finished text. They look for patterns — predictability, perplexity, burstiness — that tend to differ between human-written and AI-generated content.
The problem is that these are probabilistic signals, not proof. A detector can tell you that a piece of text resembles AI output. It cannot tell you whether a human was present and writing.
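To make the idea concrete, here is a toy version of one such signal. "Burstiness" is often operationalised as variation in sentence length: human prose tends to mix short and long sentences, while generated text is often more uniform. This sketch is illustrative only; real detectors also use model-based measures such as perplexity, which this toy omits.

```python
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    A crude stand-in for the 'burstiness' signal detectors use.
    Real tools combine this with perplexity scores from a
    language model; this toy deliberately does not.
    """
    # Naive sentence split: treat ., ?, ! as sentence boundaries.
    for mark in ("?", "!"):
        text = text.replace(mark, ".")
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog sat down. The bird sat down."
varied = "It rained. Nobody expected the storm to last three days, flooding every road into town."
# Uniform sentence lengths score lower than varied ones.
print(burstiness(uniform) < burstiness(varied))  # True
```

Note what the score measures: a property of the text, not of the writer. A careful human can produce a low-burstiness passage, and a model can produce a high-burstiness one.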
That distinction matters more than it might seem.
False positives are well documented
Studies and real-world cases have repeatedly shown that AI detectors flag innocent human writers. The groups most affected include:
- Non-native English speakers, whose writing tends to be more structured and predictable, which detectors often interpret as AI-like.
- Students who revise carefully, since multiple editing passes can produce polished prose that triggers the same patterns.
- Writers in technical or formal registers, where plain, direct language is a stylistic requirement, not a sign of AI.
Turnitin, GPTZero, Copyleaks, and others have all been documented making incorrect calls on human-written text. The developers of these tools acknowledge the issue. The false-positive rate is a known limitation, not a fixable bug.
Why output-based detection cannot be fixed
The fundamental issue is not the quality of the detector. It is the approach itself.
Detecting AI from output requires the detector to make a judgement based only on the final text. But the final text is a poor proxy for the process that produced it. Human writing and AI output can be remarkably similar in surface form, especially as AI models improve.
As models become more capable, their output grows statistically closer to human writing. The signal that output-based detectors rely on is shrinking, and it will keep shrinking.
Detectors built on this approach are chasing a moving target.
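The trade-off can be shown with a small simulation. Suppose detector scores for human and AI text each follow a bell curve, with the two curves overlapping (the means and spreads below are invented for illustration, not measured from any real detector). Whatever threshold you pick, you trade missed AI for falsely accused humans; no threshold drives both error rates to zero.

```python
import random

random.seed(0)

# Hypothetical detector scores (higher = "more AI-like").
# These distributions are illustrative; as models improve,
# the two curves drift closer together and overlap more.
human_scores = [random.gauss(0.40, 0.12) for _ in range(10_000)]
ai_scores = [random.gauss(0.60, 0.12) for _ in range(10_000)]

threshold = 0.55  # flag anything above this as AI

false_positives = sum(s > threshold for s in human_scores) / len(human_scores)
missed_ai = sum(s <= threshold for s in ai_scores) / len(ai_scores)

# Because the distributions overlap, both rates stay above zero:
# lowering the threshold catches more AI but accuses more humans.
print(f"human writers flagged: {false_positives:.1%}")
print(f"AI text missed:        {missed_ai:.1%}")
```

Tightening the threshold only shifts where the errors land. The overlap itself, not the threshold, is the problem.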
What actually works
The reliable answer to “did a human write this?” is not to examine the output after the fact. It is to record the process as it happens.
This is the approach Scripli takes. Rather than attempting to infer authorship from finished text, Scripli records the writing session and issues a certificate linked to that session. The evidence exists before any question is raised.
A certificate from Scripli does not depend on statistical patterns in the text. It does not change as AI models improve. It does not produce false positives.
It is a record of what took place, not a guess about what probably happened.
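The general shape of process-based evidence can be sketched in a few lines. Log writing events as they happen, then fingerprint the log so any later alteration is detectable. Everything below is hypothetical, not Scripli's actual format or API; it only illustrates why a recorded process is verifiable where finished text is not.

```python
import hashlib
import json
import time

# Hypothetical session log: events are appended as the writer works.
session = []

def record(event: str, detail: str) -> None:
    session.append({"t": time.time(), "event": event, "detail": detail})

record("session_start", "essay.txt")
record("keystrokes", "typed 214 characters over 95 seconds")
record("revision", "deleted sentence two, rewrote it")
record("session_end", "essay.txt")

# A tamper-evident fingerprint: changing any logged event changes
# the hash, so the record provably predates any later dispute.
log_bytes = json.dumps(session, sort_keys=True).encode()
certificate = hashlib.sha256(log_bytes).hexdigest()
print(f"certificate: {certificate[:16]}")
```

The key property is that the certificate is derived from the process, not the prose. No improvement in language models changes what the log says happened.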
The practical implication for writers
If you write — for school, for clients, or for publication — you are already exposed to the risk of false accusation. The tools that institutions use to assess authorship are imprecise by design: they estimate, they do not verify.
The most effective protection is not to argue after the fact. It is to have a record before anyone asks.
Start a Scripli session the next time you write. When the session ends, your certificate is issued automatically. If your authorship is ever questioned, you can answer with evidence rather than explanation.