AI Detector vs. Writing Proof: The Fundamental Difference
The core distinction
AI detection tools work by examining the finished text. They analyse statistical properties — word choice patterns, sentence structure, perplexity scores — and compare them to patterns in AI-generated and human-written training data. The output is a probability: this text is X% likely to have been AI-generated.
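To make the statistical approach concrete, here is a minimal sketch of a perplexity-based score. The per-token log-probabilities, the threshold, and the logistic mapping are all illustrative assumptions; real detectors use proprietary, trained classifiers.

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity is exp of the mean negative log-probability per token.
    Low perplexity means the text is unsurprising to the scoring model,
    a property AI-generated text tends to share."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

def ai_score(token_logprobs: list[float], midpoint: float = 20.0, scale: float = 5.0) -> float:
    """Map perplexity onto a rough 0-1 'likely AI' score with a logistic
    curve. The midpoint and scale are illustrative, not calibrated."""
    ppl = perplexity(token_logprobs)
    return 1.0 / (1.0 + math.exp((ppl - midpoint) / scale))

# Hypothetical per-token log-probabilities from a scoring language model:
sample = [-1.2, -0.8, -2.1, -0.5, -1.0, -0.9, -1.4]
print(f"perplexity={perplexity(sample):.1f}  ai_score={ai_score(sample):.0%}")
```

However the mapping is tuned, the output is still an estimate inferred from text properties, which is the limitation the rest of this section examines.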
Writing proof works the other way. Instead of looking at the output after the fact, it records the process as it happens. The evidence is created during the writing session — not afterwards. The result is a certificate, not an estimate.
The word “proof” is used precisely: a certificate backed by a recorded writing session is not subject to probabilistic error in the same way a text analysis is. The session either happened or it did not.
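As a sketch of what issuing such a certificate could involve, the snippet below hashes a stream of behavioural events and signs the digest with an Ed25519 key (via the `cryptography` library). The event format and certificate fields are hypothetical; this is not Scripli's actual scheme, only an illustration of the session-signing idea.

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical behavioural events captured during a session: timings and
# edit operations, but no document content.
events = [
    {"t": 0.00, "op": "insert", "len": 1},
    {"t": 0.31, "op": "insert", "len": 1},
    {"t": 4.80, "op": "delete", "len": 3},  # a pause, then a revision
]

# Commit to the session by hashing the event stream, not the text itself.
digest = hashlib.sha256(json.dumps(events, sort_keys=True).encode()).hexdigest()

signing_key = Ed25519PrivateKey.generate()
certificate = {
    "session_digest": digest,
    "signature": signing_key.sign(digest.encode()).hex(),
}
public_key = signing_key.public_key()  # published so anyone can verify
```

Note what is absent: no text analysis, no trained model, no probability. The certificate attests that a specific recorded session took place.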
Side by side
| | AI Detector | Writing Proof (HAC) |
|---|---|---|
| What it analyses | The finished text | The writing process as it happens |
| When it works | After the document is complete | During the writing session |
| What it produces | A probability estimate (e.g. “87% AI”) | A cryptographic certificate |
| Can it produce false positives? | Yes — documented and acknowledged | No — the session either happened or it didn't |
| Can it be gamed? | Yes — text humanisers exist | No — rewording the text cannot fake a recorded writing session |
| Independent verification | No — depends on the detector's proprietary model | Yes — anyone can verify the certificate (see the sketch below) |
| Stores document content? | Usually yes — the full text is analysed | No — only behavioural metadata |
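The independent-verification row deserves emphasis: checking a signature requires only the issuer's public key, not access to any model. Continuing the signing sketch above (and reusing its hypothetical `certificate` and `public_key`):

```python
from cryptography.exceptions import InvalidSignature

def verify_certificate(certificate: dict, public_key) -> bool:
    """Check the signature with the issuer's public key. No detector,
    no probability estimate: the check passes or it fails."""
    try:
        public_key.verify(
            bytes.fromhex(certificate["signature"]),
            certificate["session_digest"].encode(),
        )
        return True
    except InvalidSignature:
        return False

print(verify_certificate(certificate, public_key))  # True for an untampered certificate
```

The boolean outcome is the point: verification is binary, unlike a detector's sliding probability.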
Why AI detectors have false positives by design
AI detectors produce false positives not because they are poorly built, but because the approach has a structural flaw: from the finished text alone, no classifier can determine how that text was produced.
A highly edited, formally structured human essay looks statistically similar to AI output. An AI draft that a human has lightly edited looks statistically less AI-like. The detector is measuring text properties, not authorship. No amount of training data or model improvement can eliminate this structural limitation.
Additionally, AI humanisation tools already exist that rewrite AI output to lower detection scores. As these tools improve, detector accuracy will keep falling. Writing proof is not vulnerable to this arms race: rewording AI text does nothing to produce a recorded writing session.
Why writing proof cannot produce false positives
A Human Authenticity Certificate is issued during a writing session. If a human was present, typing, and writing during the session, the certificate accurately reflects that. It cannot flag a human writer as AI-generated, because it is not looking at the text — it is recording the act of writing.
There are two ways a certificate could be incorrect. Someone could use a recording tool that falsifies the session data, which requires active fraud against Scripli's cryptographic system rather than passive detection evasion. Or someone could retype AI-generated text, which produces a certificate for a retyping session, not a composing session; the difference is visible in the behavioural metrics (sketched below).
Neither of these is a false positive in the traditional sense. An innocent writer cannot be falsely accused by a certificate that accurately recorded their session.
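As an illustration of how a retyping session could surface in behavioural metadata, the sketch below computes a simple revision ratio. The metric and the synthetic event streams are illustrative assumptions, not Scripli's actual heuristics.

```python
def revision_ratio(events: list[dict]) -> float:
    """Fraction of edit operations that are deletions. Composing tends to
    include revision; retyping an existing text is a near-uniform stream
    of insertions. (An illustrative metric, not Scripli's heuristic.)"""
    deletions = sum(1 for e in events if e["op"] == "delete")
    return deletions / len(events)

# Synthetic sessions: one with normal revision, one that looks like retyping.
composing = [{"op": "insert"}] * 80 + [{"op": "delete"}] * 20
retyping = [{"op": "insert"}] * 99 + [{"op": "delete"}] * 1

print(f"composing session: {revision_ratio(composing):.0%} deletions")  # 20%
print(f"retyping session:  {revision_ratio(retyping):.0%} deletions")   # 1%
```

Real behavioural analysis would draw on many more signals (pause distributions, burst lengths, cursor movement), but the principle is the same: the session metadata describes how the text came to exist.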
Move from inference to proof
Scripli records your writing process and issues a certificate before any question is asked. Free to start.