
AI Detector vs. Writing Proof: The Fundamental Difference

Both approaches answer the same question: was this document written by a human? But they answer it in structurally different ways — and only one of them can be wrong about a human writer.

The core distinction

AI detection tools work by examining the finished text. They analyse statistical properties — word choice patterns, sentence structure, perplexity scores — and compare them to patterns in AI-generated and human-written training data. The output is a probability: this text is X% likely to have been AI-generated.
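The perplexity idea above can be illustrated with a toy model. This is a minimal sketch, not any real detector's method: it scores text against a simple unigram frequency model, where real detectors use large language models. Lower perplexity means the text is more statistically predictable, which is the kind of signal detectors associate with AI output.

```python
import math
from collections import Counter

def perplexity(text, corpus):
    """Score `text` against a toy unigram model built from `corpus`.

    Lower perplexity means the text is more predictable under the model,
    the statistical signal detectors associate with AI-generated text.
    A hypothetical illustration; real detectors use far richer models.
    """
    counts = Counter(corpus.split())
    total = sum(counts.values())
    vocab = len(counts)
    log_prob = 0.0
    words = text.split()
    for w in words:
        # Laplace smoothing so unseen words get non-zero probability
        p = (counts[w] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

corpus = "the quick brown fox jumps over the lazy dog the fox runs"
print(perplexity("the fox runs", corpus))        # low: in-distribution
print(perplexity("zebra quantum flux", corpus))  # high: out-of-distribution
```

Note that the output is a continuous score, not a verdict: any threshold chosen to convert it into "AI" or "human" will misclassify some texts on both sides.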

Writing proof works the other way. Instead of looking at the output after the fact, it records the process as it happens. The evidence is created during the writing session — not afterwards. The result is a certificate, not an estimate.

The word “proof” is used precisely: a certificate backed by a recorded writing session is not subject to probabilistic error in the same way a text analysis is. The session either happened or it did not.
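The contrast can be made concrete. Scripli's actual scheme is not specified here; the sketch below uses a hash chain over session events plus an HMAC as a hypothetical stand-in. The point it illustrates is structural: verification recomputes the evidence and returns a yes or no, never a probability.

```python
import hashlib
import hmac
import json

def issue_certificate(events, signing_key):
    """Hash-chain the session events as they occur, then sign the final
    digest. A hypothetical stand-in for Scripli's scheme: the evidence
    is bound to the live session, not derived from the finished text."""
    digest = b"\x00" * 32
    for event in events:  # e.g. timestamped keystroke records
        payload = json.dumps(event, sort_keys=True).encode()
        digest = hashlib.sha256(digest + payload).digest()
    tag = hmac.new(signing_key, digest, hashlib.sha256).hexdigest()
    return {"session_digest": digest.hex(), "signature": tag}

def verify_certificate(cert, events, signing_key):
    """Recompute the chain and compare signatures: the answer is
    True or False, never an estimate."""
    expected = issue_certificate(events, signing_key)
    return hmac.compare_digest(expected["signature"], cert["signature"])

key = b"demo-key"  # hypothetical; a real system would use asymmetric keys
session = [{"t": 0.00, "key": "H"}, {"t": 0.21, "key": "i"}]
cert = issue_certificate(session, key)
print(verify_certificate(cert, session, key))                           # True
print(verify_certificate(cert, session + [{"t": 9, "key": "!"}], key))  # False
```

A shared-secret HMAC is used here only for brevity; independent third-party verification, as described below, would require a public-key signature instead.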

Side by side

| | AI Detector | Writing Proof (HAC) |
|---|---|---|
| What it analyses | The finished text | The writing process as it happens |
| When it works | After the document is complete | During the writing session |
| What it produces | A probability estimate (e.g. "87% AI") | A cryptographic certificate |
| Can it produce false positives? | Yes — documented and acknowledged | No — the session either happened or it didn't |
| Can it be gamed by AI? | Yes — text humanisers exist | No — you can't fake the process of a human writing |
| Independent verification | No — depends on the detector's proprietary model | Yes — anyone can verify the certificate |
| Stores document content? | Usually yes — the full text is analysed | No — only behavioural metadata |

Why AI detectors have false positives by design

AI detectors produce false positives not because they are poorly built, but because the approach has a structural flaw: classification from the finished text alone cannot determine how that text was produced.

A highly edited, formally structured human essay looks statistically similar to AI output. A lightly edited AI draft that has been reviewed by a human looks statistically less AI-like. The detector is measuring text properties, not authorship. No amount of training data or model improvement can eliminate this structural limitation.

Additionally, AI humanisation tools already exist that modify AI output to reduce detection scores. As these tools improve, AI detector accuracy will decrease. Writing proof is not vulnerable to this arms race — you cannot make AI text “more human” by making it look like it was written in a recording session.

Why writing proof cannot produce false positives

A Human Authenticity Certificate is issued during a writing session. If a human was present, typing, and writing during the session, the certificate accurately reflects that. It cannot flag a human writer as AI-generated, because it is not looking at the text — it is recording the act of writing.

There are two ways a certificate could be incorrect. Someone could use a recording tool that falsifies the session data, which requires active fraud against Scripli's cryptographic system rather than passive detection evasion. Or someone could retype AI-generated text, which produces a certificate for a retyping session, not a composing session, and that difference is visible in the behavioural metrics.

Neither of these is a false positive in the traditional sense. An innocent writer cannot be falsely accused by a certificate that accurately recorded their session.
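The retyping case can be illustrated with one simple behavioural signal. The metric below is hypothetical, not Scripli's actual analysis: composing a text involves long thinking pauses between bursts of typing, while retyping a finished text tends to produce a steady cadence.

```python
def pause_ratio(timestamps, threshold=2.0):
    """Fraction of inter-keystroke gaps longer than `threshold` seconds.
    A hypothetical metric: composing involves thinking pauses, while
    retyping a finished text tends toward a steady cadence."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    long_pauses = sum(1 for g in gaps if g > threshold)
    return long_pauses / len(gaps)

# Steady cadence: one keystroke every 0.2 s (retyping-like)
retyping = [i * 0.2 for i in range(50)]

# Bursts of typing separated by multi-second pauses (composing-like)
composing = []
t = 0.0
for burst in range(5):
    for _ in range(10):
        t += 0.2
        composing.append(t)
    t += 4.0  # pause to think before the next burst
print(pause_ratio(retyping))   # 0.0
print(pause_ratio(composing))
```

A real system would combine many such signals, but the principle is the same: the certificate records what kind of session took place, so a retyping session cannot masquerade as composition.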

Move from inference to proof

Scripli records your writing process and issues a certificate before any question is asked. Free to start.