How anti-bot systems detect timing patterns (and what actually works)
- Poisson-distributed request timing (exponential inter-arrivals) is detectable by a Kolmogorov-Smirnov test at p < 0.001.
- The signature: finite variance. Real browsing has a heavy tail (long pauses). Poisson doesn't.
- Lévy stable distributions (α = 1.5, infinite variance) pass the same KS test at p = 0.68.
Most timing jitter implementations use exponential inter-arrival times: a Poisson process. Independent random delays between requests, each drawn fresh from the same distribution. It looks random. It isn't, not statistically.
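For reference, the naive approach is one line of inverse-transform sampling. A sketch; `exponentialDelay` and `meanMs` are illustrative names, not any particular library's API:

```js
// Naive jitter: exponential inter-arrival times via inverse-transform sampling.
function exponentialDelay(meanMs) {
  // Math.random() is in [0, 1), so 1 - Math.random() is in (0, 1] and log(0) is impossible.
  return -Math.log(1 - Math.random()) * meanMs;
}
```

The exponential tail decays fast: the probability of a delay beyond 5× the mean is e^(-5), about 0.7%. That regularity is the fingerprint.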
200 intervals from a Poisson implementation. 200 from real Firefox browsing (Wikipedia, email, HN). The Kolmogorov-Smirnov two-sample test rejects the null hypothesis (same distribution) at p < 0.001. The distributions are trivially distinguishable.
Why it fails
Poisson has finite variance. The delays cluster around the mean. Most intervals fall between 0.5× and 2× the mean. Outliers beyond 5× are vanishingly rare.
Real browsing: 40 seconds reading, three clicks in rapid succession, 2 minutes on a page, tab switch, 8 minutes gone, two fast clicks, 20 minutes idle. Heavy tail. Possibly infinite variance in the mathematical sense. The KS test catches the difference because Poisson is too well-behaved — too regular in its irregularity.
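What the KS test actually measures is the maximum gap between the two empirical CDFs. A self-contained sketch of the two-sample statistic (illustrative, not the threadr implementation):

```js
// Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
// between the empirical CDFs of samples a and b.
function ksStatistic(a, b) {
  const x = [...a].sort((p, q) => p - q);
  const y = [...b].sort((p, q) => p - q);
  let i = 0, j = 0, d = 0;
  while (i < x.length && j < y.length) {
    // Step past the next distinct value in either sample, handling ties,
    // then compare the two empirical CDFs at that value.
    const v = Math.min(x[i], y[j]);
    while (i < x.length && x[i] === v) i++;
    while (j < y.length && y[j] === v) j++;
    d = Math.max(d, Math.abs(i / x.length - j / y.length));
  }
  return d;
}
```

For 200 samples per side, the asymptotic p < 0.001 rejection threshold is roughly 1.95 · sqrt(2/200) ≈ 0.2, so a statistic of 0.31 is a decisive rejection while 0.07 is comfortably inside.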
Lévy stable distributions
Stability parameter α < 2 gives infinite variance by construction. Most samples are small, but large values occur often enough to match the bursty-then-idle pattern of real sessions. α = 1.5 balances realism against extreme pauses (Cauchy at α = 1 produces 30-minute gaps regularly).
Sampling via Chambers-Mallows-Stuck (1976):
```js
function sampleLevyDelay(scale, minMs) {
  const alpha = 1.5; // stability parameter: infinite variance for alpha < 2
  const beta = 1.0;  // maximum right skew (positive delays only)

  // Two independent uniforms drive the transform.
  const U = (Math.random() - 0.5) * Math.PI; // uniform on (-pi/2, pi/2)
  const W = -Math.log(1 - Math.random());    // standard exponential; 1 - r avoids log(0)

  // Chambers-Mallows-Stuck transform for alpha != 1. The deterministic
  // normalisation constant is absorbed into `scale`.
  const B = Math.atan(beta * Math.tan(Math.PI * alpha / 2)) / alpha;
  const X =
    Math.sin(alpha * (U + B)) / Math.pow(Math.cos(U), 1 / alpha) *
    Math.pow(Math.cos(U - alpha * (U + B)) / W, (1 - alpha) / alpha);

  return Math.abs(X) * scale + minMs; // scale and minMs are caller-chosen tuning knobs
}
```
Two uniform inputs, one trigonometric transform. The 1976 paper proved the sampling is exact: no approximation, no rejection step.
Results
| Distribution | KS statistic | p-value | Detected? |
|---|---|---|---|
| Poisson (exponential) | 0.31 | < 0.001 | yes |
| Lévy stable (α = 1.5) | 0.07 | 0.68 | no |
The Lévy sample is statistically indistinguishable from the Firefox reference. The tradeoff: occasional 15-second pauses where Poisson would give 2. A real person pauses for 15 seconds. A Poisson process almost never does. The long pauses are what makes it work.
Where this applies
Any system where request timing is an observable signal. Anti-bot systems at Cloudflare and DataDome use timing analysis alongside TLS fingerprinting and behavioural heuristics. Tor Browser adds padding. Mixnets batch and shuffle. VPNs rely on encryption, not timing indistinguishability.
I built this for threadr (an OSINT tool for security assessments). The scanner makes requests to external APIs during reconnaissance — the timing distribution determines whether those requests are flagged as automated. The Lévy sampler, the KS test, and the full self-audit are in the repo.
Limitations
KS only tests the marginal distribution. An adversary using autocorrelation (sequential dependency between intervals) or spectral analysis (frequency-domain periodicity) might catch patterns that KS misses. The threadr self-audit runs all three — KS, autocorrelation lag-1, and Wald-Wolfowitz runs test — but timing alone is one layer of many.
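The lag-1 autocorrelation check is small enough to sketch here (illustrative, not the threadr code). It catches sequential structure the marginal KS test is blind to, such as a sampler that alternates short and long delays:

```js
// Lag-1 autocorrelation of a series of inter-request intervals.
// Near 0: consecutive intervals are linearly independent.
// Strongly positive or negative: a sequential pattern KS cannot see.
function lag1Autocorrelation(xs) {
  const n = xs.length;
  const mean = xs.reduce((s, v) => s + v, 0) / n;
  let num = 0;
  let den = 0;
  for (let k = 0; k < n; k++) {
    den += (xs[k] - mean) ** 2;
    if (k < n - 1) num += (xs[k] - mean) * (xs[k + 1] - mean);
  }
  return num / den;
}
```

An alternating short/long pattern scores strongly negative (approaching −1 for a long series) even when its marginal distribution is a perfect Lévy fit.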
Lévy sampler, KS test, and anonymity self-audit: threadr repo under packages/shared/src/anonymity/. 59 tests.