Templates pending legal review. This page is drafted by the
TokenPak team as an honest, plain-English summary of current data-handling
behavior. It is not a substitute for legal advice and has not been
reviewed by counsel. Authoritative contract terms will land in
the DPA once legal review completes.
Privacy
Last updated: 2026-04-23 (draft).
What runs where
TokenPak is a local proxy. Every compression, routing, caching, and telemetry decision happens on your machine. Your prompts, completions, code, and business data are never sent to TokenPak infrastructure, at any point, for any reason; the only traffic TokenPak generates is described under "What leaves your machine" below.
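In practice, "local proxy" means your API client targets localhost instead of the provider. The helper below is only an illustration: the `port` key matches the `config.yaml` contents described later on this page, but the default port and the `/v1` path are assumptions, not documented TokenPak behavior.

```python
def local_base_url(config: dict) -> str:
    """Build the localhost URL an API client would target.

    `port` is the key documented for ~/.tokenpak/config.yaml; the
    fallback of 8080 here is an assumption for illustration only.
    """
    port = config.get("port", 8080)
    return f"http://127.0.0.1:{port}/v1"

print(local_base_url({"port": 9100}))  # http://127.0.0.1:9100/v1
```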
What we collect by default
Nothing. No telemetry of any kind, no usage pings, no install counters. Anything beyond this is strictly opt-in (see the escape hatches below).
What TokenPak stores locally on your machine
- A SQLite ledger at `~/.tokenpak/monitor.db` with per-request metadata (model, token counts, latency, cost, cache origin). Default retention: a rolling 90 days of history.
- Configuration at `~/.tokenpak/config.yaml`. Contains profile selection, port, and compression knobs. Never contains API keys or prompts.
- An optional license key at `~/.tokenpak/license.key` (mode 600). Present only if you have activated a Pro tier. Never transmitted except to the license server for signature verification.
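For teams that want to script against the ledger, a rolling-retention prune might look like the sketch below. The table and column names here are assumptions for illustration; the real `monitor.db` layout is not documented on this page.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Hypothetical schema: the real monitor.db layout is not documented here.
SCHEMA = """
CREATE TABLE requests (
    ts TEXT NOT NULL,      -- ISO-8601 UTC request timestamp
    model TEXT,
    tokens_in INTEGER,
    tokens_out INTEGER,
    latency_ms REAL,
    cost_usd REAL,
    cache_origin TEXT
)
"""

def prune(conn: sqlite3.Connection, days: int = 90) -> int:
    """Delete ledger rows older than the rolling retention window."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    cur = conn.execute("DELETE FROM requests WHERE ts < ?", (cutoff,))
    conn.commit()
    return cur.rowcount

# Demo against an in-memory database so the sketch is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
now = datetime.now(timezone.utc)
rows = [
    ((now - timedelta(days=5)).isoformat(), "model-a", 100, 50, 120.0, 0.01, "miss"),
    ((now - timedelta(days=120)).isoformat(), "model-b", 200, 80, 95.0, 0.02, "hit"),
]
conn.executemany("INSERT INTO requests VALUES (?,?,?,?,?,?,?)", rows)
print(prune(conn))  # 1 — the 120-day-old row falls outside the window
```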
Optional debug/logging escape hatches — full disclosure
Several opt-in controls will expand what is stored locally if you turn them on. They are off by default. We disclose them here so you know exactly what each knob does before enabling it.
- `TOKENPAK_DEBUG=1` (env var). Enables verbose debug output on stdout/stderr and debug-level entries in local logs. Debug output may include request headers (never credential values; those are redacted at the logging boundary). Off by default.
- `TOKENPAK_LOG_ENABLED=1` + `TOKENPAK_LOG_DESTINATION` (env vars). Enables the structured request logger. Destinations: `file` (default path), `stdout`, `syslog`. The logger records request metadata, not prompt/response bodies, unless `store_prompts` is also enabled. See `TOKENPAK_LOG_LEVEL` and `TOKENPAK_LOG_RETENTION_DAYS` for tuning.
- `telemetry.store_prompts = true` (config flag). Stores prompt and response bodies to local disk. Off by default. Exists for debugging and benchmarking. When enabled, prompt/response content is written to the same retention store as the metadata. Clients with regulated data should leave this off.
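To make "redacted at the logging boundary" concrete, here is a minimal sketch (not TokenPak's actual code) of masking credential headers before anything reaches a log, gated on the `TOKENPAK_DEBUG` opt-in. The set of header names is an assumption.

```python
import os

# Header names treated as credentials; the exact list TokenPak redacts
# is an assumption for this sketch.
SENSITIVE = {"authorization", "x-api-key", "api-key", "proxy-authorization"}

def redact_headers(headers: dict) -> dict:
    """Return a copy safe for debug logs: credential values are masked."""
    return {
        k: ("[REDACTED]" if k.lower() in SENSITIVE else v)
        for k, v in headers.items()
    }

def debug_enabled() -> bool:
    """Mirrors the TOKENPAK_DEBUG=1 opt-in described above."""
    return os.environ.get("TOKENPAK_DEBUG") == "1"

print(redact_headers({"Authorization": "Bearer sk-123", "Accept": "application/json"}))
# {'Authorization': '[REDACTED]', 'Accept': 'application/json'}
```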
What leaves your machine
Only what you ask TokenPak to proxy:
- LLM requests: forwarded to the provider you configured (Anthropic, OpenAI, etc.), using your credentials on the way out. TokenPak does not intercept the payload, log it to our infrastructure, or mirror it.
- License verification: if you have activated a Pro license, the signed license token is presented to `https://pypi.tokenpak.ai` to authorize package downloads, and to the license server for tier checks.
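TokenPak's license format is not public. As a generic illustration of what "signature verification" of a signed token can mean, here is an HMAC-based sketch; the real scheme may well use asymmetric signatures instead, and every name below is hypothetical.

```python
import base64
import hashlib
import hmac

def sign(payload: bytes, secret: bytes) -> str:
    """Produce a token of the form <base64 payload>.<base64 MAC>."""
    mac = hmac.new(secret, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(mac).decode())

def verify(token: str, secret: bytes):
    """Return the payload if the MAC checks out, else None."""
    body_b64, mac_b64 = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(secret, payload, hashlib.sha256).digest()
    if hmac.compare_digest(expected, base64.urlsafe_b64decode(mac_b64)):
        return payload
    return None

secret = b"demo-secret"
token = sign(b"tier=pro", secret)
print(verify(token, secret))  # b'tier=pro'
```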
Third parties
See Sub-processors.
Your controls
- `rm -rf ~/.tokenpak` removes every local artifact TokenPak has ever created.
- Unset any optional escape-hatch env var to disable that source of data collection.
- Uninstalling the package (`pip uninstall tokenpak`) stops all TokenPak execution on your machine.
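If you want to audit what `rm -rf ~/.tokenpak` would delete before running it, a small helper like this (hypothetical, not shipped with TokenPak) lists every file with its size in bytes. The sketch uses a temporary directory so it stays self-contained; in real use you would pass `Path.home() / ".tokenpak"`.

```python
import tempfile
from pathlib import Path

def local_artifacts(base: Path):
    """Every file under `base` with its size in bytes, sorted by path."""
    return sorted((p.relative_to(base).as_posix(), p.stat().st_size)
                  for p in base.rglob("*") if p.is_file())

# Self-contained demo with stand-in files.
with tempfile.TemporaryDirectory() as d:
    base = Path(d)
    (base / "monitor.db").write_bytes(b"\x00" * 16)
    (base / "config.yaml").write_text("port: 9100\n")
    print(local_artifacts(base))  # [('config.yaml', 11), ('monitor.db', 16)]
```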
Contact
Questions: hello@tokenpak.ai.