A CLI that wraps any OpenAI-compatible call and produces a signed, verifiable record — hash of the request, hash of the response. Offline. No server required. No vendor trust required.
OpenAI silently updated gpt-4o last Tuesday. Your evals look fine, but the response to your exact prompt is different. Your observability tool shows the call happened; it can't prove what the model returned.
Prompt, both runs: "Summarise the key risks of deploying LLMs in production in one sentence."

Before the update:

> Key risks include non-deterministic outputs, prompt injection vulnerabilities, hallucinated facts, and the absence of tamper-evident logging.

After the update:

> Primary risks are hallucination, data leakage, adversarial prompt injection, and unpredictable behaviour changes following silent model updates.
The same text always produces the same hash. Paste two responses from the same prompt and you'll know immediately whether the model returned identical output. Runs in your browser, no install needed.
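The comparison behind this is plain SHA-256, which you can reproduce with Python's standard library. This is an illustration of the determinism claim, not AELITIUM's own code:

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Hash a response string; encode to UTF-8 so identical text hashes identically."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

a = "Key risks include non-deterministic outputs."
b = "Key risks include non-deterministic outputs."
c = "Primary risks are hallucination and data leakage."

print(sha256_hex(a) == sha256_hex(b))  # True: same text, same hash
print(sha256_hex(a) == sha256_hex(c))  # False: any difference changes the hash
```

Because the digest is a pure function of the bytes, two pasted responses either hash identically or they differ; there is no judgement call.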
Your existing code stays exactly the same. AELITIUM intercepts the call, hashes everything, and writes a bundle you can verify forever — with or without internet.
One line at the top of your existing code. Nothing else changes. No config, no API key, no SDK swap.
SHA-256 of the request. SHA-256 of the response. A binding_hash links both. Ed25519 seals it so you can't quietly edit it later.
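The hashing step can be sketched with the standard library. The exact bundle format and the `binding_hash` construction below are assumptions for illustration, not AELITIUM's actual layout:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical request/response pair; a real bundle captures the full payloads.
request = json.dumps(
    {"model": "gpt-4o", "prompt": "Summarise the key risks..."},
    sort_keys=True,  # canonical ordering so the same request always hashes the same
).encode("utf-8")
response = b"Key risks include non-deterministic outputs."

request_hash = sha256_hex(request)
response_hash = sha256_hex(response)

# Assumed construction: hashing both digests together ties the response to the
# exact request that produced it. (AELITIUM then seals the bundle with an
# Ed25519 signature; key handling is omitted from this sketch.)
binding_hash = sha256_hex(f"{request_hash}{response_hash}".encode("utf-8"))

print(binding_hash)
```

Changing a single byte of either the request or the response changes its digest, which changes the binding hash, which invalidates the signature over it.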
Hand the bundle to your auditor, your client, your lawyer. They can verify it themselves without calling any server — including yours.
Same request, different response? AELITIUM tells you exactly when that started — by comparing bundle hashes, not log text.
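Hash-based drift detection reduces to finding the first bundle whose response digest differs from the baseline. A minimal sketch, with a hypothetical history of dated responses to the same request:

```python
import hashlib

def response_hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Hypothetical history: (date, response text) for the same request.
# Real bundles already store the digest, so only hashes need comparing.
history = [
    ("2024-05-01", "Key risks include non-deterministic outputs."),
    ("2024-05-08", "Key risks include non-deterministic outputs."),
    ("2024-05-15", "Primary risks are hallucination and data leakage."),
]

baseline = response_hash(history[0][1])
drift_started = next(
    (date for date, text in history if response_hash(text) != baseline),
    None,
)
print(drift_started)  # 2024-05-15: first run whose hash differs from the baseline
```

Comparing fixed-length digests instead of raw log text means the check is exact, fast, and immune to log truncation or reformatting.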
Langfuse, Helicone, LangSmith — great for debugging latency and costs. But they run on servers. You can't use them to prove to an auditor what a model returned six months ago.
| Capability | Langfuse / Helicone / LangSmith | AELITIUM |
|---|---|---|
| Traces, metrics & dashboards | ✓ | — |
| Cryptographic proof of LLM output | ✗ | ✓ |
| Tamper-evident record | ✗ | ✓ |
| Offline verification — no server | ✗ | ✓ |
| Drift detection by hash comparison | ✗ | ✓ |
| Scan codebase for uninstrumented calls | ✗ | ✓ |
| Requires vendor trust | ✓ (server-based) | ✗ (offline-first) |
If you're building AI into anything regulated, someone is going to ask you to prove what the model returned. AELITIUM's bundles are the answer.
Open source · Apache 2.0 · 177 tests · If it doesn't do what it says, open an issue.
`pip install aelitium`
Also on PyPI