v0.2.4 · 177 tests passing · Apache 2.0

Your LLM changed last week.
Can you prove when?

A CLI that wraps any OpenAI-compatible call and produces a signed, verifiable record — hash of the request, hash of the response. Offline. No server required. No vendor trust required.

$ aelitium compare ./bundle_last_week ./bundle_today
→ STATUS=CHANGED
$ pip install aelitium

Here's what actually
happens with a live model.

OpenAI silently updated gpt-4o last Tuesday. Your evals look fine. But the response to your exact prompt is different. Your observability tool shows the call happened. It doesn't show what the model returned.

Run 1 — March 3

"Summarise the key risks of deploying LLMs in production in one sentence."

Key risks include non-deterministic outputs, prompt injection vulnerabilities, hallucinated facts, and the absence of tamper-evident logging.

response_hash: 2f1563cc8c0b7b71…
Run 2 — March 10 · same prompt

"Summarise the key risks of deploying LLMs in production in one sentence."

Primary risks are hallucination, data leakage, adversarial prompt injection, and unpredictable behaviour changes following silent model updates.

response_hash: c41d8e3b5f9a2201…
request_hash = SAME. Identical prompt, identical parameters.
response_hash = DIFFERENT. The model returned something else.
The change came from the model — not your code. AELITIUM is how you prove that.

SHA-256 is deterministic.
Edit anything — watch what changes.

The same text always produces the same hash. Paste two responses from the same prompt and you'll know immediately whether the model returned identical output. Runs in your browser, no install needed.
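The determinism claim is easy to check yourself with nothing but Python's standard library — no aelitium required:

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Hex digest of the UTF-8 encoding of text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

a = sha256_hex("Primary risks are hallucination and data leakage.")
b = sha256_hex("Primary risks are hallucination and data leakage.")
c = sha256_hex("Primary risks are hallucination and data leakage!")  # one character changed

print(a == b)  # True  — identical input, identical digest
print(a == c)  # False — a single edited character changes the whole digest
```

This is the entire trust model: if two digests match, the underlying bytes were identical.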

Live · SHA-256 comparison running in your browser — this is exactly how aelitium compare works

One import.
Nothing else changes.

Your existing code stays exactly the same. AELITIUM intercepts the call, hashes everything, and writes a bundle you can verify forever — with or without internet.

1
Add one import

One line at the top of your existing code. Nothing else changes. No config, no API key, no SDK swap.

from aelitium import capture_openai
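Conceptually, interception is a wrapper around the call: hash the canonicalized request before the call, hash the response after, keep both. A minimal stdlib sketch of that idea — the names `capture` and `fake_model` are illustrative stand-ins, not aelitium's actual API:

```python
import functools
import hashlib
import json

def capture(fn):
    """Wrap a call: hash canonicalized input and output, record a bundle."""
    @functools.wraps(fn)
    def wrapper(request: dict):
        req_hash = hashlib.sha256(
            json.dumps(request, sort_keys=True).encode()  # canonical key order
        ).hexdigest()
        response = fn(request)
        resp_hash = hashlib.sha256(response.encode()).hexdigest()
        wrapper.bundle = {"request_hash": req_hash, "response_hash": resp_hash}
        return response
    return wrapper

@capture
def fake_model(request: dict) -> str:   # stand-in for the real API call
    return "a model response"

fake_model({"model": "gpt-4o", "prompt": "Summarise the key risks."})
print(fake_model.bundle["request_hash"][:16])
```

Note the `sort_keys=True`: canonicalizing the request before hashing is what makes "identical parameters" reproducibly yield an identical `request_hash`.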
2
Bundle written to disk

SHA-256 of the request. SHA-256 of the response. A binding_hash links both. Ed25519 seals it so you can't quietly edit it later.

request_hash + response_hash
→ binding_hash → signature
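One plausible construction for the binding step — the exact scheme is aelitium's internal detail, so treat this as an assumption — is to concatenate the two digests and hash again, so neither half can be swapped without invalidating what gets signed:

```python
import hashlib

request_hash  = hashlib.sha256(b"canonicalized request").hexdigest()
response_hash = hashlib.sha256(b"model response").hexdigest()

# The binding hash commits to the (request, response) pair as a single unit.
# The Ed25519 signature then covers this one value.
binding_hash = hashlib.sha256(
    (request_hash + response_hash).encode()
).hexdigest()
print(binding_hash[:16])
```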
3
Verify anywhere, offline

Hand the bundle to your auditor, your client, your lawyer. They can verify it themselves without calling any server — including yours.

aelitium verify-bundle ./evidence
→ STATUS=VALID
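Offline verification needs nothing but the bundle itself: recompute every digest from the stored payloads and compare. A sketch of the principle, using a hypothetical bundle layout rather than aelitium's real on-disk format:

```python
import hashlib

def verify_bundle(bundle: dict) -> str:
    """Recompute all hashes from stored payloads; no network, no server."""
    req = hashlib.sha256(bundle["request"].encode()).hexdigest()
    resp = hashlib.sha256(bundle["response"].encode()).hexdigest()
    binding = hashlib.sha256((req + resp).encode()).hexdigest()
    ok = (req == bundle["request_hash"]
          and resp == bundle["response_hash"]
          and binding == bundle["binding_hash"])
    return "VALID" if ok else "TAMPERED"

req_h = hashlib.sha256(b"prompt").hexdigest()
resp_h = hashlib.sha256(b"answer").hexdigest()
bundle = {
    "request": "prompt", "response": "answer",
    "request_hash": req_h, "response_hash": resp_h,
    "binding_hash": hashlib.sha256((req_h + resp_h).encode()).hexdigest(),
}
print(verify_bundle(bundle))           # VALID
bundle["response"] = "edited answer"   # a quiet after-the-fact edit...
print(verify_bundle(bundle))           # TAMPERED
```

Anyone holding the bundle can run this check; trusting the issuer's server is never required.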
4
Catch silent model changes

Same request, different response? AELITIUM tells you exactly when that started — by comparing bundle hashes, not log text.

aelitium compare ./run_a ./run_b
→ STATUS=CHANGED
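Comparison then reduces to hash equality across two bundles. A sketch over a hypothetical bundle layout (the truncated digests are illustrative values, not real output):

```python
def compare(bundle_a: dict, bundle_b: dict) -> str:
    """SAME only if both digests match; CHANGED isolates model-side drift."""
    if bundle_a["request_hash"] != bundle_b["request_hash"]:
        return "DIFFERENT_REQUEST"   # not the same experiment at all
    if bundle_a["response_hash"] != bundle_b["response_hash"]:
        return "CHANGED"             # same input, different model output
    return "SAME"

run_march_3  = {"request_hash": "aa11", "response_hash": "2f1563cc"}
run_march_10 = {"request_hash": "aa11", "response_hash": "c41d8e3b"}
print(compare(run_march_3, run_march_10))  # CHANGED
```

Because the request hashes match first, a CHANGED verdict can only mean the model's behaviour moved, not your code.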

Observability tools
weren't built for this.

Langfuse, Helicone, LangSmith — great for debugging latency and costs. But they run on servers. You can't use them to prove to an auditor what a model returned six months ago.

Capability                                Langfuse / Helicone / LangSmith   AELITIUM
Traces, metrics & dashboards              ✓                                 ✗
Cryptographic proof of LLM output         ✗                                 ✓
Tamper-evident record                     ✗                                 ✓
Offline verification — no server          ✗                                 ✓
Drift detection by hash comparison        ✗                                 ✓
Scan codebase for uninstrumented calls    ✗                                 ✓
Requires vendor trust                     ✓ server-based                    ✗ offline-first

Regulators are
already asking for this.

If you're building AI into anything regulated, someone is going to ask you to prove what the model returned. AELITIUM's bundles are the answer.

EU AI Act · Art. 12
Logging & traceability
Requires logged, traceable records of high-risk AI outputs. AELITIUM bundles are tamper-evident by design: any edit changes the hashes and breaks the signature.
SOC 2 · CC7
System monitoring & integrity
SOC 2 auditors ask whether your logs could have been altered. Offline verification is the cleanest answer to that question.
ISO 42001
AI management auditability
Third-party auditors don't need access to your infrastructure. They get the bundle and verify it themselves.
NIST AI RMF · MG 2.2
Traceability of AI decisions
Full record: canonicalized request, response hash, timestamp, optional Ed25519 signature. Everything a reviewer needs.
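In spirit, a bundle carries exactly those fields. The JSON below is illustrative only — field names, layout, and the timestamp are assumptions, not aelitium's actual schema (the truncated digests are reproduced from the runs above):

```json
{
  "request":       { "model": "gpt-4o", "prompt": "…" },
  "request_hash":  "2f1563cc8c0b7b71…",
  "response_hash": "c41d8e3b5f9a2201…",
  "binding_hash":  "…",
  "timestamp":     "2025-03-10T14:02:11Z",
  "signature":     "ed25519:…"
}
```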

Try it on your
next API call.

Open source · Apache 2.0 · 177 tests · If it doesn't do what it says, open an issue.

View on GitHub Read the docs

pip install aelitium

Also on PyPI