April 17, 2026 · 4 min read
Introducing Aira Gateway — Audit Every LLM Call with One URL Change
Your LLM calls have no audit trail, no policy enforcement, and no proof of what was sent or received. Aira Gateway fixes this with a transparent proxy — one URL change, zero code changes.
The Problem: LLM Calls Are a Black Hole
Every day your agents and applications make thousands of LLM calls. Prompts containing customer data, financial details, and PII fly to third-party APIs with no record of what was sent, what came back, or whether it complied with your policies.
When the auditor asks "what data did your AI system send to OpenAI on March 12?" you have nothing. Log files are mutable, self-attested, and inadmissible. You have no proof.
The Solution: A Transparent Proxy
Aira Gateway sits between your application and the LLM provider. It intercepts every call, applies your policies, and mints a cryptographic receipt — then forwards the request unchanged. Your application doesn't know it's there.
The flow for every call:
- Authorize — evaluate the request against your active policies (PII detection, cost limits, content rules)
- Scan — check the payload for sensitive content, credentials, and toxic language
- Forward — proxy the request to the LLM provider with zero modification
- Notarize — mint an Ed25519-signed receipt covering the full request-response pair
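The four steps above can be sketched as a single pipeline. This is an illustrative sketch, not Aira Gateway's actual internals: the function names, policy shape, and the SHA-256 digest standing in for the Ed25519 signature are all assumptions for demonstration.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass
class Receipt:
    receipt_id: str
    digest: str  # stand-in for the Ed25519-signed receipt


class PolicyViolation(Exception):
    pass


def handle_call(request, policies, provider):
    # 1. Authorize: every active policy must allow the request,
    #    or it is blocked before reaching the provider.
    for allowed in policies:
        if not allowed(request):
            raise PolicyViolation("blocked by policy")
    # 2. Scan: toy credential check; the real scanner covers PII,
    #    credentials, and content rules.
    flags = ["credential"] if "sk-" in json.dumps(request) else []
    # 3. Forward: proxy the request to the provider unmodified.
    response = provider(request)
    # 4. Notarize: bind request, response, and scan results into
    #    one canonical record and fingerprint it.
    blob = json.dumps(
        {"req": request, "res": response, "flags": flags},
        sort_keys=True,
    ).encode()
    digest = hashlib.sha256(blob).hexdigest()
    return response, Receipt(receipt_id=digest[:16], digest=digest)
```

Because the request is forwarded byte-for-byte, the application sees exactly the response it would have received from the provider directly.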
Two Lines of Config
Point your base URL at Aira Gateway. That's it.
```python
# Python (OpenAI SDK)
import openai

client = openai.OpenAI(
    base_url="https://gateway.airaproof.com/v1",  # was: https://api.openai.com/v1
    api_key="sk-...",  # your existing OpenAI key
)
```

```typescript
// TypeScript (OpenAI SDK)
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://gateway.airaproof.com/v1", // was: https://api.openai.com/v1
  apiKey: "sk-...", // your existing OpenAI key
});
```

No SDK to install. No wrapper functions. No code changes beyond the URL. Your existing error handling, retries, and streaming all work exactly as before.
What You Get
- Ed25519 receipt per call — cryptographic proof of what was sent and received, independently verifiable at /verify/gateway/{receipt_id}
- Policy gating — block or flag calls that violate your rules before they reach the provider. Configure policies in the dashboard, not in code.
- Content scanning — automatic detection of PII, credentials, and sensitive data in prompts and responses
- Full audit trail — every call logged with timestamp, token count, latency, policy evaluations, and scan results
- RFC 3161 timestamps — independent proof of when each call occurred
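To make the receipt guarantee concrete, here is what Ed25519 verification looks like in principle, using the `cryptography` library. This is a self-contained simulation, not the actual verification flow: it assumes the gateway signs the canonical JSON of the request-response pair and publishes its public key, and the field names are illustrative.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# -- Gateway side (simulated): sign the request-response pair --
signing_key = Ed25519PrivateKey.generate()
payload = json.dumps(
    {"request": {"prompt": "hi"}, "response": {"text": "ok"}},
    sort_keys=True,
).encode()
signature = signing_key.sign(payload)

# -- Verifier side: anyone holding the public key can check the receipt --
public_key = signing_key.public_key()
try:
    public_key.verify(signature, payload)
    verified = True
except InvalidSignature:
    verified = False
```

Any change to the payload or the signature after signing makes `verify` raise `InvalidSignature`, which is what makes the receipt tamper-evident rather than merely a log entry.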
Provider Support
Gateway supports any provider that uses the OpenAI-compatible API format:
- OpenAI — GPT-4.1, GPT-5, o3, o4-mini
- Anthropic — Claude Sonnet 4, Claude Opus 4
- Any OpenAI-compatible — Azure OpenAI, Together, Groq, local vLLM/Ollama
Set the target provider in the dashboard endpoint settings. The gateway handles routing, auth header translation, and receipt minting for each provider.
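As a rough illustration of what auth header translation means, consider a sketch like the following. The routing table is hypothetical, not Aira Gateway's implementation; the header conventions themselves are real (Anthropic's native API expects `x-api-key` plus `anthropic-version`, while OpenAI-compatible endpoints accept a standard Bearer token).

```python
def translate_auth(provider: str, api_key: str) -> dict:
    """Map a caller's API key onto the header scheme the target provider expects."""
    if provider == "anthropic":
        # Anthropic's native API uses x-api-key rather than a Bearer token.
        return {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    # OpenAI-compatible endpoints (OpenAI, Together, Groq, local vLLM)
    # take a standard Authorization: Bearer header.
    return {"Authorization": f"Bearer {api_key}"}
```

The caller always sends its key the same way; the gateway rewrites only the auth headers, leaving the request body untouched.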
Get Started
Full setup guide with policy configuration and receipt verification: /docs/guides/gateway