Back to Aira

April 17, 2026 · 5 min read

DORA Compliance for AI Systems — What Banks Need Now

The Digital Operational Resilience Act (DORA) has applied since January 17, 2025. It covers over 22,000 EU financial entities — banks, insurers, investment firms, payment processors, and their critical ICT third-party providers. If your AI systems touch financial services in the EU, DORA applies to you.

Three Obligations That Matter for AI

DORA is broad, but three areas directly impact how you build and operate AI systems:

1. ICT Incident Reporting (Articles 17–19)

Major ICT-related incidents must be classified, reported to competent authorities, and documented with root cause analysis. For AI systems, this means: if your model produces a harmful output, makes an erroneous decision that affects customers, or suffers an availability failure, you need a structured incident lifecycle with tamper-evident documentation.

Most teams today track incidents in Jira or PagerDuty. Neither produces the kind of evidence a regulator accepts — signed, timestamped, immutable records that prove what happened and when.
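To make the classification step concrete, here is a minimal sketch in Python. The thresholds below are hypothetical illustrations, not the actual materiality criteria, which are set out in DORA Article 18 and its accompanying regulatory technical standards:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    clients_affected: int
    duration_minutes: int
    data_integrity_hit: bool

def classify(incident: Incident) -> str:
    # Hypothetical thresholds for illustration only; the real
    # criteria come from DORA Art. 18 and its classification RTS.
    if (incident.clients_affected > 1_000
            or incident.duration_minutes > 120
            or incident.data_integrity_hit):
        return "major"
    return "minor"

print(classify(Incident(clients_affected=4_200,
                        duration_minutes=87,
                        data_integrity_hit=False)))  # major
```

The point is that classification should be a deterministic, documented rule, not an on-call engineer's judgment call, because the classification decides whether the regulator must be notified.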

2. Third-Party Risk Register (Articles 28–44)

DORA requires a structured register of all ICT third-party service providers, including sub-outsourcing chains. Every LLM provider your AI calls — OpenAI, Anthropic, Azure — is an ICT third-party provider under DORA. You need to document: what services they provide, what data they process, where they operate, and what your exit strategy is.

Article 30 explicitly requires that contractual arrangements with ICT providers include audit rights, exit strategies, and incident notification obligations, and Article 28(3) requires you to maintain a register of information covering all of these arrangements. If you can't produce this register on demand, you're non-compliant.
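A simple way to keep the register honest is to validate each entry against the contractual terms DORA expects. The field names below are illustrative assumptions, not a prescribed schema:

```python
# Contractual fields to check per register entry; names are
# illustrative, not taken from the DORA text or any product schema.
REQUIRED_FIELDS = {"audit_rights", "exit_plan", "incident_notification"}

def missing_contract_terms(entry: dict) -> set:
    """Return the required fields that are absent or empty."""
    present = {k for k, v in entry.items() if v}
    return REQUIRED_FIELDS - present

entry = {"provider": "OpenAI",
         "audit_rights": True,
         "exit_plan": "gateway switch",
         "incident_notification": None}
print(missing_contract_terms(entry))  # {'incident_notification'}
```

Running a check like this in CI against every register entry turns "can you produce the register on demand" from a quarterly scramble into a standing guarantee.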

3. Resilience Testing (Articles 24–27)

Financial entities must test the operational resilience of their ICT systems, including AI components. This includes scenario-based testing, vulnerability assessments, and — for significant institutions — threat-led penetration testing (TLPT). Every test must be documented with results, remediation actions, and evidence of follow-through.

What Aira Does for Each

Incident Lifecycle with Signed PDFs

Aira's incident management captures the full lifecycle: detection, classification, escalation, resolution, and root cause analysis. Every major incident produces an Ed25519-signed PDF report with RFC 3161 trusted timestamps — the same signing infrastructure used for EU AI Act governance receipts.

# Incident report output:
#
# incident_id: INC-2026-0412
# classification: major (customer impact > 1,000 accounts)
# detection: 2026-04-12T08:14:00Z (automated anomaly detection)
# resolution: 2026-04-12T09:41:00Z (model rollback to v2.3.1)
#
# PDF report: Ed25519 signed, RFC 3161 timestamped
# Verifiable at: /verify/incident/INC-2026-0412
# No authentication required. Regulator can verify independently.

The signed PDF is what you hand to your national competent authority. It's not a self-attested log — it's cryptographic proof of your incident response timeline.
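The tamper-evidence property is easy to illustrate. The sketch below recomputes a report's SHA-256 digest and compares it to the digest that was covered by the signature; the report shape is a made-up example, and the actual Ed25519 verification against a published public key is omitted:

```python
import hashlib
import json

# Illustration only: the report fields and canonicalization are
# assumptions, not Aira's documented report format.
def digest(report: dict) -> str:
    canonical = json.dumps(report, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

report = {"incident_id": "INC-2026-0412",
          "classification": "major",
          "resolution": "model rollback to v2.3.1"}
signed_digest = digest(report)          # the value the signature covers

report["classification"] = "minor"      # attempted retroactive edit
print(digest(report) == signed_digest)  # False: the edit is detectable
```

Because the signature binds to the digest, any post-hoc edit to the report invalidates it, which is exactly the property a self-attested Jira ticket lacks.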

Structured Vendor Register

Every LLM provider your system calls through Aira is automatically tracked in a structured third-party register. The register includes: provider identity, services consumed, data categories processed, geographic locations, contractual metadata, and sub-processor chains.

# Third-party register entry (auto-populated):
#
# provider: OpenAI (via Aira Gateway)
# services: text generation, embedding
# data_categories: customer queries, product descriptions
# processing_locations: US (Azure West), EU (Azure Sweden)
# contract_ref: MSA-2026-0044
# exit_strategy: documented (switch to Anthropic via gateway URL change)
# last_audit: 2026-03-15

When the regulator requests your ICT third-party register, you export it from the Aira dashboard. Every entry is backed by actual usage data, not a spreadsheet someone updated six months ago.
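Deriving register entries from usage data rather than hand-maintained records can be sketched as a simple aggregation. The record shape and export format below are illustrative assumptions, not Aira's actual API:

```python
import json
from collections import defaultdict

# Hypothetical gateway usage records; the field names are assumptions.
usage = [
    {"provider": "OpenAI", "service": "text generation", "region": "EU"},
    {"provider": "OpenAI", "service": "embedding", "region": "US"},
    {"provider": "Anthropic", "service": "text generation", "region": "EU"},
]

# Fold raw usage into one register entry per provider.
register = defaultdict(lambda: {"services": set(), "regions": set()})
for rec in usage:
    register[rec["provider"]]["services"].add(rec["service"])
    register[rec["provider"]]["regions"].add(rec["region"])

export = {provider: {"services": sorted(e["services"]),
                     "regions": sorted(e["regions"])}
          for provider, e in register.items()}
print(json.dumps(export, indent=2))
```

The design choice worth noting: because every entry is a fold over observed traffic, the register can never list a provider you stopped using, or omit one you quietly added.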

Append-Only Test Log

Resilience tests run through Aira produce append-only, cryptographically signed test records. Each record captures: test scenario, inputs, expected vs. actual outputs, pass/fail determination, and remediation actions. The log is immutable — entries can be added but never modified or deleted.

# Resilience test record:
#
# test_id: RT-2026-0087
# type: scenario_based
# scenario: "Model receives adversarial prompt attempting data exfiltration"
# input: [redacted adversarial prompt]
# expected: DENY with content scan flag
# actual: DENY — PII exfiltration blocked by content scan policy
# result: PASS
#
# Ed25519 signed. Appended to immutable test log.
# Covers DORA Art. 25 — proportionate testing requirements.
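The append-only property typically rests on hash chaining: each record's hash covers the previous record's hash, so editing any entry breaks every later link. Here is a minimal stdlib sketch of the idea (Aira additionally signs each record; signing is omitted here):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def append(log: list, record: dict) -> None:
    """Append a record whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(record, sort_keys=True)
    h = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": h})

def verify(log: list) -> bool:
    """Recompute the chain; any modified or deleted entry breaks it."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"test_id": "RT-2026-0087", "result": "PASS"})
append(log, {"test_id": "RT-2026-0088", "result": "FAIL"})
print(verify(log))                    # True
log[0]["record"]["result"] = "PASS!"  # retroactive edit attempt
print(verify(log))                    # False
```

A failed test can therefore never be quietly rewritten into a pass after the fact, which is what makes the log usable as regulatory evidence rather than just an engineering artifact.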

Same Infrastructure, Multiple Regulations

The signing infrastructure behind DORA compliance is the same one Aira uses for EU AI Act governance receipts. Ed25519 signatures, RFC 3161 timestamps, and append-only audit trails serve both frameworks. You don't need separate tools for each regulation — one governance layer covers both.

This matters because DORA and the EU AI Act overlap significantly for financial AI systems. Article 14 of the AI Act (human oversight) and Article 17 of DORA (incident management) both require documented, verifiable evidence of how your AI systems operate. Aira produces that evidence as a byproduct of normal operation.

Get Started

Full DORA compliance guide with setup instructions: /docs/guides/dora-compliance