Regulatory Compliance
AI regulation is here. Your agents need governance now.
The EU AI Act's high-risk requirements take effect in August 2026. Colorado's AI Act sets the US template. Building compliance after the deadline means building it too late.
Maximum penalty: €35M, or 7% of global turnover
High-risk enforcement deadline: Aug 2026
Regulatory Timeline
The clock is ticking
Aug 2024
EU AI Act enters into force
Feb 2025
Prohibited AI practices apply
Aug 2025
GPAI model obligations apply
Aug 2026
High-risk AI system requirements apply (high-risk enforcement)
Aug 2027
Full enforcement for all AI systems
High-risk AI system requirements apply from August 2026.
Compliance
What the law requires — and how TES answers
Two jurisdictions, one governance layer. TES provides immutable audit trails that satisfy both frameworks from a single integration.
EU AI Act
Enacted; enforces from Aug 2026
The world's first comprehensive AI regulation. High-risk AI systems, including autonomous commerce agents, must demonstrate conformity before deployment. Penalties reach €35M or 7% of global turnover.
"High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately."
— EU AI Act, Article 13(1), Regulation (EU) 2024/1689 of the European Parliament and of the Council, entered into force 1 August 2024.
Key Requirements
1. Risk classification for AI systems
2. Documented risk management systems
3. Conformity assessments before deployment
4. Human oversight mechanisms (Art. 14)
5. Ongoing monitoring and reporting
6. Transparency and documentation (Art. 13)
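The human-oversight requirement (Art. 14) comes down to capturing every human intervention as a structured, timestamped record. A minimal sketch of what such a record could look like; the event shape and helper below are illustrative assumptions, not the TES API:

```typescript
// Illustrative shape for a human-oversight event (Art. 14).
// Field names are assumptions for this sketch.
interface OversightEvent {
  agentId: string;                                // which agent acted
  action: string;                                 // what the agent proposed
  decision: "approved" | "rejected" | "modified"; // human outcome
  reviewer: string;                               // human principal who intervened
  timestamp: string;                              // ISO 8601
}

// Append an oversight event to an in-memory log.
function recordOversight(log: OversightEvent[], e: OversightEvent): OversightEvent {
  log.push(e);
  return e;
}

const oversightLog: OversightEvent[] = [];
recordOversight(oversightLog, {
  agentId: "agent-commerce-v2",
  action: "purchase_order",
  decision: "approved",
  reviewer: "ops@example.com",
  timestamp: new Date().toISOString(),
});
console.log(oversightLog.length); // 1
```

In a production system these records would be written to durable, tamper-evident storage rather than memory, so they can back the audit export shown below.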
// Export compliance report for EU AI Act
const report = await tes.audit({
  jurisdiction: "EU",
  regulation: "AI_ACT",
  period: "2026-Q1",
  include: [
    "risk_classification",
    "human_oversight_events",
    "conformity_checks",
    "transparency_logs"
  ]
});
// report.compliant → true
// report.artifacts → 47 exportable records

Colorado AI Act
First US state-level AI law
The first US state to regulate AI systems making consequential decisions. It establishes the template other states are expected to follow: impact assessments, disclosure obligations, and risk management programmes.
Key Requirements
1. Impact assessments for high-risk AI systems
2. Disclosure when AI makes consequential decisions
3. Risk management programmes for deployers

// Generate impact assessment for Colorado
const assessment = await tes.impact_assessment({
  jurisdiction: "US-CO",
  system_id: "agent-commerce-v2",
  include: [
    "consequential_decisions",
    "disclosure_records",
    "risk_management_log"
  ]
});
// assessment.decisions_audited → 1,204
// assessment.disclosures_sent → 1,204
// assessment.gaps → []
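Colorado's disclosure obligation means every consequential decision produces a consumer-facing notice tied to an auditable decision ID. A minimal sketch of such a notice; the record shape and wording are illustrative assumptions, not the TES API or statutory language:

```typescript
// Illustrative disclosure record for a consequential decision.
// Field names and notice text are assumptions for this sketch.
interface Disclosure {
  decisionId: string; // links the notice back to the audit trail
  consumer: string;
  statement: string;
}

function buildDisclosure(decisionId: string, consumer: string): Disclosure {
  return {
    decisionId,
    consumer,
    statement:
      `Decision ${decisionId} was made with the assistance of an AI system. ` +
      `You may request human review.`,
  };
}

const d = buildDisclosure("dec-1204", "consumer-42");
console.log(d.statement.includes("AI system")); // true
```

Keying each disclosure to a decision ID is what lets an impact assessment later show decisions audited and disclosures sent as matching counts, with no gaps.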
Architecture
VI is the receipt.
TES is the CCTV.
Verifiable Intent (VI)
The per-transaction receipt. SD-JWT delegation chains that prove a specific action was authorised by a human principal. Covers identity (L1) and intent constraints (L2).
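The core check on a delegation chain is structural: each link's issuer must be the previous link's delegate, and constraints may only narrow down the chain. A simplified sketch of that walk; it omits SD-JWT signature verification entirely, and the link shape is an assumption for illustration:

```typescript
// Simplified delegation link: `iss` delegates to `sub` under a
// spending constraint. Real SD-JWT links carry signed claims;
// signature verification is deliberately omitted in this sketch.
interface DelegationLink {
  iss: string;       // delegator
  sub: string;       // delegate
  maxAmount: number; // intent constraint (L2)
}

// A chain is valid when each link is issued by the previous delegate
// and constraints only narrow (never widen) down the chain.
function chainIsValid(chain: DelegationLink[]): boolean {
  for (let i = 1; i < chain.length; i++) {
    if (chain[i].iss !== chain[i - 1].sub) return false;
    if (chain[i].maxAmount > chain[i - 1].maxAmount) return false;
  }
  return chain.length > 0;
}

const chain: DelegationLink[] = [
  { iss: "alice", sub: "agent-1", maxAmount: 500 },
  { iss: "agent-1", sub: "subagent-2", maxAmount: 100 },
];
console.log(chainIsValid(chain)); // true
```

A sub-agent trying to raise its own spending limit, or a link issued by anyone other than the previous delegate, fails the walk and the action is never authorised.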
Thing Event System (TES)
The continuous governance layer. An immutable event log of everything the agent did — before, during, and after. Wraps around all three layers, recording what actually happened at L3.
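One common way to make an event log immutable in the tamper-evident sense is hash chaining: each entry commits to its predecessor, so altering any past event breaks every later hash. A self-contained sketch of the idea, not the TES implementation:

```typescript
import { createHash } from "crypto";

// Append-only, hash-chained event log: each entry's hash covers the
// previous hash, so tampering with any entry invalidates the chain.
interface LogEntry {
  event: string;
  prevHash: string;
  hash: string;
}

function append(log: LogEntry[], event: string): LogEntry {
  const prevHash = log.length ? log[log.length - 1].hash : "GENESIS";
  const hash = createHash("sha256").update(prevHash + event).digest("hex");
  const entry = { event, prevHash, hash };
  log.push(entry);
  return entry;
}

// Recompute every hash from the start; any mismatch means tampering.
function verify(log: LogEntry[]): boolean {
  let prev = "GENESIS";
  for (const e of log) {
    const h = createHash("sha256").update(prev + e.event).digest("hex");
    if (e.prevHash !== prev || e.hash !== h) return false;
    prev = e.hash;
  }
  return true;
}

const eventLog: LogEntry[] = [];
append(eventLog, "agent.purchase requested");
append(eventLog, "human.approval granted");
console.log(verify(eventLog)); // true
```

This is what makes the log useful as evidence: a regulator can re-verify the chain independently, rather than trusting that records were never edited.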
Related reading
How to comply with the EU AI Act using event logs
Article-by-article compliance checklist with code examples.
Blog: Your AI agent needs an audit trail, not just guardrails
Why prevention is only half the equation.
Use Case: Agentic commerce infrastructure
How TES provides the system of record for AI agent transactions.
Time is running out
August 2026 is closer than you think
Start building governance into your AI agent infrastructure today.