Thing Event System by Pentatonic

Regulatory Compliance

AI regulation is here. Your agents need governance now.

The EU AI Act's high-risk requirements become enforceable in August 2026. Colorado's AI Act sets the template for US states. Building compliance after the deadline means building it too late.

Maximum penalty: €35M, or 7% of global turnover
High-risk enforcement deadline: Aug 2026

Regulatory Timeline

The clock is ticking

Aug 2024: EU AI Act enters into force
Feb 2025: Prohibited AI practices apply
Aug 2025: GPAI model obligations apply
Aug 2026: High-risk AI system requirements apply (high-risk enforcement begins)
Aug 2027: Full enforcement for all AI systems

Compliance

What the law requires — and how TES answers

Two jurisdictions, one governance layer. TES provides immutable audit trails that satisfy both frameworks from a single integration.

01

EU AI Act

Enacted — enforces Aug 2026

The world's first comprehensive AI regulation. High-risk AI systems — including autonomous commerce agents — must demonstrate conformity before deployment. Penalties reach €35M or 7% of global turnover.

"High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately."

— EU AI Act, Article 13(1), Regulation (EU) 2024/1689 of the European Parliament and of the Council, entered into force 1 August 2024.

Key Requirements

  1. Risk classification for AI systems
  2. Documented risk management systems
  3. Conformity assessments before deployment
  4. Human oversight mechanisms (Art. 14)
  5. Ongoing monitoring and reporting
  6. Transparency and documentation (Art. 13)

tes.audit() — EU AI Act compliance export
// Export compliance report for EU AI Act
const report = await tes.audit({
  jurisdiction: "EU",
  regulation: "AI_ACT",
  period: "2026-Q1",
  include: [
    "risk_classification",
    "human_oversight_events",
    "conformity_checks",
    "transparency_logs"
  ]
});

// report.compliant → true
// report.artifacts → 47 exportable records

02

Colorado AI Act

First US state-level AI law

The first US state law to regulate AI systems that make consequential decisions. It establishes the template other states are expected to follow: impact assessments, disclosure obligations, and risk management programmes.

Key Requirements

  1. Impact assessments for high-risk AI systems
  2. Disclosure when AI makes consequential decisions
  3. Risk management programmes for deployers

tes.impact_assessment() — Colorado compliance

// Generate an impact assessment for Colorado
const assessment = await tes.impact_assessment({
  jurisdiction: "US-CO",
  system_id: "agent-commerce-v2",
  include: [
    "consequential_decisions",
    "disclosure_records",
    "risk_management_log"
  ]
});

// assessment.decisions_audited → 1,204
// assessment.disclosures_sent → 1,204
// assessment.gaps → []

Architecture

VI is the receipt.
TES is the CCTV.

L1 (Identity): VI + TES
L2 (Intent & Constraints): VI + TES
L3 (Action): TES only

Verifiable Intent (VI)

The per-transaction receipt. SD-JWT delegation chains that prove a specific action was authorised by a human principal. Covers identity (L1) and intent constraints (L2).

Thing Event System (TES)

The continuous governance layer. An immutable event log of everything the agent did — before, during, and after. Wraps around all three layers, recording what actually happened at L3.

Time is running out

August 2026 is closer than you think

Start building governance into your AI agent infrastructure today.