EU AI Act Alignment

The EU created a mandate.
Enforcement infrastructure is needed.
We're building it.

HAREProtocol provides runtime enforcement infrastructure designed to support EU AI Act compliance workflows—producing cryptographically verifiable artifacts, not just documentation of intent.

The Challenge

Enforcement Begins 2025-2027. Infrastructure Is Needed.

The EU AI Act mandates compliance for high-risk AI systems. Enterprises deploying AI cannot proceed without governance infrastructure that produces verifiable evidence—not just policy documents.

What the Market Offers

  • Compliance dashboards that track intent
  • Audit checklists completed after the fact
  • Policy documents that aren't machine-readable
  • Risk assessments without enforcement
  • Logs that can be modified or deleted

What the Act Requires

  • Data governance with traceable lineage
  • Technical documentation of AI operations
  • Automatic recording of relevant events
  • Transparency to users and authorities
  • Human oversight with intervention capability

The Gap

Most of the market builds documentation tools. The Act requires enforcement infrastructure. HARE is designed as an enforcement layer that supports compliance workflows. Note: HARE is not a certification authority and does not itself confer legal compliance; it provides technical mechanisms intended to support compliance efforts when fully implemented and independently validated.

Designed Alignment

How HARE Is Designed to Support Each Article

HARE mechanisms are designed to align with EU AI Act requirements through technical implementation. Regulatory interpretation and compliance determination remain the responsibility of deployers. Independent legal and technical validation is required.

Article 10 Data Governance
Requirement: Training, validation, and testing data sets shall be subject to appropriate data governance practices.
HARE Implementation:
  • Capsule format carries data lineage from creation
  • Container-scoped indexes with governed query
  • Every data access produces Evidence Artifact
  • Retrieval lineage tracked across all operations
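To make the lineage claim concrete, here is a minimal Python sketch; the `Capsule` class, its fields, and the `read` API are illustrative assumptions, not HARE's actual capsule format:

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Capsule:
    """Hypothetical data capsule: content plus lineage carried from creation."""
    content: bytes
    source: str                                   # where the data originated
    lineage: list = field(default_factory=list)   # prior transformation steps
    access_log: list = field(default_factory=list)

    def content_hash(self) -> str:
        return hashlib.sha256(self.content).hexdigest()

    def read(self, accessor: str, purpose: str) -> bytes:
        # Every access produces an evidence record as a byproduct.
        self.access_log.append({
            "accessor": accessor,
            "purpose": purpose,
            "content_hash": self.content_hash(),
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return self.content

cap = Capsule(content=b"training row 42",
              source="dataset:clinical-v3",
              lineage=["ingest", "dedupe"])
cap.read(accessor="model-trainer", purpose="training")
print(json.dumps(cap.access_log[0], indent=2))
```

The point of the sketch is that lineage and access evidence travel with the data itself rather than living in a separate log that can drift out of sync.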
Article 11 Technical Documentation
Requirement: Technical documentation shall be drawn up before the AI system is placed on the market.
HARE Implementation:
  • Retrieval-lineage Capsules with chunk hashing
  • Policy-at-time-of-action binding
  • Complete audit bundles exportable on demand
  • Cryptographic proof of system behavior
Article 12 Record-Keeping
Requirement: High-risk AI systems shall technically allow for automatic recording of events (logs) relevant to identifying risks.
HARE Implementation:
  • Evidence Artifacts emitted as byproducts of execution, not bolt-on logs
  • Cryptographically signed, tamper-evident
  • Chained with Merkle continuity
  • Immutable without destroying the chain
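The tamper-evidence property can be illustrated with a simplified hash chain; this is a stand-in for full Merkle continuity, and `EvidenceChain` with its fields is an assumption, not HARE's wire format:

```python
import hashlib
import json

def artifact_hash(artifact: dict) -> str:
    return hashlib.sha256(json.dumps(artifact, sort_keys=True).encode()).hexdigest()

class EvidenceChain:
    """Append-only chain: each artifact commits to its predecessor's hash,
    so modifying or deleting any entry breaks every later link."""
    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        artifact = {"event": event, "prev": prev}
        artifact["hash"] = artifact_hash({"event": event, "prev": prev})
        self.entries.append(artifact)
        return artifact

    def verify(self) -> bool:
        prev = "0" * 64
        for a in self.entries:
            if a["prev"] != prev:
                return False
            if a["hash"] != artifact_hash({"event": a["event"], "prev": a["prev"]}):
                return False
            prev = a["hash"]
        return True

chain = EvidenceChain()
chain.append({"op": "retrieve", "doc": "patient-123"})
chain.append({"op": "infer", "model": "triage-v2"})
assert chain.verify()
chain.entries[0]["event"]["doc"] = "tampered"   # any edit...
assert not chain.verify()                       # ...is detectable
```

A production system would add signatures over each hash and anchor checkpoints externally, but the chaining logic is what makes silent modification or deletion impossible.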
Article 13 Transparency
Requirement: High-risk AI systems shall be designed so that deployers can interpret their outputs and use them appropriately.
HARE Implementation:
  • Disclosure-minimized audit bundles
  • Evidence of what data informed each output
  • Policy decisions traceable to rules
  • Deny-no-signal protects sensitive content
Article 14 Human Oversight
Requirement: High-risk AI systems shall be designed to be effectively overseen by natural persons.
HARE Implementation:
  • PLAN/EVAL/EXECUTE mode separation
  • Intent-scoped elevation with plan visibility
  • Human approval required for EXECUTE operations
  • Dual control for high-risk decisions
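The oversight gate can be sketched as a small state machine; `Session`, `Mode`, and the method names are illustrative assumptions, not HARE's actual API:

```python
from enum import Enum

class Mode(Enum):
    PLAN = "plan"
    EVAL = "eval"
    EXECUTE = "execute"

class OversightError(Exception):
    pass

class Session:
    """Hypothetical mode-separated session: actions are proposed in PLAN,
    reviewed in EVAL, and may only run in EXECUTE after human approval."""
    def __init__(self):
        self.mode = Mode.PLAN
        self.plan = []
        self.approved_by = None

    def propose(self, action: str):
        if self.mode is not Mode.PLAN:
            raise OversightError("proposals only in PLAN mode")
        self.plan.append(action)

    def submit_for_review(self):
        if self.mode is not Mode.PLAN:
            raise OversightError("only a PLAN can be submitted for review")
        self.mode = Mode.EVAL

    def approve(self, human: str):
        # A natural person reviews the full plan before any execution.
        if self.mode is not Mode.EVAL:
            raise OversightError("plan must be under review to approve")
        self.approved_by = human
        self.mode = Mode.EXECUTE

    def execute(self):
        if self.mode is not Mode.EXECUTE or self.approved_by is None:
            raise OversightError("human approval required before EXECUTE")
        return [f"ran: {a}" for a in self.plan]
```

The structural guarantee is that no path reaches `execute` without passing through a human `approve` step, which is the kind of intervention capability Article 14 asks for.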
Article 15 Accuracy & Security
Requirement: High-risk AI systems shall achieve appropriate levels of accuracy, robustness, and cybersecurity.
HARE Implementation:
  • Attestation-gated execution
  • Fail-closed on uncertainty
  • Substrate independence (software to ASIC)
  • Validation Gate checks against ground truth
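Fail-closed behavior under uncertainty can be sketched as follows; `validation_gate` and its parameters are illustrative, not HARE's API:

```python
def validation_gate(request, attested: bool, confidence, threshold: float = 0.9):
    """Fail-closed gate sketch: execute only when attestation succeeds and
    confidence clears the threshold; any uncertainty or error means deny."""
    try:
        if not attested:
            return ("deny", "attestation failed")
        if confidence is None or confidence < threshold:
            return ("deny", "confidence below threshold")
        return ("allow", request)
    except Exception:
        # Fail closed: unexpected errors never default to allow.
        return ("deny", "internal error")
```

The design choice worth noting is the default: every branch that is not an explicit, fully validated success returns a denial, so missing signals degrade to refusal rather than silent execution.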

Enforcement Timeline

EU AI Act Deadlines

  • Aug 2024: Act Enters Force. EU AI Act officially published and effective.
  • Feb 2025: Prohibited AI. Prohibitions on unacceptable-risk AI systems apply.
  • Aug 2025: GPAI Rules. General-purpose AI model obligations apply.
  • Aug 2026: High-Risk AI. Most high-risk AI system requirements apply.
  • Aug 2027: Full Enforcement. All remaining provisions apply.

August 2026 is the Critical Date

High-risk AI systems must demonstrate compliance. Infrastructure must be in place before deployment—not built during enforcement.

Capabilities

What HARE Provides for EU AI Act

01

Designed to Support Compliance

HARE is designed to support compliance workflows through runtime enforcement. Operations that would violate policy are intended to be denied before execution. Regulatory interpretation and ultimate compliance determination remain deployer responsibility.

02

Evidence for Authorities

When regulators ask for proof, you have cryptographic Evidence Artifacts—not logs. Signed, chained, tamper-evident records of every governed operation.

03

Audit Bundle Export

Export complete audit bundles on demand: all Evidence Artifacts, policy snapshots, lineage chains. Ready for regulatory inspection.
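As an illustrative sketch of what such an export could look like (the bundle layout and the `export_audit_bundle` name are assumptions, not HARE's actual format):

```python
import hashlib
import json
from datetime import datetime, timezone

def export_audit_bundle(artifacts: list, policy_snapshot: dict) -> dict:
    """Hypothetical audit-bundle export: packages evidence artifacts with the
    policy in force and a manifest hash so the bundle itself is verifiable."""
    body = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "policy_snapshot": policy_snapshot,
        "artifacts": artifacts,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"bundle": body, "manifest_hash": digest}

bundle = export_audit_bundle(
    artifacts=[{"op": "retrieve", "doc": "case-881"}],
    policy_snapshot={"policy": "gdpr-baseline", "version": "v3"},
)
```

Because the manifest hash covers the whole body, a regulator can recompute it and confirm the bundle was not altered after export.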

04

Data Lineage

Track what data your AI accessed for every operation. Not just "who accessed"—exactly which data elements informed which outputs.

05

Human Oversight Proof

PLAN/EVAL/EXECUTE separation with evidence. Prove that humans reviewed AI proposals before execution. Dual control for high-risk decisions.

06

Cross-Border Ready

Jurisdiction-aware routing. EU data stays in EU. Policy enforcement respects data sovereignty requirements across member states.
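A minimal sketch of jurisdiction-constrained placement, with hypothetical node names and a fail-closed default when no compliant node exists:

```python
def route(record: dict, nodes: dict) -> str:
    """Jurisdiction-aware routing sketch: a record tagged with a jurisdiction
    may only be placed on a node in that jurisdiction; no match fails closed."""
    jurisdiction = record.get("jurisdiction")
    eligible = [name for name, region in nodes.items() if region == jurisdiction]
    if not eligible:
        raise ValueError(f"no node available in jurisdiction {jurisdiction!r}")
    return eligible[0]

nodes = {"fra-1": "EU", "dub-1": "EU", "iad-1": "US"}
assert route({"jurisdiction": "EU", "id": 7}, nodes) in {"fra-1", "dub-1"}
```

Raising instead of falling back to any available node is what keeps EU-tagged data from silently landing outside the EU.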

Risk Classification

Where HARE Applies in the Risk Pyramid

Risk Level   | Examples                                            | HARE Role
Unacceptable | Social scoring, real-time biometric ID              | N/A (prohibited)
High-Risk    | Healthcare AI, legal AI, HR systems, credit scoring | Primary target: full enforcement infrastructure
Limited Risk | Chatbots, emotion recognition                       | Transparency compliance, audit trails
Minimal Risk | Spam filters, game AI                               | Optional governance for best practices

HARE is designed for high-risk AI systems where compliance is mandatory and evidence must survive regulatory scrutiny.

Interested in EU AI Act Alignment?

Contact us to discuss how HARE is designed to provide the enforcement infrastructure your AI systems may need.

eu@hareprotocol.ai