University Early Access Program
Free academic access to HARE's runtime enforcement infrastructure for universities running on-site LLMs.
The Challenge
You run an on-site LLM. Students and researchers use it daily. That's the hard part, and you've solved it. But you have a governance problem you can't solve with infrastructure alone.
When your provost asks "Can we demonstrate FERPA compliance for the AI system?" — you cannot. When your IRB asks "How do we know research subject data was protected during model inference?" — you cannot. When EU AI Act requirements arrive and auditors ask "Show me the evidence trail for this decision" — you cannot.
The Offering
Full access to 171+ specification files: Capsule format, Universal Adapter, Evidence Artifact schema, Policy DSL, and conformance test suite.
A hosted Arbiter endpoint your team can integrate against for research and evaluation. Policy evaluation with evidence emission capabilities.
TypeScript and Python SDK specifications, with reference implementations in development, designed to integrate with existing LLM serving frameworks.
HARE engineering works with your team during integration: architecture review, SDK guidance, policy design, and troubleshooting.
Pre-built policy templates for GDPR, CCPA, FERPA scenarios. Define rules in a readable, auditable format using the HARE Policy DSL.
49 test vectors to verify your integration works correctly. Validate your adapter against the specification.
No license fee for academic deployment. No grant application required. No reporting obligations to funding agencies.
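To make the conformance step concrete, here is a minimal sketch of how a team might drive an adapter through a directory of test vectors. The `adapter.evaluate` method, the vector file layout, and the `{"input": ..., "expected": ...}` shape are illustrative assumptions, not the HARE specification's actual format:

```python
import json
from pathlib import Path

def run_vectors(adapter, vector_dir):
    """Run each conformance vector through the adapter and collect failures.

    Assumes each *.json vector holds an "input" payload and an "expected"
    result; the adapter API shown here is hypothetical.
    """
    failures = []
    for path in sorted(Path(vector_dir).glob("*.json")):
        vector = json.loads(path.read_text())
        result = adapter.evaluate(vector["input"])  # hypothetical adapter API
        if result != vector["expected"]:
            failures.append((path.name, vector["expected"], result))
    return failures
```

A harness like this lets a team re-run the full vector suite after every adapter change and treat a non-empty failure list as a blocking error.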
How It Works
HARE sits in front of your model, not inside it. Your existing LLM pipeline, serving framework, and model weights remain unchanged.
| Component | Before | After |
|---|---|---|
| Your LLM | Unchanged | Unchanged |
| Your model weights | Unchanged | Unchanged |
| Your serving framework | Unchanged | Unchanged |
| Query path | Direct to model | Through HARE governance layer |
| Evidence | None | Cryptographic proof for every operation |
| Policy enforcement | None | Non-bypassable Arbiter evaluation |
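The query-path change in the table can be sketched as a thin governance wrapper: policy is evaluated before the model ever sees the prompt, and an evidence record is emitted for allowed and denied requests alike. This is a conceptual sketch, not the HARE Arbiter API; `policy_fn`, `model_fn`, and the record fields are assumptions for illustration:

```python
import hashlib
import time

def governed_query(prompt, model_fn, policy_fn, evidence_log):
    """Route a query through a governance check before the model.

    policy_fn is a hypothetical stand-in for Arbiter evaluation, returning
    "allow" or "deny"; evidence_log is any append-only list-like sink.
    """
    decision = policy_fn(prompt)
    evidence_log.append({
        "ts": time.time(),
        # Hash rather than store the raw prompt in the evidence record.
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "decision": decision,
    })
    if decision != "allow":
        return None  # denied requests never reach the model
    return model_fn(prompt)
```

The point of the structure is the non-bypass property: the only path to `model_fn` runs through the policy check, so every operation leaves a record regardless of outcome.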
Partnership
Run the integration on your existing LLM infrastructure for research and evaluation purposes. Early Access deployments are experimental and non-production.
Tell us what worked, what didn't, where the specs were unclear, and where the SDK fell short. This is why we do early access.

Let us say publicly: "[University] runs HARE-governed inference on their on-site LLM." We coordinate messaging with your communications team.
Any adapter code or policy templates you create go back to the open spec library (Layer 1 and Layer 2 only — never Layer 3 commercial components).
Optional: If the results are academically interesting, publish them. We provide co-authorship support if desired.
Value
You have a compliance gap today. Every query is ungoverned. HARE closes that gap with cryptographic proof.
Your administration, IRB, and legal team want documentation. HARE produces artifacts they can verify.
Evidence Artifacts are signed, chain-linked, tamper-evident proofs intended to support audit and review.
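The "chain-linked, tamper-evident" property can be illustrated with a generic hash chain: each record embeds the digest of its predecessor, so editing any record invalidates everything after it. This sketch shows the general technique only; the actual Evidence Artifact schema, signing scheme, and field names are defined by the HARE specification, not reproduced here:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder digest for the first record

def append_record(chain, payload):
    """Append a record whose digest covers the payload and the previous digest."""
    prev = chain[-1]["digest"] if chain else GENESIS
    body = {"payload": payload, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "digest": digest})
    return chain

def verify_chain(chain):
    """Recompute every digest and link; any tampered record breaks verification."""
    prev = GENESIS
    for rec in chain:
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["digest"] != expected:
            return False
        prev = rec["digest"]
    return True
```

An auditor holding only the final digest can detect insertion, deletion, or modification anywhere earlier in the chain, which is what makes such artifacts useful for compliance review.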
If EU AI Act or similar regulation arrives, you have infrastructure designed to support compliance workflows.
Students who build on HARE learn a governance system that enterprise and government will need.
Become a proof point: "HARE governance works at scale on real university LLM infrastructure."
Eligibility
Contact us to discuss whether your institution is a good fit for the Early Access Program.
university@hareprotocol.io