University Early Access Program

You have the LLM.
We have the governance.
Let's put them together.

Free academic access to HARE's runtime enforcement infrastructure for universities running on-site LLMs.

The Challenge

The Problem You Already Have

You run an on-site LLM. Students and researchers use it daily. That's the hard part, and you've solved it. But you have a governance problem you can't solve with infrastructure alone.

What You Have

  • A working LLM on university hardware
  • Users querying it every day
  • Research workflows touching sensitive data
  • Administrators asking compliance questions
  • The model

What You're Missing

  • Proof of what data the model accessed for any given query
  • Evidence that FERPA-protected student data was governed
  • Documentation for your IRB showing research data was policy-controlled
  • Artifacts you can hand to auditors or regulators
  • The proof

The gap isn't compute. It's accountability.

When your provost asks "Can we demonstrate FERPA compliance for the AI system?" — you cannot. When your IRB asks "How do we know research subject data was protected during model inference?" — you cannot. When EU AI Act requirements arrive and auditors ask "Show me the evidence trail for this decision" — you cannot.

The Offering

What We're Providing — Free

📚

HARE Spec Library

Full access to 171+ specification files: Capsule format, Universal Adapter, Evidence Artifact schema, Policy DSL, and conformance test suite.

🔧

Sandbox Arbiter

A hosted Arbiter endpoint your team can integrate against for research and evaluation. It evaluates policies and emits Evidence Artifacts for every decision.

💻

SDK Specifications

TypeScript and Python SDK specifications, with reference implementations in development, designed to integrate with existing LLM serving frameworks.

🤝

Direct Technical Support

HARE engineering works with your team during integration: architecture review, SDK guidance, policy design, and troubleshooting.

📝

Policy Templates

Pre-built policy templates for GDPR, CCPA, FERPA scenarios. Define rules in a readable, auditable format using the HARE Policy DSL.
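The authoritative Policy DSL syntax lives in the spec library. As an illustration only, here is a hypothetical FERPA-style template expressed as Python data with a first-match, default-deny evaluator; `FERPA_TEMPLATE`, `evaluate`, and every field name are invented for this sketch, not HARE DSL syntax:

```python
# Hypothetical FERPA-style policy: ordered rules, first match wins, default deny.
# These field names are illustrative only, not the HARE Policy DSL.
FERPA_TEMPLATE = [
    {"effect": "deny",  "when": {"data_class": "student_record", "role": "anonymous"}},
    {"effect": "allow", "when": {"data_class": "student_record", "role": "registrar"}},
    {"effect": "allow", "when": {"data_class": "public"}},
]

def evaluate(policy, context):
    """Return the effect of the first rule whose conditions all match."""
    for rule in policy:
        if all(context.get(k) == v for k, v in rule["when"].items()):
            return rule["effect"]
    return "deny"  # default-deny posture: unmatched requests are refused

print(evaluate(FERPA_TEMPLATE, {"data_class": "student_record", "role": "registrar"}))  # allow
print(evaluate(FERPA_TEMPLATE, {"data_class": "student_record", "role": "student_ta"}))  # deny
```

The default-deny fallthrough is the design choice that matters for audit: a request no rule anticipated is refused and recorded, never silently allowed.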

✅

Conformance Testing

49 test vectors to verify your integration works correctly. Validate your adapter against the specification.
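The real vector format is defined by the HARE conformance suite; purely as a sketch of the workflow, a runner pairs each input with the output the spec expects and counts mismatches (the `VECTORS` shape and the toy `adapter` below are invented for illustration):

```python
# Hypothetical conformance run: each vector pairs a raw request with the
# adapter output the spec expects. Vector fields here are invented.
VECTORS = [
    {"input": {"prompt": "hi", "user": "alice"}, "expected": {"op": "infer", "subject": "alice"}},
    {"input": {"prompt": "hi", "user": ""},      "expected": {"op": "infer", "subject": "anonymous"}},
]

def adapter(request):
    """Toy adapter: normalize a raw request into a governed-operation shape."""
    return {"op": "infer", "subject": request["user"] or "anonymous"}

def run_conformance(vectors):
    """Return (passed, failed) counts for the adapter against the vectors."""
    failures = [v for v in vectors if adapter(v["input"]) != v["expected"]]
    return len(vectors) - len(failures), len(failures)

passed, failed = run_conformance(VECTORS)
print(f"{passed} passed, {failed} failed")  # 2 passed, 0 failed
```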

No license fee for academic deployment. No grant application required. No reporting obligations to funding agencies.

How It Works

Your LLM Stays Exactly As-Is

HARE sits in front of your model, not inside it. Your existing LLM pipeline, serving framework, and model weights remain unchanged.

Before HARE

  • User → API → LLM → Response
  • No governance layer
  • No evidence generation
  • No policy enforcement
  • No proof of compliance

After HARE

  • User → HARE SDK → Adapter → Arbiter → LLM → Evidence
  • Every step governed
  • Every decision recorded
  • Every artifact verifiable
  • Cryptographic proof for auditors

| Component | Before | After |
| --- | --- | --- |
| Your LLM | Unchanged | Unchanged |
| Your model weights | Unchanged | Unchanged |
| Your serving framework | Unchanged | Unchanged |
| Query path | Direct to model | Through HARE governance layer |
| Evidence | None | Cryptographic proof for every operation |
| Policy enforcement | None | Non-bypassable Arbiter evaluation |
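The governed path above can be sketched end to end. This is not the HARE SDK API; `arbiter_evaluate`, `llm_infer`, and `governed_query` are stand-ins that show the shape of the flow: the Arbiter decides before the model is reached, and every decision, allow or deny, leaves an evidence record.

```python
import hashlib
import json
import time

EVIDENCE_LOG = []  # in practice: signed, chain-linked Evidence Artifacts

def arbiter_evaluate(context):
    """Stand-in for the Arbiter: allow only roles cleared for the operation."""
    return "allow" if context["role"] in {"faculty", "registrar"} else "deny"

def llm_infer(prompt):
    """Stand-in for your unchanged on-site model endpoint."""
    return f"model response to: {prompt}"

def governed_query(prompt, role):
    """User -> SDK -> Arbiter -> LLM, with an evidence record at every decision."""
    context = {"prompt": prompt, "role": role}
    decision = arbiter_evaluate(context)
    record = {
        "ts": time.time(),
        "context": context,
        "decision": decision,
        "digest": hashlib.sha256(
            json.dumps(context, sort_keys=True).encode()
        ).hexdigest(),
    }
    EVIDENCE_LOG.append(record)
    if decision == "deny":
        return None  # the model is never reached on a deny
    return llm_infer(prompt)

governed_query("summarize grades", role="student")    # denied, but still evidenced
governed_query("summarize syllabus", role="faculty")  # allowed and evidenced
```

Note that the deny path still produces a record: the evidence trail covers refusals as well as completions, which is what an auditor asks to see.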

Partnership

What We Ask In Return

🚀

Research Deployment

Run the integration on your existing LLM infrastructure for research and evaluation purposes. Early Access deployments are experimental and non-production.

💬

Share Feedback

Tell us what worked, what didn't, where the specs were unclear, where the SDK was insufficient. This is why we do early access.

📢

Public Reference

Let us say publicly: "[University] runs HARE-governed inference on their on-site LLM." We coordinate messaging with your communications team.

🔄

Contribute Back

Any adapter code or policy templates you create go back to the open spec library (Layer 1 and Layer 2 only — never Layer 3 commercial components).

Optional: If the results are academically interesting, publish them. We provide co-authorship support if desired.

Value

Why This Is Worth Your Time

Solve a Real Problem

You have a compliance gap today. Every query is ungoverned. HARE closes that gap with cryptographic proof.

Demonstrate Governed AI

Your administration, IRB, and legal team want documentation. HARE produces artifacts they can verify.

Evidence, Not Logs

Evidence Artifacts are signed, chain-linked, tamper-evident proofs intended to support audit and review.
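Chain-linking is what makes the trail tamper-evident: each artifact covers the hash of its predecessor, so editing any record breaks verification from that point on. HARE's actual artifact schema adds signatures; this sketch, with invented `link` and `verify` helpers, shows only the hash chain:

```python
import hashlib
import json

def _digest(prev_hash, payload):
    """Canonical hash over the previous hash plus this artifact's payload."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def link(prev_hash, payload):
    """Create an artifact whose hash covers the previous artifact's hash."""
    return {"prev": prev_hash, "payload": payload, "hash": _digest(prev_hash, payload)}

def verify(chain):
    """Re-derive every hash; an edited artifact breaks the chain after it."""
    prev = "genesis"
    for artifact in chain:
        if artifact["prev"] != prev or artifact["hash"] != _digest(prev, artifact["payload"]):
            return False
        prev = artifact["hash"]
    return True

chain, prev = [], "genesis"
for op in ["query-1", "query-2", "query-3"]:
    artifact = link(prev, {"operation": op, "decision": "allow"})
    chain.append(artifact)
    prev = artifact["hash"]

print(verify(chain))                        # True
chain[1]["payload"]["decision"] = "deny"    # tamper with history
print(verify(chain))                        # False
```

A real deployment would sign each link with an asymmetric key so that verifiers need only the public key, but the chain structure above is what turns a log into evidence.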

Regulation-Ready

If the EU AI Act or similar regulation applies to your deployments, you already have infrastructure designed to support compliance workflows.

Student Learning

Students who build on HARE learn a governance system that enterprises and government agencies will need.

Reference Deployment

Become a proof point: "HARE governance works at scale on real university LLM infrastructure."

Eligibility

Is Your Institution a Good Fit?

Must Have

  • On-site LLM already running (vLLM, Ollama, TGI, llama.cpp, etc.)
  • At least one person (faculty/staff) who can dedicate time to integration

Should Have

  • Existing compliance concern (FERPA, GDPR, IRB requirements)
  • CS or Data Science department willing to assign student projects

Nice to Have

  • Multiple departments using the LLM (creates multi-role policy scenarios)
  • Cross-institution collaboration (creates federation scenarios)
  • Existing audit/compliance infrastructure (SIEM, GRC platform)

Red Flags

  • No sysadmin capacity to deploy containers
  • Purely cloud-hosted LLM with no local control
  • No identified compliance need
  • No faculty/staff sponsor

Ready to Govern Your AI?

Contact us to discuss whether your institution is a good fit for the Early Access Program.

university@hareprotocol.io