The Governor HQ Constitutional Framework
AI Safety Constitutions for Health & Biometric Data Projects
An AI behavior guidance layer for products that work with health data: prescriptive, executable constraints that prevent medical claims and enforce ethical boundaries across multiple domains.
The Governor HQ is a monorepo of AI Safety Constitutions — domain-specific constraint frameworks for building products that process health and biometric data. Each package enforces hard safety boundaries to prevent AI systems from crossing ethical and legal lines.
This framework is prescriptive and executable — not decorative. Each domain constitution encodes:
- Product scope — What systems should (and must not) do with health data
- Safety boundaries — Hard rules that prevent medical claims, diagnoses, and treatment advice
- Language constraints — How systems communicate with users about their health data
- Product identity — Clear positioning in consumer wellness (not medical) space
- AI agent guidance — Explicit instructions for AI coding assistants
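For example, the language-constraint layer can be checked mechanically at build or review time. The sketch below is illustrative only and not part of any @the-governor-hq package API: a lint-style scan that flags prescriptive or medical phrasing in user-facing copy.

```ts
// Illustrative sketch only, not the packages' actual API. The patterns are
// examples, not an exhaustive banned-phrase list.
const BANNED_PATTERNS: RegExp[] = [
  /\byou (must|need to|should)\b/i,      // commands instead of optional suggestions
  /\b(diagnos\w+|disease|disorder)\b/i,  // diagnostic language
  /\b(supplement|dosage|dose|mg)\b/i,    // treatment / dosing recommendations
];

export function findLanguageViolations(copy: string): string[] {
  return BANNED_PATTERNS
    .filter((pattern) => pattern.test(copy))
    .map((pattern) => `Banned phrasing matched: ${pattern}`);
}

// findLanguageViolations("You must take 200 mg of magnesium") → two violations.
// A compliant rewrite: "Consider winding down a little earlier tonight."
```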
Who This Is For
Developers Building Health Data Products
If you're building products that process health or biometric data and use AI coding assistants (GitHub Copilot, ChatGPT, Claude, Cursor, etc.), these frameworks should be in your AI agent's context.
The framework prevents AI assistants from generating dangerous code that makes medical claims, recommends treatments, or uses prescriptive language.
The Problem
AI coding assistants can generate dangerous code when working with health data:
- ❌ Making medical claims or implied diagnoses
- ❌ Recommending supplements, dosages, or treatments
- ❌ Using authoritative prescriptive language
- ❌ Crossing legal and ethical boundaries
The Solution
✅ Domain-specific constitutional frameworks that prevent these issues
📦 Available Packages
🏃 Wearables & Fitness Trackers
`npm install --save-dev @the-governor-hq/constitution-wearables`
For: Smartwatch and fitness tracker data (Garmin, Apple Watch, Whoop, Oura, Fitbit)
Covers: Sleep, HRV, heart rate, activity, training load, recovery, readiness scores
🧠 Brain-Computer Interfaces (BCI)
`npm install --save-dev @the-governor-hq/constitution-bci`
For: EEG, fNIRS, and neurofeedback data
Covers: Brain waves, focus detection, meditation states, neurofeedback, sleep stages
💭 Therapy & Mental Health
`npm install --save-dev @the-governor-hq/constitution-therapy`
For: Therapy and emotional wellbeing applications
Covers: Mood tracking, journaling, symptom logging, behavioral patterns
⚙️ Core Infrastructure
`npm install --save-dev @the-governor-hq/constitution-core`
Shared safety rules and utilities (auto-installed with domain packages)
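Once installed, the constitution documents still have to reach your AI assistant's context. The script below is a minimal sketch of one possible workflow; it assumes nothing about the packages beyond their install location under node_modules/@the-governor-hq, and the destination path is an assumption — adapt it to wherever your assistant reads project context from.

```ts
// copy-constitutions.ts — hypothetical helper, not shipped by the packages.
// Copies installed constitution packages into a directory an AI assistant
// can load as context (the .ai-context path is an assumption).
import { cpSync, existsSync, mkdirSync, readdirSync } from "node:fs";
import { join } from "node:path";

const SCOPE_DIR = join("node_modules", "@the-governor-hq");
const CONTEXT_DIR = join(".ai-context", "constitutions");

if (existsSync(SCOPE_DIR)) {
  mkdirSync(CONTEXT_DIR, { recursive: true });
  for (const pkg of readdirSync(SCOPE_DIR)) {
    // Mirror each installed package so the assistant can read its documents.
    cpSync(join(SCOPE_DIR, pkg), join(CONTEXT_DIR, pkg), { recursive: true });
  }
}
```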
Not For
These frameworks are for consumer wellness products, not:
- Clinical medical devices (FDA-regulated)
- Diagnostic tools
- Treatment planning systems
- Professional medical software
Core Principles
These principles apply across all domain-specific packages:
| Principle | Detail |
|---|---|
| Personal baseline | Systems must learn each user's normal over time (30–90 days minimum) |
| Deviation-driven | Recommendations only when meaningful change from baseline is detected |
| Behavioral suggestions | Timing adjustments, rest cues, activity modifications — never medical interventions |
| Non-medical language | No diagnoses, no supplements, no treatment protocols, no disease names |
| Consumer wellness only | Clear positioning outside medical/clinical scope |
| Privacy-first | Health data stays local, user-controlled |
| Optionality | "Consider" and "might help" — never "you must" or "you should" |
| Safety first | When in doubt about a feature, default to NO until confirmed safe |
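As a concrete illustration of the first three rows and the safety-first default, here is a minimal sketch of a deviation-driven suggestion gate. The names, thresholds, and wording are hypothetical and not taken from any package.

```ts
// Hypothetical sketch: personal baseline + deviation gate + optional phrasing.
interface DailyReading {
  date: string;  // ISO date
  hrvMs: number; // nightly HRV average in milliseconds
}

const MIN_BASELINE_DAYS = 30; // learn "normal" for 30–90 days before acting

function baseline(history: DailyReading[]): { mean: number; sd: number } | null {
  if (history.length < MIN_BASELINE_DAYS) return null; // not enough data yet
  const values = history.map((r) => r.hrvMs);
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance = values.reduce((a, b) => a + (b - mean) ** 2, 0) / values.length;
  return { mean, sd: Math.sqrt(variance) };
}

function suggestion(history: DailyReading[], today: DailyReading): string | null {
  const base = baseline(history);
  if (!base || base.sd === 0) return null; // safety first: no baseline, no output
  const z = (today.hrvMs - base.mean) / base.sd;
  if (z > -1.5) return null;               // deviation-driven: ignore normal variation
  // Behavioral, optional, non-medical language only.
  return "Your recovery looks lower than your usual range. Consider a lighter day or an earlier wind-down tonight.";
}
```

The thresholds and wording are placeholders; the shape is what matters: no baseline, no output; no meaningful deviation, no suggestion; and any suggestion stays optional and behavioral.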
The Reference Implementation: The Governor
These principles originated from The Governor, a personal recovery-aware AI coach that:
- Reads wearable data (HRV, sleep, resting heart rate, activity)
- Learns your personal baseline over 30-90 days
- Detects meaningful deviations from your normal patterns
- Offers targeted behavioral guidance (timing, rest, activity adjustments)
- Never makes medical claims or recommendations
The framework is broader than this single product: it applies to any application that processes health or biometric data.
Documentation Map
AI Agent Guide ⭐ Start Here for AI-Assisted Development
Comprehensive instructions for AI coding assistants. Includes code patterns, decision trees, validation checklists, and common pitfalls.
Core System
How data-driven systems should read, learn, and make decisions.
- Signals — What data wearables provide (and what they cannot tell us)
- Baseline — How systems learn "normal" for each person
- Deviation Engine — When and why recommendation systems activate
Agents
What data-driven recommendation systems can suggest.
- Recovery Agent — Allowed recovery guidance (HRV-based)
- Stress Agent — Stress load interpretation and behavioral suggestions
Constraints
What systems must never do — the constitutional boundaries.
- Hard Rules — Absolute system limits (product constitution)
- Language Rules — Tone, wording, and phrasing controls
Positioning & Boundaries
- Product Identity — What wearable data products are and are not
- What We Don't Do — Explicit domain boundaries and scope limits
Quick Start for AI Coding Assistants
If you're an AI assistant helping a developer, read this workflow:
- First-time setup: Read `/ai-agent-guide` completely
- Before implementing features: Check `/constraints/hard-rules` and `/what-we-dont-do`
- When writing user-facing text: Validate against `/constraints/language-rules`
- When processing biometric data: Reference `/core/signals` and `/core/baseline`
Golden Rule: If unsure whether something is allowed, assume NO until confirmed.
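In code, the golden rule usually looks like a default-deny gate. The sketch below is hypothetical; the feature names and allowlist are invented for illustration.

```ts
// Features are denied by default; only explicitly reviewed, confirmed-safe
// features are enabled. The entries below are illustrative placeholders.
const CONFIRMED_SAFE_FEATURES = new Set<string>([
  "sleep-trend-visualization",
  "baseline-deviation-nudges",
]);

export function isFeatureAllowed(feature: string): boolean {
  return CONFIRMED_SAFE_FEATURES.has(feature); // anything unreviewed is a NO
}
```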
Why This Matters
Legal & Ethical Risk
Without constraints, AI coding assistants will generate code that:
- Makes medical claims
- Recommends supplements and dosages
- Uses diagnostic language
- Commands users to take health actions
- Compares users to population averages (creating anxiety)
This exposes you to legal liability and harms users.
Regulatory Boundaries
Consumer wellness products live in a gray area between "helpful data visualization" and "medical device." Stay clearly on the wellness side by:
- Never diagnosing
- Never treating
- Never prescribing
- Always deferring to healthcare professionals for medical concerns
User Trust
Users trust you with intimate biometric data. Reciprocate by:
- Learning their personal patterns (not judging against generic standards)
- Speaking calmly and optionally (not alarmingly or commandingly)
- Staying in your lane (wellness feedback, not medical advice)
This documentation is not optional. It is not decorative. It is a constitutional framework that must be enforced in code.