⚠️ This system does not provide medical advice.
The Governor HQ Constitutional Framework

AI Safety Constitutions for Health & Biometric Data Projects

A behavior-guidance layer for AI systems working with health data — prescriptive, executable constraints that prevent medical claims and enforce ethical boundaries across multiple domains.

The Governor HQ is a monorepo of AI Safety Constitutions — domain-specific constraint frameworks for building products that process health and biometric data. Each package enforces hard safety boundaries to prevent AI systems from crossing ethical and legal lines.

This framework is prescriptive and executable — not decorative. Each domain constitution encodes:

  • Product scope — What systems should (and must not) do with health data
  • Safety boundaries — Hard rules that prevent medical claims, diagnoses, and treatment advice
  • Language constraints — How systems communicate with users about their health data
  • Product identity — Clear positioning in consumer wellness (not medical) space
  • AI agent guidance — Explicit instructions for AI coding assistants
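
The five elements above could be modeled as a plain data structure. The sketch below is hypothetical — the interface and field names are illustrative assumptions, not the packages' actual API:

```typescript
// Hypothetical sketch of what a domain constitution encodes.
// Field names are illustrative; they are NOT the packages' actual API.

interface DomainConstitution {
  domain: string;                   // e.g. "wearables", "bci", "therapy"
  scope: string[];                  // what the product may do with health data
  hardRules: string[];              // boundaries that must never be crossed
  bannedPhrases: string[];          // prescriptive/medical language to reject
  positioning: "consumer-wellness"; // never "medical" or "clinical"
}

const wearables: DomainConstitution = {
  domain: "wearables",
  scope: ["visualize trends", "compare to the user's personal baseline"],
  hardRules: ["no diagnoses", "no treatment advice", "no supplement dosages"],
  bannedPhrases: ["you must", "you should", "dosage", "take this supplement"],
  positioning: "consumer-wellness",
};
```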

Who This Is For

Developers Building Health Data Products

If you're building products that process health or biometric data and use AI coding assistants (GitHub Copilot, ChatGPT, Claude, Cursor, etc.), these frameworks should be in your AI agent's context.

The framework prevents AI from generating dangerous code with medical claims, treatment recommendations, or prescriptive language.

The Problem

AI coding assistants can generate dangerous code when working with health data:

  • ❌ Making medical claims or implied diagnoses
  • ❌ Recommending supplements, dosages, or treatments
  • ❌ Using authoritative prescriptive language
  • ❌ Crossing legal and ethical boundaries

The Solution

✅ Domain-specific constitutional frameworks that prevent these issues


📦 Available Packages

🏃 Wearables & Fitness Trackers

npm install --save-dev @the-governor-hq/constitution-wearables

For: Smartwatch and fitness tracker data (Garmin, Apple Watch, Whoop, Oura, Fitbit)
Covers: Sleep, HRV, heart rate, activity, training load, recovery, readiness scores

📖 Wearables Documentation

🧠 Brain-Computer Interfaces (BCI)

npm install --save-dev @the-governor-hq/constitution-bci

For: EEG, fNIRS, and neurofeedback data
Covers: Brain waves, focus detection, meditation states, neurofeedback, sleep stages

💭 Therapy & Mental Health

npm install --save-dev @the-governor-hq/constitution-therapy

For: Therapy and emotional wellbeing applications
Covers: Mood tracking, journaling, symptom logging, behavioral patterns

⚙️ Core Infrastructure

npm install --save-dev @the-governor-hq/constitution-core

Shared safety rules and utilities (auto-installed with domain packages)


Not For

These frameworks are for consumer wellness products, not:

  • Clinical medical devices (FDA-regulated)
  • Diagnostic tools
  • Treatment planning systems
  • Professional medical software

Core Principles

These principles apply across all domain-specific packages:

  • Personal baseline — Systems must learn each user's normal over time (30–90 days minimum)
  • Deviation-driven — Recommendations only when a meaningful change from baseline is detected
  • Behavioral suggestions — Timing adjustments, rest cues, activity modifications — never medical interventions
  • Non-medical language — No diagnoses, no supplements, no treatment protocols, no disease names
  • Consumer wellness only — Clear positioning outside medical/clinical scope
  • Privacy-first — Health data stays local and user-controlled
  • Optionality — "Consider" and "might help" — never "you must" or "you should"
  • Safety first — When in doubt about a feature, default to NO until confirmed safe
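
The baseline and deviation principles can be sketched as a small function. The thresholds, window size, and wording below are illustrative assumptions, not values taken from the packages:

```typescript
// Sketch of deviation-driven logic: only speak when a metric moves
// meaningfully away from the user's own baseline. The 30-day minimum,
// 2-sigma threshold, and suggestion text are illustrative assumptions.

function baselineStats(history: number[]): { mean: number; sd: number } {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  return { mean, sd: Math.sqrt(variance) };
}

// Returns a hedged, optional suggestion only on meaningful deviation;
// returns null otherwise — no recommendation when nothing has changed.
function suggest(history: number[], today: number): string | null {
  if (history.length < 30) return null; // baseline not learned yet
  const { mean, sd } = baselineStats(history);
  if (sd === 0 || Math.abs(today - mean) / sd < 2) return null;
  return today < mean
    ? "Your HRV is lower than your usual range. Consider a lighter day."
    : null; // above-baseline HRV needs no cue
}
```

Note the language: "Consider a lighter day", never "you must rest" — the optionality principle applied in code.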

The Reference Implementation: The Governor

These principles originated from The Governor, a personal recovery-aware AI coach that:

  • Reads wearable data (HRV, sleep, resting heart rate, activity)
  • Learns your personal baseline over 30–90 days
  • Detects meaningful deviations from your normal patterns
  • Offers targeted behavioral guidance (timing, rest, activity adjustments)
  • Never makes medical claims or recommendations

The framework is broader than this single product — it applies to any wearable data application.


Documentation Map

AI Agent Guide ⭐ Start Here for AI-Assisted Development

Comprehensive instructions for AI coding assistants. Includes code patterns, decision trees, validation checklists, and common pitfalls.

Core System

How data-driven systems should read, learn, and make decisions.

  • Signals — What data wearables provide (and what they cannot tell us)
  • Baseline — How systems learn "normal" for each person
  • Deviation Engine — When and why recommendation systems activate

Agents

What data-driven recommendation systems can suggest.

Constraints

What systems must never do — the constitutional boundaries.

Positioning & Boundaries


Quick Start for AI Coding Assistants

If you're an AI assistant helping a developer, read this workflow:

  1. First-time setup: Read /ai-agent-guide completely
  2. Before implementing features: Check /constraints/hard-rules and /what-we-dont-do
  3. When writing user-facing text: Validate against /constraints/language-rules
  4. When processing biometric data: Reference /core/signals and /core/baseline
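
Step 3 — validating user-facing text — can be approximated with a simple phrase check. The banned list below is an illustrative assumption for this sketch, not the framework's actual rule set:

```typescript
// Illustrative language-rule check: flag prescriptive or medical phrasing
// in user-facing copy. The phrase list is an assumption for this sketch,
// NOT the framework's actual rules.

const BANNED_PATTERNS: RegExp[] = [
  /\byou (must|should|need to)\b/i,                 // commands, not options
  /\b(diagnos\w*|prescri\w*|dosage|supplement)\b/i, // medical language
];

function violations(text: string): string[] {
  return BANNED_PATTERNS.filter((p) => p.test(text)).map((p) => String(p));
}

violations("You must take a magnesium supplement."); // two violations
violations("Consider winding down earlier tonight."); // no violations
```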

Golden Rule: If unsure whether something is allowed, assume NO until confirmed.


Why This Matters

Legal & Ethical Risk

Left unconstrained, AI coding assistants will generate code that:

  • Makes medical claims
  • Recommends supplements and dosages
  • Uses diagnostic language
  • Commands users to take health actions
  • Compares users to population averages (creating anxiety)

This exposes you to legal liability and harms users.

Regulatory Boundaries

Consumer wellness products live in a gray area between "helpful data visualization" and "medical device." Stay clearly on the wellness side by:

  • Never diagnosing
  • Never treating
  • Never prescribing
  • Always deferring to healthcare professionals for medical concerns

User Trust

Users trust you with intimate biometric data. Reciprocate by:

  • Learning their personal patterns (not judging against generic standards)
  • Speaking calmly and optionally (not alarmingly or commandingly)
  • Staying in your lane (wellness feedback, not medical advice)

This documentation is not optional. It is not decorative. It is a constitutional framework that must be enforced in code.