
WAI Framework
Institutional memory.

A hub-and-spoke knowledge operating system that gives your projects persistent intelligence — memory that doesn't just survive sessions, it compounds across them.

TL;DR — Try it yourself

Initialize. Work. Close. Reopen. It remembers.

Clone and init

git clone https://github.com/wheelwright-ai/framework

From inside any project folder, open an LLM session and say:

wai wakeup (look in ./WAI-Spoke folder)

The foundation interview captures your stack, decisions, and goals into WAI files.

Work a session, then close out

Build features, make decisions, write code. When you're done, run the closeout. WAI captures everything — state, commitments, context — into persistent files.

Reopen anywhere

Open the project in a different IDE, a different LLM, a different machine. Run wai wakeup. The agent briefs you on exactly where you left off.

That's the magic. Close in VS Code. Reopen in Cursor. Switch from Claude to ChatGPT. The Framework remembers your architecture, your decisions, and your next steps. No re-explaining. No rework. Just continuity.
See the continuity

One project. Three tools. Zero context loss.

Morning

VS Code + Claude

You scaffold the OAuth callback, decide on PKCE over implicit grant, and write 3 tests. Run closeout.

Afternoon

Cursor + GPT-4

Open the same project in Cursor. wai wakeup briefs GPT-4: “OAuth callback scaffolded. PKCE chosen. CSRF state param pending. 3 tests passing.”

Next day

Terminal + Copilot

SSH from your laptop. wai wakeup again. Copilot picks up where GPT-4 left off. Zero re-explanation.

Three tools. Two LLMs. One unbroken thread of context.

As long as you're working from the same project folder — whether it's synced by Git or just shared on disk — your IDE, terminal, and model are all operating against the same WAI state.
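As a concrete sketch, that shared state is just files in the project folder. The layout below is illustrative only — apart from the WAI-Spoke folder mentioned above, the file names are not part of any published spec:

```text
project/
├── src/                   # your code, as usual
└── WAI-Spoke/             # WAI state: plain text, committed alongside the code
    ├── foundation.md      # stack, decisions, goals from the intake interview
    ├── session-ledger.md  # closeout summaries, newest first
    └── lugs/              # knowledge records
```

Because the state travels with the folder, any tool that can read files gets the same picture.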

Architecture

Hub-and-spoke intelligence

The Hub is the shared brain — the clearinghouse that consolidates patterns, decisions, and institutional knowledge across your projects.

Each Spoke is a project with its own context, its own orchestration layer, and a team of Advisors. The spoke guards task-specific context while staying connected to the Hub's shared intelligence.

WAI Tracks serve as the transport layer, carrying context between sessions, agents, and spokes. Lugs ride inside Tracks in transit — knowledge records moving where they're needed.

Advisors navigate by folder convention — their position in the directory tree tells them their scope. No configuration files, no routing tables. Structure implies scope.
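The folder-convention idea can be sketched in a few lines. This is a hypothetical illustration — the `advisors/` directory name and the helper are assumptions, not the Framework's actual implementation:

```python
from pathlib import Path

def advisor_scope(advisor_dir: str) -> list[str]:
    """Derive an Advisor's scope purely from its directory position.

    Hypothetical sketch: assumes Advisors live under a spoke's
    'advisors/' tree, where each nesting level narrows the domain.
    """
    parts = Path(advisor_dir).parts
    # Everything after the 'advisors' folder is the scope chain.
    idx = parts.index("advisors")
    return list(parts[idx + 1:])

# Structure implies scope: no config files, no routing tables.
print(advisor_scope("project-a/WAI-Spoke/advisors/backend/auth"))
# → ['backend', 'auth']
```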

[Diagram: a central Hub holding shared memory, connected to three Spokes (Project A, Project B, Project C), each guarding its own task context, with Tracks in transit carrying knowledge between them.]
Memory design principles

Canonical, diffable, reconstructable

WAI memory is canonical — human-readable files that any developer can inspect, edit, and understand without tooling.

It's diffable — because it lives in plain text, Git shows you exactly what changed, when, and why. No hidden state.

It's reconstructable — if derived memory drifts or corrupts, you rebuild from the canonical source. The Framework delivers insights on top of that foundation; the foundation itself never becomes a mystery.

You get semantic, scalable recall — without building a hidden brain you can't audit or recover.
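The "reconstructable" principle can be shown in miniature: derived memory is a pure function of the canonical files, so a corrupted index is never a crisis — delete it and regenerate. This is a toy sketch under that assumption, not the Framework's actual rebuild logic:

```python
import tempfile
from pathlib import Path

def rebuild_index(canonical_dir: Path) -> dict[str, str]:
    """Derived memory = a pure function of canonical plain-text files."""
    return {
        p.stem: p.read_text().splitlines()[0]  # first line = record title
        for p in sorted(canonical_dir.glob("*.md"))
    }

# Demo: two canonical records, index rebuilt from scratch.
with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "auth.md").write_text("PKCE over implicit grant\n...")
    (root / "stack.md").write_text("FastAPI + Postgres\n...")
    index = rebuild_index(root)
    print(index)
    # → {'auth': 'PKCE over implicit grant', 'stack': 'FastAPI + Postgres'}
```

Because the index is rebuilt rather than mutated, the canonical source stays the single point of truth.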

Core concepts

The building blocks

Six building blocks power the entire system.

Skills

Executable capabilities

Verified, repeatable capabilities with embedded tests. Skills are executable during wakeup — not documentation. They declare what they do, what they read, what they write, and how to verify they worked.
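A Skill's declare-and-verify shape might look like the sketch below. The field names and the `Skill` class are hypothetical — the real Skill Contract is specified in the full documentation:

```python
# Hypothetical sketch of a Skill: declares what it does, what it
# reads, what it writes, and how to verify it worked.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    name: str
    does: str                     # what it does
    reads: list[str]              # inputs it consumes
    writes: list[str]             # files/state it produces
    run: Callable[[], None]       # executable body, not documentation
    verify: Callable[[], bool]    # embedded check that it worked

state: dict[str, str] = {}

summarize = Skill(
    name="session-summary",
    does="Write a closeout summary of the session",
    reads=["session notes"],
    writes=["ledger entry"],
    run=lambda: state.update(ledger="OAuth callback scaffolded; PKCE chosen"),
    verify=lambda: "ledger" in state,
)

summarize.run()
assert summarize.verify()  # a Skill proves it worked, not just claims it
```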

Lugs

Actionable knowledge records

Records with origin stories, evolution context, and cross-references. Lugs capture not just what was decided but why. Newest version is HEAD; prior versions nest underneath for audit and apprenticeship.
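The HEAD-plus-history shape of a Lug can be sketched as follows. Field names here are illustrative, not the published Lug Schema:

```python
# Hypothetical sketch of a Lug: the newest version is HEAD, and
# prior versions nest underneath for audit and apprenticeship.
from dataclasses import dataclass, field

@dataclass
class LugVersion:
    decision: str
    rationale: str  # captures *why*, not just what

@dataclass
class Lug:
    topic: str
    versions: list[LugVersion] = field(default_factory=list)

    def record(self, decision: str, rationale: str) -> None:
        self.versions.insert(0, LugVersion(decision, rationale))

    @property
    def head(self) -> LugVersion:
        return self.versions[0]  # newest version is HEAD

auth = Lug("oauth-flow")
auth.record("implicit grant", "simplest to ship")
auth.record("PKCE", "implicit grant is deprecated; PKCE resists interception")

print(auth.head.decision)           # → PKCE
print(auth.versions[1].decision)    # prior version, kept for audit → implicit grant
```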

PEV Contract

Perceive / Execute / Verify

The universal execution pattern. Every action follows the same contract: understand the state, do the work, confirm the result. Eliminates interpretation gaps between intent and action.
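The three-step contract is simple enough to sketch directly. This is a toy illustration of the pattern, not the Framework's implementation:

```python
# Hypothetical sketch of Perceive / Execute / Verify: understand the
# state, do the work, confirm the result before trusting it.
from typing import Any, Callable

def pev(perceive: Callable[[], Any],
        execute: Callable[[Any], Any],
        verify: Callable[[Any], bool]) -> Any:
    state = perceive()        # Perceive: understand the current state
    result = execute(state)   # Execute: do the work
    if not verify(result):    # Verify: confirm intent matched action
        raise RuntimeError("verification failed")
    return result

# Toy example: appending an entry to an in-memory ledger.
ledger: list[str] = []
pev(
    perceive=lambda: len(ledger),
    execute=lambda n: (ledger.append(f"entry {n}"), ledger)[-1],
    verify=lambda result: result[-1] == "entry 0",
)
print(ledger)  # → ['entry 0']
```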

Ozi

Orchestration + trust boundary

The Chief of Staff within each spoke. Conducts intake interviews, manages Advisors, handles inter-spoke communication. Only Ozi reviews and acts on Advisor outputs.

Advisors

Domain experts, recursively nested

Specialized agents nested within spokes. They navigate by folder convention — their directory position defines their scope. Advisors can nest recursively for deep domain work.

Imperatives

Negotiated guardrails

Hard-stop boundaries per project with autonomy tiers. Personal projects get act-then-report freedom. Critical commercial work stays at propose-only. Respects your risk profile.

Go deeper

Full Documentation

Complete specs for the Lug Schema, Skill Contract, Hub protocols, built-in Skills, session ledger, and cross-node communication.

Read the docs
Start lighter?

Try Tracks first

Not ready for the full operating system? WAI Tracks activate alignment with zero infrastructure. One file, any AI, immediate value.

Explore Tracks