A hub-and-spoke knowledge operating system that gives your projects persistent intelligence — memory that doesn't just survive sessions, it compounds across them.
git clone https://github.com/wheelwright-ai/framework
From inside any project folder, open an LLM session and say:
wai wakeup (look in ./WAI-Spoke folder)
The foundation interview captures your stack, decisions, and goals into WAI files.
Build features, make decisions, write code. When you're done, run the closeout. WAI captures everything — state, commitments, context — into persistent files.
Open the project in a different IDE, a different LLM, a different machine. Run wai wakeup. The agent briefs you on exactly where you left off.
You scaffold the OAuth callback, decide on PKCE over implicit grant, and write 3 tests. Run closeout.
Open the same project in Cursor. wai wakeup briefs GPT-4: “OAuth callback scaffolded. PKCE chosen. CSRF state param pending. 3 tests passing.”
SSH from your laptop. wai wakeup again. Copilot picks up where GPT-4 left off. Zero re-explanation.
Three tools. Two LLMs. One unbroken thread of context.
As long as you're working from the same project folder — whether it's synced by Git or just shared on disk — your IDE, terminal, and model are all operating against the same WAI state.
The Hub is the shared brain — the clearinghouse that consolidates patterns, decisions, and institutional knowledge across your projects.
Each Spoke is a project with its own context, its own orchestration layer, and a team of Advisors. The spoke guards task-specific context while staying connected to the Hub's shared intelligence.
WAI Tracks serve as the transport layer, carrying context between sessions, agents, and spokes. Lugs — knowledge records — ride inside Tracks to wherever they're needed.
Advisors navigate by folder convention — their position in the directory tree tells them their scope. No configuration files, no routing tables. Structure implies scope.
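To make "structure implies scope" concrete, here is a minimal illustrative sketch. The `Advisors` directory marker and the example paths are assumptions for illustration, not the Framework's actual layout:

```python
from pathlib import PurePosixPath

def advisor_scope(advisor_path: str) -> list[str]:
    """Derive an Advisor's scope chain purely from its directory position.

    Assumed convention: each segment that follows an 'Advisors' directory
    names a domain; nesting yields an outer-to-inner scope chain.
    """
    parts = PurePosixPath(advisor_path).parts
    return [parts[i + 1] for i, p in enumerate(parts[:-1]) if p == "Advisors"]

scope = advisor_scope("WAI-Spoke/Advisors/security/Advisors/oauth/advisor.md")
print(scope)  # ['security', 'oauth']
```

No routing table is consulted: moving the file moves its scope, which is the point of convention-over-configuration.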
WAI memory is canonical — human-readable files that any developer can inspect, edit, and understand without tooling.
It's diffable — because it lives in plain text, Git shows you exactly what changed, when, and why. No hidden state.
It's reconstructable — if derived memory drifts or corrupts, you rebuild from the canonical source. The Framework delivers insights on top of that foundation; the foundation itself never becomes a mystery.
You get semantic, scalable recall — without building a hidden brain you can't audit or recover.
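A sketch of what "reconstructable" means in practice, assuming a hypothetical derived keyword index built over canonical `.md` files (the index format is invented for illustration; only the canonical files are WAI's real source of truth):

```python
import re
import tempfile
from pathlib import Path

def rebuild_index(canonical_dir: Path) -> dict[str, set[str]]:
    """Regenerate a derived keyword index from the canonical plain-text files.

    Because the index is pure derivation, it can be deleted and rebuilt
    at any time without losing information.
    """
    index: dict[str, set[str]] = {}
    for f in sorted(canonical_dir.glob("*.md")):
        for word in set(re.findall(r"[a-z]+", f.read_text().lower())):
            index.setdefault(word, set()).add(f.name)
    return index

with tempfile.TemporaryDirectory() as d:
    Path(d, "auth.md").write_text("PKCE chosen over implicit grant")
    Path(d, "tests.md").write_text("three tests passing, CSRF param pending")
    first = rebuild_index(Path(d))
    second = rebuild_index(Path(d))  # throw the index away, rebuild: identical
    assert first == second
    print(sorted(first["pkce"]))  # ['auth.md']
```

If the derived layer ever drifts, the fix is one rebuild from source, not archaeology inside an opaque database.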
Six building blocks power the entire system.
Verified, repeatable capabilities with embedded tests. Skills are executable during wakeup — not documentation. They declare what they do, what they read, what they write, and how to verify they worked.
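The shape of that declaration might look like the following sketch. Field names (`does`, `reads`, `writes`, `verify`) are assumptions standing in for the real Skill Contract spec:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Skill:
    """Illustrative Skill contract: capability plus its own embedded check."""
    name: str
    does: str            # what the skill does
    reads: list[str]     # files it consumes
    writes: list[str]    # files it produces
    verify: Callable[[], bool]  # embedded test, runnable during wakeup

    def run_check(self) -> bool:
        return self.verify()

closeout = Skill(
    name="closeout",
    does="Capture session state into persistent WAI files",
    reads=["WAI-Spoke/session.md"],
    writes=["WAI-Spoke/ledger.md"],
    verify=lambda: True,  # stand-in; a real skill would inspect its outputs
)
print(closeout.run_check())  # True — a skill that cannot verify itself fails wakeup
```

The key property is that the contract is executable: wakeup can run `verify` rather than trusting prose documentation.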
Records with origin stories, evolution context, and cross-references. Lugs capture not just what was decided but why. Newest version is HEAD; prior versions nest underneath for audit and apprenticeship.
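A minimal sketch of that HEAD-plus-history structure, with invented field names (the actual Lug Schema lives in the docs):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Lug:
    """Illustrative Lug: newest version is HEAD; priors nest underneath."""
    decision: str
    why: str                          # not just what, but why
    previous: Optional["Lug"] = None  # prior version, kept for audit

    def history(self) -> list[str]:
        node, trail = self, []
        while node is not None:
            trail.append(node.decision)
            node = node.previous
        return trail

v1 = Lug("implicit grant", "simplest to ship")
v2 = Lug("PKCE", "mitigates auth-code interception", previous=v1)
print(v2.history())  # ['PKCE', 'implicit grant'] — the audit trail survives
```

Reading HEAD gives the current answer; walking `previous` gives the apprenticeship: how the team got there.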
The universal execution pattern. Every action follows the same contract: understand the state, do the work, confirm the result. Eliminates interpretation gaps between intent and action.
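The understand/do/confirm contract can be sketched generically; the function shape below is an assumption, not the Framework's API:

```python
from typing import Callable, TypeVar

S = TypeVar("S")
R = TypeVar("R")

def execute(understand: Callable[[], S],
            do: Callable[[S], R],
            confirm: Callable[[S, R], bool]) -> R:
    """Every action: read the state, act, verify — never silently drift."""
    state = understand()
    result = do(state)
    if not confirm(state, result):
        raise RuntimeError("post-condition failed; result not confirmed")
    return result

# Toy usage: bump a counter and confirm the write actually landed.
store = {"counter": 3}
execute(
    understand=lambda: store["counter"],
    do=lambda s: store.update(counter=s + 1) or store["counter"],
    confirm=lambda s, r: r == s + 1,
)
print(store["counter"])  # 4
```

Because the confirm step is mandatory, "I did the work" and "the work verifiably happened" can never diverge unnoticed.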
The Chief of Staff within each spoke. Conducts intake interviews, manages Advisors, handles inter-spoke communication. Only Ozi reviews and acts on Advisor outputs.
Specialized agents nested within spokes. They navigate by folder convention — their directory position defines their scope. Advisors can nest recursively for deep domain work.
Hard-stop boundaries per project with autonomy tiers. Personal projects get act-then-report freedom. Critical commercial work stays at propose-only. Respects your risk profile.
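A sketch of how such a guardrail table might behave; the tier names echo the text, but the lookup structure and project names are illustrative assumptions:

```python
from enum import Enum

class Tier(Enum):
    PROPOSE_ONLY = 1      # critical commercial work: suggest, never act
    ACT_THEN_REPORT = 2   # personal projects: act freely, report after

# Hypothetical per-project guardrail table.
GUARDRAILS = {
    "side-project": Tier.ACT_THEN_REPORT,
    "client-billing": Tier.PROPOSE_ONLY,
}

def may_act_autonomously(project: str) -> bool:
    # Unknown projects default to the safest tier: propose-only.
    return GUARDRAILS.get(project, Tier.PROPOSE_ONLY) is Tier.ACT_THEN_REPORT

print(may_act_autonomously("side-project"))    # True
print(may_act_autonomously("client-billing"))  # False
```

Defaulting unknown projects to propose-only is the hard-stop property: autonomy is granted explicitly, never assumed.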
Complete specs for the Lug Schema, Skill Contract, Hub protocols, built-in Skills, session ledger, and cross-node communication.
Read the docs

Not ready for the full operating system? WAI Tracks activate alignment with zero infrastructure. One file, any AI, immediate value.
Explore Tracks