TL;DR: SpecFact is the governance and memory layer for AI-assisted development: Git-native Markdown rules, a CLI-first module system, and a roadmap from reviews and CI evidence to steerable context, without replacing Copilot, Cursor, or your merge gates.
North star: This article is a vision piece. It describes where we are taking the product: how governance, memory, and modules should fit together over time. It is not a runtime or API reference. Some of what you read is already in the CLI and repos today; other parts are direction and planned work. End-to-end flows are the target shape, not a claim that every step is live in production yet.
AI coding agents are everywhere now: Copilot, Cursor, Claude Code, CodeRabbit, BugBot, and more. Many teams are still missing one critical piece: a governance and memory layer that makes those tools follow your rules, your architecture, and your standards.
SpecFact is that missing layer.
It does not try to replace your existing AI tools. Instead, SpecFact lives next to your code and CI, curates what you learn from reviews and tests, and turns that into compact Markdown rules and knowledge that your AI tools can read as instructions and context.
## Why AI-Assisted Teams Need SpecFact
Left alone, AI tools tend to:
- Re-introduce the same bugs and anti-patterns across repos.
- Drift away from agreed architecture and boundaries over time.
- Ignore hard-won lessons from previous incidents and reviews.
- Burn tokens on speculative exploration instead of focused change proposals.
Most of those problems are not about model quality; they are about missing guardrails and memory:
- No single place where your team's rules, decisions, and patterns live.
- No mechanism to keep those rules aligned with reality as code changes.
- No way to feed those rules back into all the different AI tools you use.
SpecFact exists to solve exactly this class of problems.
## What SpecFact Is (Today)
At its core, SpecFact is:
- A CLI-first, local-first toolkit (`specfact-cli`) that runs next to your repositories.
- A module system (`specfact-cli-modules`) for domain-specific governance (code quality, security, architecture, knowledge, FinOps, and more).
- A growing set of Markdown rules and docs that describe how AI tools should behave in your codebase:
  - `docs/agent-rules/…`: how agents are expected to act.
  - `docs/architecture/…`: structural constraints and decisions.
  - `docs/prompts/…`: reusable prompt patterns.
  - `docs/module-system/…`: how modules extend SpecFact safely.
A few important clarifications:
**SpecFact does not run AI models.** It does not call OpenAI, Anthropic, Gemini, or similar providers. Instead, it produces and maintains inputs for those tools: rules, instructions, and context you can wire into IDE agents, PR bots, and CI.

**SpecFact is AI-agnostic.** Whether your team uses Copilot, Cursor, Claude Code, Windsurf, or something else, SpecFact's output is plain text and Markdown plus config that any of them can consume.

**SpecFact is Git-native.** Rules, knowledge, and module contracts live in your repo as Markdown. They can be diffed, reviewed, branched, and versioned like code.
Even in its current state, this gives you a concrete place to:
- Encode team rules (style, architecture, security) as Markdown.
- Start organizing knowledge your AI tools should respect.
- Structure modules so new governance features can plug in cleanly.
## How SpecFact Evolved to This Point
SpecFact did not appear in a vacuum. It grew out of two directions that converged:
- Experience with "memory" stacks that optimize for storage over steerability.
  Teams often reach for heavy retrieval setups to capture developer context. The valuable part turned out not to be any particular database or transport; it was the knowledge layer on top: collections of evidence, workflows, validation, and structured thinking that could be turned into reviewable rules. SpecFact focuses there first: Markdown-first and backend-agnostic, so your governance story does not depend on a single retrieval technology.
- The SpecFact module system and OpenSpec workflows.
  SpecFact introduced composable building blocks (a command registry, package manifests for modules, and similar) so extensions stay contract-driven. Docs for agent rules, architecture, prompts, and plans form a natural home for governance.
From those directions, a clear design emerged:
- Knowledge and workflows should live as Markdown modules, not as tightly coupled storage schemas.
- SpecFact should focus on how teams think and behave, not on being another AI runtime.
That is the direction you see in the current repositories: a CLI, a module system, and a set of rules and docs, with richer "developer memory" concerns intentionally expressed as modules you can evolve without locking early adopters into one backend.
## The High-Level Flow: Your World → SpecFact → Your AI Tools
A simple mental model:
Your World → SpecFact → AI and delivery
### 1. Your World
On the left is everything you already have today:
- Code repositories and branches.
- CI pipelines and test results.
- Security scans and dependency checks.
- AI and human PR reviews from tools like CodeRabbit, BugBot, Copilot Review, and others.
These are signals and evidence: they tell you what is going well and what keeps going wrong.
### 2. SpecFact Engine

In the middle sit `specfact-cli` and its modules, conceptually split into three stages:
- Analyze: ingest structured signals and evidence:
  - Code diffs and structure.
  - Test results and transitions.
  - Security findings.
  - AI and human PR review comments.
- Distill: turn noisy evidence into compact knowledge:
  - Evidence → learnings → rules.
  - Rules live as short, high-signal Markdown files (on the order of hundreds of tokens per rule file) with frontmatter for domain, confidence, scope, and similar fields.
- Govern: group those rules under governance pillars:
  - Code quality.
  - Security and licenses.
  - Architecture and boundaries.
  - Knowledge and developer memory.
  - FinOps and efficiency.
All of that is stored as Markdown knowledge and rules, tracked in Git and decoupled from any specific RAG backend.
Today, some of this is still conceptual (especially fully automated distillation and harvesting), but the directory structure and module system are already built to host it.
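To make the distilled output concrete, here is what a rule file in this shape might look like. The path, frontmatter fields, and contents are hypothetical illustrations of the "short Markdown file with frontmatter" idea, not a fixed SpecFact schema:

```markdown
---
# docs/agent-rules/http-timeouts.md — hypothetical path and fields
domain: code-quality
confidence: 0.8
scope: "services/**"
---

# Always set timeouts on outbound HTTP calls

Repeated review findings and past incidents showed that unbounded HTTP
calls cause cascading failures. Agents must set an explicit timeout on
every outbound HTTP request and make it visible in the diff.
```

Because the rule is a few hundred tokens of plain Markdown, it can be diffed in a PR like any other file and injected verbatim into an agent's instructions.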
### 3. AI and Delivery
On the right are the tools that actually call LLMs:
- IDE agents (Cursor, Claude Code, Windsurf, and others).
- AI PR review bots (CodeRabbit, BugBot, Copilot Review).
- CI and policy gates (pre-commit, GitHub Actions, merge checks).
SpecFact feeds those tools:
- Instructions and context for AI.
  Short, curated rule files injected as project instructions, system prompts, repository instruction files, slash-prompt snippets, and config templates.
- Policy and guardrails.
  CLI and CI checks that validate changes against your rules, guard OpenSpec changes against architectural drift and duplication, and enforce boundaries between core and modules.
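One way to picture the "instructions and context" side is a pre-flight assembly step that picks the highest-confidence rules fitting a token budget. This is a minimal sketch under assumed names: the `confidence` and `tokens` fields and the greedy strategy are illustrative, not SpecFact's actual schema or algorithm.

```python
def assemble_context(rules: list[dict], budget_tokens: int) -> list[str]:
    """Greedily select rule bodies by descending confidence until the
    token budget is spent. Field names are assumptions for this sketch."""
    chosen, used = [], 0
    for rule in sorted(rules, key=lambda r: r["confidence"], reverse=True):
        if used + rule["tokens"] <= budget_tokens:
            chosen.append(rule["body"])
            used += rule["tokens"]
    return chosen

# Hypothetical rules, as they might be parsed from Markdown frontmatter.
rules = [
    {"body": "Never log secrets.", "confidence": 0.9, "tokens": 120},
    {"body": "Prefer dependency injection.", "confidence": 0.6, "tokens": 300},
    {"body": "Pin CI action versions.", "confidence": 0.8, "tokens": 150},
]

# With a 300-token budget, the lower-confidence rule is dropped.
print(assemble_context(rules, budget_tokens=300))
```

The selected bodies would then be concatenated into whatever instruction channel a given tool reads (system prompt, repository instruction file, and so on).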
SpecFact itself never calls the models. It teaches your AI how to behave in your world.
## What Is Available Today vs What Is Coming
To keep expectations accurate, it helps to split the picture in two.
### Available Today / Early Stage
You can already:
- Install and run `specfact-cli` locally alongside your repos.
- Use the existing docs structure (`docs/agent-rules`, `docs/architecture`, `docs/prompts`, `docs/module-system`, and related paths) as a home for your team's rules and patterns.
- Organize rules in Markdown files that are easy to diff, review, inject manually into AI tools as project instructions, and evolve alongside your code.
- Experiment with the module system in `specfact-cli-modules` so governance features stay composable: transport and integration in core, domain logic in modules, and explicit contracts to avoid cross-module drift.
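A module manifest under that contract-driven approach might look roughly like this. The filename and every field below are hypothetical, meant only to show how a module could declare its commands, dependencies, and boundaries explicitly:

```yaml
# modules/finops/module.yaml — hypothetical manifest, not a shipped schema
name: finops
version: 0.1.0
provides:
  commands: [finops.report, finops.budget-check]
  rules_dir: docs/agent-rules/finops
depends_on:
  core: ">=0.3"
boundaries:
  may_read: [docs/knowledge/**]
  may_write: [docs/agent-rules/finops/**]
```

Making boundaries explicit in the manifest is what lets the CLI detect cross-module drift instead of discovering it in review.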
Put differently: today, SpecFact already gives you a structured, Git-native place to encode how your AI tools should behave, even when some of the automation is still manual.
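The manual-injection step can be sketched in a few lines of Python. The rule contents and the `docs/agent-rules` location are examples; `.github/copilot-instructions.md` is the file GitHub Copilot reads for repository custom instructions:

```python
from pathlib import Path
import tempfile

# Stand-in for a real repository checkout (temporary dir for the sketch).
repo = Path(tempfile.mkdtemp())
rules_dir = repo / "docs" / "agent-rules"
rules_dir.mkdir(parents=True)
(repo / ".github").mkdir()

# Two tiny example rules standing in for your curated Markdown rules.
(rules_dir / "security.md").write_text("- Never log secrets or tokens.\n")
(rules_dir / "architecture.md").write_text("- Keep modules behind explicit contracts.\n")

# Manual "injection": concatenate every rule file, in a stable order,
# into the instructions file Copilot picks up for this repository.
instructions = "".join(p.read_text() for p in sorted(rules_dir.glob("*.md")))
(repo / ".github" / "copilot-instructions.md").write_text(instructions)

print(instructions.count("\n"))  # number of rule lines injected
```

The same concatenation works for other tools' instruction channels; only the target path changes.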
### In Progress / Roadmap
The design and issues point toward several concrete next steps:
- Developer memory module (Markdown graph as a default backend).
  A dedicated knowledge module that logs evidence from reviews, tests, and security scans into a local Markdown "wiki"; promotes recurring patterns into distilled rules; and supports multiple backends behind a small memory protocol (Markdown graph first, optional integrations later).
- Automated distillation loop.
  CLI commands to harvest structured evidence from GitHub PR comments (including AI review bots), run an LLM once to suggest rule updates from accumulated evidence, and write rule diffs into Git for human review and merge.
- Integration with the OpenSpec flow.
  Pre-flight context assembly for spec changes: read the relevant rules (OpenSpec authoring rules, module boundaries, architecture decisions), inject them before an OpenSpec change is drafted, and validate spec changes against those rules in CI.
- Profile-driven defaults and shared learning stores.
  Profile-specific defaults for where evidence, learnings, and rules live and how aggressive distillation should be, plus support for org-wide learning repos that multiple projects can reference.
- Enterprise policy and telemetry layer.
  A future layer that manages org and team policy centrally, provides telemetry (for example via OpenTelemetry) on where AI and rules are working or failing, and plugs into the same CLI and module contracts used in the open-source core.
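A profile file under the profile-driven design might look roughly like this. Every field name and value here is hypothetical, shown only to convey the shape of profile defaults:

```yaml
# specfact-profile.yaml — hypothetical profile, not a shipped schema
profile: python-service
memory:
  evidence_dir: docs/knowledge/evidence
  rules_dir: docs/agent-rules
distillation:
  aggressiveness: conservative   # how eagerly evidence becomes rules
shared_learning:
  org_repo: git@github.com:your-org/learning-store.git   # hypothetical
```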
Those pieces are not all built yet, but the architecture and folder layout you see now are explicitly designed to host them without breaking early adopters.
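The core of the planned evidence-to-rules promotion can be sketched as follows. The `Evidence` shape, the pattern slugs, and the promotion threshold are all invented for illustration; in the roadmap design, the actual rule text would come from an LLM pass and land in Git for human review.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    source: str   # e.g. "coderabbit", "ci", "human-review" (hypothetical)
    pattern: str  # normalized finding slug (hypothetical)

# Recurring findings become rule candidates once they cross a threshold.
PROMOTION_THRESHOLD = 3

def promote(evidence: list[Evidence]) -> list[str]:
    """Return pattern slugs seen often enough to become rule candidates."""
    counts = Counter(e.pattern for e in evidence)
    return sorted(p for p, n in counts.items() if n >= PROMOTION_THRESHOLD)

evidence = [
    Evidence("coderabbit", "missing-timeout-on-http-call"),
    Evidence("ci", "missing-timeout-on-http-call"),
    Evidence("human-review", "missing-timeout-on-http-call"),
    Evidence("coderabbit", "unpinned-dependency"),
]
print(promote(evidence))  # only the recurring pattern is promoted
```

The one-off finding stays in the evidence log; only the pattern confirmed across sources graduates into a draft rule.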
## Why Start Using SpecFact Now
Even before the full distillation loop and enterprise layer land, there are good reasons to start sooner rather than later:
- You need a home for your rules anyway.
  Most teams scatter rules across wikis, README fragments, and ad-hoc prompt snippets. SpecFact gives you a single, versioned place for agent rules and architecture constraints, and a natural way to review and evolve those rules like code.
- Your AI tools get better as your rules get better.
  You can start with hand-written rule files and manual injection into project instructions. As SpecFact's distillation automation matures, those same files become the target of automated updates, so you do not have to rewrite everything.
- You shape the module and governance model.
  Early usage and feedback influence which governance pillars become first-class modules, how modules declare responsibilities and boundaries, and how opinionated default profiles should be for different stacks.
- You de-risk AI adoption across repos.
  The more repos and tools you add, the more important shared, cross-repo rules and a plan for OpenSpec change governance become.
All of that is easier to establish before AI usage spreads widely across your org than after.
## What to Watch Next
If you are evaluating whether to keep an eye on SpecFact, here is what to watch for in the near term:
- First public developer memory module with Markdown graph backend and a simple evidence-to-rules distillation command.
- Initial integrations with AI review bots (via GitHub review comment harvesting).
- OpenSpec pre-flight support that injects relevant rules before spec changes are drafted.
- Profile-based defaults for popular stacks (for example Python services, TypeScript full-stack) that make SpecFact useful with almost no configuration.
Each of these steps moves SpecFact closer to its intended role:
> The governance and memory layer that makes all your AI tools behave like a coherent, evolving engineering system instead of a collection of disconnected copilots.
If that is a problem you are already feeling, it is worth experimenting with SpecFact now, and helping shape how this layer grows.
## Shape SpecFact with us (limited slots)
If you have read this far and the direction resonates, we would like to hear from you. Share your expectations, pain points, and constraints around AI governance, team rules, and developer memory in real organizations. That feedback helps us align SpecFact with broader business demand, not only our own roadmap assumptions.
Partner capacity is limited. We can only include a small number of teams and individuals in a focused loop: early testing, structured feedback, and co-shaping priorities. We select people who show genuine interest in that kind of partnership (ongoing collaboration), not one-off feature requests or generic curiosity.
### How to reach us
- Email: hello@noldai.com with a short note on your context (stack, team size, what you would want from a governance layer). We read every message and follow up when there is a fit for the current partner cohort.
- Community: For open, async discussion, use GitHub Discussions.