Structured AI workflow architecture for Google Antigravity — project understanding, documentation generation, and persistent memory across coding sessions.
A `.agents/` rule and workflow system that turns your AI IDE into a disciplined software documentation writer and project analyst. Drop `.agents/` into any project and your AI assistant gains structured memory, self-evaluation, and professional documentation pipelines.
Built for Google Antigravity IDE — easily adaptable to any AI IDE using the cascade method (e.g. Windsurf) with minor rule adjustments.
This is not just a memory bank. It is:
- 🔍 A project-overview generator — deep-analyzes your workspace and produces structured project documentation from source code
- 📝 A documentation writer — generates professional docs (requirements, tech stack, features, user flows, architecture diagrams) using templates and modular rules
- 🧠 A persistent memory system — maintains three-layer memory (working, short-term, long-term) across sessions so the AI never loses context
- ⚖️ A self-evaluating workflow engine — scores its own output on a 23-point system with a Creator → Critic → Defender → Judge cycle
1. Copy the `.agents/` directory to your project root
2. Set up global rules:
   - Open Google Antigravity's agent customization settings (or your AI IDE's equivalent)
   - Paste the contents of `GEMINI.md` into the global rules editor
   - Alternatively, place it at `~/.gemini/GEMINI.md` (the default global rules path)
3. Start a chat and type:

   ```
   Initialize your Memory Bank with the "SessionStart" workflow.
   ```
The AI will scaffold .agents/memory-bank/ with core files, plans, task logs, and error records. On existing projects it detects the structure and fills gaps without overwriting.
```mermaid
flowchart TD
    subgraph "Global Layer"
        GR["🌐 GEMINI.md<br/>(Global Rules)"]
    end
    subgraph ".agents/ — Project Layer"
        subgraph "Rules (always_on)"
            WR["⚙️ workspace-rule.md<br/>Core operating rules"]
            DR["📄 documentation-rule.md<br/>Doc base principles"]
        end
        subgraph "Rules (model_decision)"
            WS["ws-* rules<br/>Workflow diagrams, events,<br/>functions, error recovery,<br/>evaluation"]
            DS["doc-* rules<br/>Per-document guidance<br/>(10 specialized rules)"]
        end
        subgraph "Templates"
            T["📋 documentation-templates/<br/>Fill-in templates for<br/>each document type"]
        end
        subgraph "Workflows"
            WF1["▶️ /project-overview<br/>Project stocktaking"]
            WF2["▶️ /project-documentation<br/>Full doc suite generation"]
        end
    end
    subgraph "Output"
        MB["🧠 .agents/memory-bank/<br/>Persistent memory"]
        DOCS["📁 docs/<br/>Generated documentation"]
    end
    GR --> WR
    WR --> WS
    DR --> DS
    DS --> T
    WF1 --> MB
    WF2 --> DS
    WF2 --> T
    WF2 --> DOCS
    WS --> MB
```
Two trigger tiers keep the AI's context lean:
| Trigger | Loaded | Purpose |
|---|---|---|
| `always_on` | Every session | Core operating constraints, base documentation principles |
| `model_decision` | On demand | Topic-specific guidance (e.g., writing dependency docs) |
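A rule file declares its tier in frontmatter. Here is a minimal sketch of what a `model_decision` rule might look like — the exact keys (`trigger`, `description`) follow Windsurf-style rule conventions and are an assumption, not a spec:

```markdown
---
trigger: model_decision
description: Apply when writing or updating dependency documentation
---

# Dependency Documentation Rule
- List every direct dependency with its pinned version.
- Link each entry to its official documentation.
```

The `description` matters for `model_decision` rules: it is what the model reads when deciding whether to pull the rule into context.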
```
your-project/
└── .agents/
    ├── rules/                            # AI behavior rules
    │   ├── workspace-rule.md             # ⚙️ (always_on) Core operating rules
    │   ├── ws-workflow-diagrams.md       # (model_decision) Mermaid workflow visuals
    │   ├── ws-event-handlers.md          # (model_decision) Session/task lifecycle
    │   ├── ws-function-map.md            # (model_decision) XML function map
    │   ├── ws-error-recovery.md          # (model_decision) Retry logic, escalation
    │   ├── ws-evaluation.md              # (model_decision) Scoring + self-critique
    │   ├── documentation-rule.md         # 📄 (always_on) Documentation principles
    │   ├── doc-project-overview.md       # (model_decision) Project overview
    │   ├── doc-dependencies.md           # (model_decision) Dependency docs
    │   ├── doc-features.md               # (model_decision) Feature specification
    │   ├── doc-requirements.md           # (model_decision) Requirements
    │   ├── doc-tech-stack.md             # (model_decision) Tech stack
    │   ├── doc-user-flow.md              # (model_decision) User flows
    │   ├── doc-implementation-standards.md # (model_decision) Coding standards
    │   ├── doc-project-structure.md      # (model_decision) Directory docs
    │   ├── doc-meta-workflow.md          # (model_decision) Meta-workflow
    │   └── doc-architecture-visual.md    # (model_decision) Architecture diagrams
    ├── documentation-templates/          # 📋 Fill-in templates for each doc type
    └── workflows/                        # ▶️ Executable workflow definitions
        ├── project-overview.md           # /project-overview
        └── project-documentation.md      # /project-documentation
```
Analyzes your entire workspace (manifests, configs, source, infrastructure) and generates a structured project-overview.md.
Run the /project-overview workflow.
- Create Mode → Deep source analysis, populate every section from code evidence
- Update Mode → Memory bank delta analysis, surgical updates to stale sections only
Generates a complete documentation set in docs/ using all 10 doc-* rules and their templates:
Run the /project-documentation workflow.
| Step | Output | What It Does |
|---|---|---|
| 0 | — | Analyze workspace, determine project type |
| 1 | `project-overview.md` | Vision, problem, solution, scope, risks |
| 2 | `dependencies.md` | Auto-extracted deps with versions and docs links |
| 3 | `features.md` | Hierarchical feature specification |
| 4 | `requirements.md` | Functional + technical requirements |
| 5 | `tech-stack.md` | Technology justification with alternatives |
| 6 | `user-flow.md` | User-perspective flow documentation |
| 7 | `implementation-standards.md` | Code patterns with real examples |
| 8 | `project-structure.md` | Directory reasoning and conventions |
| 9 | `meta-workflow-integration.md` | Memory system integration |
| 10 | `architecture.md` | Mermaid architecture diagrams |
| 11 | — | Cross-reference validation |
Three-layer memory system in .agents/memory-bank/:
```
.agents/memory-bank/
├── core/
│   ├── activeContext.md    # 🔄 Working Memory — current focus
│   ├── projectbrief.md     # Project goals and constraints
│   ├── productContext.md   # User needs and requirements
│   ├── systemPatterns.md   # Architecture decisions
│   ├── techContext.md      # Stack, dependencies, tooling
│   └── progress.md         # Roadmap and status
├── plans/                  # 📋 Implementation plans
├── task-logs/              # 📊 Per-task records with scoring
├── errors/                 # 🔧 Error patterns and resolutions
└── memory-index.md         # 🗂️ Master index with checksums
```
Working Memory updates every task. Short-Term Memory (task logs) captures recent decisions. Long-Term Memory (core files) persists architecture knowledge.
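The master index tracks checksums so stale or tampered memory files can be detected between sessions. A minimal sketch of how such an index could be computed — the SHA-256 scheme and file layout here are an assumption for illustration, not part of the rule set:

```python
import hashlib
from pathlib import Path

def memory_index(root: str) -> dict[str, str]:
    """Map each memory-bank markdown file to its SHA-256 checksum."""
    index = {}
    for path in sorted(Path(root).rglob("*.md")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        index[path.relative_to(root).as_posix()] = digest
    return index
```

Comparing a fresh index against the one recorded in `memory-index.md` reveals exactly which memory files changed since the last session.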
Every task is scored on a 23-point scale:
| Score | Rating |
|---|---|
| 21–23 | ✅ Excellent (≥90%) |
| 18–20 | ☑️ Sufficient (≥78%) |
| < 18 | ❌ Fail — requires remediation |
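In code terms, the banding works out as follows (a sketch for illustration; the thresholds come from the table above, the function itself is not part of the rules):

```python
def rating(score: int, max_score: int = 23) -> str:
    """Map a task score on the 23-point scale to its rating band."""
    if not 0 <= score <= max_score:
        raise ValueError(f"score must be in 0..{max_score}")
    pct = score / max_score
    if pct >= 0.90:      # 21-23 points
        return "Excellent"
    if pct >= 0.78:      # 18-20 points
        return "Sufficient"
    return "Fail"        # below 18 requires remediation
```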
The self-critique cycle: Creator (generate) → Critic (identify weaknesses) → Defender (fix issues) → Judge (score and compare).
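The four roles compose into one pass over a draft. This sketch is illustrative only — the role names come from the rules, but the callable-based structure and the stand-in lambdas are assumptions:

```python
from typing import Callable

def self_critique(create: Callable[[], str],
                  critique: Callable[[str], list[str]],
                  defend: Callable[[str, list[str]], str],
                  judge: Callable[[str], int],
                  pass_score: int = 18) -> tuple[str, int]:
    """One Creator -> Critic -> Defender -> Judge pass over a draft."""
    draft = create()                 # Creator: generate the output
    issues = critique(draft)         # Critic: identify weaknesses
    revised = defend(draft, issues)  # Defender: fix the issues
    score = judge(revised)           # Judge: score on the 23-point scale
    if score < pass_score:
        raise RuntimeError("Fail - requires remediation")
    return revised, score

# Stand-in roles, purely for illustration
doc, score = self_critique(
    create=lambda: "draft docs",
    critique=lambda d: ["missing examples"],
    defend=lambda d, issues: d + " (+ examples)",
    judge=lambda d: 21,
)
```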
| Situation | Prompt |
|---|---|
| 🚀 Start of session | Initialize your Memory Bank with the "SessionStart" workflow. |
| 🔁 AI loses focus | Remember to follow the Memory System. |
| 📝 Force task logging | Make sure you are keeping a task log and update memory. |
| ⚖️ Quality review | Execute Evaluation Phase. |
| 📁 Generate project docs | Run the /project-documentation workflow. |
| 🔍 Project stocktake | Run the /project-overview workflow. |
This project is a fork and evolution of the Engineered Meta-Cognitive Workflow Architecture by Shawn McAllister (@entrepeneur4lyf) / Engineered Automated Systems for Artificial Intelligence (EASAI).
The original work established the core concepts: XML function maps, Mermaid workflow diagrams, three-layer memory, event-driven handlers, and the self-critique cycle. This fork adapts and extends it with modular rule splitting, documentation generation workflows, and template-driven doc pipelines.
Credit to Nick Baumann and the Cline Memory Bank for the original memory bank concept.
Apache License 2.0 — see LICENSE for details.