How changes are introduced, documented, and verified in tb-solid-pod.
| If you're... | Start with |
|---|---|
| New and want to understand the project | PRINCIPLES_AND_GOALS.md — why we built it this way |
| Investigating a broken feature | FEATURE_CHECKLIST.md — verify what works, find what's broken |
| Checking if a change caused regressions | FEATURE_CHECKLIST.md — re-verify affected levels |
| Looking for work to pick up | BACKLOG.md — pending tasks with context |
| Wondering why something was built a certain way | IMPLEMENTATION_PLAN.md — phase history and rationale |
| Reviewing or auditing the process | This doc — lifecycle stages, verification workflow, documentation review |
The process flows from feature/requirements → design/selection → implementation → verification and validation:
| Stage | What happens | Outputs |
|---|---|---|
| Requirements | Capture why, what "done" looks like, and how to test | Reason for change, acceptance criteria, testing strategy |
| Design/selection | Decide approach; align docs and contracts | Updated DESIGN, IMPLEMENTATION_PLAN, SOLID_SERVER_STRATEGIES, etc. |
| Implementation | Code and doc changes | Code, store layout, schemas, doc updates |
| Verification | Automated tests + manual verification | Passing tests, checked Feature Checklist, resolved review items |
| Question | Document |
|---|---|
| What needs to be done next? | BACKLOG.md |
| What's been completed in each phase? | IMPLEMENTATION_PLAN.md |
| Does a feature actually work? | FEATURE_CHECKLIST.md |
| What behavior must be preserved? | FEATURE_CHECKLIST.md |
| What was done and how was it resolved? | BACKLOG.md (Completed section) |
The Feature Checklist serves two purposes: verifying new work and defining behavior to preserve. Before making changes, review affected checklist items—they represent working functionality that should not regress.
If code is hard to test, it's hard to maintain. This is a core technical principle (see PRINCIPLES_AND_GOALS.md).
- Unit tests required for all features. No exceptions. See TEST_PLAN.md.
- Refactor over complex tests. If a test needs extensive setup or mocking, refactor the code instead (see the sketch below).
- Tests enable safe change. New contributors can modify code confidently when tests catch regressions.
- Cost stays low. Projects without tests accumulate risk. Projects with tests can grow indefinitely.
This is what makes it feasible for new participants to make changes without high risk, and what lets the project live a long time at minimal cost.
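As a minimal sketch of "refactor over complex tests" (TypeScript is used here; `validateContact`, `saveContact`, and the store shape are hypothetical illustrations, not the project's actual code):

```typescript
// Hard to test: validation is buried in a function that also touches the store,
// so even the pure validation cases need a store mock.
//
// function saveContact(store: Store, input: unknown) {
//   if (typeof input !== "object" || input === null) throw new Error("invalid");
//   store.put("contacts", input);
// }

// Refactored: validation becomes a pure function that tests with zero setup...
export function validateContact(input: unknown): { name: string } {
  if (
    typeof input !== "object" ||
    input === null ||
    typeof (input as { name?: unknown }).name !== "string"
  ) {
    throw new Error("invalid contact");
  }
  return { name: (input as { name: string }).name };
}

// ...and the store interaction is a thin wrapper that barely needs testing.
export function saveContact(
  store: { put(key: string, value: unknown): void },
  input: unknown,
): void {
  store.put("contacts", validateContact(input));
}
```

The pure function can now be exercised by plain unit tests (`expect(() => validateContact(null)).toThrow()`) with no mocking at all.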
For code changes, "Documenting Changes" and "Verification Workflow" below apply. For planning and documentation work (e.g. before or after a major phase), use the documentation review process so design and docs stay coherent and gaps are tracked to closure.
We use a three-document review to move from planning docs to a clear design, then to implementation and verification. Use it when you want to align requirements with design, find gaps, and get to an actionable checklist (e.g. before a release or after adding new planning docs).
Document: DOCUMENT_REVIEW.md
A single pass over the planning set (AGENTS.md, DESIGN.md, IMPLEMENTATION_PLAN.md, TEST_PLAN.md, DOCUMENTATION_GUIDELINES.md, SOLID_SERVER_STRATEGIES.md, SHORTCOMINGS.md, USE_CASES.md, DOCUMENT_SHARING_SCENARIOS.md, README.md, testing docs). The reviewer captures:
- Overall impression — Coherence and alignment (e.g. local-first, sync-later) across docs.
- Strengths — What is working (principle-first, app-author focus, critical path, testing).
- Objections/risks — Mismatches (e.g. DESIGN vs actual store), ambiguous scope (Phase 8 ACL, Phase 9 sync), or conflicting vision (e.g. retired TODO_NEXT.md).
- Where more information is required — Gaps that block implementation (Store ↔ LDP mapping, conflict resolution, token handling, BDD vs manual boundary, versioning).
This is the design/selection step for documentation: it decides what is correct, what must be fixed, and what must be specified before or during implementation.
Document: CLAUDE_DOC_REVIEW.md
A second reviewer (human or AI) works from the same planning set and from DOCUMENT_REVIEW.md:
- Comparison — Agreement, additional observations, and different perspectives vs DOCUMENT_REVIEW.md.
- Independent assessment — Executive summary, strengths, objections, areas requiring more information, document-by-document notes.
- Priority actions — High/medium/low and where to add content (e.g. SOLID_SERVER_STRATEGIES for LDP mapping and conflict resolution).
Resolved items are marked (e.g. DESIGN code examples, success criteria, TODO_NEXT) so the checklist stays current. This step validates the first review and prioritizes what to implement.
Document: FINAL_DOC_REVIEW.md
Synthesis of both reviews into one actionable list:
- Resolved (for reference) — Items already fixed; no further action.
- Doc cleanup checklist — Grouped by document or theme (IMPLEMENTATION_PLAN, SOLID_SERVER_STRATEGIES, TEST_PLAN/testing, DESIGN, USE_CASES), with priority (High/Medium/Low) and concrete tasks (e.g. “Add Store ↔ LDP mapping section”).
- Work through and check off — Implement doc (and code) changes; mark items done as they are verified.
This is implementation (doc and code updates) and verification (confirming each item is done). The summary table and links back to DOCUMENT_REVIEW.md and CLAUDE_DOC_REVIEW.md keep the process traceable.
Run a documentation review:
- After adding or rewriting major planning docs (e.g. SOLID_SERVER_STRATEGIES, USE_CASES).
- Before starting a new implementation phase (e.g. sync, ACL) so design and contracts are clear.
- When multiple people or agents need a single, agreed list of gaps and priorities.
- After a release or milestone to capture “what we learned” and update the checklist.
You get: (1) a shared view of strengths and risks, (2) a prioritized list of gaps to close, and (3) a single checklist (FINAL_DOC_REVIEW) to drive implementation and verification, so the lifecycle from requirements → design → implementation → verification is explicit and repeatable.
Every feature is broken until manually verified. Automated tests provide confidence but do not guarantee a feature works end-to-end in the browser. The Feature Checklist is the source of truth for "does this actually work?"
When introducing a change (new feature, bug fix, refactor), document the reason, the acceptance criteria, and the testing strategy.
Before writing code, state why:
## Change: [Brief title]
**Reason:** [Why is this change needed? What problem does it solve?]
**Context:** [Link to issue, discussion, or related doc if applicable]

Define what "done" looks like:
**Acceptance Criteria:**
- [ ] User can [specific action]
- [ ] CLI command `foo bar` produces [expected output]
- [ ] UI shows [expected state] when [condition]
- [ ] Data persists after page refresh

Criteria should be:
- Specific: Not "works correctly" but "displays contact name in the list"
- Testable: A human can verify by using the app
- Independent: Each can be checked separately
How the change will be verified:
**Testing Strategy:**
- Unit tests: [what schemas/utils will be tested]
- Manual verification: [steps to verify in browser]
- Feature Checklist: [which levels/items to re-verify]

The Feature Checklist is a living document optimized for manual review.
- Regression detection: After any change, re-verify affected features
- Release readiness: Before releases, walk through the full checklist
- Onboarding: Verify the app works on a new machine
- Demo prep: Ensure features work before showing to others
The checklist is ordered by dependency (foundational first):
| Level | What | If Broken, Skip |
|---|---|---|
| 0 | App loads | Everything |
| 1 | Store/persistence | All features |
| 2 | Navigation | Feature UIs |
| 3 | CLI terminal | CLI commands |
| 4 | Personas | Contacts, groups, authorship |
| 5 | Contacts | Groups (membership) |
| 6 | Groups | — |
| 7 | Files | — |
| 8 | Settings | — |
| 9 | Type indexes | WebID profile |
| 10 | WebID profile | — |
Within each level, items are grouped by CLI / UI / Data where it makes sense.
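Purely as an illustration of how the ordering works (the level names come from the table above; the `dependsOn` encoding is hypothetical, not how FEATURE_CHECKLIST.md is actually stored):

```typescript
type Level = { id: number; name: string; dependsOn: number[] };

const levels: Level[] = [
  { id: 0, name: "App loads", dependsOn: [] },
  { id: 1, name: "Store/persistence", dependsOn: [0] },
  { id: 4, name: "Personas", dependsOn: [1] },
  { id: 5, name: "Contacts", dependsOn: [4] },
];

// A level is worth skipping when something it depends on is already broken.
function skippable(broken: Set<number>, level: Level): boolean {
  return level.dependsOn.some((id) => broken.has(id));
}

// With Personas (level 4) broken, Contacts (level 5) can be skipped:
// skippable(new Set([4]), levels[3]) === true
```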
To keep the checklist current:
- Add items when new features are implemented
- Remove items when features are removed
- Uncheck items when something breaks
- Add `[BROKEN]` notes for known issues with details
For a new feature:
- Document: Reason, acceptance criteria, testing strategy
- Implement: Write the code
- Test: Unit tests, Storybook stories if UI (a story sketch follows this list)
- Verify: Manual verification against acceptance criteria
- Update Feature Checklist: Add new items, mark as checked
- Update IMPLEMENTATION_PLAN.md: If part of a phase
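As a hedged illustration of the Test step for a UI feature (this assumes a React + Storybook setup and an imaginary `ContactList` component; the project's actual components and stack may differ):

```typescript
// ContactList.stories.tsx: a minimal CSF 3 story pair for manual and visual review.
import type { Meta, StoryObj } from "@storybook/react";
import { ContactList } from "./ContactList";

const meta: Meta<typeof ContactList> = { component: ContactList };
export default meta;

type Story = StoryObj<typeof ContactList>;

// The empty state deserves its own story: it is the first thing
// a new user (and the Feature Checklist) will see.
export const Empty: Story = { args: { contacts: [] } };

export const WithContacts: Story = {
  args: { contacts: [{ id: "1", name: "Ada Lovelace" }] },
};
```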
For a bug fix:
- Document: What's broken and the expected behavior
- Reproduce: Verify the bug manually
- Fix: Write the fix
- Verify: Confirm fix works manually
- Re-verify: Check related items in Feature Checklist
- Add test: If regression is possible (a test sketch follows this list)
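A regression test can be as small as one case that pins the fixed behavior. A sketch, assuming Vitest and a hypothetical `formatContactName` util:

```typescript
import { describe, expect, it } from "vitest";
import { formatContactName } from "./contacts";

describe("formatContactName", () => {
  // Regression guard: an empty name used to render as a blank row.
  it("falls back to a placeholder for an empty name", () => {
    expect(formatContactName("")).toBe("(unnamed)");
  });
});
```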
For a refactor:
- Document: Why the refactor is needed
- Snapshot: Note current Feature Checklist state
- Refactor: Change the code
- Re-verify: All affected levels in Feature Checklist
- Confirm: No regressions
Run a full verification pass:
- Before merging significant PRs
- Before releases or demos
- After upgrading dependencies
- After changes to core infrastructure (store, persistence, routing)
- When automated tests pass but something seems wrong
| Document | Purpose |
|---|---|
| PRINCIPLES_AND_GOALS.md | Core principles and project goals |
| BACKLOG.md | Pending work items and completed task history |
| FEATURE_CHECKLIST.md | Manual verification checklist |
| IMPLEMENTATION_PLAN.md | Phase roadmap |
| CODING_GUIDELINES.md | Code style |
| testing/ | Automated test docs |