Author: Antony Duran
Date: 2025-11-13
Context: First implementation using DevAgent workflows for "Simple Datatable to View Data" feature
- Initial Impressions & Confusion
- How Workflows Work in Practice
- The Iterative Process & DEVELOPER-GUIDE.md
- Common Questions & Solutions
- Best Practices & Recommendations
## Initial Impressions & Confusion
When first encountering DevAgent, the structure can feel overwhelming. There are multiple directories (.devagent/core/, .devagent/workspace/, .agents/commands/), workflows, templates, and documentation scattered across different locations.
Key Confusion Points:
- `.agents/commands/` vs `.devagent/core/workflows/` — What's the difference? How do they relate?
- Where to start? — The `.devagent/core/README.md` exists but isn't immediately obvious
- Workflow vs Command — Are these the same thing? How do they interact?
Through actual usage, the structure became clearer:
- `.devagent/core/` = Portable agent kit (workflows, templates) that can be copied to any project
- `.devagent/workspace/` = Project-specific artifacts (features, research, specs, decisions)
- `.agents/commands/` = Command files that trigger workflows (symlinked to `.cursor/commands/`)
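As a rough sketch, the three directory roles can be expressed as a scaffolding helper. The `LAYOUT` mapping and `scaffold` function are hypothetical illustrations; only the directory names come from DevAgent itself.

```python
from pathlib import Path

# Top-level DevAgent directories and their roles (names from this document;
# the helper itself is illustrative, not part of DevAgent).
LAYOUT = {
    ".devagent/core": "portable agent kit (workflows, templates)",
    ".devagent/workspace": "project-specific artifacts",
    ".agents/commands": "command files that trigger workflows",
}

def scaffold(root: Path) -> list[Path]:
    """Create the top-level DevAgent directories under `root`."""
    created = []
    for rel in LAYOUT:
        d = root / rel
        d.mkdir(parents=True, exist_ok=True)
        created.append(d)
    return created
```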
The Missing Piece: A high-level "Getting Started" guide that explains:
- The relationship between commands and workflows
- Where to start for different types of work
- How workflows chain together
- A glossary of terms
## How Workflows Work in Practice
Based on the datatable feature implementation, here's how workflows were used in practice:
1. devagent new-task "Add datatable to view dataset data"
→ Creates task hub with AGENTS.md and folder structure
→ Recommends next steps (research, clarify)
2. devagent research "table components and data access patterns"
→ Investigates codebase, finds existing patterns
→ Creates research packet with findings
→ Identifies gaps requiring clarification
3. devagent clarify-task
→ Validates requirements across 8 dimensions
→ Creates clarification packet
→ Identifies missing information (4/8 complete initially)
4. devagent clarify-task (re-run after gap-fill)
→ Updates clarification packet with new information
→ Improves completeness (7/8 complete)
5. devagent create-plan
→ Synthesizes research + clarification into comprehensive plan
→ Creates plan document with product context and implementation tasks
→ Note: This workflow consolidates the previous create-spec and plan-tasks workflows
6. devagent implement-plan
→ Executes tasks from plan document sequentially
→ Tracks progress in AGENTS.md automatically
→ Validates dependencies before execution
7. devagent clarify-task (re-run for major direction change)
→ Scope changed: migrate to @lambdacurry/forms
→ Creates comprehensive clarification document
→ Updates completeness (8/8 complete)
8. devagent create-plan (re-run for migration)
→ Creates new plan reflecting migration requirements
9. devagent implement-plan (re-run for migration)
→ Executes migration tasks from updated plan
1. Workflows Can Be Re-Run
- If something changes, you can re-call the same command to update previous documents
- This is powerful but can create confusion about which document is "current"
- Solution: Use clear versioning in filenames (e.g., `plan-v2.md`) and update `AGENTS.md` references
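To keep version suffixes consistent across re-runs, a small helper can compute the next `-vN` filename. `next_version` is a hypothetical illustration, not a DevAgent API.

```python
import re

def next_version(filename: str) -> str:
    """Return the next -vN variant of a plan filename.

    e.g. plan.md -> plan-v2.md, plan-v2.md -> plan-v3.md
    """
    stem, dot, ext = filename.rpartition(".")
    if not dot:  # no extension at all
        stem, ext = filename, ""
    m = re.search(r"-v(\d+)$", stem)
    if m:
        n = int(m.group(1)) + 1
        stem = stem[: m.start()] + f"-v{n}"
    else:
        stem += "-v2"  # first revision of an unversioned file
    return stem + (dot + ext if dot else "")
```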
2. Workflows Chain Naturally
- Each workflow produces artifacts that feed into the next
- Research → Clarification → Plan → Implementation
- But: You can skip steps for simple features (research → create-plan → implement-plan)
- Note: The workflow has been simplified: `create-spec` and `plan-tasks` were consolidated into `create-plan`
3. Iteration is Expected
- The datatable feature went through multiple clarification cycles
- Initial implementation (TanStack Table) was later migrated to @lambdacurry/forms
- Lesson: Don't be afraid to re-run workflows when scope changes
4. Workflows Don't Execute Automatically
- After `/new-task`, you must manually call the next workflow
- Workflows are tools, not autonomous agents
- You remain the coordinator — workflows don't talk to each other
## The Iterative Process & DEVELOPER-GUIDE.md
After the first few workflow executions, several issues emerged:
- Lost after research — Research recommended `/clarify-task`, but it didn't ask clarifying questions as expected
- Unclear next steps — After each workflow, it wasn't always clear what to do next
- No examples — Workflow descriptions were abstract; real examples were needed
- Gap handling — When research or clarification found gaps, the process wasn't clear
The DEVELOPER-GUIDE.md was created to:
- Provide step-by-step examples — Real interactions showing how workflows chain together
- Explain gap handling — What to do when research finds `[NEEDS CLARIFICATION]` tags
- Clarify decision points — When to proceed with assumptions vs. filling gaps
- Show iteration patterns — How to handle scope changes and re-runs
Initial Confusion
↓
First Workflow Execution (/new-task)
↓
Second Workflow Execution (/research)
↓
Confusion: "What do I do with gaps?"
↓
Third Workflow Execution (/clarify-task)
↓
Confusion: "It didn't ask questions?"
↓
Manual Clarification (gap-fill document)
↓
Re-run /clarify-task
↓
Continue with /create-spec
↓
Realization: "I need examples and guidance"
↓
Create DEVELOPER-GUIDE.md
↓
Use DEVELOPER-GUIDE.md for remaining workflows
↓
Much smoother experience
Key Takeaway: The DEVELOPER-GUIDE.md emerged from actual pain points during first-time usage. It's not theoretical—it's a practical guide based on real experience.
## Common Questions & Solutions
The Question: After /new-task creates a task hub, how do I pass that context to /research?
Solution A: Reference the Task Hub Path
You: devagent research "What table components exist in the codebase?
How do we query organization database tables?"
Feature: .devagent/workspace/tasks/active/2025-11-06_simple-datatable-to-view-data/
Solution B: Reference the AGENTS.md File
You: devagent research "table components and data access patterns"
Context: See .devagent/workspace/tasks/active/2025-11-06_simple-datatable-to-view-data/AGENTS.md
Best Practice: Always include the task hub path or AGENTS.md reference when chaining workflows. Workflows read from the workspace, but explicit references help ensure context is captured.
The Question: If I stop working and come back later, how do I let the LLM know what to continue working on?
Solution A: Use devagent review-progress
You: devagent review-progress
Plan: plan/2025-11-06_datatable-plan.md
Completed: Task 1 (DataTable component created)
In Progress: Task 2 (Server pagination endpoint)
Blocked: Need clarification on pagination API format
This creates a checkpoint file and updates AGENTS.md with progress state.
Solution B: Reference AGENTS.md and Plan Document
You: [Open task hub AGENTS.md]
[Review "Progress Log" and "Implementation Checklist"]
[Open plan document]
Continue from Task 2: Implement server-side pagination endpoint
See: plan/2025-11-06_datatable-plan.md, Task 2
Context: Feature hub AGENTS.md shows Task 1 complete
Best Practice: Use devagent review-progress when stopping work. When resuming, reference both AGENTS.md (for overall progress) and the plan document (for task details). If using devagent implement-plan, it will automatically resume from where it left off.
The Question: After /research execution, I don't agree with the outcome or proposed solution. What's the proper way to proceed?
Solution A: Document Disagreement in Clarification
You: devagent clarify-task
Note: Research recommended TanStack Table, but I want to use
@lambdacurry/forms instead. See research/2025-11-06_datatable-research.md
for historical record, but we're proceeding with @lambdacurry/forms.
The clarification packet will document this decision, and future workflows will use the clarified approach.
Solution B: Re-run Research with Different Focus
You: devagent research "How do we implement data tables with @lambdacurry/forms?
What are the server-side pagination patterns?"
Note: Previous research focused on TanStack Table (see
research/2025-11-06_datatable-research.md), but we're exploring
@lambdacurry/forms as an alternative.
This creates a new research packet that can be referenced in the spec.
Best Practice: Research packets are historical records. If you disagree, either:
- Document the disagreement in clarification (recommended for stakeholder decisions)
- Create new research with different focus (recommended for technical alternatives)
- Keep old research for historical context, but proceed with clarified approach
The Question: Does devagent implement-plan execute all tasks automatically, or do I need to run it multiple times?
Answer: devagent implement-plan executes tasks automatically from the plan document. It:
- Parses the plan document to extract implementation tasks
- Validates task dependencies against AGENTS.md
- Executes tasks sequentially in dependency order
- Updates AGENTS.md after each task completion
- Skips non-coding tasks gracefully
- Pauses only for truly ambiguous decisions or blockers
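The execution loop described above can be sketched roughly as follows. The `Task` shape and `run_plan` function are hypothetical simplifications; real plan parsing and progress logging are more involved.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    deps: list[str] = field(default_factory=list)
    done: bool = False

def run_plan(tasks: list[Task]) -> list[str]:
    """Execute tasks sequentially, each only once its dependencies are done.

    Returns task names in execution order; raises if unmet dependencies
    (e.g. a cycle) block progress.
    """
    by_name = {t.name: t for t in tasks}
    order: list[str] = []
    pending = list(tasks)
    while pending:
        progressed = False
        for t in list(pending):
            if all(by_name[d].done for d in t.deps):
                t.done = True          # "execute" the task
                order.append(t.name)   # then record progress (AGENTS.md in DevAgent)
                pending.remove(t)
                progressed = True
        if not progressed:
            raise RuntimeError("blocked: unmet dependencies for " +
                               ", ".join(t.name for t in pending))
    return order
```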
Usage Pattern:
1. devagent create-plan (creates plan/2025-11-06_feature-plan.md)
2. devagent implement-plan (executes all tasks from plan)
3. Review AGENTS.md to see progress
4. Manually validate: bun run lint && bun run typecheck && bun run test
Best Practice: The workflow executes as much as possible without stopping. Review progress in AGENTS.md after execution. For partial execution, you can specify a task range (e.g., "tasks 1-3").
The Question: What if the current feature state is "good enough" but requirements change significantly?
Solution A: Start New Feature (Recommended if Current State is Good)
Current State: TanStack Table implementation complete and functional
New Requirement: Migrate to @lambdacurry/forms
Decision: Keep current feature as-is (it's functional)
Create new feature: "Migrate datatable to @lambdacurry/forms"
New feature can reference old feature's artifacts
Solution B: Re-run Workflows in Current Feature (Recommended if Current State Needs Changes)
Current State: TanStack Table implementation, but needs major refactor
New Requirement: Migrate to @lambdacurry/forms
Decision: Re-run devagent clarify-task (update scope)
Re-run devagent create-plan (create v2 plan)
Re-run devagent implement-plan (execute migration tasks)
Decision Criteria:
- Current state is functional and acceptable? → Start new feature
- Current state needs major changes anyway? → Re-run workflows in current feature
- Unclear? → Document decision in the `AGENTS.md` Key Decisions section
Best Practice: For the datatable feature, we used Solution B because:
- The TanStack Table implementation was complete but needed migration
- The migration was a natural evolution, not a separate feature
- Re-running workflows kept all context in one place
The Question: Is DevAgent designed to work with different models for planning vs. implementation?
Answer: DevAgent workflows are model-agnostic. They work with any LLM that can:
- Follow structured instructions
- Read and write markdown files
- Reference context from workspace
Current Usage Pattern:
- Planning workflows (`devagent research`, `devagent create-plan`) → Use best available model (e.g., Claude Sonnet 4.5)
- Implementation (`devagent implement-plan`) → Use best available model for automated execution
Potential Enhancement:
- Background agents (Codegen) → Can run implementation tasks asynchronously
- See `.devagent/core/workflows/codegen/run-codegen-background-agent.md` for details
Best Practice:
- Use best models for planning (research, spec, tasks) — these benefit from reasoning
- Use auto or best models for implementation — depends on token budget and complexity
- Consider background agents for independent tasks that can run in parallel
## Best Practices & Recommendations
Don't skip the task hub. Even for simple features, creating a task hub provides:
- Centralized progress tracking (`AGENTS.md`)
- Organized artifact storage (`research/`, `clarification/`, `plan/`)
- Clear ownership and status
Workflow:
devagent new-task "Brief description"
↓
devagent research "Specific questions"
↓
[Continue based on complexity]
When chaining workflows, always include the task hub path:
devagent research "question"
Feature: .devagent/workspace/tasks/active/YYYY-MM-DD_task-slug/
This ensures workflows can:
- Read existing artifacts (research, clarification, spec)
- Update `AGENTS.md` with progress
- Maintain context across workflow executions
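For illustration, the `YYYY-MM-DD_task-slug` path convention can be derived mechanically from a date and a task description. `task_hub_path` is a hypothetical helper, not part of DevAgent.

```python
import re

def task_hub_path(date: str, description: str) -> str:
    """Build a task-hub path following the YYYY-MM-DD_task-slug convention."""
    # Lowercase and collapse any non-alphanumeric run into a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    return f".devagent/workspace/tasks/active/{date}_{slug}/"
```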
AGENTS.md is the single source of truth for feature progress:
- Progress Log — Chronological history of what happened
- Implementation Checklist — What's done, in progress, or pending
- Key Decisions — Important choices with rationale
- References — Links to all artifacts (research, spec, tasks)
Check AGENTS.md before:
- Starting a new workflow
- Resuming work after context switch
- Making scope changes
- Creating new artifacts
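Because the Implementation Checklist is plain markdown, progress can also be read mechanically. This sketch assumes standard `- [x]` / `- [ ]` checkbox syntax; the parser itself is hypothetical.

```python
def checklist_progress(markdown: str) -> tuple[int, int]:
    """Return (completed, total) counts for markdown checkbox items."""
    done = total = 0
    for line in markdown.splitlines():
        line = line.strip()
        if line.startswith("- [x]") or line.startswith("- [X]"):
            done += 1
            total += 1
        elif line.startswith("- [ ]"):
            total += 1
    return done, total
```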
When proceeding with incomplete information:
- Document in clarification packet — Mark as `[ASSUMPTION]` with a validation plan
- Update AGENTS.md — Add to Key Decisions section
- Include in spec — Document in Risks & Open Questions section
- Schedule validation — Set a date/owner for assumption validation
Never proceed with undocumented assumptions.
If requirements change significantly:
- Re-run `devagent clarify-task` — Update requirements and completeness
- Re-run `devagent create-plan` — Create new plan version (use `-v2` suffix)
- Re-run `devagent implement-plan` — Execute updated tasks
- Update `AGENTS.md` — Document the change in Progress Log
Don't try to manually update old artifacts. Re-running workflows ensures consistency.
The devagent implement-plan workflow executes tasks automatically:
- Parses plan document for implementation tasks
- Validates dependencies before execution
- Executes tasks sequentially
- Updates AGENTS.md after each task
- Pauses only for blockers or ambiguous decisions
For manual implementation: You can still work through tasks manually by referencing the plan document, but devagent implement-plan automates the process.
When stopping work (end of day, switching features, interruptions):
devagent review-progress
Plan: plan/YYYY-MM-DD_feature-plan.md
Completed: Task 1, 2
In Progress: Task 3 (50% complete)
Blocked: Need clarification on API format
This creates a checkpoint for easy resumption. When resuming, devagent implement-plan will automatically continue from where it left off.
File Naming:
- Research: `research/YYYY-MM-DD_topic.md`
- Clarification: `clarification/YYYY-MM-DD_description.md`
- Plan: `plan/YYYY-MM-DD_feature-plan.md` (use `-v2` for major revisions)
Versioning:
- Major revisions: Use `-v2`, `-v3` suffixes
- Minor updates: Re-run workflow (overwrites old file, but history in `AGENTS.md`)
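The naming convention above can be checked with a simple pattern. The regex and function name here are illustrative assumptions, not part of DevAgent.

```python
import re

# Matches paths like plan/2025-11-06_datatable-plan.md or
# plan/2025-11-06_datatable-plan-v2.md (illustrative convention check).
ARTIFACT_RE = re.compile(
    r"^(research|clarification|plan)/"  # artifact folder
    r"\d{4}-\d{2}-\d{2}_"               # YYYY-MM-DD_ prefix
    r"[a-z0-9-]+"                       # topic slug
    r"(-v\d+)?\.md$"                    # optional -vN suffix
)

def is_valid_artifact(path: str) -> bool:
    return ARTIFACT_RE.match(path) is not None
```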
After devagent implement-plan execution or manual implementation:
bun run lint && bun run typecheck && bun run test
This runs:
- `bun run lint` — Linting errors
- `bun run typecheck` — TypeScript errors
- `bun run test` — Test failures
Fix errors immediately before moving to the next task. Note: there is no `/validate-code` workflow; validation is done manually.
The datatable feature went through:
- Initial research → TanStack Table approach
- Implementation → TanStack Table complete
- Scope change → Migrate to @lambdacurry/forms
- Re-clarification → Updated requirements
- Re-plan → v2 plan document
- Re-implementation → Migration tasks executed
This is normal. Workflows are designed to be re-run when scope changes. The workflow has been simplified: `create-spec` and `plan-tasks` were consolidated into `create-plan`, and `implement-plan` automates task execution.
- DevAgent is a tool, not an autonomous agent — You remain the coordinator
- Workflows can be re-run — Don't be afraid to iterate when scope changes
- AGENTS.md is your north star — Check it before starting, update it as you progress
- Workflows chain naturally — Research → Clarify → Plan → Implement
- Document assumptions — Never proceed with undocumented assumptions
- Use `devagent review-progress` — For context switches and resumption
- Validate early and often — Run lint/typecheck/test manually after implementation
- Iteration is expected — Complex features will go through multiple cycles
- Workflow consolidation — `create-spec` and `plan-tasks` merged into `create-plan`
- Automated implementation — `devagent implement-plan` executes tasks automatically
Start Here:
- Read `.devagent/core/README.md` (overview)
- Read `.devagent/core/AGENTS.md` (workflow roster)
- Read `DEVELOPER-GUIDE.md` (this document's companion)
- Start with `devagent new-task` for your first feature
- Follow the workflow sequence, referencing examples in DEVELOPER-GUIDE.md
Common Mistakes to Avoid:
- Skipping task hub creation
- Not referencing task hub in workflow calls
- Using old workflow names (`/create-spec`, `/plan-tasks` instead of `devagent create-plan`)
- Not using `devagent implement-plan` for automated task execution
- Proceeding with undocumented assumptions
- Not checking `AGENTS.md` before starting work
Potential Enhancements:
- Getting Started Guide — High-level overview explaining commands → workflows relationship
- Workflow Chaining Hints — After each workflow, suggest next steps with ready-to-run commands
- Gap Handling Guidance — When research finds `[NEEDS CLARIFICATION]`, provide clear next steps
- Progress Resumption — Better tooling for resuming work after context switches (partially addressed by `devagent implement-plan`)
- Model Recommendations — Guidance on which models to use for which workflows
- Validation Integration — Consider adding an automated validation step to `devagent implement-plan`
Last Updated: 2025-12-27
Related Documents:
- `DEVELOPER-GUIDE.md` — Comprehensive workflow guide with examples
- `.devagent/core/README.md` — Core kit setup and usage
- `.devagent/core/AGENTS.md` — Workflow roster and reference