A flexible, task-adaptive, human-guided RAG system designed specifically for development workflows with sophisticated multi-agent orchestration.
✅ Sophisticated Multi-Agent Workflow
- QueryAnalystAgent
- RetrievalSpecialistAgent
- ReRankerAgent
- QualityAssuranceAgent
- WriterAgent
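The five agents above can share a single minimal interface so the coordinator can chain them uniformly. The sketch below is illustrative only: the `Agent` base class, `WorkflowState` fields, and the `run` method name are assumptions, not the actual implementation.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class WorkflowState:
    """Shared state passed between agents (illustrative fields only)."""
    query: str
    task_type: str = "general"
    retrieved_docs: list = field(default_factory=list)
    draft_answer: str = ""

class Agent(ABC):
    """Minimal contract every agent in the pipeline satisfies."""
    @abstractmethod
    def run(self, state: WorkflowState) -> WorkflowState: ...

class QueryAnalystAgent(Agent):
    def run(self, state: WorkflowState) -> WorkflowState:
        # A real implementation would call an LLM; here we just tag the
        # task type from a keyword as a stand-in.
        if "implement" in state.query.lower():
            state.task_type = "code_implementation"
        return state
```

Each downstream agent (`RetrievalSpecialistAgent`, `ReRankerAgent`, etc.) would follow the same `run(state) -> state` shape, which keeps the graph wiring uniform.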
✅ Task-Adaptive Routing
- Workflow adapts to task type
- Different agent combinations for different tasks
- Optimized for efficiency and quality
✅ Human-in-the-Loop First (6 Checkpoints!)
- HITL #0: Knowledge Source Selection ← NEW!
- HITL #1: Query Analysis Review
- HITL #2: Retrieval Results Review
- HITL #3: Ranked Context Review
- HITL #4: Draft Answer Review
- HITL #5: Final Approval
✅ Proactive Knowledge Source Management
- User selects sources at START
- Dynamic URL scraping
- Document upload capability
- Predefined category selection (architecture, agile, coding, frameworks)
✅ Long-Running Project Support
- Thread-based state persistence
- Multi-session continuity
- Pause/resume capability
- Full conversation history
✅ Development-Context Specialization
- Task types for development workflows
- Architecture design support
- Code implementation guidance
- API integration assistance
- Bug fixing help
START
↓
📚 [HITL #0: SELECT KNOWLEDGE SOURCES]
Agent: "What knowledge sources should I use for this task?"
User Options:
- Predefined categories (architecture, agile, coding, frameworks)
- Custom URLs (API docs, tutorials)
- Local documents (specs, designs)
- "defaults" or "all"
Example: "architecture, coding_guidelines, url: https://docs.langchain.com/langgraph"
↓
🔧 LOAD_KNOWLEDGE_SOURCES
- Scrape URLs → Index into vector store
- Load documents → Index into vector store
- Load category docs → Index into vector store
↓
🔍 QUERY_ANALYST (QueryAnalystAgent)
- Understand intent
- Extract key concepts
- Determine retrieval strategy
- Classify task type
↓
📊 [HITL #1: REVIEW QUERY ANALYSIS]
User: Approve understanding or refine
↓
📚 RETRIEVAL_SPECIALIST (RetrievalSpecialistAgent)
- Multi-source retrieval (using loaded sources)
- Query expansion
- Source aggregation
↓
📊 [HITL #2: REVIEW RETRIEVAL RESULTS]
User: Approve sources, request more, or add specific URLs
↓
🎯 RE_RANKER (ReRankerAgent)
- Relevance scoring
- Quality filtering
- Result ordering
↓
📊 [HITL #3: REVIEW RANKED CONTEXT]
User: Approve quality or improve ranking
↓
✍️ WRITER (WriterAgent)
- Answer synthesis
- Formatting
- Citations
↓
📊 [HITL #4: REVIEW DRAFT ANSWER]
User: Approve, revise, or request more context
↓
✅ QUALITY_ASSURANCE (QualityAssuranceAgent)
- Completeness check
- Consistency validation
- Final quality assessment
↓
📊 [HITL #5: FINAL APPROVAL]
User: Ship it, iterate, or restart
↓
END
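At each HITL checkpoint the graph routes on the user's feedback. A hedged sketch of that routing logic, with node names and commands mirroring the diagram above (the function name and the `await_clarification` fallback are assumptions):

```python
def route_after_checkpoint(checkpoint: int, human_input: str) -> str:
    """Map user feedback at a HITL checkpoint to the next graph node.

    Illustrative only -- node names and commands follow the flow diagram,
    not a specific implementation.
    """
    text = human_input.strip().lower()
    if text in ("approve", "continue", "yes"):
        forward = {
            0: "query_analyst", 1: "retrieval_specialist", 2: "re_ranker",
            3: "writer", 4: "quality_assurance", 5: "END",
        }
        return forward[checkpoint]
    if text.startswith(("refine:", "improve:")):
        # Retry the stage whose output is under review.
        producing = {0: "load_knowledge_sources", 1: "query_analyst",
                     2: "retrieval_specialist", 3: "re_ranker",
                     4: "writer", 5: "quality_assurance"}
        return producing[checkpoint]
    if text == "more_context":
        return "retrieval_specialist"
    if text in ("rewrite", "restart"):
        return "query_analyst"
    if text == "ship":
        return "END"
    return "await_clarification"  # unrecognized input: ask the user again
```

In a LangGraph build, a function like this would back the conditional edges out of each review node.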
Task: Architecture Design
Knowledge Sources: architecture docs, design patterns
Agents: query_analyst → retrieval → re_ranker → writer
HITL: After source selection, after pattern matching, final review

Task: Code Implementation
Knowledge Sources: coding standards, API docs, code examples
Agents: query_analyst → multi_source_retrieval → code_synthesizer → quality_checker
HITL: After source selection, after code generation

Task: API Integration
Knowledge Sources: Official API docs (PRIMARY!), examples
Agents: query_analyst → api_doc_retriever → example_finder → integration_writer
HITL: After source selection, after API version confirm, after code review

Task: Code Review
Knowledge Sources: Coding standards, security guidelines, anti-patterns
Agents: query_analyst → standards_retriever → code_analyzer → recommendation_writer
HITL: After issue identification

Task: Bug Fixing
Knowledge Sources: Error pattern database, similar bug resolutions
Agents: query_analyst → error_pattern_matcher → solution_retriever → fix_generator
HITL: After root cause hypothesis, after fix generation

Task: Test Generation
Knowledge Sources: Testing patterns, framework docs, coverage requirements
Agents: query_analyst → test_pattern_retriever → edge_case_identifier → test_generator
HITL: After test strategy approval, after test review
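The per-task workflows above can be expressed as a simple lookup from task type to agent sequence. A sketch, assuming the string node names from the specs (the `TASK_PIPELINES` table and `pipeline_for` are hypothetical names):

```python
# Agent pipelines per task type, mirroring the workflow specs above.
TASK_PIPELINES: dict[str, list[str]] = {
    "architecture_design": ["query_analyst", "retrieval", "re_ranker", "writer"],
    "code_implementation": ["query_analyst", "multi_source_retrieval",
                            "code_synthesizer", "quality_checker"],
    "api_integration": ["query_analyst", "api_doc_retriever",
                        "example_finder", "integration_writer"],
    "code_review": ["query_analyst", "standards_retriever",
                    "code_analyzer", "recommendation_writer"],
    "bug_fixing": ["query_analyst", "error_pattern_matcher",
                   "solution_retriever", "fix_generator"],
    "test_generation": ["query_analyst", "test_pattern_retriever",
                        "edge_case_identifier", "test_generator"],
}

def pipeline_for(task_type: str) -> list[str]:
    """Return the agent sequence for a task type, falling back to the
    full architecture pipeline for unknown types."""
    return TASK_PIPELINES.get(task_type, TASK_PIPELINES["architecture_design"])
```

Keeping the routing table in data (rather than branching code) makes it easy for the QueryAnalystAgent's task classification to select a pipeline, and for new task types to be added without touching graph logic.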
📚 Knowledge Source Selection
I'm helping you with: 'Implement LangGraph agent with checkpointing'
To provide the best assistance, I need to know what knowledge sources to use.
Available Options:
Predefined Categories:
- architecture - Architecture documentation and patterns
- agile - Sprint plans, user stories, requirements
- coding_guidelines - Coding standards and best practices
- framework_docs - LangChain, LangGraph, API documentation
Custom Sources:
- url: https://example.com - Add website URL
- doc: /path/to/file.md - Add local document
Quick Options:
- defaults - Use project default sources
- all - Use all available sources
Format (comma-separated):
architecture, coding_guidelines, url: https://docs.langchain.com/langgraph
What knowledge sources should I use?
Your Selection: _________________________________
User Input: "framework_docs, url: https://langchain-ai.github.io/langgraph/, doc: specs/requirements.md"
Parsed:
{
"categories": ["framework_docs"],
"urls": ["https://langchain-ai.github.io/langgraph/"],
"documents": ["specs/requirements.md"]
}
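The comma-separated selection format can be parsed with a few string checks. A minimal sketch of the HITL #0 parsing step (the function name `parse_source_selection` is an assumption):

```python
def parse_source_selection(raw: str) -> dict:
    """Parse a comma-separated knowledge-source selection string into
    categories, URLs, and documents, as in the example above.

    Illustrative sketch -- not the actual parser.
    """
    result = {"categories": [], "urls": [], "documents": []}
    for item in raw.split(","):
        item = item.strip()
        if not item:
            continue
        if item.startswith("url:"):
            result["urls"].append(item[len("url:"):].strip())
        elif item.startswith("doc:"):
            result["documents"].append(item[len("doc:"):].strip())
        else:
            # Bare tokens are treated as predefined category names.
            result["categories"].append(item)
    return result
```

Note that splitting on commas assumes URLs and paths contain no commas; a production parser would want a more robust grammar.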
Actions:
1. Load framework_docs from docs/development/
2. Scrape https://langchain-ai.github.io/langgraph/
3. Load specs/requirements.md
4. Index all sources into vector store
5. Proceed to query_analyst

# Continue forward
"approve", "continue", "yes"
# Refine current stage
"refine: <feedback>", "improve: <feedback>"
# Add more sources (at retrieval stage)
"add_source: https://example.com"
"need_more"
# Go back to previous stage
"more_context" → Back to retrieval
"improve_ranking" → Retry re-ranker
# Start over
"rewrite", "restart"
# Fast-track
"skip_qa" → Skip quality assurance
"ship" → END immediately

# Session 1: Start project, select sources
result = coordinator.execute(
"Design architecture for RAG document pipeline",
human_in_loop=True,
thread_id="architecture_project_123"
)
# → Interrupts at HITL #0 (knowledge sources)
# User: "architecture, design_patterns, url: https://langchain.com/docs"
# → Loads sources, continues to query_analyst
# → Interrupts at HITL #1 (query understanding)
# User approves, continues...
# → Interrupts at HITL #2 (retrieval results)
# User pauses session (needs to think)
# Session 2: Resume next day
result = coordinator.resume(
thread_id="architecture_project_123",
human_input="approve" # Approve retrieval results
)
# → Continues from HITL #2
# → Goes through re-ranker, writer, QA
# → User iteratively refines through remaining stages
# Session 3: Final refinements
result = coordinator.resume(
thread_id="architecture_project_123",
human_input="revise: add performance considerations"
)
# → Goes back to writer with feedback
# → Human approves final version

Human-Guided:
- User decides what knowledge to use (HITL #0)
- Strategic checkpoints at critical decisions
- Flexible routing based on feedback

Task-Adaptive:
- Workflow adapts to task type
- Optimal agent combinations
- Efficient for simple tasks, thorough for complex ones

Development-Focused:
- Specialized for development workflows
- Architecture, coding, API integration, debugging
- Leverages project documentation

Quality-Oriented:
- Multiple quality checkpoints
- Sophisticated agent processing
- Human validation at each stage

Long-Running:
- State persistence across sessions
- Pause/resume capability
- Multi-session refinement

Observable:
- See all agent work in LangSmith
- Intermediate results visible
- Clear decision points
Implemented:
- Sophisticated 5-agent workflow
- Graph structure with flexible routing
- All agent node implementations
- Knowledge source selection (HITL #0)
- Knowledge source loading and indexing
- Thread-based state persistence
- 6 HITL checkpoints defined
- Task type classification (enum)
Remaining Work:
- Implement remaining HITL review nodes (#1-5)
- Implement routing functions for each checkpoint
- Integrate with KnowledgeSourceManager for real loading
- Web scraping implementation for URLs
- Document upload handler
- UI components for knowledge source selection
- UI components for each HITL stage
- Task type auto-detection logic
- Development task type workflows
- Progress visualization in UI
- Testing with real long-running projects
- LangSmith trace enhancements
- Performance optimization
Immediate Next Steps:
- Complete HITL node implementations (review_query_analysis, etc.)
- Implement routing functions for all checkpoints
- Build Streamlit UI for knowledge source selection
- Test end-to-end with a real development task
- Integrate with development agents (US-DEV-RAG-001)
- Add telemetry for task type and workflow performance
Related Documents:
- HITL-First RAG Architecture
- Task-Adaptive RAG System
- Development Context RAG Tasks
- US-DEV-RAG-001 User Story
A RAG system that:
- Adapts to the task at hand
- Learns from user feedback
- Guides human through complex workflows
- Leverages exactly the knowledge needed
- Supports long-running development projects
- Delivers high-quality, contextually-aware assistance
This is not just a Q&A system - it's a collaborative development partner that uses sophisticated agents to help developers build better software.