DevOps Copilot is an AI-powered assistant designed to analyze local codebases and provide specialized DevOps guidance.
Unlike a general-purpose chatbot, this tool uses multi-agent orchestration (via LangGraph) to route user requests to specific specialist agents based on the intent of the query and the actual structure of the provided code.
The application allows a user to provide a path to a local codebase and ask for specific DevOps deliverables. The system then:
- Analyzes the code: Uses an AST-based analyzer to scan the codebase and extract context.
- Routes the request: A router determines which specialist agent is best suited for the task.
- Generates output: The selected agent uses a Google Gemini model to produce a tailored result.
The repository implements five distinct agent roles:
- 🐳 Dockerfile Agent: Generate or optimize Dockerfiles and container workflows.
- 🧪 Test Case Agent: Inspect code paths and suggest test scenarios and coverage.
- 📦 Bundle Size Agent: Review frontend/backend dependencies and suggest bundle-size improvements.
- 🔒 Production Agent: Find production-readiness issues, runtime risks, and deployment gaps.
- 💬 General Agent: Answer general DevOps questions about CI/CD, cloud, infrastructure, and deployment.
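The routing logic itself is not shown in this README; as a rough illustration of intent-based dispatch (a keyword sketch with hypothetical names, not the actual LangGraph router), it might look like:

```python
# Hypothetical sketch of intent routing. The real project uses a LangGraph
# router whose internals are not documented here; agent names are assumed.
AGENT_KEYWORDS = {
    "dockerfile_agent": ("dockerfile", "docker", "container"),
    "test_case_agent": ("test", "coverage", "pytest"),
    "bundle_size_agent": ("bundle", "dependency", "tree-shaking"),
    "production_agent": ("production", "deploy", "runtime risk"),
}

def route(query: str) -> str:
    """Pick a specialist agent by keyword match; fall back to the general agent."""
    q = query.lower()
    for agent, keywords in AGENT_KEYWORDS.items():
        if any(k in q for k in keywords):
            return agent
    return "general_agent"
```

In the actual system the router also weighs the analyzed codebase structure, not just query keywords.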
- FastAPI: HTTP API with `/api/v1` routes and an `/analyze/stream` SSE endpoint.
- SQLite: Local chat and conversation persistence via `backend/data/devops_chatbot.db`.
- LangGraph & LangChain: Orchestrates the router and agents for intent-aware request handling.
- Google Gemini: Cloud LLM runtime support via `GOOGLE_API_KEY`.
- LM Studio / edge runtime: Supports local LM Studio edge model selection with `EDGE_MODEL_NAME`.
- AST analyzer: `backend/app/tools/code_analyzer.py` parses Python AST and summarizes source trees for other languages.
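The analyzer's internals are not shown in this README; a minimal stdlib-only sketch of what Python AST summarization can look like (illustrative only, not the actual `code_analyzer.py`):

```python
import ast

def summarize_python(source: str) -> dict:
    """Summarize top-level functions, classes, and imports in a Python file.
    A sketch of AST-based extraction, not the project's actual analyzer."""
    tree = ast.parse(source)
    summary = {"functions": [], "classes": [], "imports": []}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            summary["functions"].append(node.name)
        elif isinstance(node, ast.ClassDef):
            summary["classes"].append(node.name)
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            summary["imports"].append(ast.unparse(node))
    return summary
```

A summary like this gives the specialist agents structured context instead of raw source text.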
- React 19 + TypeScript: Interactive chat UI.
- Vite: Development server and production build.
- Tailwind CSS v4 + DaisyUI: UI styling and components.
- Proxy: Frontend dev server proxies `/api` requests to backend port `8000`.
Backend:
- `cd backend`
- `pip install -e .` or install requirements from `pyproject.toml`
- Create `.env` with the values below.
- `uv run main.py` or `python main.py`

Frontend:
- `cd frontend`
- `npm install`
- `npm run dev`
Create backend/.env with:
GOOGLE_API_KEY=your-google-api-key
CLOUD_MODEL_NAME=gemini-2.0-flash
EDGE_MODEL_NAME=your-lmstudio-model-name
LMSTUDIO_BASE_URL=http://127.0.0.1:1234/v1
LMSTUDIO_API_KEY=lm-studio
SQLITE_DB_PATH=./data/devops_chatbot.db

- `GOOGLE_API_KEY`: required for cloud model runtime.
- `EDGE_MODEL_NAME`: selects the LM Studio / edge model.
- `LMSTUDIO_BASE_URL` / `LMSTUDIO_API_KEY`: LM Studio OpenAI-compatible host.
- `SQLITE_DB_PATH`: local conversation store.
- `GET /health`: health check.
- `GET /api/v1/runtime/edge/status`: edge runtime status.
- `POST /api/v1/analyze/stream`: stream analysis results via SSE.
- `GET /api/v1/conversations`: list saved chats.
- `POST /api/v1/conversations`: create a chat.
- `GET /api/v1/conversations/{id}/messages`: load messages.
The backend analyzer can inspect:
- Source files: `.py`, `.js`, `.ts`, `.jsx`, `.tsx`, `.go`, `.java`, `.rs`, `.rb`, `.php`, `.c`, `.cpp`, `.h`, `.hpp`, `.swift`, `.kt`
- Dependency files: `pyproject.toml`, `package.json`, `requirements.txt`, `go.mod`, `Cargo.toml`, `pom.xml`, `build.gradle`, etc.
- Config files: `Dockerfile`, `.dockerignore`, `.gitignore`, `tsconfig.json`, `vite.config.ts`, and common app config files.
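A simple classifier over the categories above (an illustrative sketch, not the scanner's actual logic):

```python
from pathlib import Path

# Buckets mirror the README's file-type lists; the function name is assumed.
SOURCE_EXTS = {".py", ".js", ".ts", ".jsx", ".tsx", ".go", ".java", ".rs",
               ".rb", ".php", ".c", ".cpp", ".h", ".hpp", ".swift", ".kt"}
DEPENDENCY_FILES = {"pyproject.toml", "package.json", "requirements.txt",
                    "go.mod", "Cargo.toml", "pom.xml", "build.gradle"}
CONFIG_FILES = {"Dockerfile", ".dockerignore", ".gitignore",
                "tsconfig.json", "vite.config.ts"}

def classify(path: str) -> str:
    """Bucket a file into source / dependency / config / other by name."""
    name = Path(path).name
    if name in DEPENDENCY_FILES:
        return "dependency"
    if name in CONFIG_FILES:
        return "config"
    if Path(path).suffix in SOURCE_EXTS:
        return "source"
    return "other"
```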
- `backend/`: backend API, agents, prompts, storage, and tooling.
- `backend/app/agents/`: agent nodes and router logic.
- `backend/app/api/`: API routes and SSE streaming.
- `backend/app/llm/`: model runtime and edge status checks.
- `backend/app/prompts/`: prompt library and skill persona definitions.
- `backend/app/tools/`: codebase analyzer and file scanning.
- `backend/app/storage/`: SQLite chat storage.
- `frontend/`: React chat UI and build configuration.
- `frontend/src/`: UI components, hooks, API utilities, and types.
- Frontend dev mode proxies API requests to `http://localhost:8000`.
- The backend logs startup warnings when cloud or edge runtime config is incomplete.
- Chat history is stored locally, so restarting the backend preserves conversations unless the DB file is removed.
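The persistence layer is not detailed here; a minimal stdlib `sqlite3` sketch of a conversation store (the schema and function names are assumptions, not the project's actual tables):

```python
import sqlite3

def open_store(db_path: str) -> sqlite3.Connection:
    """Open (or create) a chat store with an assumed minimal schema."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS messages ("
        "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  conversation_id TEXT NOT NULL,"
        "  role TEXT NOT NULL,"
        "  content TEXT NOT NULL)"
    )
    return conn

def save_message(conn: sqlite3.Connection, conversation_id: str,
                 role: str, content: str) -> None:
    conn.execute(
        "INSERT INTO messages (conversation_id, role, content) VALUES (?, ?, ?)",
        (conversation_id, role, content),
    )
    conn.commit()

def load_messages(conn: sqlite3.Connection,
                  conversation_id: str) -> list[tuple[str, str]]:
    rows = conn.execute(
        "SELECT role, content FROM messages WHERE conversation_id = ? ORDER BY id",
        (conversation_id,),
    )
    return list(rows)
```

Because the store is a plain file on disk, conversations survive restarts exactly as the note above describes.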