Next-Gen AI Research Automation Platform
Full research lifecycle coverage: Ideation · Literature Survey · Hypothesis Design · Experiment Execution · Paper Writing
Local-first · Multi-model · Fully open-source
中文 (Chinese README) · Quick Start · Screenshots · Contributing · Roadmap
✦ Core Capabilities
Research Pipeline · 4-stage automation │ AI Chat · Multi-model streaming │ Experiment Execution · SSH to remote GPU servers │ Codex Integration · AI writes experiment code
Memory System · Cross-experiment knowledge persistence │ Multi-model Config · Separate profiles for chat / code / paper / experiment │ Stage Reports · Survey → Analysis → Experiment │ Local-first · Your data stays with you
🎬 Watch LabOS Demo (4min, Chinese with UI walkthrough)
Also available on the Releases page
📁 Projects — Multi-project management with independent experiments, memory, and paper library per project
| Feature | Description |
|---|---|
| 🔬 4-Stage Research Pipeline | Ideation → Design → Execution → Paper, each stage produces independent reports |
| 💬 Chat-Driven | AI assistant with streaming; create projects directly from conversations |
| 🤖 Multi-LLM Profiles | Configure different models and APIs for chat, code analysis, paper writing, experiment design |
| 🖥️ Remote Execution | SSH to GPU servers (AutoDL, etc.) with real-time log streaming |
| ⚡ Codex CLI Integration | Full-auto mode, JSONL streaming, AI writes experiment code |
| 🧠 Memory System | Cross-experiment knowledge persistence, project-level memory retrieval |
| ✅ Stage Approval | Approve → next stage / Revise & rerun / Reject & terminate |
| ⚙️ Fully Configurable | All settings exposed via Web UI; works with any OpenAI-compatible API |
| 💾 Local-First | SQLite storage, all data stays on your machine |
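Chat streaming in the table above arrives as Server-Sent Events: newline-delimited `data:` lines, with a blank line terminating each event. A minimal parser sketch in Python (this is illustrative, not LabOS's actual frontend or backend code, and the `delta` payload shape is a hypothetical example):

```python
def iter_sse_events(lines):
    """Yield the data payload of each Server-Sent Event.

    Events are separated by blank lines; multiple data: lines
    within one event are joined with newlines, per the SSE spec.
    """
    buf = []
    for line in lines:
        if line.startswith("data:"):
            buf.append(line[5:].lstrip())  # strip "data:" prefix and padding
        elif line == "" and buf:
            yield "\n".join(buf)           # blank line ends the event
            buf = []
    if buf:
        yield "\n".join(buf)               # flush a trailing unterminated event


stream = [
    'data: {"delta": "Hel"}',
    "",
    'data: {"delta": "lo"}',
    "",
    "data: [DONE]",
    "",
]
events = list(iter_sse_events(stream))
```

A real client would feed this the lines of an HTTP response body instead of a hard-coded list.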
```bash
git clone https://github.com/YUANXICHE98/LabOS.git
cd LabOS
bash start.sh
```

Or manually:

```bash
pip install -r requirements.txt
cd src && python api_server.py
```

Open your browser at http://localhost:8000
- Go to Settings → Configure your LLM API endpoint and key (any OpenAI-compatible API)
- (Optional) Configure SSH server for remote experiment execution
- Go to Chat → Start chatting → Create a project from the conversation
- Or go to Projects → Create a project manually → Launch an experiment
LabOS supports independent LLM configurations per task type:
| Task Type | Use Case | Recommended Models |
|---|---|---|
| General Chat | Daily research discussions | DeepSeek-Chat / GPT-4o |
| Code Analysis | Code review, experiment code generation | DeepSeek-Coder / Claude |
| Paper Analysis | Literature review, paper writing | GPT-4o / Claude |
| Experiment Design | Hypothesis generation, experiment planning | DeepSeek-Chat / Claude |
Configure via Settings > LLM Profiles — each task type gets its own Base URL + API Key + Model.
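Any OpenAI-compatible endpoint works because the request shape is identical across providers; only the Base URL, key, and model change per profile. A sketch of routing by task type (the profile names, URLs, and dict schema here are illustrative, not LabOS's stored configuration):

```python
# Hypothetical per-task profiles; in LabOS these come from Settings → LLM Profiles.
PROFILES = {
    "chat":       {"base_url": "https://api.deepseek.com/v1", "model": "deepseek-chat"},
    "code":       {"base_url": "https://api.deepseek.com/v1", "model": "deepseek-coder"},
    "paper":      {"base_url": "https://api.openai.com/v1",   "model": "gpt-4o"},
    "experiment": {"base_url": "https://api.deepseek.com/v1", "model": "deepseek-chat"},
}

def build_chat_request(task_type, messages):
    """Return (url, body) for an OpenAI-compatible /chat/completions call."""
    profile = PROFILES[task_type]
    url = profile["base_url"].rstrip("/") + "/chat/completions"
    body = {"model": profile["model"], "messages": messages, "stream": True}
    return url, body

url, body = build_chat_request(
    "paper", [{"role": "user", "content": "Summarize this abstract."}]
)
```

The request would then be sent with httpx, e.g. `httpx.post(url, json=body, headers={"Authorization": f"Bearer {key}"})`, with the streamed response consumed as SSE.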
LabOS/
├── src/
│ ├── api_server.py # FastAPI backend — all API endpoints and pipeline logic
│ ├── index.html # Main page (SPA)
│ ├── app.js # Frontend — UI logic, API calls, SSE streaming
│ └── style.css # Styles
├── docs/
│ ├── screenshots/ # Screenshots
│ └── videos/ # Demo videos
├── start.sh # One-click launcher
├── requirements.txt # Python dependencies
├── CONTRIBUTING.md # Contribution guide
├── GOVERNANCE.md # Contributor governance & incentives
└── LICENSE # AGPL-3.0
| Layer | Technology |
|---|---|
| Backend | Python / FastAPI / uvicorn |
| Database | SQLite (zero-config, local file) |
| Frontend | Vanilla HTML + CSS + JavaScript (no build step) |
| Remote Execution | Paramiko (SSH) |
| LLM Calls | httpx (OpenAI-compatible protocol) |
| Real-time | Server-Sent Events (SSE) |
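Local-first with SQLite means the entire database is a single zero-config file on disk. A minimal sketch of what project storage could look like (the schema below is hypothetical, not LabOS's actual tables, and an in-memory database stands in for the local file):

```python
import sqlite3

# ":memory:" for illustration; LabOS would open a local .db file instead.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE projects (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        stage TEXT NOT NULL DEFAULT 'ideation'
    )
""")
conn.execute("INSERT INTO projects (name) VALUES (?)", ("demo",))  # parameterized insert
conn.commit()
row = conn.execute("SELECT name, stage FROM projects").fetchone()
```

Because everything lives in one file, backing up or moving a workspace is a plain file copy.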
LabOS follows an open core + paid add-ons model:
All code in this repo is permanently free and open source: full research pipeline, chat, project management, LLM config, memory system, experiment execution.
- Skill Library — Verified research methodologies, experiment paths, and best practices. Think of it as a knowledge base of "what actually works"
- Premium Integrations — Pre-built connectors for more cloud GPU platforms and HPC clusters
- Priority Support — Direct access to the dev team
The platform itself will always be open source. The real value is in verified research methodologies — battle-tested paths that save weeks of trial and error.
We value every contribution. See GOVERNANCE.md for details.
| Tier | Requirement | Incentive |
|---|---|---|
| 🌱 Contributor | 1 merged PR | Contributors Wall recognition |
| 🌿 Active Contributor | 3+ PRs | Free Skill Library access + Beta early access |
| 🌳 Core Contributor | 10+ PRs or 1 major feature | 30% revenue share + paper co-authorship |
| 💰 Bounty Tasks | Issues labeled 💰 bounty | Crypto / sponsor cash rewards |
Code isn't the only way to contribute — docs, translations, bug reports, research methodologies, and design all count equally.
PRs welcome! See CONTRIBUTING.md, then browse Issues to find tasks that interest you.
⭐ Star this repo — Help more people discover LabOS │ 🍴 Fork & Contribute │ 💰 Sponsor — Fund continued development
| Chain | Address |
|---|---|
| ETH / ERC-20 (USDT, USDC, etc.) | 0xc6B4720835E6C3CB58618B4df26B64F595C30202 |
Click the "Sponsor" button at the top of this repository.
AGPL-3.0 — Modifications must be open-sourced. Network services must provide source code. Derivatives must reference the upstream repo.
Made with ❤️ for the research community



