README.md (22 additions & 6 deletions)

# 🧠 docker-claude-code
[Claude Code](https://claude.com/product/claude-code) in a Docker container. No host installs. No permission nightmares. Just vibes and `--dangerously-skip-permissions`. Use it as a CLI, HTTP API, OpenAI-compatible endpoint, MCP server, or Telegram bot.
Four modes, five interfaces:

- **Interactive** — drop-in `claude` CLI replacement, persistent container, picks up where you left off
- **Programmatic** — pass a prompt, get a response, pipe it into your cursed pipeline
- **API server** — HTTP endpoints for prompts, file management, monitoring. Slap it in your infra
- **OpenAI-compatible** — `chat/completions` endpoint for LiteLLM, OpenAI SDKs, and anything that speaks OpenAI
- **MCP server** — Model Context Protocol endpoint so other AI agents can use Claude Code as a tool
- **Telegram bot** — talk to Claude from your phone when you're takin' a shit. Per-chat workspaces, models, effort levels, file sharing, shell access
All file paths are relative to `/workspaces`. Path traversal outside root is blocked.
#### OpenAI-compatible endpoints
The API also exposes an OpenAI-compatible adapter so tools like [LiteLLM](https://github.com/BerriAI/litellm), OpenAI SDKs, or anything that speaks `chat/completions` can connect directly. Unlike a plain model proxy, this runs the full Claude Code agentic CLI behind the scenes — it can read/write files, run commands, and use tools.
**`GET /openai/v1/models`** — list available models:
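
For example, assuming the API is running on `localhost:8080` as in the other examples:

```bash
curl http://localhost:8080/openai/v1/models
```

The response follows the standard OpenAI model-list shape, with the same aliases the CLI accepts.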
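
**`POST /openai/v1/chat/completions`** — a minimal request sketch (field behavior is described below; prompt text is illustrative):

```bash
curl -X POST http://localhost:8080/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "sonnet",
    "reasoning_effort": "low",
    "messages": [
      {"role": "system", "content": "You are terse."},
      {"role": "user", "content": "Summarize README.md in one line."}
    ]
  }'
```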
Use the same model aliases as the CLI (`haiku`, `sonnet`, `opus`). `system` role messages become `--system-prompt`. Pass `reasoning_effort` (`low`/`medium`/`high`) to control effort — maps to claude's `--effort`. `temperature`, `max_tokens`, `tools`, and other OpenAI-specific fields are accepted but silently ignored. Provider prefixes are stripped automatically (`claude-code/haiku` → `haiku`).
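
The prefix-stripping rule is simple enough to show with shell parameter expansion (illustrative, not the actual implementation):

```bash
model="claude-code/haiku"
# "${model#*/}" removes the shortest prefix ending in "/" — i.e. the provider part.
# A bare alias with no "/" passes through unchanged.
echo "${model#*/}"
```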
**Message handling:**
- **Single user message** — sent directly as the prompt (fast path, no overhead).
- **Multi-turn conversations** — the full messages array is written to a JSON file in the workspace (`_oai_uploads/conv_<id>.json`). Claude Code reads the file and responds to the last user message, preserving the conversation context.
- **Multimodal content** — base64-encoded images and image URLs in message content are downloaded/decoded and saved to the workspace. The content is replaced with the local file path so Claude Code can read the images directly.
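
For instance, a multi-turn request with an image in the standard OpenAI content-part format (the base64 data is a placeholder; the image gets saved into the workspace and its local path substituted before Claude Code sees it):

```json
{
  "model": "sonnet",
  "messages": [
    {"role": "user", "content": "Here comes a picture."},
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,..."}}
      ]
    }
  ]
}
```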
Streaming (`"stream": true`) returns standard SSE events. Content arrives in message-level chunks (not character-by-character deltas) since Claude Code assembles full messages internally.
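
A streaming sketch with curl (`-N` disables output buffering so SSE chunks print as they arrive):

```bash
curl -N -X POST http://localhost:8080/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "haiku", "stream": true,
       "messages": [{"role": "user", "content": "hi"}]}'
```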
**File workflow tip:** for best performance, upload input files via `PUT /files/...`, tell Claude Code to work with them by path, then download the output files via `GET /files/...`. Much faster than embedding large content in the prompt.
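
A sketch of that round-trip (file names are hypothetical; the exact upload encoding for `PUT /files/...` may differ from this `--data-binary` form):

```bash
# Upload the input into the workspace
curl -X PUT http://localhost:8080/files/input.csv --data-binary @input.csv

# Ask Claude Code to work on it by path
curl -X POST http://localhost:8080/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "sonnet", "messages": [{"role": "user",
       "content": "Read input.csv and write a cleaned copy to output.csv"}]}'

# Download the result
curl http://localhost:8080/files/output.csv -o output.csv
```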
Custom headers for claude-specific behavior:
| Header | Description |
| ------ | ----------- |
| `X-Claude-Workspace` | Workspace subpath under `/workspaces` |
| `X-Claude-Continue` | Set to `1`/`true`/`yes` to continue the previous session |
| `X-Claude-Append-System-Prompt` | Text to append to the system prompt |
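
For example, to run in a dedicated workspace and continue the previous session there:

```bash
curl -X POST http://localhost:8080/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-Claude-Workspace: myproject" \
  -H "X-Claude-Continue: 1" \
  -d '{"model": "sonnet", "messages": [{"role": "user", "content": "Keep going."}]}'
```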

#### MCP server

The API also exposes an MCP (Model Context Protocol) server at `/mcp/` using streamable HTTP transport. Any MCP-compatible client (Claude Desktop, Claude Code, etc.) can connect to it. The `claude_run` tool runs the full Claude Code agentic CLI — it can read/write files, run commands, and use tools in the workspace, not just generate text.
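
For example, Claude Code itself can register it as a remote server (sketch; assumes the API on `localhost:8080` and names the server `docker-claude`):

```bash
claude mcp add --transport http docker-claude http://localhost:8080/mcp/
```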