
Commit 1f800ff

Merge pull request #44 from ericc-ch/feat/claude-endpoints
Implement Anthropic/Claude compatible endpoints
2 parents: d1cfc71 + 5a745a5

39 files changed

Lines changed: 2055 additions & 148 deletions

.claude/settings.json

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+ {
+   "env": {
+     "ANTHROPIC_BASE_URL": "http://localhost:4141",
+     "ANTHROPIC_AUTH_TOKEN": "dummy",
+     "ANTHROPIC_MODEL": "gpt-4.1",
+     "ANTHROPIC_SMALL_FAST_MODEL": "gpt-4.1"
+   }
+ }

CLAUDE.md

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+ ## Development Commands
+
+ - **Install dependencies**: `bun install`
+ - **Build**: `bun run build`
+ - **Dev server (watch)**: `bun run dev`
+ - **Production start**: `bun run start`
+ - **Lint**: `bun run lint`
+ - **Pre-commit lint/fix**: runs automatically via git hooks (`bunx eslint --fix`)
+
+ ## Architecture Overview
+
+ - **Entry point**: `src/main.ts` defines CLI subcommands (`start` and `auth`) for the Copilot API server and authentication flow.
+ - **Server**: `src/server.ts` sets up HTTP routes using Hono, maps OpenAI/Anthropic-compatible endpoints, and handles logging and CORS.
+ - **Routes**: handlers for chat completions, embeddings, models, and messages live under `src/routes/`, providing API endpoints compatible with the OpenAI and Anthropic APIs.
+ - **Copilot communication**: `src/services/copilot/` contains methods for proxying requests (chat completions, model listing, embeddings) to the GitHub Copilot backend using user tokens.
+ - **Lib utilities**: `src/lib/` contains configuration, token, model caching, and error handling helpers.
+ - **Authentication**: `src/auth.ts` provides the CLI handler for authenticating with GitHub, managing required tokens, and persisting them locally.
+
+ ## API Endpoints
+
+ - **OpenAI-compatible**:
+   - `POST /v1/chat/completions`
+   - `GET /v1/models`
+   - `POST /v1/embeddings`
+ - **Anthropic-compatible**:
+   - `POST /v1/messages`
+   - `POST /v1/messages/count_tokens`
+
+ ## Other Notes
+
+ - Ensure Bun (>= 1.2.x) is installed for all scripts and local dev.
+ - Tokens and cache are handled automatically; manual authentication can be forced with the `auth` subcommand.
+ - No `.cursorrules`, `.github/copilot-instructions.md`, or `.cursor/rules` found, so follow typical TypeScript/Bun/ESLint conventions as seen in this codebase.

README.md

Lines changed: 81 additions & 28 deletions
@@ -1,13 +1,13 @@
- # Copilot API
+ # Copilot API Proxy

- ⚠️ **EDUCATIONAL PURPOSE ONLY** ⚠️
- This project is a reverse-engineered implementation of the GitHub Copilot API created for educational purposes only. It is not officially supported by GitHub and should not be used in production environments.
+ > [!WARNING]
+ > This is a reverse-engineered proxy of the GitHub Copilot API. It is not supported by GitHub and may break unexpectedly. Use at your own risk.

  [![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/E1E519XS7W)

  ## Project Overview

- A wrapper around the GitHub Copilot API to make it OpenAI compatible, making it usable for other tools like AI assistants, local interfaces, and development utilities.
+ A reverse-engineered proxy for the GitHub Copilot API that exposes it as an OpenAI- and Anthropic-compatible service. This lets you use GitHub Copilot with any tool that supports the OpenAI Chat Completions API or the Anthropic Messages API, including [Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview).

  ## Demo
@@ -16,7 +16,7 @@ https://github.com/user-attachments/assets/7654b383-669d-4eb9-b23c-06d7aefee8c5

  ## Prerequisites

  - Bun (>= 1.2.x)
- - GitHub account with Copilot subscription (Individual or Business)
+ - GitHub account with Copilot subscription (individual, business, or enterprise)

  ## Installation
@@ -64,7 +64,7 @@ npx copilot-api@latest auth

  Copilot API now uses a subcommand structure with two main commands:

- - `start`: Start the Copilot API server (default command). This command will also handle authentication if needed.
+ - `start`: Start the Copilot API server. This command will also handle authentication if needed.
  - `auth`: Run GitHub authentication flow without starting the server. This is typically used if you need to generate a token for use with the `--github-token` option, especially in non-interactive environments.

  ## Command Line Options
@@ -73,22 +73,46 @@ Copilot API now uses a subcommand structure with two main commands:

  The following command line options are available for the `start` command:

  | Option         | Description                                                                    | Default    | Alias |
  | -------------- | ------------------------------------------------------------------------------ | ---------- | ----- |
  | --port         | Port to listen on                                                              | 4141       | -p    |
  | --verbose      | Enable verbose logging                                                         | false      | -v    |
  | --account-type | Account type to use (individual, business, enterprise)                        | individual | -a    |
  | --manual       | Enable manual request approval                                                 | false      | none  |
  | --rate-limit   | Rate limit in seconds between requests                                         | none       | -r    |
  | --wait         | Wait instead of error when rate limit is hit                                   | false      | -w    |
  | --github-token | Provide GitHub token directly (must be generated using the `auth` subcommand)  | none       | -g    |
+ | --claude-code  | Generate a command to launch Claude Code with Copilot API config               | false      | -c    |

  ### Auth Command Options

  | Option    | Description            | Default | Alias |
  | --------- | ---------------------- | ------- | ----- |
  | --verbose | Enable verbose logging | false   | -v    |

+ ## API Endpoints
+
+ The server exposes OpenAI-compatible endpoints and now also Anthropic-compatible endpoints, allowing greater flexibility with different tools and services.
+
+ ### OpenAI Compatible Endpoints
+
+ These endpoints mimic the OpenAI API structure.
+
+ | Endpoint               | Method | Description                                               |
+ | ---------------------- | ------ | --------------------------------------------------------- |
+ | `/v1/chat/completions` | `POST` | Creates a model response for the given chat conversation. |
+ | `/v1/models`           | `GET`  | Lists the currently available models.                     |
+ | `/v1/embeddings`       | `POST` | Creates an embedding vector representing the input text.  |
+
+ ### Anthropic Compatible Endpoints
+
+ These endpoints are designed to be compatible with the Anthropic Messages API.
+
+ | Endpoint                    | Method | Description                                                   |
+ | --------------------------- | ------ | ------------------------------------------------------------- |
+ | `/v1/messages`              | `POST` | Creates a model response for a given conversation.            |
+ | `/v1/messages/count_tokens` | `POST` | Calculates the number of tokens for a given set of messages.  |
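As a sketch of the two request shapes these endpoint families expect (field names follow the public OpenAI Chat Completions and Anthropic Messages APIs; the exact fields this proxy accepts may differ):

```python
import json

# OpenAI-style body for POST /v1/chat/completions
openai_body = {
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Anthropic-style body for POST /v1/messages; the Anthropic Messages API
# additionally requires max_tokens
anthropic_body = {
    "model": "gpt-4.1",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello"}],
}

print(json.dumps(openai_body))
print(json.dumps(anthropic_body))
```

Either body is sent as JSON to the running server (default `http://localhost:4141`).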
  ## Example Usage

  Using with npx:
@@ -125,6 +149,45 @@ npx copilot-api@latest auth
  npx copilot-api@latest auth --verbose
  ```

+ ## Using with Claude Code
+
+ This proxy can be used to power [Claude Code](https://docs.anthropic.com/en/claude-code), Anthropic's experimental conversational AI assistant for developers.
+
+ There are two ways to configure Claude Code to use this proxy:
+
+ ### Interactive Setup with the `--claude-code` flag
+
+ To get started, run the `start` command with the `--claude-code` flag:
+
+ ```sh
+ npx copilot-api@latest start --claude-code
+ ```
+
+ You will be prompted to select a primary model and a "small, fast" model for background tasks. After selecting the models, a command will be copied to your clipboard. This command sets the necessary environment variables for Claude Code to use the proxy.
+
+ Paste and run this command in a new terminal to launch Claude Code.
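For illustration, the generated command amounts to launching `claude` with the proxy's environment variables set. A hypothetical Python sketch of the string it builds (the real implementation lives in the project's TypeScript sources, and its exact output may differ):

```python
# Hypothetical sketch: build a launch command like the one --claude-code
# copies to the clipboard. Model names and port are the defaults shown
# elsewhere in this README.
def build_claude_command(model: str, small_fast_model: str, port: int = 4141) -> str:
    env = {
        "ANTHROPIC_BASE_URL": f"http://localhost:{port}",
        "ANTHROPIC_AUTH_TOKEN": "dummy",
        "ANTHROPIC_MODEL": model,
        "ANTHROPIC_SMALL_FAST_MODEL": small_fast_model,
    }
    exports = " ".join(f'{key}="{value}"' for key, value in env.items())
    return f"{exports} claude"

print(build_claude_command("gpt-4.1", "gpt-4.1"))
```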
+ ### Manual Configuration with `settings.json`
+
+ Alternatively, you can configure Claude Code by creating a `.claude/settings.json` file in your project's root directory. This file should contain the environment variables Claude Code needs, so you don't have to run the interactive setup every time.
+
+ Here is an example `.claude/settings.json` file:
+
+ ```json
+ {
+   "env": {
+     "ANTHROPIC_BASE_URL": "http://localhost:4141",
+     "ANTHROPIC_AUTH_TOKEN": "dummy",
+     "ANTHROPIC_MODEL": "gpt-4.1",
+     "ANTHROPIC_SMALL_FAST_MODEL": "gpt-4.1"
+   }
+ }
+ ```
+
+ You can find more options here: [Claude Code settings](https://docs.anthropic.com/en/docs/claude-code/settings#environment-variables)
+
+ You can also read more about IDE integration here: [Add Claude Code to your IDE](https://docs.anthropic.com/en/docs/claude-code/ide-integrations)
## Running from Source
  The project can be run from source in several ways:

@@ -143,18 +206,8 @@ bun run start

  ## Usage Tips

- - Consider using free models (e.g., Gemini, Mistral, Openrouter) as the `weak-model`
- - Use architect mode sparingly
- - Disable `yes-always` in your aider configuration
- - Enable the `--manual` flag to review and approve each request before processing
+ - To avoid hitting GitHub Copilot's rate limits, you can use the following flags:
+   - `--manual`: Enables manual approval for each request, giving you full control over when requests are sent.
+   - `--rate-limit <seconds>`: Enforces a minimum time interval between requests. For example, `copilot-api start --rate-limit 30` will ensure there's at least a 30-second gap between requests.
+   - `--wait`: Use this with `--rate-limit`. It makes the server wait for the cooldown period to end instead of rejecting the request with an error. This is useful for clients that don't automatically retry on rate limit errors.
  - If you have a GitHub business or enterprise plan account with Copilot, use the `--account-type` flag (e.g., `--account-type business`). See the [official documentation](https://docs.github.com/en/enterprise-cloud@latest/copilot/managing-copilot/managing-github-copilot-in-your-organization/managing-access-to-github-copilot-in-your-organization/managing-github-copilot-access-to-your-organizations-network#configuring-copilot-subscription-based-network-routing-for-your-enterprise-or-organization) for more details.
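The `--wait` flag matters because without it a rate-limited request is rejected with an error, and a client that doesn't retry on its own would need a loop like the following. A minimal sketch (hypothetical helper, not part of this project):

```python
import time

class RateLimitError(Exception):
    """Stand-in for the error a client sees on a rate-limited request."""

def send_with_retry(send, max_attempts=3, cooldown=30):
    """Call `send`, retrying after `cooldown` seconds when it raises
    RateLimitError; this is the loop a client needs when the proxy runs
    with --rate-limit but without --wait."""
    for attempt in range(max_attempts):
        try:
            return send()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(cooldown)
```

With `--wait`, the proxy instead holds the request server-side until the cooldown elapses, so clients never see the error at all.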
-
- ### Manual Request Approval
-
- When using the `--manual` flag, the server will prompt you to approve each incoming request:
-
- ```
- ? Accept incoming request? > (y/N)
- ```
-
- This helps you control usage and monitor requests in real-time.
0 commit comments
