
Commit 34e1aec

docs: add README with usage instructions and example config

1 parent c3ab8b0 · 1 file changed: README.md (88 additions, 0 deletions)

# ProxyAsLocalModel

Proxy a remote LLM API as a local model. Especially useful for using a custom LLM in the JetBrains AI Assistant.

Powered by Ktor and kotlinx.serialization, thanks to their reflection-free design.

## Currently supported

Proxy from: OpenAI, DashScope (Alibaba Qwen), Gemini, DeepSeek, Mistral, SiliconFlow.

Proxy as: LM Studio, Ollama.

Only the streaming chat completion API is supported.
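
As an illustration, once the server is running, a streaming request against the LM Studio-style endpoint could look like the sketch below. It assumes the proxy mirrors LM Studio's OpenAI-compatible `/v1/chat/completions` route on the default port 1234, and the model name must be one exposed by your config (see the example config below):

```shell
# Hypothetical smoke test: stream a chat completion through the
# LM Studio-style endpoint (OpenAI-compatible wire format assumed).
# -N disables curl's output buffering so streamed chunks print as they arrive.
curl -N http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": true
      }'
```
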
## How to use

This application is a proxy server, distributed as a fat runnable jar and a GraalVM native image (Windows x64).
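
For example, launching either distribution from a terminal might look like this (the file names below are illustrative, not the actual release artifact names):

```shell
# Fat runnable jar (any platform with a JVM); file name is hypothetical
java -jar ProxyAsLocalModel-all.jar

# GraalVM native image (Windows x64, from PowerShell); file name is hypothetical
.\ProxyAsLocalModel.exe
```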

On first run, you will see a help message:

```
2025-05-02 10:43:53 INFO Help - It looks that you are starting the program for the first time here.
2025-05-02 10:43:53 INFO Help - A default config file is created at E:\ACodeSpace\local\OpenAI2LmStudioProxy\OpenAI2LmStudioProxy\build\native\nativeCompile\config.yml with schema annotation.
2025-05-02 10:43:53 INFO Config - Config file watcher started at E:\ACodeSpace\local\OpenAI2LmStudioProxy\OpenAI2LmStudioProxy\build\native\nativeCompile\config.yml
2025-05-02 10:43:53 INFO LM Studio Server - LM Studio Server started at 1234
2025-05-02 10:43:53 INFO Ollama Server - Ollama Server started at 11434
2025-05-02 10:43:53 INFO Model List - Model list loaded with: []
```

Then you can edit the config file to set up your proxy server.

## Config file

The config file is hot-reloaded automatically when you change it; only the affected parts of the server are updated.

When first generated, the config file includes a schema annotation, which enables completion and validation in your editor.

## Example config file

```yaml
# $schema: https://github.com/Stream29/ProxyAsLocalModel/raw/master/config_v0.schema.json
lmStudio:
  port: 1234 # This is the default value
  enabled: true # This is the default value
ollama:
  port: 11434 # This is the default value
  enabled: true # This is the default value

apiProviders:
  OpenAI:
    type: OpenAi
    baseUrl: https://api.openai.com/v1
    apiKey: <your_api_key>
    modelList:
      - gpt-4o
  Qwen:
    type: DashScope
    apiKey: <your_api_key>
    modelList: # This is the default value
      - qwen-max
      - qwen-plus
      - qwen-turbo
      - qwen-long
  DeepSeek:
    type: DeepSeek
    apiKey: <your_api_key>
    modelList: # This is the default value
      - deepseek-chat
      - deepseek-reasoner
  Mistral:
    type: Mistral
    apiKey: <your_api_key>
    modelList: # This is the default value
      - codestral-latest
      - mistral-large
  SiliconFlow:
    type: SiliconFlow
    apiKey: <your_api_key>
    modelList:
      - Qwen/Qwen3-235B-A22B
      - Pro/deepseek-ai/DeepSeek-V3
      - THUDM/GLM-4-32B-0414
  Gemini:
    type: Gemini
    apiKey: <your_api_key>
    modelList:
      - gemini-2.5-flash-preview-04-17
```
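
After editing the config, one quick way to check that the proxy is serving your models is to query the two model-listing routes. This sketch assumes the proxy mirrors LM Studio's OpenAI-compatible `/v1/models` endpoint and Ollama's `/api/tags` endpoint on the default ports configured above:

```shell
# LM Studio-style (OpenAI-compatible) model list (assumed route)
curl http://localhost:1234/v1/models

# Ollama-style model list (assumed route)
curl http://localhost:11434/api/tags
```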
