For complete MCP configuration instructions, see [MCP_CONFIGURATION.md](MCP_CONFIGURATION.md).

### Optional: MCP `description` hint

You can add a `description` field to your `.mcp.json` entry to help Claude understand when to use this server and what to expect. This is particularly useful for reminding Claude of version requirements:

```json
{
  "lmstudio-mcp": {
    "command": "...",
    "args": [...],
    "description": "Local LLM bridge via LM Studio. Use for private/offline inference, embeddings, and multi-turn conversations. start_conversation and continue_conversation require LM Studio v0.3.29+."
  }
}
```
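Since the entry is plain JSON, a few lines of Python can sanity-check it before Claude ever reads it. This is only a sketch: the `command` and `args` values below are hypothetical placeholders standing in for the elided ones above.

```python
import json

# The entry from above as a string. "command" and "args" are hypothetical
# placeholders here; substitute whatever your real .mcp.json uses.
example = """
{
  "lmstudio-mcp": {
    "command": "python",
    "args": ["lmstudio_bridge.py"],
    "description": "Local LLM bridge via LM Studio. Use for private/offline inference, embeddings, and multi-turn conversations. start_conversation and continue_conversation require LM Studio v0.3.29+."
  }
}
"""

config = json.loads(example)  # raises ValueError if the JSON is malformed
print(config["lmstudio-mcp"]["description"])
```

If `json.loads` succeeds, the file is at least syntactically valid; a malformed entry is the most common reason an MCP server silently fails to register.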

## 🧠 LM Studio System Prompt (Recommended)

Setting a system prompt directly in LM Studio gives your local model a consistent baseline personality and behaviour across all interactions, without needing to pass it on every API call.

### How to set it

1. Open **LM Studio**
2. Click the model name at the top of the chat panel
3. Find the **System Prompt** field (may be under a ⚙️ gear icon or **Advanced settings**)
4. Paste your system prompt and save

> The system prompt set here applies to all completions sent via the API, including those from this MCP bridge.
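Because the system prompt is applied on the LM Studio side, a request from this bridge (or any client) does not need to carry its own `system` message. Here is a minimal sketch of such a request, assuming LM Studio's OpenAI-compatible server on the default port; the `"local-model"` name is a placeholder, since LM Studio serves whatever model is currently loaded.

```python
import json

# LM Studio exposes an OpenAI-compatible API on port 1234 by default.
URL = "http://localhost:1234/v1/chat/completions"

def build_request(user_message: str) -> dict:
    """Build a minimal chat-completions payload for the local server.

    Note there is no "system" message: the prompt configured inside
    LM Studio is applied server-side to every completion.
    """
    return {
        "model": "local-model",  # placeholder; LM Studio uses the loaded model
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }

payload = build_request("Summarise this repo in one sentence.")
print(json.dumps(payload, indent=2))

# To actually send it (requires LM Studio running with a model loaded):
#   import urllib.request
#   req = urllib.request.Request(URL, data=json.dumps(payload).encode(),
#                                headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```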

### Example system prompts

**General assistant — clean and direct:**

```
You are a helpful, concise assistant. Answer directly without preamble like
"Sure!" or "Of course!". Never cut off mid-sentence — always finish your thought.
```

**Casual conversation partner:**

```
You are a regular person having a relaxed conversation with a friend.
Keep responses short and natural, like real chat. No bullet points or formal
language. You can invent fun details about your life and stay consistent with them.
Never cut off mid-sentence — always finish your thought.
```

**Local coding assistant:**

```
You are an expert software engineer. Be concise and precise. When writing code,
always include brief inline comments. Prefer simple, readable solutions over
clever ones. Never cut off mid-sentence or mid-code block.
```

**Privacy-first document analyst:**

```
You are a careful document analyst. Summarise accurately and concisely.
Never invent information not present in the source material.
Always flag uncertainty explicitly.
```

> 💡 **Tip:** Ending your system prompt with "Never cut off mid-sentence — always finish your thought." discourages the model from stopping early on its own. It cannot override a hard `max_tokens` limit, though: if a response hits that cap, it is still truncated mid-stream, so configure `max_tokens` generously as well.

## Usage

1. **Start LM Studio** and ensure it's running on port 1234 (the default)
2. **Set a system prompt** in LM Studio (see above; recommended)
3. **Load a model** in LM Studio
4. **Configure Claude MCP** with one of the configurations above
5. **Connect to the MCP server** in Claude when prompted
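Before step 5, you can confirm that steps 1–3 worked with a quick reachability check. This is a sketch assuming the default port; `/v1/models` is part of the OpenAI-compatible API that LM Studio's server exposes.

```python
import urllib.request

def server_is_up(base_url: str = "http://localhost:1234") -> bool:
    """Return True if LM Studio's OpenAI-compatible server answers on /v1/models."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=2) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, etc.
        return False

if server_is_up():
    print("LM Studio is reachable; the MCP bridge should connect.")
else:
    print("LM Studio is not reachable; start its server and load a model first.")
```

If the check fails, make sure LM Studio's local server is actually started, not just the app opened.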