- Generate chat and raw text completions using your local models
- Generate vector embeddings for semantic search and RAG
- Hold stateful multi-turn conversations via response IDs
- Start and continue persistent conversations with a locked-in system prompt

This enables you to leverage your own locally running models through Claude's interface, combining Claude's capabilities with your private models.
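LM Studio serves local models through an OpenAI-compatible HTTP API, which is what the bridge talks to under the hood. As a rough illustration (not the bridge's actual code; the helper name and default values here are made up for the example), a chat request body looks like this:

```python
import json

def build_chat_request(prompt, system_prompt=None, temperature=0.7, max_tokens=1024):
    """Build an OpenAI-compatible chat payload of the kind sent to a
    local LM Studio server (illustrative sketch only)."""
    messages = []
    if system_prompt:
        # The system prompt rides along as the first message.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return {
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Hello!", system_prompt="Be brief.")
print(json.dumps(payload, indent=2))
```

The actual endpoint, model field, and defaults depend on your LM Studio configuration.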
## Available Tools

The bridge provides the following 9 tools:
| Tool | Description |
|------|-------------|
| `chat_completion(prompt, system_prompt, temperature, max_tokens)` | Generate a chat response from your local model |
| `text_completion(prompt, temperature, max_tokens, stop_sequences)` | Generate raw text/code completion — faster, no chat formatting overhead |
| `generate_embeddings(text, model)` | Generate vector embeddings for semantic search and RAG workflows |
| `create_response(input_text, previous_response_id, reasoning_effort, stream, model)` | Stateful conversation via response IDs — requires LM Studio v0.3.29+ |
| `start_conversation(system_prompt, first_message, temperature, max_tokens, model)` | Start a multi-turn session with a persistent system prompt — returns a `response_id` |
| `continue_conversation(response_id, message, temperature, max_tokens, model)` | Continue a session started with `start_conversation` — context preserved automatically |
### Multi-turn conversation workflow

The recommended way to run a persistent conversation with a local model:
```
1. start_conversation(
     system_prompt="You are a friend at a bar, keep it casual and fun.",
     first_message="Hey! How's it going?"
   )
   → { response_id: "resp_abc...", message: "Hey! Not bad, just unwinding..." }

2. continue_conversation(
     response_id="resp_abc...",
     message="Work's been insane this week."
   )
   → { response_id: "resp_def...", message: "Ugh, tell me about it..." }

3. continue_conversation(
     response_id="resp_def...",
     message="If you could go anywhere tomorrow, where would you go?"
   )
```