Commit b4f8f62

docs: update README with 9 tools table and conversation workflow example
1 parent 6830b59 commit b4f8f62

1 file changed

Lines changed: 33 additions & 3 deletions

File tree

README.md

@@ -14,6 +14,7 @@ LMStudio-MCP creates a bridge between Claude (with MCP capabilities) and your lo
 - Generate chat and raw text completions using your local models
 - Generate vector embeddings for semantic search and RAG
 - Hold stateful multi-turn conversations via response IDs
+- Start and continue persistent conversations with a locked-in system prompt
 
 This enables you to leverage your own locally running models through Claude's interface, combining Claude's capabilities with your private models.
 
@@ -134,7 +135,7 @@ For complete MCP configuration instructions, see [MCP_CONFIGURATION.md](MCP_CONF
 
 ## Available Tools
 
-The bridge provides the following 7 tools:
+The bridge provides the following 9 tools:
 
 | Tool | Description |
 |------|-------------|
@@ -144,7 +145,36 @@ The bridge provides the following 7 tools:
 | `chat_completion(prompt, system_prompt, temperature, max_tokens)` | Generate a chat response from your local model |
 | `text_completion(prompt, temperature, max_tokens, stop_sequences)` | Generate raw text/code completion — faster, no chat formatting overhead |
 | `generate_embeddings(text, model)` | Generate vector embeddings for semantic search and RAG workflows |
-| `create_response(input_text, previous_response_id, reasoning_effort, stream, model)` | Stateful multi-turn conversation via response IDs — requires LM Studio v0.3.29+ |
+| `create_response(input_text, previous_response_id, reasoning_effort, stream, model)` | Stateful conversation via response IDs — requires LM Studio v0.3.29+ |
+| `start_conversation(system_prompt, first_message, temperature, max_tokens, model)` | Start a multi-turn session with a persistent system prompt — returns a `response_id` |
+| `continue_conversation(response_id, message, temperature, max_tokens, model)` | Continue a session started with `start_conversation` — context preserved automatically |
+
+### Multi-turn conversation workflow
+
+The recommended way to run a persistent conversation with a local model:
+
+```
+1. start_conversation(
+     system_prompt="You are a friend at a bar, keep it casual and fun.",
+     first_message="Hey! How's it going?"
+   )
+   → { response_id: "resp_abc...", message: "Hey! Not bad, just unwinding..." }
+
+2. continue_conversation(
+     response_id="resp_abc...",
+     message="Work's been insane this week."
+   )
+   → { response_id: "resp_def...", message: "Ugh, tell me about it..." }
+
+3. continue_conversation(
+     response_id="resp_def...",
+     message="If you could go anywhere tomorrow, where would you go?"
+   )
+   → { response_id: "resp_ghi...", message: "Honestly? Northern Portugal..." }
+```
+
+> The system prompt is locked in for the entire session — no need to re-send it on every turn.
+> Requires LM Studio v0.3.29+.
 
 ## Deployment Options
 
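The workflow added above chains turns by response ID. As a minimal sketch of what that chaining implies for the request bodies, assuming the bridge targets an OpenAI-style Responses API (the field names `instructions`, `input`, `previous_response_id`, and `max_output_tokens` are assumptions modeled on that API, not verified against this bridge's source):

```python
# Hypothetical request-body builders for the two conversation tools.
# Field names follow the OpenAI Responses API shape and are assumptions,
# not taken from the LMStudio-MCP source.

def start_conversation_payload(system_prompt, first_message,
                               temperature=0.7, max_tokens=1024,
                               model="local-model"):
    """First turn: the system prompt is sent once, as instructions."""
    return {
        "model": model,
        "instructions": system_prompt,
        "input": first_message,
        "temperature": temperature,
        "max_output_tokens": max_tokens,
    }

def continue_conversation_payload(response_id, message,
                                  temperature=0.7, max_tokens=1024,
                                  model="local-model"):
    """Later turns: prior context is referenced by previous_response_id,
    so neither the system prompt nor the history is re-sent."""
    return {
        "model": model,
        "previous_response_id": response_id,
        "input": message,
        "temperature": temperature,
        "max_output_tokens": max_tokens,
    }

first = start_conversation_payload(
    "You are a friend at a bar, keep it casual and fun.",
    "Hey! How's it going?",
)
follow_up = continue_conversation_payload(
    "resp_abc...", "Work's been insane this week.",
)
```

Each response would carry a fresh `response_id`; feeding the latest ID into the next `continue_conversation` call is what keeps the context chain intact.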
@@ -163,7 +193,7 @@ This project supports multiple deployment methods:
 - Some models (e.g., phi-3.5-mini-instruct_uncensored) may have compatibility issues
 - The bridge currently uses only the OpenAI-compatible API endpoints of LM Studio
 - Model responses will be limited by the capabilities of your locally loaded model
-- `create_response` requires LM Studio v0.3.29 or later
+- `create_response`, `start_conversation`, and `continue_conversation` require LM Studio v0.3.29+
 - `generate_embeddings` requires an embedding-specific model (e.g. `text-embedding-nomic-embed-text-v1.5`)
 
 ## Troubleshooting
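The `generate_embeddings` tool mentioned in the diff returns plain float vectors, and RAG code typically ranks documents against a query by cosine similarity. A self-contained sketch of that downstream step, with toy vectors standing in for real embedding output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for generate_embeddings() output.
query = [0.1, 0.9, 0.2]
docs = {
    "doc_a": [0.1, 0.8, 0.3],   # points roughly the same way as the query
    "doc_b": [0.9, 0.1, 0.05],  # points a different way
}

# Rank documents by similarity to the query and pick the best match.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```

Real embedding vectors from a model like `text-embedding-nomic-embed-text-v1.5` are hundreds of dimensions long, but the ranking step is the same.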
