Describe the bug
When an MCP server sends a sampling/createMessage request, the Copilot CLI correctly routes it to the LLM and returns a response. However, the response content prepends the entire system prompt text before the LLM's actual answer.
Affected version
GitHub Copilot CLI 1.0.16-0
Steps to reproduce the behavior
- Configure an MCP server that uses sampling/createMessage with a system prompt
- Have the server send a sampling request where it expects a specific response format
- Approve the sampling request when prompted
- Observe the response content includes the system prompt prepended to the LLM's answer
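The steps above can be sketched with the raw JSON-RPC messages involved. This is a minimal illustration based on the MCP specification's sampling/createMessage shape, not the exact payload from the affected server; the prompt text, request id, and token limit are hypothetical:

```python
# Hypothetical sampling/createMessage request an MCP server might send
# (message shape per the MCP specification; values are made up).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": "Is 2 + 2 = 5?"}}
        ],
        "systemPrompt": "Answer with exactly TRUE or FALSE.",
        "maxTokens": 100,
    },
}

# Expected: the result content carries only the model's answer.
expected_text = "FALSE"

# Observed in Copilot CLI 1.0.16-0: the system prompt is echoed back
# ahead of the answer, breaking servers that parse a strict format.
observed_text = request["params"]["systemPrompt"] + "... === FALSE"

assert observed_text != expected_text
assert observed_text.endswith(expected_text)
```

A server expecting the bare `FALSE` token will fail to parse the observed response, which is how the bug surfaces in practice.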
Expected behavior
The CreateMessageResult.Content should contain only the LLM's response text.
Additional context
Actual behavior
The response content contains the system prompt echoed back, followed by the LLM's answer:
<entire system prompt text>... === FALSE
Note
This works correctly in VS Code's MCP sampling implementation — the system prompt is not included in the response content.