
Commit 39c8488

MaanavD and Copilot committed
Align Responses sample/tests with JS PR #671 patterns; all tests pass locally

Cross-references the JS Responses sample/tests (PR #671) to keep the C# pattern consistent.

Sample (samples/cs/responses-foundry-local-web-server):
- Added README.md mirroring the JS sample (prereqs, run, expected output, troubleshooting)
- The tool now uses an empty-params schema (matching the JS PR), which the small qwen2.5-0.5b model calls reliably
- A single ResponseTool is reused on the follow-up call, with deterministic options (Temperature=0, MaxOutputTokenCount=64)
- Cleanup is wrapped in try/finally so StopWebService/Unload run even on exceptions

Integration tests (sdk/cs/test/FoundryLocal.Tests/ResponsesIntegrationTests.cs):
- Mirror the JS suite responsesWebService.test.ts (NonStreaming, Streaming, FunctionCalling)
- Skip when Utils.IsRunningInCI() is true and when qwen2.5-0.5b is not pre-cached
- Streaming asserts response.created, response.output_text.delta, and response.completed events (parity with JS)
- The tool-calling test reuses the same get_weather empty-params definition
- Streaming options include StreamingEnabled = true so the official ResponsesClient allows the call

Pre-existing fix (test infra only):
- Utils.GetRepoRoot() previously failed in git worktrees because .git is a file there, not a directory; it now accepts either form. This unblocked test execution in worktree checkouts.

Validation:
- dotnet build samples/cs/responses-foundry-local-web-server -c Release: 0 warnings, 0 errors
- dotnet build sdk/cs/test/FoundryLocal.Tests -c Release: 0 errors
- dotnet test --filter ResponsesIntegration: all 3 Responses tests pass end-to-end against a real local model
- The 10 remaining failures across the project are pre-existing EmbeddingClientTests infrastructure issues (a different model is not cached), unrelated to this PR

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
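The `Utils.GetRepoRoot()` worktree fix is described above but its diff is not shown on this page. A minimal sketch of the described behavior (a hypothetical `FindRepoRoot` helper, not the actual `Utils` code) walks upward and accepts `.git` as either a directory (normal clone) or a plain file (git worktree):

```csharp
using System;
using System.IO;

// Walk upward from a starting directory until a ".git" marker is found.
// In a normal clone ".git" is a directory; in a git worktree it is a plain
// file pointing back at the main repository, so both forms must count.
string? FindRepoRoot(string start)
{
    for (var dir = new DirectoryInfo(start); dir is not null; dir = dir.Parent)
    {
        string marker = Path.Combine(dir.FullName, ".git");
        if (Directory.Exists(marker) || File.Exists(marker))
        {
            return dir.FullName;
        }
    }
    return null; // no .git anywhere above `start`
}

Console.WriteLine(FindRepoRoot(Directory.GetCurrentDirectory()) ?? "(not inside a git checkout)");
```

Treating both forms as the root marker is what unblocks worktree checkouts, where `.git` is a one-line file containing a `gitdir:` pointer.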
1 parent 4731157 commit 39c8488

4 files changed

Lines changed: 221 additions & 150 deletions

File tree

samples/cs/responses-foundry-local-web-server/Program.cs

Lines changed: 88 additions & 94 deletions
```diff
@@ -8,12 +8,10 @@
 // - starting/stopping the local web service
 //
 // Responses API calls go through the official OpenAI .NET package's `ResponsesClient`
-// pointed at the local web service, mirroring how `foundry-local-web-server` uses
-// `OpenAIClient.GetChatClient(...)`.
+// pointed at the local web service, mirroring how `samples/cs/foundry-local-web-server`
+// uses `OpenAIClient.GetChatClient(...)` for chat completions.
 
 using System.ClientModel;
-using System.Text;
-using System.Text.Json;
 
 using Microsoft.AI.Foundry.Local;
 
@@ -73,108 +71,104 @@ await model.DownloadAsync(progress =>
 await mgr.StartWebServiceAsync();
 Console.WriteLine("done.");
 
-// <<<<<< OPEN AI RESPONSES SDK USAGE >>>>>>
-// Use the OpenAI Responses client to call the local Foundry web service.
-ApiKeyCredential key = new ApiKeyCredential("notneeded");
-OpenAIClient openai = new OpenAIClient(key, new OpenAIClientOptions
+try
 {
-    Endpoint = new Uri(config.Web.Urls + "/v1"),
-});
-ResponsesClient responses = openai.GetResponsesClient();
-
-// 1) Non-streaming
-Console.WriteLine("\n=== Non-streaming ===");
-ResponseResult simple = await responses.CreateResponseAsync(model.Id, "What is 2 + 2? Respond with just the number.");
-Console.WriteLine($"[ASSISTANT]: {simple.GetOutputText()}");
-
-// 2) Streaming
-Console.WriteLine("\n=== Streaming ===");
-Console.Write("[ASSISTANT]: ");
-await foreach (StreamingResponseUpdate update in responses.CreateResponseStreamingAsync(model.Id, "Count from 1 to 3."))
-{
-    if (update is StreamingResponseOutputTextDeltaUpdate delta && !string.IsNullOrEmpty(delta.Delta))
+    // <<<<<< OPEN AI RESPONSES SDK USAGE >>>>>>
+    // Use the OpenAI Responses client to call the local Foundry web service.
+    ApiKeyCredential key = new("notneeded");
+    OpenAIClient openai = new(key, new OpenAIClientOptions
     {
-        Console.Write(delta.Delta);
-    }
-}
-Console.WriteLine();
-
-// 3) Function/tool calling — full round-trip using previous_response_id.
-Console.WriteLine("\n=== Function calling ===");
-var weatherSchema = BinaryData.FromString("""
+        Endpoint = new Uri(config.Web.Urls + "/v1"),
+    });
+    ResponsesClient responses = openai.GetResponsesClient();
+
+    // 1) Non-streaming
+    Console.WriteLine("\n=== Non-streaming ===");
+    ResponseResult simple = await responses.CreateResponseAsync(model.Id, "Reply with one short sentence about local AI.");
+    Console.WriteLine($"[ASSISTANT]: {simple.GetOutputText()}");
+
+    // 2) Streaming
+    Console.WriteLine("\n=== Streaming ===");
+    Console.Write("[ASSISTANT]: ");
+    await foreach (StreamingResponseUpdate update in responses.CreateResponseStreamingAsync(model.Id, "Count from 1 to 3."))
     {
-        "type": "object",
-        "properties": {
-            "city": { "type": "string", "description": "The city to look up" }
-        },
-        "required": ["city"]
+        if (update is StreamingResponseOutputTextDeltaUpdate delta && !string.IsNullOrEmpty(delta.Delta))
+        {
+            Console.Write(delta.Delta);
+        }
     }
-""");
+    Console.WriteLine();
+
+    // 3) Function/tool calling — full round-trip via previous_response_id.
+    // The function takes no arguments, which matches the pattern small models handle reliably.
+    Console.WriteLine("\n=== Function calling ===");
+    var emptyParamsSchema = BinaryData.FromString("""
+        {
+            "type": "object",
+            "properties": {},
+            "additionalProperties": false
+        }
+        """);
+
+    ResponseTool getWeatherTool = ResponseTool.CreateFunctionTool(
+        functionName: "get_weather",
+        functionParameters: emptyParamsSchema,
+        strictModeEnabled: true,
+        functionDescription: "Get the current weather. This sample always returns Seattle weather.");
 
-var toolOptions = new CreateResponseOptions(
-    model.Id,
-    new[] { ResponseItem.CreateUserMessageItem("Use get_weather to look up the weather in Seattle, then summarize it.") })
-{
-    StoredOutputEnabled = true,
-    ToolChoice = ResponseToolChoice.CreateRequiredChoice(),
-};
-toolOptions.Tools.Add(ResponseTool.CreateFunctionTool(
-    functionName: "get_weather",
-    functionParameters: weatherSchema,
-    strictModeEnabled: true,
-    functionDescription: "Get the current weather for a given city."));
+    var toolCallOptions = new CreateResponseOptions(
+        model.Id,
+        new[] { ResponseItem.CreateUserMessageItem("Use the get_weather tool and then answer with the weather.") })
+    {
+        StoredOutputEnabled = true,
+        ToolChoice = ResponseToolChoice.CreateRequiredChoice(),
+        MaxOutputTokenCount = 64,
+        Temperature = 0.0f,
+    };
+    toolCallOptions.Tools.Add(getWeatherTool);
 
-ResponseResult toolCallResponse = await responses.CreateResponseAsync(toolOptions);
+    ResponseResult toolResponse = await responses.CreateResponseAsync(toolCallOptions);
 
-// Find the function-call output item the model produced.
-FunctionCallResponseItem? functionCall = null;
-foreach (var item in toolCallResponse.OutputItems)
-{
-    if (item is FunctionCallResponseItem fc && fc.FunctionName == "get_weather")
+    FunctionCallResponseItem? functionCall = null;
+    foreach (var item in toolResponse.OutputItems)
     {
-        functionCall = fc;
-        break;
+        if (item is FunctionCallResponseItem fc && fc.FunctionName == "get_weather")
+        {
+            functionCall = fc;
+            break;
+        }
     }
-}
 
-if (functionCall is null)
-{
-    Console.WriteLine("Model did not produce a function call; skipping tool round-trip.");
-}
-else
-{
-    var argsJson = functionCall.FunctionArguments?.ToString() ?? "{}";
-    var city = "unknown";
-    try
+    if (functionCall is null)
     {
-        city = JsonDocument.Parse(argsJson).RootElement.GetProperty("city").GetString() ?? "unknown";
+        Console.WriteLine("Model did not produce a function call; skipping tool round-trip.");
    }
-    catch (KeyNotFoundException) { /* model gave us no city */ }
-
-    Console.WriteLine($"Tool call: get_weather(city=\"{city}\")");
-    var toolOutput = $$$"""{"city": "{{{city}}}", "temperatureF": 68, "summary": "partly cloudy"}""";
-    Console.WriteLine($"Tool output: {toolOutput}");
-
-    // Submit the tool's output and ask the model to continue using `previous_response_id`.
-    var followUpOptions = new CreateResponseOptions(
-        model.Id,
-        new[] { ResponseItem.CreateFunctionCallOutputItem(functionCall.CallId, toolOutput) })
+    else
    {
-        PreviousResponseId = toolCallResponse.Id,
-        StoredOutputEnabled = true,
-    };
-    followUpOptions.Tools.Add(ResponseTool.CreateFunctionTool(
-        functionName: "get_weather",
-        functionParameters: weatherSchema,
-        strictModeEnabled: true,
-        functionDescription: "Get the current weather for a given city."));
-
-    ResponseResult finalResponse = await responses.CreateResponseAsync(followUpOptions);
-    Console.WriteLine($"[ASSISTANT]: {finalResponse.GetOutputText()}");
+        Console.WriteLine($"[TOOL CALL]: {functionCall.FunctionName}({functionCall.FunctionArguments})");
+
+        const string toolOutput = """{"location": "Seattle", "weather": "72 degrees F and sunny"}""";
+
+        var followUpOptions = new CreateResponseOptions(
+            model.Id,
+            new[] { ResponseItem.CreateFunctionCallOutputItem(functionCall.CallId, toolOutput) })
+        {
+            PreviousResponseId = toolResponse.Id,
+            StoredOutputEnabled = true,
+            MaxOutputTokenCount = 64,
+            Temperature = 0.0f,
+        };
+        followUpOptions.Tools.Add(getWeatherTool);
+
+        ResponseResult finalResponse = await responses.CreateResponseAsync(followUpOptions);
+        Console.WriteLine($"[ASSISTANT FINAL]: {finalResponse.GetOutputText()}");
+    }
+    // <<<<<< END OPEN AI RESPONSES SDK USAGE >>>>>>
+}
+finally
+{
+    // Tidy up
+    await mgr.StopWebServiceAsync();
+    await model.UnloadAsync();
 }
-// <<<<<< END OPEN AI RESPONSES SDK USAGE >>>>>>
-
-// Tidy up
-await mgr.StopWebServiceAsync();
-await model.UnloadAsync();
 // </complete_code>
```
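The function-calling flow in the diff leaves the dispatch step implicit: the host maps the model's emitted function call to local code, and the handler's JSON string is what gets submitted back via `ResponseItem.CreateFunctionCallOutputItem`. A no-network sketch of just that step (a hypothetical dispatch table, not part of the sample) under the same always-returns-Seattle assumption:

```csharp
using System;
using System.Collections.Generic;

// Map a function name emitted by the model to a local handler. The handler's
// JSON string is what the sample feeds back to the model on the follow-up
// request via ResponseItem.CreateFunctionCallOutputItem(callId, toolOutput).
var tools = new Dictionary<string, Func<string>>
{
    // Matches the sample's empty-params get_weather: arguments are ignored
    // and the handler always reports Seattle weather.
    ["get_weather"] = () => """{"location": "Seattle", "weather": "72 degrees F and sunny"}""",
};

string? Invoke(string functionName) =>
    tools.TryGetValue(functionName, out var handler) ? handler() : null;

Console.WriteLine(Invoke("get_weather"));
Console.WriteLine(Invoke("unknown_tool") ?? "(no such tool; skip the round-trip)");
```

Returning `null` for an unknown name mirrors the sample's "model did not produce a function call; skip the round-trip" branch.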
samples/cs/responses-foundry-local-web-server/README.md

Lines changed: 55 additions & 0 deletions

````diff
@@ -0,0 +1,55 @@
+# Foundry Local Responses web service sample (C#)
+
+This sample starts the Foundry Local OpenAI-compatible web service, then uses the official OpenAI .NET SDK to call the Responses API.
+
+The pattern is:
+
+1. `FoundryLocalManager` handles Foundry Local setup, model download/load, web service startup, and cleanup.
+1. `OpenAI.Responses.ResponsesClient` (from the official `OpenAI` NuGet package) handles the actual `/v1/responses` calls.
+
+## Prerequisites
+
+- .NET 9 SDK
+- Internet access on first run to download the sample model
+
+## What the sample does
+
+1. Initializes `FoundryLocalManager`.
+1. Downloads and registers execution providers.
+1. Downloads and loads `qwen2.5-0.5b`.
+1. Starts the local web service at `http://127.0.0.1:52495`.
+1. Creates an `OpenAIClient` pointed at `http://127.0.0.1:52495/v1`.
+1. Runs a non-streaming Responses call.
+1. Runs a streaming Responses call (`StreamingResponseOutputTextDeltaUpdate` events).
+1. Runs a Responses function-calling flow with a sample `get_weather` tool, then submits a tool result back via `previous_response_id`.
+1. Stops the web service and unloads the model.
+
+## Run the sample
+
+```powershell
+cd samples/cs/responses-foundry-local-web-server
+dotnet run
+```
+
+## Expected output
+
+```text
+=== Non-streaming ===
+[ASSISTANT]: Local AI runs language models entirely on your own machine.
+
+=== Streaming ===
+[ASSISTANT]: 1, 2, 3.
+
+=== Function calling ===
+[TOOL CALL]: get_weather({})
+[ASSISTANT FINAL]: It's 72 degrees F and sunny in Seattle.
+
+```
+
+The exact model text varies.
+
+## Troubleshooting
+
+If the sample fails while creating `FoundryLocalManager` with a native symbol error such as `Failed to resolve 'execute_command_with_binary' symbol`, the installed Foundry Local Core runtime is older than the native bits expect. Try the latest stable `Microsoft.AI.Foundry.Local[.WinML]` package, or a recent ORT-Nightly package if needed.
+
+If port `52495` is already in use, edit `Program.cs` and change `config.Web.Urls`.
````
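For the port-in-use case the README mentions, a standalone probe (a sketch, not part of the sample) can check whether the default port is available before running:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Probe whether a TCP port on loopback is free by attempting to bind it.
bool IsPortFree(int port)
{
    var listener = new TcpListener(IPAddress.Loopback, port);
    try
    {
        listener.Start();   // throws SocketException if something is already bound
        return true;
    }
    catch (SocketException)
    {
        return false;
    }
    finally
    {
        listener.Stop();    // safe to call even if Start failed
    }
}

Console.WriteLine(IsPortFree(52495)
    ? "Port 52495 is free; the sample's default config.Web.Urls should work."
    : "Port 52495 is busy; edit config.Web.Urls in Program.cs to pick another port.");
```

Binding and immediately releasing the port is a cheap pre-flight check; it cannot guard against another process grabbing the port right afterward, but it catches the common case of a stale web service still listening.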
