Introduce support for OpenAI-compatible providers (`Provider = "OpenAICompatible"`). Updates include:

- Docs: README, HOW_IT_WORKS, TECHNICAL_REFERENCE, CONTRIBUTING
- Docker files: `.env.example`, `docker-compose.yml`
- CLI help
- Code changes to plumb new config fields (`BaseUrl`, `Organization`, `Project`, `ModelId` mapping) and environment variables (`OPENAI_COMPAT_*`)

`KernelFactory`, `AgentKernelFactory`, and Executor config/validation logic were extended to detect the new provider, validate required fields, and create chat completions against a custom endpoint when `OpenAICompatible` is selected. `DockerRunner` now propagates `OPENAI_COMPAT_*` env vars into containers.

Note: `appsettings.json` in this changeset contains a populated `ApiKey` value — do not commit real secrets; remove or rotate this key and use environment variables instead.
HOW_IT_WORKS.md (1 addition, 1 deletion)
@@ -60,7 +60,7 @@ The Analyst's job is to read the tutorial and produce a machine-readable test pl
  **Scraping** — The Analyst fetches the tutorial URL and parses the HTML. It follows navigation links within the same tutorial series, collecting up to a configurable maximum number of pages. Each page is converted to clean Markdown.

- **Analysis** — The Analyst sends the Markdown content to an AI model (OpenAI or Azure OpenAI) with a prompt that instructs it to extract every action a developer must take. The result is a list of structured steps in JSON format, called the **test plan** (`testplan.json`).
+ **Analysis** — The Analyst sends the Markdown content to an AI model (OpenAI, Azure OpenAI, or any OpenAI-compatible provider) with a prompt that instructs it to extract every action a developer must take. The result is a list of structured steps in JSON format, called the **test plan** (`testplan.json`).

  **Compaction** — Long tutorials can produce hundreds of raw steps. To keep execution time and AI cost reasonable, the Analyst merges adjacent steps of the same type (e.g., two consecutive file edits become one step with two modifications). This is controlled by the `--target-steps` and `--max-steps` arguments.
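The compaction rule (merging adjacent steps of the same type into one) can be sketched as a tiny filter over `type:payload` lines. This is an illustrative approximation only, not the real implementation, which also honors `--target-steps` and `--max-steps`:

```shell
# Merge adjacent lines that share the same step type.
# Input:  one "type:payload" step per line.
# Output: runs of same-typed steps collapsed into a single line.
compact() {
  prev_type=""
  while IFS=: read -r type payload; do
    if [ "$type" = "$prev_type" ]; then
      # Same type as the previous step: append payload to the current line.
      printf ' %s' "$payload"
    else
      # New step type: finish the previous line and start a new one.
      [ -n "$prev_type" ] && printf '\n'
      printf '%s:%s' "$type" "$payload"
      prev_type="$type"
    fi
  done
  printf '\n'
}
```

For example, two consecutive `edit` steps followed by a `run` step collapse into one `edit` line and one `run` line.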
README.md

- TutorialValidator is an AI-powered tool that checks whether a software documentation tutorial actually works. You give it a URL, it scrapes the tutorial, turns every instruction into an executable step, then runs those steps exactly as a developer would — installing packages, writing files, running commands, making HTTP calls, and asserting results. If any step fails, the tutorial has a bug.
+ TutorialValidator is an AI-powered tool that checks whether a software documentation tutorial actually works. You give it a URL, it scrapes the tutorial, turns every instruction into an executable step, then runs those steps exactly as a developer would — installing packages, writing files, running commands, making HTTP calls, and asserting results.

- It was built to validate [ABP Framework](https://abp.io) tutorials, but the architecture supports any publicly accessible tutorial.
+ We originally built it internally to validate [ABP Framework](https://abp.io) tutorials, and then decided to publish it as open source so you can use it to validate any publicly accessible tutorial.

  ---
@@ -18,7 +17,7 @@ Before you start, make sure you have the following installed:
- | OpenAI **or** Azure OpenAI API key | — | [platform.openai.com](https://platform.openai.com) or your Azure portal |
+ | AI provider API key | — | Refer to your AI provider's documentation |

  > Docker is required for the default (recommended) execution mode. If you want to run without Docker, see [Running Locally Without Docker](#running-locally-without-docker) below.

@@ … @@
+ See [Environment Variables](#environment-variables) for all supported providers.

  **Step 4 — Run the validation**
@@ -210,10 +215,11 @@ The file at `src/Validator.Orchestrator/appsettings.json` controls all default s
  | Field | Description |
  |---|---|
- | `Provider` | AI provider to use. Accepted values: `OpenAI`, `AzureOpenAI`. Auto-detected from environment variables if omitted. |
- | `Model` | The model name to request from OpenAI (e.g. `gpt-5.2`, `gpt-4o`). Ignored when using Azure OpenAI. |
+ | `Provider` | AI provider to use. Accepted values: `OpenAI`, `AzureOpenAI`, `OpenAICompatible`. Auto-detected from environment variables if omitted. |
+ | `Model` | The model name to request from your AI provider (e.g. `gpt-5.2`, `gpt-4o`). Ignored when using Azure OpenAI. |
  | `DeploymentName` | The deployment name for Azure OpenAI. When using OpenAI directly, this can mirror the `Model` value or be left empty. |
- | `ApiKey` | Your API key. Leave blank and use the `OPENAI_API_KEY` or `AZURE_OPENAI_API_KEY` environment variable instead — do not commit keys to source control. |
+ | `ApiKey` | Your AI provider API key. Leave blank and use environment variables (`OPENAI_API_KEY`, `AZURE_OPENAI_API_KEY`, or `OPENAI_COMPAT_API_KEY`) instead — do not commit keys to source control. |
+ | `BaseUrl` | Base URL for OpenAI-compatible providers (e.g. `https://your-provider.example.com/v1`). Used when `Provider` is `OpenAICompatible`. |
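The table's settings map onto the environment variables named elsewhere in this changeset. A minimal sketch with placeholder values (substitute your provider's real endpoint, key, and model; `your-provider.example.com` and the values below are not real):

```shell
# Placeholder values only; never commit or hard-code real keys.
export AI_PROVIDER="OpenAICompatible"
export OPENAI_COMPAT_BASE_URL="https://your-provider.example.com/v1"
export OPENAI_COMPAT_API_KEY="replace-me"
export OPENAI_COMPAT_MODEL="your-model-name"
```

With these set, the `ApiKey` and `BaseUrl` fields in `appsettings.json` can stay blank.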
TECHNICAL_REFERENCE.md (16 additions, 2 deletions)
@@ -146,7 +146,7 @@ After compaction, all steps are renumbered sequentially starting from 1.
  ### 3.1 Initialization

  `AgentKernelFactory.CreateExecutorKernel` builds a `Microsoft.SemanticKernel.Kernel` with:
- - The configured AI chat completion service (OpenAI or Azure OpenAI)
+ - The configured AI chat completion service (OpenAI, Azure OpenAI, or OpenAI-compatible endpoint)
  - Four plugins registered as kernel functions (see [3.3 Plugins](#33-plugins))
  - A `FunctionCallTracker` that intercepts and records every function call and its result for deterministic result parsing
@@ -487,7 +487,8 @@ A workaround for the .NET 10 SDK image ships an RC/preview runtime while ABP CLI
  1. If `AI_PROVIDER` is set explicitly, use that value (`OpenAI` or `AzureOpenAI`).
  2. If `AZURE_OPENAI_ENDPOINT` is set, default to `AzureOpenAI`.
- 3. Otherwise, use `OpenAI`.
+ 3. Otherwise, if `OPENAI_COMPAT_BASE_URL` and `OPENAI_COMPAT_API_KEY` are present, use `OpenAICompatible`.
+ 4. Otherwise, use `OpenAI`.

  ### OpenAI Configuration
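The four-step detection order in the new version of this list can be sketched in shell. This mirrors the documented logic only; it is not the actual C# implementation:

```shell
# Assumed mirror of the provider auto-detection order described above.
detect_provider() {
  if [ -n "${AI_PROVIDER:-}" ]; then
    # 1. Explicit override always wins.
    echo "$AI_PROVIDER"
  elif [ -n "${AZURE_OPENAI_ENDPOINT:-}" ]; then
    # 2. Azure endpoint present: default to Azure OpenAI.
    echo "AzureOpenAI"
  elif [ -n "${OPENAI_COMPAT_BASE_URL:-}" ] && [ -n "${OPENAI_COMPAT_API_KEY:-}" ]; then
    # 3. Both compat variables present: use the OpenAI-compatible provider.
    echo "OpenAICompatible"
  else
    # 4. Fall back to plain OpenAI.
    echo "OpenAI"
  fi
}
```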
@@ -506,6 +507,19 @@ Required:
  `AI.Model` is ignored for Azure OpenAI; the model is determined by the deployment.

+ ### OpenAI-Compatible Configuration
+
+ Required:
+ - `OPENAI_COMPAT_BASE_URL` or `AI.BaseUrl` in `appsettings.json`
+ - `OPENAI_COMPAT_API_KEY` or `AI.ApiKey`
+ - `OPENAI_COMPAT_MODEL` or `AI.ModelId` (falls back to `AI.DeploymentName`)
+
+ Optional:
+ - `OPENAI_COMPAT_ORG` / `AI.Organization`
+ - `OPENAI_COMPAT_PROJECT` / `AI.Project`
+
+ Set `AI_PROVIDER=OpenAICompatible` to force this mode when multiple provider variables are present.

  ### Configuration Precedence

  Environment variables always override `appsettings.json`. The configuration is loaded using `Microsoft.Extensions.Configuration.IConfigurationBuilder`:
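The model-name fallback for the OpenAI-compatible path (`OPENAI_COMPAT_MODEL`, then `AI.ModelId`, then `AI.DeploymentName`) can be approximated with shell defaulting. `AI_MODEL_ID` and `AI_DEPLOYMENT_NAME` below are hypothetical stand-ins for the JSON settings, used only to illustrate the resolution order:

```shell
# Resolve the model name: env var first, then the two JSON-backed fallbacks.
# AI_MODEL_ID / AI_DEPLOYMENT_NAME are stand-ins, not real variable names.
resolve_model() {
  echo "${OPENAI_COMPAT_MODEL:-${AI_MODEL_ID:-${AI_DEPLOYMENT_NAME:-}}}"
}
```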