2 changes: 1 addition & 1 deletion doc/GRAMMAR.md
@@ -89,7 +89,7 @@ Tasks can optionally specify which Model to use on the configured inference endpoint
This is a user prompt.
```

- Note that model identifiers may differ between OpenAI compatible endpoint providers, make sure you change your model identifier accordingly when switching providers. If not specified, a default LLM model (`gpt-4o`) is used.
+ Note that model identifiers may differ between OpenAI compatible endpoint providers, make sure you change your model identifier accordingly when switching providers. If not specified, a default LLM model (such as `gpt-4.1`) is used.
Copilot AI Apr 2, 2026

This note now suggests a default model like gpt-4.1, but the actual default model identifier is endpoint-specific (e.g., models.github.ai defaults to a namespaced ID like openai/gpt-4.1 in src/seclab_taskflow_agent/agent.py). To avoid confusion for users switching endpoints, consider explicitly stating that the default model string depends on AI_API_ENDPOINT and may be namespaced, rather than implying a single identifier.

Suggested change:
- Note that model identifiers may differ between OpenAI compatible endpoint providers, make sure you change your model identifier accordingly when switching providers. If not specified, a default LLM model (such as `gpt-4.1`) is used.
+ Note that model identifiers, including the default model string, depend on the configured `AI_API_ENDPOINT` and may be namespaced (for example, `openai/gpt-4.1` on some endpoints). When switching providers, make sure you update your model identifier accordingly. If `model` is not specified in a task, the endpoint's own default LLM model will be used (which may be `gpt-4.1` or another provider-specific default).


Parameters to the model can also be specified in the task using the `model_settings` section:
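For context, `model` and `model_settings` fit into a task roughly as follows. This is a hypothetical sketch only: the surrounding task fields and the `temperature`/`max_tokens` parameter names are assumptions, not confirmed keys from the grammar.

```yaml
# Hypothetical task fragment; the authoritative schema is doc/GRAMMAR.md.
# `temperature` and `max_tokens` are assumed OpenAI-style parameter names.
- name: summarize_findings
  model: gpt_default          # alias from a model config, or a raw provider model ID
  model_settings:
    temperature: 0.2          # lower temperature for more deterministic output
    max_tokens: 4096
  prompt: |
    This is a user prompt.
```

If `model` is omitted, the endpoint-specific default chosen in `src/seclab_taskflow_agent/agent.py` applies.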

4 changes: 1 addition & 3 deletions examples/model_configs/model_config.yaml
@@ -5,7 +5,5 @@ seclab-taskflow-agent:
  version: "1.0"
  filetype: model_config
  models:
-   sonnet_default: claude-sonnet-4
-   sonnet_latest: claude-sonnet-4.5
+   gpt_default: gpt-4.1
-   gpt_latest: gpt-5

Copilot AI Apr 2, 2026

gpt_latest (and the previously defined Claude aliases) are removed from the example model_config, but other repository examples and docs still reference gpt_latest (e.g., README “Model configs” section, doc/GRAMMAR.md model_config example, and examples/taskflows/CVE-2023-2283.yaml uses model: gpt_latest with this config). With this change, those taskflows will pass gpt_latest through as the provider model ID and likely fail at runtime. Consider either reintroducing gpt_latest (mapping to the desired provider model) or updating the referenced docs/taskflows to use gpt_default (or another existing key).

Suggested change:
+ gpt_latest: gpt-4.1

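The breakage the comment describes is straightforward to reproduce: a taskflow that still references the removed alias passes the alias string through unresolved. A sketch, with the taskflow schema assumed from the files named above:

```yaml
# Fragment in the style of examples/taskflows/CVE-2023-2283.yaml (schema assumed).
model: gpt_latest   # alias no longer defined in model_config.yaml, so
                    # "gpt_latest" is sent verbatim as the provider model ID
                    # and will likely be rejected at runtime
```

Switching such taskflows to `model: gpt_default` (which still maps to `gpt-4.1`), or reintroducing the `gpt_latest` alias, would avoid the runtime failure.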
6 changes: 3 additions & 3 deletions src/seclab_taskflow_agent/agent.py
@@ -41,11 +41,11 @@
  api_endpoint = get_AI_endpoint()
  match urlparse(api_endpoint).netloc:
      case AI_API_ENDPOINT_ENUM.AI_API_GITHUBCOPILOT:
-         default_model = "gpt-4o"
+         default_model = "gpt-4.1"
      case AI_API_ENDPOINT_ENUM.AI_API_MODELS_GITHUB:
-         default_model = "openai/gpt-4o"
+         default_model = "openai/gpt-4.1"
      case AI_API_ENDPOINT_ENUM.AI_API_OPENAI:
-         default_model = "gpt-4o"
+         default_model = "gpt-4.1"
Comment on lines 41 to +48
Copilot AI Apr 2, 2026

Changing the runtime default model from gpt-4o to gpt-4.1 makes existing documentation inaccurate (e.g., doc/GRAMMAR.md explicitly states the default is gpt-4o). Please update the docs (and any other user-facing references) to reflect the new default so users aren’t misled when they omit model: in task definitions.

      case _:
          default_model = "please-set-default-model-via-env"
