
Add OpenAI provider support #208

Open
ahmadhozien wants to merge 1 commit into FujiwaraChoki:main from ahmadhozien:feat/openai-provider-support

Conversation

@ahmadhozien

Closes #206

What changed

  • Added provider-aware config helpers for Ollama and OpenAI (a rough sketch follows this list)
  • Added OpenAI Responses API support to the LLM adapter
  • Updated startup and cron model selection to pass the selected provider through
  • Documented the new OpenAI config flow in the docs and example config
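
For context, a rough sketch of how the provider-aware selection could work. The function and config key names below are illustrative guesses, not the actual src/config.py API:

# Illustrative sketch only -- key and function names are guesses,
# not the actual src/config.py API.
def get_provider_config(config: dict) -> dict:
    """Resolve the LLM base URL and model from the configured provider."""
    provider = config.get("llm_provider", "ollama")
    if provider == "openai":
        return {
            "base_url": config.get("openai_base_url", "https://api.openai.com/v1"),
            "model": config.get("openai_model", "gpt-4o-mini"),
        }
    # Default to a local Ollama instance.
    return {
        "base_url": config.get("ollama_base_url", "http://localhost:11434"),
        "model": config.get("ollama_model", "llama3"),
    }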

Validation

  • venv\Scripts\python.exe -m py_compile src\config.py src\llm_provider.py src\main.py src\cron.py

@kamusis commented Apr 7, 2026

Hey @ahmadhozien, thanks for the contribution! One concern with the current implementation: generate_text calls POST /responses (OpenAI's Responses API), which is OpenAI-specific and not part of the standard OpenAI-compatible API spec.

Most providers that advertise OpenAI compatibility (OpenRouter, MiniMax, Groq, Azure OpenAI, and others) only implement POST /v1/chat/completions, not /responses. As a result, pointing a custom openai_base_url at any of those providers will still fail.

It would be much better to use POST /chat/completions with the standard payload:

{
  "model": "<model>",
  "messages": [{"role": "user", "content": "<prompt>"}]
}

This is universally supported across all OpenAI-compatible APIs and would make the openai_base_url option genuinely useful for third-party providers. Would you be open to switching to that endpoint?
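
For reference, a minimal sketch of what the adapter's request against that endpoint could look like. The function signature and config handling are illustrative, not the current src/llm_provider.py code:

# Minimal sketch of the suggested call; signature and config handling
# are illustrative, not the current src/llm_provider.py code.
import requests

def generate_text(prompt: str, model: str, base_url: str, api_key: str) -> str:
    response = requests.post(
        f"{base_url.rstrip('/')}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]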



Development

Successfully merging this pull request may close these issues:

  • Add OpenAI API provider support for text generation (#206)
