Commit b3aa78c

dpage and claude authored
Added support for custom LLM provider URLs for OpenAI and Anthropic, allowing use of OpenAI-compatible providers such as LM Studio, EXO, and LiteLLM. #9703
- Add configurable API URL fields for OpenAI and Anthropic providers
- Make API keys optional when using custom URLs (for local providers)
- Auto-clear model dropdown when provider settings change
- Refresh button uses current unsaved form values
- Update documentation and release notes

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
1 parent 78e3b67 commit b3aa78c

File tree

13 files changed: +354 -96 lines changed


docs/en_US/ai_tools.rst

Lines changed: 11 additions & 4 deletions
@@ -48,15 +48,18 @@ button and select *AI*).
 Select your preferred LLM provider from the dropdown:

 **Anthropic**
-Use Claude models from Anthropic. Requires an Anthropic API key.
+Use Claude models from Anthropic, or any Anthropic-compatible API provider.

-* **API Key File**: Path to a file containing your Anthropic API key (obtain from https://console.anthropic.com/).
+* **API URL**: Custom API endpoint URL (leave empty for default: https://api.anthropic.com/v1).
+* **API Key File**: Path to a file containing your Anthropic API key (obtain from https://console.anthropic.com/). Optional when using a custom URL with a provider that does not require authentication.
 * **Model**: Select from available Claude models (e.g., claude-sonnet-4-20250514).

 **OpenAI**
-Use GPT models from OpenAI. Requires an OpenAI API key.
+Use GPT models from OpenAI, or any OpenAI-compatible API provider (e.g.,
+LiteLLM, LM Studio, EXO, or other local inference servers).

-* **API Key File**: Path to a file containing your OpenAI API key (obtain from https://platform.openai.com/).
+* **API URL**: Custom API endpoint URL (leave empty for default: https://api.openai.com/v1). Include the ``/v1`` path prefix if required by your provider.
+* **API Key File**: Path to a file containing your OpenAI API key (obtain from https://platform.openai.com/). Optional when using a custom URL with a provider that does not require authentication.
 * **Model**: Select from available GPT models (e.g., gpt-4).

 **Ollama**
@@ -72,6 +75,10 @@ Select your preferred LLM provider from the dropdown:
 * **API URL**: The URL of the Docker Model Runner API (default: http://localhost:12434).
 * **Model**: Select from available models or enter a custom model name.

+.. note:: You can also use the *OpenAI* provider with a custom API URL for any
+   OpenAI-compatible endpoint, including Docker Model Runner and other local
+   inference servers.
+
 After configuring your provider, click *Save* to apply the changes.
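The "leave empty for default" behavior documented above can be sketched as a small fallback helper. This is a hypothetical illustration of the documented semantics, not pgAdmin's actual implementation; the function and constant names are invented for this example.

```python
# Hypothetical sketch of the documented "leave empty for default" behavior;
# names are illustrative and not pgAdmin's actual implementation.
DEFAULT_ANTHROPIC_API_URL = "https://api.anthropic.com/v1"
DEFAULT_OPENAI_API_URL = "https://api.openai.com/v1"


def resolve_api_url(custom_url: str, default_url: str) -> str:
    """Return the configured endpoint, falling back to the provider default."""
    trimmed = (custom_url or "").strip()
    return trimmed if trimmed else default_url
```

Under this reading, an empty *API URL* field resolves to the hosted provider, while a value such as ``http://localhost:1234/v1`` is used verbatim, which is why the ``/v1`` prefix must be included when the provider expects it.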

docs/en_US/preferences.rst

Lines changed: 23 additions & 4 deletions
@@ -47,19 +47,33 @@ Use the fields on the *AI* panel to configure your LLM provider:

 **Anthropic Settings:**

+* Use the *API URL* field to set a custom API endpoint URL. Leave empty to use
+  the default Anthropic API (``https://api.anthropic.com/v1``). Set a custom URL
+  to use an Anthropic-compatible API provider.
+
 * Use the *API Key File* field to specify the path to a file containing your
-  Anthropic API key.
+  Anthropic API key. The API key may be optional when using a custom API URL
+  with a provider that does not require authentication.

 * Use the *Model* field to select from the available Claude models. Click the
-  refresh button to fetch the latest available models from Anthropic.
+  refresh button to fetch the latest available models from your configured
+  endpoint.

 **OpenAI Settings:**

+* Use the *API URL* field to set a custom API endpoint URL. Leave empty to use
+  the default OpenAI API (``https://api.openai.com/v1``). Set a custom URL to
+  use any OpenAI-compatible API provider (e.g., LiteLLM, LM Studio, EXO).
+  Include the ``/v1`` path prefix if required by your provider
+  (e.g., ``http://localhost:1234/v1``).
+
 * Use the *API Key File* field to specify the path to a file containing your
-  OpenAI API key.
+  OpenAI API key. The API key may be optional when using a custom API URL
+  with a provider that does not require authentication.

 * Use the *Model* field to select from the available GPT models. Click the
-  refresh button to fetch the latest available models from OpenAI.
+  refresh button to fetch the latest available models from your configured
+  endpoint.

 **Ollama Settings:**

@@ -79,6 +93,11 @@ Use the fields on the *AI* panel to configure your LLM provider:
   model name. Click the refresh button to fetch the latest available models
   from your Docker Model Runner.

+.. note:: You can also use the *OpenAI* provider with a custom API URL for any
+   OpenAI-compatible endpoint, including Docker Model Runner, LM Studio, EXO,
+   and other local inference servers. This can be useful when you want to use
+   a provider that isn't explicitly listed but supports the OpenAI API format.
+
 The Browser Node
 ****************
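The rule described in these settings, that an API key is mandatory only for the default hosted endpoints and becomes optional with a custom URL, could be captured by a small validation helper. This is a hypothetical sketch of that documented rule, not pgAdmin's actual validation code; the function name and messages are invented.

```python
def validate_provider_settings(api_url: str, api_key: str) -> list[str]:
    """Collect configuration problems per the documented rules (sketch)."""
    errors = []
    if not api_url and not api_key:
        # The default hosted endpoint always requires a key.
        errors.append("API key is required when using the default endpoint")
    if api_url and not api_url.startswith(("http://", "https://")):
        errors.append("API URL must start with http:// or https://")
    return errors
```

With a custom URL such as ``http://localhost:1234/v1`` and no key, this sketch reports no errors, matching the "optional when using a custom URL" wording above.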

docs/en_US/release_notes_9_14.rst

Lines changed: 1 addition & 0 deletions
@@ -21,6 +21,7 @@ New features
 ************

 | `Issue #4011 <https://github.com/pgadmin-org/pgadmin4/issues/4011>`_ - Added support to download binary data from result grid.
+| `Issue #9703 <https://github.com/pgadmin-org/pgadmin4/issues/9703>`_ - Added support for custom LLM provider URLs for OpenAI and Anthropic, allowing use of OpenAI-compatible providers such as LM Studio, EXO, and LiteLLM.

 Housekeeping
 ************

web/config.py

Lines changed: 18 additions & 0 deletions
@@ -987,19 +987,35 @@
 DEFAULT_LLM_PROVIDER = ''

 # Anthropic Configuration
+# URL for the Anthropic API endpoint. Leave empty to use the default
+# (https://api.anthropic.com/v1). Set a custom URL to use an
+# Anthropic-compatible API provider.
+ANTHROPIC_API_URL = ''
+
 # Path to a file containing the Anthropic API key. The file should contain
 # only the API key with no additional whitespace or formatting.
 # Default: ~/.anthropic-api-key
+# Note: The API key may be optional when using a custom API URL with a
+# provider that does not require authentication.
 ANTHROPIC_API_KEY_FILE = '~/.anthropic-api-key'

 # The Anthropic model to use for AI features.
 # Examples: claude-sonnet-4-20250514, claude-3-5-haiku-20241022
 ANTHROPIC_API_MODEL = ''

 # OpenAI Configuration
+# URL for the OpenAI API endpoint. Leave empty to use the default
+# (https://api.openai.com/v1). Set a custom URL to use any
+# OpenAI-compatible API provider (e.g., LiteLLM, LM Studio, EXO).
+# Include the /v1 path prefix if required by your provider
+# (e.g., http://localhost:1234/v1).
+OPENAI_API_URL = ''
+
 # Path to a file containing the OpenAI API key. The file should contain
 # only the API key with no additional whitespace or formatting.
 # Default: ~/.openai-api-key
+# Note: The API key may be optional when using a custom API URL with a
+# provider that does not require authentication.
 OPENAI_API_KEY_FILE = '~/.openai-api-key'

 # The OpenAI model to use for AI features.
@@ -1020,6 +1036,8 @@
 # OpenAI-compatible API. No API key is required.
 # URL for the Docker Model Runner API endpoint. Leave empty to disable.
 # Typical value: http://localhost:12434
+# Tip: You can also use the OpenAI provider with a custom API URL for any
+# OpenAI-compatible endpoint, including Docker Model Runner.
 DOCKER_API_URL = ''

 # The Docker Model Runner model to use for AI features.
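Putting the new settings together, pointing the OpenAI provider at a local inference server might look like the following local-override fragment. All values here are illustrative assumptions: port 1234 is LM Studio's common default (verify for your setup), the model name is an example, and the provider identifier ``'openai'`` for ``DEFAULT_LLM_PROVIDER`` is assumed rather than confirmed by this diff.

```python
# Hypothetical config_local.py fragment -- illustrative values only.
# Provider identifier assumed to be 'openai'; check your pgAdmin version.
DEFAULT_LLM_PROVIDER = 'openai'

# LM Studio commonly serves an OpenAI-compatible API on port 1234;
# note the /v1 path prefix, as the comments above recommend.
OPENAI_API_URL = 'http://localhost:1234/v1'

# A local provider may need no authentication, so the key file can be
# left unset; keep the default path if your provider requires a key.
OPENAI_API_KEY_FILE = ''

# Must match a model actually served by the local provider.
OPENAI_API_MODEL = 'qwen2.5-7b-instruct'
```
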
