refactor(provider): Add lifecycle guards for OpenAI client reusability #7434
Open
Tz-WIND wants to merge 2 commits into AstrBotDevs:master from
Conversation
Contributor
Hey - I've found 3 issues, and left some high level feedback:
- The combination of `_is_underlying_client_closed()` and `_ensure_client()` runs on every request and will log a warning and recreate the client on every call if the SDK internals change (raising `AttributeError`); consider caching a boolean flag when detection fails so you only log once and avoid repeated costly client recreation attempts.
- Client recreation in `_ensure_client()` is not synchronized, so in high-concurrency scenarios multiple coroutines could simultaneously see a closed client and race to create new instances and update `self.default_params`; consider adding a simple async lock around the recreation path to make this thread-safe.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The combination of `_is_underlying_client_closed()` and `_ensure_client()` runs on every request and will log a warning and recreate the client on every call if the SDK internals change (raising `AttributeError`); consider caching a boolean flag when detection fails so you only log once and avoid repeated costly client recreation attempts.
- Client recreation in `_ensure_client()` is not synchronized, so in high-concurrency scenarios multiple coroutines could simultaneously see a closed client and race to create new instances and update `self.default_params`; consider adding a simple async lock around the recreation path to make this thread-safe.
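The async-lock suggestion above can be sketched as follows. This is an illustrative stand-alone model, not the PR's code: `OpenAISourceSketch`, its fake client factory, and the `closed` attribute are stand-ins for the real provider class, `_create_openai_client()`, and the SDK's closed-state check.

```python
import asyncio
import inspect


class OpenAISourceSketch:
    """Simplified stand-in for the provider class in openai_source.py."""

    def __init__(self):
        self._client_lock = asyncio.Lock()  # serializes client recreation
        self.client = None
        self.default_params = ()

    def _create_openai_client(self):
        # Hypothetical factory; the real one builds an openai.AsyncOpenAI.
        class _FakeCompletions:
            def create(self, model, messages, temperature=1.0):
                ...

        class _FakeClient:
            def __init__(self):
                self.chat = type("Chat", (), {"completions": _FakeCompletions()})()
                self.closed = False

        return _FakeClient()

    def _is_underlying_client_closed(self) -> bool:
        # Stand-in for the PR's SDK-internal closed-state detection.
        return getattr(self.client, "closed", False)

    async def _ensure_client(self) -> None:
        # Fast path: most requests skip the lock entirely.
        if self.client is not None and not self._is_underlying_client_closed():
            return
        async with self._client_lock:
            # Re-check inside the lock: another coroutine may have
            # already recreated the client while we were waiting.
            if self.client is None or self._is_underlying_client_closed():
                self.client = self._create_openai_client()
                self.default_params = inspect.signature(
                    self.client.chat.completions.create,
                ).parameters.keys()
```

The double-check inside the lock is what prevents two coroutines from both recreating the client after both observed it as closed before the lock was acquired.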
## Individual Comments
### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="501-510" />
<code_context>
+ def _is_underlying_client_closed(self) -> bool:
</code_context>
<issue_to_address>
**issue (bug_risk):** Treating AttributeError as "client is closed" may cause endless client recreation and log spam if the SDK internals change.
In `_is_underlying_client_closed`, an `AttributeError` currently logs a warning and returns `True`, causing `_ensure_client` to recreate the client on every call if `_client` or `is_closed` is ever removed/renamed. Consider instead returning `False` on `AttributeError` (treat as "unknown / assume open"), or at least gating the warning and recreation behind a one-time flag to avoid repeated reconstruction and log noise in that scenario.
</issue_to_address>
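A minimal sketch of the one-time flag idea suggested above; `ClosedCheckSketch` and `_closed_check_unsupported` are hypothetical names, and the `self.client._client.is_closed` access mirrors the SDK-private attribute this review warns about:

```python
import logging

logger = logging.getLogger(__name__)


class ClosedCheckSketch:
    """Illustrates gating the AttributeError warning behind a one-time flag."""

    def __init__(self, client):
        self.client = client
        # Set once detection fails, so we warn a single time and stop
        # treating "can't tell" as "closed".
        self._closed_check_unsupported = False

    def _is_underlying_client_closed(self) -> bool:
        if self._closed_check_unsupported:
            return False  # unknown state: assume open, avoid recreation churn
        try:
            # SDK-private attribute; may disappear in future SDK versions.
            return self.client._client.is_closed
        except AttributeError:
            self._closed_check_unsupported = True
            logger.warning(
                "Cannot detect closed state of OpenAI client; "
                "assuming it is open from now on.",
            )
            return False
```

With this shape, a renamed SDK internal degrades to "assume open" after a single warning instead of triggering a recreation on every request.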
### Comment 2
<location path="astrbot/core/provider/sources/openai_source.py" line_range="462-464" />
<code_context>
self.custom_headers[key] = str(self.custom_headers[key])
- if "api_version" in provider_config:
+ self.client = self._create_openai_client()
+
+ self.default_params = inspect.signature(
+ self.client.chat.completions.create,
+ ).parameters.keys()
</code_context>
<issue_to_address>
**suggestion:** The `default_params` initialization logic is duplicated and could be centralized.
The logic that sets `self.default_params` from `inspect.signature(self.client.chat.completions.create)` now lives in both `__init__` and `_ensure_client`. To avoid divergence when this logic changes, extract it into a helper (e.g., `_refresh_default_params()`) and call that from both places.
Suggested implementation:
```python
        self.client = self._create_openai_client()
        self._refresh_default_params()

        model = provider_config.get("model", "unknown")
        self.set_model(model)
```
```python
    def _refresh_default_params(self) -> None:
        """
        Refresh default_params based on the current OpenAI client.

        This centralizes the logic for inspecting the chat.completions.create
        signature so it can be reused from __init__ and any place the client
        is (re)initialized.
        """
        if self.client is None:
            self.default_params = ()
            return
        self.default_params = inspect.signature(
            self.client.chat.completions.create,
        ).parameters.keys()

    def _create_http_client(self, provider_config: dict) -> httpx.AsyncClient | None:
        """Create an HTTP client with proxy support."""
        proxy = provider_config.get("proxy", "")
```
You should also update `_ensure_client` (or any other place that recreates `self.client`) to call `self._refresh_default_params()` instead of duplicating the `inspect.signature(self.client.chat.completions.create)` logic. Specifically:
1. After any assignment to `self.client = ...` inside `_ensure_client`, add `self._refresh_default_params()`.
2. Remove any direct `self.default_params = inspect.signature(...).parameters.keys()` code from `_ensure_client`.
</issue_to_address>
### Comment 3
<location path="astrbot/core/provider/sources/openai_source.py" line_range="525" />
<code_context>
+ )
+ return True
+
+ def _ensure_client(self) -> None:
+ """Ensure the client is usable; recreate it if it is None or the underlying connection is closed."""
+ if self.client is None or self._is_underlying_client_closed():
</code_context>
<issue_to_address>
**issue (complexity):** Consider introducing a unified `_ensure_client_with_key` helper to manage client creation and key binding so that call sites and `get_current_key` stay simple and side‑effect free.
You can reduce complexity around client/key lifecycle by centralizing “ensure client + key” and avoiding side‑effectful `get_current_key`:
```python
    def _ensure_client_with_key(self, api_key: str | None = None) -> None:
        """Ensure client exists and is bound to the given key (if provided)."""
        if api_key is not None and api_key != self.chosen_api_key:
            self.chosen_api_key = api_key
        # Reuse your existing logic for closed/None detection
        if self.client is None or self._is_underlying_client_closed():
            self.client = self._create_openai_client()
            self.default_params = inspect.signature(
                self.client.chat.completions.create,
            ).parameters.keys()
        elif api_key is not None:
            # Keep ability to rotate keys on an existing client
            self.client.api_key = api_key
```
Then the callers become simpler and consistent:
```python
# text_chat
for retry_cnt in range(max_retries):
    try:
        self._ensure_client_with_key(chosen_key)
        llm_response = await self._query(payloads, func_tool)
        break
    ...

# text_chat_stream
for retry_cnt in range(max_retries):
    try:
        self._ensure_client_with_key(chosen_key)
        async for response in self._query_stream(payloads, func_tool):
            yield response
        break
    ...

# set_key
def set_key(self, key) -> None:
    self._ensure_client_with_key(key)

# get_current_key
def get_current_key(self) -> str:
    # Avoids creating/recreating a client as a side effect
    return self.chosen_api_key
```
This keeps all current behavior (client recreation on closed/None, key rotation on retries) but:
- Removes repeated `self._ensure_client()` + `self.chosen_api_key = ...` + `self.client.api_key = ...` blocks.
- Makes the “which key is bound to which client” rule explicit in a single place.
- Prevents `get_current_key` from unexpectedly constructing or recreating a client.
</issue_to_address>
Contributor
Code Review
This pull request refactors the OpenAI client initialization and lifecycle management within the openai_source.py file. It extracts client creation into dedicated methods, introduces logic to ensure the client is active before use (_ensure_client), and enhances the terminate method for safer client shutdown. A notable point of feedback is the reliance on private attributes of the OpenAI SDK (_client.is_closed) for checking client status, which could lead to instability with future SDK updates and should be monitored for public API alternatives.
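Under the assumption that the client exposes an async `close()` (as `openai.AsyncOpenAI` does; modeled here by a fake), the safer-shutdown idea might look roughly like this. The method body is an illustrative guess at the pattern, not the PR's actual code:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


class TerminateSketch:
    """Illustrative model of a terminate() that tolerates repeated calls."""

    def __init__(self, client=None):
        self.client = client

    async def terminate(self) -> None:
        # Drop the reference first so a later _ensure_client() call
        # recreates the client instead of reusing a closed instance.
        client, self.client = self.client, None
        if client is None:
            return  # already terminated: no-op
        try:
            await client.close()
        except Exception as e:  # shutdown should never raise
            logger.warning("Error closing OpenAI client: %s", e)
```

Swapping the reference to `None` before awaiting `close()` also narrows the window in which a concurrent request could grab the half-closed client.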
Modifications / 改动点
refactor(provider): strengthen OpenAI client lifecycle management
Introduces robust client lifecycle guards in openai_source.py,
fixing client-reuse problems that could occur after configuration reload and after terminate.
Main changes:
Call _ensure_client() in methods such as get_current_key/set_key
These changes ensure the client behaves correctly in the following scenarios:
Affected modules: Provider management, OpenAI adapter
Screenshots or Test Results / 运行截图或测试结果
Checklist / 检查清单
😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
/ 如果 PR 中有新加入的功能,已经通过 Issue / 邮件等方式和作者讨论过。
👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
/ 我的更改经过了良好的测试,并已在上方提供了“验证步骤”和“运行截图”。
🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
/ 我确保没有引入新依赖库，或者引入了新依赖库的同时将其添加到 requirements.txt 和 pyproject.toml 文件相应位置。
😮 My changes do not introduce malicious code.
/ 我的更改没有引入恶意代码。
Summary by Sourcery
Refine lifecycle management for the OpenAI provider client to ensure safe reuse across initialization, termination, and configuration reload scenarios.
Enhancements: