fix: Address unresolved PR review comments from #1, #7, #8, #13
- Fix Slack rate limit detection: `_handle_slack_error` now correctly handles `SlackResponse` objects (not just dicts) by checking `resp.data`, and extracts the `Retry-After` header for rate limit errors
- Fix Teams JWT verification blocking: wrap synchronous `PyJWKClient.get_signing_key_from_jwt` in `asyncio.to_thread` to avoid blocking the event loop during JWKS fetch
- Make Slack client cache max configurable: add `client_cache_max` field to `SlackAdapterConfig` (defaults to 100)
- Fix unsafe session cleanup on LRU eviction: remove `create_task` session close that could break concurrent requests using evicted clients; rely on garbage collection instead
- Add tests for `SlackResponse` handling, rate limiting, and configurable cache max

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Code Review
This pull request makes the Slack client cache size configurable, updates Slack error handling to support SlackResponse objects and extract Retry-After headers, and ensures JWT signing key retrieval in the Teams adapter is non-blocking. Feedback suggests using an explicit None check for the cache size to allow a value of zero, implementing a shared aiohttp.ClientSession to avoid resource leaks from unclosed sessions, and validating that the Retry-After header is numeric before integer conversion.
```diff
  # Cache of AsyncWebClient instances keyed by bot token (LRU-bounded)
  self._client_cache: OrderedDict[str, Any] = OrderedDict()
- self._client_cache_max = 100  # max cached clients
+ self._client_cache_max = config.client_cache_max or 100
```
Using `config.client_cache_max or 100` prevents setting the cache size to 0. If disabling the cache is a valid use case, you should check for `None` explicitly so that 0 is accepted as a valid integer value.
```diff
- self._client_cache_max = config.client_cache_max or 100
+ self._client_cache_max = config.client_cache_max if config.client_cache_max is not None else 100
```
```python
# Evict oldest (LRU). We intentionally do NOT close the evicted
# client's session here because other concurrent requests may still
# hold a reference to the evicted AsyncWebClient instance. The
# underlying aiohttp.ClientSession will be closed by the garbage
# collector (via __del__) once all references are released.
self._client_cache.popitem(last=False)
```
Relying on garbage collection to close aiohttp sessions is generally discouraged in asyncio, as it can lead to resource leaks and "Unclosed client session" warnings. While removing the `create_task(session.close())` call prevents breaking concurrent requests, a more robust solution would be to manage a single shared `aiohttp.ClientSession` within the `SlackAdapter` and pass it to each `AsyncWebClient` instance. This would eliminate the need for per-client session management during eviction.
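A minimal sketch of the shared-session pattern being suggested. The factories are injected here to keep the sketch dependency-free; in the real adapter they would presumably be `aiohttp.ClientSession` and slack_sdk's `AsyncWebClient(token=..., session=...)`, and the class/method names below are illustrative, not the actual adapter API:

```python
from collections import OrderedDict
from typing import Any, Callable


class SharedSessionCache:
    """Sketch: the adapter owns ONE shared HTTP session and hands it to
    every cached client, so evicting a client from the LRU cache never
    needs to close (or leak) a per-client session."""

    def __init__(
        self,
        session_factory: Callable[[], Any],
        client_factory: Callable[[str, Any], Any],
        cache_max: int = 100,
    ) -> None:
        self._session_factory = session_factory
        self._client_factory = client_factory
        self._session: Any = None
        self._cache: "OrderedDict[str, Any]" = OrderedDict()
        self._cache_max = cache_max

    def get_client(self, token: str) -> Any:
        if token in self._cache:
            self._cache.move_to_end(token)  # mark as most recently used
            return self._cache[token]
        if self._session is None:
            self._session = self._session_factory()  # created once, shared
        client = self._client_factory(token, self._session)
        self._cache[token] = client
        if len(self._cache) > self._cache_max:
            # Safe eviction: the shared session outlives any evicted
            # client, so concurrent requests on it are unaffected.
            self._cache.popitem(last=False)
        return client
```

The shared session would then be closed exactly once, at adapter shutdown, instead of per evicted client.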
```python
    retry_after = resp.headers.get("Retry-After")
elif isinstance(resp, dict):
    retry_after = resp.get("headers", {}).get("Retry-After")
raise AdapterRateLimitError("slack", int(retry_after) if retry_after else None) from error
```
The `int(retry_after)` conversion may raise a `ValueError` if the `Retry-After` header contains an invalid integer or a date string (which is allowed by the HTTP spec). It is safer to verify the value is numeric before conversion, so that `AdapterRateLimitError` is raised correctly without crashing the error handler.
```diff
- raise AdapterRateLimitError("slack", int(retry_after) if retry_after else None) from error
+ raise AdapterRateLimitError("slack", int(retry_after) if retry_after and str(retry_after).isdigit() else None) from error
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9519a81e08
```diff
  # Cache of AsyncWebClient instances keyed by bot token (LRU-bounded)
  self._client_cache: OrderedDict[str, Any] = OrderedDict()
- self._client_cache_max = 100  # max cached clients
+ self._client_cache_max = config.client_cache_max or 100
```
Preserve explicit zero for client cache max
Use an explicit `None` check here instead of `or 100`, because 0 is a valid caller-provided integer but is currently treated as "unset." With this code, configuring `client_cache_max=0` (e.g., to disable caching) silently falls back to 100 clients, so runtime behavior does not match the provided configuration.
```python
    retry_after = resp.headers.get("Retry-After")
elif isinstance(resp, dict):
    retry_after = resp.get("headers", {}).get("Retry-After")
raise AdapterRateLimitError("slack", int(retry_after) if retry_after else None) from error
```
Handle non-integer Retry-After values safely
This directly calls `int(retry_after)` when a rate-limit response is detected, which can raise `ValueError` for valid-but-non-integer or malformed header values and prevent `AdapterRateLimitError` from being raised. In that case, callers expecting standardized rate-limit handling receive an unexpected parsing exception instead.
Summary
- `_handle_slack_error` now correctly handles `SlackResponse` objects (with `.data` dict) instead of only plain dicts, and extracts the `Retry-After` header for rate limit errors
- Wrap `PyJWKClient.get_signing_key_from_jwt` in `asyncio.to_thread` to avoid blocking the event loop
- Add `client_cache_max` field to `SlackAdapterConfig` (defaults to 100)
- Remove `create_task` session close that could break concurrent requests using evicted clients; rely on GC instead

Test plan
🤖 Generated with Claude Code