feat: optimize async io performance and benchmark coverage #5737
Conversation
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request targets performance regressions in asynchronous I/O operations by minimizing unnecessary thread-pool context switches in critical code paths. It significantly improves the efficiency of large backup exports, high-frequency message conversions, and file downloads, which previously suffered from excessive context switching.
Highlights
Hey - I've found 3 issues, and left some high-level feedback:
- The `_stream_to_file` helper returns `downloaded_size` but `download_file` never uses it, so either drop the return value or thread it through to callers to avoid confusion about whether the byte count is meaningful.
- In `_stream_to_file`, consider tightening the `file_obj` type hint (e.g., to a binary `IO` type) so the expected interface for the file-like object is clear and easier to validate with static analysis.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `_stream_to_file` helper returns `downloaded_size` but `download_file` never uses it, so either drop the return value or thread it through to callers to avoid confusion about whether the byte count is meaningful.
- In `_stream_to_file`, consider tightening the `file_obj` type hint (e.g., to a binary `IO` type) so the expected interface for the file-like object is clear and easier to validate with static analysis.
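As a sketch of what the tightened hint buys (a standalone example; `write_chunk` is a hypothetical stand-in for the write path, not project code):

```python
import io
from typing import BinaryIO

def write_chunk(file_obj: BinaryIO, data: bytes) -> int:
    # BinaryIO documents that write() must accept bytes; passing a
    # text-mode file here would be flagged by a static type checker.
    return file_obj.write(data)

buf = io.BytesIO()
assert write_chunk(buf, b"chunk") == 5
assert buf.getvalue() == b"chunk"
```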
## Individual Comments
### Comment 1
<location path="astrbot/core/utils/io.py" line_range="195-198" />
<code_context>
+ """Stream HTTP response into file with buffered thread-offloaded writes."""
+ downloaded_size = 0
+ buffered = bytearray()
+ progress_total = max(total_size, 1)
+
+ while True:
+ chunk = await stream.read(chunk_size)
+ if not chunk:
+ break
+ buffered.extend(chunk)
+ downloaded_size += len(chunk)
+
+ if len(buffered) >= flush_threshold:
+ chunk_to_write = bytes(buffered)
+ buffered.clear()
+ await asyncio.to_thread(file_obj.write, chunk_to_write)
+
+ if show_progress:
+ elapsed_time = max(time.time() - start_time, 1e-6)
+ speed = downloaded_size / 1024 / elapsed_time # KB/s
+ print(
+ f"\r下载进度: {downloaded_size / progress_total:.2%} 速度: {speed:.2f} KB/s",
+ end="",
</code_context>
<issue_to_address>
**suggestion:** Progress percentage can exceed 100% when content-length is missing or zero.
`progress_total = max(total_size, 1)` prevents division by zero but makes the percentage meaningless when `content-length` is 0/absent (it will exceed 100% for any non-empty body). For unknown lengths, either suppress the percentage (e.g. only show it when `total_size > 0`) or display just downloaded size/speed instead.
```suggestion
) -> int:
"""Stream HTTP response into file with buffered thread-offloaded writes."""
downloaded_size = 0
buffered = bytearray()
# Only use total_size for progress percentage when it's a positive value.
# For unknown/zero lengths, we will show downloaded size and speed only.
progress_total = total_size if total_size > 0 else None
while True:
chunk = await stream.read(chunk_size)
if not chunk:
break
buffered.extend(chunk)
downloaded_size += len(chunk)
if len(buffered) >= flush_threshold:
chunk_to_write = bytes(buffered)
buffered.clear()
await asyncio.to_thread(file_obj.write, chunk_to_write)
if show_progress:
elapsed_time = max(time.time() - start_time, 1e-6)
speed = downloaded_size / 1024 / elapsed_time # KB/s
if progress_total:
percent = downloaded_size / progress_total
print(
f"\r下载进度: {percent:.2%} 速度: {speed:.2f} KB/s",
end="",
)
else:
print(
f"\r已下载: {downloaded_size} 字节 速度: {speed:.2f} KB/s",
end="",
)
```
</issue_to_address>
### Comment 2
<location path="astrbot/core/backup/exporter.py" line_range="366-375" />
<code_context>
+ def _export_attachments_sync(
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Broad exception handling in attachment export can hide unexpected errors.
Catching `FileNotFoundError` explicitly is good, but the subsequent `except Exception` risks hiding real issues (e.g. permission errors, corrupted zip) behind a generic warning. Please narrow this handler (e.g. to `OSError`) or, if you must keep it broad, include richer context (such as `file_path` and `attachment_id`) in the log so we can diagnose real export failures instead of silently skipping them.
Suggested implementation:
```python
except OSError as e:
logger.warning(
"导出附件失败 (path=%s, attachment_id=%s): %s",
file_path,
attachment.get("id") or attachment.get("name") or "unknown",
e,
)
```
If the logging call in your code differs (e.g. different logger name or message), adjust the SEARCH block to match the existing `except Exception` handler and its log line. Also, if your `attachment` dict uses a different key than `id` or `name` for identification, replace those with the appropriate key(s) so the log context is meaningful.
</issue_to_address>
### Comment 3
<location path="tests/unit/test_message_components_paths.py" line_range="8-17" />
<code_context>
+ return max(1, value * scale)
+
+
+@pytest.mark.asyncio
+@pytest.mark.slow
+async def test_core_performance_benchmarks(tmp_path: Path) -> None:
</code_context>
<issue_to_address>
**suggestion (testing):** Add positive-path tests for `Image`/`Record.convert_to_base64` with existing local files to complement the new missing-file tests.
The new tests validate the error behavior for missing local files after switching to relying on `OSError` from `file_to_base64`, but they don’t assert that the success path still works.
Please also add a positive-path test that:
- creates a small temporary file with known contents,
- wraps it in `Image(file=str(path))` / `Record(file=str(path))`,
- calls `convert_to_base64()`, and
- checks that decoding the result matches the original bytes.
This will cover both success and failure semantics of the refactored path.
</issue_to_address>
```python
@pytest.mark.asyncio
async def test_image_convert_to_file_path_returns_absolute_path(tmp_path):
    file_path = tmp_path / "img.bin"
    file_path.write_bytes(b"img")

    image = Image(file=str(file_path))
    resolved = await image.convert_to_file_path()

    assert resolved == os.path.abspath(str(file_path))
```
suggestion (testing): Add positive-path tests for `Image`/`Record.convert_to_base64` with existing local files to complement the new missing-file tests.
The new tests validate the error behavior for missing local files after switching to relying on `OSError` from `file_to_base64`, but they don’t assert that the success path still works.
Please also add a positive-path test that:
- creates a small temporary file with known contents,
- wraps it in `Image(file=str(path))` / `Record(file=str(path))`,
- calls `convert_to_base64()`, and
- checks that decoding the result matches the original bytes.
This will cover both success and failure semantics of the refactored path.
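A minimal sketch of the round-trip such a test would assert, using only the standard library (the project's `Image`/`Record` wrappers and `file_to_base64` helper are stood in for by plain `base64` calls):

```python
import base64
import tempfile
from pathlib import Path

def base64_roundtrip(path: Path) -> bytes:
    # Encode the file contents (as file_to_base64 conceptually does),
    # then decode again, as the suggested positive-path test would.
    encoded = base64.b64encode(path.read_bytes()).decode("ascii")
    return base64.b64decode(encoded)

with tempfile.TemporaryDirectory() as tmp:
    sample = Path(tmp) / "sample.bin"
    original = b"\x00\x01known contents"
    sample.write_bytes(original)
    assert base64_roundtrip(sample) == original
```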
@sourcery-ai review
Code Review
This PR effectively optimizes asynchronous I/O performance by reducing unnecessary thread context switching in hot paths such as attachment export, message component file operations, and file downloads. The method of batching synchronous operations into a single asyncio.to_thread call has been well implemented. However, a security audit identified critical vulnerabilities including path traversal, insecure SSL verification, and SSRF. Specifically, message components lack proper validation for local file paths and URLs, potentially allowing arbitrary file reads or requests to internal services. Furthermore, the automatic fallback to disabled SSL verification in download functions introduces a risk of MITM attacks. On the positive side, a comprehensive benchmark suite has been introduced to track performance regressions, and new unit and integration tests provide good coverage for the refactored logic. A minor suggestion for further memory optimization during file downloads involves avoiding unnecessary buffer copies.
```python
local_path = await asyncio.to_thread(_absolute_path_if_exists, self.file)
if local_path:
    return local_path
```
The message components (Record, Video, Image, File) allow the use of the file:/// prefix or direct absolute paths in their file or url attributes. These paths are resolved using os.path.abspath or returned directly without any validation against a safe directory (e.g., the bot's data or temp directory). Since these components are often constructed from untrusted external input (e.g., incoming messages from various platforms), an attacker can craft a message that causes the bot to read arbitrary files from the server's filesystem. This can lead to the leakage of sensitive information such as configuration files, database files, or system credentials.
```diff
@@ -256,11 +268,15 @@ async def convert_to_file_path(self) -> str:
         get_astrbot_temp_path(), f"videoseg_{uuid.uuid4().hex}"
     )
     await download_file(url, video_file_path)
```
The convert_to_file_path and convert_to_base64 methods in Record, Video, and Image components automatically download content from URLs provided in the file or url attributes. If these attributes are controlled by an untrusted user, an attacker can cause the bot to make requests to internal network services, potentially bypassing firewalls or accessing internal APIs.
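A hedged sketch of the kind of guard this points at (the helper name and policy are assumptions; real deployments usually also pin the resolved IP for the actual request to defend against DNS rebinding):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_public_http_url(url: str) -> bool:
    # Reject non-HTTP schemes and hosts resolving to non-global
    # (private/loopback/link-local) addresses before downloading.
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    return all(ipaddress.ip_address(info[4][0]).is_global for info in infos)
```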
```python
if len(buffered) >= flush_threshold:
    chunk_to_write = bytes(buffered)
    buffered.clear()
```
Creating `chunk_to_write` via `bytes(buffered)` here makes an unnecessary copy of the buffer. `file_obj.write` accepts a `bytearray` directly, and because `await asyncio.to_thread` blocks until the write completes, calling `buffered.clear()` afterwards is thread-safe. Dropping the copy saves the extra allocation and memcpy.
The same optimization applies to the write of the remaining buffer at the end of the function (line 222).
```suggestion
if len(buffered) >= flush_threshold:
    await asyncio.to_thread(file_obj.write, buffered)
    buffered.clear()
```
Hey - I've found 2 issues, and left some high-level feedback:
- In `components.Image`/`Record.convert_to_base64`, the new `except OSError` path will now also map permission or I/O errors to `"not a valid file"`; if you only intend to hide missing-file errors, consider narrowing this to `FileNotFoundError` (and possibly `IsADirectoryError`) to avoid masking other issues.
- Both the performance benchmark and `test_io_download_file` rely on the private `site._server.sockets` attribute to determine the bound port; using a small helper or fixture that inspects the server via a public API (or wraps this access in one place) would reduce duplication and the risk of breakage if aiohttp internals change.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `components.Image/Record.convert_to_base64`, the new `except OSError` path will now also map permission or I/O errors to `"not a valid file"`; if you only intend to hide missing-file errors, consider narrowing this to `FileNotFoundError` (and possibly `IsADirectoryError`) to avoid masking other issues.
- Both the performance benchmark and `test_io_download_file` rely on the private `site._server.sockets` attribute to determine the bound port; using a small helper or fixture that inspects the server via a public API (or wraps this access in one place) would reduce duplication and the risk of breakage if aiohttp internals change.
## Individual Comments
### Comment 1
<location path="astrbot/core/utils/io.py" line_range="143-152" />
<code_context>
print(f"文件大小: {total_size / 1024:.2f} KB | 文件地址: {url}")
file_obj = await asyncio.to_thread(Path(path).open, "wb")
try:
- while True:
- chunk = await resp.content.read(8192)
- if not chunk:
</code_context>
<issue_to_address>
**issue (bug_risk):** Buffered writes in `_stream_to_file` can lose data on task cancellation before the final flush.
If the task is cancelled inside the `while True` loop (e.g. during `stream.read` or before the loop exits), the function exits without reaching the final `if buffered:` block, so any accumulated (but unflushed) data up to `flush_threshold` bytes is lost. The previous version wrote each chunk immediately, so this is a regression in worst-case data loss. To avoid this, wrap the loop in a `try`/`finally` and move the `if buffered: ... write(...)` into the `finally` so buffered data is always flushed on cancellation or errors.
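A sketch of the suggested shape (simplified: a plain iterable of `chunks` stands in for the aiohttp stream, and `write` for the file object's bound `write`):

```python
import asyncio

async def stream_with_final_flush(chunks, write, flush_threshold: int = 8) -> int:
    buffered = bytearray()
    written = 0
    try:
        for chunk in chunks:
            buffered.extend(chunk)
            if len(buffered) >= flush_threshold:
                data = bytes(buffered)
                buffered.clear()
                await asyncio.to_thread(write, data)
                written += len(data)
    finally:
        # Runs on cancellation and errors too, so buffered bytes are flushed.
        if buffered:
            await asyncio.to_thread(write, bytes(buffered))
            written += len(buffered)
    return written

out: list[bytes] = []
total = asyncio.run(stream_with_final_flush([b"abcd", b"efgh", b"ij"], out.append))
assert total == 10 and b"".join(out) == b"abcdefghij"
```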
</issue_to_address>
### Comment 2
<location path="astrbot/core/utils/io.py" line_range="187" />
<code_context>
print()
+async def _stream_to_file(
+ stream: aiohttp.StreamReader,
+ file_obj: BinaryIO,
</code_context>
<issue_to_address>
**issue (complexity):** Consider simplifying `_stream_to_file` by removing unused configurability and moving progress printing into a helper so the function only handles straightforward streaming and progress updates.
The new `_stream_to_file` does reduce duplication but it also introduces extra state and branching (manual buffering, `flush_threshold`, `chunk_size`, `progress_total`) that callers don’t use or vary. You can keep the shared abstraction while simplifying it and separating concerns.
A concrete simplification:
- Drop manual buffering and `flush_threshold` unless you have a measured need for it.
- Don’t expose `chunk_size`/`flush_threshold` as parameters since callers always use the same values.
- Move progress formatting into a dedicated helper.
For example:
```python
async def _stream_to_file(
stream: aiohttp.StreamReader,
file_obj: BinaryIO,
*,
total_size: int,
start_time: float,
show_progress: bool,
) -> None:
downloaded_size = 0
# treat 0 as "unknown" size, but keep caller API simple
known_total = total_size if total_size > 0 else None
while True:
chunk = await stream.read(8192)
if not chunk:
break
await asyncio.to_thread(file_obj.write, chunk)
downloaded_size += len(chunk)
if show_progress:
_print_download_progress(downloaded_size, known_total, start_time)
```
```python
def _print_download_progress(
downloaded_size: int,
total_size: int | None,
start_time: float,
) -> None:
elapsed_time = max(time.time() - start_time, 1e-6)
speed = downloaded_size / 1024 / elapsed_time # KB/s
if total_size:
percent = downloaded_size / total_size
msg = f"\r下载进度: {percent:.2%} 速度: {speed:.2f} KB/s"
else:
msg = f"\r已下载: {downloaded_size} 字节 速度: {speed:.2f} KB/s"
print(msg, end="")
```
This keeps all existing behavior (including the “unknown total size” handling you added) but:
- Removes extra mutable state (`buffered`, `progress_total`) and branches.
- Keeps `_stream_to_file` focused on streaming I/O and progress updates.
- Makes progress display behavior easy to adjust independently.
</issue_to_address>
@sourcery-ai review
Hey - I've found 1 issue
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location path="astrbot/core/message/components.py" line_range="203-204" />
<code_context>
bs64_data = self.file
- elif await asyncio.to_thread(os.path.exists, self.file):
- bs64_data = await file_to_base64(self.file)
else:
- raise Exception(f"not a valid file: {self.file}")
+ try:
+ bs64_data = await file_to_base64(self.file)
+ except (FileNotFoundError, IsADirectoryError) as exc:
</code_context>
<issue_to_address>
**suggestion:** Consider broadening the caught exceptions when converting a local file to base64.
This currently only handles `FileNotFoundError` and `IsADirectoryError`, but other IO errors (e.g. `PermissionError` or other `OSError` subclasses) will bypass the unified "not a valid file" handling and surface as raw exceptions. To keep the API behavior consistent for all IO failures, consider catching `OSError` (or at least adding `PermissionError`). The same applies to the `ImageSegment.convert_to_base64` path.
Suggested implementation:
```python
else:
try:
bs64_data = await file_to_base64(self.file)
except OSError as exc:
raise Exception(f"not a valid file: {self.file}") from exc
bs64_data = bs64_data.removeprefix("base64://")
return bs64_data
```
There is a similar code path in `ImageSegment.convert_to_base64` that should also be updated to catch `OSError` instead of only `FileNotFoundError` / `IsADirectoryError` (or equivalent narrow exceptions). Apply the same `except OSError as exc:` pattern there to keep error handling consistent across both components.
</issue_to_address>
@sourcery-ai review
Hey - I've found 1 issue, and left some high-level feedback:
- The new `_stream_to_file` helper still offloads every 8KB chunk write individually; if the goal is to reduce thread-switch overhead, consider buffering multiple chunks per `to_thread` call or increasing the chunk size / adding a configurable `flush_threshold` as described in the PR summary.
- In `test_core_performance_benchmarks`, the `bench_to_thread_exists` case calls `asyncio.to_thread(Path.exists, exists_path)`; using `asyncio.to_thread(exists_path.exists)` would be slightly clearer and avoids relying on the unbound method calling convention.
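The two spellings are runtime-equivalent; the bound-method form just reads more directly (a standalone sketch, not the benchmark itself):

```python
import asyncio
from pathlib import Path

async def check(path: Path) -> tuple[bool, bool]:
    unbound = await asyncio.to_thread(Path.exists, path)  # current spelling
    bound = await asyncio.to_thread(path.exists)          # suggested spelling
    return unbound, bound

u, b = asyncio.run(check(Path(".")))
assert u is True and b is True
```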
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The new `_stream_to_file` helper still offloads every 8KB chunk write individually; if the goal is to reduce thread-switch overhead, consider buffering multiple chunks per `to_thread` call or increasing the chunk size / adding a configurable `flush_threshold` as described in the PR summary.
- In `test_core_performance_benchmarks`, the `bench_to_thread_exists` case calls `asyncio.to_thread(Path.exists, exists_path)`; using `asyncio.to_thread(exists_path.exists)` would be slightly clearer and avoids relying on the unbound method calling convention.
## Individual Comments
### Comment 1
<location path="astrbot/core/backup/exporter.py" line_range="370-379" />
<code_context>
+ for attachment in attachments:
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Attachments with empty or missing `attachment_id` will all be written to the same archive path, potentially overwriting each other.
In `_export_attachments_sync`, `attachment_id = attachment.get("attachment_id", "")` and `archive_path = f"files/attachments/{attachment_id}{ext}"` mean that missing or empty `attachment_id` values all map to `files/attachments/{ext}`, so later writes overwrite earlier ones. Since this can silently drop data in the backup, please either ensure a non-empty `attachment_id` (e.g., derive from `file_path` or original filename) or log and skip attachments with an empty `attachment_id`.
Suggested implementation:
```python
for attachment in attachments:
    file_path = attachment.get("path", "")
    attachment_id = attachment.get("attachment_id")
    if not attachment_id:
        logger.warning(
            "Skipping attachment with empty attachment_id; path=%s",
            file_path,
        )
        continue
    try:
```
1. Ensure a module-level logger is defined in `astrbot/core/backup/exporter.py`, e.g.:
- `import logging` at the top of the file, and
- `logger = logging.getLogger(__name__)` near the top if it does not already exist.
2. If your logging convention uses a different logger name or helper, adjust the `logger.warning(...)` call to match the existing pattern.
</issue_to_address>

Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
@sourcery-ai review
Hey - I've left some high level feedback:
- In `AstrBotExporter._export_attachments_sync`, you now only catch `FileNotFoundError` and `OSError`, so any other exception from `zf.write` will bubble up and abort the whole export where previously it was logged and skipped; consider whether you want to preserve the previous "best-effort" behavior by broadening or explicitly reviewing the exception handling here.
- The PR description mentions buffering incoming chunks and flushing in larger batches for `download_file`, but `_stream_to_file` still performs one `asyncio.to_thread(file_obj.write, chunk)` per read; if the intent is to batch writes, you may want to accumulate multiple chunks before offloading the write or update the description to match the current implementation.
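For illustration, a best-effort export that keeps the single-offload design while logging and skipping per-attachment failures could be sketched like this. This is a hypothetical sketch, not the project's actual `_export_attachments_sync`; the helper names, dict keys, and archive layout are assumptions:

```python
import asyncio
import logging
import os
import zipfile

logger = logging.getLogger(__name__)


def export_attachments_sync(attachments: list, zip_path: str) -> int:
    """Run the whole loop in one worker thread; each attachment is
    best-effort, so one bad file cannot abort the entire export."""
    written = 0
    with zipfile.ZipFile(zip_path, "a") as zf:
        for attachment in attachments:
            file_path = attachment.get("path", "")
            attachment_id = attachment.get("attachment_id")
            if not attachment_id or not os.path.exists(file_path):
                logger.warning("Skipping attachment; path=%s", file_path)
                continue
            try:
                zf.write(file_path, f"files/attachments/{attachment_id}")
                written += 1
            except Exception:  # noqa: BLE001 - preserve best-effort export
                logger.exception("Failed to export attachment %s", file_path)
    return written


async def export_attachments(attachments: list, zip_path: str) -> int:
    # One thread-pool context switch for the whole batch, instead of
    # one asyncio.to_thread hop per attachment.
    return await asyncio.to_thread(export_attachments_sync, attachments, zip_path)
```

The broad `except Exception` inside the loop is exactly the trade-off discussed above: it keeps a single corrupt file from aborting the backup, at the cost of swallowing unexpected errors per attachment.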
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `AstrBotExporter._export_attachments_sync`, you now only catch `FileNotFoundError` and `OSError`, so any other exception from `zf.write` will bubble up and abort the whole export where previously it was logged and skipped; consider whether you want to preserve the previous "best-effort" behavior by broadening or explicitly reviewing the exception handling here.
- The PR description mentions buffering incoming chunks and flushing in larger batches for `download_file`, but `_stream_to_file` still performs one `asyncio.to_thread(file_obj.write, chunk)` per read; if the intent is to batch writes, you may want to accumulate multiple chunks before offloading the write or update the description to match the current implementation.

Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
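The batching the reviewer describes could be sketched as follows. This is a hypothetical variant, not the project's actual `_stream_to_file`; the function name, chunk source, and default `flush_threshold` are assumptions:

```python
import asyncio
from typing import AsyncIterator, IO


async def buffered_stream_to_file(
    chunks: AsyncIterator[bytes],
    file_obj: IO[bytes],
    flush_threshold: int = 256 * 1024,
) -> int:
    """Accumulate incoming chunks in memory and offload one write per
    batch, instead of one asyncio.to_thread hop per 8KB network chunk."""
    buffer: list = []
    buffered = 0
    total = 0
    async for chunk in chunks:
        buffer.append(chunk)
        buffered += len(chunk)
        total += len(chunk)
        if buffered >= flush_threshold:
            data = b"".join(buffer)
            buffer.clear()
            buffered = 0
            await asyncio.to_thread(file_obj.write, data)
    if buffer:  # flush the final partial batch
        await asyncio.to_thread(file_obj.write, b"".join(buffer))
    return total
```

With a 256KB threshold, a 512KB download arriving in 8KB chunks goes from 64 offloaded writes to 2, while the event loop still never blocks on disk I/O.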
SourceryAI left a comment
Hey - I've found 2 issues, and left some high level feedback:
- In `_stream_to_file`, you still offload one `file_obj.write` per network chunk; if you want to further reduce thread context switches as described in the PR, consider buffering multiple chunks in memory and writing them in larger batches before calling `to_thread`.
- `get_bound_tcp_port` relies on the private `site._server` and its `sockets` attribute, which may be brittle across aiohttp versions; you might want to either guard this with clearer fallbacks or add a comment explaining the version assumptions and why this is acceptable for tests.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `_stream_to_file`, you still offload one `file_obj.write` per network chunk; if you want to further reduce thread context switches as described in the PR, consider buffering multiple chunks in memory and writing them in larger batches before calling `to_thread`.
- `get_bound_tcp_port` relies on the private `site._server` and its `sockets` attribute, which may be brittle across aiohttp versions; you might want to either guard this with clearer fallbacks or add a comment explaining the version assumptions and why this is acceptable for tests.
## Individual Comments
### Comment 1
<location path="tests/unit/test_message_components_paths.py" line_range="10-18" />
<code_context>
+ return max(1, value * scale)
+
+
+@pytest.mark.asyncio
+@pytest.mark.slow
+async def test_core_performance_benchmarks(tmp_path: Path) -> None:
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding tests for URL/base64 branches of `convert_to_file_path`
The new tests cover local file handling and error mapping, but `convert_to_file_path` also has URL and base64 branches that aren’t exercised. Please add tests that:
- For `base64://`, use a small payload, then assert the returned path is absolute and the file exists.
- For HTTP, use a local aiohttp server (similar to the `download_file` test) and assert the returned path is absolute and the file contents match.
This will confirm the `_absolute_path`/`_absolute_path_if_exists` refactor didn’t change behavior for non-local inputs.
</issue_to_address>
### Comment 2
<location path="tests/fixtures/helpers.py" line_range="28-37" />
<code_context>
return None
+def get_bound_tcp_port(site: Any) -> int:
+ """Resolve bound aiohttp TCP site port with public API first."""
+ parsed = urlparse(getattr(site, "name", ""))
+ if parsed.port is not None and parsed.port > 0:
+ return parsed.port
+
+ server = getattr(site, "_server", None)
+ sockets = getattr(server, "sockets", None) if server else None
+ if sockets:
+ return sockets[0].getsockname()[1]
+
+ raise RuntimeError("Unable to resolve bound TCP port from aiohttp site")
+
+
</code_context>
<issue_to_address>
**suggestion (testing):** Consider unit tests for `get_bound_tcp_port` error path
Since this is now a shared test helper, a small targeted test will make future changes safer. For example:
- Construct an object lacking `name`, `_server`, and `sockets` and assert `get_bound_tcp_port` raises `RuntimeError`.
- Optionally cover the success cases with minimal fakes: one where `name` encodes the port, and one where `_server.sockets[0].getsockname()` returns a port.
This helps catch regressions here rather than in unrelated tests or benchmarks.
Suggested implementation:
```python
from typing import Any, Callable
from urllib.parse import urlparse
from unittest.mock import AsyncMock, MagicMock
import pytest
```
```python
def get_bound_tcp_port(site: Any) -> int:
    """Resolve bound aiohttp TCP site port with public API first."""
    parsed = urlparse(getattr(site, "name", ""))
    if parsed.port is not None and parsed.port > 0:
        return parsed.port
    server = getattr(site, "_server", None)
    sockets = getattr(server, "sockets", None) if server else None
    if sockets:
        return sockets[0].getsockname()[1]
    raise RuntimeError("Unable to resolve bound TCP port from aiohttp site")


class _DummySiteNoAttrs:
    """Dummy site lacking name, _server, and sockets for error-path testing."""


class _DummySocket:
    def __init__(self, port: int) -> None:
        self._port = port

    def getsockname(self):
        # Minimal tuple-like interface: (host, port)
        return ("127.0.0.1", self._port)


class _DummyServer:
    def __init__(self, port: int) -> None:
        self.sockets = [_DummySocket(port)]


class _DummySiteWithName:
    def __init__(self, port: int) -> None:
        self.name = f"http://localhost:{port}"


class _DummySiteWithServer:
    def __init__(self, port: int) -> None:
        self._server = _DummyServer(port)


def test_get_bound_tcp_port_raises_on_unresolvable_site():
    site = _DummySiteNoAttrs()
    with pytest.raises(RuntimeError, match="Unable to resolve bound TCP port"):
        get_bound_tcp_port(site)


def test_get_bound_tcp_port_uses_name_port_when_available():
    site = _DummySiteWithName(8081)
    assert get_bound_tcp_port(site) == 8081


def test_get_bound_tcp_port_falls_back_to_server_sockets():
    site = _DummySiteWithServer(9092)
    assert get_bound_tcp_port(site) == 9092
```
</issue_to_address>

Hi @zouyonghe! 👋
Thanks for trying out Sourcery by commenting with @sourcery-ai review! 🚀
Install the sourcery-ai bot to get automatic code reviews on every pull request ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

```python
@pytest.mark.asyncio
async def test_image_convert_to_file_path_returns_absolute_path(tmp_path):
    file_path = tmp_path / "img.bin"
    file_path.write_bytes(b"img")

    image = Image(file=str(file_path))
    resolved = await image.convert_to_file_path()

    assert resolved == os.path.abspath(str(file_path))
```
**suggestion (testing):** Consider adding tests for URL/base64 branches of `convert_to_file_path`.
The new tests cover local file handling and error mapping, but `convert_to_file_path` also has URL and base64 branches that aren’t exercised. Please add tests that:
- For `base64://`, use a small payload, then assert the returned path is absolute and the file exists.
- For HTTP, use a local aiohttp server (similar to the `download_file` test) and assert the returned path is absolute and the file contents match.
This will confirm the `_absolute_path`/`_absolute_path_if_exists` refactor didn’t change behavior for non-local inputs.
@sourcery-ai review
Hey - I've reviewed your changes and they look great!
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
No docs changes were generated in this run (docs repo had no updates).

Docs repo: AstrBotDevs/AstrBot-docs

AI change summary (not committed):

Experimental bot notice:
…vs#5737)

* docs: align deployment sections across multilingual readmes
* docs: normalize deployment punctuation and AUR guidance
* docs: fix french and russian deployment wording
* perf: optimize async io hot paths and extend benchmarks
* fix: address async io review feedback
* fix: address follow-up async io review comments
* fix: align base64 io error handling in message components
* fix: harden attachment export ids and tune io chunking
* fix: preserve best-effort attachment export and batch writes
* test: expand path conversion and helper coverage
Summary
This PR targets performance regressions introduced during the ASYNC230/ASYNC240 cleanup and expands benchmark coverage so we can track async I/O costs continuously.
The main goal is to reduce unnecessary thread-pool context switches in hot async paths while preserving non-blocking behavior and lint compliance.
User-facing Impact
Before this change, heavy attachment export and some message/file conversion paths paid avoidable overhead from repeated `await asyncio.to_thread(...)` calls in tight loops. In practical terms:
Root Cause
`a9c16feb` correctly removed blocking calls from async code, but several hotspots became too fine-grained:
- `exists` + `abspath` were split into multiple async offloads in `components.py`.
- `_export_attachments` performed a thread switch per attachment for `os.path.exists`.
- `download_file` performed a thread switch per 8KB chunk write.
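For illustration, merging a split `exists` + `abspath` pattern into a single offload can be sketched like this. The names below are hypothetical, modeled on the helpers the PR describes rather than copied from `components.py`:

```python
import asyncio
import os
from typing import Optional


def _abspath_if_exists_sync(path: str) -> Optional[str]:
    # exists + abspath run together in ONE worker-thread hop,
    # instead of one asyncio.to_thread call per filesystem operation.
    if os.path.exists(path):
        return os.path.abspath(path)
    return None


async def absolute_path_if_exists(path: str) -> Optional[str]:
    """Single thread-pool context switch for the whole check-and-resolve."""
    return await asyncio.to_thread(_abspath_if_exists_sync, path)
```

The non-blocking guarantee is unchanged; only the number of event-loop-to-thread-pool round trips drops from two per path to one.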
1) Reduce thread switching in message component paths
to_threadwhere needed:_absolute_path(...)_absolute_path_if_exists(...)exists+abspathpatterns with single offloads.Files:
astrbot/core/message/components.py2) Optimize backup attachment export path
await asyncio.to_thread(self._export_attachments_sync, ...)_read_text_if_existsto merge configexists + readinto one synchronous call.Files:
astrbot/core/backup/exporter.py3) Optimize download write path
download_fileto use_stream_to_file(...).flush_threshold), reducingto_thread(file_obj.write, ...)call count significantly.Files:
astrbot/core/utils/io.py4) Expand benchmark coverage
Record.convert_to_base64Image.convert_to_file_path(micro-batched)File.get_file(local)_export_attachments(existing/mixed)download_file(local_http_512KB)ASTRBOT_BENCHMARK_SCALE.Files:
tests/performance/test_benchmarks.py5) Add regression/behavior tests
download_filecorrectness against a local aiohttp server.Files:
tests/unit/test_message_components_paths.pytests/unit/test_io_download_file.pytests/test_backup.pyBenchmark Results
Controlled A/B comparison (same script, baseline
c25c558fvs this branch):file_to_base64_256kbimage_convert_to_base64to_thread_path_existsexport_attachments_64x2kbNotes:
Validation
Executed and passing:
uv run ruff check .uv run pytest tests/unit/test_message_components_paths.py tests/unit/test_io_download_file.py tests/unit/test_io_file_to_base64.py -quv run pytest tests/test_backup.py -quv run pytest tests/performance/test_benchmarks.py -qRisk Assessment
Summary by Sourcery

Optimize async I/O paths and expand performance and regression test coverage for file downloads, message components, and backup export.

Enhancements:

Tests:

Summary by Sourcery

Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.

New Features:

Bug Fixes:

Enhancements:

Tests:
- Add unit tests for `download_file` and its streaming helper to verify its batching behavior and correctness against a local aiohttp server.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
优化异步 I/O 路径,并扩展文件下载、消息组件和备份导出方面的性能与回归测试覆盖率。
增强内容:
测试:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests:
Original summary in English
由 Sourcery 生成的摘要
在扩展核心 I/O 和消息组件的性能及回归测试覆盖范围的同时,优化异步文件与下载处理。
新功能:
缺陷修复:
改进:
测试:
download_file及其流式助手添加单元测试,以在本地 aiohttp 服务器上验证其批处理行为和正确性。Original summary in English
Summary by Sourcery
Optimize async file and download handling while expanding performance and regression test coverage for core I/O and message components.
New Features:
Bug Fixes:
Enhancements:
Tests: