## blacksheep/docs/openapi.md
```yaml
components: {}
tags: []
```
### Request body binders support

/// admonition | Enhanced in BlackSheep 2.6.0
    type: info

BlackSheep 2.6.0 adds full OpenAPI documentation support for `FromText` and `FromFiles` binders. These binders are now automatically documented with appropriate request body schemas and content types.

///

BlackSheep automatically generates OpenAPI documentation for various request body binders:

#### FromJSON

Documented with `application/json` content type and the appropriate schema:
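As an illustration, a handler accepting `FromJSON[Cat]` would typically produce a request body entry along these lines (the `Cat` model name and schema reference are hypothetical, invented for this example):

```yaml
requestBody:
  content:
    application/json:
      schema:
        $ref: '#/components/schemas/Cat'
  required: true
```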
## blacksheep/docs/requests.md
#### Reading a form request body

/// admonition | Improved in BlackSheep 2.6.0
    type: info

Starting from BlackSheep 2.6.0, `request.form()` and `request.multipart()` use `SpooledTemporaryFile` for memory-efficient file handling. Small files (<1MB) are kept in memory, while larger files automatically spill to temporary disk files. The framework automatically cleans up resources at the end of each request.

///
#### Reading files and multipart/form-data
/// admonition | Significantly improved in BlackSheep 2.6.0
    type: info

BlackSheep 2.6.0 introduces significant improvements for handling `multipart/form-data`, with memory-efficient streaming and file handling:

- **Memory-efficient file handling**: files use `SpooledTemporaryFile`; small files (<1MB) stay in memory, larger files automatically spill to temporary disk files
- **True streaming parsing**: the new `Request.multipart_stream()` method streams multipart data without buffering the entire request body
- **Automatic resource cleanup**: the framework automatically calls `Request.dispose()` at the end of each request to clean up file resources
- **Better API**: the `FileBuffer` class provides clean methods (`read()`, `seek()`, `close()`, `save_to()`) for uploaded files
- **Streaming parts**: the `FormPart.stream()` method streams part data in chunks
- **OpenAPI support**: `FromText` and `FromFiles` are now properly documented in OpenAPI

///

Files are read from the `multipart/form-data` payload.
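For context, the `multipart/form-data` wire format that such a parser handles can be illustrated with a minimal, standard-library-only sketch. This is not BlackSheep's implementation (which is streaming and far more robust); the boundary and payload below are invented for the example:

```python
# A naive, non-streaming parse of a multipart/form-data body, shown only to
# illustrate the wire format. The boundary and payload are made up.
BOUNDARY = b"----example-boundary"

BODY = (
    b"------example-boundary\r\n"
    b'Content-Disposition: form-data; name="file1"; filename="a.txt"\r\n'
    b"Content-Type: text/plain\r\n"
    b"\r\n"
    b"hello world\r\n"
    b"------example-boundary--\r\n"
)


def parse_parts(body: bytes, boundary: bytes) -> list[tuple[dict, bytes]]:
    """Split the body on the boundary delimiter and decode each part."""
    delimiter = b"--" + boundary
    parts = []
    for chunk in body.split(delimiter)[1:-1]:  # drop preamble and final "--"
        chunk = chunk.strip(b"\r\n")
        if not chunk:
            continue
        # Headers are separated from the part content by a blank line
        headers_blob, _, content = chunk.partition(b"\r\n\r\n")
        headers = dict(
            line.split(b": ", 1) for line in headers_blob.split(b"\r\n")
        )
        parts.append((headers, content))
    return parts


parts = parse_parts(BODY, BOUNDARY)
```

A real parser must handle chunked arrival of the body, boundaries split across chunks, and large part contents, which is why the streaming and spooling machinery described above exists.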
=== "Using binders (recommended)"

    ```python
    from blacksheep import FromFiles, post


    @post("/upload")
    async def post_files(files: FromFiles):
        # files.value is a list of FormPart objects
        for file_part in files.value:
            # Access file metadata
            file_name = file_part.file_name.decode() if file_part.file_name else "unknown"
            content_type = file_part.content_type.decode() if file_part.content_type else None

            # file_part.file is a FileBuffer instance with efficient memory handling
            # Small files (<1MB) are kept in memory, larger files use temporary disk files
            data = file_part.file.read()
    ```
BlackSheep automatically manages file resources. The framework calls `Request.dispose()` at the end of each request-response cycle to clean up temporary files. However, if you need manual control:
```python
from blacksheep import post, Request


@post("/manual-cleanup")
async def manual_file_handling(request: Request):
    try:
        files = await request.files()

        for part in files:
            # Process files
            pass
    finally:
        # Manually clean up resources if needed
        # (normally not required as framework does this automatically)
        request.dispose()
```
##### FileBuffer API

The `FileBuffer` class wraps `SpooledTemporaryFile` and provides these methods:

- `read(size: int = -1) -> bytes`: read file content
- `seek(offset: int, whence: int = 0) -> int`: change the file position
- `close() -> None`: close the file
- `save_to(file_path: str) -> None`: save the file to disk (async)
```python
from blacksheep import FromFiles, post


@post("/process-file")
async def process_file(files: FromFiles):
    for file_part in files.value:
        file_buffer = file_part.file

        # Read first 100 bytes
        header = file_buffer.read(100)

        # Go back to start
        file_buffer.seek(0)

        # Read entire content
        full_content = file_buffer.read()

        # Save to disk
        await file_buffer.save_to("./output.bin")
```
#### Reading streams

Reading streams enables reading large-sized bodies using an asynchronous