
Commit 1f83dc5

Merge branch 'master' into claude/apply-pr-684-changes-NHKYB
2 parents: 8704b34 + 2c691d0

40 files changed

Lines changed: 817 additions & 1578 deletions

.github/workflows/_check_code.yaml

Lines changed: 1 addition & 1 deletion

@@ -18,7 +18,7 @@ jobs:
       - name: Checkout repository
         uses: actions/checkout@v6
       - name: Run actionlint
-        uses: rhysd/actionlint@v1.7.11
+        uses: rhysd/actionlint@v1.7.12
 
   spell_check:
     name: Spell check

.github/workflows/manual_release_stable.yaml

Lines changed: 3 additions & 2 deletions

@@ -157,12 +157,13 @@ jobs:
           rm -rf "versioned_docs/version-${MAJOR_MINOR}"
           rm -rf "versioned_sidebars/version-${MAJOR_MINOR}-sidebars.json"
           jq 'map(select(. != env.MAJOR_MINOR))' versions.json > tmp.json && mv tmp.json versions.json
-          # Copy changelog
-          cp ../CHANGELOG.md ../docs/changelog.md
           # Build API reference and create version snapshots
           bash build_api_reference.sh
           npx docusaurus docs:version "$MAJOR_MINOR"
           npx docusaurus api:version "$MAJOR_MINOR"
+          # Changelog is not versioned - it is copied from root at build time
+          rm -f "versioned_docs/version-${MAJOR_MINOR}/changelog.md"
+          echo "changelog.md" > "versioned_docs/version-${MAJOR_MINOR}/.gitignore"
 
       - name: Commit and push versioned docs
         id: commit_versioned_docs

CHANGELOG.md

Lines changed: 11 additions & 2 deletions

@@ -3,20 +3,29 @@
 All notable changes to this project will be documented in this file.
 
 <!-- git-cliff-unreleased-start -->
-## 1.6.1 - **not yet released**
+## 1.6.2 - **not yet released**
+
+### 🐛 Bug Fixes
+
+- **file-system:** Reclaim orphaned in-progress requests on RQ recovery ([#1825](https://github.com/apify/crawlee-python/pull/1825)) ([e86794a](https://github.com/apify/crawlee-python/commit/e86794a6e5605432c9331c7cd99edf885527a3eb)) by [@vdusek](https://github.com/vdusek)
+- Prevent premature `EventManager` shutdown when multiple crawlers share it ([#1810](https://github.com/apify/crawlee-python/pull/1810)) ([2efb668](https://github.com/apify/crawlee-python/commit/2efb668ad54fb3e8d740066446563d1e8a39d2e8)) by [@Mantisus](https://github.com/Mantisus), closes [#1805](https://github.com/apify/crawlee-python/issues/1805), [#1808](https://github.com/apify/crawlee-python/issues/1808)
+
+
+<!-- git-cliff-unreleased-end -->
+## [1.6.1](https://github.com/apify/crawlee-python/releases/tag/v1.6.1) (2026-03-30)
 
 ### 🐛 Bug Fixes
 
 - Handle invalid URLs in `RequestList` ([#1803](https://github.com/apify/crawlee-python/pull/1803)) ([0b2e3fc](https://github.com/apify/crawlee-python/commit/0b2e3fc5cbca371131b54085e052a6cda6361b0f)) by [@Mantisus](https://github.com/Mantisus), closes [#1802](https://github.com/apify/crawlee-python/issues/1802)
 - **playwright:** Filter unsupported context options in persistent browser ([#1796](https://github.com/apify/crawlee-python/pull/1796)) ([69ad22e](https://github.com/apify/crawlee-python/commit/69ad22e60ef558d8c26e84e2bd165fe03f116b7f)) by [@sushant-mutnale](https://github.com/sushant-mutnale), closes [#1784](https://github.com/apify/crawlee-python/issues/1784)
 - Remove double usage_count increment in Session.retire() ([#1816](https://github.com/apify/crawlee-python/pull/1816)) ([c40d411](https://github.com/apify/crawlee-python/commit/c40d411b024ba2aae531a3c97609f78ad2c2757e)) by [@vdusek](https://github.com/vdusek)
+- Defer page object cleanup to make it accessible in error handlers ([#1814](https://github.com/apify/crawlee-python/pull/1814)) ([7eeb500](https://github.com/apify/crawlee-python/commit/7eeb5007cfb911901203ea21e1fd40127641feb1)) by [@janbuchar](https://github.com/janbuchar), closes [#1482](https://github.com/apify/crawlee-python/issues/1482)
 
 ### ⚡ Performance
 
 - Offload BeautifulSoup parsing to a thread via `asyncio.to_thread` ([#1817](https://github.com/apify/crawlee-python/pull/1817)) ([d612ffa](https://github.com/apify/crawlee-python/commit/d612ffa1730f2aacfb7a28ae2b0ce2f4eda77692)) by [@vdusek](https://github.com/vdusek)
 
 
-<!-- git-cliff-unreleased-end -->
 ## [1.6.0](https://github.com/apify/crawlee-python/releases/tag/v1.6.0) (2026-03-20)
 
 ### 🚀 Features

docs/guides/avoid_blocking.mdx

Lines changed: 9 additions & 0 deletions

@@ -10,6 +10,7 @@ import RunnableCodeBlock from '@site/src/components/RunnableCodeBlock';
 
 import PlaywrightDefaultFingerprintGenerator from '!!raw-loader!roa-loader!./code_examples/avoid_blocking/playwright_with_fingerprint_generator.py';
 import PlaywrightWithCamoufox from '!!raw-loader!roa-loader!../examples/code_examples/playwright_crawler_with_camoufox.py';
+import PlaywrightWithCloakBrowser from '!!raw-loader!roa-loader!./code_examples/avoid_blocking/playwright_with_cloakbrowser.py';
 
 import PlaywrightDefaultFingerprintGeneratorWithArgs from '!!raw-loader!./code_examples/avoid_blocking/default_fingerprint_generator_with_args.py';
 
@@ -41,6 +42,14 @@ In some cases even <ApiLink to="class/PlaywrightCrawler">`PlaywrightCrawler`</Ap
 {PlaywrightWithCamoufox}
 </RunnableCodeBlock>
 
+## Using CloakBrowser
+
+For sites with aggressive anti-bot protection, [CloakBrowser](https://github.com/CloakHQ/CloakBrowser) takes a different approach. Instead of overriding fingerprints at the JavaScript level, which anti-bot scripts can detect as tampering, it ships a custom Chromium binary with the fingerprints modified directly in the C++ source code. Being Chromium-based can also matter when a target site behaves differently with Firefox than with Chrome. Install it separately with `pip install cloakbrowser`; the example plugin calls `ensure_binary()`, which downloads and caches the Chromium binary on first run.
+
+<RunnableCodeBlock className="language-python" language="python">
+{PlaywrightWithCloakBrowser}
+</RunnableCodeBlock>
+
 **Related links**
 
 - [Fingerprint Suite Docs](https://github.com/apify/fingerprint-suite)

docs/guides/code_examples/avoid_blocking/playwright_with_cloakbrowser.py

Lines changed: 81 additions & 0 deletions

@@ -0,0 +1,81 @@
+import asyncio
+
+# CloakBrowser is an external package. Install it separately.
+from cloakbrowser.config import IGNORE_DEFAULT_ARGS, get_default_stealth_args
+from cloakbrowser.download import ensure_binary
+from typing_extensions import override
+
+from crawlee.browsers import (
+    BrowserPool,
+    PlaywrightBrowserController,
+    PlaywrightBrowserPlugin,
+)
+from crawlee.crawlers import PlaywrightCrawler, PlaywrightCrawlingContext
+
+
+class CloakBrowserPlugin(PlaywrightBrowserPlugin):
+    """Example browser plugin that uses CloakBrowser's patched Chromium,
+    but otherwise keeps the functionality of PlaywrightBrowserPlugin.
+    """
+
+    @override
+    async def new_browser(self) -> PlaywrightBrowserController:
+        if not self._playwright:
+            raise RuntimeError('Playwright browser plugin is not initialized.')
+
+        binary_path = ensure_binary()
+        stealth_args = get_default_stealth_args()
+
+        # Merge CloakBrowser stealth args with any user-provided launch options.
+        launch_options = dict(self._browser_launch_options)
+        launch_options.pop('executable_path', None)
+        launch_options.pop('chromium_sandbox', None)
+        existing_args = list(launch_options.pop('args', []))
+        launch_options['args'] = [*existing_args, *stealth_args]
+
+        return PlaywrightBrowserController(
+            browser=await self._playwright.chromium.launch(
+                executable_path=binary_path,
+                ignore_default_args=IGNORE_DEFAULT_ARGS,
+                **launch_options,
+            ),
+            max_open_pages_per_browser=1,
+            # CloakBrowser handles fingerprints at the binary level.
+            header_generator=None,
+        )
+
+
+async def main() -> None:
+    crawler = PlaywrightCrawler(
+        # Limit the crawl to max requests. Remove or increase it for crawling all links.
+        max_requests_per_crawl=10,
+        # Custom browser pool. Gives users full control over browsers used by the crawler.
+        browser_pool=BrowserPool(plugins=[CloakBrowserPlugin()]),
+    )
+
+    # Define the default request handler, which will be called for every request.
+    @crawler.router.default_handler
+    async def request_handler(context: PlaywrightCrawlingContext) -> None:
+        context.log.info(f'Processing {context.request.url} ...')
+
+        # Extract some data from the page using Playwright's API.
+        posts = await context.page.query_selector_all('.athing')
+        for post in posts:
+            # Get the HTML elements for the title and rank within each post.
+            title_element = await post.query_selector('.title a')
+
+            # Extract the data we want from the elements.
+            title = await title_element.inner_text() if title_element else None
+
+            # Push the extracted data to the default dataset.
+            await context.push_data({'title': title})
+
+        # Find a link to the next page and enqueue it if it exists.
+        await context.enqueue_links(selector='.morelink')
+
+    # Run the crawler with the initial list of URLs.
+    await crawler.run(['https://news.ycombinator.com/'])
+
+
+if __name__ == '__main__':
+    asyncio.run(main())

pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
 
 [project]
 name = "crawlee"
-version = "1.6.1"
+version = "1.6.2"
 description = "Crawlee for Python"
 authors = [{ name = "Apify Technologies s.r.o.", email = "support@apify.com" }]
 license = { file = "LICENSE" }

renovate.json

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 {
   "extends": ["config:base", ":semanticCommitTypeAll(chore)"],
-  "ignorePaths": ["docs/**", "src/crawlee/project_template/**"],
+  "ignorePaths": ["docs/**", "src/crawlee/project_template/**", "website/versioned_docs/**"],
   "pinVersions": false,
   "separateMajorMinor": false,
   "dependencyDashboard": false,

src/crawlee/_types.py

Lines changed: 7 additions & 1 deletion

@@ -14,7 +14,7 @@
 import json
 import logging
 import re
-from collections.abc import Callable, Coroutine, Sequence
+from collections.abc import Awaitable, Callable, Coroutine, Sequence
 
 from typing_extensions import NotRequired, Required, Self, Unpack
 
@@ -39,6 +39,9 @@
 
 HttpPayload = bytes
 
+DeferredCleanupCallback = Callable[[], 'Awaitable[Any]']
+"""An async callback to be called after request processing completes (including error handlers)."""
+
 RequestTransformAction = Literal['skip', 'unchanged']
 
 EnqueueStrategy = Literal['all', 'same-domain', 'same-hostname', 'same-origin']
@@ -661,6 +664,9 @@ class BasicCrawlingContext:
     log: logging.Logger
     """Logger instance."""
 
+    register_deferred_cleanup: Callable[[DeferredCleanupCallback], None]
+    """Register an async callback to be called after request processing completes (including error handlers)."""
+
     async def get_snapshot(self) -> PageSnapshot:
         """Get snapshot of crawled page."""
         return PageSnapshot()
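
The new hook is the user-facing piece of the page-cleanup deferral (#1814): a handler can register an async callback that the crawler runs only after the request has fully finished, so error handlers still see the resource. A minimal usage sketch, not part of this commit, assuming `PlaywrightCrawlingContext` inherits `register_deferred_cleanup` from `BasicCrawlingContext`:

import asyncio

from crawlee.crawlers import PlaywrightCrawler, PlaywrightCrawlingContext


async def main() -> None:
    crawler = PlaywrightCrawler(max_requests_per_crawl=1)

    @crawler.router.default_handler
    async def handler(context: PlaywrightCrawlingContext) -> None:
        async def log_done() -> None:
            # Runs after request processing completes, including any error handlers.
            context.log.info(f'Deferred cleanup for {context.request.url} ran last.')

        # Defer the callback instead of running it at the end of the handler body.
        context.register_deferred_cleanup(log_done)
        context.log.info(f'Processing {context.request.url} ...')

    await crawler.run(['https://crawlee.dev/'])


if __name__ == '__main__':
    asyncio.run(main())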

src/crawlee/crawlers/_adaptive_playwright/_adaptive_playwright_crawler.py

Lines changed: 1 addition & 0 deletions

@@ -320,6 +320,7 @@ async def get_input_state(
             get_key_value_store=result.get_key_value_store,
             use_state=use_state_function,
             log=context.log,
+            register_deferred_cleanup=context.register_deferred_cleanup,
         )
 
         try:

src/crawlee/crawlers/_basic/_basic_crawler.py

Lines changed: 34 additions & 15 deletions

@@ -768,27 +768,36 @@ def sigint_handler() -> None:
         return final_statistics
 
     async def _run_crawler(self) -> None:
-        event_manager = self._service_locator.get_event_manager()
+        local_event_manager = self._service_locator.get_event_manager()
+        global_event_manager = service_locator.get_event_manager()
+        if local_event_manager is global_event_manager:
+            local_event_manager = None  # Avoid entering the same event manager context twice
+
+        # The event managers are always entered.
+        contexts_to_enter: list[Any] = (
+            [global_event_manager, local_event_manager] if local_event_manager else [global_event_manager]
+        )
 
         # Collect the context managers to be entered. Context managers that are already active are excluded,
         # as they were likely entered by the caller, who will also be responsible for exiting them.
-        contexts_to_enter = [
-            cm
-            for cm in (
-                event_manager,
-                self._snapshotter,
-                self._statistics,
-                self._session_pool if self._use_session_pool else None,
-                self._http_client,
-                self._crawler_state_rec_task,
-                *self._additional_context_managers,
-            )
-            if cm and getattr(cm, 'active', False) is False
-        ]
+        contexts_to_enter.extend(
+            [
+                cm
+                for cm in (
+                    self._snapshotter,
+                    self._statistics,
+                    self._session_pool if self._use_session_pool else None,
+                    self._http_client,
+                    self._crawler_state_rec_task,
+                    *self._additional_context_managers,
+                )
+                if cm and getattr(cm, 'active', False) is False
+            ]
+        )
 
         async with AsyncExitStack() as exit_stack:
             for context in contexts_to_enter:
-                await exit_stack.enter_async_context(context)  # ty: ignore[invalid-argument-type]
+                await exit_stack.enter_async_context(context)
 
             await self._autoscaled_pool.run()
 
@@ -1413,6 +1422,8 @@ async def __run_task_function(self) -> None:
         proxy_info = await self._get_proxy_info(request, session)
         result = RequestHandlerRunResult(key_value_store_getter=self.get_key_value_store, request=request)
 
+        deferred_cleanup: list[Callable[[], Awaitable[None]]] = []
+
         context = BasicCrawlingContext(
             request=result.request,
             session=session,
@@ -1423,6 +1434,7 @@ async def __run_task_function(self) -> None:
             get_key_value_store=result.get_key_value_store,
             use_state=self.use_state,
             log=self._logger,
+            register_deferred_cleanup=deferred_cleanup.append,
         )
         self._context_result_map[context] = result
 
@@ -1509,6 +1521,13 @@ async def __run_task_function(self) -> None:
             )
             raise
 
+        finally:
+            for cleanup in deferred_cleanup:
+                try:
+                    await cleanup()
+                except Exception:  # noqa: PERF203
+                    self._logger.exception('Error in deferred cleanup')
+
     async def _run_request_handler(self, context: BasicCrawlingContext) -> None:
         context.request.state = RequestState.BEFORE_NAV
         await self._context_pipeline(