Commit be6acc8

fix(academy): use export_data() with keyword arguments (#2171)
Enabled by apify/crawlee-python#1597. Fixes #2112.

> [!NOTE]
> Switches the Python Crawlee examples in the Academy lessons to the unified `crawler.export_data()` API, replacing the deprecated `export_data_json`/`export_data_csv` calls while preserving their parameters.
>
> - In `12_framework.md`, update the export examples (including logging and exercise solutions) to `await crawler.export_data(path='dataset.json', ensure_ascii=False, indent=2)` and `await crawler.export_data(path='dataset.csv')`
> - In `13_platform.md`, update the export calls in the platform lesson to the same `export_data()` usage
>
> <sup>Written by [Cursor Bugbot](https://cursor.com/dashboard?tab=bugbot) for commit 3546f60.</sup>
1 parent 7fed112 commit be6acc8

2 files changed: 8 additions & 8 deletions


sources/academy/webscraping/scraping_basics_python/12_framework.md

Lines changed: 6 additions & 6 deletions
````diff
@@ -331,9 +331,9 @@ async def main():
 
     await crawler.run(["https://warehouse-theme-metal.myshopify.com/collections/sales"])
     # highlight-next-line
-    await crawler.export_data_json(path='dataset.json', ensure_ascii=False, indent=2)
+    await crawler.export_data(path='dataset.json', ensure_ascii=False, indent=2)
     # highlight-next-line
-    await crawler.export_data_csv(path='dataset.csv')
+    await crawler.export_data(path='dataset.csv')
 ```
 
 After running the scraper again, there should be two new files in your directory, `dataset.json` and `dataset.csv`, containing all the data. If we peek into the JSON file, it should have indentation.
````
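The keyword arguments kept across the rename (`ensure_ascii`, `indent`) mirror the parameters of Python's built-in `json.dump`. As a rough illustration of why the lesson passes `ensure_ascii=False, indent=2` (assuming Crawlee forwards these options to the JSON serializer, which this commit does not itself confirm), the standard library alone shows the effect:

```python
import json

# A tiny stand-in for scraped dataset items; the 'č' exercises non-ASCII handling.
items = [{'title': 'Sofa', 'price': 'Kč 25,990'}]

# ensure_ascii=False keeps non-ASCII characters (like 'č') verbatim;
# indent=2 pretty-prints with two-space indentation.
pretty = json.dumps(items, ensure_ascii=False, indent=2)
print(pretty)

# Default settings escape non-ASCII to \uXXXX and emit a single line.
compact = json.dumps(items)
print(compact)
```

This is why the exported `dataset.json` "should have indentation" and keep accented characters readable.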
```diff
@@ -389,8 +389,8 @@ async def main():
 
     # highlight-next-line
     crawler.log.info("Exporting data")
-    await crawler.export_data_json(path='dataset.json', ensure_ascii=False, indent=2)
-    await crawler.export_data_csv(path='dataset.csv')
+    await crawler.export_data(path='dataset.json', ensure_ascii=False, indent=2)
+    await crawler.export_data(path='dataset.csv')
 
 def parse_variant(variant):
     text = variant.text.strip()
```
```diff
@@ -500,7 +500,7 @@ If you export the dataset as JSON, it should look something like this:
         })
 
     await crawler.run(["https://www.f1academy.com/Racing-Series/Drivers"])
-    await crawler.export_data_json(path='dataset.json', ensure_ascii=False, indent=2)
+    await crawler.export_data(path='dataset.json', ensure_ascii=False, indent=2)
 
 if __name__ == '__main__':
     asyncio.run(main())
```
```diff
@@ -598,7 +598,7 @@ When navigating to the first IMDb search result, you might find it helpful to kn
         })
 
     await crawler.run(["https://www.netflix.com/tudum/top10"])
-    await crawler.export_data_json(path='dataset.json', ensure_ascii=False, indent=2)
+    await crawler.export_data(path='dataset.json', ensure_ascii=False, indent=2)
 
 if __name__ == '__main__':
     asyncio.run(main())
```

sources/academy/webscraping/scraping_basics_python/13_platform.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -130,8 +130,8 @@ async def main():
     await crawler.run(["https://warehouse-theme-metal.myshopify.com/collections/sales"])
 
     crawler.log.info("Exporting data")
-    await crawler.export_data_json(path='dataset.json', ensure_ascii=False, indent=2)
-    await crawler.export_data_csv(path='dataset.csv')
+    await crawler.export_data(path='dataset.json', ensure_ascii=False, indent=2)
+    await crawler.export_data(path='dataset.csv')
 
 def parse_variant(variant):
     text = variant.text.strip()
```
