## Task 3: Index the WARC, WET, and WAT

The example WARC files we've been using are tiny and easy to work with. The real WARC files are around a gigabyte in size and contain about 30,000 webpages each. What's more, we have around 24 million of these files! To read all of them, we could iterate, but what if we wanted random access so we could read just one particular record? We do that with an index.
```mermaid
flowchart LR
    warc --> indexer --> cdxj & columnar
    warc@{ shape: cyl }
    cdxj@{ shape: stored-data }
    columnar@{ shape: stored-data }
```
We have two versions of the index: the CDX index and the columnar index. The CDX index is useful for looking up single pages, whereas the columnar index is better suited to analytical and bulk queries. We'll look at both in this tour, starting with the CDX index.
### CDX(J) index

The CDX index files are sorted plain-text files, with each line containing information about a single capture in the WARC. Technically, Common Crawl uses CDXJ index files, since the information about each capture is formatted as JSON. We'll use CDX and CDXJ interchangeably in this tour for legacy reasons 💅
We can create our own CDXJ index from the local WARCs by running:

```
make cdxj
```

This uses the JWARC library, plus some home-cooked code we wrote to support WET and WAT records, to generate CDXJ index files for our WARC files.
Now look at the `.cdxj` files with `cat whirlwind*.cdxj`. You'll see that each file has one entry in the index. The WARC has only the response record indexed, since by default cdxj-indexer guesses that you won't ever want to random-access the request or metadata records. The WET and WAT have the conversion and metadata records indexed, respectively (Common Crawl doesn't publish a WET or WAT index, just a WARC index).
For each of these records, there's one text line in the index - yes, it's a flat file! Each line starts with a string like `org,wikipedia,an)/wiki/escopete 20240518015810`, followed by a JSON blob. The starting string is the primary key of the index. The first part is a [SURT](http://crawler.archive.org/articles/user_manual/glossary.html#surt) (Sort-friendly URI Reordering Transform) of the URL. The big integer is a date, in ISO-8601 format with the delimiters removed.
What is the purpose of this funky format? These flat files (300 gigabytes total per crawl) can be sorted on the primary key using any out-of-core sort utility, e.g. the standard Linux `sort`, or one of the Hadoop-based out-of-core sort functions.
The JSON blob has enough information to cleanly isolate the raw data of a single record: it names the WARC file the record is in, and gives the byte offset and length of the record within that file. We'll use that in the next section.
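A single index line can be picked apart with a couple of `split` and `json` calls. This is just an illustration: the sample JSON blob below is invented, though the `filename`/`offset`/`length` keys follow the usual CDXJ convention:

```python
import json

# A CDXJ line: SURT key, 14-digit timestamp, then a JSON blob.
# The blob here is a made-up example in the usual CDXJ shape.
line = ('org,wikipedia,an)/wiki/escopete 20240518015810 '
        '{"url": "https://an.wikipedia.org/wiki/Escopete", '
        '"filename": "whirlwind.warc.gz", "offset": "1234", "length": "5678"}')

# The primary key is everything before the JSON blob, so split at most twice.
surt, timestamp, blob = line.split(' ', 2)
fields = json.loads(blob)

print(surt)       # org,wikipedia,an)/wiki/escopete
print(timestamp)  # 20240518015810
print(fields['filename'], fields['offset'], fields['length'])
```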
## Task 4: Use the CDXJ index to extract a subset of raw content from the local WARC, WET, and WAT
Normally, compressed files aren't random access. However, WARC files use a trick to make this possible: every record is compressed separately, and the compressed records are then concatenated. The `gzip` format supports this multi-member layout, but the feature is rarely used.
To extract one record from a WARC file, all you need to know is the filename and the byte offset into the file. If you're reading over the web, then it really helps to also know the exact length of the record.
Run:

```
make extract
```

to run a set of extractions from your local `whirlwind.*.gz` files with `JWARC`, using the commands below:
<details>
<summary>Click to view code</summary>

```
creating extraction.* from local warcs, the offset numbers are from the cdxj index
```

</details>

Look at the three output files: `extraction.html`, `extraction.txt`, and `extraction.json` (pretty-print the JSON with `python -m json.tool extraction.json`).
Notice that we extracted HTML from the WARC, text from the WET, and JSON from the WAT (as shown by the different file extensions). This is because the payload in each file type is formatted differently!