Commit da3ffd0

[doc] Fix typos in docs and code comments (#7341)
1 parent c846333 commit da3ffd0

23 files changed

Lines changed: 24 additions & 24 deletions

docs/content/_index.md

Lines changed: 1 addition & 1 deletion
@@ -45,7 +45,7 @@ Paimon offers the following core capabilities:
 
 ## Try Paimon
 
-If youre interested in playing around with Paimon, check out our
+If you're interested in playing around with Paimon, check out our
 quick start guide with [Flink]({{< ref "flink/quick-start" >}}) or [Spark]({{< ref "spark/quick-start" >}}). It provides a step by
 step introduction to the APIs and guides you through real applications.

docs/content/append-table/blob.md

Lines changed: 1 addition & 1 deletion
@@ -576,7 +576,7 @@ public class BlobDescriptorExample {
         long fileSize = 2L * 1024 * 1024 * 1024; // 2GB
 
         BlobDescriptor descriptor = new BlobDescriptor(externalUri, 0, fileSize);
-        // file io should be accessable to externalUri
+        // file io should be accessible to externalUri
         FileIO fileIO = Table.fileIO();
         UriReader uriReader = UriReader.fromFile(fileIO);
         Blob blob = Blob.fromDescriptor(uriReader, descriptor);

docs/content/append-table/incremental-clustering.md

Lines changed: 1 addition & 1 deletion
@@ -101,7 +101,7 @@ only support running Incremental Clustering in batch mode.
 
 To run a Incremental Clustering job, follow these instructions.
 
-You dont need to specify any clustering-related parameters when running Incremental Clustering,
+You don't need to specify any clustering-related parameters when running Incremental Clustering,
 these options are already defined as table options. If you need to change clustering settings, please update the corresponding table options.
 
 {{< tabs "incremental-clustering" >}}

docs/content/concepts/overview.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ For streaming engines like Apache Flink, there are typically three types of conn
   intermediate stages in this pipeline, to guarantee the latency stay
   within seconds.
 - OLAP system, such as ClickHouse, it receives processed data in
-  streaming fashion and serving users ad-hoc queries.
+  streaming fashion and serving user's ad-hoc queries.
 - Batch storage, such as Apache Hive, it supports various operations
   of the traditional batch processing, including `INSERT OVERWRITE`.

docs/content/ecosystem/starrocks.md

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ SELECT * FROM paimon_catalog.test_db.partition_tbl$partitions;
 ## StarRocks to Paimon type mapping
 
 This section lists all supported type conversion between StarRocks and Paimon.
-All StarRockss data types can be found in this doc [StarRocks Data type overview](https://docs.starrocks.io/docs/sql-reference/data-types/).
+All StarRocks's data types can be found in this doc [StarRocks Data type overview](https://docs.starrocks.io/docs/sql-reference/data-types/).
 
 <table class="table table-bordered">
 <thead>
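
For context, the `paimon_catalog` queried in the hunk header above would have been registered in StarRocks roughly as follows (a hypothetical sketch; the catalog type and warehouse path are assumptions, not part of this commit):

```sql
-- Hypothetical example: register a filesystem-based Paimon catalog in StarRocks.
-- Adjust the metastore type and warehouse path to your deployment.
CREATE EXTERNAL CATALOG paimon_catalog
PROPERTIES (
    "type" = "paimon",
    "paimon.catalog.type" = "filesystem",
    "paimon.catalog.warehouse" = "s3://my-bucket/warehouse"
);
```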

docs/content/flink/procedures.md

Lines changed: 1 addition & 1 deletion
@@ -703,7 +703,7 @@ All available procedures are listed below.
       <td>
          To expire partitions. Argument:
          <li>table: the target table identifier. Cannot be empty.</li>
-         <li>expiration_time: the expiration interval of a partition. A partition will be expired if its lifetime is over this value. Partition time is extracted from the partition value.</li>
+         <li>expiration_time: the expiration interval of a partition. A partition will be expired if it's lifetime is over this value. Partition time is extracted from the partition value.</li>
         <li>timestamp_formatter: the formatter to format timestamp from string.</li>
         <li>timestamp_pattern: the pattern to get a timestamp from partitions.</li>
         <li>expire_strategy: specifies the expiration strategy for partition expiration, possible values: 'values-time' or 'update-time' , 'values-time' as default.</li>
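
For reference, the arguments listed in this hunk correspond to a Flink SQL call along these lines (a minimal, hypothetical sketch; the table identifier and argument values are illustrative and not part of this commit):

```sql
-- Hypothetical example: expire partitions whose time value is older than
-- one day, using the arguments documented in the hunk above.
CALL sys.expire_partitions(
    `table` => 'default.T',
    expiration_time => '1 d',
    timestamp_formatter => 'yyyy-MM-dd',
    expire_strategy => 'values-time'
);
```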

docs/content/learn-paimon/understand-files.md

Lines changed: 1 addition & 1 deletion
@@ -496,5 +496,5 @@ Maybe you think the 5 files for the primary key table are actually okay, but the
 may have 50 small files in a single bucket, which is very difficult to accept. Worse still, partitions that
 are no longer active also keep so many small files.
 
-Configure full-compaction.delta-commits perform full-compaction periodically in Flink writing. And it can ensure
+Configure 'full-compaction.delta-commits' perform full-compaction periodically in Flink writing. And it can ensure
 that partitions are full compacted before writing ends.
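
For reference, `'full-compaction.delta-commits'` is an ordinary table option, so it can also be set on an existing table (a minimal sketch; the table name and interval are illustrative assumptions):

```sql
-- Hypothetical example: trigger a full compaction every 10 delta commits
-- during Flink streaming writes.
ALTER TABLE T SET ('full-compaction.delta-commits' = '10');
```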

docs/content/maintenance/filesystems.md

Lines changed: 1 addition & 1 deletion
@@ -391,7 +391,7 @@ Please refer to [Trino S3](https://trino.io/docs/current/object-storage/file-sys
 
 ### S3 Compliant Object Stores
 
-The S3 Filesystem also support using S3 compliant object stores such as MinIO, Tencent's COS and IBMs Cloud Object
+The S3 Filesystem also support using S3 compliant object stores such as MinIO, Tencent's COS and IBM's Cloud Object
 Storage. Just configure your endpoint to the provider of the object store service.
 
 ```yaml

docs/content/primary-key-table/chain-table.md

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ Chain table is a new capability for primary key tables that transforms how you p
 Imagine a scenario where you periodically store a full snapshot of data (for example, once a day), even
 though only a small portion changes between snapshots. ODS binlog dump is a typical example of this pattern.
 
-Taking a daily binlog dump job as an example. A batch job merges yesterdays full dataset with today’s
+Taking a daily binlog dump job as an example. A batch job merges yesterday's full dataset with today's
 incremental changes to produce a new full dataset. This approach has two clear drawbacks:
 * Full computation: Merge operation includes all data, and it will involve shuffle, which results in poor performance.
 * Full storage: Store a full set of data every day, and the changed data usually accounts for a very small proportion.

docs/content/primary-key-table/changelog-producer.md

Lines changed: 1 addition & 1 deletion
@@ -58,7 +58,7 @@ By specifying `'changelog-producer' = 'input'`, Paimon writers rely on their inp
 
 ## Lookup
 
-If your input cant produce a complete changelog but you still want to get rid of the costly normalized operator, you
+If your input can't produce a complete changelog but you still want to get rid of the costly normalized operator, you
 may consider using the `'lookup'` changelog producer.
 
 By specifying `'changelog-producer' = 'lookup'`, Paimon will generate changelog through `'lookup'` during compaction (You can also enable [Async Compaction]({{< ref "primary-key-table/compaction#asynchronous-compaction" >}})). By default, lookup compaction is performed before committing written data unless disabled by `write-only` property.
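
For reference, the `'lookup'` changelog producer discussed in this hunk is enabled as a table option at creation time (a minimal sketch; the schema and table name are illustrative assumptions):

```sql
-- Hypothetical example: a primary key table whose changelog is produced
-- via lookup during compaction.
CREATE TABLE T (
    k INT,
    v STRING,
    PRIMARY KEY (k) NOT ENFORCED
) WITH (
    'changelog-producer' = 'lookup'
);
```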
