diff --git a/docs/integrations/data-ingestion/etl-tools/fivetran/index.md b/docs/integrations/data-ingestion/etl-tools/fivetran/index.md
index 26ec1a9a687..5662c943815 100644
--- a/docs/integrations/data-ingestion/etl-tools/fivetran/index.md
+++ b/docs/integrations/data-ingestion/etl-tools/fivetran/index.md
@@ -2,13 +2,14 @@
sidebar_label: 'Fivetran'
slug: /integrations/fivetran
sidebar_position: 2
-description: 'You can transform and model your data in ClickHouse using dbt'
+description: 'Use Fivetran to move data from any source into ClickHouse Cloud with automated schema creation, deduplication, and History Mode (SCD Type 2).'
title: 'Fivetran and ClickHouse Cloud'
doc_type: 'guide'
integration:
- support_level: 'core'
- category: 'data_ingestion'
-keywords: ['fivetran', 'data movement', 'etl', 'clickhouse destination', 'automated data platform']
+ - website: 'https://github.com/ClickHouse/clickhouse-fivetran-destination'
+keywords: ['fivetran', 'data movement', 'etl', 'clickhouse destination', 'automated data platform', 'history mode', 'SCD Type 2']
---
import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
@@ -21,10 +22,12 @@ import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
[Fivetran](https://www.fivetran.com) is the automated data movement platform moving data out of, into and across your cloud data platforms.
-[ClickHouse Cloud](https://clickhouse.com/cloud) is supported as a [Fivetran destination](https://fivetran.com/docs/destinations/clickhouse), allowing users to load data from various sources into ClickHouse.
+[ClickHouse Cloud](https://clickhouse.com/cloud) is supported as a [Fivetran destination](https://fivetran.com/docs/destinations/clickhouse), allowing you to load data from various sources into ClickHouse. The open-source version of ClickHouse is not supported as a destination.
+
+The destination connector is jointly developed and maintained by ClickHouse and Fivetran. The source code is available on [GitHub](https://github.com/ClickHouse/clickhouse-fivetran-destination).
:::note
-[ClickHouse Cloud destination](https://fivetran.com/docs/destinations/clickhouse) is currently in private preview, please contact ClickHouse support in the case of any problems.
+The [ClickHouse Cloud destination](https://fivetran.com/docs/destinations/clickhouse) is currently in **Beta**; we are working to make it generally available soon.
:::
@@ -39,13 +42,32 @@ import ClickHouseSupportedBadge from '@theme/badges/ClickHouseSupported';
-## ClickHouse Cloud destination {#clickhouse-cloud-destination}
+## Key features {#key-features}
+- **ClickHouse Cloud compatible**: use your ClickHouse Cloud database as a Fivetran destination.
+- **SaaS deployment model**: fully managed by Fivetran, no need to manage your own infrastructure.
+- **History Mode (SCD Type 2)**: preserves complete history of all record versions for point-in-time analysis and audit trails.
+- **Configurable batch sizes**: adapt the destination to your particular use case by tuning write, select, mutation, and hard delete batch sizes via a JSON configuration file.
+
+## Limitations {#limitations}
+- Schema migrations are not supported yet, but we are working on adding support.
+- Adding, removing, or modifying primary key columns is not supported.
+- Custom ClickHouse settings on `CREATE TABLE` statements are not supported.
+- Role-based grants are not fully supported. The connector's grants check only queries direct user grants. Use [direct grants](/integrations/fivetran/troubleshooting#role-based-grants) instead.
+
+## Related pages {#related-pages}
+- [Technical Reference](/integrations/fivetran/reference): type mappings, table engines, metadata columns and advanced configurations
+- [Troubleshooting & Best Practices](/integrations/fivetran/troubleshooting): common errors, optimization tips, and debugging queries
+- [ClickHouse Fivetran destination on GitHub](https://github.com/ClickHouse/clickhouse-fivetran-destination)
+
+## Setup guide {#setup-guide}
+- For a comprehensive setup walkthrough, follow the [setup guide](https://fivetran.com/docs/destinations/clickhouse/setup-guide) in the Fivetran documentation.
+- For configuration options and general technical details, refer to the [technical reference](/integrations/fivetran/reference).
-See the official documentation on the Fivetran website:
+## Contact and support {#contact-us}
-- [ClickHouse destination overview](https://fivetran.com/docs/destinations/clickhouse)
-- [ClickHouse destination setup guide](https://fivetran.com/docs/destinations/clickhouse/setup-guide)
+The ClickHouse Fivetran destination has a split ownership model:
-## Contact us {#contact-us}
+- **ClickHouse** develops and maintains the destination connector code.
+- **Fivetran** hosts the connector and is responsible for data movement, pipeline scheduling, and source connectors.
-If you have any questions, or if you have a feature request, please open a [support ticket](/about-us/support).
+Both Fivetran and ClickHouse provide support for the Fivetran ClickHouse destination. For general inquiries, we recommend reaching out to Fivetran, as they are the experts on the Fivetran platform. For any ClickHouse-specific questions or issues, our support team is happy to help. Create a [support ticket](/about-us/support) to ask a question or report an issue.
diff --git a/docs/integrations/data-ingestion/etl-tools/fivetran/reference.md b/docs/integrations/data-ingestion/etl-tools/fivetran/reference.md
new file mode 100644
index 00000000000..6b0065e337f
--- /dev/null
+++ b/docs/integrations/data-ingestion/etl-tools/fivetran/reference.md
@@ -0,0 +1,334 @@
+---
+sidebar_label: 'Technical reference'
+slug: /integrations/fivetran/reference
+sidebar_position: 3
+description: 'Type mappings, table engine details, metadata columns, and debugging queries for the Fivetran ClickHouse destination.'
+title: 'Technical reference'
+doc_type: 'guide'
+keywords: ['fivetran', 'clickhouse destination', 'technical reference']
+---
+
+# Technical reference
+
+## Setup details {#setup-details}
+
+### User and role management {#user-and-role-management}
+
+Consider not using the `default` user; instead, create a dedicated user for this Fivetran
+destination only. The following commands, executed as the `default` user, create a new `fivetran_user` with the
+required privileges.
+
+```sql
+CREATE USER fivetran_user IDENTIFIED BY ''; -- use a secure password generator
+
+GRANT CURRENT GRANTS ON *.* TO fivetran_user;
+```
+
+Additionally, you can revoke access to certain databases from the `fivetran_user`.
+For example, the following statement restricts access to the `default` database:
+
+```sql
+REVOKE ALL ON default.* FROM fivetran_user;
+```
+
+You can execute these statements in the ClickHouse SQL console.
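+
+To double-check the result, you can list the privileges assigned to the new user (assuming the `fivetran_user` name from above):
+
+```sql
+SHOW GRANTS FOR fivetran_user;
+```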
+
+### Advanced configuration {#advanced-configuration}
+
+The ClickHouse Cloud destination supports an optional JSON configuration file for advanced use cases. This file allows you to fine-tune destination behavior by overriding the default settings that control batch sizes, parallelism, connection pools, and request timeouts.
+
+:::note
+This configuration is entirely optional. If no file is uploaded, the destination uses sensible defaults that work well for most use cases.
+:::
+
+The file must be valid JSON and conform to the schema described below.
+
+If you need to modify the configuration after the initial setup, you can edit the destination configurations in the Fivetran dashboard and upload an updated file.
+
+The configuration file has a top-level section:
+
+```json
+{
+ "destination_configurations": { ... }
+}
+```
+
+Inside it, you can specify the following settings, which control the internal behavior of the ClickHouse destination connector itself.
+These configurations affect how the connector processes data before sending it to ClickHouse.
+
+| Setting | Type | Default | Allowed Range | Description |
+|---------|------|---------|---------------|-------------|
+| `write_batch_size` | integer | `100000` | 5,000 – 100,000 | Number of rows per batch for insert, update, and replace operations. |
+| `select_batch_size` | integer | `1500` | 200 – 1,500 | Number of rows per batch for SELECT queries used during updates. |
+| `mutation_batch_size` | integer | `1500` | 200 – 1,500 | Number of rows per batch for ALTER TABLE UPDATE mutations in history mode. Lower this value if you encounter errors caused by overly large SQL statements. |
+| `hard_delete_batch_size` | integer | `1500` | 200 – 1,500 | Number of rows per batch for hard delete operations in normal syncs and in history mode. Lower this value if you encounter errors caused by overly large SQL statements. |
+
+All fields are optional. If a field is not specified, the default value is used.
+If a value is outside the allowed range, the destination will report an error during sync.
+Unknown fields are ignored (a warning is logged) and do not cause errors, which allows forward compatibility when new settings are added.
+
+Example:
+
+```json
+{
+ "destination_configurations": {
+ "write_batch_size": 50000,
+ "select_batch_size": 200
+ }
+}
+```
+
+## Type transformation mapping {#type-mapping}
+
+The Fivetran ClickHouse destination maps [Fivetran data types](https://fivetran.com/docs/destinations#datatypes) to ClickHouse types as follows:
+
+| Fivetran type | ClickHouse type |
+|---------------|--------------------------------------------------------------|
+| BOOLEAN | [Bool](/sql-reference/data-types/boolean) |
+| SHORT | [Int16](/sql-reference/data-types/int-uint) |
+| INT | [Int32](/sql-reference/data-types/int-uint) |
+| LONG | [Int64](/sql-reference/data-types/int-uint) |
+| BIGDECIMAL | [Decimal(P, S)](/sql-reference/data-types/decimal) |
+| FLOAT | [Float32](/sql-reference/data-types/float) |
+| DOUBLE | [Float64](/sql-reference/data-types/float) |
+| LOCALDATE | [Date32](/sql-reference/data-types/date32) |
+| LOCALDATETIME | [DateTime64(0, 'UTC')](/sql-reference/data-types/datetime64) |
+| INSTANT | [DateTime64(9, 'UTC')](/sql-reference/data-types/datetime64) |
+| STRING | [String](/sql-reference/data-types/string) |
+| LOCALTIME | [String](/sql-reference/data-types/string) \* \*\* |
+| BINARY | [String](/sql-reference/data-types/string) \* |
+| XML | [String](/sql-reference/data-types/string) \* |
+| JSON | [String](/sql-reference/data-types/string) \* |
+
+:::note
+\* BINARY, XML, LOCALTIME, and JSON are stored as [String](/sql-reference/data-types/string) because ClickHouse's `String` type can represent an arbitrary set of bytes. The destination adds a column comment to indicate the original data type. The ClickHouse [JSON](/sql-reference/data-types/newjson) data type is not used as it was marked as obsolete and never recommended for production usage.
+\*\* Support for a native LOCALTIME type is tracked in [clickhouse-fivetran-destination #15](https://github.com/ClickHouse/clickhouse-fivetran-destination/issues/15).
+:::
+
+### Date and time value ranges {#date-and-time-value-ranges}
+
+Fivetran sources can send date and time values in the range [0001-01-01, 9999-12-31](https://fivetran.com/docs/destinations#dateandtimevaluerange).
+ClickHouse Cloud date types have narrower ranges, so values outside the supported range are silently clamped to the nearest boundary:
+
+| Fivetran type | ClickHouse Cloud type | Min value | Max value |
+|---------------|------------------------|---------------------------|---------------------------|
+| LOCALDATE | Date32 | 1900-01-01 | 2299-12-31 |
+| LOCALDATETIME | DateTime64(0, 'UTC') | 1900-01-01 00:00:00 | 2262-04-11 23:47:16 |
+| INSTANT | DateTime64(9, 'UTC') | 1900-01-01 00:00:00 | 2262-04-11 23:47:16 |
+
+- The INSTANT upper bound is 2262-04-11 23:47:16 because DateTime64(9) stores nanoseconds since epoch as int64, and 2^63 - 1 nanoseconds corresponds to this date.
+  ClickHouse itself supports DateTime64 with precision \<= 9 up to 2299-12-31 23:59:59.
+- The LOCALDATETIME upper bound is also limited to 2262-04-11 23:47:16 due to a [known bug](https://github.com/ClickHouse/clickhouse-go/issues/1311) in the Go ClickHouse driver, where `time.Time.UnixNano()` is called for all DateTime64 precisions before scaling, causing int64 overflow for dates beyond 2262 even at precision 0.
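+
+If you suspect that out-of-range source values were clamped, a query against the boundary value can surface them. This is a sketch with placeholder schema, table, and column names:
+
+```sql
+-- Rows whose LOCALDATE value was clamped to the Date32 upper bound
+SELECT count()
+FROM my_schema.my_table
+WHERE my_date_column = toDate32('2299-12-31');
+```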
+
+## Destination tables {#table-structure}
+
+The ClickHouse Cloud destination uses the
+[Replacing](/engines/table-engines/mergetree-family/replacingmergetree) variant of the
+[SharedMergeTree](/cloud/reference/shared-merge-tree) engine family
+(specifically, `SharedReplacingMergeTree`), versioned by the `_fivetran_synced` column.
+
+Every column except primary (ordering) keys and Fivetran metadata columns is created
+as [Nullable(T)](/sql-reference/data-types/nullable), where `T` is a
+ClickHouse Cloud type based on the [data types mapping](#type-mapping).
+
+The table structure varies depending on the Fivetran
+[sync mode](https://fivetran.com/docs/using-fivetran/features#deletedrowhandling)
+configured for the connector: **soft delete** (default) or **history mode** (SCD Type 2).
+
+### Soft delete mode {#soft-delete-mode}
+
+In soft delete mode, every destination table includes the following metadata columns:
+
+| Column | Type | Description |
+|--------|------|-------------|
+| `_fivetran_synced` | `DateTime64(9, 'UTC')` | Timestamp when the record was synced by Fivetran. Used as the version column for `SharedReplacingMergeTree`. |
+| `_fivetran_deleted` | `Bool` | Soft delete marker. Set to `true` when the source record is deleted. |
+| `_fivetran_id` | `String` | Auto-generated unique identifier. Only present when the source table has no primary keys. |
+
+#### Single primary key in the source table {#single-pk}
+
+For example, source table `users` has a primary key column `id` (`INT`) and a regular column `name` (`STRING`).
+The destination table will be defined as follows:
+
+```sql
+CREATE TABLE `users`
+(
+ `id` Int32,
+ `name` Nullable(String),
+ `_fivetran_synced` DateTime64(9, 'UTC'),
+ `_fivetran_deleted` Bool
+) ENGINE = SharedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}', _fivetran_synced)
+ORDER BY id
+SETTINGS index_granularity = 8192
+```
+
+In this case, the `id` column is chosen as a table sorting key.
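+
+Soft-deleted rows remain in the table with `_fivetran_deleted = true`; to exclude them at read time, filter on the marker. A minimal sketch using the `users` table above:
+
+```sql
+-- Exclude rows that were deleted in the source
+SELECT *
+FROM users FINAL
+WHERE _fivetran_deleted = false;
+```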
+
+#### Multiple primary keys in the source table {#multiple-pks}
+
+If the source table has multiple primary keys, they are used in order of their appearance in the Fivetran source table
+definition.
+
+For example, there is a source table `items` with primary key columns `id` (`INT`) and `name` (`STRING`), plus an
+additional regular column `description` (`STRING`). The destination table will be defined as follows:
+
+```sql
+CREATE TABLE `items`
+(
+ `id` Int32,
+ `name` String,
+ `description` Nullable(String),
+ `_fivetran_synced` DateTime64(9, 'UTC'),
+ `_fivetran_deleted` Bool
+) ENGINE = SharedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}', _fivetran_synced)
+ORDER BY (id, name)
+SETTINGS index_granularity = 8192
+```
+
+In this case, `id` and `name` columns are chosen as table sorting keys.
+
+#### No primary keys in the source table {#no-pks}
+
+If the source table has no primary keys, a unique identifier will be added by Fivetran as a `_fivetran_id` column.
+Consider an `events` table that only has the `event` (`STRING`) and `timestamp` (`LOCALDATETIME`) columns in the source.
+The destination table in that case is as follows:
+
+```sql
+CREATE TABLE events
+(
+ `event` Nullable(String),
+ `timestamp` Nullable(DateTime),
+ `_fivetran_id` String,
+ `_fivetran_synced` DateTime64(9, 'UTC'),
+ `_fivetran_deleted` Bool
+) ENGINE = SharedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}', _fivetran_synced)
+ORDER BY _fivetran_id
+SETTINGS index_granularity = 8192
+```
+
+Since `_fivetran_id` is unique and there are no other primary key options, it is used as a table sorting key.
+
+### History mode (SCD Type 2) {#history-mode}
+
+When [history mode](https://fivetran.com/docs/using-fivetran/features#historymode) is enabled,
+the destination preserves every version of each record rather than overwriting previous values.
+This implements [Slowly Changing Dimension Type 2](https://en.wikipedia.org/wiki/Slowly_changing_dimension#Type_2:_add_new_row) (SCD Type 2),
+maintaining a complete audit trail of all changes.
+
+In history mode, every destination table includes the following metadata columns:
+
+| Column | Type | Description |
+|--------|------|-------------|
+| `_fivetran_synced` | `DateTime64(9, 'UTC')` | Timestamp when the record was synced by Fivetran. Used as the version column for `SharedReplacingMergeTree`. |
+| `_fivetran_start` | `DateTime64(9, 'UTC')` | Timestamp when this version of the record became active. Part of the table's sorting key. |
+| `_fivetran_end` | `Nullable(DateTime64(9, 'UTC'))` | Timestamp when this version was superseded. Set to `2262-04-11 23:47:16` for currently active records. |
+| `_fivetran_active` | `Nullable(Bool)` | Whether this is the currently active version of the record. |
+| `_fivetran_id` | `String` | Auto-generated unique identifier. Only present when the source table has no primary keys. |
+
+The `_fivetran_start` column is always included in the `ORDER BY` clause as the last element of the compound sorting key.
+This allows multiple versions of the same record (with different start times) to coexist in the table.
+
+When a record is updated:
+- The previous version's `_fivetran_end` is set to the new version's `_fivetran_start` minus one nanosecond, and `_fivetran_active` is set to `false`.
+- The new version is inserted with `_fivetran_active` set to `true` and `_fivetran_end` set to `2262-04-11 23:47:16.000000000` (the maximum `DateTime64(9)` value).
+
+#### Single primary key in the source table {#history-single-pk}
+
+For example, source table `users` has a primary key column `id` (`INT`) and regular columns `name` (`STRING`) and `status` (`STRING`).
+The destination table in history mode will be defined as follows:
+
+```sql
+CREATE TABLE `users`
+(
+ `id` Int32,
+ `name` Nullable(String),
+ `status` Nullable(String),
+ `_fivetran_synced` DateTime64(9, 'UTC'),
+ `_fivetran_start` DateTime64(9, 'UTC'),
+ `_fivetran_end` Nullable(DateTime64(9, 'UTC')),
+ `_fivetran_active` Nullable(Bool)
+) ENGINE = SharedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}', _fivetran_synced)
+ORDER BY (id, _fivetran_start)
+SETTINGS index_granularity = 8192
+```
+
+In this case, `id` and `_fivetran_start` form the compound sorting key.
+
+After a few syncs, the table might contain the following data:
+
+| id | name | status | \_fivetran\_start | \_fivetran\_end | \_fivetran\_active |
+|----|---------|--------|----------------------------------|----------------------------------|--------------------|
+| 1 | name 1 | TODO | 2025-11-10 20:57:00.000000000 | 2025-11-11 20:56:59.999000000 | false |
+| 1 | name 11 | TODO | 2025-11-11 20:57:00.000000000 | 2262-04-11 23:47:16.000000000 | true |
+| 2 | name 2 | TODO | 2025-11-10 20:57:00.000000000 | 2262-04-11 23:47:16.000000000 | true |
+
+Record `id=1` has two versions: the original (`name 1`, inactive) and the updated one (`name 11`, active).
+Record `id=2` has only one version, which is currently active.
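+
+In history mode, the current state of the table can be queried by filtering on `_fivetran_active`. A minimal sketch, using the `users` table above:
+
+```sql
+-- Only the currently active version of each record
+SELECT id, name, status
+FROM users FINAL
+WHERE _fivetran_active = true;
+```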
+
+#### Multiple primary keys in the source table {#history-multiple-pks}
+
+If the source table has multiple primary keys, they are all included in the `ORDER BY` together with `_fivetran_start` as the last element.
+
+For example, there is a source table `items` with primary key columns `id` (`INT`) and `name` (`STRING`), plus an
+additional regular column `description` (`STRING`). The destination table in history mode will be defined as follows:
+
+```sql
+CREATE TABLE `items`
+(
+ `id` Int32,
+ `name` String,
+ `description` Nullable(String),
+ `_fivetran_synced` DateTime64(9, 'UTC'),
+ `_fivetran_start` DateTime64(9, 'UTC'),
+ `_fivetran_end` Nullable(DateTime64(9, 'UTC')),
+ `_fivetran_active` Nullable(Bool)
+) ENGINE = SharedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}', _fivetran_synced)
+ORDER BY (id, name, _fivetran_start)
+SETTINGS index_granularity = 8192
+```
+
+In this case, `id`, `name`, and `_fivetran_start` form the compound sorting key.
+
+#### No primary keys in the source table {#history-no-pks}
+
+If the source table has no primary keys, a unique identifier will be added by Fivetran as a `_fivetran_id` column,
+and `_fivetran_start` is appended to the sorting key.
+Consider an `events` table that only has the `event` (`STRING`) and `timestamp` (`LOCALDATETIME`) columns in the source.
+The destination table in history mode is as follows:
+
+```sql
+CREATE TABLE events
+(
+ `event` Nullable(String),
+ `timestamp` Nullable(DateTime),
+ `_fivetran_id` String,
+ `_fivetran_synced` DateTime64(9, 'UTC'),
+ `_fivetran_start` DateTime64(9, 'UTC'),
+ `_fivetran_end` Nullable(DateTime64(9, 'UTC')),
+ `_fivetran_active` Nullable(Bool)
+) ENGINE = SharedReplacingMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}', _fivetran_synced)
+ORDER BY (_fivetran_id, _fivetran_start)
+SETTINGS index_granularity = 8192
+```
+
+Since the source table has no primary keys, `_fivetran_id` and `_fivetran_start` form the compound sorting key.
+
+### Selecting the latest version of the data without duplicates {#selecting-latest-version}
+
+`SharedReplacingMergeTree` performs background data deduplication
+[only during merges at an unknown time](/engines/table-engines/mergetree-family/replacingmergetree).
+However, you can select the latest version of the data without duplicates at query time by using the `FINAL` keyword:
+
+```sql
+SELECT *
+FROM example FINAL
+LIMIT 1000
+```
+
+Check out the [optimizing reading queries](/integrations/fivetran/troubleshooting#optimizing-reading-queries) section in the troubleshooting guide for query optimization tips.
+
+## Retries on network failures {#retries-on-network-failures}
+
+The ClickHouse Cloud destination retries transient network errors using exponential backoff.
+Retrying is safe even when the destination has already inserted some of the data, as any resulting
+duplicates are handled by the `SharedReplacingMergeTree` table engine.
diff --git a/docs/integrations/data-ingestion/etl-tools/fivetran/troubleshooting.md b/docs/integrations/data-ingestion/etl-tools/fivetran/troubleshooting.md
new file mode 100644
index 00000000000..69848d26f6e
--- /dev/null
+++ b/docs/integrations/data-ingestion/etl-tools/fivetran/troubleshooting.md
@@ -0,0 +1,295 @@
+---
+sidebar_label: 'Troubleshooting & best practices'
+slug: /integrations/fivetran/troubleshooting
+sidebar_position: 4
+description: 'Common errors, debugging tips, and best practices for the Fivetran ClickHouse destination.'
+title: 'Troubleshooting & best practices'
+doc_type: 'guide'
+keywords: ['fivetran', 'clickhouse destination', 'troubleshooting', 'best practices', 'debugging']
+---
+
+# Troubleshooting & best practices
+
+## Common errors {#common-errors}
+
+### Grants test failed or operations failing due to missing permissions {#grants-test-failed}
+
+**Error message:**
+
+```sh
+Test grants failed, cause: user is missing the required grants on *.*: ALTER, CREATE DATABASE, CREATE TABLE, INSERT, SELECT
+```
+
+**Cause:** The Fivetran user does not have the required privileges. The connector requires `ALTER`, `CREATE DATABASE`, `CREATE TABLE`, `INSERT`, and `SELECT` grants on `*.*` (all databases and tables).
+
+:::note
+The grants check queries `system.grants` and only matches direct user grants. Privileges assigned through a ClickHouse role are not detected. See the [role-based grants](/integrations/fivetran/troubleshooting#role-based-grants) section for more details.
+:::
+
+**Solution:**
+
+Grant the required privileges directly to the Fivetran user:
+
+```sql
+GRANT CURRENT GRANTS ON *.* TO fivetran_user;
+```
+
+### Error while waiting for all mutations to be completed {#mutations-not-completed}
+
+**Error message:**
+
+```sh
+error while waiting for all mutations to be completed: ... initial cause: ...
+```
+
+**Cause:** An `ALTER TABLE ... UPDATE` or `ALTER TABLE ... DELETE` mutation was submitted, but the connector timed out waiting for it to complete across all replicas. The "initial cause" part of the error often contains the original ClickHouse error (commonly code 341, "Unfinished").
+
+This can happen when:
+- The ClickHouse Cloud cluster is under heavy load.
+- One or more nodes went down during the mutation execution.
+
+**Solutions:**
+
+1. **Check mutation progress**: Run the following query to check for pending mutations:
+ ```sql
+ SELECT database, table, mutation_id, command, create_time, is_done
+ FROM system.mutations
+ WHERE NOT is_done
+ ORDER BY create_time DESC;
+ ```
+2. **Check cluster health**: Ensure all nodes are healthy.
+3. **Wait and retry**: Mutations eventually complete once the cluster is healthy. Fivetran will retry the sync automatically.
+
+### Column mismatch error {#column-mismatch-error}
+
+**Error message:**
+
+Different errors may occur if the column mismatch is due to a schema change in the source. For example:
+
+```sh
+columns count in ClickHouse table (8) does not match the input file (6). Expected columns: id, name, ..., got: id, name, ...
+```
+
+Or:
+
+```sh
+column user_email was not found in the table definition. Table columns: ...; input file columns: ...
+```
+
+**Cause:** The columns in the ClickHouse destination table do not match the columns in the data being synced. This can happen when:
+- Columns were manually added or removed from the ClickHouse table.
+- A schema change in the source was not properly propagated.
+
+**Solutions:**
+
+1. **Do not manually modify Fivetran-managed tables.** See [best practices](/integrations/fivetran/troubleshooting#dont-modify-tables).
+2. **Alter the column back**: If you know which type the column should have, alter it back to the expected type using the [type transformation mapping](/integrations/fivetran/reference#type-mapping) as a reference (see the sketch after this list).
+3. **Re-sync the table**: In the Fivetran dashboard, trigger a historical re-sync for the affected table.
+4. **Drop and re-create**: As a last resort, drop the destination table and let Fivetran re-create it during the next sync.
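+
+As a sketch of solution 2, assuming a hypothetical `user_email` column that was manually changed and should be `Nullable(String)` according to the type mapping:
+
+```sql
+-- Hypothetical schema, table, and column names
+ALTER TABLE my_schema.users
+    MODIFY COLUMN user_email Nullable(String);
+```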
+
+### AST is too big (code 168) {#ast-too-big}
+
+**Error message:**
+
+```sh
+code: 168, message: AST is too big. Maximum: 50000
+```
+
+Or:
+
+```sh
+code: 62, message: Max query size exceeded
+```
+
+**Cause:** Large UPDATE or DELETE batches generate SQL statements with very complex abstract syntax trees. This is common with wide tables or when history mode is enabled.
+
+**Solution:**
+
+Lower `mutation_batch_size` and `hard_delete_batch_size` in the [advanced configuration](/integrations/fivetran/reference#advanced-configuration) file. Both default to `1500` and accept values between `200` and `1500`.
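+
+For example, an advanced configuration file like the following (values chosen for illustration, within the allowed 200–1500 range) lowers both batch sizes:
+
+```json
+{
+  "destination_configurations": {
+    "mutation_batch_size": 500,
+    "hard_delete_batch_size": 500
+  }
+}
+```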
+
+---
+
+### Memory limit exceeded / OOM (code 241) {#memory-limit-exceeded}
+
+**Error message:**
+
+```sh
+code: 241, message: (total) memory limit exceeded: would use 14.01 GiB
+```
+
+**Cause:** The INSERT operation requires more memory than is available. This usually happens during large initial syncs, with wide tables, or with concurrent batch operations.
+
+**Solutions:**
+
+1. **Reduce `write_batch_size`**: Try lowering it to 50,000 for large tables.
+2. **Reduce database load**: Check the load on the ClickHouse Cloud service to see if it's overloaded.
+3. **Scale up the ClickHouse Cloud service** to provide more memory.
+
+---
+
+### Unexpected EOF / Connection error {#unexpected-eof}
+
+**Error message:**
+
+```sh
+ClickHouse connection error: unexpected EOF
+```
+
+Or `FAILURE_WITH_TASK` with no stack trace in Fivetran logs.
+
+**Cause:**
+- IP access list not configured to allow Fivetran traffic.
+- Transient network issues between Fivetran and ClickHouse Cloud.
+- Corrupted or invalid source data causing the destination connector to crash.
+
+**Solutions:**
+
+1. **Check IP access list**: In ClickHouse Cloud, go to **Settings > Security** and add the [Fivetran IP addresses](https://fivetran.com/docs/using-fivetran/ips) or allow access from anywhere.
+2. **Retry**: Recent connector versions automatically retry EOF errors. Sporadic errors (1–2 per day) are likely transient.
+3. **If the issue persists**: Open a support ticket with ClickHouse providing the error time window. Also ask Fivetran support to investigate source data quality.
+
+---
+
+### Can't map type UInt64 {#uint64-type-error}
+
+**Error message:**
+
+```sh
+cause: can't map type UInt64 to Fivetran types
+```
+
+**Cause:** The connector maps `LONG` to `Int64`, never `UInt64`. This error occurs when a column type is manually altered in a Fivetran-managed table.
+
+**Solutions:**
+
+1. **Do not manually modify column types** in Fivetran-managed tables.
+2. **To recover**: Alter the column back to the expected type (e.g., `Int64`) or delete and re-sync the table.
+3. **For custom types**: Create a [materialized view](/sql-reference/statements/create/view#materialized-view) on top of the Fivetran-managed table, as sketched after this list.
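+
+A minimal sketch of option 3, assuming a hypothetical `counter` column that should be exposed as `UInt64`. A [refreshable materialized view](/materialized-view/refreshable-materialized-view) is used so that Fivetran's `UPDATE` and `DELETE` operations are reflected on each refresh:
+
+```sql
+-- Hypothetical names; cast in a view instead of altering the Fivetran-managed table
+CREATE MATERIALIZED VIEW my_schema.users_uint64
+REFRESH EVERY 1 HOUR
+ENGINE = MergeTree()
+ORDER BY id
+AS SELECT
+    id,
+    CAST(counter AS UInt64) AS counter
+FROM my_schema.users FINAL;
+```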
+
+---
+
+### No primary keys for table {#no-primary-keys}
+
+**Error message:**
+
+```sh
+Failed to alter table ... cause: no primary keys for table
+```
+
+**Cause:** Every ClickHouse table requires an `ORDER BY`. When the source has no primary key, Fivetran adds `_fivetran_id` automatically. This error occurs in edge cases where the source defines a PK but the data does not contain it.
+
+**Solutions:**
+
+1. **Contact Fivetran support** to investigate the source pipeline.
+2. **Check the source schema**: Ensure primary key columns are present in the data.
+
+---
+
+### Role-based grants failing {#role-based-grants}
+
+**Error message:**
+
+```sh
+user is missing the required grants on *.*: ALTER, CREATE DATABASE, CREATE TABLE, INSERT, SELECT
+```
+
+**Cause:** The connector checks grants with:
+
+```sql
+SELECT access_type, database, table, column FROM system.grants WHERE user_name = 'my_user'
+```
+
+This only returns direct grants. Privileges assigned via a ClickHouse role have `user_name = NULL` and `role_name = 'my_role'`, so they are invisible to this check.
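+
+To confirm whether your privileges are attached to a role rather than granted directly to the user, you can inspect `system.grants` for the role (the role name is a placeholder):
+
+```sql
+-- Privileges granted via a role are listed under role_name, not user_name
+SELECT access_type, database, table, role_name
+FROM system.grants
+WHERE role_name = 'my_role';
+```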
+
+**Solution:**
+
+**Grant privileges directly** to the Fivetran user:
+```sql
+GRANT CURRENT GRANTS ON *.* TO fivetran_user;
+```
+
+---
+
+## Best practices {#best-practices}
+
+### Dedicated ClickHouse service for Fivetran {#dedicated-service}
+
+In case of high ingestion load, consider using ClickHouse Cloud's [compute-compute separation](/cloud/reference/warehouses) to create a dedicated service for Fivetran write workloads. This isolates ingestion from analytical queries and prevents resource contention.
+
+For example, the following architecture can be used:
+
+- **Service A (writer)**: Fivetran destination + other ingestion tools (ClickPipes, Kafka connectors)
+- **Service B (reader)**: BI tools, dashboards, ad-hoc queries
+
+### Optimizing reading queries {#optimizing-reading-queries}
+
+ClickHouse uses `SharedReplacingMergeTree` for Fivetran destination tables, which is the version of the [`ReplacingMergeTree` table engine](/guides/replacing-merge-tree) in ClickHouse Cloud. Duplicate rows with the same primary key are normal — deduplication happens asynchronously during background merges. At read time, you need to be careful to avoid returning duplicate rows, as some rows may not have been deduplicated yet.
+
+Using the `FINAL` keyword is the simplest way to avoid duplicate rows, as it forces a merge of any not-yet-deduplicated rows at read time:
+
+```sql
+SELECT * FROM schema.table FINAL WHERE ...
+```
+
+There are ways to optimize this `FINAL` operation — for example, by filtering on key columns using a `WHERE` condition. For more details, see the [FINAL performance](/guides/replacing-merge-tree#final-performance) section of the ReplacingMergeTree guide.
+
+If those optimizations are not sufficient, you have additional options that avoid using `FINAL` while still handling duplicates correctly:
+- If you want to query a numeric column that is always incrementing, [you can use `max(the_column)`](/guides/developer/deduplication#avoiding-final), as sketched after this list.
+- If you need to retrieve the latest value for some columns for a particular key, you can use [`argMax(the_column, _fivetran_id)`](https://clickhouse.com/blog/10-best-practice-tips#perfecting_replacingmergetree).
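+
+A minimal sketch of the `max()` approach, with hypothetical table and column names:
+
+```sql
+-- For an always-incrementing column, the highest value per key is the latest one
+SELECT id, max(view_count) AS view_count
+FROM my_schema.pages
+GROUP BY id;
+```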
+
+### Primary key and ORDER BY optimization {#primary-key-optimization}
+
+Fivetran replicates the source table's primary key as the ClickHouse `ORDER BY` clause. When the source has no PK, `_fivetran_id` (a UUID) becomes the sorting key, which can lead to poor query performance because ClickHouse builds its [sparse primary index](/guides/best-practices/sparse-primary-indexes) from the `ORDER BY` columns.
+
+**Recommendations if other optimizations are not sufficient:**
+
+1. **Treat Fivetran tables as raw staging tables.** Do not query them directly for analytics.
+2. **If queries are still not performant enough**, use a [Refreshable Materialized View](/materialized-view/refreshable-materialized-view) to create a copy of the table with an `ORDER BY` optimized for your query patterns. Unlike incremental materialized views, refreshable materialized views re-run the full query on a schedule, which correctly handles the `UPDATE` and `DELETE` operations that Fivetran issues during syncs:
+ ```sql
+ CREATE MATERIALIZED VIEW schema.table_optimized
+ REFRESH EVERY 1 HOUR
+ ENGINE = ReplacingMergeTree()
+ ORDER BY (user_id, event_date)
+ AS SELECT * FROM schema.table_raw FINAL;
+ ```
+
+ :::note
+ Avoid incremental (non-refreshable) materialized views for Fivetran-managed tables. Because Fivetran issues `UPDATE` and `DELETE` operations to keep data in sync, incremental materialized views will not reflect these changes and will contain stale or incorrect data.
+ :::
+
+### Don't manually modify Fivetran-managed tables {#dont-modify-tables}
+
+Avoid manual DDL changes (e.g., `ALTER TABLE ... MODIFY COLUMN`) to tables managed by Fivetran. The connector expects the schema it created. Manual changes can cause [type mapping errors](#uint64-type-error) and schema mismatch failures.
+
+Use materialized views for custom transformations.
+
+## Debugging operations {#debugging}
+
+When diagnosing failures:
+- Check the ClickHouse `system.query_log` for server-side issues.
+- Ask Fivetran support for help with client-side issues.
+
+For connector bugs, [create a GitHub issue](https://github.com/ClickHouse/clickhouse-fivetran-destination/issues) or contact [ClickHouse Support](/about-us/support).
+
+### Debugging Fivetran syncs {#debugging-fivetran-syncs}
+
+Use the following queries to diagnose sync failures on the ClickHouse side.
+
+#### Check recent ClickHouse errors related to Fivetran {#check-errors}
+
+```sql
+SELECT event_time, query, exception_code, exception
+FROM system.query_log
+WHERE client_name LIKE 'fivetran-destination%'
+ AND exception_code > 0
+ORDER BY event_time DESC
+LIMIT 50;
+```
+
+#### Check recent Fivetran user activity {#check-activity}
+
+```sql
+SELECT event_time, query_kind, query, exception_code, exception
+FROM system.query_log
+WHERE user = '{fivetran_user}'
+ORDER BY event_time DESC
+LIMIT 100;
+```
diff --git a/sidebars.js b/sidebars.js
index c8199306f88..cd2f15925e0 100644
--- a/sidebars.js
+++ b/sidebars.js
@@ -1116,7 +1116,18 @@ const sidebars = {
},
'integrations/data-ingestion/etl-tools/dlt-and-clickhouse',
'integrations/data-ingestion/etl-tools/estuary',
- 'integrations/data-ingestion/etl-tools/fivetran/index',
+ {
+ type: 'category',
+ label: 'Fivetran',
+ className: 'top-nav-item',
+ collapsed: true,
+ collapsible: true,
+ link: { type: 'doc', id: 'integrations/data-ingestion/etl-tools/fivetran/index' },
+ items: [
+ 'integrations/data-ingestion/etl-tools/fivetran/reference',
+ 'integrations/data-ingestion/etl-tools/fivetran/troubleshooting',
+ ],
+ },
'integrations/data-ingestion/etl-tools/nifi-and-clickhouse',
'integrations/data-ingestion/etl-tools/vector-to-clickhouse',
{