13 changes: 1 addition & 12 deletions docs/docs.json
@@ -326,18 +326,7 @@
"pages": [
"python-sdk/api-reference/overview",
"python-sdk/api-reference/test-decorators",
"python-sdk/api-reference/table-assets",
"python-sdk/api-reference/tests",
"python-sdk/api-reference/test-executions"
]
},
{
"group": "Guides",
"pages": [
"python-sdk/guides/authentication",
"python-sdk/guides/test-decorators",
"python-sdk/guides/sending-data",
"python-sdk/guides/best-practices"
"python-sdk/api-reference/table-assets"
]
}
]
32 changes: 20 additions & 12 deletions docs/python-sdk/api-reference/overview.mdx
@@ -39,7 +39,7 @@ client = ElementaryCloudClient(project_id, api_key, url)
Where:
- `project_id` is your Python project identifier (chosen by you, used to deduplicate and identify reported assets)
- `api_key` is your API token (generated from the steps above)
- `url` is the full SDK ingest endpoint URL: `{base_url}/sdk-ingest/{env_id}/batch`
- `url` is the full SDK ingest endpoint URL (the Elementary team will provide you with this URL): `{base_url}/sdk-ingest/{env_id}/batch`
- Example: `https://app.elementary-data.com/sdk-ingest/a6b2425d-36e2-4e13-8458-9825688ca1f2/batch`
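
A minimal sketch of assembling that URL from its parts (`base_url` and `env_id` here are placeholders taken from the example above; use the values the Elementary team provides):

```python
# Assemble the SDK ingest endpoint URL: {base_url}/sdk-ingest/{env_id}/batch
# base_url and env_id are placeholder values for illustration only.
base_url = "https://app.elementary-data.com"
env_id = "a6b2425d-36e2-4e13-8458-9825688ca1f2"

url = f"{base_url}/sdk-ingest/{env_id}/batch"
```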

## Test Context
@@ -58,6 +58,17 @@ with elementary_test_context(asset=asset) as ctx:
client.send_to_cloud(ctx)
```

### `raise_on_error`

By default, `elementary_test_context` uses `raise_on_error=False`: if a decorated test (or any other code inside the context) raises an exception, the SDK **captures it and records an `ERROR` execution**, so you can still send results to Elementary Cloud without crashing your pipeline.

If you prefer **fail-fast** behavior (for example in CI), pass `raise_on_error=True` to re-raise exceptions after they are recorded:

```python
with elementary_test_context(asset=asset, raise_on_error=True) as ctx:
run_my_tests(df)
```
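
The two modes can be illustrated with a toy context manager. This is an illustrative stand-in for the semantics described above, not the SDK's actual implementation:

```python
from contextlib import contextmanager

@contextmanager
def toy_test_context(raise_on_error: bool = False):
    """Illustrative stand-in for elementary_test_context's error handling."""
    executions = []
    try:
        yield executions
    except Exception as exc:
        # Record the failure as an ERROR execution instead of crashing.
        executions.append(("ERROR", str(exc)))
        if raise_on_error:
            raise  # fail fast: re-raise after recording

# Default mode: the exception is captured and recorded.
with toy_test_context() as executions:
    raise ValueError("bad data")
print(executions)  # [('ERROR', 'bad data')]
```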

## Test Decorators

The SDK provides decorators to define tests:
@@ -75,15 +86,17 @@ You can also use context managers for inline tests:
with elementary_test_context(asset=asset) as ctx:
# Using context managers
with ctx.boolean_test(name="my_test", description="Inline test") as my_bool_test:
my_bool_test.assert_value(False)
my_bool_test.assert_value(my_test_function())

with ctx.expected_values_test(
name="country_count",
expected=[2, 3],
allow_none=True,
metadata={"my_metadata_field": "my_metadata_value"},
) as my_expected_values_test:
# This will fail
my_expected_values_test.assert_value(5)
# This will pass
my_expected_values_test.assert_value(3)

with ctx.expected_range_test(
@@ -103,17 +116,14 @@

## Supported Objects

The SDK supports three types of objects:
The SDK supports reporting table assets and test results.

<CardGroup cols={3}>
<CardGroup cols={2}>
<Card title="Table Assets" icon="table" href="/python-sdk/api-reference/table-assets" >
Register tables and views in your data warehouse
</Card>
<Card title="Tests" icon="flask" href="/python-sdk/api-reference/tests" >
Define data quality tests
</Card>
<Card title="Test Executions" icon="play" href="/python-sdk/api-reference/test-executions" >
Report test execution results
<Card title="Test Decorators" icon="flask" href="/python-sdk/api-reference/test-decorators" >
Define data quality tests using decorators
</Card>
</CardGroup>

@@ -143,7 +153,6 @@ except Exception as e:
- **Run multiple tests in one context** - All tests in a single `elementary_test_context` are automatically batched
- **Use descriptive test names** - Clear names help identify tests in the Elementary UI
- **Include asset metadata** - Add descriptions, owners, tags, and dependencies to assets
- **Handle errors gracefully** - Wrap `send_to_cloud` calls in try-except blocks

<Tip>
All tests run within a single `elementary_test_context` are automatically batched and sent together.
@@ -153,6 +162,5 @@

- [Test Decorators](/python-sdk/api-reference/test-decorators) - Complete reference for all test decorators
- [Table Assets](/python-sdk/api-reference/table-assets) - Learn about table asset structure
- [Tests](/python-sdk/api-reference/tests) - Understand test definitions
- [Test Executions](/python-sdk/api-reference/test-executions) - See how to report test results
- [Quickstart](/python-sdk/quickstart) - Send your first test results to Elementary Cloud

9 changes: 4 additions & 5 deletions docs/python-sdk/api-reference/table-assets.mdx
@@ -17,7 +17,7 @@ asset = TableAsset(
description="string", # Optional: Table description
owners=["string"], # Optional: List of owners (emails or usernames)
tags=["string"], # Optional: List of tags
depends_on=["string"] # Optional: List of upstream asset IDs
depends_on=["string"] # Optional: List of upstream fully qualified table names
)
```

@@ -37,7 +37,7 @@ asset = TableAsset(
| `description` | string | Human-readable description of the table |
| `owners` | list[string] | List of owners (email addresses or usernames) |
| `tags` | list[string] | List of tags for categorization |
| `depends_on` | list[string] | List of upstream asset IDs (e.g., `["prod.public.customers", "prod.public.orders"]`) for lineage tracking |
| `depends_on` | list[string] | List of upstream fully qualified table names (e.g., `["prod.public.customers", "prod.public.orders"]`) for lineage tracking |

## Example

@@ -75,7 +75,6 @@ Table assets are updated on each ingest, so include all current metadata in every ingest

## Related Documentation

- [Tests](/python-sdk/api-reference/tests) - Define tests for your table assets
- [Test Executions](/python-sdk/api-reference/test-executions) - Report test results
- [Sending Data Guide](/python-sdk/guides/sending-data) - Learn how to send table assets
- [Test Decorators](/python-sdk/api-reference/test-decorators) - Define tests for your table assets
- [API Reference](/python-sdk/api-reference/overview) - Overview of the SDK API

34 changes: 14 additions & 20 deletions docs/python-sdk/api-reference/test-decorators.mdx
@@ -49,8 +49,8 @@ def test_function(df: pd.DataFrame) -> bool:
| `tags` | list[str] | No | `None` | List of tags |
| `owners` | list[str] | No | `None` | List of owners |
| `metadata` | dict | No | `None` | Additional metadata |
| `quality_dimension` | QualityDimension | No | `None` | Quality dimension (defaults to VALIDITY if column_name is set) |
| `skip` | bool | No | `False` | Whether to skip this test |
| `quality_dimension` | QualityDimension | No | `None` | Quality dimension (defaults to VALIDITY) |
| `skip` | bool | No | `False` | Whether to skip this test. Useful when you want the test to appear in Elementary Cloud without executing it in this run. |

### Example

@@ -68,7 +68,7 @@ def test_unique_ids(df: pd.DataFrame) -> bool:

## @expected_range

Tests that return a numeric value that should fall within a range.
Tests that return a numeric value that should fall within a range. They can also return a list of numeric values or a pandas Series.
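
For instance, all three return shapes can be produced with plain pandas on a toy frame (`age` is a hypothetical column; how the decorator evaluates a list or Series against the range is up to the SDK):

```python
import pandas as pd

df = pd.DataFrame({"age": [20, 30, 40]})

mean_age = float(df["age"].mean())  # a single numeric value
age_list = df["age"].tolist()       # a list of numeric values
age_series = df["age"]              # a pandas Series of numeric values
```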

### Signature

@@ -86,9 +86,11 @@
quality_dimension: QualityDimension | None = None,
skip: bool = False,
)
def test_function(df: pd.DataFrame) -> float:
def test_function(df: pd.DataFrame) -> float | list[float] | pd.Series:
# Your test logic
return 25.5 # Numeric value
return df["age"].mean() # Numeric value
# return [1, 2, 3] # Numeric values
# return df["age"] # pandas Series
```

### Parameters
@@ -98,10 +100,7 @@ def test_function(df: pd.DataFrame) -> float:
| `name` | str | Yes | - | Test name |
| `min` | float | No | `None` | Minimum expected value (inclusive) |
| `max` | float | No | `None` | Maximum expected value (inclusive) |
| `severity` | str | No | `"ERROR"` | Test severity |
| `description` | str | No | `None` | Test description |
| `column_name` | str | No | `None` | Column being tested |
| `tags`, `owners`, `metadata`, `quality_dimension`, `skip` | - | No | - | Same as `@boolean_test` |
| `severity`, `description`, `column_name`, `tags`, `owners`, `metadata`, `quality_dimension`, `skip` | - | No | - | Same as `@boolean_test` |

### Example

@@ -120,7 +119,7 @@ def test_average_age(df: pd.DataFrame) -> float:

## @expected_values

Tests that return a value that should match one of a list of expected values.
Tests that return a value (or values) that should match one of a list of expected values.

### Signature

@@ -150,10 +149,7 @@ def test_function(df: pd.DataFrame) -> Any:
| `name` | str | Yes | - | Test name |
| `expected` | Any \| list[Any] | Yes | - | Expected value(s) - can be single value or list |
| `allow_none` | bool | No | `False` | Whether to allow None values |
| `severity` | str | No | `"ERROR"` | Test severity |
| `description` | str | No | `None` | Test description |
| `column_name` | str | No | `None` | Column being tested |
| `tags`, `owners`, `metadata`, `quality_dimension`, `skip` | - | No | - | Same as `@boolean_test` |
| `severity`, `description`, `column_name`, `tags`, `owners`, `metadata`, `quality_dimension`, `skip` | - | No | - | Same as `@boolean_test` |

### Example

@@ -199,9 +195,7 @@ def test_function(df: pd.DataFrame) -> Sized:
| `name` | str | Yes | - | Test name |
| `min` | int | No | `None` | Minimum expected row count (inclusive) |
| `max` | int | No | `None` | Maximum expected row count (inclusive) |
| `severity` | str | No | `"ERROR"` | Test severity |
| `description` | str | No | `None` | Test description |
| `tags`, `owners`, `metadata`, `skip` | - | No | - | Same as `@boolean_test` |
| `severity`, `description`, `tags`, `owners`, `metadata`, `skip` | - | No | - | Same as `@boolean_test` |

### Example

@@ -214,7 +208,7 @@
description="Validate user count is within expected range",
)
def get_users_df(df: pd.DataFrame) -> pd.DataFrame:
"""Return the dataframe - decorator calls len() on it."""
"""Return the DataFrame; the decorator calls len() on it."""
return df
```

@@ -240,6 +234,6 @@ All decorators support these common parameters:
## Related Documentation

- [Quickstart](/python-sdk/quickstart) - Get started with test decorators
- [Sending Data](/python-sdk/guides/sending-data) - Learn how to send test results
- [Best Practices](/python-sdk/guides/best-practices) - Best practices for using the SDK
- [API Reference](/python-sdk/api-reference/overview) - Overview of the SDK API
- [Table Assets](/python-sdk/api-reference/table-assets) - Register tables and views in your data warehouse

156 changes: 0 additions & 156 deletions docs/python-sdk/api-reference/test-executions.mdx

This file was deleted.
