This directory contains end-to-end (e2e) tests, written using the Godog framework.
Godog is a Behavior-Driven Development (BDD) framework that allows you to write tests in a human-readable format called Gherkin. Tests are written as scenarios using Given-When-Then syntax, making them accessible to both technical and non-technical stakeholders.
Benefits:
- Readable: Tests serve as living documentation
- Maintainable: Reusable step definitions reduce code duplication
- Collaborative: Product owners and developers share the same test specifications
- Structured: Clear separation between test scenarios and implementation
```
test/e2e/
├── README.md                 # This file
├── features_test.go          # Test runner and suite initialization
├── features/                 # Gherkin feature files
│   ├── install.feature       # ClusterExtension installation scenarios
│   ├── update.feature        # ClusterExtension update scenarios
│   ├── recover.feature       # Recovery scenarios
│   ├── status.feature        # ClusterExtension status scenarios
│   └── metrics.feature       # Metrics endpoint scenarios
└── steps/                    # Step definitions and test utilities
    ├── steps.go              # Step definition implementations
    ├── hooks.go              # Test hooks and scenario context
    └── testdata/             # Test data (RBAC templates)
        ├── serviceaccount-template.yaml
        ├── olm-sa-helm-rbac-template.yaml
        ├── olm-sa-boxcutter-rbac-template.yaml
        ├── pvc-probe-sa-boxcutter-rbac-template.yaml
        ├── cluster-admin-rbac-template.yaml
        └── metrics-reader-rbac-template.yaml
```
`features_test.go` — the main test entry point that configures and runs the Godog test suite.
`features/` — Gherkin files that describe test scenarios in natural language.
Structure:

```gherkin
Feature: [Feature Name]
  [Feature description]

  Background:
    [Common setup steps for all scenarios]

  Scenario: [Scenario Name]
    Given [precondition]
    When [action]
    Then [expected result]
    And [additional assertions]
```

Example:
```gherkin
Feature: Install ClusterExtension

  Background:
    Given OLM is available
    And "test" catalog serves bundles
    And Service account "olm-sa" with needed permissions is available in test namespace

  Scenario: Install latest available version from the default channel
    When ClusterExtension is applied
      """
      apiVersion: olm.operatorframework.io/v1
      kind: ClusterExtension
      metadata:
        name: ${NAME}
      spec:
        namespace: ${TEST_NAMESPACE}
        serviceAccount:
          name: olm-sa
        source:
          sourceType: Catalog
          catalog:
            packageName: test
            selector:
              matchLabels:
                "olm.operatorframework.io/metadata.name": test-catalog
      ...
      """
    Then ClusterExtension is rolled out
    And ClusterExtension is available
```

`steps/steps.go` — Go functions that implement the steps defined in feature files. Each step is registered with a regex pattern that matches the Gherkin text.
Registration:

```go
func RegisterSteps(sc *godog.ScenarioContext) {
	sc.Step(`^OLM is available$`, OLMisAvailable)
	sc.Step(`^bundle "([^"]+)" is installed in version "([^"]+)"$`, BundleInstalled)
	sc.Step(`^ClusterExtension is applied$`, ResourceIsApplied)
	// ... more steps
}
```

Step implementation pattern:
```go
func BundleInstalled(ctx context.Context, name, version string) error {
	sc := scenarioCtx(ctx)
	waitFor(ctx, func() bool {
		v, err := kubectl("get", "clusterextension", sc.clusterExtensionName, "-o", "jsonpath={.status.install.bundle}")
		if err != nil {
			return false
		}
		var bundle map[string]interface{}
		if err := json.Unmarshal([]byte(v), &bundle); err != nil {
			return false
		}
		return bundle["name"] == name && bundle["version"] == version
	})
	return nil
}
```

`steps/hooks.go` — manages the test lifecycle and scenario-specific context.
Hooks:
- `CheckFeatureTags`: Skips scenarios based on feature gate tags (e.g., `@WebhookProviderCertManager`)
- `CreateScenarioContext`: Creates a unique namespace and names for each scenario
- `ScenarioCleanup`: Cleans up resources after each scenario
Variable Substitution:
Replaces `${TEST_NAMESPACE}`, `${NAME}`, `${SCENARIO_ID}`, `${PACKAGE:<name>}`, and `${CATALOG:<name>}` with scenario-specific values.
Create a new .feature file in test/e2e/features/:
```gherkin
Feature: Your Feature Name
  Description of what this feature tests

  Background:
    Given OLM is available
    And "test" catalog serves bundles

  Scenario: Your scenario description
    When [some action]
    Then [expected outcome]
```

Add step implementations in steps/steps.go:
```go
func RegisterSteps(sc *godog.ScenarioContext) {
	// ... existing steps
	sc.Step(`^your step pattern "([^"]+)"$`, YourStepFunction)
}

func YourStepFunction(ctx context.Context, param string) error {
	sc := scenarioCtx(ctx)
	// Implementation
	return nil
}
```

Leverage existing steps for common operations:
- Setup: `Given OLM is available`, `And "test" catalog serves bundles`
- Resource management: `When ClusterExtension is applied`, `And resource is applied`
- Assertions: `Then ClusterExtension is available`, `And bundle "..." is installed`
- Conditions: `Then ClusterExtension reports Progressing as True with Reason Retrying`
Use these variables in YAML templates:
- `${NAME}`: Scenario-specific ClusterExtension name (e.g., `ce-123`)
- `${COS_NAME}`: Scenario-specific ClusterObjectSet name (e.g., `cos-123`; for applying ClusterObjectSets directly)
- `${TEST_NAMESPACE}`: Scenario-specific namespace (e.g., `ns-123`)
- `${SCENARIO_ID}`: Unique scenario identifier used for resource name isolation
- `${PACKAGE:<name>}`: Parameterized package name (e.g., `${PACKAGE:test}` expands to `test-<scenario-id>`)
- `${CATALOG:<name>}`: Catalog resource name (e.g., `${CATALOG:test}` expands to `test-catalog-<scenario-id>`)
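As an illustration, substitution of these variables could be implemented with the standard library alone. The following is a hypothetical sketch; `expandTemplate` and its exact behavior are invented here and are not the actual hook code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// expandTemplate replaces scenario variables in a YAML docstring.
// Illustrative helper only; the real hook implementation may differ.
func expandTemplate(doc, scenarioID string) string {
	r := strings.NewReplacer(
		"${NAME}", "ce-"+scenarioID,
		"${COS_NAME}", "cos-"+scenarioID,
		"${TEST_NAMESPACE}", "ns-"+scenarioID,
		"${SCENARIO_ID}", scenarioID,
	)
	doc = r.Replace(doc)

	// Parameterized forms: ${PACKAGE:<name>} and ${CATALOG:<name>}.
	pkg := regexp.MustCompile(`\$\{PACKAGE:([^}]+)\}`)
	doc = pkg.ReplaceAllString(doc, "${1}-"+scenarioID)
	cat := regexp.MustCompile(`\$\{CATALOG:([^}]+)\}`)
	doc = cat.ReplaceAllString(doc, "${1}-catalog-"+scenarioID)
	return doc
}

func main() {
	fmt.Println(expandTemplate("name: ${NAME} in ${TEST_NAMESPACE}, pkg ${PACKAGE:test}", "123"))
	// name: ce-123 in ns-123, pkg test-123
}
```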
Tags can be used for different purposes in the test suite:
Use tags to conditionally run scenarios based on feature gates:
```gherkin
@WebhookProviderCertManager
Scenario: Install operator having webhooks
```

Scenarios are skipped if the feature gate is not enabled on the deployed controller.
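The skip decision can be sketched as follows. This is a hypothetical, stdlib-only illustration with invented names (`gateTags`, `shouldSkip`), not the real `CheckFeatureTags` hook:

```go
package main

import "fmt"

// gateTags lists scenario tags that correspond to controller feature gates
// (invented for illustration); other tags, such as @Serial, are ignored here.
var gateTags = map[string]bool{
	"WebhookProviderCertManager": true,
}

// shouldSkip reports whether a scenario must be skipped because one of its
// tags names a feature gate that is not enabled on the deployed controller.
func shouldSkip(scenarioTags []string, enabledGates map[string]bool) bool {
	for _, tag := range scenarioTags {
		name := tag[1:] // strip the leading '@'
		if gateTags[name] && !enabledGates[name] {
			return true
		}
	}
	return false
}

func main() {
	enabled := map[string]bool{} // the gate is disabled on this cluster
	fmt.Println(shouldSkip([]string{"@WebhookProviderCertManager"}, enabled)) // true
}
```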
By default, scenarios run concurrently (up to 100 parallel scenarios). However, some tests must run serially, typically because they:
- Modify shared cluster resources (e.g., cluster-wide TLS configuration)
- Have resource constraints that prevent parallel execution
- Require exclusive access to a resource
To mark a test for serial execution, add the @Serial tag:
```gherkin
@Serial
Feature: TLS profile enforcement on metrics endpoints

  Scenario: Test TLS configuration
    Given the "catalogd" deployment is configured with custom TLS settings
    ...
```

The test runner automatically separates scenarios:

- Scenarios without `@Serial` run concurrently in the first test phase
- Scenarios with `@Serial` run sequentially in a separate serial test phase
Run the suite:

```shell
make test-e2e
```

or

```shell
make test-experimental-e2e
```

Run a single feature file:

```shell
go test test/e2e/features_test.go -- features/install.feature
```

Run scenarios matching a tag:

```shell
go test test/e2e/features_test.go --godog.tags="@WebhookProviderCertManager"
```

Note that setting tags this way disables the automatic test parallelization. To run in parallel with custom tags, re-enable it explicitly, for instance with `--godog.concurrency=100`; when doing so, it is highly recommended to also add `&& ~@Serial` to the tags:

```shell
go test test/e2e/features_test.go --godog.tags="@WebhookProviderCertManager && ~@Serial" --godog.concurrency=100
```

Enable verbose and debug output:

```shell
go test -v test/e2e/features_test.go --log.debug
```

Godog options can be passed after `--`:

```shell
go test test/e2e/features_test.go \
  --godog.format=pretty \
  --godog.tags="@WebhookProviderCertManager"
```

Available formats: `pretty`, `cucumber`, `progress`, `junit`
Custom flags:

- `--log.debug`: Enable debug logging (development mode)
- `--k8s.cli=<path>`: Specify the path to the Kubernetes CLI (default: `kubectl`). Useful for using `oc` or a specific kubectl binary.

Example:

```shell
go test test/e2e/features_test.go --log.debug --k8s.cli=oc
```

Environment variables:

- `KUBECONFIG`: Path to the kubeconfig file (defaults to `~/.kube/config`)
- `E2E_SUMMARY_OUTPUT`: Path to write a test summary (optional)
- `CLUSTER_REGISTRY_HOST`: In-cluster registry host for pulling catalog images
Each scenario runs in its own namespace with unique resource names, ensuring complete isolation:
- Namespace: `ns-{scenario-id}`
- ClusterExtension: `ce-{scenario-id}`
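Unique per-scenario names can be generated from a single shared counter, so even with up to 100 scenarios in flight the names never collide. A hypothetical sketch (the real hook may derive scenario IDs differently):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// scenarioCounter hands out a unique ID per scenario; atomic increments keep
// it safe under concurrent scenario startup. Illustrative names only.
var scenarioCounter atomic.Int64

type scenarioNames struct {
	Namespace        string
	ClusterExtension string
}

func newScenarioNames() scenarioNames {
	id := scenarioCounter.Add(1)
	return scenarioNames{
		Namespace:        fmt.Sprintf("ns-%d", id),
		ClusterExtension: fmt.Sprintf("ce-%d", id),
	}
}

func main() {
	a := newScenarioNames()
	b := newScenarioNames()
	fmt.Println(a.Namespace, a.ClusterExtension, b.Namespace)
}
```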
The ScenarioCleanup hook ensures all resources are deleted after each scenario:
- Deletes ClusterExtensions and ClusterObjectSets
- Deletes ClusterCatalogs
- Deletes namespaces
- Deletes added resources
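One common way to implement such a hook is a LIFO cleanup stack: each step that creates a resource registers a delete action, and the hook runs them in reverse order so dependent resources are removed before what they depend on. A hypothetical sketch (not the actual `ScenarioCleanup` implementation):

```go
package main

import "fmt"

// cleanupStack collects delete actions as a scenario creates resources and
// replays them in reverse (LIFO) order when the scenario ends.
type cleanupStack struct {
	deletes []func()
}

func (c *cleanupStack) register(f func()) { c.deletes = append(c.deletes, f) }

func (c *cleanupStack) run() {
	for i := len(c.deletes) - 1; i >= 0; i-- {
		c.deletes[i]()
	}
}

func main() {
	var c cleanupStack
	c.register(func() { fmt.Println("delete namespace ns-123") })
	c.register(func() { fmt.Println("delete clusterextension ce-123") })
	c.run() // deletes the ClusterExtension first, then the namespace
}
```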
Resources are managed declaratively using YAML templates embedded in feature files as docstrings:
```gherkin
When ClusterExtension is applied
  """
  apiVersion: olm.operatorframework.io/v1
  kind: ClusterExtension
  metadata:
    name: ${NAME}
  spec:
    ...
  """
```

All asynchronous operations use `waitFor` with a consistent timeout (300s) and tick (1s):
```go
waitFor(ctx, func() bool {
	// Check condition
	return conditionMet
})
```

Tests automatically detect enabled feature gates from the running controller and skip scenarios that require disabled features.
A list of available, implemented steps can be obtained by running:
```shell
go test test/e2e/features_test.go -d
```

When working with Claude Code, run the `/list-e2e-steps` command to get a categorized
reference of all step definitions including parameters, DocString expectations, polling behavior, and handler locations.
This is useful when writing new feature files or debugging existing scenarios.
- Keep scenarios focused: Each scenario should test one specific behavior
- Use Background wisely: Common setup steps belong in Background
- Reuse steps: Leverage existing step definitions before creating new ones
- Meaningful names: Scenario names should clearly describe what is being tested
- Avoid implementation details: Focus on behavior, not implementation