# Contributing to the CrowdStrike Terraform Provider

This guide covers both the practical aspects of setting up and contributing to the CrowdStrike Terraform Provider and the architectural decisions and design patterns that guide its development.
- Go 1.21+ installed and configured.
- Terraform v1.8+ installed locally.
- pre-commit for code quality hooks (recommended).
Create a `.terraformrc` file in your home directory:

```shell
touch ~/.terraformrc
```

Edit the `.terraformrc` file to look like this:

```hcl
provider_installation {
  dev_overrides {
    "registry.terraform.io/crowdstrike/crowdstrike" = "/path/to/go/bin"
  }

  direct {}
}
```

Replace `/path/to/go/bin` with your `GOBIN` path. You can run `go env GOBIN` to find the path where Go installs your binaries. If `GOBIN` is not set, use the default path of `$HOME/go/bin` (e.g., `/Users/<Username>/go/bin` on macOS).

Terraform will now use the locally built provider when you run Terraform configurations that reference the CrowdStrike provider.
Clone the repository:

```shell
git clone https://github.com/CrowdStrike/terraform-provider-crowdstrike.git
cd terraform-provider-crowdstrike
```

Build the CrowdStrike provider:

```shell
make build
```

Run `make build` anytime you change the provider or pull new code from the repository to update your local installation.
Pre-commit hooks help ensure code quality and consistency by running automated checks before each commit. They catch common issues early and auto-fix many formatting problems.
First, install pre-commit if you haven't already:

```shell
# https://pre-commit.com/#install
pip install pre-commit
```

After cloning the repository, install the pre-commit hooks:

```shell
pre-commit install
```

This installs the hooks defined in `.pre-commit-config.yaml` so they run automatically on each `git commit`. If you prefer not to run the hooks automatically, skip this step and run them manually with `pre-commit run`.
Automatic: If you have installed the pre-commit hooks, they will run automatically on each commit. If any hook fails or makes changes, the commit will be aborted. Review the changes and commit again.
Manual execution:

```shell
# Run hooks on staged files only
pre-commit run

# Run hooks on all files
pre-commit run -a
```

The hooks cover:

- Go linting & formatting: `golangci-lint` runs comprehensive linting, including formatting, static analysis, and style checks with auto-fix
- Module cleanup: `go mod tidy` keeps dependencies clean
- Documentation: `go generate` keeps docs up-to-date (only runs when files in `examples/` or `internal/` change)
- Terraform formatting: `terraform fmt` formats `.tf` files
- General quality: hooks for general code quality

Performance: hooks are designed to be fast and efficient, running only on relevant file changes.
Follow these commit message conventions for consistency. Since we use squash merges, maintainers will ensure final messages follow these standards.
```
<type>(<scope>): <description> (#PR)

[optional body]

[optional footer]
```

Types:

- `feat`: New features/resources
- `fix`: Bug fixes
- `refactor`: Code refactoring
- `test`: Test additions/changes
- `chore`: Maintenance tasks

Scopes:

- `<resource_name>`: Resource names (drop the `crowdstrike_` prefix)
- `provider`: Core provider functionality
- `docs`: Documentation updates
- `tools`: Development tooling
- `ci`: CI/CD pipeline changes
- `deps`: Dependency updates
- `tests`: Test-specific changes
- Imperative mood: Use "add" not "added", "fix" not "fixed"
- Lowercase: Start the description with a lowercase letter after the colon
- Length: Keep the subject line under 72 characters
- Issue reference: Include the issue number in the footer when applicable
- Be specific: Clearly describe what changed, not how
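As an illustration, the subject-line rules above can be sketched as a small Go check. This is a hypothetical helper, not part of the provider; the type list mirrors the conventions above, and the "imperative mood" rule is left to human review since it cannot be checked mechanically:

```go
package main

import (
	"fmt"
	"regexp"
	"unicode"
	"unicode/utf8"
)

// subjectRe matches "<type>(<scope>): <description>"; the scope is optional.
var subjectRe = regexp.MustCompile(`^(feat|fix|refactor|test|chore)(\([a-z0-9_]+\))?: (.+)$`)

// checkSubject returns the convention violations found in a commit subject
// line; an empty result means the subject passes.
func checkSubject(subject string) []string {
	var problems []string
	if utf8.RuneCountInString(subject) >= 72 {
		problems = append(problems, "keep the subject line under 72 characters")
	}
	m := subjectRe.FindStringSubmatch(subject)
	if m == nil {
		problems = append(problems, "subject must look like <type>(<scope>): <description>")
		return problems
	}
	// m[3] is the description; it must start with a lowercase letter.
	if r, _ := utf8.DecodeRuneInString(m[3]); unicode.IsUpper(r) {
		problems = append(problems, "start the description with a lowercase letter")
	}
	return problems
}

func main() {
	fmt.Println(checkSubject("feat(host_group): add advanced filtering support")) // Prints: []
	fmt.Println(checkSubject("feat(host_group): Added filtering"))                // Prints: [start the description with a lowercase letter]
}
```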
```
# Resource changes
feat(sensor_visibility_exclusion): add new resource
fix(default_sensor_update_policy): require replace on updates
feat(prevention_policy_attachment): add new resource

# System changes
chore(deps): bump gofalcon to v0.13.4
fix(docs): default content update policy categorization
chore(ci): add pre-commit hooks configuration

# Multi-line example
feat(host_group): add advanced filtering support

Add support for complex filtering expressions in host group queries.
This enables more precise host targeting for policy assignments.

Closes #145
```

Follow these steps to add a new Terraform resource to the provider:
1. Scaffold the Resource
   - Use the resource generator to scaffold files:

     ```shell
     go run tools/resource/gen.go <ResourceName>
     # Example: go run tools/resource/gen.go host_group
     ```

   - This creates a Go file in `internal/<resource>/`, an example in `examples/resources/`, and an import script.

2. Implement Resource Logic
   - Fill in the CRUD (Create, Read, Update, Delete) methods in the generated Go file.
   - Design the schema according to the Resource Schema Patterns section below.
   - Implement `ValidateConfig` for resource-specific validation logic.
   - Register your new resource in `internal/provider/provider.go`.

3. Add Acceptance Tests
   - Create a test file in the appropriate internal package (e.g., `internal/<resource>/<resource>_resource_test.go`).
   - Ensure tests cover the full resource lifecycle: create, update, destroy, and attribute checks.

4. Add Example and Import Script
   - Add a usage example in `examples/resources/<resource>/resource.tf`.
   - Provide an import script in `examples/resources/<resource>/import.sh`.

5. Generate Documentation
   - Run `go generate ./...` to update generated docs.

6. Build and Test
   - Run `make build` to build the provider.
   - Run `golangci-lint run ./...` to check for lint errors.
   - Run tests to verify your changes work as expected.

Key file locations:

- Resource implementation: `internal/<resource>/<resource>_resource.go`
- Acceptance tests: `internal/<resource>/<resource>_resource_test.go`
- Examples: `examples/resources/<resource>/`
- Docs: auto-generated in `docs/resources/` from schema and examples.
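A minimal acceptance test for a resource might look like the sketch below. The `testAccPreCheck` helper, the `testAccProtoV6ProviderFactories` variable, and the `host_group` attributes shown in the HCL are assumptions for illustration; reuse the helpers and schemas from the existing test packages:

```go
package hostgroup_test

import (
	"testing"

	"github.com/hashicorp/terraform-plugin-testing/helper/resource"
)

func TestAccHostGroupResource(t *testing.T) {
	resource.Test(t, resource.TestCase{
		// testAccPreCheck and testAccProtoV6ProviderFactories are assumed
		// helpers; adapt them to what the existing test packages provide.
		PreCheck:                 func() { testAccPreCheck(t) },
		ProtoV6ProviderFactories: testAccProtoV6ProviderFactories,
		Steps: []resource.TestStep{
			// Create and verify attributes
			{
				Config: `
resource "crowdstrike_host_group" "test" {
  name        = "tf-acc-test"
  description = "made by terraform"
  type        = "static"
}`,
				Check: resource.ComposeAggregateTestCheckFunc(
					resource.TestCheckResourceAttr("crowdstrike_host_group.test", "name", "tf-acc-test"),
					resource.TestCheckResourceAttrSet("crowdstrike_host_group.test", "id"),
				),
			},
			// Import and verify state round-trips
			{
				ResourceName:      "crowdstrike_host_group.test",
				ImportState:       true,
				ImportStateVerify: true,
			},
			// Update and verify the change
			{
				Config: `
resource "crowdstrike_host_group" "test" {
  name        = "tf-acc-test-updated"
  description = "made by terraform"
  type        = "static"
}`,
				Check: resource.TestCheckResourceAttr("crowdstrike_host_group.test", "name", "tf-acc-test-updated"),
			},
		},
	})
}
```

Acceptance tests run against a live API, so they require valid credentials and `TF_ACC=1`.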
This section explains the architectural decisions, idioms, and patterns that guide development of the CrowdStrike Terraform provider.
- Single Source of Truth: All API interactions must go through the `gofalcon` library. This ensures consistency and leverages upstream model validation.
- No Direct HTTP: Never use direct HTTP calls or undocumented endpoints, even for edge cases; extend `gofalcon` if necessary.
- User Experience First: Resource schemas are designed for clarity and usability, not just to mirror the API. Group related fields and use Terraform idioms (e.g., sets for collections).
- Request vs. Response Models: Only fields present in the API's request models (`Create...ReqV1`, `Update...ReqV1`) are user-settable. Fields that appear only in response models are marked as `Computed`.
- Plan Modifiers: Use `RequiresReplace` for immutable fields, `UseStateForUnknown` for IDs, etc., to ensure correct lifecycle behavior.
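As a sketch, a schema combining both plan modifiers might look like this. The attribute names are illustrative, not from a specific resource; the modifier types come from terraform-plugin-framework's `resource/schema` packages:

```go
// Requires:
//   github.com/hashicorp/terraform-plugin-framework/resource/schema
//   github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier
//   github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier
resp.Schema = schema.Schema{
	Attributes: map[string]schema.Attribute{
		// Computed ID: UseStateForUnknown avoids a perpetual
		// "(known after apply)" diff on updates.
		"id": schema.StringAttribute{
			Computed: true,
			PlanModifiers: []planmodifier.String{
				stringplanmodifier.UseStateForUnknown(),
			},
		},
		// Immutable field: changing it forces resource replacement.
		"platform": schema.StringAttribute{
			Required: true,
			PlanModifiers: []planmodifier.String{
				stringplanmodifier.RequiresReplace(),
			},
		},
	},
}
```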
- Early Feedback: All resource-specific validation is implemented in `ValidateConfig`, not in CRUD methods, to provide early feedback during `terraform plan`.
- Conditional Logic: Use `ValidateConfig` for mutually exclusive fields, conditionally required attributes, and complex validation that cannot be expressed with simple validators.
- Actionable Errors: Error messages should be actionable and user-focused, especially for common issues like insufficient API scopes.
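A sketch of a `ValidateConfig` implementation covering these points follows. The resource type, model, and attribute names (`assignment_rule`, `hostnames`, `type`) are illustrative assumptions, not the actual host group schema:

```go
import (
	"context"

	"github.com/hashicorp/terraform-plugin-framework/path"
	"github.com/hashicorp/terraform-plugin-framework/resource"
)

func (r *hostGroupResource) ValidateConfig(
	ctx context.Context,
	req resource.ValidateConfigRequest,
	resp *resource.ValidateConfigResponse,
) {
	var config hostGroupResourceModel
	resp.Diagnostics.Append(req.Config.Get(ctx, &config)...)
	if resp.Diagnostics.HasError() {
		return
	}

	// Mutually exclusive fields: a group is either dynamic
	// (assignment_rule) or static (hostnames), never both.
	if !config.AssignmentRule.IsNull() && !config.Hostnames.IsNull() {
		resp.Diagnostics.AddAttributeError(
			path.Root("assignment_rule"),
			"Conflicting attributes",
			"assignment_rule cannot be set together with hostnames; remove one of the two.",
		)
	}

	// Conditionally required attribute: dynamic groups need a rule.
	if config.Type.ValueString() == "dynamic" && config.AssignmentRule.IsNull() {
		resp.Diagnostics.AddAttributeError(
			path.Root("assignment_rule"),
			"Missing required attribute",
			"assignment_rule is required when type is \"dynamic\".",
		)
	}
}
```

Because these checks run during `terraform plan`, users see the error before any API call is made.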
The Terraform Plugin Framework provides a structured logging system called `tflog` that should be used for logging information during provider execution:
- Log Levels:
  - `tflog.Trace()`: Most detailed; very granular debugging information
  - `tflog.Debug()`: Information useful during development and debugging
  - `tflog.Info()`: General operational information
  - `tflog.Warn()`: Potentially problematic situations that don't prevent execution
  - `tflog.Error()`: Errors that don't necessarily halt execution
- Structured Logging: Prefer structured fields over string interpolation:

  ```go
  // Good
  tflog.Debug(ctx, "Processing resource", map[string]interface{}{
      "id":   id,
      "name": name,
  })

  // Avoid
  tflog.Debug(ctx, fmt.Sprintf("Processing resource with id %s and name %s", id, name))
  ```
- Context Fields: Use the context to attach fields that will appear in all subsequent logs:

  ```go
  ctx = tflog.SetField(ctx, "resource_id", id)
  // All logs using this ctx will include the resource_id field
  ```
- Sensitive Data: Never log credentials or sensitive information:

  ```go
  // Use MaskLogStrings to mask sensitive values before they appear in logs
  ctx = tflog.MaskLogStrings(ctx, token)
  tflog.Debug(ctx, "Using configuration", map[string]interface{}{
      "endpoint": endpoint,
      "token":    token, // masked in output by MaskLogStrings above
  })
  ```
- Viewing Logs: Users can see these logs by setting the `TF_LOG` environment variable:

  ```shell
  # For all logs
  TF_LOG=TRACE terraform apply

  # Provider-specific logs
  TF_LOG_PROVIDER=TRACE terraform apply
  ```
- Follow the patterns in the Terraform Testing documentation.
- Ensure tests cover the full resource lifecycle and verify all attributes work as expected.
If you need to debug complex issues or see the raw API calls:

```shell
TF_LOG=DEBUG TF_ACC=1 go test ./... -v -timeout 120m
```

This prints detailed logs, including raw API calls from `gofalcon`, which is helpful for troubleshooting.
This section provides concrete examples of the code patterns that should be followed when contributing to the CrowdStrike Terraform Provider.
Implement a `wrap()` method on your resource models to convert API responses to Terraform model data. This pattern ensures consistent handling of API data and separation of concerns:
```go
// wrap transforms API response values to their terraform model values.
func (d *preventionPolicyAttachmentResourceModel) wrap(
	ctx context.Context,
	policy models.PreventionPolicyV1,
) diag.Diagnostics {
	var diags diag.Diagnostics

	d.ID = types.StringValue(*policy.ID)

	// Convert API types to Terraform types
	hostGroupSet, diag := hostgroups.ConvertHostGroupsToSet(ctx, policy.Groups)
	diags.Append(diag...)
	if diags.HasError() {
		return diags
	}

	if !d.HostGroups.IsNull() || len(hostGroupSet.Elements()) != 0 {
		d.HostGroups = hostGroupSet
	}

	// More field conversions...
	return diags
}

// Usage in resource methods
func (r *Resource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
	var state resourceModel
	resp.Diagnostics.Append(req.State.Get(ctx, &state)...)
	if resp.Diagnostics.HasError() {
		return
	}

	// Get data from API
	policy, diags := getPolicy(ctx, r.client, state.ID.ValueString())
	resp.Diagnostics.Append(diags...)
	if resp.Diagnostics.HasError() {
		return
	}

	// Update state with API response
	resp.Diagnostics.Append(state.wrap(ctx, *policy)...)
	resp.Diagnostics.Append(resp.State.Set(ctx, &state)...)
}
```

Schema descriptions must follow a specific format to be correctly processed by the documentation generator. The preferred approach is to use the `utils.MarkdownDescription` helper function, which handles proper formatting and inclusion of required API scopes:
```go
var (
	documentationSection        string         = "Prevention Policy"
	resourceMarkdownDescription string         = "This resource allows managing the host groups attached to a prevention policy."
	requiredScopes              []scopes.Scope = []scopes.Scope{
		{
			Name:  "Prevention policies",
			Read:  true,
			Write: true,
		},
	}
)

// Then in your Schema method
resp.Schema = schema.Schema{
	MarkdownDescription: utils.MarkdownDescription(
		documentationSection,
		resourceMarkdownDescription,
		requiredScopes,
	),
	// Schema attributes...
}
```

This helper function automatically:

- Uses the documentation section as the service grouping before the `---` separator
- Places your resource description after the separator
- Adds a formatted list of required API scopes for the resource
The preferred pattern in this codebase is to append diagnostics from state operations in a single line using the ellipsis operator (`...`):

```go
// Preferred pattern - get state in a single line
var state HostGroupResourceModel
resp.Diagnostics.Append(req.State.Get(ctx, &state)...)

// Preferred pattern - set state directly
resp.Diagnostics.Append(resp.State.Set(ctx, &model)...)

// Not preferred - avoid separating the operation from diagnostics collection
diags := resp.State.Set(ctx, &model)
resp.Diagnostics.Append(diags...)
```

When creating resources, set any information required for deletion as early as possible in the Create method. This ensures that even if subsequent operations fail, Terraform can still track and clean up the resource:
```go
// Create the resource via API
createResponse, err := r.client.CreateResource(&params)
if err != nil {
	resp.Diagnostics.AddError("Failed to create resource", err.Error())
	return
}

// IMPORTANT: Set the ID early, immediately after creation succeeds
plan.ID = types.StringValue(*createResponse.Payload.Resources[0].ID)

// Store this ID in state ASAP so Terraform can track the resource
resp.Diagnostics.Append(resp.State.SetAttribute(ctx, path.Root("id"), plan.ID)...)
if resp.Diagnostics.HasError() {
	return
}

// Now continue with additional operations that might fail.
// If these fail, Terraform will still have the ID to attempt cleanup.
```

This pattern is essential for complex resources where multiple API calls are needed to fully configure them. By setting the ID in state as soon as possible, you ensure that even if subsequent operations fail and the apply errors out, Terraform can still attempt to delete the partially created resource during a destroy operation.