Sysdig LSP is a Language Server Protocol (LSP) implementation written in Rust. It integrates container image vulnerability scanning and Infrastructure-as-Code (IaC) analysis directly into code editors (e.g. VS Code, Helix, Neovim).
It is designed to detect issues early in the development workflow by scanning:
- Dockerfiles
- Docker Compose files
- Kubernetes manifests
- Other IaC files
The server is built on top of the tower-lsp framework and integrates with Sysdig’s Secure backend via a dedicated scanner binary and HTTP APIs.
- Vulnerability scanning of base images and dependencies.
- Code Lens support (e.g. "Scan base image" on `FROM` lines).
- Layered analysis for container images.
- Integration with Sysdig’s Secure backend APIs through a CLI scanner binary.
The project follows a modular, three-layer, Hexagonal-like architecture that cleanly separates domain logic, application orchestration, and infrastructure concerns.
- Rust workspace with entrypoint in `src/main.rs` (initializes `LSPServer` with `tower-lsp` and configures logging).
- Library exports in `src/lib.rs`, which also enforces linting rules (denies `unwrap`/`expect` in production code).
- LSP orchestration / use-cases live in `src/app`.
- Domain types and business logic live in `src/domain`.
- Adapters and integrations (infrastructure) live in `src/infra`.
- Integration tests and shared fixtures live under `tests/`:
  - `tests/general.rs`
  - `tests/common.rs`
  - `tests/fixtures/` (sample Dockerfiles, scan results, etc.)
- Documentation for user-facing capabilities is under `docs/features/`.
- Build tooling and shortcuts are defined in `Justfile` and `flake.nix`.
The domain layer contains pure business logic and domain models.
Key module:
- `scanresult/`: defines core entities and value objects:
  - `ScanResult`: core aggregate representing a full scan result.
  - `Vulnerability`: CVE, severity, package details, etc.
  - `Package`: name, version, package type.
  - `Layer`: container image layer information.
  - `Policy`: policy evaluation results.
  - Value objects such as `Severity`, `Architecture`, `OperatingSystem`.
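As an illustration of how such a value object can be modeled, here is a hypothetical `Severity` sketch. The variant names and the `is_blocking` helper are assumptions for illustration, not the actual `src/domain` definitions:

```rust
// Hypothetical sketch of a domain value object; the real field set and
// ordering live in src/domain/scanresult and may differ.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum Severity {
    Negligible,
    Low,
    Medium,
    High,
    Critical,
}

impl Severity {
    /// Returns true when the severity should surface as an editor error.
    pub fn is_blocking(self) -> bool {
        self >= Severity::High
    }
}
```

Deriving `Ord` on the enum orders variants by declaration order, which keeps severity comparisons free of ad-hoc match arms.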
The application layer orchestrates domain and infrastructure components and implements LSP-specific behavior.
Key components:
- `LSPServer` (`lsp_server/`) – main LSP implementation built on `tower-lsp`:
  - `lsp_server_inner.rs`: core LSP protocol handlers (initialize, text sync, code lenses, commands, diagnostics, hover, etc.).
  - `commands/`: concrete LSP command implementations (e.g. `scan_base_image`, `build_and_scan`).
  - `command_generator.rs`: generates Code Lens entries and associated commands.
  - `supported_commands.rs`: registry of available commands exposed to the client.
- `LspInteractor` – manages communication with the LSP client and document state.
- `ImageScanner` – trait for scanning container images (implemented by infrastructure components).
- `ImageBuilder` – trait for building Docker images.
- `DocumentDatabase` (`document_database.rs`) – in-memory store for:
  - Document text.
  - Diagnostics (LSP warnings/errors for vulnerabilities).
  - Hover documentation (detailed vulnerability explanations).
- `markdown/` – formats scan results into Markdown tables for display in editors.
- `ComponentFactory` – abstract factory for dependency injection and component creation.
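The Markdown rendering done by `markdown/` can be pictured with a small sketch like the following. The function name and column set are illustrative assumptions, not the module's real API:

```rust
/// Illustrative sketch of rendering scan findings as a Markdown table;
/// the real markdown/ module works from domain types, not tuples.
fn vulnerability_table(rows: &[(&str, &str, &str)]) -> String {
    let mut out = String::from("| CVE | Severity | Package |\n|---|---|---|\n");
    for (cve, severity, package) in rows {
        out.push_str(&format!("| {cve} | {severity} | {package} |\n"));
    }
    out
}
```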
The infrastructure layer implements technical concerns and external integrations.
Key components:
- `SysdigImageScanner`
  - Integrates with the Sysdig CLI scanner binary and Sysdig Secure backend.
  - Downloads and manages scanner binary versions.
  - Parses JSON scan results (e.g. via `sysdig_image_scanner_json_scan_result_v1.rs`).
- `DockerImageBuilder`
  - Builds container images using Bollard (Docker API client).
- `docker_socket_discovery`
  - Automatically discovers and connects to Docker-compatible sockets.
  - Supports multiple socket locations: standard Docker, Colima, Lima, containerd, and Podman.
  - Checks sockets in priority order: `DOCKER_HOST` env var, `/var/run/docker.sock`, `$HOME/.colima/docker.sock`, `$HOME/.colima/default/docker.sock`, `$HOME/.colima/default/containerd.sock`, `$HOME/.lima/default/sock/docker.sock`, and `$XDG_RUNTIME_DIR/podman/podman.sock`.
  - Uses the first available and connectable socket.
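The priority order above can be sketched as follows. This is a simplified illustration using only the standard library; the real `docker_socket_discovery` module additionally verifies that each socket is connectable, not merely present on disk:

```rust
use std::path::PathBuf;

/// Builds the candidate list in the documented priority order.
/// The parameters stand in for DOCKER_HOST, $HOME, and $XDG_RUNTIME_DIR.
fn candidate_sockets(docker_host: Option<&str>, home: &str, xdg_runtime_dir: &str) -> Vec<PathBuf> {
    let mut candidates = Vec::new();
    if let Some(host) = docker_host {
        // DOCKER_HOST values are usually unix:// URIs; strip the scheme.
        candidates.push(PathBuf::from(host.trim_start_matches("unix://")));
    }
    candidates.push(PathBuf::from("/var/run/docker.sock"));
    candidates.push(PathBuf::from(format!("{home}/.colima/docker.sock")));
    candidates.push(PathBuf::from(format!("{home}/.colima/default/docker.sock")));
    candidates.push(PathBuf::from(format!("{home}/.colima/default/containerd.sock")));
    candidates.push(PathBuf::from(format!("{home}/.lima/default/sock/docker.sock")));
    candidates.push(PathBuf::from(format!("{xdg_runtime_dir}/podman/podman.sock")));
    candidates
}

/// Picks the first candidate that exists; the real implementation would
/// also attempt a connection before settling on it.
fn first_available(candidates: &[PathBuf]) -> Option<&PathBuf> {
    candidates.iter().find(|p| p.exists())
}
```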
- Dockerfile / Compose / K8s Manifest AST Parsers
  - Parse Dockerfiles to extract image references from `FROM` instructions (including multi-stage builds).
  - Parse Docker Compose YAML (e.g. service `image:` fields).
  - Parse Kubernetes manifests YAML (e.g. `containers[].image` and `initContainers[].image` fields).
    - K8s manifests are detected by checking for both `apiVersion:` and `kind:` fields in YAML files.
    - Supports all common K8s resource types: Pods, Deployments, StatefulSets, DaemonSets, Jobs, CronJobs.
  - Handle complex scenarios such as build args and multi-platform images.
  - Implemented via modules like `dockerfile_ast_parser.rs`, `compose_ast_parser.rs`, and `k8s_manifest_ast_parser.rs`.
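A minimal sketch of the `FROM`-extraction idea, assuming plain line-based parsing. The real `dockerfile_ast_parser.rs` builds a proper AST, tracks LSP ranges, and handles build args and multi-platform images, none of which this sketch attempts:

```rust
/// Naive line-based extraction of base images from a Dockerfile string.
/// Assumes uppercase FROM instructions; purely illustrative.
fn extract_base_images(dockerfile: &str) -> Vec<String> {
    dockerfile
        .lines()
        .filter_map(|line| {
            let rest = line.trim().strip_prefix("FROM ")?;
            // Keep only the image reference; drop stage aliases like `AS builder`.
            let image = rest.split_whitespace().next()?;
            Some(image.to_string())
        })
        .collect()
}
```

Multi-stage builds naturally yield one entry per stage, which is why the Code Lens layer can offer a scan per `FROM` line.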
- `ScannerBinaryManager`
  - Downloads the Sysdig CLI scanner binary on demand.
  - Caches binaries and checks GitHub releases for the latest version compatible with the current platform.
- `LSPLogger`
  - A `tracing` subscriber that logs diagnostics and events to the LSP client or stderr.
- `ConcreteComponentFactory`
  - Production wiring of dependencies implementing the `ComponentFactory` trait.
The high-level LSP flow is:
- Initialize – the client sends configuration (e.g. `api_url`, `api_token`) via `initializationOptions`.
- `didOpen`/`didChange` – document updates trigger parsing and analysis.
- `codeLens` – the server generates "Scan base image" code lenses on relevant lines (e.g. Dockerfile `FROM` instructions).
- `executeCommand` – clicking a lens triggers commands like `scan_base_image` or `build_and_scan`.
- `publishDiagnostics` – vulnerability findings are sent as diagnostics to the editor.
- `hover` – hovering on diagnostics or vulnerable elements shows detailed vulnerability information.
Document state is managed in-memory via `InMemoryDocumentDatabase` (an implementation of `DocumentDatabase`), maintaining per-document:
- Raw document text.
- Diagnostics with vulnerability details.
- Pre-computed hover documentation.
This allows the LSP to provide rich, contextual information without re-running scans on every request.
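A simplified sketch of such an in-memory store. The field names and the plain-string diagnostics are assumptions for brevity; the real `DocumentDatabase` stores structured LSP types:

```rust
use std::collections::HashMap;

/// Hypothetical per-document state kept between LSP requests.
#[derive(Default)]
struct DocumentState {
    text: String,
    diagnostics: Vec<String>,
    hover_docs: HashMap<u32, String>, // line number -> pre-computed markdown
}

/// In-memory store keyed by document URI.
#[derive(Default)]
struct InMemoryDocuments {
    documents: HashMap<String, DocumentState>,
}

impl InMemoryDocuments {
    /// Called on didOpen/didChange to refresh the stored text.
    fn update_text(&mut self, uri: &str, text: &str) {
        self.documents.entry(uri.to_string()).or_default().text = text.to_string();
    }

    /// Serves hover requests from the pre-computed cache, without rescanning.
    fn hover_for(&self, uri: &str, line: u32) -> Option<&String> {
        self.documents.get(uri)?.hover_docs.get(&line)
    }
}
```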
- `nix develop` – enter a reproducible development shell with the exact Rust toolchain and dependencies required by the project, as defined in `flake.nix`. You can assume the user already started the development shell.
- `cargo build` – build the server in debug mode.
- `cargo build --release` – build an optimized release binary.
- `nix build .#sysdig-lsp` – Nix-based build, with cross targets available (e.g. CI or other architectures).
- Cross-compilation example: `nix build .#sysdig-lsp-linux-amd64`.
The resulting `sysdig-lsp` binary is designed to be run by an LSP client (editor), rather than directly by users.
The project uses `just` as a command runner to encapsulate common workflows.
- `just test`
  - Runs the test suite via `cargo nextest run` (primary test runner).
  - Some tests require the `SECURE_API_TOKEN` environment variable.
- `just lint`
  - Runs `cargo check` and `cargo clippy` for quick static analysis.
- `just fmt`
  - Runs `cargo fmt` according to `rustfmt.toml`.
- `just fix`
  - Runs `cargo fix` and `cargo machete`/`cargo machete --fix` to clean up unused dependencies and minor issues.
- `just watch`
  - Provides a watch mode to run tests (or other commands) on file changes.

Additional helpful commands:

- `cargo test -- --nocapture` – run tests with full output when debugging.
- `cargo test --lib` – run only unit tests (faster than running all tests).
Important: The tests `infra::sysdig_image_scanner::tests::it_scans_popular_images_correctly_test::case_*` are very slow because they scan real container images. These tests should only be run when making changes to the image scanner. For day-to-day development, skip them or run focused tests instead.
Pre-commit hooks are configured in `.pre-commit-config.yaml` to run:

- Formatting (`cargo fmt`).
- `cargo check`.
- `cargo clippy`.

These should run cleanly before opening a PR. They are executed automatically before each commit is created. If they do not run, execute `pre-commit install` to configure them. If any pre-commit step fails for whatever reason, be aware that the commit was not created.
- Language: Rust (Edition 2024).
- LSP Framework: `tower-lsp`.
- Async Runtime: `tokio`.
- HTTP Client: `reqwest`.
- Serialization: `serde`.
- Logging: `tracing` (plus `LSPLogger` integration).
- CLI Args: `clap`.
- Testing Libraries: `rstest`, `mockall`, `serial_test`, along with `cargo nextest`.
- Use standard Rust formatting (`rustfmt`) with 4-space indentation.
- Naming:
  - `snake_case` for modules and functions.
  - `CamelCase` for types.
  - `SCREAMING_SNAKE_CASE` for constants.
- Import ordering uses `reorder_imports = true` in `rustfmt.toml`.
- Prefer trait-based abstractions over concrete types for testability and clear architecture boundaries.
- Keep public APIs documented and keep modules small, mirroring the `app`/`domain`/`infra` boundaries.
- Use `tracing` for structured logging, sending logs to the LSP client or stderr via `LSPLogger`.
Error handling is intentionally strict:
- No `unwrap()` or `expect()` in non-test code.
  - Enforced by clippy rules and `src/lib.rs` configuration.
- Use `Result` types with explicit error propagation.
- Prefer `thiserror` for custom error types with rich context.
- Optionally use `anyhow::Context`-style patterns for additional context at call sites.
- Convert domain-level errors to appropriate LSP-facing errors at the application boundary.
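To illustrate the style, here is a hand-rolled sketch using only the standard library. The project itself prefers `thiserror`, which derives equivalent `Display`/`Error` impls; the error variants below are hypothetical:

```rust
use std::fmt;

/// Hypothetical domain error; with thiserror the Display impl would come
/// from #[error("...")] attributes instead of being written by hand.
#[derive(Debug)]
enum ScanError {
    BinaryNotFound(String),
    InvalidImageRef { image: String },
}

impl fmt::Display for ScanError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ScanError::BinaryNotFound(path) => write!(f, "scanner binary not found at {path}"),
            ScanError::InvalidImageRef { image } => write!(f, "invalid image reference: {image}"),
        }
    }
}

impl std::error::Error for ScanError {}

/// Explicit Result propagation instead of unwrap()/expect().
fn validate_image_ref(image: &str) -> Result<&str, ScanError> {
    if image.is_empty() || image.contains(' ') {
        return Err(ScanError::InvalidImageRef { image: image.to_string() });
    }
    Ok(image)
}
```

Callers propagate with `?` and map to LSP-facing errors only at the application boundary.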
The `ComponentFactory` trait centralizes creation of major application components and supports testing:
- Receives configuration (e.g. `api_url`, `api_token`) from the client.
- Produces `Components` such as:
  - `ImageScanner` implementations.
  - `ImageBuilder` implementations.
- `ConcreteComponentFactory` wires real components in production.
- Tests can provide mock factories to inject fake scanners/builders for deterministic behavior.
All I/O operations, including scanning, building, and LSP communication, are asynchronous using the `tokio` runtime.
- Shared state within the LSP server uses `RwLock` (or similar primitives) to support concurrent reads with controlled writes.
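A sketch of the shared-state pattern with `std::sync::RwLock`. Inside the async server an async-aware lock such as `tokio::sync::RwLock` would typically be used instead, so waiting tasks do not block the runtime; this std-only version just shows the read/write split:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Shared document map: many concurrent readers, exclusive writers.
type SharedDocuments = Arc<RwLock<HashMap<String, String>>>;

fn new_shared_documents() -> SharedDocuments {
    Arc::new(RwLock::new(HashMap::new()))
}

/// Reads a document under a shared (read) lock; returns None on a
/// poisoned lock instead of panicking, in keeping with the no-unwrap rule.
fn get_document(store: &SharedDocuments, uri: &str) -> Option<String> {
    let guard = store.read().ok()?;
    guard.get(uri).cloned()
}
```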
- Integration tests live in the `tests/` directory, using real fixtures (e.g. Dockerfiles, sample scan results).
- Fixtures are stored under `tests/fixtures/`.
- `serial_test` is used to prevent parallel execution conflicts (e.g. sharing global resources or temporary directories).
- `mockall` is used for mocking traits like `ImageScanner` in unit tests.
- `rstest` can be used for parameterized tests.
- Environment: tests may require `SECURE_API_TOKEN` for scenarios that depend on authenticated scanning.
- Primary test runner is `cargo nextest` (via `just test`).
- Add integration coverage in `tests/*.rs` and reuse fixtures in `tests/fixtures/`.
- Name tests descriptively (`should_*` or behavior-oriented names).
- Avoid direct network calls inside tests; prefer fixture-based or mocked interactions instead.
- Add focused unit tests alongside modules using `#[cfg(test)]` for local behavior.
- Broader flows and end-to-end LSP interactions belong in `tests/general.rs`.
- For debugging, `cargo test -- --nocapture` can be used to see all test output.
- Some tests, such as `infra::sysdig_image_scanner::tests::it_scans_popular_images_correctly_test`, are slow because they scan real container images. It is recommended to run them in a focused way or skip them in local development to speed up the feedback loop.
Clients configure Sysdig LSP via `initializationOptions` in the LSP `initialize` request, for example:
```json
{
  "sysdig": {
    "api_url": "https://secure.sysdig.com",
    "api_token": "optional, falls back to SECURE_API_TOKEN env var"
  }
}
```

Key points:

- `api_url` should be validated and not hard-coded to environment-specific endpoints in code.
- `api_token` is optional; if absent, the server falls back to the `SECURE_API_TOKEN` environment variable.
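The fallback behavior can be sketched as follows. This is an illustrative helper, not the server's actual code:

```rust
use std::env;

/// Resolution order: explicit initializationOptions value first,
/// then the SECURE_API_TOKEN environment variable as fallback.
fn resolve_api_token(configured: Option<String>) -> Option<String> {
    configured
        .filter(|t| !t.is_empty())
        .or_else(|| env::var("SECURE_API_TOKEN").ok())
}
```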
- Do not commit API tokens or other secrets to the repository.
- Prefer environment variables (e.g. `SECURE_API_TOKEN`) or editor initialization options (`sysdig.api_token`).
- Always validate URLs provided via configuration (`sysdig.api_url`).
- The `sysdig-lsp` binary is not meant to be run manually; it is launched and driven by an LSP client (such as VS Code, Helix, or Neovim) that speaks the Language Server Protocol.
The workflow in `.github/workflows/release.yml` automatically creates a new release when the crate version in `Cargo.toml` changes on the default git branch. So, to release a new version, you need to update that version. Release a new version whenever you make a meaningful change that users can benefit from. The guidelines to follow are:
- New feature is implemented -> Release new version.
- Bug fixes -> Release new version.
- CI/Refactorings/Internal changes -> No need to release new version.
- Documentation changes -> No need to release new version.
The current version of the LSP is not stable yet, so follow the SemVer spec with these guidelines:
- Unless instructed otherwise, do not attempt to stabilize the version. That is, do not update the version to >=1.0.0; versions should stay <1.0.0 for now.
- For bug fixes and small changes, bump only the Y in 0.X.Y. For example: 0.5.2 -> 0.5.3.
- For features and major changes, bump the X in 0.X.Y and reset Y to 0. For example: 0.5.2 -> 0.6.0.
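The bump rules can be expressed as a small hypothetical helper (the actual release flow simply edits `Cargo.toml` by hand):

```rust
/// Applies the 0.X.Y bump rules: feature changes bump X and reset Y,
/// everything else bumps Y. Returns None for malformed versions.
fn bump_version(version: &str, feature_change: bool) -> Option<String> {
    let mut parts = version.split('.');
    let major: u32 = parts.next()?.parse().ok()?;
    let minor: u32 = parts.next()?.parse().ok()?;
    let patch: u32 = parts.next()?.parse().ok()?;
    Some(if feature_change {
        format!("{major}.{}.0", minor + 1)
    } else {
        format!("{major}.{minor}.{}", patch + 1)
    })
}
```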
After the commit is merged into the default branch the workflow will cross-compile the project, create a GitHub release of that version, and upload the artifacts to the release. Check the workflow file in case of doubt.
This section documents important patterns, findings, and gotchas discovered during development that are critical for maintaining consistency and avoiding common pitfalls.
When adding support for a new file type (e.g. Kubernetes manifests, Terraform files), follow this pattern established by Docker Compose and K8s manifest implementations:
- Create the parser in `src/infra/`: e.g. `k8s_manifest_ast_parser.rs`
  - Define an `ImageInstruction` struct with `image_name` and `range` (LSP Range).
  - Create a `parse_*` function that returns `Result<Vec<ImageInstruction>, ParseError>`.
  - Use `marked_yaml` for YAML parsing to preserve position information for accurate LSP ranges.
  - Include comprehensive unit tests covering:
    - Simple cases
    - Multiple images
    - Edge cases (empty, null, invalid YAML)
    - Complex image names with registries
    - Quoted values
- Export the parser in `src/infra/mod.rs`:

  ```rust
  mod k8s_manifest_ast_parser;
  pub use k8s_manifest_ast_parser::parse_k8s_manifest;
  ```
- Update `src/app/lsp_server/command_generator.rs`:
  - Add an import for the new parser.
  - Create a detection function (e.g. `is_k8s_manifest_file()`).
    - IMPORTANT: Detect by content, not just file extension, to avoid false positives.
    - Example: K8s manifests must contain both `apiVersion:` and `kind:` fields.
  - Add a branch in `generate_commands_for_uri()` to route to the new file type.
  - Create a `generate_*_commands()` function following the established pattern:

  ```rust
  fn generate_k8s_manifest_commands(url: &Url, content: &str) -> Result<Vec<CommandInfo>, String> {
      let mut commands = vec![];
      match parse_k8s_manifest(content) {
          Ok(instructions) => {
              for instruction in instructions {
                  commands.push(
                      SupportedCommands::ExecuteBaseImageScan {
                          location: Location::new(url.clone(), instruction.range),
                          image: instruction.image_name,
                      }
                      .into(),
                  );
              }
          }
          Err(err) => return Err(format!("{}", err)),
      }
      Ok(commands)
  }
  ```
- Create a fixture in `tests/fixtures/`: e.g. `k8s-deployment.yaml`.
- Add an integration test in `tests/general.rs`:
  - Test code lens generation.
  - Verify correct ranges and image names.
  - Use existing patterns from the compose tests as reference.
- Update `README.md`: add the feature to the features table with a version number.
- Update `AGENTS.md`: document the parser in the architecture section.
- Create a feature doc: add `docs/features/<feature>.md` with examples.
- Update `docs/features/README.md`: add an entry for the new feature.
❌ DON'T: Rely solely on file extensions for detection

```rust
// BAD: Matches ALL YAML files including compose files
fn is_k8s_manifest_file(file_uri: &str) -> bool {
    file_uri.ends_with(".yaml") || file_uri.ends_with(".yml")
}
```

✅ DO: Combine file extension with content-based detection

```rust
// GOOD: Checks both extension AND content
fn is_k8s_manifest_file(file_uri: &str, content: &str) -> bool {
    if !(file_uri.ends_with(".yaml") || file_uri.ends_with(".yml")) {
        return false;
    }
    content.contains("apiVersion:") && content.contains("kind:")
}
```

Why: File extensions alone can cause false positives. Docker Compose files, K8s manifests, and generic YAML files all use `.yaml`/`.yml` extensions. Content-based detection ensures accurate routing.
The diagnostic severity shown in the editor should reflect the actual vulnerability severity, not just policy evaluation results.
Current Implementation (in `src/app/lsp_server/commands/scan_base_image.rs`):

```rust
diagnostic.severity = Some(if *critical_count > 0 || *high_count > 0 {
    DiagnosticSeverity::ERROR // Red
} else if *medium_count > 0 {
    DiagnosticSeverity::WARNING // Yellow
} else {
    DiagnosticSeverity::INFORMATION // Blue
});
```

Gotcha: The previous implementation used `scan_result.evaluation_result().is_passed()`, which only reflected policy pass/fail. This caused High/Critical vulnerabilities to show as INFORMATION (blue) if the policy passed, which was confusing for users.
When modifying severity logic: Always base it on vulnerability counts/severity, not policy evaluation.
When parsing files to extract ranges for code lenses:
- Use position-aware parsers: `marked_yaml` for YAML, custom parsers for Dockerfiles.
- Account for quotes: image names might be quoted in YAML (`"nginx:latest"` or `'nginx:latest'`):

  ```rust
  let mut raw_len = image_name.len();
  if let Some(c) = first_char
      && (c == '"' || c == '\'')
  {
      raw_len += 2; // Include quotes in range
  }
  ```

- Test with various formats: unquoted, single-quoted, double-quoted values.
- 0-indexed LSP positions: LSP uses 0-indexed line/character positions, but some parsers (like `marked_yaml`) use 1-indexed positions, so convert accordingly:

  ```rust
  let start_line = start.line() as u32 - 1;
  let start_char = start.column() as u32 - 1;
  ```
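Putting the two adjustments together, a hypothetical helper might look like this (illustrative only; the names and return shape are assumptions, not the parser's real API):

```rust
/// Converts a 1-indexed parser position plus the raw YAML scalar into a
/// 0-indexed LSP-style (line, start_char, end_char) triple, widening the
/// range to cover surrounding quotes when present.
fn lsp_range_for_image(line_1idx: u32, col_1idx: u32, raw_value: &str) -> (u32, u32, u32) {
    let line = line_1idx - 1;
    let start_char = col_1idx - 1;
    // Length of the image name without its quotes.
    let mut len = raw_value.trim_matches(|c| c == '"' || c == '\'').len() as u32;
    if raw_value.starts_with('"') || raw_value.starts_with('\'') {
        len += 2; // include both surrounding quote characters
    }
    (line, start_char, start_char + len)
}
```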
Unit Tests (`#[cfg(test)]` in modules):
- Test parser logic in isolation
- Use string literals for test input
- Cover edge cases exhaustively
- Run fast (no I/O)
Integration Tests (`tests/general.rs`):

- Test the full LSP flow: `did_open` → `code_lens` → `execute_command`.
- Use fixtures from `tests/fixtures/`.
- Mock external dependencies (`ImageScanner`) with `mockall`.
- Verify JSON serialization of LSP responses.
Slow Tests to Skip:
- `infra::sysdig_image_scanner::tests::it_scans_popular_images_correctly_test::case_*`
  - These scan real container images over the network.
  - Only run them when changing scanner-related code.
  - Use `cargo test --lib -- --skip it_scans_popular_images_correctly_test` for faster feedback.
When adding new LSP commands:
- Define in `supported_commands.rs`: add to the `SupportedCommands` enum.
- Implement in the `commands/` directory: create a struct implementing the `LspCommand` trait.
- Wire in `lsp_server_inner.rs`: add the execution handler.
- Generate in `command_generator.rs`: create `CommandInfo` for code lenses.
- Test in `tests/general.rs`: verify command execution and results.
Follow semantic versioning for unstable versions (0.X.Y):
- Patch (0.X.Y → 0.X.Y+1): Bug fixes, documentation, refactoring
- Minor (0.X.Y → 0.X+1.0): New features, enhancements
- Don't stabilize (1.0.0) unless explicitly instructed
When to release:
- ✅ New feature implemented
- ✅ Bug fixes
- ❌ CI/refactoring/internal changes (no user impact)
- ❌ Documentation-only changes
Release process:
- Update the version in `Cargo.toml`.
- Commit and merge to the default branch.
- The GitHub Actions workflow automatically creates a release with cross-compiled binaries.
To keep history clean and reviews manageable:
- Use conventional-style commits similar to existing history, e.g.:
  - `feat(scope): message`
  - `fix(scope): message`
  - `refactor: message`
- Commits should only have a title, no body/description.
- Before creating a commit, run at least:
  - `just fmt`
  - `just lint`
  - `just test`
  - Any relevant `nix build` invocations when touching build tooling.
  - (You can assume they are executed before the commit is created, see Section 3.4.)
- Keep commits scoped and reversible; smaller, reviewable PRs are preferred over large, monolithic changes.
- You must also update `AGENTS.md` and `README.md` when applicable for any change you create, so both files stay in sync with the project and the documentation does not become obsolete.