
LOG-8967: Add TLS Scanner e2e test #3269

Open
jcantrill wants to merge 1 commit into openshift:master from jcantrill:log8967_tls_scanner

Conversation

@jcantrill
Contributor

@jcantrill jcantrill commented May 5, 2026

Description

This PR:

  • adds an e2e test that integrates the TLS Scanner for workload compliance
  • checks that the namespace workloads match the TLS profile defined by the cluster config

Links

Summary by CodeRabbit

  • Tests

    • Added a comprehensive E2E TLS validation suite that runs a TLS scanner and verifies operator and operand TLS compliance against the cluster TLS profile.
    • New tests cover ClusterLogForwarder, operator, and log-file-metric-exporter TLS endpoints and assert scan results.
  • Chores

    • Introduced configurable TLS scanner image for test runs.
    • Updated default image tag for the log-file-metric-exporter to "latest".
    • Added an application name label to operator deployment metadata.

@openshift-ci-robot openshift-ci-robot added the jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. label May 5, 2026
@openshift-ci-robot

openshift-ci-robot commented May 5, 2026

@jcantrill: This pull request references LOG-8967 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the task to target the "4.8.0" version, but no target version was set.

Details

In response to this:

Description

This PR:

  • adds an e2e test to integrate the TLS Scanner for workload compliance
  • checks the namespace workloads match the TLS Profile defined by the cluster config

Links

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@jcantrill
Contributor Author

/hold

@coderabbitai

coderabbitai Bot commented May 5, 2026

Walkthrough

Adds a TLS scanner testing framework and end-to-end tests: new types and scanner helper that deploys a Kubernetes Job, collects and parses scan results, and validates TLS compliance; integrates tests into the e2e suite and exposes an IMAGE_TLS_SCANNER Makefile variable.

Changes

TLS scanner test infrastructure

  • Data Shape — test/framework/e2e/tls/types.go: Adds exported structs for scan output and per-row results (ScanResult, ScanOutput, IPResult, PortResult, OpenShiftComponent, TLSReadiness).
  • Core Implementation — test/framework/e2e/tls/scanner.go: Adds the Scanner type, NewScanner, GetImage, Deploy (ServiceAccount, ClusterRoleBindings, Job), WaitForCompletion, GetResults, parseResults, ValidateCompliance, TLS-version helpers, and cleanup registration.
  • Framework helpers — test/framework/e2e/framework.go: Makes namespace creation accept visitor callbacks that mutate the corev1.Namespace before creation, and forwards existing callers to the new variadic signatures.
  • E2E suite & tests — test/e2e/operator/tls/suite_test.go, test/e2e/operator/tls/e2e_test.go: Adds the Ginkgo suite entrypoint and two TLS test contexts that deploy workloads/operands, run the scanner, assert component discovery, and validate TLS compliance against the cluster TLS profile.
  • Build / Test wiring — Makefile: Adds IMAGE_TLS_SCANNER?=quay.io/openshift/tls-scanner:latest and injects IMAGE_TLS_SCANNER into the test-env, test-e2e, and test-e2e-local runtime environments.

Manifests and small config updates (unrelated DAG)

  • CSV metadata — bundle/manifests/cluster-logging.clusterserviceversion.yaml: Updates the metadata.annotations.createdAt timestamp and adds the app.kubernetes.io/name: cluster-logging-operator label to the operator Deployment template labels.
  • Operator Deployment — config/manager/manager.yaml: Adds app.kubernetes.io/name: cluster-logging-operator to spec.template.metadata.labels.
  • OLM env default — olm_deploy/scripts/env.sh: Changes the LOGGING_LOG_FILE_METRIC_EXPORTER_VERSION default from 6.1 to latest, which affects IMAGE_LOG_FILE_METRIC_EXPORTER when unset.
  • Import formatting — test/framework/e2e/splunk.go: Reorders/adjusts the import block (no functional change).

Sequence Diagram

sequenceDiagram
    actor E2E_Test as E2E Test
    participant K8s as Kubernetes API
    participant Job as TLS Scanner Job
    participant Pod as Scanner Pod
    participant Parser as Result Parser
    participant Validator as Compliance Validator

    E2E_Test->>K8s: Create ServiceAccount / RBAC / Job
    K8s-->>E2E_Test: Job object created
    E2E_Test->>K8s: Watch Job status
    K8s->>Job: Schedule pod(s)
    Job->>Pod: Start initContainer (scanner) -> write /tmp/scan-results.json
    Pod->>Pod: results container cats file to logs
    K8s-->>E2E_Test: Job.Status.Succeeded
    E2E_Test->>K8s: Stream Pod logs (results container)
    K8s-->>Parser: Log stream (JSON)
    Parser-->>E2E_Test: []ScanResult
    E2E_Test->>Validator: ValidateCompliance(results, TLSProfile)
    Validator-->>E2E_Test: pass/fail

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 8 | ❌ 4

❌ Failed checks (4 warnings)

  • Description check ⚠️ Warning — The PR description includes a brief summary of the changes and a JIRA link, but is missing mandatory elements: no /cc reviewer assignment, no /assign approver assignment, and no comprehensive context about the implementation. Resolution: Add /cc and /assign directives with reviewer and approver names from the OWNERS file, and expand the description to better explain the rationale and implementation details of the TLS Scanner integration.
  • Test Structure And Quality ⚠️ Warning — Context 2 deploys LFME in an It block instead of BeforeEach (violating setup/cleanup conventions); multiple Expect() assertions lack messages; and the error silenced at line 182 (results, _ := runTlsScanner) masks failures. Resolution: Move LFME setup to BeforeEach, add messages to all Expect assertions, and remove the error suppression so results are handled properly.
  • Microshift Test Compatibility ⚠️ Warning — The TLS scanner e2e test uses config.openshift.io/v1 APIs (APIServer), which are unavailable on MicroShift, with no protection via a skip label, apigroup tag, or runtime check. Resolution: Add an [apigroup:config.openshift.io] tag to the test name or implement a MicroShift skip check with exutil.IsMicroShiftCluster().
  • Ipv6 And Disconnected Network Test Compatibility ⚠️ Warning — Tests pull external container images (grafana/loki:3.3.2 and quay.io/jcantril/tls-scanner), which fails in disconnected networks. Resolution: Add [Skipped:Disconnected] to the test names or migrate to internal registries, and replace the personal jcantril image with quay.io/openshift/tls-scanner.
✅ Passed checks (8 passed)

  • Title check — The title clearly and concisely describes the main change: adding a TLS Scanner e2e test, which aligns with the primary objective of the PR.
  • Docstring Coverage — Docstring coverage is 85.71%, which meets the required threshold of 80.00%.
  • Linked Issues check — Skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — Skipped because no linked issues were found for this pull request.
  • Stable And Deterministic Test Names — All Ginkgo test names are stable and deterministic, containing only descriptive static strings with no dynamic values such as IDs, timestamps, namespaces, or IP addresses.
  • Single Node Openshift (Sno) Test Compatibility — Tests do not assume multi-node clusters: DaemonSets and wait logic work identically on SNO and multi-node, and no pod anti-affinity, topology constraints, node selectors, or node counting were detected.
  • Topology-Aware Scheduling Compatibility — Adds E2E test code for TLS validation; no production workloads with topology-specific scheduling constraints are introduced, and Jobs have no affinity, topology spreads, or control-plane nodeSelectors.
  • Ote Binary Stdout Contract — No stdout contract violations: process-level code has no stdout writes, all logging uses clolog (a structured logger), and test execution is in Ginkgo blocks with intercepted output.



Comment @coderabbitai help to get the list of available commands and usage tips.

@qodo-code-review

Review Summary by Qodo

Add TLS Scanner e2e test for workload compliance validation

🧪 Tests ✨ Enhancement


Walkthroughs

Description
• Add comprehensive e2e test suite for TLS Scanner integration
• Validate operator and operand TLS configurations match cluster profile
• Detect and verify TLS endpoints for both operator and operands
• Implement TLS Scanner framework with job deployment and result parsing
• Export TLS Scanner image configuration in Makefile for test execution
Diagram
flowchart LR
  A["E2E Test Suite"] -->|deploys| B["TLS Scanner Job"]
  B -->|scans| C["Operator & Operand Endpoints"]
  C -->|generates| D["CSV Results"]
  D -->|validates| E["Cluster TLS Profile Compliance"]
  F["Scanner Framework"] -->|manages| B
  G["Makefile Config"] -->|exports| F


File Changes

1. test/e2e/operator/tls/e2e_test.go 🧪 Tests +203/-0

E2E test cases for TLS Scanner validation

• Implements two e2e test cases for TLS Scanner validation
• First test validates operator and operand TLS configurations match cluster TLS profile
• Second test verifies TLS Scanner detects endpoints for both operator and operands
• Sets up ClusterLogForwarder with HTTP and Syslog receivers for comprehensive scanning


2. test/e2e/operator/tls/suite_test.go 🧪 Tests +13/-0

Test suite setup for TLS Scanner tests

• Defines test suite initialization for TLS Scanner e2e tests
• Registers Ginkgo test framework with failure handler
• Configures test suite naming and execution


3. test/framework/e2e/tls/scanner.go ✨ Enhancement +383/-0

TLS Scanner framework for deployment and validation

• Implements Scanner struct to manage TLS Scanner deployment and result retrieval
• Provides Deploy() method to create Job with ServiceAccount and ClusterRoleBinding
• Implements WaitForCompletion() to poll Job status with configurable timeout
• Implements GetResults() to retrieve and parse CSV scan results from pod logs
• Provides ValidateCompliance() function to verify TLS versions match cluster profile
• Includes helper functions for CSV parsing, TLS version comparison, and cleanup management


4. Makefile ⚙️ Configuration changes +4/-0

Configure TLS Scanner image for test execution

• Adds IMAGE_TLS_SCANNER variable with default value pointing to quay.io image
• Exports IMAGE_TLS_SCANNER in test-env target for test environment configuration
• Exports IMAGE_TLS_SCANNER in test-e2e target for OLM-based e2e tests
• Exports IMAGE_TLS_SCANNER in test-e2e-local target for local e2e test execution





@qodo-code-review

qodo-code-review Bot commented May 5, 2026

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. Privileged scanner PSA blocked ✓ Resolved 🐞 Bug ☼ Reliability
Description
Scanner.Deploy creates a privileged pod, but the scanner namespace is created with no Pod Security
labels, so the Job violates the project’s own PSA convention for privileged workloads. This will
prevent the scanner Job from being admitted on clusters enforcing PodSecurity (restricted/baseline).
Code

test/framework/e2e/tls/scanner.go[R118-146]

+	backoffLimit := int32(0)
+	ttlSecondsAfterFinished := int32(600) // Keep job for 10 minutes after completion
+	privileged := true
+
+	job := &batchv1.Job{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "tls-scanner",
+			Namespace: scannerNamespace,
+		},
+		Spec: batchv1.JobSpec{
+			BackoffLimit:            &backoffLimit,
+			TTLSecondsAfterFinished: &ttlSecondsAfterFinished,
+			Template: corev1.PodTemplateSpec{
+				Spec: corev1.PodSpec{
+					ServiceAccountName: sa.Name,
+					RestartPolicy:      corev1.RestartPolicyNever,
+					Containers: []corev1.Container{
+						{
+							Name:  "scanner",
+							Image: GetImage(),
+							Args: []string{
+								"-namespace-filter", targetNamespace,
+								"-csv-file", "/tmp/scan-results.csv",
+								"-j", "4", // Use 4 concurrent threads
+							},
+							SecurityContext: &corev1.SecurityContext{
+								Privileged: &privileged,
+							},
+						},
Evidence
The scanner namespace comes from CreateTestNamespace/CreateNamespace which sets only a name (no
labels), while the scanner Job explicitly sets SecurityContext.Privileged=true. The repo already
encodes that privileged workloads are expected to run in namespaces labeled
pod-security.kubernetes.io/enforce=privileged (e.g., openshift-logging).

test/e2e/operator/tls/e2e_test.go[45-47]
test/framework/e2e/framework.go[184-209]
test/framework/e2e/tls/scanner.go[120-146]
internal/constants/constants.go[52-54]
test/e2e/collection/security/container_security_test.go[157-160]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`Scanner.Deploy()` runs the tls-scanner container as **privileged**, but the TLS e2e test deploys it into a newly created test namespace that has **no** PodSecurity enforcement labels. This is inconsistent with the repo’s PSA approach (privileged workloads require `pod-security.kubernetes.io/enforce=privileged`) and can cause the Job to be rejected by admission.
### Issue Context
The TLS e2e test creates `scannerNS` via `CreateTestNamespace()` and then deploys the scanner there.
### Fix Focus Areas
- test/e2e/operator/tls/e2e_test.go[45-47]
- test/framework/e2e/framework.go[184-209]
- test/framework/e2e/tls/scanner.go[120-146]
- internal/constants/constants.go[52-54]
### Suggested change
Choose one:
- **Preferred**: remove the need for `Privileged` in the scanner Job if possible.
- If privileged is required, **label the scanner namespace** before deploying:
- set `pod-security.kubernetes.io/enforce=privileged` (and any other required PSA labels) on `scannerNS`.
- do this via the kube client (update Namespace labels) or via `oc label --overwrite`.
Add cleanup or ensure namespace deletion remains sufficient.



2. TLS min-version check flawed ✓ Resolved 🐞 Bug ≡ Correctness
Description
ValidateCompliance accepts an endpoint as compliant when it supports any TLS version >= the cluster
minimum, even if it also supports older TLS versions below the minimum. This can make the e2e test
pass while endpoints still allow weaker protocols.
Code

test/framework/e2e/tls/scanner.go[R315-356]

+		if result.Status == "OK" {
+			if !containsTLSVersion(result.TLSVersions, expectedMinTLS) {
+				failures = append(failures, fmt.Sprintf("Pod %s/%s port %s: expected min TLS version %s, got versions: %s",
+					result.Namespace, result.Pod, result.Port, expectedMinTLS, result.TLSVersions))
+			}
+
+			// Log successful validation
+			clolog.V(2).Info("TLS endpoint validated", "pod", result.Pod, "port", result.Port,
+				"tlsVersions", result.TLSVersions, "status", result.Status)
+		}
+	}
+
+	if len(failures) > 0 {
+		return fmt.Errorf("TLS compliance validation failed:\n%s", strings.Join(failures, "\n"))
+	}
+
+	clolog.Info("TLS compliance validation passed", "results", len(results))
+	return nil
+}
+
+// containsTLSVersion checks if the TLS versions string contains at least the minimum required version
+func containsTLSVersion(tlsVersions, minVersion string) bool {
+	// If no TLS versions detected, fail validation
+	if tlsVersions == "" {
+		return false
+	}
+
+	// Parse the minimum version
+	minVersionNum := parseTLSVersion(minVersion)
+	if minVersionNum == 0 {
+		// If we can't parse the min version, assume it's valid
+		return true
+	}
+
+	// Check if any of the supported versions meets the minimum
+	versions := strings.Split(tlsVersions, ",")
+	for _, v := range versions {
+		v = strings.TrimSpace(v)
+		if vNum := parseTLSVersion(v); vNum >= minVersionNum {
+			return true
+		}
+	}
Evidence
containsTLSVersion returns true on the first version that meets the minimum, rather than ensuring
the endpoint’s *lowest supported* version is at least the minimum (or that no versions below the
minimum are supported). ValidateCompliance relies on this helper for OK results.

test/framework/e2e/tls/scanner.go[314-320]
test/framework/e2e/tls/scanner.go[335-359]
internal/tls/tls.go[52-59]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`ValidateCompliance()` currently considers an endpoint compliant if **any** supported TLS version is >= the cluster minimum. This allows endpoints that still support older TLS versions (below the required minimum) to pass validation.
### Issue Context
`ValidateCompliance()` uses `containsTLSVersion(result.TLSVersions, expectedMinTLS)` for status `OK`.
### Fix Focus Areas
- test/framework/e2e/tls/scanner.go[287-325]
- test/framework/e2e/tls/scanner.go[335-359]
### Suggested change
Replace `containsTLSVersion` with logic that fails when **any** detected version is below the minimum:
- Parse all versions in `result.TLSVersions`.
- Compute the minimum detected version (or check each version).
- Return failure if `minDetected < expectedMin`.
- Treat unparseable versions as a validation failure (or explicitly log and fail), rather than auto-pass.
Add unit tests for examples like:
- min=1.2, versions="1.0,1.2" => should fail
- min=1.2, versions="1.2" => pass




Remediation recommended

3. TLS scanner skip dead ✓ Resolved 🐞 Bug ≡ Correctness
Description
The TLS e2e suite tries to Skip when tlsscanner.GetImage() is empty, but GetImage always returns a
non-empty default image, so the suite can never skip. This makes the skipTLSScanner flag ineffective
and forces the tests to run even when no explicit scanner image was provided.
Code

test/e2e/operator/tls/e2e_test.go[R39-43]

+		// Check if TLS Scanner image is available
+		if tlsscanner.GetImage() == "" {
+			skipTLSScanner = true
+			Skip("TLS Scanner image not available")
+		}
Evidence
The test checks for an empty image string, but the scanner helper always falls back to DefaultImage,
so the condition is unreachable.

test/e2e/operator/tls/e2e_test.go[39-43]
test/framework/e2e/tls/scanner.go[62-68]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The TLS e2e test suite attempts to skip execution when the TLS Scanner image is "not available" by checking `tlsscanner.GetImage() == ""`, but `GetImage()` never returns an empty string (it falls back to a default). As a result, the suite can never skip and will always attempt to run.
### Issue Context
This is in `BeforeEach`, so it affects every test case in the suite.
### Fix Focus Areas
- test/e2e/operator/tls/e2e_test.go[39-43]
- test/framework/e2e/tls/scanner.go[62-68]
### Suggested change
Pick one behavior and make it consistent:
1) **Require explicit image**: change `GetImage()` to return `""` when `IMAGE_TLS_SCANNER` is unset, and keep the skip; or
2) **Always run with default**: remove the skip logic/flag entirely and rely on the default image.
Ensure the suite’s behavior matches the intended CI contract (explicit opt-in vs default execution).



4. CSV output path unused 🐞 Bug ☼ Reliability
Description
The TLS scanner Job is configured to write results to /tmp/scan-results.csv, but GetResults parses
CSV from pod logs and never reads the file, making the -csv-file configuration dead and the parsing
dependent on log contents. This mismatch makes result retrieval fragile and harder to maintain.
Code

test/framework/e2e/tls/scanner.go[R138-145]

+							Args: []string{
+								"-namespace-filter", targetNamespace,
+								"-csv-file", "/tmp/scan-results.csv",
+								"-j", "4", // Use 4 concurrent threads
+							},
+							SecurityContext: &corev1.SecurityContext{
+								Privileged: &privileged,
+							},
Evidence
Deploy sets -csv-file /tmp/scan-results.csv while GetResults streams container logs and feeds them
directly into the CSV parser; there is no code path that reads the configured file.

test/framework/e2e/tls/scanner.go[138-142]
test/framework/e2e/tls/scanner.go[196-226]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`Deploy()` instructs tls-scanner to write CSV results to `/tmp/scan-results.csv`, but `GetResults()` reads pod logs and parses them as CSV, never using the configured file path.
### Issue Context
This creates a disconnect between how the Job is configured and how results are retrieved.
### Fix Focus Areas
- test/framework/e2e/tls/scanner.go[138-142]
- test/framework/e2e/tls/scanner.go[196-226]
### Suggested change
Make configuration and retrieval consistent:
- Option A (recommended): after Job completion, `exec` into the scanner pod and `cat /tmp/scan-results.csv`, then parse that content.
- Option B: remove `-csv-file` and ensure tls-scanner writes **only** CSV to stdout (then parsing logs is correct).
Also consider making parsing tolerant to non-CSV log lines if logs are used.



5. Cipher suites not validated 🐞 Bug ≡ Correctness
Description
The TLS e2e test claims to validate compliance with the cluster TLS profile and even prints the
cipher count, but ValidateCompliance never compares scanned cipher suites to the cluster profile’s
cipher list. This leaves a major part of TLS profile compliance untested.
Code

test/framework/e2e/tls/scanner.go[R287-325]

+// ValidateCompliance validates that TLS scan results comply with the cluster TLS profile
+func ValidateCompliance(results []ScanResult, profileSpec configv1.TLSProfileSpec) error {
+	clolog.Info("Validating TLS compliance", "results", len(results), "profile", profileSpec.MinTLSVersion)
+
+	var failures []string
+	expectedMinTLS := internaltls.MinTLSVersion(profileSpec)
+
+	for _, result := range results {
+		// Skip non-TLS endpoints
+		if result.Status == "NO_TLS" || result.Status == "LOCALHOST_ONLY" {
+			clolog.V(2).Info("Skipping non-TLS endpoint", "pod", result.Pod, "port", result.Port, "status", result.Status)
+			continue
+		}
+
+		// Check for error statuses
+		if result.Status == "ERROR" || result.Status == "TIMEOUT" {
+			failures = append(failures, fmt.Sprintf("Pod %s/%s port %s: scan failed with status %s",
+				result.Namespace, result.Pod, result.Port, result.Status))
+			continue
+		}
+
+		// For MTLS_REQUIRED, we can't validate the TLS version, but it's not necessarily a failure
+		if result.Status == "MTLS_REQUIRED" {
+			clolog.V(2).Info("Endpoint requires mutual TLS", "pod", result.Pod, "port", result.Port)
+			continue
+		}
+
+		// For successful scans, validate TLS version
+		if result.Status == "OK" {
+			if !containsTLSVersion(result.TLSVersions, expectedMinTLS) {
+				failures = append(failures, fmt.Sprintf("Pod %s/%s port %s: expected min TLS version %s, got versions: %s",
+					result.Namespace, result.Pod, result.Port, expectedMinTLS, result.TLSVersions))
+			}
+
+			// Log successful validation
+			clolog.V(2).Info("TLS endpoint validated", "pod", result.Pod, "port", result.Port,
+				"tlsVersions", result.TLSVersions, "status", result.Status)
+		}
+	}
Evidence
ScanResult parses a "Cipher Suites" column into the Ciphers field, and the e2e test fetches
profileSpec.Ciphers, but ValidateCompliance only checks TLS versions and ignores both cipher lists.

test/framework/e2e/tls/scanner.go[33-46]
test/framework/e2e/tls/scanner.go[260-272]
test/framework/e2e/tls/scanner.go[287-325]
test/e2e/operator/tls/e2e_test.go[131-158]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
TLS compliance validation currently ignores cipher suites even though:
- the cluster TLS profile includes a cipher list, and
- scan results include a cipher suites field.
### Issue Context
The e2e test prints cipher count but then calls `ValidateCompliance` which only validates TLS versions.
### Fix Focus Areas
- test/framework/e2e/tls/scanner.go[287-333]
- test/framework/e2e/tls/scanner.go[260-272]
- test/e2e/operator/tls/e2e_test.go[131-158]
### Suggested change
Enhance `ValidateCompliance` to validate cipher suites for relevant statuses (e.g., `OK`):
- Parse `result.Ciphers` into a set.
- Compare against `internaltls.TLSCiphers(profileSpec)` (or `profileSpec.Ciphers` with defaulting) to avoid missing defaults.
- Fail if any scanned cipher is not allowed by the profile (and/or if required ciphers are missing, depending on intended semantics).
Document how TLS1.3 ciphers are treated if tls-scanner reports them differently.





@openshift-ci openshift-ci Bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label May 5, 2026
@openshift-ci openshift-ci Bot requested review from Clee2691 and cahartma May 5, 2026 20:21
Comment thread test/framework/e2e/tls/scanner.go
Comment thread test/framework/e2e/tls/scanner.go
@openshift-ci
Contributor

openshift-ci Bot commented May 5, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jcantrill

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci Bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 5, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
test/e2e/operator/tls/e2e_test.go (1)

182-184: ⚡ Quick win

Use strings.HasPrefix instead of manual slice indexing for pod name prefix matching

The guard len(result.Pod) > len(prefix) uses strict >, which silently misses the edge case where a pod name is exactly as long as the prefix constant (unlikely in practice but logically wrong). strings.HasPrefix is simpler and handles all cases correctly:

♻️ Proposed refactor
-			if result.Component == constants.ClusterLoggingOperator ||
-				(result.Pod != "" && len(result.Pod) > len(constants.ClusterLoggingOperator) &&
-					result.Pod[:len(constants.ClusterLoggingOperator)] == constants.ClusterLoggingOperator) {
+			if result.Component == constants.ClusterLoggingOperator ||
+				strings.HasPrefix(result.Pod, constants.ClusterLoggingOperator) {
-			if result.Pod != "" && len(result.Pod) > len(forwarderName) &&
-				result.Pod[:len(forwarderName)] == forwarderName {
+			if strings.HasPrefix(result.Pod, forwarderName) {

Also applies to: 191-192

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@test/e2e/operator/tls/e2e_test.go` around lines 182 - 184, The pod name
prefix check in the conditional that compares result.Pod to
constants.ClusterLoggingOperator uses manual slicing and a strict length check
which misses the edge case where lengths are equal; replace that manual
slice-based prefix test with strings.HasPrefix(result.Pod,
constants.ClusterLoggingOperator) (and import "strings" if missing) for the
condition that currently references result.Pod and
constants.ClusterLoggingOperator; make the same change for the analogous check
referenced at the other occurrence (the lines noted in the comment).
test/framework/e2e/tls/scanner.go (1)

153-155: 💤 Low value

Job creation does not handle AlreadyExists, unlike the SA and CRB setup above

The ServiceAccount and ClusterRoleBinding creation both skip apierrors.IsAlreadyExists. The Job creation at line 153 returns an error unconditionally if the job already exists. While today this is protected by each test creating a fresh scannerNS, if Deploy is ever called a second time in the same namespace the caller will get a confusing raw API error. A simple guard would keep it consistent.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@test/framework/e2e/tls/scanner.go` around lines 153 - 155, The Job creation
call using s.KubeClient.BatchV1().Jobs(scannerNamespace).Create currently
returns an error on any conflict; update the logic to detect and handle
apierrors.IsAlreadyExists (like the ServiceAccount/ClusterRoleBinding code
does): call the Create, and if it fails, check apierrors.IsAlreadyExists(err)
and if so either retrieve the existing job or log/ignore and continue returning
the existing job object; otherwise return the wrapped error as before. Ensure
you reference the createdJob/error from the Create call and use
apierrors.IsAlreadyExists to guard the fallback.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@test/e2e/operator/tls/e2e_test.go`:
- Around line 40-43: The test's skip is unreachable because
tlsscanner.GetImage() currently falls back to DefaultImage; update the GetImage
implementation in scanner.go so it returns an empty string when the environment
variable IMAGE_TLS_SCANNER is not set (rather than returning DefaultImage), so
tlsscanner.GetImage() can be used by the test to detect absence and trigger the
skip/AfterEach guard; ensure references to DefaultImage remain for explicit
defaulting elsewhere but not inside GetImage.

In `@test/framework/e2e/tls/scanner.go`:
- Around line 336-359: containsTLSVersion currently returns true if any
advertised version >= minVersion, which is wrong; update it so it iterates all
parsed versions (use parseTLSVersion) and returns false as soon as it finds any
version < minVersionNum, otherwise return true only if at least one version was
parsed and none were below the minimum; keep the existing behavior for empty
tlsVersions (return false) and for unparsable minVersion (minVersionNum == 0) if
you want to preserve the prior assumption.
- Around line 63-68: GetImage currently always returns a non-empty string by
falling back to DefaultImage, so the test's skip check (tlsscanner.GetImage() ==
"") is never true; fix by separating "configured vs. default" behavior: add a
new helper (e.g. IsScannerConfigured or GetConfiguredImage) that returns the raw
env var (empty if unset) and keep GetImage returning DefaultImage when env var
is empty; update Deploy to use GetImage() for the job image but change the
test's skip condition to call the new IsScannerConfigured/GetConfiguredImage
helper instead of GetImage(); reference symbols: GetImage, ImageEnvVar,
DefaultImage, Deploy, and the skip check in e2e_test.go.
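The `containsTLSVersion` fix described above can be sketched as follows; the `parseTLSVersion` mapping here is an illustrative stand-in for the repo's actual helper:

```go
package main

import "fmt"

// parseTLSVersion maps a version string to a comparable number;
// unknown strings map to 0 (illustrative stand-in for the repo helper).
func parseTLSVersion(v string) int {
	switch v {
	case "VersionTLS10", "TLS1.0":
		return 10
	case "VersionTLS11", "TLS1.1":
		return 11
	case "VersionTLS12", "TLS1.2":
		return 12
	case "VersionTLS13", "TLS1.3":
		return 13
	}
	return 0
}

// containsTLSVersion returns true only when at least one version is
// advertised and none of the advertised versions fall below the minimum.
func containsTLSVersion(tlsVersions []string, minVersion string) bool {
	if len(tlsVersions) == 0 {
		return false
	}
	minNum := parseTLSVersion(minVersion)
	for _, v := range tlsVersions {
		if minNum != 0 && parseTLSVersion(v) < minNum {
			return false // any version below the minimum fails compliance
		}
	}
	return true
}

func main() {
	fmt.Println(containsTLSVersion([]string{"TLS1.1", "TLS1.2"}, "VersionTLS12")) // false: TLS1.1 is below the minimum
	fmt.Println(containsTLSVersion([]string{"TLS1.2", "TLS1.3"}, "VersionTLS12")) // true
}
```

The "any version passes" bug matters for compliance checks: an endpoint advertising both TLS 1.1 and TLS 1.2 would previously have been reported as meeting a TLS 1.2 minimum.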

---

Nitpick comments:
In `@test/e2e/operator/tls/e2e_test.go`:
- Around line 182-184: The pod name prefix check in the conditional that
compares result.Pod to constants.ClusterLoggingOperator uses manual slicing and
a strict length check which misses the edge case where lengths are equal;
replace that manual slice-based prefix test with strings.HasPrefix(result.Pod,
constants.ClusterLoggingOperator) (and import "strings" if missing) for the
condition that currently references result.Pod and
constants.ClusterLoggingOperator; make the same change for the analogous check
referenced at the other occurrence (the lines noted in the comment).


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: b3f9556e-31a1-45b3-a939-e1388a5bdf1e

📥 Commits

Reviewing files that changed from the base of the PR and between ff06c7a and 9b1f900.

📒 Files selected for processing (4)
  • Makefile
  • test/e2e/operator/tls/e2e_test.go
  • test/e2e/operator/tls/suite_test.go
  • test/framework/e2e/tls/scanner.go

@jcantrill
Contributor Author

/label tide/merge-method-squash

@openshift-ci openshift-ci Bot added the tide/merge-method-squash Denotes a PR that should be squashed by tide when it merges. label May 5, 2026
@jcantrill
Contributor Author

/test functional/target

@jcantrill
Contributor Author

/retest

@openshift-ci
Contributor

openshift-ci Bot commented May 7, 2026

@jcantrill: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| ci/prow/e2e-target | a136bf0 | link | true | /test e2e-target |
| ci/prow/functional-target | a136bf0 | link | true | /test functional-target |

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@jcantrill jcantrill force-pushed the log8967_tls_scanner branch from a136bf0 to d1e1b4d Compare May 8, 2026 19:42
@jcantrill
Contributor Author

@rhmdnd pinging you here for input on whether this meets your expectations. I was able to execute the test successfully after some minor changes to the scanner. I've submitted a PR for that change, along with one in openshift/release to mirror the image for broader use. Your input would be appreciated.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
test/framework/e2e/framework.go (1)

195-217: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Base cleanup and the return value on the final namespace object.

The new visitor hook runs after name has already been captured for cleanup, logging, and the return value. If a visitor sets namespace.Name or GenerateName, this helper creates one namespace but returns/deletes another. Register cleanup and return namespace.Name after all visitors have run.

♻️ Suggested fix
 func (tc *E2ETestFramework) CreateNamespace(name string, visitors ...func(*corev1.Namespace)) string {
-	if value, found := os.LookupEnv("GENERATOR_NS"); found {
-		name = value
-	} else {
-		tc.AddCleanup(func() error {
-			opts := metav1.DeleteOptions{}
-			return tc.KubeClient.CoreV1().Namespaces().Delete(context.TODO(), name, opts)
-		})
-	}
 	namespace := &corev1.Namespace{
 		ObjectMeta: metav1.ObjectMeta{
 			Name: name,
 		},
 	}
 	for _, visitor := range visitors {
 		visitor(namespace)
 	}
+	if value, found := os.LookupEnv("GENERATOR_NS"); found {
+		namespace.Name = value
+	} else {
+		tc.AddCleanup(func() error {
+			opts := metav1.DeleteOptions{}
+			return tc.KubeClient.CoreV1().Namespaces().Delete(context.TODO(), namespace.Name, opts)
+		})
+	}
 
 	if err := tc.Test.Recreate(namespace); err != nil {
 		clolog.Error(err, "Error")
 		os.Exit(1)
 	}
-	clolog.V(1).Info("Created namespace", "namespace", name)
-	return name
+	clolog.V(1).Info("Created namespace", "namespace", namespace.Name)
+	return namespace.Name
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@test/framework/e2e/framework.go` around lines 195 - 217, The CreateNamespace
helper registers cleanup and returns the original name before visitor functions
can modify namespace.Name/GenerateName, so move the tc.AddCleanup(...) call to
after the visitors run and use namespace.Name in both the cleanup Delete(...)
call and as the function return value; i.e., run visitors first, then register
cleanup that deletes context.TODO(), namespace.Name, and finally return
namespace.Name (not the local variable name).
olm_deploy/scripts/env.sh (1)

7-13: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Avoid defaulting the LFME image to a mutable :latest tag.

If this variable is left unset, the E2E tests start testing whatever image happens to be published as log-file-metric-exporter:latest at that moment, decoupling test behavior from the operator branch and risking flakiness when the tag moves. Other image versions (LOGGING_VECTOR_VERSION, LOGGING_VERSION) use pinned defaults. Please derive the default from LOGGING_VERSION or pin a branch-compatible tag instead.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@olm_deploy/scripts/env.sh` around lines 7 - 13, The LFME image default is set
to mutable ":latest" via LOGGING_LOG_FILE_METRIC_EXPORTER_VERSION, which risks
flaky tests; change the default to derive from the operator's LOGGING_VERSION
(or a branch-compatible pinned tag) by replacing the default assignment for
LOGGING_LOG_FILE_METRIC_EXPORTER_VERSION with one that uses LOGGING_VERSION
(e.g.
LOGGING_LOG_FILE_METRIC_EXPORTER_VERSION=${LOGGING_LOG_FILE_METRIC_EXPORTER_VERSION:-${LOGGING_VERSION}})
and keep IMAGE_LOG_FILE_METRIC_EXPORTER using that variable so the LFME image is
pinned consistently like IMAGE_LOGGING_VECTOR and
IMAGE_CLUSTER_LOGGING_OPERATOR.
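The pinning described in the prompt amounts to a one-line shell parameter-expansion change; the registry path and fallback version below are illustrative, not the repo's actual values:

```shell
#!/bin/sh
# Illustrative sketch of the env.sh change: derive the LFME tag from
# LOGGING_VERSION instead of defaulting to a mutable ":latest" tag.
LOGGING_VERSION=${LOGGING_VERSION:-6.4}  # assumed example value

# The ":-" expansion keeps any caller-supplied override but otherwise
# pins the exporter tag to the operator version.
LOGGING_LOG_FILE_METRIC_EXPORTER_VERSION=${LOGGING_LOG_FILE_METRIC_EXPORTER_VERSION:-${LOGGING_VERSION}}
IMAGE_LOG_FILE_METRIC_EXPORTER=${IMAGE_LOG_FILE_METRIC_EXPORTER:-quay.io/openshift-logging/log-file-metric-exporter:${LOGGING_LOG_FILE_METRIC_EXPORTER_VERSION}}

echo "${IMAGE_LOG_FILE_METRIC_EXPORTER}"
```

With nothing exported, the exporter image tag now tracks `LOGGING_VERSION`; setting `LOGGING_LOG_FILE_METRIC_EXPORTER_VERSION` explicitly still overrides it.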
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@test/framework/e2e/tls/scanner.go`:
- Around line 273-280: The ScanResult struct is missing Namespace and Pod values
so ValidateCompliance reports "Pod / port ..." without identifiers; set
result.Namespace and result.Pod when constructing ScanResult (e.g., populate
Namespace from ipResult.OpenShiftComponent.Namespace and Pod from ipResult.Pod
or ipResult.PodName depending on the scanner field name) or, if you prefer
changing validation, update ValidateCompliance to use existing fields
(Component, IP, Port) instead; adjust the code around ScanResult creation to
assign these namespace/pod fields consistently with the scanner output.
- Around line 32-63: The hard-coded fallback DefaultImage currently points to a
personal image; change DefaultImage to the project-owned image
"quay.io/openshift/tls-scanner:latest" (the commented-out value) so GetImage()
will return the project-owned default when os.Getenv(ImageEnvVar) is empty;
update the DefaultImage constant definition (symbol: DefaultImage) and leave
GetImage() and ImageEnvVar unchanged.
- Around line 171-187: The loop around
s.KubeClient.BatchV1().Jobs(scannerNamespace).Create retries immediately on
apierrors.IsAlreadyExists; change it to call deleteFn(), then wait for the old
Job to actually disappear (poll
s.KubeClient.BatchV1().Jobs(scannerNamespace).Get(job.Name, ...) until it
returns NotFound) with a bounded retry policy or context deadline (e.g.,
maxAttempts or timeout + small sleep/backoff between polls), and if the
timeout/retries are exceeded return an error instead of looping forever; keep
adding the cleanup via s.addCleanup only after a successful Create.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 9e66b57e-2996-4910-a6d1-77be8c66e1e3

📥 Commits

Reviewing files that changed from the base of the PR and between 9b1f900 and d1e1b4d.

📒 Files selected for processing (10)
  • Makefile
  • bundle/manifests/cluster-logging.clusterserviceversion.yaml
  • config/manager/manager.yaml
  • olm_deploy/scripts/env.sh
  • test/e2e/operator/tls/e2e_test.go
  • test/e2e/operator/tls/suite_test.go
  • test/framework/e2e/framework.go
  • test/framework/e2e/splunk.go
  • test/framework/e2e/tls/scanner.go
  • test/framework/e2e/tls/types.go
✅ Files skipped from review due to trivial changes (2)
  • config/manager/manager.yaml
  • test/framework/e2e/splunk.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • test/e2e/operator/tls/suite_test.go
  • Makefile

Comment thread test/framework/e2e/tls/scanner.go
Comment on lines +171 to +187
	for {
		deleteFn := func() error {
			deletePolicy := metav1.DeletePropagationForeground
			return s.KubeClient.BatchV1().Jobs(scannerNamespace).Delete(context.TODO(), job.Name, metav1.DeleteOptions{
				PropagationPolicy: &deletePolicy,
			})
		}
		createdJob, err := s.KubeClient.BatchV1().Jobs(scannerNamespace).Create(context.TODO(), job, metav1.CreateOptions{})
		if err != nil && apierrors.IsAlreadyExists(err) {
			deleteFn()
			continue
		} else if err != nil {
			return nil, fmt.Errorf("failed to create TLS Scanner Job: %w", err)
		}
		s.addCleanup(deleteFn)
		return createdJob, nil
	}

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Bound the delete-and-recreate loop.

Foreground deletion is asynchronous, but this loop retries Create immediately with no wait or deadline. If the old Job lingers, Deploy can spin forever and hang the spec. Please wait for the Job to disappear or cap retries before recreating it.

♻️ Suggested direction
-	for {
+	deadline := time.Now().Add(30 * time.Second)
+	for time.Now().Before(deadline) {
 		deleteFn := func() error {
 			deletePolicy := metav1.DeletePropagationForeground
 			return s.KubeClient.BatchV1().Jobs(scannerNamespace).Delete(context.TODO(), job.Name, metav1.DeleteOptions{
 				PropagationPolicy: &deletePolicy,
 			})
 		}
 		createdJob, err := s.KubeClient.BatchV1().Jobs(scannerNamespace).Create(context.TODO(), job, metav1.CreateOptions{})
 		if err != nil && apierrors.IsAlreadyExists(err) {
-			deleteFn()
+			if err := deleteFn(); err != nil && !apierrors.IsNotFound(err) {
+				return nil, fmt.Errorf("failed to delete existing TLS Scanner Job: %w", err)
+			}
+			time.Sleep(time.Second)
 			continue
 		} else if err != nil {
 			return nil, fmt.Errorf("failed to create TLS Scanner Job: %w", err)
 		}
 		s.addCleanup(deleteFn)
 		return createdJob, nil
 	}
+	return nil, fmt.Errorf("timed out recreating TLS Scanner Job %q", job.Name)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-	for {
-		deleteFn := func() error {
-			deletePolicy := metav1.DeletePropagationForeground
-			return s.KubeClient.BatchV1().Jobs(scannerNamespace).Delete(context.TODO(), job.Name, metav1.DeleteOptions{
-				PropagationPolicy: &deletePolicy,
-			})
-		}
-		createdJob, err := s.KubeClient.BatchV1().Jobs(scannerNamespace).Create(context.TODO(), job, metav1.CreateOptions{})
-		if err != nil && apierrors.IsAlreadyExists(err) {
-			deleteFn()
-			continue
-		} else if err != nil {
-			return nil, fmt.Errorf("failed to create TLS Scanner Job: %w", err)
-		}
-		s.addCleanup(deleteFn)
-		return createdJob, nil
-	}
+	deadline := time.Now().Add(30 * time.Second)
+	for time.Now().Before(deadline) {
+		deleteFn := func() error {
+			deletePolicy := metav1.DeletePropagationForeground
+			return s.KubeClient.BatchV1().Jobs(scannerNamespace).Delete(context.TODO(), job.Name, metav1.DeleteOptions{
+				PropagationPolicy: &deletePolicy,
+			})
+		}
+		createdJob, err := s.KubeClient.BatchV1().Jobs(scannerNamespace).Create(context.TODO(), job, metav1.CreateOptions{})
+		if err != nil && apierrors.IsAlreadyExists(err) {
+			if err := deleteFn(); err != nil && !apierrors.IsNotFound(err) {
+				return nil, fmt.Errorf("failed to delete existing TLS Scanner Job: %w", err)
+			}
+			time.Sleep(time.Second)
+			continue
+		} else if err != nil {
+			return nil, fmt.Errorf("failed to create TLS Scanner Job: %w", err)
+		}
+		s.addCleanup(deleteFn)
+		return createdJob, nil
+	}
+	return nil, fmt.Errorf("timed out recreating TLS Scanner Job %q", job.Name)


Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. tide/merge-method-squash Denotes a PR that should be squashed by tide when it merges.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants