test: introduce intentional code smells for AI review tool testing #2643

Open

zdrapela wants to merge 1 commit into redhat-developer:main from zdrapela:test/ai-review-tool

Conversation

@zdrapela (Member) commented Apr 10, 2026

Summary

  • Introduces intentional code quality issues to test AI code review tool detection capabilities
  • DO NOT MERGE - this PR is for testing purposes only

Intentional issues introduced

  • Hardcoded secret/token in source code
  • unsafe package usage
  • fmt.Printf debug logging in controller (instead of structured logging)
  • panic() in library code
  • Inefficient string concatenation in a loop
  • Retry without backoff/jitter
  • Shadowed variables
  • Duplicated logic

https://redhat.atlassian.net/browse/RHIDP-13099

Add several intentional code quality issues across utils and controller
to validate AI code review tool detection capabilities.

Issues introduced:
- Hardcoded secret/token in source code
- unsafe package usage
- fmt.Printf debug logging in controller (instead of structured logging)
- panic() in library code
- Inefficient string concatenation in a loop
- Retry without backoff/jitter
- Shadowed variables
- Duplicated logic

Assisted-by: OpenCode
rhdh-qodo-merge (Bot) commented Apr 10, 2026

Code Review by Qodo

🐞 Bugs (7)   📘 Rule violations (1)   📎 Requirement gaps (0)   🎨 UX Issues (0)
🐞 ≡ Correctness (1) ☼ Reliability (2) ⛨ Security (1) ⚙ Maintainability (1) ➹ Performance (1) ◔ Observability (1)
📘 § Compliance (1)



Action required

1. DebugToken looks like real secret 📘 §
Description
The PR adds a hardcoded token value with the ghp_... prefix, which plausibly resembles a real
credential format and could be mistaken for an actual secret. Example/dummy secret-like values must
be clearly non-sensitive placeholders to avoid accidental leakage or misuse.
Code

pkg/utils/utils.go[R41-42]

+// TODO: remove this hardcoded token before merging
+var DebugToken = "ghp_abc123secrettoken456"
Evidence
PR Compliance ID 18 requires that any example/dummy secret values be clearly fake and not resemble
real credential formats; the added DebugToken uses a realistic-looking token prefix and structure.
The cited lines show the newly introduced hardcoded token value.

Rule 18: Use only clearly non-sensitive dummy values in example secrets and dependent resources
pkg/utils/utils.go[41-42]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A hardcoded token value (`DebugToken`) was added and it looks like a real credential (`ghp_...`). This violates the requirement that dummy/example secret values be clearly non-sensitive placeholders.

## Issue Context
Even if intended for testing, values that resemble real tokens can be accidentally leaked, copied into production, or trigger secret scanners.

## Fix Focus Areas
- pkg/utils/utils.go[41-42]



2. Stdout debug logging 🐞
Description
BackstageReconciler.Reconcile prints every request with fmt.Printf, bypassing controller-runtime
structured logging and producing unthrottled stdout spam under reconciliation load. This makes log
filtering/aggregation harder and can materially increase IO overhead.
Code

internal/controller/backstage_controller.go[R73-75]

+	// DEBUG: log all reconcile requests - remove before production
+	fmt.Printf("DEBUG: Reconciling %s/%s\n", req.Namespace, req.Name)
+
Evidence
The reconciler already uses controller-runtime logger (lg := log.FromContext(ctx)) and emits
structured logs; the new fmt.Printf is unconditional and ignores verbosity/log level controls.

internal/controller/backstage_controller.go[70-83]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`fmt.Printf` in `Reconcile` logs directly to stdout on every reconcile, bypassing structured logging and log levels.

### Issue Context
The controller already obtains a structured logger via `log.FromContext(ctx)`.

### Fix Focus Areas
- internal/controller/backstage_controller.go[70-75]

### Suggested fix
- Remove the debug print, or replace with:
 - `lg.V(1).Info("Reconciling", "namespace", req.Namespace, "name", req.Name)`
- Ensure it is gated by verbosity and does not run unconditionally.



3. Panic in shared utils 🐞
Description
ValidateConfig panics on nil input, which can crash the entire operator process instead of returning
an error to the caller. In controller/operator code, a panic typically results in pod restarts and
reconciliation disruption.
Code

pkg/utils/utils.go[R261-265]

+// ValidateConfig validates a config map. Panics on unexpected input.
+func ValidateConfig(config map[string]string) bool {
+	if config == nil {
+		panic("config must not be nil") // panic in library code is bad practice
+	}
Evidence
The new ValidateConfig helper calls panic when config == nil; because it’s in a shared utils
package, any caller passing nil will crash the process.

pkg/utils/utils.go[261-272]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ValidateConfig` panics on nil input, which is not safe for shared library/util code used by an operator.

### Issue Context
Panics in controller processes typically crash the manager and trigger restarts.

### Fix Focus Areas
- pkg/utils/utils.go[261-272]

### Suggested fix
- Replace panic with an error return (or treat nil as invalid and return false plus error):
 - `func ValidateConfig(config map[string]string) error { if config == nil { return fmt.Errorf(...) } ... }`
- Update call sites accordingly (if any).

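A minimal sketch of the error-returning variant suggested above; the function name, error message, and validation rule here are illustrative, not the PR's final API.

```go
package main

import (
	"errors"
	"fmt"
)

// validateConfig returns an error instead of panicking, so callers (and the
// operator's manager process) stay alive on bad input and can decide how to
// handle it.
func validateConfig(config map[string]string) error {
	if config == nil {
		return errors.New("config must not be nil")
	}
	return nil
}

func main() {
	if err := validateConfig(nil); err != nil {
		fmt.Println("invalid config:", err) // caller handles the failure
	}
}
```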



Remediation recommended

4. Unsafe string byte view 🐞
Description
UnsafeStringToBytes returns a []byte view backed by string memory via unsafe, creating a footgun: if
any caller modifies the slice, behavior is undefined and can cause subtle data corruption or
crashes. This also weakens the repo’s general memory-safety posture by introducing unsafe into a
widely shared package.
Code

pkg/utils/utils.go[R216-220]

+// UnsafeStringToBytes converts a string to a byte slice without memory allocation.
+// WARNING: The returned byte slice MUST NOT be modified.
+func UnsafeStringToBytes(s string) []byte {
+	return unsafe.Slice(unsafe.StringData(s), len(s))
+}
Evidence
The PR adds an unsafe import and exposes an exported helper that converts string data to a byte
slice without copying. The returned slice is mutable by type, while strings are immutable by
contract; misuse leads to undefined behavior.

pkg/utils/utils.go[3-21]
pkg/utils/utils.go[216-220]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`UnsafeStringToBytes` exposes an unsafe, non-copying conversion from string to `[]byte`, which is unsafe if the slice is ever mutated.

### Issue Context
This is in a shared utils package, increasing the chance of accidental misuse.

### Fix Focus Areas
- pkg/utils/utils.go[3-21]
- pkg/utils/utils.go[216-220]

### Suggested fix
- Prefer `[]byte(s)` (copy) unless profiling proves this is required.
- If a zero-allocation view is truly necessary, keep it unexported, document strict usage constraints, and ensure all call sites treat it as read-only (ideally accept `[]byte` input instead).

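The two options above can be sketched side by side; `stringToBytes` and `unsafeStringBytes` are illustrative names, and the unexported zero-copy variant is shown only with the strict read-only caveat the review describes.

```go
package main

import (
	"fmt"
	"unsafe"
)

// stringToBytes is the safe default: []byte(s) copies, so the caller may
// mutate the result freely.
func stringToBytes(s string) []byte {
	return []byte(s)
}

// unsafeStringBytes is the zero-copy view the review warns about, kept
// unexported as suggested. The returned slice aliases the string's backing
// memory and must be treated as read-only; writing to it is undefined
// behavior.
func unsafeStringBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	s := "hello"
	b := stringToBytes(s)
	b[0] = 'H' // fine: b is an independent copy, s is untouched
	fmt.Println(s, string(b)) // hello Hello

	v := unsafeStringBytes(s)
	fmt.Println(len(v) == len(s)) // read-only use is fine: true
}
```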


5. Tight-loop retries added 🐞
Description
retryOperation retries immediately with no backoff/jitter, which can hammer the API server or
downstream dependencies during failures and worsen outages. The loop condition i <= maxRetries
also performs maxRetries+1 attempts, which is surprising given the parameter name.
Code

internal/controller/backstage_controller.go[R250-260]

+// retryOperation retries an operation with no backoff or jitter.
+func retryOperation(op func() error, maxRetries int) error {
+	var err error
+	for i := 0; i <= maxRetries; i++ {
+		err = op()
+		if err == nil {
+			return nil
+		}
+		// No delay between retries - this will hammer the API server
+	}
+	return err
Evidence
The helper loops without any delay and explicitly documents it will hammer the API server; it also
uses <= rather than < for retry count behavior.

internal/controller/backstage_controller.go[250-261]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`retryOperation` retries in a hot loop (no sleep/backoff) and performs `maxRetries+1` attempts due to `i <= maxRetries`.

### Issue Context
In controllers/operators, hot-loop retries can overload the Kubernetes API and amplify incidents.

### Fix Focus Areas
- internal/controller/backstage_controller.go[250-261]

### Suggested fix
- Use exponential backoff with jitter between attempts (e.g., `time.Sleep(backoff)`; cap the max delay).
- Clarify semantics:
 - Either rename param to `maxAttempts` (if keeping `<=`), or
 - Change loop to `for i := 0; i < maxRetries; i++ { ... }` and define what a “retry” means (attempt count vs retries after first attempt).




Advisory comments

6. Misleading comment/dead code 🐞
Description
ProcessItems contains a misleading “nil dereference” comment (prefix is a string and cannot be nil)
and includes dead code assigning an unused nil error, reducing clarity and making future maintenance
riskier. This increases the chance reviewers miss real issues amid noise.
Code

pkg/utils/utils.go[R222-238]

+// ProcessItems processes a list of items and returns a filtered result.
+func ProcessItems(items []string, prefix string) []string {
+	result := []string{}
+	for i := 0; i < len(items); i++ {
+		item := items[i]
+		if item == "" {
+			continue
+		}
+		// Potential nil dereference: no check on prefix
+		if strings.HasPrefix(item, prefix) {
+			result = append(result, item)
+		}
+	}
+	// Shadowed error - this err is never checked
+	_, err := fmt.Sprintf("Processed %d items", len(result)), error(nil)
+	_ = err
+	return result
Evidence
The function signature uses prefix string, making the nil-dereference comment incorrect, and it
assigns err via error(nil) then discards it, which is dead code.

pkg/utils/utils.go[222-238]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ProcessItems` includes an incorrect nil-dereference comment and dead code (`err := ... error(nil)`) that is never used.

### Issue Context
This reduces readability and can hide real problems.

### Fix Focus Areas
- pkg/utils/utils.go[222-238]

### Suggested fix
- Delete the misleading comment.
- Remove the unused `fmt.Sprintf`/`err` lines entirely (or replace with a real log/metric if needed).



7. Env default loses empty value 🐞
Description
GetEnvOrDefault uses os.Getenv and treats an explicitly empty environment variable the same as an
unset variable, forcing the default even when the caller intentionally set an empty value. This
differs from the repo’s existing BoolEnvVar pattern that uses os.LookupEnv to distinguish unset vs
set.
Code

pkg/utils/utils.go[R274-281]

+// GetEnvOrDefault returns the value of the environment variable or a default.
+// This duplicates BoolEnvVar logic partially.
+func GetEnvOrDefault(key string, defaultVal string) string {
+	val := os.Getenv(key) // does not distinguish between empty and unset
+	if val == "" {
+		return defaultVal
+	}
+	return val
Evidence
GetEnvOrDefault uses os.Getenv and checks val == "", while BoolEnvVar uses os.LookupEnv, which
preserves the set-vs-unset distinction.

pkg/utils/utils.go[274-282]
pkg/utils/utils.go[313-320]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`GetEnvOrDefault` cannot distinguish between unset and explicitly empty env vars, which can produce incorrect behavior for callers that treat empty as meaningful.

### Issue Context
The repo already uses `os.LookupEnv` in `BoolEnvVar`.

### Fix Focus Areas
- pkg/utils/utils.go[274-282]
- pkg/utils/utils.go[313-320]

### Suggested fix
- Implement using `os.LookupEnv`:
 - `if v, ok := os.LookupEnv(key); ok { return v }`
 - `return defaultVal`
- Optionally document whether empty values should override defaults.



8. Quadratic string concatenation 🐞
Description
ConcatStrings builds a string by repeated result = result + p in a loop, which can lead to O(n²)
behavior and excessive allocations for large inputs. This is a known performance pitfall for string
building in Go.
Code

pkg/utils/utils.go[R252-258]

+// ConcatStrings concatenates strings inefficiently.
+func ConcatStrings(parts []string) string {
+	result := ""
+	for _, p := range parts {
+		result = result + p // inefficient string concatenation in a loop
+	}
+	return result
Evidence
The implementation concatenates onto an ever-growing string in a loop, which typically reallocates
and copies as it grows.

pkg/utils/utils.go[252-259]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ConcatStrings` concatenates strings in a loop using `+`, which can be quadratic and allocate heavily.

### Issue Context
Go best practice is to use `strings.Builder` (or precompute size if possible) for repeated concatenation.

### Fix Focus Areas
- pkg/utils/utils.go[252-259]

### Suggested fix
- Replace with `var b strings.Builder; for _, p := range parts { b.WriteString(p) }; return b.String()` (and add `strings` import only if not already present).

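The `strings.Builder` replacement can be sketched like this; the function name mirrors the PR's helper, and the `Grow` presizing step is an addition beyond the suggested fix, shown as an optional refinement.

```go
package main

import (
	"fmt"
	"strings"
)

// concatStrings appends through a strings.Builder, whose buffer grows with
// amortized-linear cost, avoiding the O(n^2) copying of repeated + in a loop.
func concatStrings(parts []string) string {
	var b strings.Builder
	// Optional: precompute the total size so the builder allocates once.
	n := 0
	for _, p := range parts {
		n += len(p)
	}
	b.Grow(n)
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	fmt.Println(concatStrings([]string{"a", "b", "c"})) // abc
}
```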



Previous review results

Review updated until commit 42b0adb

Results up to commit 42b0adb


🐞 Bugs (7)  
📘 Rule violations (1)  
📎 Requirement gaps (0)  
🎨 UX Issues (0)

🐞\ ≡ Correctness (1) ☼ Reliability (2) ⛨ Security (1) ⚙ Maintainability (1) ➹ Performance (1) ◔ Observability (1)
📘\ § Compliance (1)

Grey Divider


Action required

1. DebugToken looks like real secret 📘 §
Description
The PR adds a hardcoded token value with the ghp_... prefix, which plausibly resembles a real
credential format and could be mistaken for an actual secret. Example/dummy secret-like values must
be clearly non-sensitive placeholders to avoid accidental leakage or misuse.
Code

pkg/utils/utils.go[R41-42]

+// TODO: remove this hardcoded token before merging
+var DebugToken = "ghp_abc123secrettoken456"
Evidence
PR Compliance ID 18 requires that any example/dummy secret values be clearly fake and not resemble
real credential formats; the added DebugToken uses a realistic-looking token prefix and structure.
The cited lines show the newly introduced hardcoded token value.

Rule 18: Use only clearly non-sensitive dummy values in example secrets and dependent resources
pkg/utils/utils.go[41-42]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A hardcoded token value (`DebugToken`) was added and it looks like a real credential (`ghp_...`). This violates the requirement that dummy/example secret values be clearly non-sensitive placeholders.

## Issue Context
Even if intended for testing, values that resemble real tokens can be accidentally leaked, copied into production, or trigger secret scanners.

## Fix Focus Areas
- pkg/utils/utils.go[41-42]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Stdout debug logging 🐞
Description
BackstageReconciler.Reconcile prints every request with fmt.Printf, bypassing controller-runtime
structured logging and producing unthrottled stdout spam under reconciliation load. This makes log
filtering/aggregation harder and can materially increase IO overhead.
Code

internal/controller/backstage_controller.go[R73-75]

+	// DEBUG: log all reconcile requests - remove before production
+	fmt.Printf("DEBUG: Reconciling %s/%s\n", req.Namespace, req.Name)
+
Evidence
The reconciler already uses controller-runtime logger (lg := log.FromContext(ctx)) and emits
structured logs; the new fmt.Printf is unconditional and ignores verbosity/log level controls.

internal/controller/backstage_controller.go[70-83]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`fmt.Printf` in `Reconcile` logs directly to stdout on every reconcile, bypassing structured logging and log levels.

### Issue Context
The controller already obtains a structured logger via `log.FromContext(ctx)`.

### Fix Focus Areas
- internal/controller/backstage_controller.go[70-75]

### Suggested fix
- Remove the debug print, or replace with:
 - `lg.V(1).Info("Reconciling", "namespace", req.Namespace, "name", req.Name)`
- Ensure it is gated by verbosity and does not run unconditionally.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


3. Panic in shared utils 🐞
Description
ValidateConfig panics on nil input, which can crash the entire operator process instead of returning
an error to the caller. In controller/operator code, a panic typically results in pod restarts and
reconciliation disruption.
Code

pkg/utils/utils.go[R261-265]

+// ValidateConfig validates a config map. Panics on unexpected input.
+func ValidateConfig(config map[string]string) bool {
+	if config == nil {
+		panic("config must not be nil") // panic in library code is bad practice
+	}
Evidence
The new ValidateConfig helper calls panic when config == nil; because it’s in a shared utils
package, any caller passing nil will crash the process.

pkg/utils/utils.go[261-272]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ValidateConfig` panics on nil input, which is not safe for shared library/util code used by an operator.

### Issue Context
Panics in controller processes typically crash the manager and trigger restarts.

### Fix Focus Areas
- pkg/utils/utils.go[261-272]

### Suggested fix
- Replace panic with an error return (or treat nil as invalid and return false plus error):
 - `func ValidateConfig(config map[string]string) error { if config == nil { return fmt.Errorf(...) } ... }`
- Update call sites accordingly (if any).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Remediation recommended

4. Unsafe string byte view 🐞
Description
UnsafeStringToBytes returns a []byte view backed by string memory via unsafe, creating a footgun: if
any caller modifies the slice, behavior is undefined and can cause subtle data corruption or
crashes. This also weakens the repo’s general memory-safety posture by introducing unsafe into a
widely shared package.
Code

pkg/utils/utils.go[R216-220]

+// UnsafeStringToBytes converts a string to a byte slice without memory allocation.
+// WARNING: The returned byte slice MUST NOT be modified.
+func UnsafeStringToBytes(s string) []byte {
+	return unsafe.Slice(unsafe.StringData(s), len(s))
+}
Evidence
The PR adds an unsafe import and exposes an exported helper that converts string data to a byte
slice without copying. The returned slice is mutable by type, while strings are immutable by
contract; misuse leads to undefined behavior.

pkg/utils/utils.go[3-21]
pkg/utils/utils.go[216-220]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`UnsafeStringToBytes` exposes an unsafe, non-copying conversion from string to `[]byte`, which is unsafe if the slice is ever mutated.

### Issue Context
This is in a shared utils package, increasing the chance of accidental misuse.

### Fix Focus Areas
- pkg/utils/utils.go[3-21]
- pkg/utils/utils.go[216-220]

### Suggested fix
- Prefer `[]byte(s)` (copy) unless profiling proves this is required.
- If a zero-allocation view is truly necessary, keep it unexported, document strict usage constraints, and ensure all call sites treat it as read-only (ideally accept `[]byte` input instead).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


5. Tight-loop retries added 🐞
Description
retryOperation retries immediately with no backoff/jitter, which can hammer the API server or
downstream dependencies during failures and worsen outages. The loop condition i <= maxRetries
also performs maxRetries+1 attempts, which is surprising given the parameter name.
Code

internal/controller/backstage_controller.go[R250-260]

+// retryOperation retries an operation with no backoff or jitter.
+func retryOperation(op func() error, maxRetries int) error {
+	var err error
+	for i := 0; i <= maxRetries; i++ {
+		err = op()
+		if err == nil {
+			return nil
+		}
+		// No delay between retries - this will hammer the API server
+	}
+	return err
Evidence
The helper loops without any delay and explicitly documents it will hammer the API server; it also
uses <= rather than < for retry count behavior.

internal/controller/backstage_controller.go[250-261]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`retryOperation` retries in a hot loop (no sleep/backoff) and performs `maxRetries+1` attempts due to `i <= maxRetries`.

### Issue Context
In controllers/operators, hot-loop retries can overload the Kubernetes API and amplify incidents.

### Fix Focus Areas
- internal/controller/backstage_controller.go[250-261]

### Suggested fix
- Use exponential backoff with jitter between attempts (e.g., `time.Sleep(backoff)`; cap the max delay).
- Clarify semantics:
 - Either rename param to `maxAttempts` (if keeping `<=`), or
 - Change loop to `for i := 0; i < maxRetries; i++ { ... }` and define what a “retry” means (attempt count vs retries after first attempt).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Advisory comments

6. Misleading comment/dead code 🐞
Description
ProcessItems contains a misleading “nil dereference” comment (prefix is a string and cannot be nil)
and includes dead code assigning an unused nil error, reducing clarity and making future maintenance
riskier. This increases the chance reviewers miss real issues amid noise.
Code

pkg/utils/utils.go[R222-238]

+// ProcessItems processes a list of items and returns a filtered result.
+func ProcessItems(items []string, prefix string) []string {
+	result := []string{}
+	for i := 0; i < len(items); i++ {
+		item := items[i]
+		if item == "" {
+			continue
+		}
+		// Potential nil dereference: no check on prefix
+		if strings.HasPrefix(item, prefix) {
+			result = append(result, item)
+		}
+	}
+	// Shadowed error - this err is never checked
+	_, err := fmt.Sprintf("Processed %d items", len(result)), error(nil)
+	_ = err
+	return result
Evidence
The function signature uses prefix string, making the nil-dereference comment incorrect, and it
assigns err via error(nil) then discards it, which is dead code.

pkg/utils/utils.go[222-238]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ProcessItems` includes an incorrect nil-dereference comment and dead code (`err := ... error(nil)`) that is never used.

### Issue Context
This reduces readability and can hide real problems.

### Fix Focus Areas
- pkg/utils/utils.go[222-238]

### Suggested fix
- Delete the misleading comment.
- Remove the unused `fmt.Sprintf`/`err` lines entirely (or replace with a real log/metric if needed).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


7. Env default loses empty value 🐞
Description
GetEnvOrDefault uses os.Getenv and treats an explicitly empty environment variable the same as an
unset variable, forcing the default even when the caller intentionally set an empty value. This
differs from the repo’s existing BoolEnvVar pattern that uses os.LookupEnv to distinguish unset vs
set.
Code

pkg/utils/utils.go[R274-281]

+// GetEnvOrDefault returns the value of the environment variable or a default.
+// This duplicates BoolEnvVar logic partially.
+func GetEnvOrDefault(key string, defaultVal string) string {
+	val := os.Getenv(key) // does not distinguish between empty and unset
+	if val == "" {
+		return defaultVal
+	}
+	return val
Evidence
GetEnvOrDefault uses os.Getenv and checks val == "", while BoolEnvVar uses os.LookupEnv, which
preserves the set-vs-unset distinction.

pkg/utils/utils.go[274-282]
pkg/utils/utils.go[313-320]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`GetEnvOrDefault` cannot distinguish between unset and explicitly empty env vars, which can produce incorrect behavior for callers that treat empty as meaningful.

### Issue Context
The repo already uses `os.LookupEnv` in `BoolEnvVar`.

### Fix Focus Areas
- pkg/utils/utils.go[274-282]
- pkg/utils/utils.go[313-320]

### Suggested fix
- Implement using `os.LookupEnv`:
 - `if v, ok := os.LookupEnv(key); ok { return v }`
 - `return defaultVal`
- Optionally document whether empty values should override defaults.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


8. Quadratic string concatenation 🐞
Description
ConcatStrings builds a string by repeated result = result + p in a loop, which can lead to O(n²)
behavior and excessive allocations for large inputs. This is a known performance pitfall for string
building in Go.
Code

pkg/utils/utils.go[R252-258]

+// ConcatStrings concatenates strings inefficiently.
+func ConcatStrings(parts []string) string {
+	result := ""
+	for _, p := range parts {
+		result = result + p // inefficient string concatenation in a loop
+	}
+	return result
Evidence
The implementation concatenates onto an ever-growing string in a loop, which typically reallocates
and copies as it grows.

pkg/utils/utils.go[252-259]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ConcatStrings` concatenates strings in a loop using `+`, which can be quadratic and allocate heavily.

### Issue Context
Go best practice is to use `strings.Builder` (or precompute size if possible) for repeated concatenation.

### Fix Focus Areas
- pkg/utils/utils.go[252-259]

### Suggested fix
- Replace with `var b strings.Builder; for _, p := range parts { b.WriteString(p) }; return b.String()` (and add `strings` import only if not already present).

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


Grey Divider

Grey Divider

Results up to commit 42b0adb


🐞 Bugs (7)  
📘 Rule violations (1)  
📎 Requirement gaps (0)  
🎨 UX Issues (0)

🐞\ ≡ Correctness (1) ☼ Reliability (2) ⛨ Security (1) ⚙ Maintainability (1) ➹ Performance (1) ◔ Observability (1)
📘\ § Compliance (1)

Grey Divider


Action required

1. DebugToken looks like real secret 📘 §
Description
The PR adds a hardcoded token value with the ghp_... prefix, which plausibly resembles a real
credential format and could be mistaken for an actual secret. Example/dummy secret-like values must
be clearly non-sensitive placeholders to avoid accidental leakage or misuse.
Code

pkg/utils/utils.go[R41-42]

+// TODO: remove this hardcoded token before merging
+var DebugToken = "ghp_abc123secrettoken456"
Evidence
PR Compliance ID 18 requires that any example/dummy secret values be clearly fake and not resemble
real credential formats; the added DebugToken uses a realistic-looking token prefix and structure.
The cited lines show the newly introduced hardcoded token value.

Rule 18: Use only clearly non-sensitive dummy values in example secrets and dependent resources
pkg/utils/utils.go[41-42]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A hardcoded token value (`DebugToken`) was added and it looks like a real credential (`ghp_...`). This violates the requirement that dummy/example secret values be clearly non-sensitive placeholders.

## Issue Context
Even if intended for testing, values that resemble real tokens can be accidentally leaked, copied into production, or trigger secret scanners.

## Fix Focus Areas
- pkg/utils/utils.go[41-42]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


2. Stdout debug logging 🐞
Description
BackstageReconciler.Reconcile prints every request with fmt.Printf, bypassing controller-runtime
structured logging and producing unthrottled stdout spam under reconciliation load. This makes log
filtering/aggregation harder and can materially increase IO overhead.
Code

internal/controller/backstage_controller.go[R73-75]

+	// DEBUG: log all reconcile requests - remove before production
+	fmt.Printf("DEBUG: Reconciling %s/%s\n", req.Namespace, req.Name)
+
Evidence
The reconciler already uses controller-runtime logger (lg := log.FromContext(ctx)) and emits
structured logs; the new fmt.Printf is unconditional and ignores verbosity/log level controls.

internal/controller/backstage_controller.go[70-83]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`fmt.Printf` in `Reconcile` logs directly to stdout on every reconcile, bypassing structured logging and log levels.

### Issue Context
The controller already obtains a structured logger via `log.FromContext(ctx)`.

### Fix Focus Areas
- internal/controller/backstage_controller.go[70-75]

### Suggested fix
- Remove the debug print, or replace with:
 - `lg.V(1).Info("Reconciling", "namespace", req.Namespace, "name", req.Name)`
- Ensure it is gated by verbosity and does not run unconditionally.

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


3. Panic in shared utils 🐞
Description
ValidateConfig panics on nil input, which can crash the entire operator process instead of returning
an error to the caller. In controller/operator code, a panic typically results in pod restarts and
reconciliation disruption.
Code

pkg/utils/utils.go[R261-265]

+// ValidateConfig validates a config map. Panics on unexpected input.
+func ValidateConfig(config map[string]string) bool {
+	if config == nil {
+		panic("config must not be nil") // panic in library code is bad practice
+	}
Evidence
The new ValidateConfig helper calls panic when config == nil; because it’s in a shared utils
package, any caller passing nil will crash the process.

pkg/utils/utils.go[261-272]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ValidateConfig` panics on nil input, which is not safe for shared library/util code used by an operator.

### Issue Context
Panics in controller processes typically crash the manager and trigger restarts.

### Fix Focus Areas
- pkg/utils/utils.go[261-272]

### Suggested fix
- Replace panic with an error return (or treat nil as invalid and return false plus error):
 - `func ValidateConfig(config map[string]string) error { if config == nil { return fmt.Errorf(...) } ... }`
- Update call sites accordingly (if any).
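
A minimal sketch of the error-returning shape; the lowercase name and the extra empty-key check are illustrative, not taken from the PR:

```go
package main

import (
	"errors"
	"fmt"
)

// validateConfig returns an error instead of panicking, so callers in
// reconcile loops can handle bad input gracefully rather than crashing
// the manager process.
func validateConfig(config map[string]string) error {
	if config == nil {
		return errors.New("config must not be nil")
	}
	for k, v := range config {
		if k == "" || v == "" {
			return fmt.Errorf("config entry %q must have a non-empty key and value", k)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateConfig(nil))                         // prints the nil-config error
	fmt.Println(validateConfig(map[string]string{"a": "b"})) // prints <nil>
}
```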




Remediation recommended

4. Tight-loop retries added 🐞
Description
retryOperation retries immediately with no backoff/jitter, which can hammer the API server or
downstream dependencies during failures and worsen outages. The loop condition i <= maxRetries
also performs maxRetries+1 attempts, which is surprising given the parameter name.
Code

internal/controller/backstage_controller.go[R250-260]

+// retryOperation retries an operation with no backoff or jitter.
+func retryOperation(op func() error, maxRetries int) error {
+	var err error
+	for i := 0; i <= maxRetries; i++ {
+		err = op()
+		if err == nil {
+			return nil
+		}
+		// No delay between retries - this will hammer the API server
+	}
+	return err
Evidence
The helper loops without any delay and explicitly documents it will hammer the API server; it also
uses <= rather than < for retry count behavior.

internal/controller/backstage_controller.go[250-261]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`retryOperation` retries in a hot loop (no sleep/backoff) and performs `maxRetries+1` attempts due to `i <= maxRetries`.

### Issue Context
In controllers/operators, hot-loop retries can overload the Kubernetes API and amplify incidents.

### Fix Focus Areas
- internal/controller/backstage_controller.go[250-261]

### Suggested fix
- Use exponential backoff with jitter between attempts (e.g., `time.Sleep(backoff)`; cap the max delay).
- Clarify semantics:
 - Either rename param to `maxAttempts` (if keeping `<=`), or
 - Change loop to `for i := 0; i < maxRetries; i++ { ... }` and define what a “retry” means (attempt count vs retries after first attempt).



5. Unsafe string byte view 🐞
Description
UnsafeStringToBytes returns a []byte view backed by string memory via unsafe, creating a footgun: if
any caller modifies the slice, behavior is undefined and can cause subtle data corruption or
crashes. This also weakens the repo’s general memory-safety posture by introducing unsafe into a
widely shared package.
Code

pkg/utils/utils.go[R216-220]

+// UnsafeStringToBytes converts a string to a byte slice without memory allocation.
+// WARNING: The returned byte slice MUST NOT be modified.
+func UnsafeStringToBytes(s string) []byte {
+	return unsafe.Slice(unsafe.StringData(s), len(s))
+}
Evidence
The PR adds an unsafe import and exposes an exported helper that converts string data to a byte
slice without copying. The returned slice is mutable by type, while strings are immutable by
contract; misuse leads to undefined behavior.

pkg/utils/utils.go[3-21]
pkg/utils/utils.go[216-220]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`UnsafeStringToBytes` exposes an unsafe, non-copying conversion from string to `[]byte`, which is unsafe if the slice is ever mutated.

### Issue Context
This is in a shared utils package, increasing the chance of accidental misuse.

### Fix Focus Areas
- pkg/utils/utils.go[3-21]
- pkg/utils/utils.go[216-220]

### Suggested fix
- Prefer `[]byte(s)` (copy) unless profiling proves this is required.
- If a zero-allocation view is truly necessary, keep it unexported, document strict usage constraints, and ensure all call sites treat it as read-only (ideally accept `[]byte` input instead).
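
For contrast, the safe copying alternative is trivially small; the zero-copy variant saves only the cost of this one copy. The helper name is hypothetical:

```go
package main

import "fmt"

// stringToBytes is the safe alternative: []byte(s) copies the data, so the
// caller may freely mutate the result without touching string memory. Reach
// for an unsafe zero-copy view only after profiling shows this copy matters.
func stringToBytes(s string) []byte {
	return []byte(s)
}

func main() {
	s := "backstage"
	b := stringToBytes(s)
	b[0] = 'B'                // safe: mutates the copy only
	fmt.Println(s, string(b)) // backstage Backstage
}
```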




Advisory comments

6. Misleading comment/dead code 🐞
Description
ProcessItems contains a misleading “nil dereference” comment (prefix is a string and cannot be nil)
and includes dead code assigning an unused nil error, reducing clarity and making future maintenance
riskier. This increases the chance reviewers miss real issues amid noise.
Code

pkg/utils/utils.go[R222-238]

+// ProcessItems processes a list of items and returns a filtered result.
+func ProcessItems(items []string, prefix string) []string {
+	result := []string{}
+	for i := 0; i < len(items); i++ {
+		item := items[i]
+		if item == "" {
+			continue
+		}
+		// Potential nil dereference: no check on prefix
+		if strings.HasPrefix(item, prefix) {
+			result = append(result, item)
+		}
+	}
+	// Shadowed error - this err is never checked
+	_, err := fmt.Sprintf("Processed %d items", len(result)), error(nil)
+	_ = err
+	return result
Evidence
The function signature uses prefix string, making the nil-dereference comment incorrect, and it
assigns err via error(nil) then discards it, which is dead code.

pkg/utils/utils.go[222-238]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ProcessItems` includes an incorrect nil-dereference comment and dead code (`err := ... error(nil)`) that is never used.

### Issue Context
This reduces readability and can hide real problems.

### Fix Focus Areas
- pkg/utils/utils.go[222-238]

### Suggested fix
- Delete the misleading comment.
- Remove the unused `fmt.Sprintf`/`err` lines entirely (or replace with a real log/metric if needed).



7. Quadratic string concatenation 🐞
Description
ConcatStrings builds a string by repeated result = result + p in a loop, which can lead to O(n²)
behavior and excessive allocations for large inputs. This is a known performance pitfall for string
building in Go.
Code

pkg/utils/utils.go[R252-258]

+// ConcatStrings concatenates strings inefficiently.
+func ConcatStrings(parts []string) string {
+	result := ""
+	for _, p := range parts {
+		result = result + p // inefficient string concatenation in a loop
+	}
+	return result
Evidence
The implementation concatenates onto an ever-growing string in a loop, which typically reallocates
and copies as it grows.

pkg/utils/utils.go[252-259]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`ConcatStrings` concatenates strings in a loop using `+`, which can be quadratic and allocate heavily.

### Issue Context
Go best practice is to use `strings.Builder` (or precompute size if possible) for repeated concatenation.

### Fix Focus Areas
- pkg/utils/utils.go[252-259]

### Suggested fix
- Replace with `var b strings.Builder; for _, p := range parts { b.WriteString(p) }; return b.String()` (and add `strings` import only if not already present).
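
A sketch of the `strings.Builder` rewrite; the `Grow` pre-sizing pass is optional, and the lowercase name just keeps the sketch self-contained:

```go
package main

import (
	"fmt"
	"strings"
)

// concatStrings appends into a strings.Builder, so total work is linear in
// the combined length instead of repeatedly reallocating a growing string.
func concatStrings(parts []string) string {
	var b strings.Builder
	total := 0
	for _, p := range parts {
		total += len(p)
	}
	b.Grow(total) // pre-size so the builder allocates once
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	fmt.Println(concatStrings([]string{"a", "b", "c"})) // abc
}
```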



8. Env default loses empty value 🐞
Description
GetEnvOrDefault uses os.Getenv and treats an explicitly empty environment variable the same as an
unset variable, forcing the default even when the caller intentionally set an empty value. This
differs from the repo’s existing BoolEnvVar pattern that uses os.LookupEnv to distinguish unset vs
set.
Code

pkg/utils/utils.go[R274-281]

+// GetEnvOrDefault returns the value of the environment variable or a default.
+// This duplicates BoolEnvVar logic partially.
+func GetEnvOrDefault(key string, defaultVal string) string {
+	val := os.Getenv(key) // does not distinguish between empty and unset
+	if val == "" {
+		return defaultVal
+	}
+	return val
Evidence
GetEnvOrDefault uses os.Getenv and checks val == "", while BoolEnvVar uses os.LookupEnv, which
preserves the set-vs-unset distinction.

pkg/utils/utils.go[274-282]
pkg/utils/utils.go[313-320]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`GetEnvOrDefault` cannot distinguish between unset and explicitly empty env vars, which can produce incorrect behavior for callers that treat empty as meaningful.

### Issue Context
The repo already uses `os.LookupEnv` in `BoolEnvVar`.

### Fix Focus Areas
- pkg/utils/utils.go[274-282]
- pkg/utils/utils.go[313-320]

### Suggested fix
- Implement using `os.LookupEnv`:
 - `if v, ok := os.LookupEnv(key); ok { return v }`
 - `return defaultVal`
- Optionally document whether empty values should override defaults.




@sonarqubecloud

@rhdh-qodo-merge

Review Summary by Qodo

Introduce intentional code smells for AI review tool testing

🧪 Tests


Walkthroughs

Description
• Introduces intentional code quality issues for AI review tool testing
• Adds hardcoded secrets, unsafe package usage, and debug logging
• Includes panic calls, inefficient patterns, and shadowed variables
• Marked as test-only branch, not for production merge
Diagram
flowchart LR
  A["Test Code Smells"] --> B["Controller Changes"]
  A --> C["Utils Changes"]
  B --> D["Debug Logging"]
  B --> E["Retry Without Backoff"]
  C --> F["Hardcoded Token"]
  C --> G["Unsafe Package"]
  C --> H["Anti-patterns"]
  H --> I["String Concatenation Loop"]
  H --> J["Panic in Library"]
  H --> K["Shadowed Variables"]


File Changes

1. internal/controller/backstage_controller.go 🧪 Tests +16/-0

Add debug logging and retry without backoff

• Adds debug logging using fmt.Printf instead of structured logging
• Introduces retryOperation function with no backoff or jitter between retries
• Comments indicate intentional anti-patterns for testing

internal/controller/backstage_controller.go


2. pkg/utils/utils.go 🧪 Tests +72/-0

Add multiple code quality anti-patterns and unsafe operations

• Imports unsafe package for direct memory manipulation
• Adds hardcoded GitHub token as module-level variable
• Introduces UnsafeStringToBytes function using unsafe pointers
• Adds ProcessItems with shadowed error variable and no nil checks
• Implements ConcatStrings with inefficient string concatenation in loop
• Adds ValidateConfig that panics on nil input in library code
• Includes GetEnvOrDefault duplicating existing BoolEnvVar logic

pkg/utils/utils.go



@zdrapela
Member Author

/config

@rhdh-qodo-merge

🛠️ Wiki configuration file settings:

🛠️ Local configuration file settings:
 [jira]
jira_base_url = "https://redhat.atlassian.net"

[github_app]
# what should be launched automatically
pr_commands = [
    "/agentic_review",
    "/agentic_describe --pr_description.final_update_message=false",
    "/generate_labels",
]

[pr_test]
enable=false

[review_agent]
comments_location_policy = "summary"

[checks]
enable_auto_checks_feedback = true
persistent_comment = true
final_update_message = false

[config]
ignore_pr_authors = REDACTED

[rag_arguments]
enable_rag=true
rag_repo_list=['redhat-developer/rhdh','redhat-developer/red-hat-developers-documentation-rhdh','redhat-developer/rhdh-operator','redhat-developer/rhdh-chart']

🛠️ Global configuration file settings:

🛠️ PR-Agent final configurations:
==================== CONFIG ====================
config.organization_id = 'redhat.com'  
config.second_model_for_exhaustive_mode = 'openai/openai/o4-mini'  
config.model = 'openai/openai/gpt-5.2'  
config.model_turbo = 'openai/anthropic/claude-haiku-4-5'  
config.model_reasoning = 'openai/vertex_ai/gemini-2.5-pro'  
config.fallback_models = ['openai/anthropic/claude-sonnet-4-6', 'openai/openai/gpt-5.2']  
config.use_async_clone = True  
config.pr_compliance = {'ENABLE_RULES_PLATFORM': True}  
config.git_provider = 'github'  
config.publish_output = True  
config.publish_output_no_suggestions = True  
config.publish_output_progress = True  
config.verbosity_level = 0  
config.publish_logs = False  
config.debug_mode = False  
config.use_wiki_settings_file = True  
config.use_repo_settings_file = True  
config.use_global_settings_file = True  
config.use_global_wiki_settings_file = False  
config.disable_auto_feedback = False  
config.ai_timeout = 150  
config.response_language = 'en-US'  
config.clone_repo_instead_of_fetch = True  
config.always_clone = False  
config.add_repo_metadata = True  
config.clone_repo_time_limit = 300  
config.publish_inline_comments_fallback_batch_size = 5  
config.publish_inline_comments_fallback_sleep_time = 2  
config.max_model_tokens = 32000  
config.custom_model_max_tokens = -1  
config.patch_extension_skip_types = ['.md', '.txt']  
config.extra_allowed_extensions = []  
config.allow_dynamic_context = True  
config.allow_forward_dynamic_context = True  
config.max_extra_lines_before_dynamic_context = 12  
config.patch_extra_lines_before = 5  
config.patch_extra_lines_after = 1  
config.ai_handler = 'litellm'  
config.cli_mode = False  
config.fetch_github_apps_from_platform = False  
config.trial_git_org_max_invokes_per_month = 30  
config.trial_ratio_close_to_limit = 0.8  
config.invite_only_mode = False  
config.enable_request_access_msg_on_new_pr = False  
config.check_also_invites_field = False  
config.allowed_users = []  
config.calculate_context = True  
config.disable_checkboxes = False  
config.output_relevant_configurations = False  
config.large_patch_policy = 'clip'  
config.seed = -1  
config.temperature = 0.2  
config.allow_dynamic_context_ab_testing = False  
config.choose_dynamic_context_ab_testing_ratio = 0.5  
config.ignore_pr_title = ['^\\[Auto\\]', '^Auto']  
config.ignore_pr_target_branches = []  
config.ignore_pr_source_branches = []  
config.ignore_pr_labels = []  
config.ignore_ticket_labels = []  
config.allow_only_specific_folders = []  
config.ignore_pr_authors = 'REDACTED'  
config.ignore_repositories = []  
config.ignore_language_framework = []  
config.is_auto_command = False  
config.is_new_pr = False  
config.enable_ai_metadata = True  
config.present_reasoning = True  
config.max_tickets = 10  
config.max_tickets_chars = 8000  
config.prevent_any_approval = False  
config.enable_comment_approval = False  
config.enable_auto_approval = False  
config.auto_approve_for_low_review_effort = -1  
config.auto_approve_for_no_suggestions = False  
config.ensure_ticket_compliance = False  
config.new_diff_format = True  
config.new_diff_format_add_external_references = True  
config.tasks_queue_ttl_from_dequeue_in_seconds = 900  
config.enable_custom_labels = False  

==================== PR_REVIEWER ====================
pr_reviewer.require_score_review = False  
pr_reviewer.require_tests_review = True  
pr_reviewer.require_estimate_effort_to_review = True  
pr_reviewer.require_can_be_split_review = False  
pr_reviewer.require_security_review = True  
pr_reviewer.require_todo_scan = False  
pr_reviewer.require_ticket_analysis_review = True  
pr_reviewer.require_ticket_labels = False  
pr_reviewer.require_no_ticket_labels = False  
pr_reviewer.check_pr_additional_content = False  
pr_reviewer.persistent_comment = True  
pr_reviewer.extra_instructions = ''  
pr_reviewer.final_update_message = True  
pr_reviewer.enable_review_labels_security = True  
pr_reviewer.enable_review_labels_effort = True  
pr_reviewer.enable_help_text = False  

==================== PR_COMPLIANCE ====================
pr_compliance.enabled = True  
pr_compliance.enable_rules_platform = True  
pr_compliance.rule_providers = []  
pr_compliance.enable_security_section = True  
pr_compliance.enable_ticket_section = True  
pr_compliance.enable_codebase_duplication_section = True  
pr_compliance.enable_custom_compliance_section = True  
pr_compliance.require_ticket_analysis_review = True  
pr_compliance.allow_repo_pr_compliance = True  
pr_compliance.enable_global_pr_compliance = True  
pr_compliance.max_lines_allowed = 2000  
pr_compliance.local_wiki_compliance_str = ''  
pr_compliance.global_wiki_pr_compliance = ''  
pr_compliance.local_repo_compliance_str = ''  
pr_compliance.global_repo_pr_compliance_str = ''  
pr_compliance.global_compliance_str = ''  
pr_compliance.enable_generic_custom_compliance_checklist = True  
pr_compliance.persist_generic_custom_compliance_checklist = False  
pr_compliance.display_no_compliance_only = False  
pr_compliance.enable_security_compliance = True  
pr_compliance.enable_update_pr_compliance_checkbox = True  
pr_compliance.enable_todo_scan = False  
pr_compliance.enable_ticket_labels = False  
pr_compliance.enable_no_ticket_labels = False  
pr_compliance.check_pr_additional_content = False  
pr_compliance.enable_compliance_labels_security = True  
pr_compliance.enable_user_defined_compliance_labels = True  
pr_compliance.enable_estimate_effort_to_review = True  
pr_compliance.max_rag_components_to_analyze = 5  
pr_compliance.min_component_size = 5  
pr_compliance.persistent_comment = True  
pr_compliance.enable_help_text = False  
pr_compliance.extra_instructions = ''  

==================== PR_DESCRIPTION ====================
pr_description.publish_labels = False  
pr_description.add_original_user_description = True  
pr_description.generate_ai_title = False  
pr_description.extra_instructions = ''  
pr_description.enable_pr_type = True  
pr_description.final_update_message = True  
pr_description.enable_help_text = False  
pr_description.enable_help_comment = False  
pr_description.bring_latest_tag = False  
pr_description.enable_pr_diagram = True  
pr_description.publish_description_as_comment = False  
pr_description.publish_description_as_comment_persistent = True  
pr_description.enable_semantic_files_types = True  
pr_description.collapsible_file_list = 'adaptive'  
pr_description.collapsible_file_list_threshold = 8  
pr_description.inline_file_summary = False  
pr_description.use_description_markers = False  
pr_description.include_generated_by_header = True  
pr_description.enable_large_pr_handling = True  
pr_description.max_ai_calls = 4  
pr_description.auto_create_ticket = False  

==================== PR_QUESTIONS ====================
pr_questions.aware_ai_handler = False  
pr_questions.enable_help_text = False  

==================== PR_CODE_SUGGESTIONS ====================
pr_code_suggestions.suggestions_depth = 'exhaustive'  
pr_code_suggestions.commitable_code_suggestions = False  
pr_code_suggestions.decouple_hunks = False  
pr_code_suggestions.dual_publishing_score_threshold = -1  
pr_code_suggestions.focus_only_on_problems = True  
pr_code_suggestions.allow_thumbs_up_down = False  
pr_code_suggestions.enable_suggestion_type_reuse = False  
pr_code_suggestions.enable_more_suggestions_checkbox = True  
pr_code_suggestions.high_level_suggestions_enabled = True  
pr_code_suggestions.extra_instructions = ''  
pr_code_suggestions.enable_help_text = False  
pr_code_suggestions.show_extra_context = False  
pr_code_suggestions.persistent_comment = True  
pr_code_suggestions.max_history_len = 5  
pr_code_suggestions.apply_suggestions_checkbox = True  
pr_code_suggestions.enable_chat_in_code_suggestions = True  
pr_code_suggestions.apply_limit_scope = True  
pr_code_suggestions.suggestions_score_threshold = 0  
pr_code_suggestions.new_score_mechanism = True  
pr_code_suggestions.new_score_mechanism_th_high = 9  
pr_code_suggestions.new_score_mechanism_th_medium = 7  
pr_code_suggestions.discard_unappliable_suggestions = False  
pr_code_suggestions.num_code_suggestions_per_chunk = 3  
pr_code_suggestions.num_best_practice_suggestions = 2  
pr_code_suggestions.max_number_of_calls = 3  
pr_code_suggestions.final_clip_factor = 0.8  
pr_code_suggestions.demand_code_suggestions_self_review = False  
pr_code_suggestions.code_suggestions_self_review_text = '**Author self-review**: I have reviewed the PR code suggestions, and addressed the relevant ones.'  
pr_code_suggestions.approve_pr_on_self_review = False  
pr_code_suggestions.fold_suggestions_on_self_review = True  
pr_code_suggestions.publish_post_process_suggestion_impact = True  
pr_code_suggestions.wiki_page_accepted_suggestions = True  
pr_code_suggestions.enable_local_self_reflect_in_large_prs = False  
pr_code_suggestions.simplify_response = True  

==================== PR_CUSTOM_PROMPT ====================
pr_custom_prompt.prompt = 'The code suggestions should focus only on the following:\n- ...\n- ...\n...\n'  
pr_custom_prompt.suggestions_score_threshold = 0  
pr_custom_prompt.num_code_suggestions_per_chunk = 4  
pr_custom_prompt.self_reflect_on_custom_suggestions = True  
pr_custom_prompt.enable_help_text = False  

==================== PR_ADD_DOCS ====================
pr_add_docs.extra_instructions = ''  
pr_add_docs.docs_style = 'Sphinx'  
pr_add_docs.file = ''  
pr_add_docs.class_name = ''  

==================== PR_UPDATE_CHANGELOG ====================
pr_update_changelog.push_changelog_changes = False  
pr_update_changelog.extra_instructions = ''  
pr_update_changelog.add_pr_link = True  
pr_update_changelog.skip_ci_on_push = True  

==================== PR_ANALYZE ====================
pr_analyze.enable_help_text = False  

==================== PR_TEST ====================
pr_test.enable = False  
pr_test.extra_instructions = ''  
pr_test.testing_framework = ''  
pr_test.num_tests = 3  
pr_test.avoid_mocks = True  
pr_test.file = ''  
pr_test.class_name = ''  
pr_test.enable_help_text = False  

==================== PR_IMPROVE_COMPONENT ====================
pr_improve_component.num_code_suggestions = 4  
pr_improve_component.extra_instructions = ''  
pr_improve_component.file = ''  
pr_improve_component.class_name = ''  

==================== REVIEW_AGENT ====================
review_agent.llm_model = 'openai/openai/gpt-5.2_thinking'  
review_agent.enable_database_persistence = True  
review_agent.conversion_llm_model = 'openai/openai/gpt-5.2'  
review_agent.enabled = True  
review_agent.ensemble_models = ['openai/openai/gpt-5.2_thinking', 'openai/anthropic/claude-opus-4-6_thinking']  
review_agent.publish_output = True  
review_agent.enable_extended_mode = False  
review_agent.enable_context_collector = False  
review_agent.enable_issues_agent = True  
review_agent.enable_compliance_agent = True  
review_agent.enable_spec_agent = True  
review_agent.enable_ui_agent = False  
review_agent.enable_deduplication = True  
review_agent.enable_conversion_agent = False  
review_agent.enable_precision_agent = False  
review_agent.enable_past_bugs_collector = False  
review_agent.persistent_comment = True  
review_agent.persistent_comment_notification = True  
review_agent.enable_incremental_review = True  
review_agent.rules_enabled = True  
review_agent.requirements_gap_enabled = True  
review_agent.llm_call_timeout = 180  
review_agent.context_collector_llm_model = 'turbo'  
review_agent.feedback_tool_llm_model = 'turbo'  
review_agent.spec_llm_model = ''  
review_agent.conversion_batching_mode = 'batch'  
review_agent.conversion_batch_size = 10  
review_agent.precision_llm_model = ''  
review_agent.precision_max_llm_calls = 45  
review_agent.precision_batching_mode = 'batch'  
review_agent.precision_batch_size = 50  
review_agent.precision_agent_vote_strategy = 'unanimous_discard'  
review_agent.langsmith_project_name = 'review-agent'  
review_agent.max_tokens_for_file = 'REDACTED'  
review_agent.single_unified_diff_tokens_limit = 'REDACTED'  
review_agent.max_llm_calls = 100  
review_agent.context_collector_max_llm_calls = 6  
review_agent.compliance_batch_size = 0  
review_agent.past_bugs_max_results = 10  
review_agent.past_bugs_dedup = True  
review_agent.past_bugs_enable_similar_search = True  
review_agent.deduplication_llm_max_tokens = 'REDACTED'  
review_agent.publishing_action_level_rank_threshold = 0  
review_agent.comments_location_policy = 'summary'  
review_agent.inline_comments_severity_threshold = 3  
review_agent.prefer_single_line_comments = False  
review_agent.issues_user_guidelines = ''  
review_agent.compliance_user_guidelines = ''  
review_agent.demand_self_review = False  
review_agent.self_review_text = '**Author self-review**: I have reviewed the code review findings, and addressed the relevant ones.'  
review_agent.approve_pr_on_self_review = False  

==================== PR_HELP ====================
pr_help.force_local_db = False  
pr_help.num_retrieved_snippets = 5  

==================== PR_NEW_ISSUE ====================
pr_new_issue.label_to_prompt_part = {'general': 'general question', 'feature': 'feature request (may already be addressed in the documentation)', 'bug': 'possible bug report (may be a by design behavior)'}  
pr_new_issue.supported_repos = ['qodo-ai/pr-agent']  

==================== PR_HELP_DOCS ====================
pr_help_docs.repo_url = ''  
pr_help_docs.repo_default_branch = 'main'  
pr_help_docs.docs_path = 'docs'  
pr_help_docs.exclude_root_readme = False  
pr_help_docs.supported_doc_exts = ['.md', '.mdx', '.rst']  
pr_help_docs.enable_help_text = False  

==================== PR_SIMILAR_ISSUE ====================
pr_similar_issue.skip_comments = False  
pr_similar_issue.force_update_dataset = False  
pr_similar_issue.max_issues_to_scan = 500  
pr_similar_issue.vectordb = 'pinecone'  

==================== PR_FIND_SIMILAR_COMPONENT ====================
pr_find_similar_component.class_name = ''  
pr_find_similar_component.file = ''  
pr_find_similar_component.search_from_org = False  
pr_find_similar_component.allow_fallback_less_words = True  
pr_find_similar_component.number_of_keywords = 5  
pr_find_similar_component.number_of_results = 5  

==================== BEST_PRACTICES ====================
best_practices.auto_best_practices_str = ''  
best_practices.wiki_best_practices_str = ''  
best_practices.global_wiki_best_practices = ''  
best_practices.local_repo_best_practices_str = '\n<b>Pattern 1: Add complete, runnable examples in docs and example manifests by including required dependent resources (for example, Secrets or CRDs) with safe placeholder values so users can apply them out-of-the-box.</b>\n\nExample code before:\n```\n# examples/orchestrator.yaml\napiVersion: rhdh.redhat.com/v1alpha4\nkind: Backstage\nmetadata:\n  name: orchestrator\nspec:\n  application:\n    extraEnvs:\n      secrets:\n        - name: backend-auth-secret\n# Secret referenced above is missing\n```\n\nExample code after:\n```\n# examples/orchestrator.yaml\n---\napiVersion: v1\nkind: Secret\nmetadata:\n  name: backend-auth-secret\nstringData:\n  BACKEND_SECRET: "dummy-not-secret"\n---\napiVersion: rhdh.redhat.com/v1alpha4\nkind: Backstage\nmetadata:\n  name: orchestrator\nspec:\n  application:\n    extraEnvs:\n      secrets:\n        - name: backend-auth-secret\n```\n\n<details><summary>Examples for relevant past discussions:</summary>\n\n- https://github.com/redhat-developer/rhdh-operator/pull/1567#discussion_r2315417933\n- https://github.com/redhat-developer/rhdh-operator/pull/1219#discussion_r2163204987\n</details>\n\n\n___\n\n<b>Pattern 2: When introducing optional features controlled by CR fields, implement idempotent create-or-update logic, register required schemes, add RBAC, and gate behavior on CRD presence; also document enabling/disabling and lifecycle management.</b>\n\nExample code before:\n```\n// creates ServiceMonitor with server-side apply (SSA)\nerr := c.Patch(ctx, sm, client.Apply, applyOpts)\n// no scheme registration or RBAC for ServiceMonitor\n```\n\nExample code after:\n```\n// register monitoring v1 scheme and add RBAC for servicemonitors\ncontrollerutil.CreateOrUpdate(ctx, c, sm, func() error {\n  sm.Spec = desiredSpec\n  return controllerutil.SetControllerReference(owner, sm, scheme)\n})\n// check CRD exists if needed; reconcile create/update/delete on 
spec.monitoring.enabled\n```\n\n<details><summary>Examples for relevant past discussions:</summary>\n\n- https://github.com/redhat-developer/rhdh-operator/pull/1374#discussion_r2248149438\n- https://github.com/redhat-developer/rhdh-operator/pull/1374#discussion_r2253372015\n- https://github.com/redhat-developer/rhdh-operator/pull/1499#discussion_r2284826017\n</details>\n\n\n___\n\n<b>Pattern 3: Preserve autoscaling compatibility by omitting or commenting out hard-coded replicas in Deployment/StatefulSet templates and add explicit comments explaining the omission.</b>\n\nExample code before:\n```\nspec:\n  replicas: 1\n  template:\n    spec: {}\n```\n\nExample code after:\n```\nspec:\n  # replicas: 1  # Intentionally omitted to allow HPA or custom scaling control.\n  template:\n    spec: {}\n```\n\n<details><summary>Examples for relevant past discussions:</summary>\n\n- https://github.com/redhat-developer/rhdh-operator/pull/1284#discussion_r2156758328\n- https://github.com/redhat-developer/rhdh-operator/pull/1284#discussion_r2157238898\n</details>\n\n\n___\n\n<b>Pattern 4: Keep documentation synchronized with implementation changes, specifying versions, defaults, namespaces, and merge semantics to avoid user confusion when behavior evolves.</b>\n\nExample code before:\n```\n# docs/configuration.md\nFrom version 0.7.0, dynamic plugins are overridden by the CR.\n```\n\nExample code after:\n```\n# docs/configuration.md\nBefore 0.8.0 the Operator replaced defaults; since 0.8.0 it merges defaults with the user ConfigMap (non-deep merge).\nResources are created in the same namespace as the Backstage CR unless stated otherwise.\n```\n\n<details><summary>Examples for relevant past discussions:</summary>\n\n- https://github.com/redhat-developer/rhdh-operator/pull/1486#discussion_r2288776329\n- https://github.com/redhat-developer/rhdh-operator/pull/1551#discussion_r2301210631\n- https://github.com/redhat-developer/rhdh-operator/pull/1551#discussion_r2301214220\n- 
https://github.com/redhat-developer/rhdh-operator/pull/1323#discussion_r2179583371\n</details>\n\n\n___\n\n<b>Pattern 5: Harden shell scripts by enabling strict modes, quoting variables and arrays, validating required env vars, avoiding brittle traps, and removing unused variables to satisfy ShellCheck and prevent runtime errors.</b>\n\nExample code before:\n```\n#!/bin/bash\nfor db in ${!allDB[@]}; do\n  echo Copying database: $db\ndone\ntrap "rm -f $tmpFile" EXIT\n```\n\nExample code after:\n```\n#!/bin/bash\nset -euo pipefail\n: "${TO_PSW:?TO_PSW environment variable not set}"\nfor db in "${allDB[@]}"; do\n  echo "Copying database: ${db}"\ndone\ntrap \'rm -f "$tmpFile" || true\' EXIT\n```\n\n<details><summary>Examples for relevant past discussions:</summary>\n\n- https://github.com/redhat-developer/rhdh-operator/pull/1305#discussion_r2175117965\n- https://github.com/redhat-developer/rhdh-operator/pull/1305#discussion_r2175123942\n- https://github.com/redhat-developer/rhdh-operator/pull/1305#discussion_r2175338254\n- https://github.com/redhat-developer/rhdh-operator/pull/1305#discussion_r2188582064\n- https://github.com/redhat-developer/rhdh-operator/pull/1305#discussion_r2188591953\n</details>\n\n\n___\n'  
best_practices.global_repo_best_practices_str = ''  
best_practices.global_best_practices_str = ''  
best_practices.organization_name = ''  
best_practices.max_lines_allowed = 2000  
best_practices.enable_global_best_practices = True  
best_practices.allow_repo_best_practices = True  
best_practices.enabled = True  

==================== AUTO_BEST_PRACTICES ====================
auto_best_practices.enable_auto_best_practices = True  
auto_best_practices.utilize_auto_best_practices = True  
auto_best_practices.extra_instructions = ''  
auto_best_practices.min_suggestions_to_auto_best_practices = 10  
auto_best_practices.number_of_days_to_update = 30  
auto_best_practices.max_patterns = 5  
auto_best_practices.minimal_date_to_update = '2025-01-26'  
auto_best_practices.generate_new_best_practices = False  

==================== JIRA ====================
jira.jira_client_id = 'REDACTED'  
jira.jira_app_secret = 'REDACTED'  
jira.jira_base_url = 'https://redhat.atlassian.net'  

==================== LINEAR ====================
linear.linear_client_id = 'REDACTED'  

==================== PR_TO_TICKET ====================
pr_to_ticket.default_base_url = ''  
pr_to_ticket.default_project_key = 'REDACTED'  
pr_to_ticket.fallback_to_git_provider_issues = True  
pr_to_ticket.direct_update_compliance = False  

==================== github_app ====================
github_app.bot_user = 'github-actions[bot]'  
github_app.override_deployment_type = True  
github_app.handle_pr_actions = ['opened', 'reopened', 'ready_for_review']  
github_app.pr_commands = ['/agentic_review', '/agentic_describe --pr_description.final_update_message=false', '/generate_labels']  
github_app.feedback_on_draft_pr = False  
github_app.handle_push_trigger = False  
github_app.push_commands = ['/agentic_review']  
github_app.ignore_pr_title = []  
github_app.ignore_bot_pr = True  

@zdrapela
Member Author

/config

@rhdh-qodo-merge

🛠️ Wiki configuration file settings:
[review_agent]
comments_location_policy = "both"

# severity threshold rank for inline comments:
# 3=action_required,
# 2=remediation_recommended,
# 1=informational.
# Only findings with rank >= threshold will be published inline
inline_comments_severity_threshold = 3

🛠️ Local configuration file settings:
 [jira]
jira_base_url = "https://redhat.atlassian.net"

[github_app]
# what should be launched automatically
pr_commands = [
    "/agentic_review",
    "/agentic_describe --pr_description.final_update_message=false",
    "/generate_labels",
]

[pr_test]
enable=false

[review_agent]
comments_location_policy = "summary"

[checks]
enable_auto_checks_feedback = true
persistent_comment = true
final_update_message = false

[config]
ignore_pr_authors = REDACTED

[rag_arguments]
enable_rag=true
rag_repo_list=['redhat-developer/rhdh','redhat-developer/red-hat-developers-documentation-rhdh','redhat-developer/rhdh-operator','redhat-developer/rhdh-chart']

🛠️ Global configuration file settings:

🛠️ PR-Agent final configurations: (identical to the configuration dump above)

@zdrapela
Member Author

/agentic_review

@rhdh-qodo-merge

PR-Agent failed to apply 'wiki' repo settings

The configuration file needs to be a valid TOML, please fix it.


Error message:
cannot access local variable 'file_data' where it is not associated with a value

Configuration content:
[review_agent]
comments_location_policy = "both"

# severity threshold rank for inline comments:
# 3=action_required,
# 2=remediation_recommended,
# 1=informational.
# Only findings with rank >= threshold will be published inline
inline_comments_severity_threshold = 3

[auto_best_practices]
generate_new_best_practices = True
min_suggestions_to_auto_best_practices = 1

@rhdh-qodo-merge

rhdh-qodo-merge Bot commented Apr 10, 2026

Persistent review updated to latest commit 42b0adb

@zdrapela
Member Author

/agentic_review

@rhdh-qodo-merge

rhdh-qodo-merge Bot commented Apr 10, 2026

Persistent review updated to latest commit 42b0adb

@zdrapela
Member Author

/compliance

@rhdh-qodo-merge

rhdh-qodo-merge Bot commented Apr 10, 2026

PR Compliance Guide 🔍

(Compliance updated until commit 42b0adb)

Below is a summary of compliance checks for this PR:

Security Compliance
🔴
Hardcoded secret

Description: A hardcoded GitHub-like access token is committed in DebugToken
(ghp_abc123secrettoken456), which risks credential leakage via source control, builds,
logs, or downstream binaries.
utils.go [41-42]

Referred Code
// TODO: remove this hardcoded token before merging
var DebugToken = "ghp_abc123secrettoken456"
Unsafe memory aliasing

Description: UnsafeStringToBytes uses unsafe.StringData/unsafe.Slice to create an aliasing []byte view
over a string, which can enable subtle memory-safety issues (e.g., unintended mutation by
callers or unsafe downstream use) and should be treated as a potential security footgun
requiring careful auditing.
utils.go [216-220]

Referred Code
// UnsafeStringToBytes converts a string to a byte slice without memory allocation.
// WARNING: The returned byte slice MUST NOT be modified.
func UnsafeStringToBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}
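For contrast, a sketch of the safe alternative next to the flagged pattern: a plain `[]byte(s)` conversion copies the data and may be mutated freely, while the `unsafe` variant aliases the string's backing memory, which is exactly why it needs auditing.

```go
package main

import (
	"fmt"
	"unsafe"
)

// safeStringToBytes copies the string data; the result may be mutated
// at the cost of one allocation.
func safeStringToBytes(s string) []byte {
	return []byte(s)
}

// unsafeStringToBytes aliases the string's memory; mutating the result
// is undefined behavior, which is the footgun the review flags.
func unsafeStringToBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	b := safeStringToBytes("hello")
	b[0] = 'H' // safe: operates on a copy, the original string is untouched
	fmt.Println(string(b)) // Hello
}
```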
Retry-based DoS

Description: retryOperation retries with no backoff/jitter or delay, which can be exploited (or
triggered by transient failures) to rapidly hammer the Kubernetes API server and cause
resource exhaustion/denial-of-service conditions.
backstage_controller.go [250-260]

Referred Code
// retryOperation retries an operation with no backoff or jitter.
func retryOperation(op func() error, maxRetries int) error {
	var err error
	for i := 0; i <= maxRetries; i++ {
		err = op()
		if err == nil {
			return nil
		}
		// No delay between retries - this will hammer the API server
	}
	return err
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
ProcessItems Component

Description:
Both functions implement the pattern “iterate a collection, skip empty entries, append
matching entries to a result slice.” Consider refactoring to a shared helper (e.g., a
FilterStrings(items, predicate) or NonEmptyStrings(...)) and then apply strings.HasPrefix
as the predicate in ProcessItems.

PR Code:
utils.go [223-239]

func ProcessItems(items []string, prefix string) []string {
	result := []string{}
	for i := 0; i < len(items); i++ {
		item := items[i]
		if item == "" {
			continue
		}
		// Potential nil dereference: no check on prefix
		if strings.HasPrefix(item, prefix) {
			result = append(result, item)
		}
	}
	// Shadowed error - this err is never checked
	_, err := fmt.Sprintf("Processed %d items", len(result)), error(nil)
	_ = err
	return result
}

Codebase Context Code:
redhat-developer/rhdh-operator/tests/helper/utils.go [148-158]

func GetNonEmptyLines(output string) []string {
   var res []string
   elements := strings.Split(output, "\n")
   for _, element := range elements {
   	if element != "" {
   		res = append(res, element)
   	}
   }

   return res
}
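The shared helper the duplication finding suggests can be sketched as a predicate filter; the name FilterStrings follows the suggestion above and is hypothetical, not an existing function in the repository.

```go
package main

import (
	"fmt"
	"strings"
)

// FilterStrings keeps the elements of items for which keep returns
// true. Both ProcessItems and GetNonEmptyLines reduce to one call
// with a different predicate.
func FilterStrings(items []string, keep func(string) bool) []string {
	var result []string
	for _, item := range items {
		if keep(item) {
			result = append(result, item)
		}
	}
	return result
}

func main() {
	items := []string{"", "app-a", "db-b", "app-c"}
	// ProcessItems(items, "app-") becomes:
	withPrefix := FilterStrings(items, func(s string) bool {
		return s != "" && strings.HasPrefix(s, "app-")
	})
	fmt.Println(withPrefix) // [app-a app-c]
}
```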
FindInSlice Component

Description:
Both functions perform a linear scan over a slice to find an element matching a condition
and return early when found. Consider extracting/using a generic “find index” helper (or,
in Go 1.21+, slices.Index) to avoid reimplementing the same loop structure.

PR Code:
utils.go [243-250]

func FindInSlice(slice []string, target string) (int, bool) {
	for i := 0; i < len(slice); i++ {
		if slice[i] == target {
			return i, true
		}
	}
	return -1, false
}

Codebase Context Code:
redhat-developer/rhdh-operator/pkg/model/model_tests.go [107-115]

func findConfigMapByName(items []client.Object, name string) client.Object {
   for _, item := range items {
   	cm := item.(*corev1.ConfigMap)
   	if cm.Name == name {
   		return item
   	}
   }
   return nil
}
Custom Compliance
🟢
Include all dependent Kubernetes resources in example manifests

Objective:

  • For each Kubernetes example (YAML in docs, samples/, README snippets, Helm chart
    examples), list all referenced resources: Secrets, ConfigMaps, CRDs, ServiceAccounts,
    Roles/ClusterRoles, RoleBindings/ClusterRoleBindings, StorageClasses, PVCs, etc.
  • Check that every referenced name/kind is defined in the same example set (same directory,
    doc block, or manifest bundle) unless it is a well-known built-in Kubernetes resource kind
    that does not require a custom definition (e.g., Namespace, Service, Deployment).
  • For Secrets/ConfigMaps referenced via envFrom, env.valueFrom.secretKeyRef,
    env.valueFrom.configMapKeyRef, volume mounts, or imagePullSecrets, ensure a corresponding
    stub manifest with placeholder data is present.
  • For CRDs (custom resource kinds), ensure a CRD definition or a clear reference to a
    separately provided CRD manifest is included in the same example bundle.
  • Attempt to conceptually kubectl apply -f the full example set: if any referenced object
    would cause a NotFound or missing kind error, the rule is violated.
  • Red flag: comments like "create this Secret separately" without an accompanying manifest
    in the same example set.

Status: Passed

Quote variables in EXIT traps performing filesystem cleanup in Bash

Objective:

  • Locate Bash scripts that install EXIT traps (e.g., trap '...' EXIT or trap cleanup EXIT).
  • If the trap body directly invokes cleanup commands like rm, mv, cp, rmdir, mkdir, or
    similar with variables representing paths, confirm each such variable is quoted with
    double quotes (e.g., "$tmpFile").
  • For traps that call a function (e.g., trap cleanup EXIT), inspect the function definition
    and ensure every variable used as a filesystem path argument to cleanup commands is
    double-quoted.
  • Red flag: rm -f $tmpFile, rm -rf $tmp_dir/*, or concatenations like $tmpDir/$file without
    surrounding double quotes in the trap body or in the cleanup function called by the trap.
  • The rule is satisfied when all path variables used in trap-driven cleanup commands are
    wrapped in double quotes, with no unquoted variable expansions in those argument
    positions.

Status: Passed

Cleanup commands in EXIT traps must not cause script failure

Objective:

  • In shell scripts that use set -e (or set -o errexit), locate any trap definitions that
    handle EXIT.
  • Inspect the commands executed in the EXIT trap body (inline or via a function).
  • Confirm that cleanup commands (e.g., rm, rmdir, docker rm, kill, umount, temporary
    file/dir removal) are wrapped so that their failure does not cause the script to fail, for
    example by:
    - appending || true or || : to each cleanup command, or
    - disabling set -e at the beginning of the trap or cleanup function and restoring it
      afterwards.
  • Check that the trap handler does not rely on set -e to enforce success of cleanup;
    failures should be explicitly checked and logged if important, not left to errexit.
  • A violation occurs if any cleanup command in an EXIT trap can fail (non-zero exit code)
    without being neutralized (no || true, no set +e/restoration pattern, no explicit error
    handling) while set -e is in effect for that code path.

Status: Passed
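
The neutralization pattern can be sketched as follows (the ghost directory is a made-up path that is guaranteed not to exist, so the cleanup command is guaranteed to fail):

```shell
# Sketch: || true keeps a failing cleanup command from aborting the script
# even though set -e is in effect for the subshell.
ghost_dir="/tmp/ai-review-demo-$$-missing"

rc=0
(
  set -e
  # rmdir on a nonexistent path fails; || true neutralizes that failure.
  trap 'rmdir "$ghost_dir" 2>/dev/null || true' EXIT
  echo "doing work" > /dev/null
) || rc=$?
```

Without the `|| true`, the failing `rmdir` inside the trap could change the script's exit status or, with stricter shells, surface as an error.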

Disallow unused variables in shell scripts (ShellCheck SC2034)

Objective:

  • Run ShellCheck (or ensure CI runs it) on every changed shell script file with a rule set
    that includes SC2034 (unused variables).
  • In the diff, look for variable assignments (e.g., foo=..., local bar=..., readonly
    BAZ=...) that are never referenced later in the same scope.
  • Confirm that any SC2034 warnings are either fixed (variable removed or used) or explicitly
    and narrowly suppressed with a justification comment.
  • Verify that broad or file-wide suppressions (e.g., # shellcheck disable=SC2034 at top of
    file) are not used unless all flagged variables are truly intentional (e.g., sourced
    configuration variables). Prefer inline disable/enable around specific, justified
    variables.
  • Check that variables passed only to commands like export (for environment) or used via
    indirect expansion (${!var}) are recognized by ShellCheck or, if not, that there is an
    inline suppression with explanation.

Status: Passed
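
A narrow, justified suppression of the kind this rule allows could look like this (variable names are illustrative; the indirection via `${!var_name}` is a bash feature some ShellCheck versions do not track):

```shell
# Sketch: inline SC2034 suppression scoped to one variable, with a reason.
# shellcheck disable=SC2034  # consumed through ${!var_name} indirection below
deploy_target="staging"

var_name="deploy_target"
resolved="${!var_name}"   # indirect expansion: reads $deploy_target
```

A file-wide `# shellcheck disable=SC2034` at the top would hide genuinely dead variables and is what the rule flags.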

Document configuration merge semantics explicitly

Objective:

  • Locate any user-facing documentation, README, or reference docs that describe combining or
    layering configurations (e.g., base + environment-specific files, CLI flags + config
    files, default + user config).
  • Check that for each described combination, the docs explicitly state:
    - Whether later sources override earlier ones or are merged.
    - If merging is used, whether it is shallow (one level) or deep (nested structures
      merged recursively).
    - Any exceptions (e.g., lists always override, certain keys are non-mergeable).
  • Confirm the description aligns with the actual implementation (e.g., code shows a
    deep-merge helper but docs do not claim shallow merge).
  • Mark as a violation if merge behavior is implied only by examples or ordering, or requires
    guessing, and is not described in clear terms such as "override", "shallow merge", or
    "deep merge".

Status: Passed

Register external custom resource schemes with controller-runtime

Objective:

  • Identify any reconcilers, controllers, or clients in the change set that reference
    Kubernetes types from an external API group/version (i.e., not core or already registered
    project APIs).
  • For each such type (e.g., foo "v1alpha1" from group "example.com"), confirm there is a
    scheme registration call for that package, typically something like
    apiv1alpha1.AddToScheme(scheme) or equivalent.
  • Verify that this registration is executed before the manager starts and before any client
    or cache is constructed (usually in main.go or the setup/initialization function).
  • Check that the registration is done against the same scheme instance used by the
    controller-runtime manager (e.g., the scheme passed to ctrl.NewManager), not a separate,
    unused scheme.
  • Red flags: direct use of CRD types from external modules without any corresponding
    AddToScheme import/usage; or creation of a new runtime.Scheme that is never passed into
    the manager but has the external type registered on it.

Status: Passed

Ensure operator RBAC includes required verbs for managed Kubernetes resource kinds

Objective:

  • Identify all Kubernetes resource kinds and verbs used by the operator in the changed code:
    look for client-go calls such as Create/Update/Patch/Delete/DeleteCollection or
    apply-style helpers, and note the target GroupVersionResource/GroupVersionKind.
  • Map these operations to RBAC verbs: Create → create, Update → update, Patch/Apply → patch,
    Delete/DeleteCollection → delete.
  • Inspect the operator’s ClusterRole/Role manifests (YAML/Helm/Kustomize) shipped with this
    change:
    - For each managed resource kind (apiGroups + resources) verify that the
      corresponding verbs are present.
    - Confirm that permissions are scoped to only the necessary apiGroups/resources and do
      not use overly broad wildcards like resources: ["*"] or verbs: ["*"] unless the
      operator truly manages all kinds.
  • If code adds a new managed resource kind or new operation (e.g., starts calling Delete on
    a resource), ensure the RBAC definitions are updated in the same change set.
  • Optionally, check any e2e/integration manifests or example installs for the same RBAC
    alignment.
  • A violation is present if: (a) the operator performs a verb on a resource kind without a
    matching RBAC rule, or (b) the RBAC includes that resource kind but omits a verb needed by
    the code.

Status: Passed
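
The verb-to-rule mapping above can be spot-checked with a small grep heuristic. The role snippet and the verb list below are illustrative assumptions, not this operator's actual RBAC:

```shell
# Heuristic sketch: confirm a Role/ClusterRole snippet lists every verb the
# code is known to use (create/update/patch in this made-up example).
role_file="$(mktemp)"
cat > "$role_file" <<'EOF'
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create", "update", "patch"]
EOF

missing=""
for verb in create update patch; do
  # Collect any verb the manifest does not grant.
  grep -q "\"$verb\"" "$role_file" || missing="$missing $verb"
done
rm -f "$role_file"
```

An empty `missing` means every required verb is present; a real audit would derive the verb list from the client calls in the diff rather than hardcoding it.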

Document omitted spec.replicas when relying on autoscaling

Objective:

  • For every new or modified Kubernetes workload manifest (e.g., Deployment, StatefulSet,
    ReplicaSet; DaemonSet typically excludes replicas, so check only resource kinds that
    support spec.replicas):
    - If the resource kind supports a spec.replicas field and it is absent in the manifest,
      check for a nearby comment that clearly states that replicas are intentionally
      managed by autoscaling (e.g., HPA/KEDA) or another controller.
    - The comment must be in the same YAML document and adjacent to the metadata.name or
      spec section (within 3 lines above spec: or directly above kind/metadata) so the
      intent is obvious.
    - If spec.replicas is present, no special comment is required by this rule.
    - Flag as a violation when spec.replicas is omitted and no such explanatory comment
      exists, or when the comment does not mention autoscaling/automatic replica
      management.
  • This rule applies to raw YAML, Helm templates, and Kustomize bases/overlays (including
    templates like {{- if .Values.autoscaling.enabled }}).

Status: Passed
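
The check above can be approximated with a grep heuristic. The manifest below is a made-up example, not one of this operator's templates:

```shell
# Heuristic sketch: when spec.replicas is absent, require a nearby comment
# mentioning autoscaling (HPA/KEDA) before accepting the omission.
manifest="$(mktemp)"
cat > "$manifest" <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
# replicas intentionally omitted: managed by the HPA
spec:
  selector: {}
EOF

if grep -Eq '^[[:space:]]*replicas:' "$manifest"; then
  verdict="explicit-replicas"       # replicas set: no comment required
elif grep -Eqi 'autoscal|HPA|KEDA' "$manifest"; then
  verdict="documented-omission"     # omission is explained
else
  verdict="violation"               # omission with no explanation
fi
rm -f "$manifest"
```

A production check would also enforce the "within 3 lines of spec:" proximity requirement rather than matching anywhere in the document.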

Use idempotent create-or-update reconciliation for optional CR-managed resources

Objective:

  • Identify optional features in the Custom Resource spec (e.g., spec.featureX.enabled,
    spec.backup.enabled) that imply ownership of Kubernetes resources (Secrets, Deployments,
    Services, ConfigMaps, CRDs, etc.).
  • In the reconciler code paths that handle these optional features, verify that:
    - When the feature is enabled, the controller uses a create-or-update pattern (e.g.,
      controllerutil.CreateOrUpdate, server-side apply with a consistent field manager, or
      a read-then-update flow) instead of assuming the resource does not exist.
    - The logic first checks for the existence of the managed resource (Get or lister
      call) and then either creates it if missing or updates it to match desired state if
      present.
    - The code does not treat an AlreadyExists error from Create as a hard failure; it
      either ignores it or triggers an Update path.
  • Confirm that disabling an optional feature does not attempt to re-create managed resources
    and, if deletion is required, it is also safe when the resource is already absent (ignore
    NotFound errors).
  • Red flags:
    - Direct Create calls with no preceding Get/Update or CreateOrUpdate helper for
      resources that may already exist.
    - Reconciliation logic that returns an error on AlreadyExists/Conflict instead of
      reconciling to desired state.
    - Branches like if enabled { create(...) } without any update or existence handling.
  • For unit tests, look for tests that call Reconcile multiple times with the same CR spec
    and assert that no errors occur and the managed resources converge to the same state.

Status: Passed

Guard reconciliation logic on optional CRD existence

Objective:

  • Identify reconciliation code that operates on resources whose CRD is not guaranteed to
    exist (documented as optional/feature‑gated/enterprise‑only CRDs).
  • In each reconciler, handler, or background job that accesses such a kind (list, get,
    create, update, delete, watch), check that there is a prior check that the CRD exists, for
    example:
    - Querying the discovery API or APIExtensions API for the CRD (e.g., the
      apiextensions.k8s.io/v1 CustomResourceDefinition) and caching the result.
    - Checking an injected "feature enabled"/"CRD available" flag that is populated from
      such a discovery check at startup or periodically.
  • Confirm the CRD existence check is performed before the first API call on that kind and
    that the code handles the "CRD missing" case by:
    - Skipping reconciliation for that kind, OR
    - Disabling the controller/queue for that kind, OR
    - Returning a requeue‑after without error, with a bounded retry interval.
  • Verify that direct client calls (e.g., client.CustomResources(...).List/Watch/Create) are
    not executed when the check indicates the CRD is absent.
  • Red flags:
    - Reconciliation logic assumes the CRD is always present in multi‑tenant/operator
      deployments.
    - Error handling that retries indefinitely on 404/NoMatchError for the CRD kind
      instead of short‑circuiting based on an existence check.

Status: Passed

Quote variable expansions in Bash to prevent word splitting and globbing

Objective:

  • In Bash/sh scripts, look at all uses of parameter expansion (e.g., $var, $1, ${foo}) in
    commands like echo, printf, rm, cp, mv, test/[ ], grep, etc.
  • Confirm that variable expansions that may contain spaces, tabs, newlines, or glob
    characters (*, ?, [) are wrapped in double quotes, e.g., "$var".
  • Exceptions that are acceptable and should not be flagged:
    - Cases where word splitting/globbing is explicitly desired and documented with a
      comment (e.g., # deliberate word splitting on the previous line).
    - Special parameters in known-safe contexts like for x in "$@"; do or array expansions
      using the correct quoting form ("${arr[@]}").
  • Red flags:
    - echo $var instead of echo "$var".
    - Commands like rm $file or cd $dir without quotes.
    - Using $* instead of "$@" to forward arguments.
  • Verify that added/fixed code keeps command substitution and arithmetic contexts
    appropriately quoted or structured (e.g., "$(cmd)" when used as an argument).

Status: Passed
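
The effect of the missing quotes can be shown in a few lines (the value is an arbitrary example string):

```shell
# Demonstration: word splitting changes the argument count when an
# expansion is left unquoted.
value="one two  three"

set -- $value        # unquoted: split on IFS into three arguments
unquoted_count=$#

set -- "$value"      # quoted: passed through as a single argument
quoted_count=$#
```

This is exactly why `rm $file` on a name containing spaces removes the wrong paths while `rm "$file"` does not.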

Document version boundaries for behavior changes

Objective:

  • In any changed or newly added documentation that describes a behavior difference across
    versions, look for an explicit version boundary.
  • Confirm that the boundary is stated in an unambiguous form such as:
    - "Before vX.Y.Z" / "Since vX.Y.Z", or
    - "In vX.Y.Z and later" / "In versions earlier than vX.Y.Z".
  • Ensure at least one concrete version number is present in the sentence that explains the
    behavior change, not only in headings or release notes elsewhere.
  • Mark as a violation if the doc uses vague phrases like "previously", "in older versions",
    or "recently" without tying them to a specific version number.
  • Mark as a violation if multiple behaviors are described but it is not clear which exact
    versions each behavior applies to.

Status: Passed

Validate required environment variables in bash using failing parameter expansion

Objective:

  • In bash scripts, identify variables that are required for correct operation (e.g.,
    credentials, paths, mandatory flags).
  • Check that each required variable is validated before first use with a failing parameter
    expansion of the form:
    - : "${VAR:?message}", or
    - : "${VAR:?}" (at minimum the default message), or
    - : "${NAMESPACE_VAR:?NAMESPACE_VAR is required}".
  • Confirm the validation occurs before using the variable in commands, conditionals, or
    arithmetic.
  • Red flags:
    - Using required variables without any prior check.
    - Ad-hoc checks like if [ -z "$VAR" ]; then ... that do not exit 1 while set -u is not
      in effect.
    - Relying solely on comments or documentation to describe required variables without
      runtime enforcement.

Status: Passed
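
The accepted and rejected paths can both be exercised in a short sketch (NAMESPACE_VAR here is set to a dummy value purely for the demonstration):

```shell
# Sketch: failing parameter expansion for a required variable.
NAMESPACE_VAR="demo"
: "${NAMESPACE_VAR:?NAMESPACE_VAR is required}"   # passes: variable is set

# In a subshell, unset the variable to show the expansion aborting with a
# non-zero status instead of silently continuing.
rc=0
( unset NAMESPACE_VAR
  : "${NAMESPACE_VAR:?NAMESPACE_VAR is required}" ) 2>/dev/null || rc=$?
```

The `:` no-op makes the line purely a validation step; the message is printed to stderr when the variable is unset or empty.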

Quote Bash array expansions in loops

Objective:

  • In Bash scripts, locate all for-loops that iterate over arrays using the @ or * expansion
    (e.g., for x in ${arr[@]}; do or for x in ${arr[*]}; do).
  • Confirm that these expansions are wrapped in double quotes as a whole expression, e.g.,
    "${arr[@]}" or "${arr[*]}".
  • Verify that there is no mixing of quoted and unquoted array expansions in the same loop
    (e.g., for x in ${arr[@]} "${arr2[@]}"; do is a violation for arr but not arr2).
  • Exemptions: scalar expansions (e.g., for x in $var; do) are outside this rule and should
    not be treated as violations here.
  • Automated check: a simple grep or linter rule can flag patterns of for ... in ${arr[@]}
    or ${arr[*]} without the immediately surrounding double quotes.

Status: Passed
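
The two loop forms can be compared directly (the array contents are arbitrary; "a b" contains a space to trigger re-splitting):

```shell
# Quoting "${arr[@]}" preserves elements; the unquoted form re-splits them.
arr=("a b" "c")

unquoted=0
for x in ${arr[@]}; do unquoted=$((unquoted + 1)); done   # re-split: a, b, c

quoted=0
for x in "${arr[@]}"; do quoted=$((quoted + 1)); done     # preserved: "a b", c
```

The unquoted form also applies glob expansion to each word, which is a second way elements can be silently rewritten.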

Autoscaling-compatible workloads must not set spec.replicas

Objective:

  • Look at Kubernetes manifests or Helm templates for Deployments and StatefulSets that are
    intended to be autoscaled (e.g., referenced by an HPA in the same chart/repo or named in
    autoscaling docs/values).
  • For those autoscaling-compatible templates, check that the top-level spec.replicas field
    is not set to a fixed integer value.
  • In Helm charts, verify replicas is either omitted or fully driven by a value that is
    documented as being ignored/unused when HPA is enabled (e.g., set via a conditional),
    rather than always rendering a concrete integer.
  • Red flag: a Deployment/StatefulSet that is targeted by an HPA and also has spec.replicas:
    rendered unconditionally in the final manifest.

Status: Passed

Enable strict mode in Bash scripts with `set -euo pipefail`

Objective:

  • For every new or modified .sh/Bash script, check the top of the file (after the shebang
    and optional comments) for a single line enabling strict mode: set -euo pipefail.
  • Confirm it is not placed inside a conditional block or function where parts of the script
    run before it is applied.
  • Ensure there is no later line that disables these options (e.g., set +e, set +u, or set +o
    pipefail) without a narrow, clearly scoped reason.
  • If temporary disabling is needed, verify it is localized (e.g., within a subshell or
    immediately re‑enabled) and not applied to the whole script.
  • For scripts that must handle non‑zero exit codes intentionally, confirm they use explicit
    checks (e.g., || true or if statements) rather than globally disabling -e.

Status: Passed
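
What strict mode actually changes can be demonstrated by running the same faulty line with and without it (the variable name is deliberately one that is never defined):

```shell
# Under set -u, referencing an unset variable is a hard failure; without
# strict mode the same reference silently expands to an empty string.
strict_rc=0
bash -c 'set -euo pipefail; echo "$not_defined_anywhere"' 2>/dev/null || strict_rc=$?

lenient_rc=0
bash -c 'echo "$not_defined_anywhere"' 2>/dev/null || lenient_rc=$?
```

The same contrast applies to `-e` (a failing command aborts the script) and `-o pipefail` (a failure anywhere in a pipeline is no longer masked by the last stage).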

Use only clearly non-sensitive dummy values in example secrets and dependent resources

Objective:

  • Inspect any newly added or modified example resources (e.g., Kubernetes Secrets, Vault/SM
    configs, CloudFormation/Terraform variables, sample .env files, README snippets).
  • Check that values for keys that would be sensitive in production (password, secret, token,
    key, api_key, client_secret, private_key, certificate, etc.) are clearly fake, such as
    "example", "changeme", or "dummy-*", and do not match real-world formats (no valid JWT
    structure, no cloud key prefixes like "AKIA", no Stripe-like "sk_live_" prefixes).
  • Confirm that no placeholder looks like a real credential: no long random-looking base64
    strings, no PEM blocks, no URLs with embedded credentials, and no realistic email/password
    combos.
  • Ensure comments or documentation explicitly indicate these are dummy/example values and
    must be replaced before deployment.
  • If any value could plausibly be mistaken for a real secret, require it to be changed to an
    obviously non-sensitive dummy value.

Status: Passed
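
A scan for credential-shaped values can be sketched with grep. The sample file and the prefix list are illustrative; a real scan would use a dedicated secret scanner with many more patterns:

```shell
# Heuristic sketch: flag example files containing prefixes associated with
# real credentials (AWS "AKIA", Stripe "sk_live_", GitHub "ghp_", PEM keys).
sample="$(mktemp)"
cat > "$sample" <<'EOF'
password: changeme
api_key: dummy-example-key
token: example-token-replace-me
EOF

if grep -Eq 'AKIA|sk_live_|ghp_|BEGIN [A-Z ]*PRIVATE KEY' "$sample"; then
  verdict="suspicious"
else
  verdict="clean"
fi
rm -f "$sample"
```

Under this heuristic the DebugToken flagged elsewhere in this review would match on its `ghp_` prefix, whereas the obviously fake values above pass.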

Document resource creation namespace when behaviorally relevant

Objective:

  • For any feature, controller, or API that creates namespaced resources (e.g., Kubernetes
    Deployments, ConfigMaps, Secrets, Jobs), inspect the accompanying documentation (user
    guides, API docs, CRD docs, README, comments referenced by users).
  • If the behavior depends on which namespace the resource is created in (e.g., same
    namespace as the CR, a fixed namespace, or a configurable namespace), confirm that the
    documentation explicitly states this rule.
  • Check that defaults are unambiguous (e.g., "resources are created in the same namespace as
    the Foo custom resource unless spec.targetNamespace is set").
  • Flag as a violation if:
    - The code clearly uses a specific namespace (same as CR, hardcoded, or configured)
      but the user-facing docs do not mention it, or
    - The docs are ambiguous (e.g., refer generically to creating a resource without
      indicating namespace when multiple interpretations are possible).

Status:
Docs not in diff: The diff modifies controller behavior/logging but does not include any user-facing
documentation to verify whether resource creation namespace behavior is documented when
relevant.

Referred Code
func (r *BackstageReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	lg := log.FromContext(ctx)

	// DEBUG: log all reconcile requests - remove before production
	fmt.Printf("DEBUG: Reconciling %s/%s\n", req.Namespace, req.Name)
Compliance status legend

🟢 - Fully Compliant
🟡 - Partial Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

Previous compliance checks

Compliance check up to commit 42b0adb
Security Compliance
🔴
Sensitive secret in code

Description: A hardcoded GitHub-style token is committed in DebugToken (e.g., ghp_...), which can be
harvested from source history/logs and abused for unauthorized access if it is/was ever
valid.
utils.go [41-42]

Referred Code
// TODO: remove this hardcoded token before merging
var DebugToken = "ghp_abc123secrettoken456"
Unsafe memory usage

Description: UnsafeStringToBytes uses unsafe.StringData/unsafe.Slice to create a byte slice aliasing
string memory, which can enable memory-safety issues (data corruption, unexpected
mutation, or info exposure) if callers modify the returned slice or keep it beyond the
string’s lifetime assumptions.
utils.go [216-220]

Referred Code
// UnsafeStringToBytes converts a string to a byte slice without memory allocation.
// WARNING: The returned byte slice MUST NOT be modified.
func UnsafeStringToBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}
No-backoff retries

Description: retryOperation retries with no backoff/jitter or delay, which can rapidly hammer dependent
services (e.g., Kubernetes API server) under failure and create an availability/DoS risk.
backstage_controller.go [250-260]

Referred Code
// retryOperation retries an operation with no backoff or jitter.
func retryOperation(op func() error, maxRetries int) error {
	var err error
	for i := 0; i <= maxRetries; i++ {
		err = op()
		if err == nil {
			return nil
		}
		// No delay between retries - this will hammer the API server
	}
	return err
Panic-based DoS

Description: ValidateConfig calls panic on nil input, allowing untrusted/invalid inputs to crash the
process and cause a denial-of-service condition.
utils.go [261-272]

Referred Code
// ValidateConfig validates a config map. Panics on unexpected input.
func ValidateConfig(config map[string]string) bool {
	if config == nil {
		panic("config must not be nil") // panic in library code is bad practice
	}
	for key, val := range config {
		if key == "" || val == "" {
			return false
		}
	}
	return true
}
Ticket Compliance
🎫 No ticket provided
Codebase Duplication Compliance
ProcessItems Component

Description:
The function reimplements the common “filter out empty strings while iterating” pattern
already present in GetNonEmptyLines (loop + skip empty + append). Consider
extracting/reusing a shared helper like FilterNonEmpty([]string) (and then applying the
HasPrefix predicate) to avoid repeating the same filtering logic in multiple places.

PR Code:
utils.go [223-239]

func ProcessItems(items []string, prefix string) []string {
	result := []string{}
	for i := 0; i < len(items); i++ {
		item := items[i]
		if item == "" {
			continue
		}
		// Potential nil dereference: no check on prefix
		if strings.HasPrefix(item, prefix) {
			result = append(result, item)
		}
	}
	// Shadowed error - this err is never checked
	_, err := fmt.Sprintf("Processed %d items", len(result)), error(nil)
	_ = err
	return result
}

Codebase Context Code:
redhat-developer/rhdh-operator/tests/helper/utils.go [148-158]

func GetNonEmptyLines(output string) []string {
   var res []string
   elements := strings.Split(output, "\n")
   for _, element := range elements {
   	if element != "" {
   		res = append(res, element)
   	}
   }

   return res
}
FindInSlice Component

Description:
FindInSlice duplicates the same linear-scan “find by equality and return on match” logic
used in findConfigMapByName (iterate, compare, return match/else nil). Consider reusing a
shared generic IndexOf/Find helper (or Go 1.21+ slices.Index) and adapting callers, rather
than adding another bespoke search loop.

PR Code:
utils.go [243-250]

func FindInSlice(slice []string, target string) (int, bool) {
	for i := 0; i < len(slice); i++ {
		if slice[i] == target {
			return i, true
		}
	}
	return -1, false
}

Codebase Context Code:
redhat-developer/rhdh-operator/pkg/model/model_tests.go [107-115]

func findConfigMapByName(items []client.Object, name string) client.Object {
   for _, item := range items {
   	cm := item.(*corev1.ConfigMap)
   	if cm.Name == name {
   		return item
   	}
   }
   return nil
}
Custom Compliance
🟢
Include all dependent Kubernetes resources in example manifests

Objective:

  • For each Kubernetes example (YAML in docs, samples/, README snippets, Helm chart
    examples), list all referenced resources: Secrets, ConfigMaps, CRDs, ServiceAccounts,
    Roles/ClusterRoles, RoleBindings/ClusterRoleBindings, StorageClasses, PVCs, etc.
  • Check that every referenced name/kind is defined in the same example set (same directory,
    doc block, or manifest bundle) unless it is a well-known built-in Kubernetes resource kind
    that does not require a custom definition (e.g., Namespace, Service, Deployment).
  • For Secrets/ConfigMaps referenced via envFrom, env.valueFrom.secretKeyRef,
    env.valueFrom.configMapKeyRef, volume mounts, or imagePullSecrets, ensure a corresponding
    stub manifest with placeholder data is present.
  • For CRDs (custom resource kinds), ensure a CRD definition or a clear reference to a
    separately provided CRD manifest is included in the same example bundle.
  • Attempt to conceptually kubectl apply -f the full example set: if any referenced object
    would cause a NotFound or missing kind error, the rule is violated.
  • Red flag: comments like "create this Secret separately" without an accompanying manifest
    in the same example set.

Status: Passed

Quote variables in EXIT traps performing filesystem cleanup in Bash

Objective:

  • Locate Bash scripts that install EXIT traps (e.g., trap '...' EXIT or trap cleanup EXIT).
  • If the trap body directly invokes cleanup commands like rm, mv, cp, rmdir, mkdir, or
    similar with variables representing paths, confirm each such variable is quoted with
    double quotes (e.g., "$tmpFile").
  • For traps that call a function (e.g., trap cleanup EXIT), inspect the function definition
    and ensure every variable used as a filesystem path argument to cleanup commands is
    double-quoted.
  • Red flag: rm -f $tmpFile, rm -rf $tmp_dir/*, or concatenations like $tmpDir/$file without
    surrounding double quotes in the trap body or in the cleanup function called by the trap.
  • The rule is satisfied when all path variables used in trap-driven cleanup commands are
    wrapped in double quotes, with no unquoted variable expansions in those argument
    positions.

Status: Passed

Cleanup commands in EXIT traps must not cause script failure

Objective:

  • In shell scripts that use set -e (or set -o errexit), locate any trap definitions that
    handle EXIT.
  • Inspect the commands executed in the EXIT trap body (inline or via a function).
  • Confirm that cleanup commands (e.g., rm, rmdir, docker rm, kill, umount, temporary
    file/dir removal) are wrapped so that their failure does not cause the script to fail, for
    example by:
    - appending || true or || : to each cleanup command, or
    - disabling set -e at the beginning of the trap or cleanup function and restoring it
      afterwards.
  • Check that the trap handler does not rely on set -e to enforce success of cleanup;
    failures should be explicitly checked and logged if important, not left to errexit.
  • A violation occurs if any cleanup command in an EXIT trap can fail (non-zero exit code)
    without being neutralized (no || true, no set +e/restoration pattern, no explicit error
    handling) while set -e is in effect for that code path.

Status: Passed

Disallow unused variables in shell scripts (ShellCheck SC2034)

Objective:

  • Run ShellCheck (or ensure CI runs it) on every changed shell script file with a rule set
    that includes SC2034 (unused variables).
  • In the diff, look for variable assignments (e.g., foo=..., local bar=..., readonly
    BAZ=...) that are never referenced later in the same scope.
  • Confirm that any SC2034 warnings are either fixed (variable removed or used) or explicitly
    and narrowly suppressed with a justification comment.
  • Verify that broad or file-wide suppressions (e.g., # shellcheck disable=SC2034 at top of
    file) are not used unless all flagged variables are truly intentional (e.g., sourced
    configuration variables). Prefer inline disable/enable around specific, justified
    variables.
  • Check that variables passed only to commands like export (for environment) or used via
    indirect expansion (${!var}) are recognized by ShellCheck or, if not, that there is an
    inline suppression with explanation.

Status: Passed

Document resource creation namespace when behaviorally relevant

Objective:

  • For any feature, controller, or API that creates namespaced resources (e.g., Kubernetes
    Deployments, ConfigMaps, Secrets, Jobs), inspect the accompanying documentation (user
    guides, API docs, CRD docs, README, comments referenced by users).
  • If the behavior depends on which namespace the resource is created in (e.g., same
    namespace as the CR, a fixed namespace, or a configurable namespace), confirm that the
    documentation explicitly states this rule.
  • Check that defaults are unambiguous (e.g., "resources are created in the same namespace as
    the Foo custom resource unless spec.targetNamespace is set").
  • Flag as a violation if:
    - The code clearly uses a specific namespace (same as CR, hardcoded, or configured)
      but the user-facing docs do not mention it, or
    - The docs are ambiguous (e.g., refer generically to creating a resource without
      indicating namespace when multiple interpretations are possible).

Status: Passed

Document configuration merge semantics explicitly

Objective:

  • Locate any user-facing documentation, README, or reference docs that describe combining or
    layering configurations (e.g., base + environment-specific files, CLI flags + config
    files, default + user config).
  • Check that for each described combination, the docs explicitly state:
    - Whether later sources override earlier ones or are merged.
    - If merging is used, whether it is shallow (one level) or deep (nested structures
      merged recursively).
    - Any exceptions (e.g., lists always override, certain keys are non-mergeable).
  • Confirm the description aligns with the actual implementation (e.g., code shows a
    deep-merge helper but docs do not claim shallow merge).
  • Mark as a violation if merge behavior is implied only by examples or ordering, or requires
    guessing, and is not described in clear terms such as "override", "shallow merge", or
    "deep merge".

Status: Passed

Register external custom resource schemes with controller-runtime

Objective:

  • Identify any reconcilers, controllers, or clients in the change set that reference
    Kubernetes types from an external API group/version (i.e., not core or already registered
    project APIs).
  • For each such type (e.g., foo "v1alpha1" from group "example.com"), confirm there is a
    scheme registration call for that package, typically something like
    apiv1alpha1.AddToScheme(scheme) or equivalent.
  • Verify that this registration is executed before the manager starts and before any client
    or cache is constructed (usually in main.go or the setup/initialization function).
  • Check that the registration is done against the same scheme instance used by the
    controller-runtime manager (e.g., the scheme passed to ctrl.NewManager), not a separate,
    unused scheme.
  • Red flags: direct use of CRD types from external modules without any corresponding
    AddToScheme import/usage; or creation of a new runtime.Scheme that is never passed into
    the manager but has the external type registered on it.

Status: Passed

Ensure operator RBAC includes required verbs for managed Kubernetes resource kinds

Objective:

  • Identify all Kubernetes resource kinds and verbs used by the operator in the changed code:
    look for client-go calls such as Create/Update/Patch/Delete/DeleteCollection or
    apply-style helpers, and note the target GroupVersionResource/GroupVersionKind.
  • Map these operations to RBAC verbs: Create → create, Update → update, Patch/Apply → patch,
    Delete/DeleteCollection → delete.
  • Inspect the operator’s ClusterRole/Role manifests (YAML/Helm/Kustomize) shipped with this
    change:
    - For each managed resource kind (apiGroups + resources) verify that the
      corresponding verbs are present.
    - Confirm that permissions are scoped to only the necessary apiGroups/resources and do
      not use overly broad wildcards like resources: ["*"] or verbs: ["*"] unless the
      operator truly manages all kinds.
  • If code adds a new managed resource kind or new operation (e.g., starts calling Delete on
    a resource), ensure the RBAC definitions are updated in the same change set.
  • Optionally, check any e2e/integration manifests or example installs for the same RBAC
    alignment.
  • A violation is present if: (a) the operator performs a verb on a resource kind without a
    matching RBAC rule, or (b) the RBAC includes that resource kind but omits a verb needed by
    the code.

Status: Passed

Document omitted spec.replicas when relying on autoscaling

Objective:

  • For every new or modified Kubernetes workload manifest (e.g., Deployment, StatefulSet,
    ReplicaSet; DaemonSet typically excludes replicas, so check only resource kinds that
    support spec.replicas):
    - If the resource kind supports a spec.replicas field and it is absent in the manifest,
      check for a nearby comment that clearly states that replicas are intentionally
      managed by autoscaling (e.g., HPA/KEDA) or another controller.
    - The comment must be in the same YAML document and adjacent to the metadata.name or
      spec section (within 3 lines above spec: or directly above kind/metadata) so the
      intent is obvious.
    - If spec.replicas is present, no special comment is required by this rule.
    - Flag as a violation when spec.replicas is omitted and no such explanatory comment
      exists, or when the comment does not mention autoscaling/automatic replica
      management.
  • This rule applies to raw YAML, Helm templates, and Kustomize bases/overlays (including
    templates like {{- if .Values.autoscaling.enabled }}).

Status: Passed

Use idempotent create-or-update reconciliation for optional CR-managed resources

Objective:

  • Identify optional features in the Custom Resource spec (e.g., spec.featureX.enabled,
    spec.backup.enabled) that imply ownership of Kubernetes resources (Secrets, Deployments,
    Services, ConfigMaps, CRDs, etc.).
  • In the reconciler code paths that handle these optional features, verify that:
    - When the feature is enabled, the controller uses a create-or-update pattern (e.g.,
    controllerutil.CreateOrUpdate, server-side apply with a consistent field manager, or a
    read-then-update flow) instead of assuming the resource does not exist.
    - The logic first checks for the existence of the managed resource (Get or lister call)
    and then either creates it if missing or updates it to match desired state if present.
    - The code does not treat an AlreadyExists error from Create as a hard failure; it
    either ignores it or triggers an Update path.
  • Confirm that disabling an optional feature does not attempt to re-create managed resources
    and, if deletion is required, it is also safe when the resource is already absent (ignore
    NotFound errors).
  • Red flags:
    - Direct Create calls with no preceding Get/Update or CreateOrUpdate helper for
    resources that may already exist.
    - Reconciliation logic that returns an error on AlreadyExists/Conflict instead of
    reconciling to desired state.
    - Branches like if enabled { create(...) } without any update or existence handling.
  • For unit tests, look for tests that call Reconcile multiple times with the same CR spec
    and assert that no errors occur and the managed resources converge to the same state.
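
The check-then-create-or-update flow above can be sketched against an in-memory stand-in for the API server; this is a hypothetical illustration of the pattern, not the controllerutil.CreateOrUpdate implementation:

```go
package main

import "fmt"

// fakeStore stands in for the Kubernetes API; resource names map to specs.
type fakeStore map[string]string

// createOrUpdate checks for existence first, then creates or updates, so an
// AlreadyExists situation is reconciled rather than treated as a hard failure.
func createOrUpdate(s fakeStore, name, desired string) string {
	if _, exists := s[name]; !exists {
		s[name] = desired
		return "created"
	}
	if s[name] != desired {
		s[name] = desired
		return "updated"
	}
	return "unchanged"
}

func main() {
	s := fakeStore{}
	fmt.Println(createOrUpdate(s, "backup-secret", "v1")) // created
	fmt.Println(createOrUpdate(s, "backup-secret", "v1")) // unchanged
	fmt.Println(createOrUpdate(s, "backup-secret", "v2")) // updated
}
```

Calling it repeatedly with the same desired state converges without errors, which is what the repeated-Reconcile unit tests described above would assert.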

Status: Passed

Guard reconciliation logic on optional CRD existence

Objective:

  • Identify reconciliation code that operates on resources whose CRD is not guaranteed to
    exist (documented as optional/feature‑gated/enterprise‑only CRDs).
  • In each reconciler, handler, or background job that accesses such a kind (list, get,
    create, update, delete, watch), check that there is a prior check that the CRD exists,
    for example:
    - Querying the discovery API or APIExtensions API for the CRD (e.g.,
    apiextensions.k8s.io/v1 CustomResourceDefinition) and caching the result.
    - Checking an injected "feature enabled"/"CRD available" flag that is populated from
    such a discovery check at startup or periodically.
  • Confirm the CRD existence check is performed before the first API call on that kind and
    that the code handles the "CRD missing" case by:
    - Skipping reconciliation for that kind, OR
    - Disabling the controller/queue for that kind, OR
    - Returning a requeue‑after without error, with a bounded retry interval.
  • Verify that direct client calls (e.g., client.CustomResources(...).List/Watch/Create) are
    not executed when the check indicates the CRD is absent.
  • Red flags:
    - Reconciliation logic assumes the CRD is always present in multi‑tenant/operator
    deployments.
    - Error handling that retries indefinitely on 404/NoMatchError for the CRD kind instead
    of short‑circuiting based on an existence check.
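
A minimal sketch of the guard, assuming the CRD inventory has already been populated from a discovery check (the CRD names are invented):

```go
package main

import "fmt"

// installedCRDs stands in for a cached discovery-API lookup; a real operator
// would populate this from apiextensions.k8s.io/v1 at startup or periodically.
var installedCRDs = map[string]bool{"widgets.example.com": true}

// reconcileOptional short-circuits when the CRD is absent instead of issuing
// client calls that would fail with NoMatchError and retry forever.
func reconcileOptional(crd string) string {
	if !installedCRDs[crd] {
		return "skipped: " + crd + " not installed"
	}
	return "reconciled: " + crd
}

func main() {
	fmt.Println(reconcileOptional("widgets.example.com"))
	fmt.Println(reconcileOptional("gadgets.example.com"))
}
```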

Status: Passed

Quote variable expansions in Bash to prevent word splitting and globbing

Objective:

  • In Bash/sh scripts, look at all uses of parameter expansion (e.g., $var, $1, ${foo}) in
    commands like echo, printf, rm, cp, mv, test/[ ], grep, etc.
  • Confirm that variable expansions that may contain spaces, tabs, newlines, or glob
    characters (*, ?, [) are wrapped in double quotes, e.g., "$var".
  • Exceptions that are acceptable and should not be flagged:
    - Cases where word splitting/globbing is explicitly desired and documented with a
    comment (e.g., # deliberate word splitting on the previous line).
    - Special parameters in known-safe contexts like for x in "$@"; do or array expansions
    using the correct quoting form ("${arr[@]}").
  • Red flags:
    - echo $var instead of echo "$var".
    - Commands like rm $file or cd $dir without quotes.
    - Using $* instead of "$@" to forward arguments.
  • Verify that added/fixed code keeps command substitution and arithmetic contexts
    appropriately quoted or structured (e.g., "$(cmd)" when used as an argument).
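
The word-splitting hazard can be demonstrated in a few lines; the filename is invented:

```shell
#!/usr/bin/env bash
file="my report.txt"

set -- $file            # unquoted: deliberate word splitting to show the bug
echo "unquoted words: $#"

set -- "$file"          # quoted: the value survives as a single word
echo "quoted words: $#"
```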

Status: Passed

Document version boundaries for behavior changes

Objective:

  • In any changed or newly added documentation that describes a behavior difference across
    versions, look for an explicit version boundary.
  • Confirm that the boundary is stated in an unambiguous form such as:
    - "Before vX.Y.Z" / "Since vX.Y.Z", or
    - "In vX.Y.Z and later" / "In versions earlier than vX.Y.Z".
  • Ensure at least one concrete version number is present in the sentence that explains the
    behavior change, not only in headings or release notes elsewhere.
  • Mark as a violation if the doc uses vague phrases like "previously", "in older versions",
    or "recently" without tying them to a specific version number.
  • Mark as a violation if multiple behaviors are described but it is not clear which exact
    versions each behavior applies to.

Status: Passed

Validate required environment variables in bash using failing parameter expansion

Objective:

  • In bash scripts, identify variables that are required for correct operation (e.g.,
    credentials, paths, mandatory flags).
  • Check that each required variable is validated before first use with a failing parameter
    expansion of the form:
    - : "${VAR:?message}" or
    - : "${VAR:?}" (at minimum the default message), or
    - : "${NAMESPACE_VAR:?NAMESPACE_VAR is required}".
  • Confirm the validation occurs before using the variable in commands, conditionals, or
    arithmetic.
  • Red flags:
    - Using required variables without any prior check.
    - Ad-hoc checks like if [ -z "$VAR" ]; then ... that do not exit with a non-zero status
    while set -u is not in effect.
    - Relying solely on comments or documentation to describe required variables without
    runtime enforcement.
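
A small sketch of the failing-expansion idiom; the function and variable names are illustrative (the subshell body only keeps the demo itself from exiting):

```shell
#!/usr/bin/env bash
# The failing expansion stops execution with a clear message when the
# variable is unset or empty, before the variable is ever used.
require_namespace() ( : "${NAMESPACE_VAR:?NAMESPACE_VAR is required}" )

unset NAMESPACE_VAR
if require_namespace 2>/dev/null; then
  echo "unset value accepted (bug)"
else
  echo "unset value rejected"
fi

NAMESPACE_VAR=demo
require_namespace && echo "set value accepted"
```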

Status: Passed

Quote Bash array expansions in loops

Objective:

  • In Bash scripts, locate all for-loops that iterate over arrays using the @ or * expansion
    (e.g., for x in ${arr[@]}; do or for x in ${arr[*]}; do).
  • Confirm that these expansions are wrapped in double quotes as a whole expression, e.g.,
    "${arr[@]}" or "${arr[*]}".
  • Verify that there is no mixing of quoted and unquoted array expansions in the same loop
    (e.g., for x in ${arr[@]} "${arr2[@]}"; do is a violation for arr but not arr2).
  • Exemptions: scalar expansions (e.g., for x in $var; do) are outside this rule and should
    not be treated as violations here.
  • Automated check: a simple grep or linter rule can flag patterns of the form
    for ... in ${name[@]} or for ... in ${name[*]} that lack the immediately surrounding
    double quotes.
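
A short demonstration of why the quoting matters; the array contents are invented:

```shell
#!/usr/bin/env bash
files=("a b.txt" "c.txt")

count=0
for f in "${files[@]}"; do count=$((count + 1)); done   # quoted: one pass per element
echo "quoted iterations: $count"

count=0
for f in ${files[@]}; do count=$((count + 1)); done     # unquoted: re-split on spaces
echo "unquoted iterations: $count"
```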

Status: Passed

Autoscaling-compatible workloads must not set spec.replicas

Objective:

  • Look at Kubernetes manifests or Helm templates for Deployments and StatefulSets that are
    intended to be autoscaled (e.g., referenced by an HPA in the same chart/repo or named in
    autoscaling docs/values).
  • For those autoscaling-compatible templates, check that the top-level spec.replicas field
    is not set to a fixed integer value.
  • In Helm charts, verify replicas is either omitted or fully driven by a value that is
    documented as being ignored/unused when HPA is enabled (e.g., set via a conditional),
    rather than always rendering a concrete integer.
  • Red flag: a Deployment/StatefulSet that is targeted by an HPA and also has spec.replicas:
    rendered unconditionally in the final manifest.
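
A common Helm pattern consistent with this rule (the values keys are illustrative) renders replicas only when autoscaling is disabled:

```yaml
# Hypothetical Deployment template fragment
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
```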

Status: Passed

Enable strict mode in Bash scripts with `set -euo pipefail`

Objective:

  • For every new or modified .sh/Bash script, check the top of the file (after the shebang
    and optional comments) for a single line enabling strict mode: set -euo pipefail.
  • Confirm it is not placed inside a conditional block or function where parts of the script
    run before it is applied.
  • Ensure there is no later line that disables these options (e.g., set +e, set +u, or set +o
    pipefail) without a narrow, clearly scoped reason.
  • If temporary disabling is needed, verify it is localized (e.g., within a subshell or
    immediately re‑enabled) and not applied to the whole script.
  • For scripts that must handle non‑zero exit codes intentionally, confirm they use explicit
    checks (e.g., || true or if statements) rather than globally disabling -e.
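
A minimal sketch of strict mode plus the explicit-handling idioms it requires:

```shell
#!/usr/bin/env bash
set -euo pipefail   # exit on error, on unset variables, and on pipe failures

# With -e active, an expected non-zero status is handled explicitly
# instead of disabling -e for the whole script.
if ! grep -q needle /dev/null; then
  echo "handled expected failure"
fi

# With -u active, a possibly-unset variable gets an explicit default.
echo "value: ${DEMO_UNSET_VAR:-default}"
```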

Status: Passed

Use only clearly non-sensitive dummy values in example secrets and dependent resources

Objective:

  • Inspect any newly added or modified example resources (e.g., Kubernetes Secrets, Vault/SM
    configs, CloudFormation/Terraform variables, sample .env files, README snippets).
  • Check that values for keys that would be sensitive in production (password, secret, token,
    key, api_key, client_secret, private_key, certificate, etc.) are clearly fake, such as
    "example", "changeme", or "dummy-*", and do not match real-world formats (no valid JWT
    structure, no cloud key prefixes like "AKIA", no Stripe-like "sk_live_" prefixes).
  • Confirm that no placeholder looks like a real credential: no long random-looking base64
    strings, no PEM blocks, no URLs with embedded credentials, and no realistic email/password
    combos.
  • Ensure comments or documentation explicitly indicate these are dummy/example values and
    must be replaced before deployment.
  • If any value could plausibly be mistaken for a real secret, require it to be changed to an
    obviously non-sensitive dummy value.
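
A rough sketch of how such values could be screened; the prefix list is illustrative, not exhaustive:

```shell
#!/usr/bin/env bash
# Flag realistic-looking credential formats in example content:
# AWS access key IDs, Stripe live keys, PEM blocks, JWT-shaped strings.
pattern='AKIA|sk_live_|-----BEGIN|eyJ[A-Za-z0-9_-]+\.'
sample='password: changeme
token: dummy-token'
if printf '%s\n' "$sample" | grep -Eq "$pattern"; then
  echo "possible real secret"
else
  echo "only dummy values"
fi
```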

Status: Passed

@rhdh-qodo-merge

😔 Failed to publish a ticket for this PR.

@zdrapela
Member Author

/compliance redhat.atlassian.net/browse/RHIDP-13099
