Guardian's self-healing system represents a breakthrough in autonomous code maintenance - a cybersecurity engine that can diagnose and repair its own codebase using the LJPW (Love, Justice, Power, Wisdom) framework. This document provides an in-depth analysis of how Guardian analyzes itself, identifies weaknesses, and automatically applies corrective actions to improve code health.
Key Innovation: Guardian uses its own semantic analysis framework to understand code health as a dynamic system, predict degradation trajectories, and intervene before critical failures occur.
- Overview
- Philosophical Foundation
- Architecture
- How It Works
- Healing Strategies
- Real-World Results
- Usage Guide
- Technical Implementation
- Safety Mechanisms
- Future Capabilities
Guardian's self-healing system enables the codebase to:
- Self-Analyze: Map code health metrics to LJPW semantic coordinates
- Self-Diagnose: Use dynamic system modeling to predict trajectory and identify issues
- Self-Heal: Automatically apply corrective actions to improve code health
- Self-Verify: Run tests to ensure healing actions don't introduce regressions
Traditional software maintenance requires human intervention to:
- Identify code quality issues
- Plan remediation strategies
- Apply fixes manually
- Verify improvements
Guardian automates this entire cycle by treating code health as a dynamic system that can be analyzed, predicted, and corrected using the same LJPW framework it uses for cybersecurity analysis.
┌─────────────────────────────────────────────────────────────┐
│ GUARDIAN SELF-HEALING │
│ │
│ Analyze → Diagnose → Plan → Execute → Verify → Report │
│ ↑ ↓ │
│ └──────────── Feedback Loop ───────────────────┘ │
└─────────────────────────────────────────────────────────────┘
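The feedback loop above can be sketched as a plain orchestration skeleton. This is a minimal sketch; every function name here is an illustrative stub, not Guardian's actual API:

```python
# Illustrative skeleton of the Analyze → Diagnose → Plan → Execute → Verify loop.
# All callables are placeholder stubs, not Guardian's real components.

def healing_cycle(analyze, diagnose, plan, execute, verify, max_iterations=3):
    """Run the feedback loop until the diagnosis is healthy or the budget runs out."""
    history = []
    for _ in range(max_iterations):
        report = analyze()
        diagnosis = diagnose(report)
        history.append(diagnosis)
        if diagnosis == "healthy":
            break  # nothing to heal; the loop terminates
        actions = plan(diagnosis)
        execute(actions)
        if not verify():
            break  # verification failed; stop rather than compound damage
    return history

# Toy usage: coverage improves each cycle until the diagnosis flips to "healthy"
state = {"coverage": 0.40}
log = healing_cycle(
    analyze=lambda: state["coverage"],
    diagnose=lambda cov: "healthy" if cov >= 0.6 else "low-coverage",
    plan=lambda diag: ["add-tests"],
    execute=lambda actions: state.update(coverage=state["coverage"] + 0.15),
    verify=lambda: True,
)
print(log)  # → ['low-coverage', 'low-coverage', 'healthy']
```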
Guardian's insight is that code health can be understood through the LJPW lens:
| Dimension | Code Health Mapping | What It Measures |
|---|---|---|
| Love (L) | Care for code | Test coverage, documentation, maintainability |
| Justice (J) | Fairness & balance | Even test distribution, consistent patterns, no tech debt imbalance |
| Power (P) | Execution capability | Test pass rate, feature completeness, performance |
| Wisdom (W) | Strategic design | Architecture quality, low complexity, documentation, long-term thinking |
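The "even test distribution" aspect of Justice can be quantified with a Gini coefficient over per-module coverage, as the fairness formula later in this document suggests. A sketch (this `gini` implementation is illustrative; Guardian's actual helper may differ):

```python
def gini(values):
    """Gini coefficient: 0.0 = perfectly even, approaching 1.0 = maximally uneven."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula over the sorted cumulative distribution
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

even = [0.5, 0.5, 0.5, 0.5]      # identical coverage in every module
skewed = [0.9, 0.9, 0.1, 0.1]    # coverage concentrated in two modules

print(round(1.0 - gini(even), 3))    # fairness → 1.0
print(round(1.0 - gini(skewed), 3))  # fairness → 0.6
```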
Just as Guardian models semantic meaning as a trajectory through LJPW space, it models codebase evolution as a dynamic system:
dL/dt = α_LJ * J + α_LW * W - β_L * L
dJ/dt = α_JL * (L / (K_JL + L)) + α_JW * W - γ_JP * (P^n / (K_JP^n + P^n)) * (1 - W) - β_J * J
dP/dt = α_PL * L + α_PJ * J - β_PW * P * (1 - W) - β_P * P
dW/dt = α_WL * L + α_WJ * J + α_WP * P - β_W * W
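These coupled equations can be integrated numerically. The sketch below mirrors the equation structure term for term; the coefficient values are illustrative assumptions, not Guardian's calibrated constants:

```python
import numpy as np

# Illustrative coefficients -- NOT Guardian's calibrated values.
A = dict(LJ=0.2, LW=0.2, JL=0.3, JW=0.2, PL=0.3, PJ=0.2, WL=0.1, WJ=0.1, WP=0.1)
B = dict(L=0.3, J=0.2, PW=0.2, P=0.1, W=0.25)
K_JL, K_JP, n, G_JP = 0.5, 0.5, 2, 0.4

def derivatives(s):
    """Right-hand side of the LJPW system, one term per term in the equations above."""
    L, J, P, W = s
    dL = A["LJ"] * J + A["LW"] * W - B["L"] * L
    dJ = (A["JL"] * L / (K_JL + L) + A["JW"] * W
          - G_JP * (P**n / (K_JP**n + P**n)) * (1 - W) - B["J"] * J)
    dP = A["PL"] * L + A["PJ"] * J - B["PW"] * P * (1 - W) - B["P"] * P
    dW = A["WL"] * L + A["WJ"] * J + A["WP"] * P - B["W"] * W
    return np.array([dL, dJ, dP, dW])

def rk4_step(s, dt):
    """One 4th-order Runge-Kutta step, clipped to the model's state bounds."""
    k1 = derivatives(s)
    k2 = derivatives(s + 0.5 * dt * k1)
    k3 = derivatives(s + 0.5 * dt * k2)
    k4 = derivatives(s + dt * k3)
    return np.clip(s + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4), 0.0, 1.5)

state = np.array([0.383, 0.569, 0.842, 0.394])
for _ in range(100):       # simulate 10 time units with dt = 0.1
    state = rk4_step(state, 0.1)
print(np.round(state, 3))  # projected LJPW state under the toy coefficients
```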
Key Insight: Code with high Power (execution) but low Wisdom (documentation, architecture) will experience Justice Erosion - technical debt accumulates, and the system becomes unmaintainable.
Guardian's LJPW model has a natural equilibrium point:
(L, J, P, W) = (0.618, 0.414, 0.718, 0.693)
This represents optimal code health based on golden ratio relationships and empirical calibration. The self-healing system guides the codebase toward this equilibrium.
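Distance to this equilibrium is the quantity the diagnosis tracks. A quick check, using the example coordinates reported later in this document:

```python
import math

equilibrium = (0.618, 0.414, 0.718, 0.693)
current = (0.383, 0.569, 0.842, 0.394)  # example state from an analysis run below

distance = math.dist(current, equilibrium)  # Euclidean distance in LJPW space
print(round(distance, 3))  # → 0.429
```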
┌─────────────────────────────────────────────────────────────┐
│ Self-Healing Architecture │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────┐ ┌──────────────────┐ │
│ │ SelfAnalyzer │────────>│ DynamicLJPW │ │
│ │ (Diagnosis) │ │ (Prediction) │ │
│ └────────┬────────┘ └──────────────────┘ │
│ │ │
│ │ CodeHealthReport │
│ ↓ │
│ ┌─────────────────┐ ┌──────────────────┐ │
│ │ SelfHealer │────────>│ HealingStrategies│ │
│ │ (Correction) │ │ (Actions) │ │
│ └────────┬────────┘ └──────────────────┘ │
│ │ │
│ │ Applied Changes │
│ ↓ │
│ ┌─────────────────┐ ┌──────────────────┐ │
│ │ TestRunner │────────>│ ResultValidator │ │
│ │ (Verification) │ │ (Safety Check) │ │
│ └─────────────────┘ └──────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Analyzes codebase health and maps to LJPW coordinates.
Key Methods:
- `analyze()` - Run complete health analysis
- `_collect_metrics()` - Gather coverage, test, and quality metrics
- `_calculate_ljpw_coordinates()` - Map metrics to LJPW space
- `_diagnose_concerns()` - Identify issues and predict trajectory
Applies automatic healing actions based on diagnosis.
Key Methods:
- `heal()` - Execute healing cycle
- `_plan_healing_actions()` - Generate action plan
- `_apply_healing_action()` - Execute individual action
- `_verify_with_tests()` - Ensure fixes don't break tests
Predicts trajectory and calculates optimal interventions.
Key Methods:
- `simulate()` - Project future state using RK4 integration
- `predict_trajectory()` - Determine if system is converging or diverging
- `calculate_intervention()` - Compute optimal corrective action
Guardian begins by analyzing its own codebase:
# Command
guardian self-analyze
# What happens internally
1. Run pytest with coverage plugin
2. Collect metrics:
- Test coverage percentage
- Number of passing/failing tests
- Documentation ratio
- Code complexity
- Lines of code
3. Map metrics to LJPW coordinates
4. Simulate trajectory to predict future state
5. Generate diagnosis and recommendations

The mapping philosophy:
# Love (Care) = Investment in code quality
love = (
0.60 * test_coverage + # Primary: Do we test our code?
0.30 * documentation_ratio + # Secondary: Do we document our code?
0.10 * baseline # Base score for existing
)
# Justice (Fairness) = Even distribution and consistency
# Calculate test distribution fairness (Gini coefficient)
fairness = 1.0 - gini(coverage_by_module)
justice = (
0.70 * fairness + # Primary: Even test coverage?
0.30 * (1.0 if no_failing_tests else 0.5)
)
# Power (Execution) = Ability to execute correctly
power = (
0.70 * test_pass_rate + # Primary: Do tests pass?
0.30 * test_coverage # Secondary: Coverage supports power
)
# Wisdom (Strategy) = Long-term thinking and architecture
complexity_score = 1.0 - (cyclomatic_complexity / threshold)
wisdom = (
0.40 * documentation_ratio + # Primary: Strategic documentation
0.40 * complexity_score + # Secondary: Simple architecture
0.20 * test_coverage # Tertiary: Coverage supports wisdom
)

Once LJPW coordinates are calculated, Guardian simulates the future:
# Current state
current = Coordinates(L=0.383, J=0.569, P=0.842, W=0.394)
# Simulate 10 time units into future using RK4 integration
future = simulator.simulate(current, duration=10.0)
# Result: future = (0.628, 0.713, 1.334, 1.172)
# System is DIVERGING from equilibrium (0.618, 0.414, 0.718, 0.693)

Trajectory Classification:
- Converging: Distance to equilibrium decreases → System is self-correcting
- Stable: Distance remains constant → System at equilibrium
- Diverging: Distance increases → Intervention required
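Classification reduces to comparing the distance to equilibrium before and after simulation. A sketch (the tolerance value is an assumption, not Guardian's setting):

```python
import math

EQUILIBRIUM = (0.618, 0.414, 0.718, 0.693)

def classify_trajectory(current, future, tolerance=0.01):
    """Converging if the simulated future is meaningfully closer to equilibrium,
    diverging if meaningfully farther, stable otherwise."""
    d_now = math.dist(current, EQUILIBRIUM)
    d_later = math.dist(future, EQUILIBRIUM)
    if d_later < d_now - tolerance:
        return "converging"
    if d_later > d_now + tolerance:
        return "diverging"
    return "stable"

# Example states from the analysis run shown above
print(classify_trajectory(
    current=(0.383, 0.569, 0.842, 0.394),
    future=(0.628, 0.713, 1.334, 1.172),
))  # → diverging
```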
Guardian uses numerical differentiation to find optimal intervention:
# For each dimension, test small perturbations
for dimension in [L, J, P, W]:
perturbed = current.copy()
perturbed[dimension] += 0.1
future_with_perturbation = simulate(perturbed)
distance_to_equilibrium = calculate_distance(future_with_perturbation)
# Which perturbation reduces distance most?
    if distance_to_equilibrium < best_distance:
        best_distance = distance_to_equilibrium
        optimal_intervention[dimension] = 0.1
# Result: Increase Wisdom by 0.2 → Maximum stabilization

Based on the intervention, Guardian plans concrete actions:
# Intervention says: Increase Wisdom by 0.2
# Healer translates to concrete actions:
if wisdom_deficit:
actions = [
"Add docstrings to undocumented modules",
"Create architecture documentation",
"Add inline comments to complex functions",
"Generate README for each package"
]
if justice_deficit:
actions = [
"Generate test stubs for untested modules",
"Balance test coverage across packages",
"Add integration tests for undertested areas"
]
if power_deficit:
actions = [
"Fix failing tests",
"Apply linting fixes (black, isort, ruff)",
"Fix type errors (mypy)"
]
if love_deficit:
actions = [
"Increase test coverage",
"Add docstrings",
"Improve code maintainability"
]

Guardian executes healing actions automatically:
# Example: Auto-format with black
def _apply_black_formatting(self, target_dir: str) -> HealingResult:
"""Apply black code formatting"""
try:
subprocess.run(
["python", "-m", "black", target_dir, "--line-length", "120"],
check=True,
capture_output=True
)
return HealingResult(success=True)
except subprocess.CalledProcessError as e:
return HealingResult(success=False, error=str(e))After each action, Guardian verifies nothing broke:
# Run full test suite
result = subprocess.run(["pytest", "tests/"], capture_output=True)
if result.returncode != 0:
# Tests failed - ROLLBACK
self._rollback_changes()
return HealingStatus.FAILED
else:
# Tests passed - COMMIT
return HealingStatus.SUCCESS

Guardian implements multiple healing strategies, each targeting specific LJPW dimensions:
| Strategy | Target | Impact | Safety | Implementation |
|---|---|---|---|---|
| Black Formatting | Power | +0.05 | ✅ Safe | black guardian/ --line-length 120 |
| Import Sorting | Power | +0.02 | ✅ Safe | isort guardian/ |
| Docstring Addition | Love, Wisdom | +0.10 | ⚠️ Review | Generate AI docstrings |
| Test Stub Generation | Justice | +0.04 | ✅ Safe | Create placeholder tests |
| Documentation | Wisdom | +0.15 | ✅ Safe | Generate README/guides |
| Type Hint Addition | Wisdom | +0.08 | ⚠️ Review | Add missing type hints |
| Complexity Reduction | Wisdom | +0.12 | ⚠️ Review | Refactor complex functions |
| Coverage Expansion | Love, Justice | +0.10 | ⚠️ Review | Add real tests |
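One way to carry the strategy table in code is a small registry. This is a sketch; the data structure and the auto-safe flags are assumptions drawn from the safety guidance in this document, not Guardian's internal representation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Strategy:
    name: str
    targets: tuple      # LJPW dimensions this strategy boosts
    impact: float       # expected coordinate delta
    auto_safe: bool     # True = applied automatically, False = dry-run/review only

STRATEGIES = [
    Strategy("black_format", ("power",), 0.05, True),
    Strategy("isort", ("power",), 0.02, True),
    Strategy("docstrings", ("love", "wisdom"), 0.10, False),
    Strategy("test_stubs", ("justice",), 0.04, True),
    Strategy("documentation", ("wisdom",), 0.15, True),
    Strategy("type_hints", ("wisdom",), 0.08, False),
    Strategy("complexity_reduction", ("wisdom",), 0.12, False),
    Strategy("coverage_expansion", ("love", "justice"), 0.10, False),
]

def safe_strategies_for(dimension):
    """Auto-safe strategies that boost the given deficient dimension."""
    return [s.name for s in STRATEGIES if s.auto_safe and dimension in s.targets]

print(safe_strategies_for("wisdom"))  # → ['documentation']
```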
Auto-Safe Actions (applied automatically):
- Code formatting (black, isort)
- Test stub generation
- Documentation generation
- Import organization
Manual Review Required (dry-run only):
- Code refactoring
- Test implementation
- Type hint changes
- Complexity reduction
Guardian prioritizes healing actions based on:
- Intervention engine recommendation (which dimension needs boost?)
- Impact score (how much improvement per action?)
- Safety level (auto-safe actions first)
- Dependencies (some actions depend on others)
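A sort consistent with these criteria might look like the following. This is a sketch: `sort_healing_actions` and the simplified `HealingAction` fields are illustrative, ordering by safety, then impact, then priority, all descending:

```python
from dataclasses import dataclass

@dataclass
class HealingAction:
    type: str
    impact: float
    safety: int
    priority: int

def sort_healing_actions(actions):
    # Descending on safety first, then impact, then priority
    return sorted(actions, key=lambda a: (a.safety, a.impact, a.priority), reverse=True)

actions = [
    HealingAction("docstrings", impact=0.10, safety=7, priority=70),
    HealingAction("black", impact=0.05, safety=10, priority=100),
    HealingAction("test_stubs", impact=0.04, safety=10, priority=90),
]
print([a.type for a in sort_healing_actions(actions)])
# → ['black', 'test_stubs', 'docstrings']
```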
# Example prioritization
actions = [
HealingAction(type="black", impact=0.05, safety=10, priority=100),
HealingAction(type="test_stubs", impact=0.04, safety=10, priority=90),
HealingAction(type="docstrings", impact=0.10, safety=7, priority=70),
]
# Sort by: safety DESC, impact DESC, priority DESC
sorted_actions = sort_healing_actions(actions)

Date: 2025-11-12 09:00 UTC
Duration: 4 minutes
Actions: 2 (black formatting, test stub generation)
Before:
Love: 0.383
Justice: 0.569
Power: 0.842
Wisdom: 0.394
Coverage: 47.2%
Tests: 140/140 passing
Threat: MEDIUM (diverging trajectory)
Actions Applied:
1. ✅ Black formatting on 36 Python files
   - 1,305 insertions, 1,564 deletions
   - Standardized code style
2. ✅ Test stubs for 2 modules
   - tests/test_advanced_gap_analyzer.py
   - tests/test_analyzers_geopolitical.py
   - Fixed import paths
After:
Love: 0.388 (+0.005, +1.3%)
Justice: 0.578 (+0.009, +1.6%) ⭐ Largest improvement
Power: 0.844 (+0.002, +0.2%)
Wisdom: 0.396 (+0.002, +0.5%)
Coverage: 48.1% (+0.9%)
Tests: 142/142 passing (+2 tests)
Threat: MEDIUM (still diverging, needs more Wisdom)
Key Insights:
- Justice improved most (+0.009) because test stubs balanced coverage distribution
- All tests still passing - no regressions introduced
- System still diverging - needs documentation boost to Wisdom
- Self-healing works - Guardian successfully diagnosed and corrected itself
Quantitative Improvements:
- +1.6% Justice (fairness in test coverage)
- +0.9% absolute coverage increase
- +2 test files (infrastructure for future testing)
- 0 test regressions (100% safety)
Qualitative Improvements:
- Consistent code formatting across entire codebase
- Test infrastructure for previously untested modules
- Foundation for developers to add real tests
- Demonstrated viability of self-healing concept
Based on current trajectory analysis:
Recommendation: Increase Wisdom by 0.20
Priority: 2.8/10 (medium-high)
Top tactics:
1. Generate documentation for key modules
2. Add docstrings to undocumented functions
3. Create architecture guides
4. Reduce cyclomatic complexity in complex functions
guardian self-analyze

Output:
📊 CODE METRICS:
Test Coverage: 48.1%
Tests Passing: 142/142 (100.0%)
Documentation: 0.0%
Lines of Code: 9,018
🧭 LJPW COORDINATES:
Love (Care): 0.388
Justice (Fairness): 0.578
Power (Execution): 0.844
Wisdom (Strategy): 0.396
⚠️ THREAT ASSESSMENT: MEDIUM
💡 RECOMMENDATIONS:
1. WISDOM (Priority: 2.8/10)
Current: 0.396 → Target: +0.200
guardian heal-self --dry-run

Output:
📋 Planned 5 healing actions:
1. [Power] ✓ Auto-format code with black
2. [Justice] ✓ Generate test stubs for untested modules
3. [Love] ⚠ Add docstrings to undocumented functions
4. [Wisdom] ⚠ Generate documentation
5. [Wisdom] ⚠ Add architecture guide
🔍 DRY RUN - No changes applied
guardian heal-self --max-actions 3

Applies up to 3 healing actions and verifies with tests.
guardian heal-self --strategy black
guardian heal-self --strategy test-stubs
guardian heal-self --strategy docstrings --dry-run

# .github/workflows/self-healing.yml
name: Guardian Self-Healing
on:
schedule:
- cron: '0 2 * * 0' # Weekly on Sunday 2 AM
jobs:
heal:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Analyze health
run: guardian self-analyze
- name: Apply healing
run: guardian heal-self --max-actions 3
- name: Create PR
uses: peter-evans/create-pull-request@v5
with:
title: "Guardian Self-Healing: Weekly maintenance"
body: "Automated code health improvements"

# .git/hooks/pre-commit
#!/bin/bash
# Analyze health before each commit
guardian self-analyze --quiet
# If trajectory is severely diverging, warn developer
if [ $? -eq 2 ]; then
echo "⚠️ Warning: Code health is degrading"
echo "Run 'guardian heal-self' to improve"
fi

# custom_healing.py
from guardian.meta.self_healer import SelfHealer, HealingAction
class CustomHealer(SelfHealer):
def _plan_custom_action(self) -> List[HealingAction]:
"""Add your custom healing strategy"""
return [
HealingAction(
type="custom",
description="My custom fix",
impact_dimension="wisdom",
expected_delta=0.05,
safety_level=8,
command=["my-custom-tool", "fix"]
)
]
# Use custom healer
healer = CustomHealer(project_root=".")
healer.heal()

Guardian uses 4th-order Runge-Kutta integration for high accuracy:
def _rk4_step(self, state: np.ndarray, dt: float) -> np.ndarray:
"""
4th-order Runge-Kutta integration step
Achieves O(dt^5) local error (vs O(dt^2) for Euler method)
RK4 Formula:
k1 = f(t, y)
k2 = f(t + dt/2, y + dt*k1/2)
k3 = f(t + dt/2, y + dt*k2/2)
k4 = f(t + dt, y + dt*k3)
y_next = y + (dt/6) * (k1 + 2*k2 + 2*k3 + k4)
"""
k1 = self._derivatives(state)
k2 = self._derivatives(state + 0.5 * dt * k1)
k3 = self._derivatives(state + 0.5 * dt * k2)
k4 = self._derivatives(state + dt * k3)
new_state = state + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
return np.clip(new_state, 0.0, 1.5)

Uses numerical differentiation to find steepest descent:
def calculate_intervention(self, current: Coordinates) -> Dict[str, float]:
"""
Calculate optimal intervention using gradient descent
For each dimension, perturb by small amount and measure impact
on distance to equilibrium. Choose perturbation with maximum
beneficial impact.
"""
perturbation = 0.1
best_improvement = 0
optimal_intervention = {d: 0 for d in ['L', 'J', 'P', 'W']}
# Baseline: distance without intervention
future_baseline = self.simulate(current, duration=10.0)
baseline_distance = self._distance(future_baseline, self.equilibrium)
# Test perturbations in each dimension
for dim_idx, dim_name in enumerate(['L', 'J', 'P', 'W']):
perturbed = current.copy()
perturbed[dim_idx] += perturbation
future_perturbed = self.simulate(perturbed, duration=10.0)
perturbed_distance = self._distance(future_perturbed, self.equilibrium)
improvement = baseline_distance - perturbed_distance
    if improvement > best_improvement:
        best_improvement = improvement
        optimal_intervention = {d: 0 for d in ['L', 'J', 'P', 'W']}
        optimal_intervention[dim_name] = 0.2  # Suggested boost
return optimal_intervention

@dataclass
class CodeHealthReport:
"""Complete diagnosis of codebase health"""
# LJPW coordinates
coordinates: Coordinates
# Raw metrics
coverage: Dict[str, float] # overall, by_module, by_file
tests: Dict[str, int] # total, passing, failing, skipped
quality: Dict[str, float] # doc_ratio, complexity, maintainability
# Trajectory analysis
trajectory: TrajectoryPrediction
concerns: List[str]
recommendations: List[Intervention]
# Threat assessment
threat_level: str # LOW, MEDIUM, HIGH, CRITICAL
# Timestamp
timestamp: datetime

@dataclass
class HealingAction:
"""A single healing action to be applied"""
type: str # "black", "test-stubs", "docstrings", etc.
description: str # Human-readable description
target_path: str # File/directory to modify
impact_dimension: str # "love", "justice", "power", "wisdom"
expected_delta: float # Expected improvement (0.0-1.0)
safety_level: int # 1-10 (10 = completely safe)
requires_review: bool # True if manual review needed
command: List[str] # Command to execute
rollback_command: List[str]  # How to undo (if applicable)

@dataclass
class HealingResult:
"""Result of applying a healing action"""
action: HealingAction
success: bool
# Changes made
files_modified: List[str]
lines_added: int
lines_removed: int
# Verification
tests_passed: bool
new_coordinates: Optional[Coordinates]
# Error info (if failed)
error: Optional[str]
stderr: Optional[str]

| Operation | Time Complexity | Space Complexity | Typical Duration |
|---|---|---|---|
| Self-analysis | O(n) files | O(n) metrics | ~30 seconds |
| RK4 simulation | O(steps) | O(dimensions) | ~0.01 seconds |
| Black formatting | O(n) files | O(1) | ~2 seconds |
| Test stub generation | O(1) per stub | O(1) | ~0.1 seconds |
| Test verification | O(tests) | O(coverage) | ~10-60 seconds |
class SelfHealer:
def heal(self) -> HealingSummary:
"""Execute healing cycle with comprehensive error handling"""
try:
# Phase 1: Diagnosis
report = self.analyzer.analyze()
# Phase 2: Planning
actions = self._plan_healing_actions(report)
# Phase 3: Execution
results = []
for action in actions:
try:
result = self._apply_healing_action(action)
results.append(result)
if not result.success:
logger.warning(f"Action failed: {action.type}")
continue
except Exception as e:
logger.error(f"Error applying {action.type}: {e}")
self._rollback_changes()
raise
# Phase 4: Verification
if not self._verify_with_tests():
logger.error("Tests failed after healing")
self._rollback_changes()
return HealingSummary(success=False)
return HealingSummary(
success=True,
actions_applied=len(results),
improvements=self._calculate_improvements()
)
except Exception as e:
logger.critical(f"Self-healing failed: {e}")
return HealingSummary(success=False, error=str(e))

Guardian implements multiple safety layers to prevent self-healing from causing damage:
Always preview changes before applying:
guardian heal-self --dry-run

No changes are made to disk. You see exactly what would happen.
Every healing action is verified:
# After applying action
result = run_tests()
if not result.passed:
rollback() # Revert all changes
raise HealingFailed("Tests failed after healing")

Git integration for automatic rollback:
def _apply_healing_action(self, action: HealingAction) -> HealingResult:
# Create checkpoint
checkpoint = self._create_git_checkpoint()
try:
# Apply action
execute(action.command)
# Verify
if not self._verify_with_tests():
# Rollback to checkpoint
self._restore_checkpoint(checkpoint)
return HealingResult(success=False)
except Exception as e:
self._restore_checkpoint(checkpoint)
raise

Each action has a safety rating (1-10):
SAFETY_LEVELS = {
"black_format": 10, # Completely safe (code style only)
"isort": 10, # Completely safe (import order only)
"test_stubs": 9, # Very safe (adds new files only)
"docstrings": 7, # Moderately safe (modifies code, but tests verify)
"refactor": 5, # Requires care (changes logic)
"delete_code": 3, # Dangerous (removes code)
}
# Only apply actions with safety >= threshold
safe_actions = [a for a in actions if a.safety_level >= 7]Prevent runaway healing:
guardian heal-self --max-actions 3

Limits to 3 actions per cycle, reducing the risk of cascading failures.
if action.requires_review:
print(f"Action {action.type} requires human approval")
print(f"Impact: {action.expected_delta} on {action.impact_dimension}")
if not confirm_with_user():
skip_action(action)Every action is logged:
logger.info(f"Applying healing action: {action.type}")
logger.info(f"Target: {action.target_path}")
logger.info(f"Expected impact: {action.expected_delta}")
# After execution
logger.info(f"Files modified: {result.files_modified}")
logger.info(f"Tests passed: {result.tests_passed}")
logger.info(f"New coordinates: {result.new_coordinates}")

Full audit trail in .guardian_checkpoints/.
- AI-Powered Docstring Generation: Use LLM to generate meaningful docstrings
- Intelligent Refactoring: Detect code smells and refactor automatically
- Type Hint Inference: Add missing type hints using static analysis
- Test Case Generation: Generate real tests (not just stubs) using LLM
- Proactive Intervention: Heal before degradation occurs
- Trajectory Forecasting: Predict which modules will need attention
- Preventive Actions: Add tests to modules likely to break
- Healing Pattern Library: Learn from healing other codebases
- Best Practice Detection: Identify patterns from healthy codebases
- Community Sharing: Share healing strategies across Guardian deployments
- Meta-Healing: Guardian improves its own healing algorithms
- Strategy Optimization: Learn which strategies work best
- Custom Strategy Synthesis: Generate new healing strategies automatically
- Multi-Objective Optimization: Balance multiple LJPW dimensions simultaneously
- Reinforcement Learning: Train healing agent with rewards for improvements
- Causal Inference: Understand causal relationships between code changes and LJPW impact
- Distributed Healing: Coordinate healing across microservices
- Human-in-the-Loop: Hybrid system where AI proposes, human approves, AI learns
# Analysis
guardian self-analyze # Full health analysis
guardian self-analyze --format json # JSON output
guardian self-analyze --save report.html # Save HTML report
# Healing
guardian heal-self # Apply healing (auto-safe only)
guardian heal-self --dry-run # Preview without changes
guardian heal-self --max-actions 5 # Limit to 5 actions
guardian heal-self --strategy black # Apply specific strategy
guardian heal-self --all # Apply all strategies (including risky)
# Reporting
guardian health-report # Generate comprehensive report
guardian health-diff --before HEAD~1 # Compare health before/after commits
guardian health-history --days 30    # Show health trend over time

from guardian.meta.self_analyzer import SelfAnalyzer
from guardian.meta.self_healer import SelfHealer
# Analysis
analyzer = SelfAnalyzer(project_root=".")
report = analyzer.analyze()
print(f"Love: {report.coordinates.love}")
print(f"Wisdom: {report.coordinates.wisdom}")
print(f"Threat: {report.threat_level}")
# Healing
healer = SelfHealer(project_root=".", dry_run=False)
summary = healer.heal(max_actions=3)
print(f"Actions applied: {summary.actions_applied}")
print(f"Success: {summary.success}")
print(f"Improvements: {summary.improvements}")

| Coordinate | Value | Interpretation | Action Needed |
|---|---|---|---|
| Love | < 0.3 | Critical: Very low test coverage | Add tests immediately |
| | 0.3-0.5 | Warning: Low coverage | Increase test coverage |
| | 0.5-0.6 | Fair: Moderate coverage | Maintain and grow |
| | > 0.6 | Good: Strong test coverage | Maintain current level |
| Justice | < 0.4 | Critical: Very uneven tests | Balance coverage |
| | 0.4-0.5 | Warning: Uneven distribution | Add tests to weak areas |
| | 0.5-0.7 | Fair: Reasonably balanced | Minor balancing needed |
| | > 0.7 | Good: Well-balanced tests | Maintain balance |
| Power | < 0.5 | Critical: Many failing tests | Fix tests immediately |
| | 0.5-0.7 | Warning: Some tests failing | Address failures |
| | 0.7-0.9 | Fair: Most tests passing | Fix remaining issues |
| | > 0.9 | Good: All tests passing | Maintain quality |
| Wisdom | < 0.3 | Critical: No documentation | Add docs immediately |
| | 0.3-0.5 | Warning: Poor documentation | Improve documentation |
| | 0.5-0.6 | Fair: Basic documentation | Enhance docs |
| | > 0.6 | Good: Well-documented | Maintain and update |
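The interpretation table can be applied mechanically. A sketch of a threshold lookup (band edges copied from the table; the helper itself and the lowercase labels are illustrative):

```python
# Threshold bands copied from the interpretation table above.
BANDS = {
    "love":    [(0.3, "critical"), (0.5, "warning"), (0.6, "fair")],
    "justice": [(0.4, "critical"), (0.5, "warning"), (0.7, "fair")],
    "power":   [(0.5, "critical"), (0.7, "warning"), (0.9, "fair")],
    "wisdom":  [(0.3, "critical"), (0.5, "warning"), (0.6, "fair")],
}

def interpret(dimension, value):
    """Return the first band whose upper edge the value falls under, else 'good'."""
    for upper, label in BANDS[dimension]:
        if value < upper:
            return label
    return "good"

print(interpret("wisdom", 0.394))  # → warning
print(interpret("power", 0.842))   # → fair
```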
High Power, Low Wisdom (Reckless Execution):
Power = 0.85, Wisdom = 0.35
- All tests pass, but no documentation
- Code works but is unmaintainable
- Risk: Future developers can't understand system
- Action: Add documentation and comments
High Love, Low Justice (Unbalanced Care):
Love = 0.70, Justice = 0.45
- High overall coverage, but concentrated in few modules
- Some modules have 90% coverage, others have 0%
- Risk: Weak modules will cause failures
- Action: Balance test coverage across all modules
High Everything Except One (Single Weakness):
Love = 0.65, Justice = 0.70, Power = 0.85, Wisdom = 0.35
- System is generally healthy but has one critical weakness
- Risk: Single weakness can undermine entire system
- Action: Focus healing on the weak dimension
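Detecting the "single weakness" pattern reduces to finding the dimension with the largest deficit against its healthy threshold. A sketch, with the "good" thresholds taken from the interpretation guidance in this document:

```python
def weakest_dimension(coords):
    """Return the LJPW dimension furthest below its healthy threshold."""
    # "Good" thresholds from the interpretation guidance above
    thresholds = {"love": 0.6, "justice": 0.7, "power": 0.9, "wisdom": 0.6}
    deficits = {dim: thresholds[dim] - coords[dim] for dim in thresholds}
    return max(deficits, key=deficits.get)

coords = {"love": 0.65, "justice": 0.70, "power": 0.85, "wisdom": 0.35}
print(weakest_dimension(coords))  # → wisdom
```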
Symptom: guardian heal-self times out during test verification
Cause: Test suite is too large (>1000 tests) or slow tests
Solution:
# Apply formatting manually (fast)
python -m black guardian/ --line-length 120
# Apply test stubs without verification
guardian heal-self --strategy test-stubs --skip-verification
# Or increase timeout
guardian heal-self --timeout 600

Symptom: "Failed to calculate coverage" error
Cause: pytest-cov not installed or configured incorrectly
Solution:
pip install pytest-cov
pytest --cov=guardian --cov-report=json

Symptom: "Failed to write commit object" during healing
Cause: Git signing service unavailable or misconfigured
Solution:
# Disable signing temporarily
git config commit.gpgsign false
# Or retry after brief wait
sleep 2 && guardian heal-self

Symptom: Coordinates remain unchanged after healing
Cause: Actions applied but metrics not recalculated
Solution:
# Run analysis before and after to see change
guardian self-analyze > before.txt
guardian heal-self
guardian self-analyze > after.txt
diff before.txt after.txt

Guardian's self-healing system represents a paradigm shift in software maintenance - from reactive fixing to proactive self-correction. By treating code health as a dynamic system modeled through the LJPW framework, Guardian can:
- Understand its own health through semantic coordinates
- Predict future degradation through trajectory simulation
- Intervene with targeted healing actions
- Verify improvements through comprehensive testing
- Learn from each healing cycle to improve future interventions
This is just the beginning. As Guardian's self-healing capabilities evolve, we envision a future where codebases are self-maintaining ecosystems that automatically adapt, improve, and evolve - guided by the timeless principles of Love, Justice, Power, and Wisdom.
For questions, contributions, or feedback:
- GitHub: https://github.com/BruinGrowly/Guardian-Cybersecurity-Engine
- Issues: https://github.com/BruinGrowly/Guardian-Cybersecurity-Engine/issues
- Docs: https://guardian-docs.example.com
Version: 1.0.0 Last Updated: 2025-11-12 Authors: Guardian Development Team