| Version | Supported |
|---|---|
| 1.0.x | ✅ Yes |
We take security seriously! If you discover a security vulnerability in this project, please report it responsibly.
- DO NOT open a public GitHub issue for security vulnerabilities
- Instead, email us at: security@dubsopenhub.com
- Or use GitHub's private vulnerability reporting
Please provide as much of the following as possible:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if you have one)
In return, you can expect:
- Acknowledgment within 48 hours
- Assessment within 1 week
- A fix or mitigation as quickly as possible
- Credit in the release notes (unless you prefer anonymity)
This repository has the following GitHub security features configured:
| Feature | Status | Notes |
|---|---|---|
| ✅ Dependabot Alerts | Enabled | Monitors dependencies for known vulnerabilities |
| ✅ Dependabot Security Updates | Enabled | Auto-creates PRs to fix vulnerable dependencies |
| ✅ Secret Scanning | Enabled | Detects accidentally committed secrets |
| ✅ Secret Scanning Push Protection | Enabled | Blocks pushes containing secrets |
| ✅ Code Scanning (CodeQL) | Available | Static analysis for security bugs |
Since this is a Copilot CLI skill (no runtime code, only markdown instructions), the primary security considerations are:
- No secrets in skill files: SKILL.md and agent.md should never contain API keys, tokens, or credentials
- Safe instructions: skill instructions should never direct the agent to bypass security controls
- Dependency awareness: if dependencies are added in the future, keep them updated
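As a lightweight safeguard for the "no secrets in skill files" rule, a pre-commit check can grep skill files for common credential shapes before they are pushed. The sketch below is illustrative only: the patterns are a small assumed sample, not the actual rules GitHub's secret scanning applies.

```python
import re

# Hypothetical patterns -- a small illustrative sample, not GitHub's
# actual secret scanning rule set.
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),           # GitHub personal access token shape
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # generic api_key assignment
]

def find_secrets(text: str) -> list[str]:
    """Return every substring of `text` that looks like a committed secret."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

print(find_secrets("This skill only contains markdown instructions."))  # []
print(find_secrets("api_key = sk-test-1234567890"))  # ['api_key = sk-test-1234567890']
```

GitHub's push protection (table above) performs a far more thorough version of this check server-side; a local check like this simply catches the obvious cases earlier.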
Since this skill orchestrates hundreds of AI agents and processes user-provided task descriptions, prompt injection is a relevant concern:
- Depth Guard: 3-layer enforcement (prompt-level, contract-level, and config-level) prevents runaway agent spawning
- Input sanitization: task descriptions are compressed through 4 layers of context reduction (128K → 128 tokens), stripping potential injection payloads
- No credential passthrough: user input is used as task descriptions only; it is never interpolated into system-level commands or used to access external services
- Consensus scoring: even if one agent is influenced by injected content, the median-of-3 consensus mechanism and cross-family review limit the impact on final scores
- Shadow scoring: hidden criteria that agents never see provide an independent quality audit, catching outputs that look good but contain errors
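The consensus-scoring mitigation above can be sketched in a few lines. This is a minimal illustration of the median-of-3 idea, assuming a 0-100 score scale; the function name and scale are assumptions, not the skill's actual interface.

```python
from statistics import median

def consensus_score(scores: list[float]) -> float:
    """Median of three independent agent scores. A single compromised
    or prompt-injected scorer cannot move the result past the honest
    scores on either side of it."""
    if len(scores) != 3:
        raise ValueError("consensus expects exactly three scorers")
    return median(scores)

# Three honest agents:
print(consensus_score([72, 75, 74]))   # 74
# One agent injected into awarding a perfect score -- the median holds:
print(consensus_score([72, 100, 74]))  # 74
```

This is why the impact of a single influenced agent is bounded: the attacker would need to sway two of the three scorers (and still pass the hidden shadow-scoring audit) to meaningfully shift the final result.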
This project is licensed under the MIT License.