Product Name: Shellsafe
Domain: shellsafe.ai
Author: Maxime Mansiet
Date: 2026-01-31
Status: Ready for MVP Development
Shellsafe is a viral, trust-first marketplace for AI agent resources (skills, workflows, configs, MCPs, cron jobs) that combines:
- Discovery — Browse and search AI agent resources across platforms
- Security Scanning — Automated detection of prompt injection, data exfiltration, vulnerabilities
- Cryptographic Trust — Verified authors via DIDs, tamper-proof integrity via Verifiable Credentials
- Social Layer — Community ratings, best practices sharing, trending resources
- MCP/API Access — AI agents can discover and verify resources programmatically
- Shell = Command line shell (the security risk of AI agents with root access)
- Shell = Animal shell (crustacean theme from OpenClaw/Moltbot ecosystem)
- Shell = Protective shell (safety metaphor)
- Safe = Security, trust, protection
- Platform-agnostic — not tied to any specific AI agent framework
- "The only place to find AI agent skills you can actually trust."
- "AI resources you can verify — not just trust."
- "VirusTotal meets Product Hunt for AI agents."
- "The trust layer the AI agent ecosystem is missing."
- "Your AI has shell access. Make sure it's Shellsafe."
- OpenClaw (formerly Clawdbot/Moltbot) hit 100k+ GitHub stars in 72 hours
- Moltbook launched as "Reddit for AI agents" — 150k+ agents registered in 3 days
- People are running AI agents 24/7 on Mac Minis and $5 VPS instances
- AI agents have shell access, can read/write files, execute scripts, control browsers
- 26% of agent skills contain vulnerabilities (Cisco research)
- 1,800+ exposed instances leaking API keys and credentials
- OpenClaw's own docs admit: "There is no 'perfectly secure' setup"
- ClawdHub (OpenClaw's skill directory) explicitly warns: "Skills are not security-audited"
- SkillsMP has 71,000+ skills with zero security scanning
"The missing middle — trustable autonomy — is the most valuable unsolved problem in AI." — DigitalOcean
Users choose between:
- Helpful but risky — Run untrusted skills with full system access
- Safe but limited — Don't use the ecosystem at all
Nobody has built the trust layer.
- Security scanning + verified authors + trust scores
- Pre-seeded with indexed skills from GitHub, ClawdHub, SkillsMP
- Web UI + MCP server for AI agent access
- Workflows, full agent configs, cron jobs, MCP servers
- Become the "npm + VirusTotal" for AI automation
- API access for platforms to integrate verification
- "Verified by [Product]" becomes the industry badge
- Other platforms integrate the verification API
- The trust layer for Claude, Gemini, OpenAI, and all AI tools
| Product | What It Does | Gap |
|---|---|---|
| SkillsMP | 71,000+ skills for Claude Code, Codex, ChatGPT | No security scanning |
| OpenClaw ClawdHub | 700+ community skills | Explicitly "not security-audited" |
| Rebuff, Antijection | Prompt injection detection APIs | Separate from discovery |
| ServiceNow, Oracle | Enterprise AI agent marketplaces | Enterprise-only, closed |
- OpenClaw ecosystem is exploding right now (Jan 2026)
- Security concerns are making headlines daily
- No trusted marketplace exists
- MCP is becoming the standard protocol (adopted by OpenAI, Google, Anthropic)
- The window to become the trust standard is now
Every resource is scanned before listing:
- Prompt injection patterns
- Data exfiltration attempts
- Excessive permission requests
- Known vulnerability signatures
- Behavioral analysis (sandboxed execution)
Leveraging Self-Sovereign Identity (SSI) concepts:
- Verified Authors — DID-linked identity, not just "we checked"
- Tamper-Proof Integrity — Verifiable Credentials prove content hasn't changed
- Verifiable Scan Results — Security reports are VCs, independently verifiable
- Portable Trust — Credentials work even if the platform dies
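The "Verifiable Scan Results" idea above can be illustrated with a minimal sketch: sign each scan report with the platform's key so anyone can verify it offline, even if the platform disappears. This is not Shellsafe's actual implementation or a W3C VC data model — the field names and the `did:web` identifier are illustrative assumptions, using only Node's built-in Ed25519 support.

```typescript
// Sketch: issuing a scan report as a signed, independently verifiable record.
// Field names and the did:web issuer are illustrative, not a spec.
import { generateKeyPairSync, sign, verify } from "node:crypto";

interface ScanCredential {
  issuer: string;           // e.g. a did:web identifier for the platform
  skillContentHash: string; // SHA-256 of the scanned skill content
  trustScore: number;
  issuedAt: string;
  signature?: string;       // base64 Ed25519 signature over the payload
}

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

function issueCredential(cred: Omit<ScanCredential, "signature">): ScanCredential {
  const payload = Buffer.from(JSON.stringify(cred));
  const signature = sign(null, payload, privateKey).toString("base64");
  return { ...cred, signature };
}

function verifyCredential(cred: ScanCredential): boolean {
  const { signature, ...rest } = cred;
  const payload = Buffer.from(JSON.stringify(rest));
  return verify(null, payload, publicKey, Buffer.from(signature!, "base64"));
}
```

Any tampering with the report (say, bumping the trust score) invalidates the signature, which is what makes the trust portable.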
Like Moltbook but for trust:
- Community ratings and reviews
- "Trending" and "Most Trusted" sections
- Best practices discussions
- Author reputation scores
- Shareable trust badges
Future platforms interact via AI:
- MCP server for AI agents to search/verify resources
- REST API for platform integrations
- Webhooks for real-time verification
- "Verified by [Product]" embeddable badges
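As a sketch of what that programmatic surface could look like, here is one possible request/response contract for a `verify_skill`-style call, written in plain TypeScript with no MCP SDK. The types and the stub handler are assumptions for illustration, not the actual API.

```typescript
// Hypothetical contract for a verify_skill call (types are assumptions).
type SkillStatus = "safe" | "warning" | "dangerous" | "pending";

interface Finding {
  type: string;                                   // e.g. "prompt_injection"
  severity: "critical" | "high" | "medium" | "low" | "info";
  description: string;
}

interface VerifyResult {
  sourceUrl: string;
  trustScore: number; // 0-100
  status: SkillStatus;
  findings: Finding[];
}

// Stub handler: a real implementation would fetch the skill content,
// run the scan pipeline, and return any cached report.
async function verifySkill(sourceUrl: string): Promise<VerifyResult> {
  const findings: Finding[] = []; // populated by the scanner in practice
  const trustScore = 100;         // placeholder: no findings, no deductions
  const status: SkillStatus =
    trustScore >= 80 ? "safe" : trustScore >= 50 ? "warning" : "dangerous";
  return { sourceUrl, trustScore, status, findings };
}
```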
- Running OpenClaw, Claude Code, Codex, custom agents
- Want to use community skills but worried about security
- Need quick way to verify before installing
- Creating skills, workflows, configs
- Want credibility and visibility
- Willing to verify identity for "Verified Author" badge
- Building AI agent products
- Need to verify third-party resources
- Want API access to trust layer
User sees a skill they want to install → Clicks "Verify" → Instant security report + trust score + author verification → "Holy shit, I can actually see if this is safe"
- Trust badges — Authors embed "Verified Safe" badges in READMEs
- Scan reports — Shareable URLs for security analysis
- Leaderboards — "Most Trusted Authors", "Safest Skills"
- Twitter-ready — One-click share scan results
- More skills indexed → More users → More authors verify → More skills
- Trust data compounds (scan history, author reputation)
- Becomes the default place to check before installing
- Pre-seed with 10,000+ indexed skills (GitHub, ClawdHub, SkillsMP)
- Scan everything — Generate trust scores for all existing skills
- Launch on Twitter/HN — "I scanned 10,000 AI agent skills. Here's what I found."
- Controversial hook — Name-and-shame dangerous skills (with responsible disclosure)
| Feature | Priority | Description |
|---|---|---|
| Skill Indexer | P0 | Scrape and index skills from GitHub, ClawdHub, SkillsMP |
| Security Scanner | P0 | Prompt injection, data exfil, permission analysis |
| Trust Score | P0 | Visual score (0-100) + Safe/Warning/Dangerous badge |
| Web UI | P0 | Browse, search, filter, view scan reports |
| Scan Report Page | P0 | Detailed breakdown, shareable URL |
| Author Profiles | P1 | Link skills to authors, basic reputation |
| MCP Server | P1 | AI agents can search and verify via MCP |
| API Access | P1 | REST API for programmatic verification |
| Verified Authors | P2 | Optional DID-based verification for trust badge |
| VC Issuance | P2 | Issue Verifiable Credentials for scan results |
- Workflow/config support (skills only first)
- Full social features (comments, discussions)
- Team/organization features
- Paid tiers (launch free, monetize later)
| Layer | Technology | Rationale |
|---|---|---|
| Frontend | Next.js 15 (App Router) | Fast, SEO-friendly, Maxime's expertise |
| Styling | Tailwind CSS + shadcn/ui | Rapid UI development |
| Backend | Next.js API Routes + tRPC | Type-safe, co-located |
| Database | PostgreSQL + Prisma | Reliable, scalable |
| Search | Meilisearch or Typesense | Fast full-text search |
| Queue | BullMQ + Redis | Background scanning jobs |
| Scanning | Custom + Rebuff/Antijection APIs | Multi-layer detection |
| MCP Server | Node.js MCP SDK | Native MCP protocol support |
| Auth | NextAuth or Clerk | Quick setup, social logins |
| Hosting | Vercel (frontend) + Railway/Hetzner (backend) | Cost-effective, scalable |
| SSI (Later) | did:web + custom VC issuance | Lightweight SSI, no blockchain needed |
Skill
├── id (uuid)
├── name
├── description
├── source_url (GitHub, etc.)
├── source_platform (openclaw, claude-code, codex, etc.)
├── content_hash (SHA-256 for integrity)
├── author_id (FK)
├── trust_score (0-100)
├── status (safe, warning, dangerous, pending)
├── last_scanned_at
├── created_at
└── updated_at
Author
├── id (uuid)
├── name
├── github_username
├── did (optional, for verified authors)
├── verified (boolean)
├── reputation_score
├── skills_count
└── created_at
ScanReport
├── id (uuid)
├── skill_id (FK)
├── scanner_version
├── findings (JSON array of issues)
├── trust_score
├── status
├── scanned_at
├── vc_proof (optional, Verifiable Credential)
└── created_at
Finding
├── type (prompt_injection, data_exfil, excessive_permission, etc.)
├── severity (critical, high, medium, low, info)
├── description
├── line_number (optional)
├── evidence (code snippet)
└── recommendation
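The `content_hash` field on Skill can be computed with Node's built-in crypto — a minimal sketch: the SHA-256 of the raw skill file. If the hash stored at scan time no longer matches the current source, the content changed after it was scanned and the scan report no longer applies.

```typescript
// content_hash for tamper detection: SHA-256 of the raw skill content.
import { createHash } from "node:crypto";

function contentHash(skillSource: string): string {
  return createHash("sha256").update(skillSource, "utf8").digest("hex");
}
```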
Layer 1: Static Analysis
├── Regex patterns for known injection techniques
├── AST parsing for suspicious code patterns
├── Permission analysis (what does it access?)
└── Dependency scanning (known vulnerable packages)
Layer 2: LLM-Based Analysis
├── Send skill content to LLM for semantic analysis
├── Detect obfuscated or novel injection techniques
├── Context-aware scoring (code comments vs instructions)
└── Use Rebuff or custom fine-tuned model
Layer 3: Behavioral Analysis (Future)
├── Sandboxed execution
├── Network call monitoring
├── File system access tracking
└── Actual behavior vs declared permissions
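Layer 1's regex pass can be sketched as follows. The three patterns are examples of common prompt-injection phrasings, not Shellsafe's actual ruleset — a real detector would carry a much larger, continuously tuned pattern set and pair it with the AST and permission analysis listed above.

```typescript
// Layer 1 sketch: regex matching for prompt-injection phrasings.
// Patterns are illustrative examples, not a production ruleset.
interface StaticFinding {
  type: "prompt_injection";
  severity: "high";
  evidence: string; // the matched snippet
}

const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /disregard (the )?system prompt/i,
  /send .* to https?:\/\//i,
];

function staticScan(content: string): StaticFinding[] {
  const findings: StaticFinding[] = [];
  for (const pattern of INJECTION_PATTERNS) {
    const match = content.match(pattern);
    if (match) {
      findings.push({ type: "prompt_injection", severity: "high", evidence: match[0] });
    }
  }
  return findings;
}
```

Static rules like these are cheap and fast but miss obfuscated attacks, which is exactly what Layer 2's LLM-based pass is for.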
Base Score: 100
Deductions:
- Critical finding: -40
- High finding: -25
- Medium finding: -15
- Low finding: -5
- No author info: -10
- Recently created (< 7 days): -5
- No community ratings: -5
Bonuses:
- Verified author: +10
- 100+ installs with no reports: +5
- Active maintenance (updated < 30 days): +5
Final Score: Clamped 0-100
Status:
- 80-100: Safe (green)
- 50-79: Warning (yellow)
- 0-49: Dangerous (red)
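The scoring rules above translate directly into a function. The input shape is an assumption about how the signals would be represented, but the deductions, bonuses, clamping, and status bands match the table.

```typescript
// Trust score per the rules above: base 100, per-finding deductions,
// metadata deductions, bonuses, clamped to 0-100, then banded into a status.
type Severity = "critical" | "high" | "medium" | "low";

interface ScoreInput {
  findings: Severity[];
  hasAuthorInfo: boolean;
  ageDays: number;            // days since the skill was created
  communityRatings: number;
  verifiedAuthor: boolean;
  installsNoReports: number;  // installs with zero incident reports
  daysSinceUpdate: number;
}

const DEDUCTION: Record<Severity, number> = { critical: 40, high: 25, medium: 15, low: 5 };

function trustScore(input: ScoreInput): { score: number; status: string } {
  let score = 100;
  for (const f of input.findings) score -= DEDUCTION[f];
  if (!input.hasAuthorInfo) score -= 10;
  if (input.ageDays < 7) score -= 5;
  if (input.communityRatings === 0) score -= 5;
  if (input.verifiedAuthor) score += 10;
  if (input.installsNoReports >= 100) score += 5;
  if (input.daysSinceUpdate < 30) score += 5;
  score = Math.max(0, Math.min(100, score));
  const status = score >= 80 ? "safe" : score >= 50 ? "warning" : "dangerous";
  return { score, status };
}
```

For example, one high finding on a well-maintained skill from a verified author (100 − 25 + 10 + 5 + 5) lands at 95, Safe; a critical plus a medium finding on an anonymous week-old skill (100 − 40 − 15 − 10 − 5 − 5) lands at 25, Dangerous.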
AI agents can interact with the marketplace via MCP:
// Search for skills
search_skills(query: string, platform?: string, min_trust_score?: number)
→ Returns: Array of skills with trust scores
// Get skill details
get_skill(skill_id: string)
→ Returns: Full skill info + latest scan report
// Verify skill safety
verify_skill(source_url: string)
→ Returns: Trust score + findings summary
// Get author info
get_author(author_id: string)
→ Returns: Author profile + verification status + reputation
// Report issue
report_skill(skill_id: string, reason: string)
→ Returns: Report confirmation

// Skill catalog
skills://catalog
→ Returns: Full skill index (paginated)
// Trust leaderboard
skills://leaderboard/authors
skills://leaderboard/skills
→ Returns: Top trusted authors/skills
// Recent scans
skills://recent-scans
→ Returns: Latest scan results

- First-mover in trust layer — Compounds over time (scan history, reputation data)
- SSI expertise — Cryptographic verification is non-trivial to implement
- Network effects — Authors want to be verified here, users check here
- Data moat — Every scan improves detection models
- MCP integration — Becomes the default verification API for agents
| Timeframe | Moat Strength |
|---|---|
| Month 1-3 | Low — anyone could build this |
| Month 3-6 | Medium — data accumulation, early adopters |
| Month 6-12 | High — network effects, SSI integration, API adoption |
| Year 2+ | Very High — industry standard, deep integrations |
| Tier | Price | Features |
|---|---|---|
| Free | $0 | Browse, search, basic scan reports, 10 API calls/day |
| Pro | $9/mo | Unlimited API, priority scanning, private skills, webhooks |
| Team | $29/mo | Shared library, team management, verified org badge |
| Enterprise | Custom | SLA, dedicated support, on-prem scanner, custom integrations |
- Verified Author Badge — One-time fee for identity verification
- Promoted Listings — Authors pay for visibility
- API Usage — Pay-per-scan for high-volume users
- White-label — License scanner to other platforms
| Risk | Likelihood | Impact | Mitigation |
|---|---|---|---|
| Chicken-and-egg (no skills, no users) | High | High | Pre-seed with 10k+ indexed skills |
| SkillsMP adds security scanning | Medium | High | SSI layer is moat, move fast |
| OpenClaw ecosystem dies | Low | High | Support multiple platforms from start |
| Security scanning has false positives | High | Medium | Allow appeals, improve over time, be transparent |
| Legal issues (scanning third-party code) | Medium | Medium | Only scan public code, clear ToS |
| Author backlash (skills flagged as dangerous) | Medium | Medium | Responsible disclosure, appeals process |
- Index 10,000+ skills from GitHub, ClawdHub, SkillsMP
- Build security scanner (static + LLM analysis)
- Create web UI (browse, search, scan reports)
- Set up MCP server
- Write API documentation
- Create "State of AI Agent Security" report from scan data
- Post on Twitter with controversial findings
- Submit to Hacker News
- Reach out to OpenClaw community (Discord, GitHub)
- Contact tech journalists covering AI agent security
- Create shareable badges for authors
- Monitor feedback, iterate fast
- Add verified author feature
- Implement VC issuance for scan reports
- Expand to workflows and configs
- Build partnerships with platforms
Final choice — domain secured
| Asset | Status | URL/Handle |
|---|---|---|
| Domain | ✅ Secured | shellsafe.ai ($74/year on Porkbun) |
| Twitter/X | ✅ Available | @shellsafe |
| GitHub | Check | github.com/shellsafe |
| Discord | Check | For community |
Triple meaning of "shell":
- Command shell — AI agents have dangerous shell access
- Animal shell — Crustacean theme (OpenClaw lobster, Moltbot molting)
- Protective shell — Safety and security
- "Your AI has shell access. Make sure it's Shellsafe."
- "Trust the shell."
- "Verify before you install."
- "The trust layer for AI agents."
- Logo concept: Shield + shell hybrid, or minimalist shell icon
- Colors: Deep blue (trust) + green (safe) accents
- Mascot potential: Friendly hermit crab with shield (optional, for personality)
- Typography: Clean, modern sans-serif (Inter, Geist, or similar)
- 10,000+ skills indexed
- 1,000+ unique visitors
- 100+ skills manually verified by authors
- 10+ mentions on Twitter/HN
- 50,000+ skills indexed
- 10,000+ monthly active users
- 1,000+ API calls/day
- 100+ verified authors
- Featured in tech press
- 100,000+ resources (skills, workflows, configs)
- 100,000+ monthly active users
- Platform integrations (OpenClaw, Claude Code, etc.)
- $10k+ MRR from Pro/Team tiers
- OpenClaw Official Site
- TechCrunch: OpenClaw's AI assistants building social network
- Cisco: Personal AI Agents Security Nightmare
- DigitalOcean: What is OpenClaw
- GitHub: awesome-openclaw-skills
- OWASP Top 10 for Agentic Applications 2026
- CyberArk: AI agents and identity risks
- Stytch: Handling AI agent permissions
- SkillsMP — 71,000+ skills, no security scanning
- OpenClaw ClawdHub — 700+ skills, not audited
- Maxime's experience at Verana (decentralized trust layer)
- DIDs, Verifiable Credentials, Trust Registries
- Lightweight approach: did:web, custom VC issuance (no blockchain required)
- Finalize product name — Check domain availability
- Set up project structure — Next.js, Prisma, etc.
- Build skill indexer — Start scraping GitHub, ClawdHub
- Implement basic scanner — Static analysis + LLM layer
- Create MVP UI — Browse, search, scan reports
- Launch beta — Share with OpenClaw community
- Iterate based on feedback
This document captures the full brainstorming session from 2026-01-31. Maxime has SSI expertise from working at Verana and wants to build a viral, trust-first marketplace for AI agent resources. The timing is perfect given the OpenClaw explosion and security concerns in the ecosystem.