The CONTRIBUTING.md sets a high bar for submissions — "real technical depth", "not marketing fluff", etc. Curious what methodology was used for the initial 209 entries that all landed in a single commit.
For a curated list, it would add real value to know:
- Which of these tools/resources have you actually used or tested?
- Any personal notes on tradeoffs? (e.g., "mcp-scan is great but slow on large codebases", or "this blog post is better than that one because...")
- How were the descriptions validated? Some read more like summaries of each repo's own README than independent assessments.
Even a small "maintainer picks" or "tested by me" marker on entries would help readers distinguish signal from catalog.