Improving comparison table - Beeceptor #17
ankitjaininfo wants to merge 1 commit into getmockd:main
Conversation
Hey @ankitjaininfo, thanks for putting this together. The reference links saved me a bunch of time digging.

One thing I want to get out of the way upfront: I noticed on LinkedIn that you're on the Beeceptor team. That's totally fine, and honestly you'll know the product better than anyone reviewing this PR. I just want to flag it openly so we can work through it transparently. I'll try to be just as fair to Beeceptor here as I am to the other tools in the table. If anything below is wrong and you can point me at docs that prove it, I'll happily update or split rows so the comparison is accurate for both sides.

I went through each row against Beeceptor's docs over the weekend. Most of what you've put holds up, but there are a few I'd like to talk through, and in a couple of places I think the honest fix is to split a row in two so neither product gets unfairly lumped in with the other.

**Single binary, no runtime (✅ Cloud)**

This row is really about "can I drop a binary on a box with no JVM/Node/Python and have it run." Beeceptor is hosted SaaS, with on-prem available via Enterprise sales per the pricing page, which is a different deployment model. Not worse, just different. A ✅ here, even labeled "Cloud," ends up mixing two categories. I'd either flip this to ❌ SaaS, or (the cleaner option in my view) add a separate Self-hosted / offline row so Beeceptor's hosted model can be represented properly somewhere else without this row losing its meaning.

**HTTP + gRPC + GraphQL + WS (✅)**

Your own protocols page lists REST, GraphQL (SDL), gRPC (proto), SOAP (WSDL), and mTLS. WebSocket isn't on that page. I poked around and found marketing copy elsewhere mentioning websockets, but nothing showing how you'd actually configure a WS mock the way the other protocols are documented. If there's a dedicated WebSocket mock guide I missed, link it and I'll keep the ✅. Otherwise I'd mark it unverified.

**MQTT + SSE + SOAP + OAuth (❌)**

You actually under-claimed this one.
Beeceptor does support SOAP via WSDL. Since SOAP is bundled into this row, I'd bump the row up accordingly.

**Stateful CRUD (✅)**

Your multi-step tutorial makes the case well. There's a real stateful story here with the data-store, lists, step counters, and CRUD routes. Two small nuances: the CRUD routes are capped at 10 objects on the free tier, and the model is template-driven (you script reads/writes explicitly) rather than json-server's auto-REST over a persistent store. Both are valid, but they're not quite the same thing. I'd keep the ✅ with a footnote, or alternatively split into "Stateful CRUD" and "Multi-step stateful flows" so each tool lands a clean ✅ on the row that actually matches its design. Open to either.

**Import OpenAPI / Postman / HAR (✅)**

OpenAPI and Postman both check out. HAR I couldn't find, only HAR export from request history, no import. If HAR import exists somewhere, point me at it and the ✅ stays. Otherwise I'd flip it to ❌.

**Chaos engineering (✅)**

This is the one I most want to talk through, because I don't think the current row gives either product a fair shake. Your chaos engineering article describes deterministic, rule-based fault injection: manual latency rules, static error codes (500/503/504/429/409), and malformed JSON responses. That's a real and useful feature, no question. What the row is currently pointing at on the mockd side is a different kind of thing though: probabilistic fault injection through a chaos profile. The fairest fix I can think of is to split this into Fault injection (latency, error codes, malformed responses), where Beeceptor lands a clean ✅, and Chaos engineering (probabilistic faults, profiles, circuit breakers, stateful fault tracking). That way nobody's stretching to fit a single ✅ box. Tell me if that feels right.

**MCP server, AI-native (✅)**

Confirmed via the agentic mode docs.

**Cloud tunnel sharing (✅)**

Confirmed, and good to see it's your own tunnel implementation rather than an ngrok wrapper. No changes.

**Built-in web dashboard (✅)**

Confirmed. No changes.
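To make the proposed split concrete, here is a minimal sketch of the distinction between the two rows. This is illustrative Python, not the actual API of Beeceptor or mockd; the function names, rule shapes, and rates are all invented for the example.

```python
import random

def deterministic_fault(path, rules):
    """Rule-based fault injection: the same request always gets the same fault."""
    # `rules` maps a path to a fixed (status, delay_ms) pair, e.g. set via a UI rule.
    return rules.get(path)  # e.g. (503, 2000), or None for a normal response

def probabilistic_fault(error_rate, faults, rng=random.random):
    """Chaos-style injection: each request independently faults with some probability."""
    if rng() < error_rate:
        return random.choice(faults)  # fault chosen per request, not per rule
    return None  # pass through untouched

# Deterministic: /orders is always a slow 503, /health is always clean.
rules = {"/orders": (503, 2000)}
assert deterministic_fault("/orders", rules) == (503, 2000)
assert deterministic_fault("/health", rules) is None

# Probabilistic: roughly 10% of calls fault, but any individual call may or may not.
faults = [(500, 0), (429, 0), (504, 5000)]
hits = sum(probabilistic_fault(0.1, faults) is not None for _ in range(10_000))
assert 800 < hits < 1200  # hovers around 1000, never exact by design
```

The first pattern is what the Beeceptor article documents; the second is what the mockd cell was pointing at, which is why a single shared ✅ box blurs them.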
So to summarise the asks: for WebSocket and HAR import, if there are docs proving them, send the links and they stay ✅. I'd like to add a Self-hosted / offline row, fix SOAP, footnote the MCP plan requirement, and split the chaos row into Fault injection vs Chaos engineering so both tools score honestly. If the row splits sound fair, I'm happy to push the edits myself so you don't have to redo the diff. Thanks again for the contribution, genuinely appreciate someone from the Beeceptor side engaging directly rather than us trying to characterise the product from the outside.
@zach-snell: thanks for your quick response. I appreciate you taking the time to review Beeceptor’s capabilities in such detail. I’m the founder of Beeceptor. Your analysis is spot on, and it’s clear you also have deep expertise. I’ve also updated the PR description for better transparency. Here are my suggestions and answers based on Beeceptor’s current capabilities:
I hope these clarify the capabilities. Beeceptor is in active development and we are pushing boundaries on multiple fronts. Feel free to update the PR with a new commit and later merge.
Just checking in to see whether you've had time to review the clarifications and suggestions above, and whether they look good.
Replaces the single bundled comparison table with a compact "at a glance" table (5 differentiators) plus an expandable full matrix split into Deployment, Protocol support, Capabilities, Import/export, and Free tier sections.

Changes driven by feedback and verification from #17:

- Per-protocol rows instead of bundling (HTTP/gRPC/GraphQL/WS was hiding WireMock's extension story and MockServer's gRPC gap)
- Fault injection vs chaos profiles split so tools can't borrow credit for primitives they ship vs those they don't
- Import and export broken out; HAR export given its own row
- WireMock OSS vs WireMock Cloud gating made explicit (OpenAPI import, MCP, chaos modes, dashboard are all Cloud-only)
- Prism corrected from "Node" to binary (standalone releases exist)
- Beeceptor added as SaaS entry with tier-appropriate footnotes
- Mockoon dashboard corrected to "Cloud-only web UI" (desktop is Electron)
- MockServer gRPC/GraphQL downgraded to HTTP-only (not supported)
- json-server flagged REST-only and feature-frozen (v1 removed --delay)

Sources and per-cell verification notes provided in PR description.
Thanks for the nudge @ankitjaininfo, a few days got away from me. Where I landed: rather than merging as a straight row-add, I've opened #19 as a restructure. Short "at a glance" table up top, then a full matrix in a collapsed section. Most of what you asked for is baked in directly: per-protocol rows, chaos split, stateful split, HAR marked ❌ import with HAR export getting its own row, self-hosted row, MCP footnote noting cloud + Team+. Two cells for Beeceptor are marked unverified.
On the Docker on-prem question: no public image on Docker Hub, no install guide I can find, and your pricing page only lists on-prem under Enterprise. The current mark stands for now. Once the new PR lands I'll close this one. Thanks for the engagement throughout, it made the restructure a lot easier than starting from scratch.
@zach-snell - At present, Beeceptor supports neither of the following.
This PR can be closed in favor of #19. |
* docs: restructure README comparison table

  Replaces the single bundled comparison table with a compact "at a glance" table (5 differentiators) plus an expandable full matrix split into Deployment, Protocol support, Capabilities, Import/export, and Free tier sections.

  Changes driven by feedback and verification from #17:

  - Per-protocol rows instead of bundling (HTTP/gRPC/GraphQL/WS was hiding WireMock's extension story and MockServer's gRPC gap)
  - Fault injection vs chaos profiles split so tools can't borrow credit for primitives they ship vs those they don't
  - Import and export broken out; HAR export given its own row
  - WireMock OSS vs WireMock Cloud gating made explicit (OpenAPI import, MCP, chaos modes, dashboard are all Cloud-only)
  - Prism corrected from "Node" to binary (standalone releases exist)
  - Beeceptor added as SaaS entry with tier-appropriate footnotes
  - Mockoon dashboard corrected to "Cloud-only web UI" (desktop is Electron)
  - MockServer gRPC/GraphQL downgraded to HTTP-only (not supported)
  - json-server flagged REST-only and feature-frozen (v1 removed --delay)

  Sources and per-cell verification notes provided in PR description.

* docs: resolve Beeceptor cells per maintainer confirmations

  Updates four cells based on @ankitjaininfo's confirmations on #19.

  Confirmed and updated:

  - WebSocket: unverified to no (mocking not supported, only HTTP proxy)
  - Postman import: unverified to no (not available)
  - Bandwidth throttling: no to roadmap (per maintainer, 2026 roadmap item)

  Reviewed and held:

  - OAuth flows: stays no. The hosted oauth-mock template is a pre-built HTTP mock template, not native OAuth flow infrastructure. The token endpoint returns faker placeholders, with no JWT signing or refresh primitives. Any HTTP mocker can host the same kind of template; this row is reserved for tools that ship OAuth as a first-class protocol primitive.
  - Chaos profiles: stays no. Weighted responses with custom error rates are credited under "Fault injection", where Beeceptor already scores yes. The Chaos profiles row is for named pre-built scenarios (slow-api, mobile-3g, dns-flaky, satellite, etc.) shipped with the tool, not for orchestrating profiles from primitives.

  Legend updated: removes "unverified" key (no cells use it now), adds "roadmap" key.
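The "shipped as data, not assembled from primitives" distinction above can be sketched in a few lines. The profile names come from the commit message; the field layout and loader are invented for illustration and are not any tool's real configuration format.

```python
# A "Chaos profiles" row credits tools that ship named, pre-built scenarios.
# Each profile bundles tuned primitives (latency range, error rate) as data.
CHAOS_PROFILES = {
    "slow-api":  {"latency_ms": (800, 3000), "error_rate": 0.02},
    "mobile-3g": {"latency_ms": (300, 1200), "error_rate": 0.05},
    "dns-flaky": {"latency_ms": (0, 100),    "error_rate": 0.20},
    "satellite": {"latency_ms": (600, 1400), "error_rate": 0.01},
}

def load_profile(name):
    """Selecting a shipped scenario by name is the capability this row scores."""
    try:
        return CHAOS_PROFILES[name]
    except KeyError:
        raise ValueError(f"unknown chaos profile: {name!r}") from None

profile = load_profile("mobile-3g")
assert profile["error_rate"] == 0.05
```

A tool that only exposes the raw latency and error-rate knobs can reproduce any one of these by hand, but that is the "Fault injection" row; the profiles row is about the named bundles existing out of the box.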
Closing in favor of #19 (now merged: af27abe). Thanks again @ankitjaininfo for the contribution and the back-and-forth, both made the restructured comparison much better than what was there before. |
Description
Updating the comparison table with Beeceptor's capabilities.
Beeceptor is an API mocking and virtualization tool. It's available as SaaS and on-prem.
This update is from Beeceptor's core technical team.
Type of Change
References