
Improving comparison table - Beeceptor #17

Closed
ankitjaininfo wants to merge 1 commit into getmockd:main from ankitjaininfo:patch-1

Conversation

ankitjaininfo commented Apr 10, 2026

Description

Updating the comparison table with Beeceptor's capabilities.
Beeceptor is an API mocking and virtualization tool. It's available as SaaS and on-prem.
This update is from Beeceptor's core technical team.

Type of Change

  • Documentation update

References

  1. Multi-protocol support: https://beeceptor.com/docs/protocols-supported/
  2. Upload OpenAPI spec: https://beeceptor.com/openapi-mock-server/
  3. Stateful mocks: https://beeceptor.com/docs/tutorials/building-multi-step-user-journeys-using-api-mocking/
  4. MCP, connect with AI: https://beeceptor.com/docs/mcp-agentic-mode/
  5. Cloud tunnel to localhost: https://beeceptor.com/docs/local-tunneling-by-exposing-service-port/
  6. Web dashboard: real-time web dashboard to intercept requests and create mock behaviors

@ankitjaininfo changed the title from "Improving comparison with Beeceptor" to "Improving comparison table - Beeceptor" on Apr 10, 2026
zach-snell (Contributor) commented Apr 11, 2026

Hey @ankitjaininfo, thanks for putting this together. The reference links saved me a bunch of time digging.

One thing I want to get out of the way upfront: I noticed on LinkedIn that you're on the Beeceptor team. That's totally fine, and honestly you'll know the product better than anyone reviewing this PR. I just want to flag it openly so we can work through it transparently. I'll try to be just as fair to Beeceptor here as I am to the other tools in the table. If anything below is wrong and you can point me at docs that prove it, I'll happily update or split rows so the comparison is accurate for both sides.

I went through each row against Beeceptor's docs over the weekend. Most of what you've put holds up, but there are a few I'd like to talk through, and in a couple of places I think the honest fix is to split a row in two so neither product gets unfairly lumped in with the other.

Single binary, no runtime (✅ Cloud)

This row is really about "can I drop a binary on a box with no JVM/Node/Python and have it run." Beeceptor is hosted SaaS, with on-prem available via Enterprise sales per the pricing page, which is a different deployment model. Not worse, just different. A ✅ here, even labeled "Cloud," ends up mixing two categories. I'd either flip this to ❌ SaaS, or (the cleaner option in my view) add a separate Self-hosted / offline row so Beeceptor's hosted model can be represented properly somewhere else without this row losing its meaning.

HTTP + gRPC + GraphQL + WS (✅)

Your own protocols page lists REST, GraphQL (SDL), gRPC (proto), SOAP (WSDL), and mTLS. WebSocket isn't on that page. I poked around and found marketing copy elsewhere mentioning websockets, but nothing showing how you'd actually configure a WS mock the way the other protocols are documented. If there's a dedicated WebSocket mock guide I missed, link it and I'll keep the ✅. Otherwise I'd mark it ⚠️ partial: REST/GraphQL/gRPC yes, WS unconfirmed.

MQTT + SSE + SOAP + OAuth (❌)

You actually under-claimed this one. Beeceptor does support SOAP via WSDL. Since SOAP is bundled into this row, I'd bump it up to ⚠️ partial with a "SOAP only" note. Happy to split SOAP into its own row if you'd prefer it called out cleanly.

Stateful CRUD (✅)

Your multi-step tutorial makes the case well. There's a real stateful story here with the data-store, lists, step counters, and CRUD routes. Two small nuances: the CRUD routes are capped at 10 objects on the free tier, and the model is template-driven (you script reads/writes explicitly) rather than json-server's auto-REST over a persistent store. Both are valid, but they're not quite the same thing. I'd keep the ✅ with a footnote, or alternatively split into "Stateful CRUD" and "Multi-step stateful flows" so each tool lands a clean ✅ on the row that actually matches its design. Open to either.
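To make the distinction concrete, here's a rough Python sketch of the two models. This is illustrative only, not either tool's actual implementation; the class and method names are made up:

```python
# Two styles of "stateful" mocking, reduced to toy in-memory classes.
# Neither mirrors json-server's or Beeceptor's real internals.

class AutoRestStore:
    """json-server style: generic CRUD auto-derived over a persistent collection."""
    def __init__(self):
        self._items, self._next_id = {}, 1

    def create(self, body):
        item = {"id": self._next_id, **body}
        self._items[self._next_id] = item
        self._next_id += 1
        return item

    def read(self, item_id):
        return self._items.get(item_id)

    def delete(self, item_id):
        return self._items.pop(item_id, None)


class ScriptedFlow:
    """Template-driven style: the mock author scripts each read/write step explicitly."""
    def __init__(self):
        self.state = {"step": 0}

    def order_endpoint(self):
        # Scripted journey: first call returns "pending", later calls
        # advance the same session through "shipped" to "delivered".
        self.state["step"] += 1
        statuses = ["pending", "shipped", "delivered"]
        return {"status": statuses[min(self.state["step"] - 1, 2)]}
```

The first gives you full REST semantics for free but no journey logic; the second gives you journey logic but only the routes you script. That's why a single ✅ box flattens a real difference.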

Import OpenAPI / Postman / HAR (✅)

OpenAPI and Postman both check out. HAR I couldn't find, only HAR export from request history, no import. If HAR import exists somewhere, point me at it and the ✅ stays. Otherwise ⚠️ partial with a footnote.

Chaos engineering (✅)

This is the one I most want to talk through, because I don't think the current row gives either product a fair shake. Your chaos engineering article describes deterministic, rule-based fault injection: manual latency rules, static error codes (500/503/504/429/409), and malformed JSON responses. That's a real and useful feature, no question.

What the row is currently pointing at on the mockd side is a different kind of thing though: probabilistic fault injection through a /chaos admin API, ten named profiles (slow-api, degraded, flaky, offline, timeout, rate-limited, mobile-3g, satellite, dns-flaky, overloaded), stateful circuit breakers with trip/reset state machines, progressive degradation, retry-after tracking, and bandwidth throttling. Both products have value here, they're just doing different things.

The fairest fix I can think of is to split this into Fault injection (latency, error codes, malformed responses) where Beeceptor lands a clean ✅, and Chaos engineering (probabilistic faults, profiles, circuit breakers, stateful fault tracking). That way nobody's stretching to fit a single ✅ box. Tell me if that feels right.
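For reference, here's the kind of split I mean, reduced to a toy Python sketch. The names and shapes are illustrative, not mockd's or Beeceptor's real APIs: deterministic fault injection is a fixed rule, while chaos primitives are probabilistic and stateful:

```python
import random

class ChaosProfile:
    """Probabilistic fault injection: each request may fail with probability
    error_rate. A real profile would bundle latency, error mix, etc."""
    def __init__(self, error_rate, rng=None):
        self.error_rate = error_rate
        self.rng = rng or random.Random()

    def handle(self, upstream):
        # Unlike a deterministic rule ("always 503 on /orders"), the
        # outcome here varies per request.
        if self.rng.random() < self.error_rate:
            return {"status": 503, "body": "injected fault"}
        return upstream()


class CircuitBreaker:
    """Stateful fault tracking: trips open after N consecutive failures,
    so later requests are rejected based on accumulated history."""
    def __init__(self, threshold=3):
        self.threshold, self.failures, self.open = threshold, 0, False

    def record(self, ok):
        self.failures = 0 if ok else self.failures + 1
        if self.failures >= self.threshold:
            self.open = True

    def allow(self):
        return not self.open
```

Both halves are useful; they're just different primitives, which is the whole argument for two rows.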

MCP server, AI-native (✅)

Confirmed via the agentic mode docs. https://mcp.beeceptor.com/mcp is a real MCP endpoint and the tool surface looks substantial. Two things worth flagging: it's cloud-hosted rather than a local MCP server, and it sits behind the Team plan ($25/mo). mockd's MCP server runs in-process locally and ships free with the binary. I'd keep the ✅ but add a footnote ("cloud-hosted, Team plan and above") so readers can weigh the tradeoff without the feature being dismissed.

Cloud tunnel sharing (✅)

Confirmed, and good to see it's your own tunnel implementation rather than an ngrok wrapper. No changes.

Built-in web dashboard (✅)

Confirmed. No changes.


So to summarise the asks: WebSocket and HAR import, if there are docs proving them, send the links and they stay ✅. I'd like to add a Self-hosted / offline row, fix SOAP, footnote the MCP plan requirement, and split the chaos row into Fault injection vs Chaos engineering so both tools score honestly.

If the row splits sound fair, I'm happy to push the edits myself so you don't have to redo the diff. Thanks again for the contribution, genuinely appreciate someone from the Beeceptor side engaging directly rather than us trying to characterise the product from the outside.

ankitjaininfo (Author) commented

@zach-snell: thanks for your quick response. I appreciate you taking the time to review Beeceptor’s capabilities in such detail. I’m the founder of Beeceptor. Your analysis is spot on, and it’s clear you also have deep expertise. I’ve also updated the PR description for better transparency.

Here are my suggestions and answers based on Beeceptor’s current capabilities:

  1. My suggestion is to split this table into two: (a) protocol coverage and (b) capabilities. For protocol coverage, you can include rows like REST APIs, SOAP, gRPC, GraphQL, WebSockets, MQTT, SSE, and OAuth. This allows a deeper comparison across tools. The remaining items can go into the capabilities table.
  2. Beeceptor isn’t available as a single binary. However, both SaaS and Docker-based on-prem deployments are available, and both are easy to set up. You can have two rows: “Single binary, no runtime” and “Self-hosted / offline.” I’d suggest adding “❌ SaaS, Docker” to the original row for clarity. In the second row, you can add a checkmark with a mention of Docker.
  3. Stateful CRUD (✅): Splitting this into “Stateful CRUD” and “Multi-step stateful flows” is a good direction, based on how tools in this space are evolving. The first covers pure CRUD operations in a REST API context, while the second enables modeling user journeys. Beeceptor supports both. “Multi-step stateful flows” are handled using inline Handlebars template constructs, with no additional scripting required. (same ref: https://beeceptor.com/docs/tutorials/building-multi-step-user-journeys-using-api-mocking/ )
  4. Import OpenAPI / Postman / HAR (✅): Beeceptor supports importing OpenAPI, WSDL, gRPC, and GraphQL specs directly. HAR import is not currently supported, so you may want to mark this as partial with a footnote.
  5. Export: HAR export is supported. I’d suggest adding a separate “Export” row if that fits your structure. This can highlight the ability to export request logs across these tools. For example, Beeceptor allows exporting past request logs as HAR files.
  6. Chaos engineering: Beeceptor supports weighted responses, which enable probabilistic failures and randomness. When this is combined with the Admin APIs to dynamically change mock server behavior, it gives full control for setting up chaos engineering. (Admin API reference: https://beeceptor.com/docs/api/beeceptor-api/ )
  7. MCP server, AI-native (✅): Sure. Being cloud native, the MCP is also in the cloud.
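As a rough illustration of the weighted-response idea, here is a simplified Python sketch. It is not Beeceptor's implementation, and the config shape is made up for the example:

```python
import random

def pick_weighted_response(responses, rng=None):
    """Select one of several configured responses in proportion to its weight.
    Weighted selection like this is how probabilistic failures can be layered
    onto an otherwise deterministic mock endpoint."""
    rng = rng or random.Random()
    bodies = [r["response"] for r in responses]
    weights = [r["weight"] for r in responses]
    return rng.choices(bodies, weights=weights, k=1)[0]

# Hypothetical endpoint config: ~90% success, ~10% injected 500s.
config = [
    {"weight": 90, "response": {"status": 200, "body": "ok"}},
    {"weight": 10, "response": {"status": 500, "body": "boom"}},
]
```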

I hope these clarify the capabilities. Beeceptor is in active development and we are pushing boundaries on multiple fronts.

Feel free to update the PR with a new commit and later merge.

ankitjaininfo (Author) commented

Just checking in to see whether you've had time to review the clarifications and suggestions above, and whether they look good.

zach-snell added a commit that referenced this pull request Apr 19, 2026
Replaces the single bundled comparison table with a compact "at a glance"
table (5 differentiators) plus an expandable full matrix split into
Deployment, Protocol support, Capabilities, Import/export, and Free tier
sections.

Changes driven by feedback and verification from #17:
- Per-protocol rows instead of bundling (HTTP/gRPC/GraphQL/WS was hiding
  WireMock's extension story and MockServer's gRPC gap)
- Fault injection vs chaos profiles split so tools can't borrow credit
  for primitives they ship vs those they don't
- Import and export broken out; HAR export given its own row
- WireMock OSS vs WireMock Cloud gating made explicit (OpenAPI import,
  MCP, chaos modes, dashboard are all Cloud-only)
- Prism corrected from "Node" to binary (standalone releases exist)
- Beeceptor added as SaaS entry with tier-appropriate footnotes
- Mockoon dashboard corrected to "Cloud-only web UI" (desktop is Electron)
- MockServer gRPC/GraphQL downgraded to HTTP-only (not supported)
- json-server flagged REST-only and feature-frozen (v1 removed --delay)

Sources and per-cell verification notes provided in PR description.
zach-snell (Contributor) commented

Thanks for the nudge @ankitjaininfo, a few days got away from me.

Where I landed: rather than merging as a straight row-add, I've opened #19 as a restructure. Short "at a glance" table up top, then a full matrix in a collapsed <details> with protocols broken out row-by-row, capabilities split (fault injection vs chaos primitives, imports vs exports, etc), and a free-tier row. Every non-mockd cell is backed by a primary-source link in the PR description so readers can verify.

Most of what you asked for is baked in directly: per-protocol rows, chaos split, stateful split, HAR marked ❌ import with HAR export getting its own row, self-hosted row, MCP footnote noting cloud + Team+.

Two cells for Beeceptor are marked ⚠️ unverified pending your answer on these:

  1. WebSocket. Not on your protocols page. Is there a WS mock setup guide I missed?
  2. Postman import. You listed OpenAPI / WSDL / gRPC / GraphQL in your clarifications but not Postman. Earlier Beeceptor material referenced Postman collections. Still supported or pulled?

On the Docker on-prem question: no public image on Docker Hub, no install guide I can find, and your pricing page only lists on-prem under Enterprise. The current mark is ⚠️ Enterprise for Docker on the deployment row. If it's available below Enterprise with a public image, happy to upgrade that.

Once the new PR lands I'll close this one. Thanks for the engagement throughout, made the restructure a lot easier than starting from scratch.

ankitjaininfo (Author) commented

@zach-snell - At present Beeceptor doesn't support either of the following:

  • WebSockets: mocking isn't present (only HTTP proxy / passthrough is available)
  • Postman collection import

ankitjaininfo (Author) commented

This PR can be closed in favor of #19.

zach-snell added a commit that referenced this pull request May 1, 2026
* docs: restructure README comparison table

* docs: resolve Beeceptor cells per maintainer confirmations

Updates four cells based on @ankitjaininfo's confirmations on #19.

Confirmed and updated:
- WebSocket: unverified to no (mocking not supported, only HTTP proxy)
- Postman import: unverified to no (not available)
- Bandwidth throttling: no to roadmap (per maintainer, 2026 roadmap item)

Reviewed and held:
- OAuth flows: stays no. The hosted oauth-mock template is a pre-built
  HTTP mock template, not native OAuth flow infrastructure. Token
  endpoint returns faker placeholders, no JWT signing or refresh
  primitives. Any HTTP mocker can host the same kind of template;
  this row is reserved for tools that ship OAuth as a first-class
  protocol primitive.
- Chaos profiles: stays no. Weighted responses with custom error rates
  are credited under "Fault injection" where Beeceptor already scores
  yes. The Chaos profiles row is for named pre-built scenarios
  (slow-api, mobile-3g, dns-flaky, satellite, etc.) shipped with the
  tool, not for orchestrating profiles from primitives.

Legend updated: removes "unverified" key (no cells use it now), adds
"roadmap" key.
zach-snell (Contributor) commented

Closing in favor of #19 (now merged: af27abe). Thanks again @ankitjaininfo for the contribution and the back-and-forth, both made the restructured comparison much better than what was there before.

@zach-snell zach-snell closed this May 1, 2026