## Summary

Calls to `PATCH /v1/notes/:noteId` intermittently return HTTP 500 with the body `{"error":"Failed to update note"}`. The failure is not deterministic: the exact same request succeeds when retried 1–2 seconds later, and the issue resolves itself within a minute.
## Endpoint

`PATCH https://api.hackmd.io/v1/notes/<note-id>`
## Request details

| Field | Value |
| --- | --- |
| Method | `PATCH` |
| Content-Type | `application/json` |
| Authorization | `Bearer <token>` (valid; the same token works immediately after) |
| Body | `{ "content": "<markdown string ~400 bytes>" }` |
| Protocol | HTTP/3 |
| TLS | TLSv1.3 |
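For reference, the failing request can be reproduced with a plain `fetch` call. This is a sketch only: `noteId`, `token`, and the content are placeholders, and the helper name is ours, not part of any HackMD client library.

```typescript
// Builds the exact request described in the table above, without sending it.
// All argument values are placeholders supplied by the caller.
function buildPatchRequest(noteId: string, token: string, content: string) {
  return {
    url: `https://api.hackmd.io/v1/notes/${noteId}`,
    init: {
      method: "PATCH",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ content }),
    },
  };
}

// Usage (placeholder note id and token):
// const { url, init } = buildPatchRequest("abc123", "<token>", "# Title\n- [ ] task");
// const res = await fetch(url, init);
```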
## Observed response

```http
HTTP/1.1 500 Internal Server Error
Content-Type: application/json

{"error":"Failed to update note"}
```
## Timing evidence

From Cloudflare Worker logs (the request is routed through a Cloudflare Worker CORS proxy before reaching the HackMD API):

| Metric | Value |
| --- | --- |
| Worker CPU time | ~0 ms (proxy only, no compute) |
| Wall time (total round-trip) | ~6,072 ms |
| Worker outcome | `ok` (worker completed normally) |
| Response status received from HackMD | 500 |
The 6-second wall time indicates the HackMD backend held the connection open before returning the 500 — this is not a timeout on the client side.
## Reproduction pattern

- Frequency: occasional, roughly 1 in 20–30 PATCH calls during normal use
- Recovery: retrying the identical request 1.5–2 seconds later succeeds
- The note content is valid markdown (plain text, checkboxes, headings)
- The note is owned by the authenticated user — no permission issues
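The recovery pattern above is what our client-side mitigation relies on: retry the identical request after a short delay. A minimal sketch of such a wrapper, with the delay and attempt count chosen to match the observed 1.5–2 s recovery window (the function name and signature are illustrative, not HackMD API code):

```typescript
// Retries an async operation when it fails with a retryable error.
// Defaults reflect the observed behavior: one or two retries, 1.5 s apart.
async function retryOnServerError<T>(
  attempt: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  retries = 2,
  delayMs = 1500,
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await attempt();
    } catch (err) {
      // Give up after the last allowed retry, or on non-retryable errors.
      if (i >= retries || !isRetryable(err)) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

A caller would treat an HTTP 500 from the PATCH as retryable and anything else (401, 403, 422) as final.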
## What is NOT the cause

- ❌ Invalid token — the same token succeeds immediately on retry
- ❌ Permission / ownership — same note, same token works after retry
- ❌ Malformed body — same payload succeeds on retry
- ❌ Client-side timeout — the server held the connection for ~6 seconds before responding 500
- ❌ CORS proxy issue — the proxy passes the request and response through transparently; its own outcome is `ok`
- ❌ Rate limiting — there is no burst of requests; this happens on isolated single saves
## Impact

Users see a "Save failed" error and must manually retry. With a client-side retry in place the error is hidden, but the underlying instability means saves are silently slower and occasionally double-charged against any rate limits.
## Request

- Investigate whether the backend experiences transient write failures on the note storage layer
- If a 500 is unavoidable, consider returning a `503 Service Unavailable` with a `Retry-After` header so clients can back off intelligently
- A more descriptive error body (e.g. the underlying cause) would help distinguish storage errors from permission or validation errors
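To illustrate why a `Retry-After` header would help: a client could honor it directly instead of guessing a backoff. Per RFC 9110 the header carries either delta-seconds or an HTTP-date; a sketch of a helper normalizing both to milliseconds (the helper name is ours):

```typescript
// Converts a Retry-After header value to a wait time in milliseconds.
// Returns null when the header is absent or unparseable.
function retryAfterMs(header: string | null, now: number = Date.now()): number | null {
  if (header === null) return null;
  const seconds = Number(header);
  if (Number.isFinite(seconds)) return Math.max(0, seconds * 1000); // delta-seconds form
  const date = Date.parse(header); // HTTP-date form
  return Number.isNaN(date) ? null : Math.max(0, date - now);
}
```

With this in place, the retry wrapper could sleep for `retryAfterMs(res.headers.get("Retry-After")) ?? fallbackDelay` rather than a hard-coded 1.5 s.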
## Environment

- API version: v1
- Client: browser (Chrome 145, macOS)
- Region inferred from Cloudflare colo: `MRS` (Marseille edge; request origin: Egypt)