Commit 20f4ed0: Restore SDK credential behavior coverage
Parent: 734cc9b

3 files changed: 997 additions, 0 deletions

MISTAKES.md: 124 additions, 0 deletions
- During the Cloudflare deploy incident, I did not first compare the active Worker version's secret bindings against the previous known-good version. The failing `/api/auth/login` route was caused by the current Worker version missing `WORKOS_API_KEY`, `WORKOS_CLIENT_ID`, and `WORKOS_COOKIE_PASSWORD`; `wrangler versions view` exposed this quickly.
- Ran `bun run format packages/core/sdk/src/oxlint-plugin-executor.test.ts`, but the repo script expands to `oxfmt .` and ignores file arguments, so it formatted the whole checkout instead of only the intended file. Use `./node_modules/.bin/oxfmt <files>` for scoped formatting.
- Initially treated the HTTP payload schema check as good after direct imports passed, but the focused test caught that an aliased `HttpApiEndpoint` import (`HttpApiEndpoint as Endpoint`) was missed. Import aliases need explicit symbol/declaration handling in compiler-backed lint rules.
- Tried to run SDK package-local tests through root `vitest run <package paths>` again; root Vitest only includes root `tests/**/*.test.ts`, so use the owning package script or run Vitest from that package directory.
- Repeated the same root `vitest run <package paths>` mistake while checking MCP OAuth changes; use `bun run --cwd <package> test -- <src test paths>` for package-local Vitest files.
- I treated "OAuth required but no dynamic client registration" as an OAuth-only state in the MCP add-source UI, which trapped users instead of letting them use PAT/header auth or save a deferred source.
- I used raw `JSON.parse` in OpenAPI parsing even though this repo's lint policy requires Effect Schema JSON parsing at boundaries.
- Ran a broad `find . -path '*node_modules*autumn*'` while debugging Autumn package code; it walked too much of the checkout. Use `rg --files | rg 'autumn-js'` or target `node_modules/.bun/<package>` directly.
- Ran root `vitest run apps/cloud/src/api/autumn-customer.test.ts`; root Vitest only includes root tests, so app tests need to run from `apps/cloud` or with that package's Vitest config.
- Ran root `bunx vitest run packages/react/src/api/oauth-popup.test.ts`; root Vitest only includes root `tests/**/*.test.ts`, so React package tests need `bun run --cwd packages/react test <src test path>`.
- I treated `alchemy plan` as a non-mutating verification command. In Alchemy v2 beta.36 it can update/deploy the Cloudflare state store before planning application resources, so do not run it casually against shared profiles.
- I tried to replace the real-port MCP Miniflare e2e harness with Alchemy dev-mode local Worker deployment before proving the local runtime could boot this app. Alchemy v2 beta.36's sidecar stayed in resource creation and hit its internal timeout, so keep that harness on the known-passing path until a smaller reproduction is solved.
- Appended to LEARNINGS.md with shell redirection instead of apply_patch. This violated the repo editing preference; use apply_patch for manual file edits, including notes files.
- During the Alchemy migration, I used a shell/Node file rewrite for apps/cloud/src/mcp-session.ts instead of apply_patch. Manual code edits should use apply_patch; only formatting/bulk tooling should write files directly.
- While updating the cloud usage-tracking tests for Effect v4, I used the old `Effect.zipRight` API. Use `Effect.andThen` for this sequencing shape in this repo.
- I probed marketing with `bunx vite build`, which pulled a temporary latest Vite and reported saving the lockfile. Use the package's own build script or Alchemy dry-run instead of `bunx` probes in this repo.
- During the Alchemy fork work, I renamed the source workspace package from `alchemy` to `@rhyssul/alchemy`, which broke Alchemy's own `workspace:*` references. Keep the checkout package named `alchemy` for local development and use a publish staging copy for scoped npm releases.
- Published `@rhyssul/alchemy@2.0.0-beta.36-rhyssul.0` before replacing Alchemy's `catalog:` dependency ranges, making the tarball unusable for direct installs outside the monorepo. Stage and inspect the package manifest before publishing the next scoped build.
- Published `@rhyssul/alchemy@2.0.0-beta.36-rhyssul.1` with concrete deps but without rewriting Alchemy's internal self-imports from `alchemy/...` to `@rhyssul/alchemy/...`, so the CLI failed when installed from the scoped tarball. Scoped vendored packages need self-import rewrites in the staging artifact.
- Tried to publish a prerelease `@rhyssul/alchemy` version without an npm dist-tag. `npm publish` requires `--tag beta` for prerelease versions, so include it in the staging publish command.
- Prepared a staged `@rhyssul/alchemy` publish artifact but ran `npm publish` from the repo root, which attempted to publish the private workspace and failed. Always set the working directory to the staging folder for vendored package publishes.
- I treated the 0.x `alchemy-run/alchemy` `TanStackStart` example as if it should be manually ported into the Effect v2 fork. That mixed two incompatible API generations. For v2, update/rebase from `alchemy-run/alchemy-effect` and only carry deliberate fork patches; don't import 0.x framework resources by hand.
- Repeated the staged-package directory mistake with `npm pack --dry-run` while preparing the vendored Alchemy package. Treat every npm packaging command as needing an explicit staged-package `workdir`, not just `npm publish`.
- Initially modeled a pure Cloudflare Worker service binding with a `Type` field. In Alchemy, `Type` makes the object look like a provisioned resource and the platform tries provider resolution; binding-only markers should follow the Images/Artifacts `kind` marker pattern.
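A sketch of the marker idea under assumed names; `ServiceBindingMarker`, the `kind` string, and `isServiceBinding` are illustrative, not Alchemy's real API:

```typescript
// Hedged sketch: a binding-only object carries a `kind` discriminant,
// while provisioned resources carry `Type`, so provider resolution
// never picks up the bare binding. Names here are illustrative.
type ServiceBindingMarker = {
  readonly kind: "cloudflare:service-binding"
  readonly service: string
}

const isServiceBinding = (value: unknown): value is ServiceBindingMarker =>
  typeof value === "object" &&
  value !== null &&
  (value as { kind?: unknown }).kind === "cloudflare:service-binding"

const binding: ServiceBindingMarker = {
  kind: "cloudflare:service-binding",
  service: "cloud-worker",
}

console.log(isServiceBinding(binding)) // true
console.log(isServiceBinding({ Type: "worker" })) // false
```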
- Tried to derive the cloud app's global `Env` from `Cloudflare.InferEnv<ReturnType<typeof cloudWorker>>` while that worker binds `McpSessionDO`. Because the DO implementation itself depends on the global `Env`, this creates a type cycle (`McpSessionDO` referenced in its own base expression). Keep the cloud binding env explicit until Alchemy can model self-referential DO envs without an app-local declaration cycle.
- Ran `bun run --cwd apps/cloud test -- <unit test files>` expecting the file filters to apply only to the default Vitest run. The package script runs default Vitest and then node-config Vitest, so the same filters made the node-config half fail with "No test files found." Use direct `node ../../node_modules/vitest/vitest.mjs run <files>` from `apps/cloud` for default-config file filters.
- Ran the Alchemy fork's `Cloudflare/Workers/Worker.test.ts` as a local verification check. That test performs live Cloudflare worker lifecycle operations and also spawned Node against raw TypeScript, which failed under Node 25 strip-only mode on parameter properties. Prefer app dry-runs plus focused fork type/build checks unless the fork test environment is explicitly prepared.
- While adding local static asset support to Alchemy's sidecar, I first let the asset middleware check every GET/HEAD path. The disk service returned 200 for `/`, so SSR routes were incorrectly served as static assets. Local asset middleware should only intercept known static asset paths and forward normal routes to the user worker.
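The corrected interception rule can be sketched like this; the extension list and helper name are illustrative, not the sidecar's actual code:

```typescript
// Hedged sketch: serve from disk only for paths that look like known
// static assets; everything else (including `/`) forwards to the user
// worker for SSR. The extension list is illustrative.
const ASSET_EXTENSIONS = [".js", ".css", ".png", ".svg", ".ico", ".woff2"]

const isStaticAssetPath = (pathname: string): boolean =>
  ASSET_EXTENSIONS.some((ext) => pathname.endsWith(ext))

console.log(isStaticAssetPath("/assets/index-abc123.js")) // true
console.log(isStaticAssetPath("/")) // false: forward to the worker
```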
- I treated the initial `performance.now` prelude fix as verified after a shallow root/assets smoke test. The generated bundle later overwrote `globalThis.performance` with an undefined unenv placeholder, so delayed request traffic still crashed. For Alchemy/workerd fixes, inspect the final `.alchemy/bundles` output and run repeated request probes before calling the runtime fixed.
- While verifying the storage-drizzle fix, I ran the same package-local command twice with a repo-root file path that did not exist from that working directory. When switching `workdir`, adjust file paths or split search and package script commands.
- I again ran the cloud package `bun run test` script with a file/config filter. That script invokes both Vitest configs, so forwarded `--config` args can duplicate and fail. For one cloud test file, run `node ../../node_modules/vitest/vitest.mjs run --config vitest.node.config.ts <file>` from `apps/cloud`.
- While restoring Alchemy MCP e2e coverage, I first adapted Alchemy's typed Durable Object namespace back into the old Cloudflare `DurableObjectNamespace` ambient type with `as unknown` casts. The correct long-term shape is to type `Env.MCP_SESSION` as Alchemy's native `DurableObjectNamespaceResource<McpSessionShape>` and pass the namespace through directly.
- I tried to quiet workerd request-stream stderr by disabling MCP POST body fingerprinting without first proving the root cause. That made the local worker less stable. Keep the passing realistic behavior test first, then isolate noisy local-runtime logs separately.
- I treated a clean git source tree as enough evidence that a production deploy artifact was clean. The ignored `apps/cloud/dist` directory still contained stale Alchemy output from another branch. For deploy debugging, inspect and clean generated output as part of the diagnosis, not just tracked files.
- While looking for local Claude Code MCP credentials, I let a command print a short-lived refresh token into the tool output. Never print credential JSON directly; use `jq` to select only non-secret metadata or pipe secret-bearing fields into commands without echoing them.
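A sketch of the safe inspection pattern, using a stand-in credential file (`creds.json`) rather than any real store:

```shell
# Hedged sketch: creds.json is a stand-in for a real credential file.
# Select only non-secret metadata with jq; never cat the whole file.
cat > creds.json <<'EOF'
{"clientId":"abc123","refreshToken":"DO-NOT-PRINT","expiresAt":"2025-01-01T00:00:00Z"}
EOF
jq '{clientId, expiresAt}' creds.json > metadata.json
cat metadata.json
```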
- During the Alchemy package republish, I published `@rhyssul/alchemy@2.0.0-beta.36-rhyssul.10` before checking that the packaged manifest had concrete dependency versions. The package still had monorepo `catalog:` ranges and was unusable from this repo; inspect the packed manifest before publishing vendored packages.
- While diagnosing the MCP regression, I initially used broad grep-style searches across auth/session files. Even with later redaction, that is the wrong default around credential stores. Use targeted JSON parsers that print only non-secret metadata, and pass tokens directly to test commands without echoing them.
- While verifying PR 263's MCP host tests, I first passed Jest's `--runInBand` flag to Vitest. Vitest 4 rejects that option; run package tests with `bun run --cwd <package> test <path>` instead.
- I briefly switched an existing plain async Vitest file to import from `@effect/vitest` without converting it to `it.effect`, which failed at suite load. Convert tests intentionally or keep the file's current harness for a small scoped change.
- While restarting the PR 263 local MCP server, I used `pkill -f` with a pattern that also matched the active restart shell command. Stop known PID files/process groups first, then use narrower process checks instead of a broad `pkill -f` pattern embedded in the same command string.
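A sketch of the PID-file approach, with a `sleep` standing in for the dev server:

```shell
# Hedged sketch: a long-running `sleep` stands in for the dev server.
# Track the PID explicitly instead of matching command lines with
# `pkill -f`, which can also match the shell running this very snippet.
sleep 300 &
echo $! > server.pid

kill "$(cat server.pid)"
rm server.pid
echo "stopped"
```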
- After moving code between workspace packages, I first ran `bun install --lockfile-only`, which updated the lockfile but left local workspace links and newly added dependencies unresolved. Run full `bun install` after package.json dependency changes before typechecking.
- While iterating on dynamic UI mutation state, I initially added server-side regex rejections for common React Query patterns like `autoRenew: !autoRenew` and stale success labels. That treated a runtime consistency issue as model misuse. Prefer making the generated UI runtime preserve expected library semantics, and reserve hard validation for code that cannot evaluate safely.
- While investigating a production WorkOS/Axiom issue, I started an `op run` command against `.env.production` in a non-interactive tool session and it hung waiting for 1Password. Prefer the available MCP tools for production support queries, or use an explicitly interactive path before invoking `op`.
- While counting FumaDB LOC from the parent repo, I used `git -C` to list files but ran `wc` from the wrong working directory, producing missing-file noise and bogus totals. Run path-consuming commands from the same repo root or prefix the listed paths before piping them onward.
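A sketch of the path-prefixing fix, using a throwaway repo in place of the real checkout:

```shell
# Hedged sketch: a throwaway repo stands in for the FumaDB checkout.
repo=$(mktemp -d)
git -C "$repo" init -q
printf 'line1\nline2\n' > "$repo/schema.ts"
git -C "$repo" add schema.ts

# Prefix each listed path with the repo root before piping to wc,
# so the count resolves correctly from any working directory.
git -C "$repo" ls-files | sed "s|^|$repo/|" | xargs wc -l
```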
- In the storage SDK MVP, I initially shaped the prototype around a new `install/context` API instead of mirroring the existing `definePlugin(() => ({ storage, extension }))` SDK shape. For architectural MVPs, preserve local API vocabulary first so the prototype is readable against the real codebase.
- In the same MVP, after moving plugin-owned storage out of core, I overcorrected by removing plugin examples entirely. Keep plugin packages in the monorepo as external SDK consumers when testing core/plugin separation.
- During the FumaDB cutover test restoration, I tried to use `python` for a small text rewrite and the command failed because this environment does not have `python` on PATH. Use `apply_patch`, `perl`, or `node` for local edits here instead of assuming Python is available.
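For example, a small `node`-based rewrite of a stand-in file (the file name and replacement are illustrative):

```shell
# Hedged sketch: demo.txt is a stand-in for the file to rewrite.
printf 'expect(old).toBe(1)\n' > demo.txt

# node is on PATH here even though python is not; do the rewrite with it.
node -e '
  const fs = require("node:fs");
  const text = fs.readFileSync("demo.txt", "utf8");
  fs.writeFileSync("demo.txt", text.replace("old", "renamed"));
'
cat demo.txt
```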
