
module: add clearCache for CJS and ESM#61767

Open
anonrig wants to merge 1 commit into nodejs:main from
anonrig:yagiz/node-module-clear-cache

Conversation

@anonrig
Member

@anonrig anonrig commented Feb 10, 2026

Introduce Module.clearCache() to invalidate CommonJS and ESM module caches with optional resolution context, enabling HMR-like reloads. Document the API and add tests/fixtures to cover cache invalidation behavior.

@nodejs-github-bot
Collaborator

Review requested:

  • @nodejs/loaders

@nodejs-github-bot nodejs-github-bot added esm Issues and PRs related to the ECMAScript Modules implementation. module Issues and PRs related to the module subsystem. needs-ci PRs that need a full CI run. labels Feb 10, 2026
@anonrig anonrig force-pushed the yagiz/node-module-clear-cache branch 2 times, most recently from 90303e6 to 1d0accc Compare February 10, 2026 21:25
@anonrig anonrig added semver-minor PRs that contain new features and should be released in the next minor version. notable-change PRs with changes that should be highlighted in changelogs. labels Feb 10, 2026
@github-actions
Contributor

The `notable-change` label (PRs with changes that should be highlighted in changelogs) has been added by @anonrig.

Please suggest a text for the release notes if you'd like to include a more detailed summary, then proceed to update the PR description with the text or a link to the notable change suggested text comment. Otherwise, the commit will be placed in the Other Notable Changes section.

@mcollina
Member

I’m relatively +1 on having this in Node.js, but I recall having a lot of discussions about this with @GeoffreyBooth and the @nodejs/loaders team, and it would massively break the spec, expectations, and invariants regarding ESM.

(Note, this is what people have been asking us to add for a long time).

My personal objection to this API is that it would inadvertently leak memory at every turn, so while this sounds good in theory, in practice it would significantly backfire in long-running scenarios. An option could be to expose it only behind a flag, putting the user in charge of choosing this behavior.

Every single scenario where I saw HMR in Node.js ends up in memory leaks. This is the reason why I had so much interest and hopes for ShadowRealm.

Member

@benjamingr benjamingr left a comment


I am still +1 on the feature from a user usability point of view. Code lgtm.

@benjamingr
Member

Every single scenario where I saw HMR in Node.js ends up in memory leaks. This is the reason why I had so much interest and hopes for ShadowRealm.

We're giving users a tool, it may be seen as a footgun by some but hopefully libraries that use the API correctly and warn users about incorrect usage emerge.

@anonrig
Member Author

anonrig commented Feb 10, 2026

@mcollina Thanks for the feedback. I agree the ESM semantics concerns are real. This API doesn’t change the core ESM invariants (single instance per URL); it only removes Node's internal cache entries to allow explicit reloads in opt‑in workflows. Even with that, existing references (namespaces, listeners, closures) can keep old graphs alive, so this is still potentially leaky unless the app does explicit disposal. I’ll make sure the docs call out the risks and the fact that this only clears Node’s internal caches, and I’d like loader team input on the final shape of the API.

This commit should address some of your concerns. b3bd79a

I am still +1 on the feature from a user usability point of view. Code lgtm.

Thanks for the review @benjamingr. Would you mind re-reviewing so I can trigger CI?

@Nsttt

Nsttt commented Feb 10, 2026

Thanks a lot for this ❤️

@Jamesernator

Jamesernator commented Feb 10, 2026

Rather than violating ESM invariants, can't node just provide a function that imports a url?

i.e. while the given example of:

```js
const url = new URL('./mod.mjs', import.meta.url);
await import(url.href);

clearCache(url);
await import(url.href); // re-executes the module
```

is indeed not spec compliant, it's perfectly legal to have something like:

```js
import { clearCache, importModule } from "node:module";

await importModule(someUrl);
clearCache();
await importModule(someUrl); // reexecute
```
@codecov

codecov bot commented Feb 10, 2026

Codecov Report

❌ Patch coverage is 96.25000% with 18 lines in your changes missing coverage. Please review.
✅ Project coverage is 89.70%. Comparing base (5ff1eab) to head (6d79dc2).
⚠️ Report is 3 commits behind head on main.

Files with missing lines Patch % Lines
lib/internal/modules/clear.js 95.50% 13 Missing ⚠️
lib/internal/modules/esm/module_map.js 91.17% 3 Missing ⚠️
src/node_modules.cc 87.50% 0 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #61767      +/-   ##
==========================================
+ Coverage   89.69%   89.70%   +0.01%     
==========================================
  Files         695      696       +1     
  Lines      214417   214895     +478     
  Branches    41059    41149      +90     
==========================================
+ Hits       192321   192779     +458     
- Misses      14156    14169      +13     
- Partials     7940     7947       +7     
Files with missing lines Coverage Δ
lib/internal/modules/cjs/loader.js 98.16% <100.00%> (+0.02%) ⬆️
lib/internal/modules/esm/loader.js 98.81% <100.00%> (+0.02%) ⬆️
lib/internal/modules/esm/translators.js 97.79% <100.00%> (+0.12%) ⬆️
lib/internal/modules/helpers.js 98.78% <100.00%> (+0.06%) ⬆️
lib/internal/modules/package_json_reader.js 99.74% <100.00%> (+0.01%) ⬆️
lib/module.js 100.00% <100.00%> (ø)
src/node_modules.h 100.00% <ø> (ø)
src/node_modules.cc 80.09% <87.50%> (+0.28%) ⬆️
lib/internal/modules/esm/module_map.js 98.18% <91.17%> (-0.30%) ⬇️
lib/internal/modules/clear.js 95.50% <95.50%> (ø)

... and 31 files with indirect coverage changes


Member

@joyeecheung joyeecheung left a comment


While I am +1 to the idea in general, I am afraid the current API may bring more problems than it solves; see the comments.

(Granted, this isn't a problem unique to this specific design. I think the issue is that this is not a well-solved problem so far, and I don't really know what it should look like, though I might be able to point out what it should not look like, to avoid adding or re-introducing leaks/use-after-frees that userland workarounds can already manage.)

@ScriptedAlchemy

ScriptedAlchemy commented Feb 11, 2026

I was the one requesting this while sitting next to yagiz today.
Some context:

We take advantage of Module Federation which allows us to distribute code at runtime. However, when parts of the distributed system are updated, it gets stuck in module cache.

I've had some workarounds, like attempting to purge require cache - however when it comes to esm, it's a difficult problem. Since we do this distribution primarily in production, and there can be thousands of updates a day, I block esm from being supported because it'll leak memory - which was fine for several years but becoming more problematic in modern tooling.

On Lambda we cannot just exit a process and bring up a new one without triggering an empty deploy; cold-starting a new Lambda has generally been a bigger perf hit than trying to "reset" the module cache for primitive hot reload.

Now, I know this might be controversial, or not recommended - but the reality is that many large companies use federation, most fortune 50 companies use it heavily. All of them are relying on userland cobbling I've created. If there is a solution, it would be greatly appreciated by all of my users.

I believe this would also be very useful in general for tooling like rspack etc. where we have universal dev servers.

If invalidation of specific modules causes complexity, I'd be more than happy with a nuclear option like resetModuleCache() which just clears everything entirely. Would be a little slower, but nothing is slower than killing a process and bringing up a new one.

"Soft Restart" node without killing it.
Yes, I'm aware of various footguns like globals, prototype pollution etc.
These have so far been easy to mitigate, and none of the user base has reported any major issues around them, whereas my cobbled-together solution poses a much bigger problem than those footguns do.

Don't have much opinion on spec compliance etc, can go through NAPI as well if that would avoid any spec concerns or pushback.

@jsumners-nr

Chiming in to say that re-loading a module is very helpful in tests. We can do this with the fabulous CJS paradigm, but ESM does not have a viable equivalent and it should.

@joyeecheung
Member

joyeecheung commented Feb 11, 2026

I think there are still quite a few places that need updates/tests - I tried my best to find them, but there are some dusty corners in the module loader that I have never poked at, you might want to take a heap snapshot or write more tests with v8.queryObject() to verify:

  • What happens when a closure in a module errors (or more specifically when the error stack is prepared by poking at various caches) after the cache of the original module is cleared? Especially if it has source maps and --enable-source-maps is on?
  • This is tricky, but cjsModule[parent] and cjsModule[kLastModuleParent] could need an update too if you yank the parents out of the cache. Otherwise the parent can get leaked.
  • When dynamic import(cjs) happens, there can be a point where the CJS module cache entries for the requested module and its dependencies are synchronously populated for export detection, but they will only be compiled and evaluated at the next microtask queue checkpoint. Since import() itself is async, it can already return at this point, and some code elsewhere could clear the cache before another checkpoint (likely an await) actually spins the evaluation. In the evaluation callback of CJS facades, it will then try to look up the caches again and see a mismatch between the "module whose exports were detected" and the "module that's actually being compiled and evaluated". Races of this kind have been a source of subtle bugs; we mostly made them go away by making resolution and loading entirely synchronous, but cache clearing can expose new internal details that add another bug surface worth checking.
  • The cjsCache in the esm translators (there's a TODO about using WeakMap instead, maybe that works?)
  • The wasm facade module has a custom import.meta initializer that contains a closure (implemented in createDynamicModule), which in turn has references crossing the wasm boundary, not sure if that can create another source of leaks.

@anonrig
Member Author

anonrig commented Feb 11, 2026

I think I addressed all of your concerns @joyeecheung. Let me know if I missed anything!

@GeoffreyBooth
Member

I’m relatively +1 on having this in Node.js, but I recall having a lot of discussions about this with @GeoffreyBooth and the @nodejs/loaders team, and it would massively break the spec, expectations, and invariants regarding ESM.

Just pinging @guybedford to speak on the spec concerns. I think we should wait for him or someone similarly knowledgeable about the spec to comment before landing.

In general I'm +1 on the feature, assuming it can be safely implemented. My (dim) recollection was that the last time we considered it, it was impossible to modify an ES module after it had been loaded into V8. Has that changed in recent years? How do you handle cases like import { foo } from './bar.js' where bar.js gets reloaded and no longer has a foo export, and the importing code calls foo()? That was part of the complexity, that ESM has this linking stage and so presumably replaced modules need to have the same shapes/exports or else the linking gets invalidated.

@anonrig anonrig requested a review from guybedford February 12, 2026 01:40
@petamoriken
Contributor

petamoriken commented Mar 27, 2026

@guybedford Are there any minutes for the TC39 modules meeting? Will this topic be discussed at the upcoming TC39 plenary meeting?

I'd like to add these topics to the agenda at the next TC39 modules call, coming up next Thursday the 26th. The meeting is on the public calendar if anyone here who is interested would like to join. There are no concerns from my side, but I think it's a very interesting spec discussion to have nonetheless.


update: It is reassuring to know that there are plans to modify the ECMAScript side.

To align spec caching with this PR will be some work yet, and it would help if we can align on a simple model to start.

#61767 (comment)

Member

@jasnell jasnell left a comment


I've got a number of concerns with this that I'd like to see addressed before it lands.

  1. There is a V8-level memory leak risk. For the cache-busting URLs with dynamic import(), old modules should be collectible and there likely is not a problem. However, for static import graphs I believe this introduces a definite memory leak risk.

  2. The docs should be stronger about the leak risk. Static import graphs create permanent V8-internal references that prevent collection.

  3. There's some missing test coverage. For instance, a test verifying what happens when you clear a module that was statically imported by a still-alive parent. The test should demonstrate whether the module is leaked or collected.

  4. I'm not convinced the loadCJSModuleWithModuleLoad race test covers all the edge cases while a concurrent dynamic import() of CJS is in-flight.

  5. It's not clear if the hash_to_module_map cleanup is sufficient for same-URL re-import. There might be a memory leak case in hash_to_module_map that needs to be evaluated.

Demonstrating that the memory-leak risk is inconsequential, or that the existing tests adequately cover it, would be enough to clear this objection.

@anonrig
Member Author

anonrig commented Mar 27, 2026

@jasnell Thanks for the detailed review. Addressing each point:


1. V8-level memory leak risk for static import graphs

You are correct that this is a real concern. When a parent module P statically imports M, `v8::Module::InstantiateModule` creates a permanent V8-internal strong reference from P's compiled module record to M's. Node.js's `clearCache` only removes M from JS-level caches (`loadCache`, `resolveCache`, `require.cache`). It cannot sever the V8-internal link; M's `ModuleWrap` (and its `v8::Global<v8::Module>`) remains alive for as long as P is alive.

The leak is bounded, not unbounded. Each clear+reimport cycle with a live static parent produces exactly one stale `ModuleWrap` (the original), which stays alive for P's lifetime. It does not accumulate per cycle — see point 5 below.

For dynamic imports (`await import()` with no static parent), the old `ModuleWrap` becomes eligible for GC once it is removed from the loadCache and all JS-land namespace references are dropped. The existing module-wrap-leak test and the new static-import-leak test confirm this.


2. Docs should be stronger about the leak risk

Done. Added a dedicated "Memory retention and static imports" section that explicitly states:

  • `clearCache` only removes JS-level cache entries; it does not affect V8-internal module graph references
  • Static imports create permanent V8-internal strong references that cannot be severed by `clearCache`
  • The split-brain consequence: the static parent keeps using the old instance while new `import()` callers see the fresh instance
  • The retention is bounded (one stale instance per module per live static parent), not unbounded
  • Dynamic imports are collectible after clearing when JS references are dropped
  • Recommends cache-busting URLs and avoiding static imports of hot-reloaded modules

3. Missing test coverage for cleared statically-imported modules

Added `test/es-module/test-module-clear-cache-static-import-leak.mjs` with a fixture `test/fixtures/module-cache/esm-static-parent.mjs` that statically imports the counter.

Part 1 — static parent retention: Loads the static parent (which pins counter v1). Calls `clearCache(counterURL)` then `await import(counterURL)` to get counter v2. Then:

  • `queryObjects(ModuleWrap, { format: 'count' })` is higher after the re-import — proves the old `ModuleWrap` was not collected
  • `parent.count === 1` while `fresh.count === 2` — documents the split-brain behaviour explicitly

Part 2 — dynamic-only modules are freed: Uses `checkIfCollectableByCounting` across 8×4 cache-busting import/clear cycles to confirm that without a static parent, cleared `ModuleWrap`s are garbage-collected.


4. CJS concurrent import race test coverage

Rewrote `test-module-clear-cache-import-cjs-race.mjs` to cover three scenarios:

  • Scenario A (original): `clearCache` fires before the in-flight import settles — the original import still resolves correctly
  • Scenario B (new): Two concurrent `import(url)` calls share the same in-flight `ModuleJob`; `clearCache` fires between them; a third import starts after. Verifies p1 and p2 (sharing the cached job) resolve to the same module instance, while p3 (after the clear) is a distinct, freshly-executed instance with an incremented count
  • Scenario C (new): Repeated serial cycles work correctly

5. hash_to_module_map cleanup for same-URL re-import

Added `test/es-module/test-module-clear-cache-hash-map.mjs` that evaluates this directly.

`hash_to_module_map` is an `unordered_multimap<int, ModuleWrap*>`. Entries are added in the `ModuleWrap` constructor and removed in `~ModuleWrap()`. `clearCache` does not directly touch it — cleanup happens when `ModuleWrap` objects are GC'd.

  • For dynamic imports: `checkIfCollectableByCounting` confirms that cleared `ModuleWrap`s are eventually collected and their entries removed. No unbounded growth.
  • For static-parent cycles: the test verifies via `queryObjects` that each clear+reimport adds exactly +1 `ModuleWrap` — not unbounded accumulation.

`GetFromModule` compares `v8::Local` handles directly, so the multimap correctly distinguishes old-M from new-M even when both coexist. No correctness issue from multiple entries. The map grows by one per cycle with a static parent, but those entries are removed when the parent is eventually collected — bounded by the number of live static parents.

@anonrig anonrig requested a review from jasnell March 27, 2026 19:14
@jasnell jasnell dismissed their stale review March 27, 2026 19:24

Only non-blocking concerns remaining.

@nodejs-github-bot
Collaborator

@anonrig anonrig added the request-ci Add this label to start a Jenkins CI on a PR. label Apr 2, 2026
@github-actions github-actions bot removed the request-ci Add this label to start a Jenkins CI on a PR. label Apr 2, 2026
@nodejs-github-bot
Collaborator

@nodejs-github-bot
Collaborator

@anonrig
Member Author

anonrig commented Apr 2, 2026

CI: https://ci.nodejs.org/job/node-test-pull-request/72436/

@nodejs/build any idea why this doesn't work?

@anonrig anonrig force-pushed the yagiz/node-module-clear-cache branch from 2a72a4f to 6d79dc2 Compare April 2, 2026 21:02
@anonrig
Member Author

anonrig commented Apr 2, 2026

I've squashed the last 19 commits, rebased, and force-pushed. Can someone re-review and trigger the request-ci flow?

@panva panva added the request-ci Add this label to start a Jenkins CI on a PR. label Apr 3, 2026
@github-actions github-actions bot removed the request-ci Add this label to start a Jenkins CI on a PR. label Apr 3, 2026
@nodejs-github-bot
Collaborator

cycles.

For **dynamically imported** modules (`await import('./M.mjs')` with no live
static parent holding the result), the old `ModuleWrap` becomes eligible for
Member


Suggested change
static parent holding the result), the old `ModuleWrap` becomes eligible for
static parent holding the result), the old module becomes eligible for

We shouldn't be talking about ModuleWrap in the docs.

Comment on lines +113 to +115
`clearCache` only removes references from the Node.js **JavaScript-level** caches
(the ESM load cache, resolve cache, CJS `require.cache`, and related structures).
It does **not** affect V8-internal module graph references.
Member


Suggested change
`clearCache` only removes references from the Node.js **JavaScript-level** caches
(the ESM load cache, resolve cache, CJS `require.cache`, and related structures).
It does **not** affect V8-internal module graph references.
`clearCache` only removes references from the Node.js internal caches
(the ESM load cache, resolve cache, CJS `require.cache`, and related structures).
It does **not** affect JavaScript-level module graph references.

I feel that from a user's POV the description is the other way around: JS-level references essentially come from user code; the JS engine is only holding them on the user's behalf, not because V8 itself needs them for some reason. The cleared references are technically Node.js internals.



I don't think "JavaScript-level module graph" imparts what you're hoping it does. If I didn't read this diff and associated review comment, I would have really struggled to understand what it means.

In my opinion, there should be an example that shows what this feature accomplishes. I keep getting the impression that it will solve the testing scenario I described above. But it's not clear to me from the explanation if that is true, or if this is meant to solve some other cache problem.

Comment on lines +90 to +107
const request = isURL(specifier) ? specifier.href : specifier;

if (!parentPath && isRelative(request)) {
return null;
}

const parent = parentPath ? createParentModuleForClearCache(parentPath) : null;
try {
const { filename, format } = resolveForCJSWithHooks(request, parent, false, false);
if (format === 'builtin') {
return null;
}
return filename;
} catch {
// Resolution can fail for non-file specifiers without hooks - return null
// to silently skip clearing rather than throwing.
return null;
}
Member


Suggested change
const request = isURL(specifier) ? specifier.href : specifier;
if (!parentPath && isRelative(request)) {
return null;
}
const parent = parentPath ? createParentModuleForClearCache(parentPath) : null;
try {
const { filename, format } = resolveForCJSWithHooks(request, parent, false, false);
if (format === 'builtin') {
return null;
}
return filename;
} catch {
// Resolution can fail for non-file specifiers without hooks - return null
// to silently skip clearing rather than throwing.
return null;
}
const parent = parentPath ? createParentModuleForClearCache(parentPath) : null;
try {
const { filename, format } = resolveForCJSWithHooks(specifier, parent, false, false);
if (format === 'builtin') {
return null;
}
return filename;
} catch {
// Resolution can fail for non-file specifiers without hooks - return null
// to silently skip clearing rather than throwing.
return null;
}

`require` doesn't understand URLs; it only accepts paths. If the resolver is `require` and the specifier is a URL object, it should lead to ERR_INVALID_ARG_TYPE.

Comment on lines +2 to +20
// Evaluates the hash_to_module_map memory behaviour across clearCache cycles.
//
// hash_to_module_map is a C++ unordered_multimap<int, ModuleWrap*> on the
// Environment. Every new ModuleWrap adds an entry; the destructor removes it.
// Clearing the Node-side loadCache does not directly touch hash_to_module_map —
// entries are removed only when ModuleWrap objects are garbage-collected.
//
// We verify two invariants:
//
// 1. DYNAMIC imports: after clearCache + GC, the old ModuleWrap is collected
// and therefore its hash_to_module_map entry is removed. The map does NOT
// grow without bound for purely-dynamic import/clear cycles.
// (Verified via checkIfCollectableByCounting.)
//
// 2. STATIC imports: when a parent P statically imports M, clearing M from
// the load cache does not free M's ModuleWrap (the static link keeps it).
// Each re-import adds one new entry while the old entry stays for the
// lifetime of P. This is a bounded, expected retention (not an unbounded
// leak): it is capped at one stale entry per module per live static parent.
Member

@joyeecheung joyeecheung Apr 3, 2026


I feel that we should avoid talking about hash_to_module_map here, that can be changed later and the comment would then become weirdly out of sync in the test. Something more generic like "internal book-keeping" is enough.

Comment on lines +43 to +46
cascadedLoader.deleteResolveCacheEntry = function(specifier, parentURL, importAttributes) {
capturedCalls.push({ specifier, parentURL, importAttributes });
return original.call(this, specifier, parentURL, importAttributes);
};
Member


Monkey-patching an internal method to check that it's invoked seems rather brittle, and doesn't really guarantee that it's working. I think for the import behavior we can just check actual behaviors (e.g. that it re-resolves the next time, or that the registered hooks can be observed being called again). For require we only need to check that it doesn't throw.

Comment on lines +23 to +24
assert.ok(cascadedLoader.hasResolveCacheEntry(specifier, parentURL),
'resolve cache should have an entry after import');
Member

@joyeecheung joyeecheung Apr 3, 2026


Do we need this? Other assertions already seem enough. It's better to avoid testing internals as much as possible to avoid the churn whenever they get modified.

Comment on lines +12 to +17
const {
privateSymbols: {
module_first_parent_private_symbol,
module_last_parent_private_symbol,
},
} = internalBinding('util');
Member

@joyeecheung joyeecheung Apr 3, 2026


I think it's better to avoid referencing internal symbols. Testing that each known internal reference is severed, in separate tests, is neither necessary nor sufficient to ensure that there are no remaining references internally: if there are 5 types of references in total, having 4 tests each covering 1 type still won't ensure there's no leak, and it won't catch anything if one more type of reference is added internally. On the other hand, if some internal change removes one type of reference because it is no longer useful (e.g. we no longer maintain the first parent but always use the last parent, as a TODO suggested), then a test like this is just maintenance burden. Something like verifying that required ESM can still be cleared via checkIfCollectableByCounting is probably what's really worth testing and what users actually care about, and it would only break when internal changes introduce a leak that's worth catching.
