
Commit f7b75ef

Authored by Copilot and devlux76

P2-X: Fix PLAN.md drift, simplify ClusterStability, add curiosity broadcasting integration tests (#83)
* Initial plan

* P2-X: Fix PLAN.md duplicates, simplify ClusterStability, add curiosity broadcasting integration tests
  - Remove duplicate ❌ Missing rows in the Daydreamer section of PLAN.md
  - Update ExperienceReplay status to ✅ Complete
  - Update Daydreamer completion to 6/6 (100%)
  - Mark Blocker 3 (sharing pipeline) as RESOLVED
  - Simplify ClusterStability.collectAllShelves/collectAllVolumes to use MetadataStore.getAllShelves()/getAllVolumes() instead of a complex workaround
  - Add integration tests for curiosity broadcasting (PII gating) and graph fragment import (identity stripping, discoverability)

* Address code review: use profile.embeddingDimension for knowledgeBoundary

* Potential fix for pull request finding

* Potential fix for pull request finding

* Potential fix for pull request finding

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: devlux76 <86517969+devlux76@users.noreply.github.com>
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Parent: 0049469

3 files changed (141 additions, 51 deletions)


PLAN.md

Lines changed: 5 additions & 11 deletions
@@ -117,16 +117,10 @@ This document tracks the implementation status of each major module in CORTEX. I
 | Hebbian Updates | ✅ Complete | `daydreamer/HebbianUpdater.ts` | LTP (strengthen), LTD (decay), prune below threshold; recompute σ(v) for changed nodes; run promotion/eviction sweep |
 | Prototype Recomputation | ✅ Complete | `daydreamer/PrototypeRecomputer.ts` | Recalculate volume/shelf medoids and centroids; recompute salience for affected entries; run tier-quota promotion/eviction |
 | Full Neighbor Graph Recalc | ✅ Complete | `daydreamer/FullNeighborRecalc.ts` | Rebuild bounded neighbor lists for dirty volumes; batch size bounded by O(√(t log t)) per idle cycle; recompute salience after recalc. |
-| Idle Scheduler | ❌ Missing | `daydreamer/IdleScheduler.ts` (planned) | Cooperative background loop; interruptible; respects CPU budget |
-| Hebbian Updates | ❌ Missing | `daydreamer/HebbianUpdater.ts` (planned) | LTP (strengthen), LTD (decay), prune below threshold; recompute σ(v) for changed nodes; run promotion/eviction sweep |
-| Prototype Recomputation | ❌ Missing | `daydreamer/PrototypeRecomputer.ts` (planned) | Recalculate volume/shelf medoids and centroids; recompute salience for affected entries; run tier-quota promotion/eviction |
-| Full Neighbor Graph Recalc | ❌ Missing | `daydreamer/FullNeighborRecalc.ts` (planned) | Rebuild bounded neighbor lists for dirty volumes; batch size bounded by O(√(t log t)) per idle cycle; recompute salience after recalc. |
-| Experience Replay | ❌ Missing | `daydreamer/ExperienceReplay.ts` (planned) | Simulate queries to reinforce connections |
-| Cluster Stability | ✅ Complete | `daydreamer/ClusterStability.ts` | Lightweight label propagation for community detection; stores community labels in PageActivity; detects oversized and empty communities |
+| Experience Replay | ✅ Complete | `daydreamer/ExperienceReplay.ts` | Simulate queries to reinforce connections; recent-biased sampling; LTP on traversed edges |
+| Cluster Stability | ✅ Complete | `daydreamer/ClusterStability.ts` | Lightweight label propagation for community detection; stores community labels in PageActivity; detects oversized and empty communities; volume split/merge with orphan deletion |
 
-**Daydreamer Status:** 4/6 complete (66%)
-
-**Note:** Not a v1 blocker — system can ship without background consolidation (manual recalc only). Community detection is required before graph-community quota enforcement is active.
+**Daydreamer Status:** 6/6 complete (100%)
 
 ---
 

@@ -401,9 +395,9 @@ This document tracks the implementation status of each major module in CORTEX. I
 **Impact:** Queries return flat top-K results only; no epistemic balance, no knowledge gap detection, no P2P curiosity.
 **Mitigation:** Phase 2 priority; depends on semantic neighbor graph (Blocker 1) and hierarchy builder.
 
-### Blocker 3: No Privacy-Safe Sharing or Curiosity Broadcasting Pipeline
+### Blocker 3: No Privacy-Safe Sharing or Curiosity Broadcasting Pipeline — RESOLVED
 **Impact:** Core discovery-sharing value proposition is missing; knowledge gaps cannot be resolved via P2P.
-**Mitigation:** Phase 3 required track; implement eligibility classifier + curiosity broadcaster + signed subgraph exchange as v1 scope. CuriosityProbe must include `mimeType` and `modelUrn` to prevent incommensurable graph merges.
+**Resolution:** Phase 3 sharing pipeline fully implemented. `sharing/EligibilityClassifier.ts` blocks PII/credential/financial/health content. `sharing/CuriosityBroadcaster.ts` provides rate-limited probe broadcasting with fragment response handling. `sharing/SubgraphExporter.ts` and `sharing/SubgraphImporter.ts` handle eligibility-filtered export and schema-validated import with sender identity stripping. `sharing/PeerExchange.ts` orchestrates opt-in signed subgraph exchange. CuriosityProbe includes `mimeType` and `modelUrn` to prevent incommensurable graph merges.
 
 ### Blocker 4: Naming Drift (P0-X) — RESOLVED
 **Impact:** The term "Metroid" was used for the proximity graph in all code. MetroidBuilder cannot be introduced without a rename collision.
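The Blocker 3 resolution leans on `sharing/EligibilityClassifier.ts` to gate what may ever leave the node. As a rough illustration of that gating idea, here is a minimal TypeScript sketch; the `Page` shape and the specific deny-list patterns are assumptions for this example, not the repository's actual types or rules.

```typescript
// Hypothetical sketch in the spirit of sharing/EligibilityClassifier.ts:
// a page whose content matches any deny-list pattern is excluded from
// everything shared with peers. The Page shape and patterns below are
// illustrative assumptions, not the repository's actual types or rules.

interface Page {
  pageId: string;
  content: string;
}

// Conservative deny-list: any single match blocks the whole page.
const DENY_PATTERNS: RegExp[] = [
  /[\w.+-]+@[\w-]+\.[\w.-]+/,                        // email addresses
  /\b(?:password|api[_-]?key|secret)\s*[=:]\s*\S+/i, // inline credentials
  /\b\d{3}-\d{2}-\d{4}\b/,                           // SSN-shaped identifiers
];

function filterEligible(pages: Page[]): Page[] {
  return pages.filter(
    (page) => !DENY_PATTERNS.some((re) => re.test(page.content)),
  );
}

const pages: Page[] = [
  { pageId: "a", content: "Distributed consensus algorithms and their applications." },
  { pageId: "b", content: "Contact admin@example.com, password=secret123." },
];

// Only the public-interest page survives; the PII/credential page is blocked.
const eligible = filterEligible(pages); // pageIds: ["a"]
```

A real classifier would need more than regexes (the PLAN also names financial and health content), but the failure mode to preserve is the same: blocking is per-page and any single hit disqualifies the page.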

daydreamer/ClusterStability.ts

Lines changed: 1 addition & 40 deletions
@@ -675,45 +675,6 @@ export class ClusterStability {
   private async collectAllShelves(
     metadataStore: MetadataStore,
   ) {
-    // MetadataStore does not expose a `getAllShelves()` helper, so we iterate
-    // over all volumes and collect the shelves that reference them.
-    // We use the reverse-index helper to get shelves for each volume.
-    const allVolumes = await this.collectAllVolumes(metadataStore);
-    const shelfMap = new Map<Hash, Awaited<ReturnType<MetadataStore["getShelf"]>>>();
-
-    for (const volume of allVolumes) {
-      const shelves = await metadataStore.getShelvesByVolume(volume.volumeId);
-      for (const shelf of shelves) {
-        if (!shelfMap.has(shelf.shelfId)) {
-          shelfMap.set(shelf.shelfId, shelf);
-        }
-      }
-    }
-
-    return [...shelfMap.values()].filter(
-      (s): s is NonNullable<typeof s> => s !== undefined,
-    );
-  }
-
-  private async collectAllVolumes(
-    metadataStore: MetadataStore,
-  ): Promise<Volume[]> {
-    const allPages = await metadataStore.getAllPages();
-    const volumeIds = new Set<Hash>();
-
-    for (const page of allPages) {
-      const books = await metadataStore.getBooksByPage(page.pageId);
-      for (const book of books) {
-        const volumes = await metadataStore.getVolumesByBook(book.bookId);
-        for (const volume of volumes) {
-          volumeIds.add(volume.volumeId);
-        }
-      }
-    }
-
-    const volumes = await Promise.all(
-      [...volumeIds].map((id) => metadataStore.getVolume(id)),
-    );
-    return volumes.filter((v): v is Volume => v !== undefined);
+    return metadataStore.getAllShelves();
   }
 }
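The deleted workaround walked page → book → volume reverse indexes to enumerate records; the one-line replacement assumes `MetadataStore` now exposes direct `getAllShelves()`/`getAllVolumes()` accessors. A minimal in-memory sketch of that contract (the record shapes here are simplified assumptions, not the real interfaces):

```typescript
// Illustrative in-memory stand-in for the MetadataStore contract assumed by
// the simplified ClusterStability: getAllShelves()/getAllVolumes() read the
// primary index directly, with no page -> book -> volume joins.
// Record shapes are simplified assumptions.

interface Volume { volumeId: string }
interface Shelf { shelfId: string; volumeIds: string[] }

class InMemoryMetadataStore {
  private volumes = new Map<string, Volume>();
  private shelves = new Map<string, Shelf>();

  putVolume(v: Volume): void { this.volumes.set(v.volumeId, v); }
  putShelf(s: Shelf): void { this.shelves.set(s.shelfId, s); }

  // Direct enumeration: one pass over the primary index, no traversal.
  async getAllVolumes(): Promise<Volume[]> { return [...this.volumes.values()]; }
  async getAllShelves(): Promise<Shelf[]> { return [...this.shelves.values()]; }
}

async function demo(): Promise<number> {
  const store = new InMemoryMetadataStore();
  store.putVolume({ volumeId: "v1" });
  store.putShelf({ shelfId: "s1", volumeIds: ["v1"] });
  // A shelf referencing no volume is still returned; a traversal that
  // starts from pages could never reach it.
  store.putShelf({ shelfId: "s2", volumeIds: [] });
  return (await store.getAllShelves()).length;
}
```

Besides being shorter, the direct accessors close a correctness gap: the removed traversal could only find records reachable from some page, so empty shelves or orphaned volumes were invisible to it.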

tests/integration/Daydreamer.test.ts

Lines changed: 135 additions & 0 deletions
@@ -23,6 +23,11 @@ import { strengthenEdges, decayAndPrune } from "../../daydreamer/HebbianUpdater"
 import { runFullNeighborRecalc } from "../../daydreamer/FullNeighborRecalc";
 import { recomputePrototypes } from "../../daydreamer/PrototypeRecomputer";
 import { runLabelPropagation } from "../../daydreamer/ClusterStability";
+import { CuriosityBroadcaster } from "../../sharing/CuriosityBroadcaster";
+import type { P2PTransport } from "../../sharing/CuriosityBroadcaster";
+import { filterEligible } from "../../sharing/EligibilityClassifier";
+import { importFragment } from "../../sharing/SubgraphImporter";
+import type { GraphFragment, PeerMessage } from "../../sharing/types";
 import type { ModelProfile } from "../../core/ModelProfile";
 
 // ---------------------------------------------------------------------------
// ---------------------------------------------------------------------------
@@ -238,6 +243,136 @@ describe("Daydreamer integration", () => {
     }
   });
 
+  it("curiosity broadcasting filters out PII pages from eligible content", async () => {
+    const metadataStore = await IndexedDbMetadataStore.open(freshDbName());
+    const vectorStore = new MemoryVectorStore();
+    const runner = makeRunner();
+    const profile = makeProfile();
+    const keyPair = await generateKeyPair();
+    const now = Date.now();
+
+    // Ingest eligible (public-interest) content
+    const eligibleRes = await ingestText(CORPUS[0], {
+      modelProfile: profile,
+      embeddingRunner: runner,
+      vectorStore,
+      metadataStore,
+      keyPair,
+      now,
+    });
+
+    // Ingest PII-bearing content (contains an email address and credential)
+    const piiRes = await ingestText(
+      "Please contact admin@example.com for the API key password=secret123 to access the private dashboard.",
+      {
+        modelProfile: profile,
+        embeddingRunner: runner,
+        vectorStore,
+        metadataStore,
+        keyPair,
+        now,
+      },
+    );
+
+    // Set up a mock P2P transport and CuriosityBroadcaster
+    const broadcastLog: PeerMessage[] = [];
+    const transport: P2PTransport = {
+      broadcast: async (msg) => { broadcastLog.push(msg); },
+      onMessage: (_handler) => {
+        // Intentionally not wiring inbound messages for this integration test
+      },
+    };
+
+    const broadcaster = new CuriosityBroadcaster({
+      transport,
+      nodeId: "test-node",
+      rateLimitMs: 0,
+    });
+
+    // Enqueue a curiosity probe referencing a valid page
+    const eligiblePageId = eligibleRes.pages[0].pageId;
+    broadcaster.enqueueProbe({
+      m1: eligiblePageId,
+      partialMetroid: { m1: eligiblePageId },
+      queryContextB64: "AAAA",
+      knowledgeBoundary: profile.embeddingDimension,
+      mimeType: "text/plain",
+      modelUrn: "urn:model:test:v1",
+      timestamp: new Date(now).toISOString(),
+    });
+
+    // Flush broadcasts the probe
+    const sent = await broadcaster.flush(now);
+    expect(sent).toBe(1);
+    expect(broadcastLog).toHaveLength(1);
+    expect(broadcastLog[0].kind).toBe("curiosity_probe");
+
+    // Verify that PII pages are blocked by the eligibility classifier
+    const piiPageIds = piiRes.pages.map((p) => p.pageId);
+    const eligiblePageIds = eligibleRes.pages.map((p) => p.pageId);
+
+    const allPages = await metadataStore.getAllPages();
+    const eligible = filterEligible(allPages);
+    const eligibleIds = new Set(eligible.map((p) => p.pageId));
+
+    // PII pages must be excluded from eligible set
+    for (const piiId of piiPageIds) {
+      expect(eligibleIds.has(piiId)).toBe(false);
+    }
+
+    // Public-interest pages must be included in eligible set
+    for (const id of eligiblePageIds) {
+      expect(eligibleIds.has(id)).toBe(true);
+    }
+  });
+
+  it("imported graph fragment pages are discoverable via MetadataStore", async () => {
+    const metadataStore = await IndexedDbMetadataStore.open(freshDbName());
+    const vectorStore = new MemoryVectorStore();
+    const now = Date.now();
+
+    // Simulate receiving a graph fragment from a peer
+    const fragment: GraphFragment = {
+      fragmentId: "frag-integration-1",
+      probeId: "probe-1",
+      nodes: [
+        {
+          pageId: "imported-page-1",
+          content: "Peer-shared knowledge about distributed consensus algorithms and their applications.",
+          embeddingOffset: 0,
+          embeddingDim: EMBEDDING_DIM,
+          contentHash: "hash1",
+          vectorHash: "vhash1",
+          creatorPubKey: "peer-pub-key",
+          signature: "peer-sig",
+          createdAt: new Date(now).toISOString(),
+        },
+      ],
+      edges: [],
+      signatures: {},
+      timestamp: new Date(now).toISOString(),
+    };
+
+    const result = await importFragment(fragment, {
+      metadataStore,
+      vectorStore,
+      verifyContentHashes: false,
+    });
+
+    // Nodes should be imported
+    expect(result.nodesImported).toBe(1);
+    expect(result.rejected).toHaveLength(0);
+
+    // Imported page should be discoverable
+    const imported = await metadataStore.getPage("imported-page-1");
+    expect(imported).toBeDefined();
+    expect(imported?.content).toContain("distributed consensus");
+
+    // Sender identity must be stripped
+    expect(imported?.creatorPubKey).toBe("");
+    expect(imported?.signature).toBe("");
+  });
+
   it("community labels are assigned to pages after label propagation", async () => {
     const metadataStore = await IndexedDbMetadataStore.open(freshDbName());
     const vectorStore = new MemoryVectorStore();
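The first new test drives `CuriosityBroadcaster` with `rateLimitMs: 0` so `flush(now)` sends the queued probe immediately. A sketch of the enqueue/flush pattern that test exercises; the class name and shapes here are patterned on the test code and are assumptions, not the actual implementation.

```typescript
// Sketch of the rate-limited enqueue/flush behavior the integration test
// exercises: probes accumulate in a queue, and flush(now) sends only what
// the rate limit allows. Shapes are illustrative assumptions.

interface CuriosityProbe {
  m1: string;
  mimeType: string;  // lets peers reject incommensurable content types
  modelUrn: string;  // lets peers reject incommensurable embedding models
  timestamp: string;
}

interface PeerMessage { kind: "curiosity_probe"; probe: CuriosityProbe }

interface P2PTransport { broadcast(msg: PeerMessage): Promise<void> }

class SketchBroadcaster {
  private queue: CuriosityProbe[] = [];
  private lastSentAt = -Infinity;

  constructor(
    private transport: P2PTransport,
    private rateLimitMs: number,
  ) {}

  enqueueProbe(probe: CuriosityProbe): void {
    this.queue.push(probe);
  }

  // Returns how many probes were actually broadcast this cycle.
  async flush(now: number): Promise<number> {
    let sent = 0;
    while (this.queue.length > 0 && now - this.lastSentAt >= this.rateLimitMs) {
      const probe = this.queue.shift()!;
      await this.transport.broadcast({ kind: "curiosity_probe", probe });
      this.lastSentAt = now;
      sent++;
    }
    return sent;
  }
}
```

With a positive `rateLimitMs`, repeated `flush(now)` calls at the same timestamp send at most one probe each; the test's `rateLimitMs: 0` drains the whole queue in a single flush, which is why `expect(sent).toBe(1)` holds with one enqueued probe.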
