feat(library-selection): foundation + Phase 0 scaffolding for #205#207
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (25)
📝 Walkthrough

Introduces two new Rust crates (`fbuild-header-scan`, `fbuild-library-select`).
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Builder as Builder/Main
    participant Resolver as resolve()
    participant Scanner as scan()
    participant Walker as walk()
    participant AttribPass as Attribution Pass
    participant ReconPass as Reconciliation Pass
    Builder->>Resolver: seeds, search_paths, libraries
    Note over Resolver: First Pass: Initial Discovery
    Resolver->>Walker: seeds, combined search_paths
    Walker->>Scanner: read each file
    Scanner-->>Walker: #include directives
    Walker->>Walker: resolve includes (quoted→search)
    Walker->>Walker: canonicalize & deduplicate
    Walker-->>Resolver: reached files, unresolved
    Resolver->>AttribPass: reached files vs lib include_dirs
    AttribPass->>AttribPass: prefix-match attribution
    AttribPass-->>Resolver: selected libraries
    Note over Resolver: Loop: Reconciliation Passes
    loop Until stabilized
        Resolver->>Walker: seeds + selected lib sources
        Walker->>Walker: transitive include resolution
        Walker-->>Resolver: newly reached files
        Resolver->>AttribPass: check for new lib matches
        AttribPass-->>Resolver: newly dependent libraries
    end
    Resolver-->>Builder: Selection {required_libraries, source_files, ...}
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 9
🧹 Nitpick comments (18)
ci/env.py (1)
78-90: Apply the same PATH normalization to `clean_env()` for consistency.

`activate()` was updated to dedupe `.cargo/bin` using `os.path.normcase(os.path.normpath(...))` so the rustup-managed cargo wins on Windows hosts. `clean_env()` still does a raw substring/membership check at line 84 (`if cargo_bin not in path_parts`), so on Windows it can leave a differently-cased duplicate entry earlier in `PATH` and silently re-introduce the exact problem `activate()` was hardened against.

♻️ Proposed fix
```diff
 def clean_env():
     """Return an env dict with .cargo/bin on PATH and VIRTUAL_ENV removed.

     Useful for subprocess calls where venv interference with Rust builds
     should be avoided.
     """
     env = os.environ.copy()
-    # Ensure .cargo/bin is on PATH
+    # Ensure .cargo/bin is at the front of PATH and dedupe any equivalent
+    # entry further down (matches activate() semantics on Windows).
     cargo_bin = find_rust_bin()
     if cargo_bin:
-        path_parts = env.get("PATH", "").split(os.pathsep)
-        if cargo_bin not in path_parts:
-            env["PATH"] = cargo_bin + os.pathsep + env.get("PATH", "")
+        norm = os.path.normcase(os.path.normpath(cargo_bin))
+        parts = env.get("PATH", "").split(os.pathsep)
+        filtered = [
+            p for p in parts if os.path.normcase(os.path.normpath(p)) != norm
+        ]
+        env["PATH"] = cargo_bin + os.pathsep + os.pathsep.join(filtered)
     # Remove VIRTUAL_ENV to avoid venv interference
     env.pop("VIRTUAL_ENV", None)
     return env
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ci/env.py` around lines 78 - 90, clean_env() currently checks "if cargo_bin not in path_parts" using raw strings which can miss differently-cased duplicates on Windows; update clean_env() to normalize paths the same way activate() does by applying os.path.normcase(os.path.normpath(...)) to cargo_bin and to each entry in path_parts before membership checks and deduping, and ensure when you prepend cargo_bin you first remove any normalized duplicates from path_parts so .cargo/bin isn't reintroduced under a different case; use the existing find_rust_bin(), cargo_bin, path_parts and env variables in this change.

crates/fbuild-header-scan/Cargo.toml (1)
9-10: Nit: empty `[dependencies]` table can be removed.

The crate currently has no runtime dependencies, so the `[dependencies]` header on line 9 is decorative. Harmless to keep, but trivially removable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-header-scan/Cargo.toml` around lines 9 - 10, Remove the empty `[dependencies]` table from Cargo.toml since the crate has no runtime dependencies; locate the standalone `[dependencies]` header and delete that section (the lone `[dependencies]` line and any empty whitespace following it) so the manifest contains only relevant entries.

crates/fbuild-header-scan/src/walker.rs (3)
43-46: `read_to_string` failures are silently skipped.

Non-UTF-8 sources or transient I/O failures cause the file (and its transitive includes) to drop out of the walk with no signal. Acceptable for the "false positives OK, false negatives not" rule, but worth a `tracing::debug!` so a missed-library bug is diagnosable from logs without re-running with custom instrumentation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-header-scan/src/walker.rs` around lines 43 - 46, The loop currently swallows std::fs::read_to_string failures silently; change the read_to_string call in the while let Some(file) = queue.pop_front() loop (the block around read_to_string(&file)) to capture the Err(e) case and emit a tracing::debug! (including the file path and error) before continuing, preserving the current behavior but making non-UTF-8 or transient I/O failures visible in logs for diagnostics.
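A minimal sketch of the suggested shape, with `eprintln!` standing in for the `tracing::debug!` the real walker would use (function and file names here are illustrative, not from the crate):

```rust
use std::collections::VecDeque;
use std::path::PathBuf;

// Read a file for include scanning; on failure, log and skip it instead of
// silently dropping it (and its transitive includes) from the walk.
fn read_for_scan(file: &PathBuf) -> Option<String> {
    match std::fs::read_to_string(file) {
        Ok(text) => Some(text),
        Err(e) => {
            // In the real walker this would be tracing::debug!.
            eprintln!("header scan skipped {}: {e}", file.display());
            None
        }
    }
}

fn main() {
    let mut queue: VecDeque<PathBuf> = VecDeque::new();
    queue.push_back(PathBuf::from("definitely-missing-header.h"));
    while let Some(file) = queue.pop_front() {
        let Some(_text) = read_for_scan(&file) else { continue };
        // ...scan for #include directives and enqueue them here...
    }
}
```

The walk semantics are unchanged (the file is still skipped); only the diagnosability improves.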
35-54: Minor: shadowing the `canon` function with `let canon = canon(...)`.

Inside the seed loop and the BFS loop you bind `let canon = canon(seed)` / `let canon = canon(&resolved)`. It works because the RHS function call resolves to the outer-scope `canon`, but the shadow then makes it impossible to call the helper again in the same block and is unnecessarily confusing on review. Consider `let canonical = canon(...)` (or rename the helper to `canonicalize_or_input`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-header-scan/src/walker.rs` around lines 35 - 54, The code shadows the helper function canon by declaring local variables named canon in the seed loop and BFS loop; rename those locals (e.g., canonical, canonical_path, or canon_path) where you do let canon = canon(...) so you call the canon(...) function and store its result in a non-conflicting variable (affecting the seed loop and the BFS loop where you insert into visited, reached, and push_back into queue after resolve/scan). Ensure all subsequent uses in those blocks (visited.insert, reached.insert, queue.push_back) use the new local name and that the helper function name canon remains callable elsewhere.
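A self-contained illustration of the suggested rename; the `canon` helper below is a stand-in for the walker's canonicalize-or-fall-back-to-input helper, and the paths are made up:

```rust
use std::path::{Path, PathBuf};

// Stand-in for the walker's helper: canonicalize, falling back to the input.
fn canon(p: &Path) -> PathBuf {
    std::fs::canonicalize(p).unwrap_or_else(|_| p.to_path_buf())
}

fn main() {
    // With a distinct binding name the helper stays callable in-block:
    let canonical = canon(Path::new("seed/main.cpp"));
    let canonical_dep = canon(Path::new("libs/SPI/src/SPI.h"));
    assert!(canonical.ends_with("main.cpp"));
    assert!(canonical_dep.ends_with("SPI.h"));
    // `let canon = canon(...)` would also compile, but a second `canon(...)`
    // call after that line would not: the name would then refer to a PathBuf.
}
```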
113-120: Test assertion is fragile w.r.t. tempdir canonicalization.

`reached` paths are canonicalized but `tmp.path().join("other")` is not; on macOS (`/var/folders/...` → `/private/var/folders/...`) and Windows (UNC `\\?\` prefix on canonicalize) the `starts_with` check can spuriously hold or miss in the future. Canonicalize the `other` directory once and compare against that, e.g.:

```diff
-        assert!(
-            res.reached
-                .iter()
-                .any(|p| p.ends_with("foo.h") && !p.starts_with(tmp.path().join("other"))),
+        let canon_other_dir = std::fs::canonicalize(tmp.path().join("other")).unwrap();
+        assert!(
+            res.reached
+                .iter()
+                .any(|p| p.ends_with("foo.h") && !p.starts_with(&canon_other_dir)),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-header-scan/src/walker.rs` around lines 113 - 120, The test is brittle because res.reached contains canonicalized paths while the comparison uses tmp.path().join("other") which may not be canonicalized; update the test around the call to walk (using main and res.reached) to canonicalize the "other" directory once (e.g., call canonicalize on tmp.path().join("other") before the assert) and then use that canonical path in the starts_with check so comparisons are stable across macOS/Windows path canonicalization quirks.

crates/fbuild-build/src/framework_libs.rs (2)
87-94: `flatten()` silently swallows walk errors.

`.into_iter().filter_entry(should_scan_entry).flatten()` discards every `Err` (permission denied, broken symlink, IO failure) without logging. On a CI runner with a partially-readable project tree this would silently drop seed files and produce an under-selected library set with no signal. Consider logging dropped entries at `debug!`/`warn!` so resolution misses become diagnosable.

♻️ Sketch

```diff
-    for entry in WalkDir::new(root)
-        .into_iter()
-        .filter_entry(should_scan_entry)
-        .flatten()
-    {
+    for entry in WalkDir::new(root).into_iter().filter_entry(should_scan_entry) {
+        let entry = match entry {
+            Ok(e) => e,
+            Err(err) => {
+                tracing::debug!("framework seed walk skipped: {err}");
+                continue;
+            }
+        };
         if !entry.file_type().is_file() {
             continue;
         }
         if is_source_or_header_file(entry.path()) {
             seeds.push(entry.path().to_path_buf());
         }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-build/src/framework_libs.rs` around lines 87 - 94, The current iteration uses .into_iter().filter_entry(should_scan_entry).flatten(), which silently drops Err values from WalkDir; change the loop to explicitly match on each Result from WalkDir::new(root).into_iter().filter_entry(should_scan_entry) (i.e. handle the Result returned instead of calling flatten()), logging any Err(e) with debug! or warn! (including context like the path/root and the error) and continue on errors, and only call entry.file_type().is_file() for the Ok(entry) branch so IO/permission/broken-symlink failures are visible in logs.
46-54: Consider demoting per-library selection log to `debug!`.

`tracing::info!` runs once per selected library on every resolve, so a teensy build that picks up SPI/Wire/SerialFlash/etc. will spam the orchestrator's info stream on every invocation. A single info-level summary line ("selected N framework libraries: …") plus per-library `debug!` would be friendlier; the current shape is fine if the orchestrator already runs at `warn` by default.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-build/src/framework_libs.rs` around lines 46 - 54, Change the per-library tracing::info! inside the loop that iterates over selection.required_libraries and finds matches in libraries to tracing::debug! so individual library lines are emitted at debug level; additionally, emit a single summary info-level line before or after the loop (e.g., "selected N framework libraries: [names]") using selection.required_libraries.len() and the collected names to avoid spamming the info stream. Ensure you update the logging call site that references lib.name and lib.source_files.len() (the block using tracing::info! with those symbols) to tracing::debug! and add the new tracing::info! summary that aggregates the selected library names/count.

crates/fbuild-header-scan/src/scanner.rs (1)
231-237: Cosmetic: `open` is bound only to be discarded.

`let (open, close, kind) = match ...` then `let _ = open;` reads as a leftover from refactoring. Either drop the binding entirely (`let (_, close, kind) = ...`) or use it (e.g., to assert symmetry between open/close). Not important.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-header-scan/src/scanner.rs` around lines 231 - 237, The tuple binding introduces an unused variable `open` and the subsequent `let _ = open;` is a leftover; update the match that sets `(open, close, kind)` to instead omit the unused binding (e.g., bind `(_, close, kind)`), or remove the `let _ = open;` and use `open` meaningfully (for example to assert symmetry vs `close`), locating the change around the match on `bytes[p]` that returns `IncludeKind::Angled`/`IncludeKind::Quoted`.

crates/fbuild-test-support/src/elf_probe.rs (2)
152-158: `text_data_bss_sum` re-parses the ELF three times.

Each `section_size` call invokes `self.parse()` and rebuilds the full section list. For a 500 KB firmware ELF in a test that's fine, but if this gets called in a loop (e.g. across all targets in a perf gate) it's quadratic in section count. Trivial fix:

```diff
-    pub fn text_data_bss_sum(&self) -> Result<u64, ElfProbeError> {
-        let mut total: u64 = 0;
-        for name in [".text", ".data", ".bss"] {
-            total = total.saturating_add(self.section_size(name)?);
-        }
-        Ok(total)
-    }
+    pub fn text_data_bss_sum(&self) -> Result<u64, ElfProbeError> {
+        let sections = self.sections()?;
+        let mut total: u64 = 0;
+        for name in [".text", ".data", ".bss"] {
+            if let Some(s) = sections.iter().find(|s| s.name == name) {
+                total = total.saturating_add(s.size);
+            }
+        }
+        Ok(total)
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-test-support/src/elf_probe.rs` around lines 152 - 158, text_data_bss_sum currently calls section_size three times which each calls self.parse(), causing repeated full parses; change it to parse once and sum sizes from the already-parsed data: call self.parse() (or obtain the parsed section list) once at the start of text_data_bss_sum, then iterate the parsed sections to accumulate sizes for ".text", ".data", and ".bss" (or use a new helper that accepts the parsed sections) instead of calling section_size repeatedly; keep references to the existing functions section_size and parse to guide where to change.
188-310: The `build_elf` test fixture is doing a lot; consider extracting it into its own helper module.

`build_elf` is ~120 lines of `object::write::elf::Writer` plumbing, only used inside this one test module. As more `ElfProbe` tests get added (and potentially other crates that want fixture ELFs), this is going to grow. Pull it into `crates/fbuild-test-support/src/elf_fixture.rs` (or `tests/common/`) and re-export from the crate root behind `#[cfg(test)]` or a `test-fixtures` feature. Optional, but it would also let other crates reuse it without copy-paste.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-test-support/src/elf_probe.rs` around lines 188 - 310, The build_elf helper in elf_probe.rs (function build_elf) is large and duplicated across tests; extract it into a dedicated test fixture module (e.g., elf_fixture.rs) inside the fbuild-test-support crate (or a tests/common module), move all Writer-related plumbing and helper types there, and re-export the fixture from the crate root behind #[cfg(test)] or a feature like "test-fixtures"; update elf_probe.rs to call the extracted build_elf function (or a thin wrapper) so tests import the fixture rather than containing the implementation inline.

ci/measure_baseline_205.py (3)
119-122: Optional: Ruff RUF005 — prefer list spread over concatenation.

Static analysis flags both `order` constructions:

```diff
-    if prefer_arm:
-        order = [arm] + pio + [llvm, plain]
-    else:
-        order = [llvm, arm, plain] + pio
+    if prefer_arm:
+        order = [arm, *pio, llvm, plain]
+    else:
+        order = [llvm, arm, plain, *pio]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ci/measure_baseline_205.py` around lines 119 - 122, Ruff RUF005 flags the list concatenations used to build order in the prefer_arm branch and else branch; replace concatenation with list unpacking to satisfy the rule: when prefer_arm is true change the construction from [arm] + pio + [llvm, plain] to [arm, *pio, llvm, plain], and when false change [llvm, arm, plain] + pio to [llvm, arm, plain, *pio], updating the variable order accordingly (refer to prefer_arm, order, arm, pio, llvm, plain).
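A standalone illustration of the spread form Ruff prefers; the toolchain names below are placeholders, only the list-building shape matters:

```python
# Placeholder toolchain names; only the list-building shape matters.
arm, llvm, plain = "arm-none-eabi-gcc", "clang", "gcc"
pio = ["pio-arm", "pio-host"]

def build_order(prefer_arm: bool) -> list[str]:
    # RUF005-clean: unpack instead of concatenating lists.
    if prefer_arm:
        return [arm, *pio, llvm, plain]
    return [llvm, arm, plain, *pio]

print(build_order(True))   # ['arm-none-eabi-gcc', 'pio-arm', 'pio-host', 'clang', 'gcc']
print(build_order(False))  # ['clang', 'arm-none-eabi-gcc', 'gcc', 'pio-arm', 'pio-host']
```

Behavior is identical to the `+` form; the spread just satisfies the lint and avoids intermediate list allocations.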
50-50: Cosmetic: mixed `List[str]` / `tuple[bool, str]` style.

`from __future__ import annotations` is in effect, so PEP 585 lowercase generics work everywhere. Either standardize on `List`/`Tuple`/`Optional` (typing) or on `list`/`tuple`/`X | None` (PEP 604). Currently it's a mix (`List[str]` at 50/130 vs `tuple[bool, str]` at 145).

Also applies to: 130-130, 145-145
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ci/measure_baseline_205.py` at line 50, The code mixes typing-style generics and PEP 585/PEP 604 natives: remove the typing import "from typing import List, Optional" and standardize on built-in generics and unions (use list[str] instead of List[str], and use X | None instead of Optional[X]); keep tuple[bool, str] as-is. Update any function or variable annotations that currently use List[str] and Optional[...] to use list[str] and the | None form respectively (e.g., replace Optional[Foo] with Foo | None).

495-497: Exit-code condition is currently `any_ok or has_data`, but `any_ok` already describes a subset of `has_data`.
`has_data = any(r.tu_count is not None or r.status == "ok" for r in results)`. Any element with `status == "ok"` already makes `has_data` true, so `any_ok or has_data` collapses to just `has_data`. Either drop `any_ok` or change the rule (e.g., require at least one `ok` for exit 0, treat `build_failed` with a partial compdb as a non-zero "soft failure"). The docstring says "Exit code is 0 if at least one target produced data, 1 if every target was skipped or failed", which matches `has_data` alone; recommend simplifying to `return 0 if has_data else 1`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ci/measure_baseline_205.py` around lines 495 - 497, The exit-code logic is redundant: remove the unused/duplicative any_ok check and return 0 if has_data (any(r.tu_count is not None or r.status == "ok" for r in results)) else 1; update the return to "return 0 if has_data else 1" and delete the any_ok assignment (references: any_ok, has_data, results).crates/fbuild-library-select/src/lib.rs (1)
95-121: Pass-2 reconciliation re-walks the full seed+source set every iteration; consider walking only the delta.

Each iteration constructs `recon_seeds = seeds + every_source_of_every_selected_lib` from scratch and re-walks the full graph. The walker has its own `visited` set, but it's local to each `walk()` call, so already-traversed files are re-read and re-scanned on every reconciliation round. For Arduino-class graphs (1–2 rounds) this is fine; for larger frameworks it's O(rounds × total_files).

A cheap improvement is to walk only newly added sources each round (seeds + sources of libs added since last round) and union the reached set:

```diff
-    loop {
-        let mut recon_seeds: Vec<PathBuf> = seeds.to_vec();
-        for idx in &selected {
-            for src in &libraries[*idx].source_files {
-                recon_seeds.push(src.clone());
-            }
-        }
-        let res = walk(&recon_seeds, &full_search_paths);
+    let mut last_selected: BTreeSet<usize> = BTreeSet::new();
+    loop {
+        let new_indices: Vec<usize> =
+            selected.difference(&last_selected).copied().collect();
+        let mut recon_seeds: Vec<PathBuf> = if last_selected.is_empty() {
+            seeds.to_vec()
+        } else {
+            Vec::new()
+        };
+        for idx in &new_indices {
+            for src in &libraries[*idx].source_files {
+                recon_seeds.push(src.clone());
+            }
+        }
+        last_selected = selected.clone();
+        let res = walk(&recon_seeds, &full_search_paths);
```

Optional, since perf isn't the immediate goal of this PR, but note this is the hot path that Phase 4 zccache will memoize.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-library-select/src/lib.rs` around lines 95 - 121, The loop currently rebuilds recon_seeds as seeds + sources of all selected libs and calls walk() each iteration, re-walking already-traversed files; instead maintain a prev_selected set and on each iteration only build recon_seeds from seeds plus sources of newly-selected libs (i.e., indices in selected \ prev_selected), call walk(&delta_seeds, &full_search_paths), then union res.reached into a cumulative reached set (used to update all_included and to test path_in_any when expanding canon_lib_dirs), union res.unresolved into all_unresolved, and set prev_selected = selected before the next iteration so only the delta is re-walked next time (keep using symbols selected, prev_selected, recon_seeds/delta_seeds, walk, res, all_included, all_unresolved, canon_lib_dirs).

DONE.md (1)
123-268: Consolidate the duplicated "Victory re-confirmation" sections.Lines 123–268 contain five near-identical "Victory re-confirmation" / "Re-verification" blocks all dated 2026-04-24, each repeating the same gate matrix and follow-up phase listing. This reads as session-log noise rather than a finalized DONE record and will keep growing on every subsequent re-verification. Suggest collapsing them into a single "Final verification (2026-04-24)" section plus, if needed, a short "Re-runs" log table with just date / commands / result.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@DONE.md` around lines 123 - 268, There are five near-identical "Victory re-confirmation"/"Re-verification" blocks dated 2026-04-24 that duplicate the same gate matrix and follow-up notes; collapse them into a single consolidated section titled e.g. "Final verification (2026-04-24)" that preserves one gate table, the notable suite counts, and the follow-up phase list, and optionally add a short "Re-runs" table with just date / command / result for any repeated sweeps; remove the extra repeated headings ("Victory re-confirmation", "Re-verification") and duplicated paragraphs so the DONE.md contains only one authoritative verification record for that date while keeping the essential details and follow-up references.

crates/fbuild-test-support/src/compile_db.rs (1)
173-180: Optional: avoid cloning paths in `tu_count`/`files`.

`files()` clones every `PathBuf` to build a `HashSet<PathBuf>`, and `tu_count()` does the same just to read the length. For small test fixtures this is fine; for assertions over a 500-TU teensy compdb it's needless allocation. A non-cloning variant:

```diff
-    pub fn tu_count(&self) -> usize {
-        self.files().len()
-    }
+    pub fn tu_count(&self) -> usize {
+        self.entries.iter().map(|e| e.file.as_path()).collect::<HashSet<_>>().len()
+    }
```

…or expose a `fn distinct_files(&self) -> HashSet<&Path>` helper. Pure ergonomics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-test-support/src/compile_db.rs` around lines 173 - 180, The current files() and tu_count() clone every PathBuf which is unnecessary; change files() to return non-owning references (e.g., HashSet<&Path> or HashSet<&PathBuf>) or add a new distinct_files(&self) -> HashSet<&Path> helper that collects entries.iter().map(|e| e.file.as_path()) into the set, and update tu_count() to compute its length via that non-cloning set (or directly compute entries.iter().map(|e| e.file.as_path()).collect::<HashSet<_>>().len()) so you avoid allocating/cloning PathBufs; update references to files() accordingly or keep files() for API compatibility and add distinct_files()/tu_count() to use the non-cloning path collection.

crates/fbuild-test-support/src/mini_framework.rs (2)
270-289: Nit: `collect_seeds` silently swallows non-`NotFound` `read_dir` errors.

`let Ok(entries) = std::fs::read_dir(dir) else { return; }` is intentional for the lazy `project/src` case, but it also masks permission/IO errors mid-walk on a populated subtree, which would surface as a mysteriously empty seed list rather than a panic. For a test fixture this is fine, but if you ever see flaky CI on Windows it's worth converting to an `expect` for non-`NotFound` errors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-test-support/src/mini_framework.rs` around lines 270 - 289, collect_seeds currently ignores all errors from std::fs::read_dir(dir) which can mask permission/IO problems; change the read_dir handling in collect_seeds to inspect the Err variant and only silently return when error.kind() == std::io::ErrorKind::NotFound, otherwise surface the error (e.g., panic/expect or log and panic) so permission/IO errors during the recursive walk are not swallowed; update the match surrounding std::fs::read_dir(dir) in the collect_seeds function accordingly.
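An illustrative shape for the NotFound-only tolerance (the helper name and return type are made up for the demo, not the fixture's real API):

```rust
use std::io::ErrorKind;
use std::path::{Path, PathBuf};

// A missing directory is treated as empty (the lazy project/src case), but
// any other I/O error (permissions, transient failure) panics loudly
// instead of being swallowed into an empty seed list.
fn entries_or_empty(dir: &Path) -> Vec<PathBuf> {
    match std::fs::read_dir(dir) {
        Ok(entries) => entries
            .map(|e| e.expect("read_dir entry").path())
            .collect(),
        Err(e) if e.kind() == ErrorKind::NotFound => Vec::new(),
        Err(e) => panic!("read_dir({}) failed: {e}", dir.display()),
    }
}

fn main() {
    // A missing directory yields an empty list rather than a panic.
    assert!(entries_or_empty(Path::new("no-such-dir-for-this-demo")).is_empty());
}
```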
97-115: Optional: guard against repeat `add_library(name)` calls.

Calling `add_library("SPI")` twice silently re-runs `create_dir_all` (no-op) and re-writes `SPI.h` back to empty, but any previously-written `.cpp`/`extra(...)` files from the first call are left in place. That mismatch could produce surprising fixtures if a test ever rebuilds the same library. A `debug_assert!(!lib_dir.exists())` (or returning the existing builder cleanly) would make the contract explicit. Defer until a test actually needs it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/fbuild-test-support/src/mini_framework.rs` around lines 97 - 115, The add_library method silently overwrites the library header and leaves prior source files when called twice; update add_library to guard against repeated calls by checking lib_dir.exists() and either debug_assert!(!lib_dir.exists()) to catch accidental duplicates in tests or return an existing LibraryBuilder (constructed from lib_dir/src) when the directory already exists; locate the add_library function and the lib_dir, src_dir and default_header variables to implement the existence check and the chosen behavior.
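A minimal shape for the guard; the `MiniFramework` struct below is a stand-in for the fixture's real builder, showing only the duplicate-call check:

```rust
use std::path::PathBuf;

// Stand-in for the fixture builder; only the duplicate-call guard is shown.
struct MiniFramework {
    root: PathBuf,
}

impl MiniFramework {
    fn add_library(&self, name: &str) -> PathBuf {
        let lib_dir = self.root.join("libraries").join(name);
        // Make the "no rebuilds" contract explicit: a second add_library
        // with the same name trips this in debug/test builds.
        debug_assert!(
            !lib_dir.exists(),
            "library {name} already exists; rebuilding would leave stale files"
        );
        std::fs::create_dir_all(lib_dir.join("src")).expect("create library dir");
        lib_dir
    }
}

fn main() {
    let root = std::env::temp_dir().join("mini-framework-guard-demo");
    let _ = std::fs::remove_dir_all(&root);
    let fw = MiniFramework { root };
    let spi = fw.add_library("SPI");
    assert!(spi.join("src").is_dir());
}
```

`debug_assert!` keeps release builds unaffected while catching accidental duplicate fixture construction in tests.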
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ci/measure_baseline_205.py`:
- Around line 309-313: The code uses Path.relative_to on r.project and
r.elf_path which can raise ValueError for resolved/symlinked paths; update the
markdown generation where lines.append(...) is called to compute a safe relative
path by either wrapping r.project.relative_to(REPO_ROOT) and
r.elf_path.relative_to(REPO_ROOT) in try/except ValueError (falling back to
str(path) or os.path.relpath(path, REPO_ROOT)) or directly call
os.path.relpath(path, REPO_ROOT) to avoid exceptions; refer to the uses of
r.project, r.elf_path, REPO_ROOT and the earlier project = (REPO_ROOT /
project_rel).resolve() to place the fix so a single odd path won't crash the
report.
In `@crates/fbuild-header-scan/src/scanner.rs`:
- Around line 132-148: The raw-string detection is firing inside identifiers
because is_raw_string_open is only checking the upcoming bytes; update the
scanner to require the character before the candidate opener is not an
identifier character (letters, digits, or underscore) or is start-of-file before
treating `R`/`L`/`u`/`U` as a raw-string opener (apply this guard where the code
sets State::RawString and computes open_quote/paren, and likewise in the
duplicate logic at the other occurrence around the 184-206 region); then add an
S-15-style unit test (e.g. "auto FooR = 0;\n#include <a.h>\n" should still
produce 1 include hit) to prevent regressions.
In `@crates/fbuild-library-select/src/lib.rs`:
- Around line 28-39: The docstring for Selection::required_libraries is wrong:
the code builds required_libraries by iterating a BTreeSet<usize> named selected
and pushing libraries[idx].name in input-index order, not lexicographic order;
change the implementation that constructs Selection (the code that iterates
selected and pushes lib.name) to produce a name-sorted Vec<String> (e.g. collect
names and sort by name before assigning to Selection.required_libraries) so the
field is deterministic and matches the doc, and add a regression test that
supplies libraries in non-lexicographic input order (e.g. ["Wire","SPI"]) and
asserts the returned required_libraries is sorted by name (["SPI","Wire"]);
alternatively, if you prefer to keep input order, update the docstring to say
"in input order" and add a test that locks that behavior.
In `@crates/fbuild-test-support/README.md`:
- Around line 57-64: The fenced code block in README.md showing the directory
tree is missing a language tag; update the triple-backtick fence that precedes
the directory layout (the block containing "<tmp>/framework/ libraries/
SPI/src/SPI.h ...") to use a language identifier (use "text") so the fence
becomes ```text, ensuring markdownlint MD040 is satisfied.
In `@crates/fbuild-test-support/src/elf_probe.rs`:
- Around line 84-105: The sections() implementation currently filters out
sections with size == 0, which hides legitimate zero-length entries like a
zero-sized .bss; remove the size filter so every section with a non-empty name
is returned (keep the existing empty-name check), update the doc comment for
sections() to state it returns all named sections including zero-sized ones, and
ensure this change is consistent with the related helpers section() and
section_size() (they can still return sizes of 0 as before).
In `@DONE.md`:
- Around line 237-238: The lines beginning with the token "#205" are triggering
a markdownlint MD018 false positive; locate occurrences of the literal "#205" at
the start of lines (the leading token) and either reflow the sentence so "#205"
is not the first token, or escape/wrap the token (replace with "\#205" or
"`#205`") so it is not parsed as an ATX heading; apply the same change to all
occurrences of the leading "#205".
- Around line 97-103: The fenced code block in DONE.md is missing a language
identifier (MD040); update the block starting with ``` to include a language
such as bash (e.g., change ``` to ```bash) so the commands (uv run soldr cargo
build..., cargo clippy, cargo fmt, cargo test, RUSTDOCFLAGS... cargo doc) are
syntax-highlighted and markdownlint-cli2 stops flagging MD040.
In `@tasks/baseline-205.md`:
- Around line 66-68: The fenced code block containing the command `uv run python
ci/measure_baseline_205.py --out tasks/baseline-205.md` is missing a language
tag; update the opening fence to "```bash" (or remove the redundant fence
entirely since the same command exists earlier) so markdownlint MD040 is
satisfied and the block is properly highlighted; modify the fence surrounding
that exact command to begin with ```bash.
---
Nitpick comments:
In `@ci/env.py`:
- Around line 78-90: clean_env() currently checks "if cargo_bin not in
path_parts" using raw strings which can miss differently-cased duplicates on
Windows; update clean_env() to normalize paths the same way activate() does by
applying os.path.normcase(os.path.normpath(...)) to cargo_bin and to each entry
in path_parts before membership checks and deduping, and ensure when you prepend
cargo_bin you first remove any normalized duplicates from path_parts so
.cargo/bin isn't reintroduced under a different case; use the existing
find_rust_bin(), cargo_bin, path_parts and env variables in this change.
In `@ci/measure_baseline_205.py`:
- Around line 119-122: Ruff RUF005 flags the list concatenations used to build
order in the prefer_arm branch and else branch; replace concatenation with list
unpacking to satisfy the rule: when prefer_arm is true change the construction
from [arm] + pio + [llvm, plain] to [arm, *pio, llvm, plain], and when false
change [llvm, arm, plain] + pio to [llvm, arm, plain, *pio], updating the
variable order accordingly (refer to prefer_arm, order, arm, pio, llvm, plain).
- Line 50: The code mixes typing-style generics and PEP585/PEP604 natives:
remove the typing import "from typing import List, Optional" and standardize on
built-in generics and unions (use list[str] instead of List[str], and use X |
None instead of Optional[X]); keep tuple[bool, str] as-is. Update any function
or variable annotations that currently use List[str] and Optional[...] to use
list[str] and the | None form respectively (e.g., replace Optional[Foo] with Foo
| None).
- Around line 495-497: The exit-code logic is redundant: remove the
unused/duplicative any_ok check and return 0 if has_data (any(r.tu_count is not
None or r.status == "ok" for r in results)) else 1; update the return to "return
0 if has_data else 1" and delete the any_ok assignment (references: any_ok,
has_data, results).
In `@crates/fbuild-build/src/framework_libs.rs`:
- Around line 87-94: The current iteration uses
.into_iter().filter_entry(should_scan_entry).flatten(), which silently drops Err
values from WalkDir; change the loop to explicitly match on each Result from
WalkDir::new(root).into_iter().filter_entry(should_scan_entry) (i.e. handle the
Result returned instead of calling flatten()), logging any Err(e) with debug! or
warn! (including context like the path/root and the error) and continue on
errors, and only call entry.file_type().is_file() for the Ok(entry) branch so
IO/permission/broken-symlink failures are visible in logs.
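A minimal stdlib-only sketch of the suggested shape, using `std::fs::read_dir` in place of `walkdir` and an error vector in place of `warn!`; all names here are illustrative, not the crate's actual API:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Recursively collect files under `root`, recording IO errors instead of
/// silently dropping them -- the analogue of matching each WalkDir `Result`
/// rather than calling `.flatten()`.
fn walk_visible(root: &Path, files: &mut Vec<PathBuf>, errors: &mut Vec<io::Error>) {
    let entries = match fs::read_dir(root) {
        Ok(it) => it,
        Err(e) => {
            // In the real crate this would be `warn!(?root, %e, "walk failed")`.
            errors.push(e);
            return;
        }
    };
    for entry in entries {
        match entry {
            Ok(entry) => {
                let path = entry.path();
                if path.is_dir() {
                    walk_visible(&path, files, errors);
                } else {
                    files.push(path);
                }
            }
            // Broken symlinks / permission failures land here and stay visible.
            Err(e) => errors.push(e),
        }
    }
}
```

The point is only that each `Result` gets an explicit arm; the real fix would keep `filter_entry(should_scan_entry)` and substitute a `match` for `flatten()`.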
- Around line 46-54: Change the per-library tracing::info! inside the loop that
iterates over selection.required_libraries and finds matches in libraries to
tracing::debug! so individual library lines are emitted at debug level;
additionally, emit a single summary info-level line before or after the loop
(e.g., "selected N framework libraries: [names]") using
selection.required_libraries.len() and the collected names to avoid spamming the
info stream. Ensure you update the logging call site that references lib.name
and lib.source_files.len() (the block using tracing::info! with those symbols)
to tracing::debug! and add the new tracing::info! summary that aggregates the
selected library names/count.
In `@crates/fbuild-header-scan/Cargo.toml`:
- Around line 9-10: Remove the empty `[dependencies]` table from Cargo.toml
since the crate has no runtime dependencies; locate the standalone
`[dependencies]` header and delete that section (the lone `[dependencies]` line
and any empty whitespace following it) so the manifest contains only relevant
entries.
In `@crates/fbuild-header-scan/src/scanner.rs`:
- Around line 231-237: The tuple binding introduces an unused variable `open`
and the subsequent `let _ = open;` is a leftover; update the match that sets
`(open, close, kind)` to instead omit the unused binding (e.g., bind `(_, close,
kind)`), or remove the `let _ = open;` and use `open` meaningfully (for example
to assert symmetry vs `close`), locating the change around the match on
`bytes[p]` that returns `IncludeKind::Angled`/`IncludeKind::Quoted`.
In `@crates/fbuild-header-scan/src/walker.rs`:
- Around line 43-46: The loop currently swallows std::fs::read_to_string
failures silently; change the read_to_string call in the while let Some(file) =
queue.pop_front() loop (the block around read_to_string(&file)) to capture the
Err(e) case and emit a tracing::debug! (including the file path and error)
before continuing, preserving the current behavior but making non-UTF-8 or
transient I/O failures visible in logs for diagnostics.
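A small sketch of the intended behavior, with a failure list standing in for the `tracing::debug!` call (the function name and return shape are invented for illustration):

```rust
use std::collections::VecDeque;
use std::path::PathBuf;

/// BFS read loop that records read failures instead of silently skipping
/// them. Behavior is unchanged (unreadable files are still skipped), but
/// non-UTF-8 or transient IO errors are no longer invisible.
fn drain_queue(mut queue: VecDeque<PathBuf>) -> (Vec<String>, Vec<(PathBuf, std::io::Error)>) {
    let mut contents = Vec::new();
    let mut failures = Vec::new();
    while let Some(file) = queue.pop_front() {
        match std::fs::read_to_string(&file) {
            Ok(text) => contents.push(text),
            Err(e) => {
                // Would be: tracing::debug!(?file, %e, "skipping unreadable file");
                failures.push((file, e));
            }
        }
    }
    (contents, failures)
}
```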
- Around line 35-54: The code shadows the helper function canon by declaring
local variables named canon in the seed loop and BFS loop; rename those locals
(e.g., canonical, canonical_path, or canon_path) where you do let canon =
canon(...) so you call the canon(...) function and store its result in a
non-conflicting variable (affecting the seed loop and the BFS loop where you
insert into visited, reached, and push_back into queue after resolve/scan).
Ensure all subsequent uses in those blocks (visited.insert, reached.insert,
queue.push_back) use the new local name and that the helper function name canon
remains callable elsewhere.
- Around line 113-120: The test is brittle because res.reached contains
canonicalized paths while the comparison uses tmp.path().join("other") which may
not be canonicalized; update the test around the call to walk (using main and
res.reached) to canonicalize the "other" directory once (e.g., call canonicalize
on tmp.path().join("other") before the assert) and then use that canonical path
in the starts_with check so comparisons are stable across macOS/Windows path
canonicalization quirks.
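The stable comparison pattern can be sketched with stdlib only (the helper name is made up; the real test would inline this around its `starts_with` assert):

```rust
use std::path::Path;

/// Canonicalize the expected directory once before comparing, so the check
/// matches the canonicalized paths the walker stores (symlinked /tmp on
/// macOS, \\?\ verbatim prefixes on Windows).
fn starts_with_canonical(reached: &Path, dir: &Path) -> std::io::Result<bool> {
    Ok(reached.starts_with(dir.canonicalize()?))
}
```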
In `@crates/fbuild-library-select/src/lib.rs`:
- Around line 95-121: The loop currently rebuilds recon_seeds as seeds + sources
of all selected libs and calls walk() each iteration, re-walking
already-traversed files; instead maintain a prev_selected set and on each
iteration only build recon_seeds from seeds plus sources of newly-selected libs
(i.e., indices in selected \ prev_selected), call walk(&delta_seeds,
&full_search_paths), then union res.reached into a cumulative reached set (used
to update all_included and to test path_in_any when expanding canon_lib_dirs),
union res.unresolved into all_unresolved, and set prev_selected = selected
before the next iteration so only the delta is re-walked next time (keep using
symbols selected, prev_selected, recon_seeds/delta_seeds, walk, res,
all_included, all_unresolved, canon_lib_dirs).
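The delta-walk idea can be shown with plain sets; `seed_reach` stands in for walking the seeds, `lib_reach` for walking each library's sources, and `select` for the attribution pass. None of these names are the real resolver API:

```rust
use std::collections::HashSet;

/// Incremental reconciliation sketch: keep `prev_selected` and, on each
/// pass, walk only the libraries that became selected since the last pass,
/// accumulating into a cumulative `reached` set.
fn reconcile(
    seed_reach: &[&'static str],
    lib_reach: &[(usize, Vec<&'static str>)],
    select: impl Fn(&HashSet<&'static str>) -> HashSet<usize>,
) -> (HashSet<usize>, HashSet<&'static str>) {
    let mut reached: HashSet<&'static str> = seed_reach.iter().copied().collect();
    let mut prev_selected: HashSet<usize> = HashSet::new();
    let mut selected = select(&reached);
    while selected != prev_selected {
        // Walk only the delta: selected \ prev_selected.
        for (id, files) in lib_reach {
            if selected.contains(id) && !prev_selected.contains(id) {
                reached.extend(files.iter().copied());
            }
        }
        prev_selected = selected.clone();
        selected = select(&reached); // attribution over the cumulative set
    }
    (selected, reached)
}
```

Termination still follows from `selected` growing monotonically over a finite library set, exactly as in the all-at-once loop; only the redundant re-walking goes away.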
In `@crates/fbuild-test-support/src/compile_db.rs`:
- Around line 173-180: The current files() and tu_count() clone every PathBuf
which is unnecessary; change files() to return non-owning references (e.g.,
HashSet<&Path> or HashSet<&PathBuf>) or add a new distinct_files(&self) ->
HashSet<&Path> helper that collects entries.iter().map(|e| e.file.as_path())
into the set, and update tu_count() to compute its length via that non-cloning
set (or directly compute entries.iter().map(|e|
e.file.as_path()).collect::<HashSet<_>>().len()) so you avoid allocating/cloning
PathBufs; update references to files() accordingly or keep files() for API
compatibility and add distinct_files()/tu_count() to use the non-cloning path
collection.
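The non-cloning variant, on cut-down stand-ins for the real `CompileDb` types:

```rust
use std::collections::HashSet;
use std::path::{Path, PathBuf};

struct Entry {
    file: PathBuf,
}

struct CompileDb {
    entries: Vec<Entry>,
}

impl CompileDb {
    /// Borrowing alternative to a `HashSet<PathBuf>`-returning `files()`:
    /// collect `&Path` references so deduplication allocates no PathBufs.
    fn distinct_files(&self) -> HashSet<&Path> {
        self.entries.iter().map(|e| e.file.as_path()).collect()
    }

    /// TU count via the non-cloning set.
    fn tu_count(&self) -> usize {
        self.distinct_files().len()
    }
}
```

Keeping an owning `files()` alongside `distinct_files()` preserves API compatibility while the hot path stays allocation-free.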
In `@crates/fbuild-test-support/src/elf_probe.rs`:
- Around line 152-158: text_data_bss_sum currently calls section_size three
times which each calls self.parse(), causing repeated full parses; change it to
parse once and sum sizes from the already-parsed data: call self.parse() (or
obtain the parsed section list) once at the start of text_data_bss_sum, then
iterate the parsed sections to accumulate sizes for ".text", ".data", and ".bss"
(or use a new helper that accepts the parsed sections) instead of calling
section_size repeatedly; keep references to the existing functions section_size
and parse to guide where to change.
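A parse-once sketch over a simplified section list (the real `ElfProbe` returns richer types; this only shows the single-pass accumulation):

```rust
/// Sum .text + .data + .bss from one already-parsed section list, instead
/// of calling a `section_size()` helper (which re-parses the ELF) three
/// times.
fn text_data_bss_sum(sections: &[(String, u64)]) -> u64 {
    sections
        .iter()
        .filter(|(name, _)| matches!(name.as_str(), ".text" | ".data" | ".bss"))
        .map(|(_, size)| size)
        .sum()
}
```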
- Around line 188-310: The build_elf helper in elf_probe.rs (function build_elf)
is large and duplicated across tests; extract it into a dedicated test fixture
module (e.g., elf_fixture.rs) inside the fbuild-test-support crate (or a
tests/common module), move all Writer-related plumbing and helper types there,
and re-export the fixture from the crate root behind #[cfg(test)] or a feature
like "test-fixtures"; update elf_probe.rs to call the extracted build_elf
function (or a thin wrapper) so tests import the fixture rather than containing
the implementation inline.
In `@crates/fbuild-test-support/src/mini_framework.rs`:
- Around line 270-289: collect_seeds currently ignores all errors from
std::fs::read_dir(dir) which can mask permission/IO problems; change the
read_dir handling in collect_seeds to inspect the Err variant and only silently
return when error.kind() == std::io::ErrorKind::NotFound, otherwise surface the
error (e.g., panic/expect or log and panic) so permission/IO errors during the
recursive walk are not swallowed; update the match surrounding
std::fs::read_dir(dir) in the collect_seeds function accordingly.
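The suggested guard, sketched as a free function (the name and panic strategy are illustrative; the real `collect_seeds` might log instead):

```rust
use std::{fs, io, path::Path};

/// Tolerate only a missing directory; surface any other read_dir failure
/// (permissions, transient IO) instead of swallowing it.
fn read_dir_or_panic(dir: &Path) -> Vec<fs::DirEntry> {
    match fs::read_dir(dir) {
        Ok(it) => it
            .collect::<Result<Vec<_>, _>>()
            .unwrap_or_else(|e| panic!("failed to read entry in {}: {e}", dir.display())),
        Err(e) if e.kind() == io::ErrorKind::NotFound => Vec::new(),
        Err(e) => panic!("failed to read {}: {e}", dir.display()),
    }
}
```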
- Around line 97-115: The add_library method silently overwrites the library
header and leaves prior source files when called twice; update add_library to
guard against repeated calls by checking lib_dir.exists() and either
debug_assert!(!lib_dir.exists()) to catch accidental duplicates in tests or
return an existing LibraryBuilder (constructed from lib_dir/src) when the
directory already exists; locate the add_library function and the lib_dir,
src_dir and default_header variables to implement the existence check and the
chosen behavior.
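The "return the existing builder" option can be sketched with paths alone; `add_library` here is a simplified stand-in for the `MiniFramework` method, not its real signature:

```rust
use std::path::{Path, PathBuf};

/// Guarded add_library: if the library directory already exists, reuse it
/// instead of silently overwriting the header and accumulating stale
/// sources from an earlier call.
fn add_library(root: &Path, name: &str) -> std::io::Result<(PathBuf, PathBuf)> {
    let lib_dir = root.join("libraries").join(name);
    let src_dir = lib_dir.join("src");
    if lib_dir.exists() {
        // Repeated call: return the existing layout untouched.
        return Ok((lib_dir, src_dir));
    }
    std::fs::create_dir_all(&src_dir)?;
    std::fs::write(src_dir.join(format!("{name}.h")), "#pragma once\n")?;
    Ok((lib_dir, src_dir))
}
```

In tests a `debug_assert!(!lib_dir.exists())` would instead turn the duplicate call into a loud failure; either choice removes the silent overwrite.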
In `@DONE.md`:
- Around line 123-268: There are five near-identical "Victory
re-confirmation"/"Re-verification" blocks dated 2026-04-24 that duplicate the
same gate matrix and follow-up notes; collapse them into a single consolidated
section titled e.g. "Final verification (2026-04-24)" that preserves one gate
table, the notable suite counts, and the follow-up phase list, and optionally
add a short "Re-runs" table with just date / command / result for any repeated
sweeps; remove the extra repeated headings ("Victory re-confirmation",
"Re-verification") and duplicated paragraphs so the DONE.md contains only one
authoritative verification record for that date while keeping the essential
details and follow-up references.
⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (25)

- `Cargo.toml`, `DONE.md`, `ci/env.py`, `ci/measure_baseline_205.py`, `crates/fbuild-build/Cargo.toml`, `crates/fbuild-build/src/framework_libs.rs`, `crates/fbuild-header-scan/Cargo.toml`, `crates/fbuild-header-scan/README.md`, `crates/fbuild-header-scan/src/README.md`, `crates/fbuild-header-scan/src/lib.rs`, `crates/fbuild-header-scan/src/scanner.rs`, `crates/fbuild-header-scan/src/walker.rs`, `crates/fbuild-library-select/Cargo.toml`, `crates/fbuild-library-select/README.md`, `crates/fbuild-library-select/src/README.md`, `crates/fbuild-library-select/src/lib.rs`, `crates/fbuild-test-support/Cargo.toml`, `crates/fbuild-test-support/README.md`, `crates/fbuild-test-support/src/compile_db.rs`, `crates/fbuild-test-support/src/elf_probe.rs`, `crates/fbuild-test-support/src/lib.rs`, `crates/fbuild-test-support/src/mini_framework.rs`, `tasks/README.md`, `tasks/baseline-205.md`, `tasks/zccache-kv-design.md`
```
uv run soldr cargo build --workspace # green (31s)
uv run soldr cargo clippy --workspace --all-targets -- -D warnings # green
uv run soldr cargo fmt --all --check # clean
uv run soldr cargo test --workspace # all test suites pass
RUSTDOCFLAGS="-D warnings" uv run soldr cargo doc --workspace --no-deps # green
```
Add a language identifier to the fenced code block.
markdownlint-cli2 flags MD040 here. Tag this fenced block as bash (or console) so syntax highlighting and lint stay green.
📝 Suggested fix

````diff
-```
+```bash
 uv run soldr cargo build --workspace # green (31s)
 uv run soldr cargo clippy --workspace --all-targets -- -D warnings # green
 uv run soldr cargo fmt --all --check # clean
 uv run soldr cargo test --workspace # all test suites pass
 RUSTDOCFLAGS="-D warnings" uv run soldr cargo doc --workspace --no-deps # green
````
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 97-97: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@DONE.md` around lines 97 - 103, The fenced code block in DONE.md is missing a
language identifier (MD040); update the block starting with ``` to include a
language such as bash (e.g., change ``` to ```bash) so the commands (uv run
soldr cargo build..., cargo clippy, cargo fmt, cargo test, RUSTDOCFLAGS... cargo
doc) are syntax-highlighted and markdownlint-cli2 stops flagging MD040.
> Foundation phases (0–3 plus the `framework_libs.rs` delegation in Phase 5) of
> #205 remain green end-to-end. The scanner, walker, and PlatformIO-LDF-style
MD018 false positive caused by line-leading #205.
Lines 238 and 261 begin with #205 remain green …. markdownlint parses the leading # followed by digits as a malformed ATX heading. Either reflow the prose so #205 is not the first token, or escape it as \#205 / wrap as `#205` to silence the warning.
Also applies to: 260-261
🧰 Tools
🪛 markdownlint-cli2 (0.22.1)
[warning] 238-238: No space after hash on atx style heading
(MD018, no-missing-space-atx)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@DONE.md` around lines 237 - 238, The lines beginning with the token "#205"
are triggering a markdownlint MD018 false positive; locate occurrences of the
literal "#205" at the start of lines (the leading token) and either reflow the
sentence so "#205" is not the first token, or escape/wrap the token (replace
with "\#205" or "`#205`") so it is not parsed as an ATX heading; apply the same
change to all occurrences of the leading "#205".
Lands the Rust-native LDF-style library-selection foundation plus the remaining Phase 0 test-support scaffolding called for in #205's plan.

## What ships

### New crates (Phases 1–3)

- `fbuild-header-scan` — `scan(&str) -> Vec<IncludeRef>` line-oriented C/C++ tokenizer + `walk(seeds, search_paths) -> WalkResult` BFS include-graph walker. 34 unit tests; quoted-first resolution, search-path precedence, cycle/diamond/depth-5 termination, deterministic sorted output.
- `fbuild-library-select` — `resolve(seeds, search_paths, libraries)` PlatformIO-LDF-style two-pass resolver with path-prefix attribution (not basename matching). 5 unit tests including the #204 regression guard and same-basename-disambiguation case.

### Wiring (Phase 5 partial)

- `fbuild-build/src/framework_libs.rs` now delegates to `fbuild-library-select`. Public API preserved (`resolve_framework_library_sources(libraries, project_dir, src_dir)`) so teensy / stm32 / avr / esp32 orchestrators consume the new resolver transparently. Old basename-matching helpers removed.

### Phase 0 test-support deliverables

- `fbuild-test-support::MiniFramework` — fluent TempDir builder for fake Teensyduino / STM32duino / Arduino trees. Discoverable by the existing `discover_framework_libraries`. 12 unit tests + walker / resolver round-trip helpers.
- `fbuild-test-support::ElfProbe` — `object`-crate wrapper exposing `sections()`, `symbols()`, `section_size()`, `text_data_bss_sum()`, `has_symbol_containing()`. Builds fixtures via `object::write` so no binary blobs are checked in. 9 unit tests.
- `fbuild-test-support::CompileDb` — `compile_commands.json` parser with shell-aware `command`/`arguments` handling, relative-path resolution, and `forbidden_present()` for #204 regression checks. 11 unit tests.
- `tasks/zccache-kv-design.md` — design note for the K/V surface needed by Phase 4. Filed as zackees/zccache#130 (folded into the existing zccache-artifact crate, not a new binary).
- `ci/measure_baseline_205.py` + `tasks/baseline-205.md` (placeholder) — capture script for AC #1/#2/#3 baseline numbers; numeric capture deferred to a clean-CI run.

### Incidental fix

- `ci/env.py::find_rust_bin` now requires `cargo` to actually exist in candidate bin dirs, and `activate()` reorders rustup ahead of any pre-existing chocolatey cargo, restoring the lint hook on dual-rust Windows hosts.

## Behaviour changes

1. Unreferenced framework libraries no longer compile (#204 root cause). Path-prefix attribution requires the walker to actually resolve an include into a library's `include_dirs`, blocking FNET / Snooze / RadioHead / mbedtls from a Blink sketch on teensyLC.
2. STM32 SPI auto-discovers (#202). The walker resolves `<SPI.h>` inside `Arduino_Core_STM32/libraries/SPI/src/` without manual allowlists.
3. Same-basename libraries no longer collide. `"foo/config.h"` no longer pulls in a `Bar` library whose `bar/config.h` shares a basename.

## Verification

- `uv run soldr cargo check --workspace --all-targets` — green.
- `uv run soldr cargo clippy --workspace --all-targets -- -D warnings` — green (only the pre-existing `clippy.toml` MSRV info note; zero lint findings).
- `uv run soldr cargo fmt --all --check` — clean.
- `RUSTDOCFLAGS="-D warnings" uv run soldr cargo doc --workspace --no-deps` — green (15 crate docs).
- `uv run soldr cargo test -p fbuild-header-scan -p fbuild-library-select -p fbuild-test-support` — 34 + 5 + 33 + 2 doctests = 74 passed, 0 failed.

## Out of scope (tracked follow-ups)

- Phase 4 (zccache memoization) — pending zackees/zccache#130.
- Baseline numeric capture — script ready; data capture pending CI.
- Phase 6 (acceptance gates / ELF probes wired into board tests).
- Phase 7 (perf gates).
- Phase 8 (`fbuild lib-select --explain` CLI + final `framework_libs.rs` deletion).

Refs: #202, #204, #205, zackees/zccache#130

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Force-pushed from 3e247f7 to f42b406.
…docs (#208)

Continues #205 on top of the foundation merged in PR #207. No-op for runtime behaviour; adds gates that *prove* the foundation behaves as specified, plus a diagnostic CLI and architecture docs.

## Phase 6 — acceptance gates (#[ignore]'d, CI-only)

- `crates/fbuild-build/tests/teensylc_acceptance.rs` — builds teensyLC Blink end-to-end and asserts AC#1:
  * `.bss` <= 3 KB
  * No `fnet_` / `snooze_` / `RadioHead` / `mbedtls` symbol substrings (#204 regression guard)
  * `setup` / `loop` symbols present
  * `compile_commands.json` TU count <= 250
  * No FNET / Snooze / RadioHead / mbedtls entries in compile DB
- `crates/fbuild-build/tests/stm32_acceptance.rs` — builds an stm32f103c8 sketch that `#include`s `<SPI.h>` and asserts AC#4:
  * `SPIClass` symbol present in ELF
  * compile_commands.json contains an SPI library entry
  * No manual allowlist needed (closes #202)

Both gates use `fbuild_test_support::{ElfProbe, CompileDb}` from PR #207 and are gated behind `#[ignore]` because they download real toolchains and Teensyduino / STM32duino frameworks. CI runs them via `cargo test -- --ignored`.

## Phase 8 — diagnostic CLI

- `fbuild lib-select <project> -e <env>` — drives the resolver in-process and prints the selected library set.
- `--explain` shows per-library trigger header + unresolved includes.
- `--json` emits machine-readable output (`{ selected, unresolved, included_files, seeds, framework }`).
- Mutually-exclusive flags (`conflicts_with`).
- New `crates/fbuild-cli/src/lib_select.rs` (~420 LoC) owns the diagnostic flow without depending on `fbuild-build` — uses `fbuild-packages::library` directly to keep the CLI a leaf binary.
- 3 new tests in `crates/fbuild-cli/tests/lib_select.rs` (help-output, missing-project exit code, flag-conflict).

## Phase 8 — architecture docs

- `docs/architecture/library-selection.md` — new subsystem doc covering scanner / walker / resolver, path-prefix attribution, two-pass convergence, future Phase 4 cache.
- `docs/CLAUDE.md` table updated.
- `docs/INDEX.md` FAQ entries added.
- `docs/architecture/README.md` lists the new doc.
- `tasks/lessons.md` appended with the LDF-semantics lesson.

## Verification

- `uv run soldr cargo check --workspace --all-targets` — green.
- `uv run soldr cargo clippy --workspace --all-targets -- -D warnings` — green (only pre-existing `clippy.toml` MSRV info note).
- `uv run soldr cargo fmt --all --check` — clean.
- `RUSTDOCFLAGS="-D warnings" uv run soldr cargo doc --workspace --no-deps` — green.
- `uv run soldr cargo test -p fbuild-cli` — 13 passed, 0 failed (3 new lib_select + existing).
- Phase 6 acceptance tests `#[ignore]`'d; CI runs them with `--ignored`.

## Out of scope (still tracked)

- Phase 4 — zccache K/V memoization (gated on zackees/zccache#130).
- Phase 7 — perf gates wired into `bench/fastled-examples`.
- Baseline numeric capture (`tasks/baseline-205.md` placeholder).

Refs: #202, #204, #205, zackees/zccache#130

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the foundation milestone of #205 and lands the remaining Phase 0 test-support scaffolding so Phase 6 acceptance gates have something to call.
Summary
- `fbuild-header-scan` and `fbuild-library-select` — Rust-native LDF-style scanner + walker + two-pass resolver with path-prefix attribution.
- `fbuild-build/src/framework_libs.rs` now delegates to the new resolver, so every orchestrator (teensy / stm32 / avr / esp32) consumes the new selection automatically — fixing #204 ("teensy30/LC: Teensy orchestrator compiles unreferenced framework libraries (FNET, Snooze, RadioHead, mbedtls) — causes .bss / .dmabuffers RAM overflow") and #202 ("stm32: Arduino_Core_STM32 framework libraries (SPI, Wire, ...) not discovered — fatal error: SPI.h: No such file or directory", STM32 SPI auto-discovery).
- `fbuild-test-support` gains `MiniFramework`, `ElfProbe`, and `CompileDb` — the Phase 0 utilities the rest of the #205 plan ("feat(library-selection): Rust-native LDF-style transitive header scanner, zccache-backed") depends on.
- `ci/measure_baseline_205.py` + `tasks/baseline-205.md` — capture script + placeholder, ready for a clean-CI run.
- `tasks/zccache-kv-design.md` — design note for the Phase 4 K/V cache, filed upstream as zackees/zccache#130 (folded into the existing `zccache-artifact` crate; not a new binary; comprehensive + adversarial test plan).
- `uv run soldr cargo check --workspace --all-targets`
- `uv run soldr cargo clippy --workspace --all-targets -- -D warnings` (only the pre-existing `clippy.toml` MSRV info note)
- `uv run soldr cargo fmt --all --check`
- `RUSTDOCFLAGS="-D warnings" uv run soldr cargo doc --workspace --no-deps`
- `cargo test -p fbuild-header-scan -p fbuild-library-select -p fbuild-test-support`

Test plan
- `uv run python ci/test.py` locally — confirms the wider workspace suite still passes.
- `FNET / Snooze / RadioHead / mbedtls` should be absent from `compile_commands.json`.
- `uv run python ci/measure_baseline_205.py` on a host with Teensy / STM32 toolchains to populate `tasks/baseline-205.md` (numeric capture deferred — see that file for instructions).

Follow-ups (tracked, not in this PR)
- Phase 6 acceptance gates wired through `ElfProbe` + `CompileDb`.
- `fbuild lib-select --explain` CLI + final `framework_libs.rs` deletion.

Refs: #202, #204, #205, zackees/zccache#130
🤖 Generated with Claude Code