Add Evaluators: prepare interface, and vectorisation utilities#157
Conversation
…and vectorisation utilities
Introduces `AbstractPPL.Evaluators`, a new submodule providing structural
plumbing for preparing and calling model evaluators, with or without AD.
Core types and functions:
- `Prepared{AD,E,C}` — wraps an evaluator with an AD backend type (for
dispatch) and an optional backend-specific cache; exported from AbstractPPL
- `prepare(problem, x; check_dims=true)` / `prepare(adtype, problem, x; check_dims=true)`
— structural and AD-aware preparation; AD-backend extensions implement the
three-argument form and thread `check_dims` to `VectorEvaluator{check_dims}`
- `value_and_gradient!!(prepared, x)` / `value_and_jacobian!!(prepared, x)`
— stub interface for derivative computation; returned arrays may alias
internal cache buffers (callers copy if they need to retain past the next call)
- `evaluate!!(evaluator, x)` — extends the existing AbstractPPL.evaluate!!
for Prepared, VectorEvaluator, and NamedTupleEvaluator
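A minimal usage sketch of the interface above; `problem`, `x0`, and `AutoForwardDiff()` (from ADTypes, requiring the matching backend extension to be loaded) are assumptions for illustration, not part of this PR:

```julia
using AbstractPPL.Evaluators: prepare, value_and_gradient!!
using AbstractPPL: evaluate!!

prep = prepare(problem, x0)                 # structural (no-AD) preparation
lp = evaluate!!(prep, x0)                   # primal evaluation

prep_ad = prepare(AutoForwardDiff(), problem, x0)  # AD-aware form (extension)
lp, grad = value_and_gradient!!(prep_ad, x0)
grad_kept = copy(grad)   # returned array may alias an internal cache buffer
```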
Evaluator shapes:
- `VectorEvaluator{CheckInput}` — wraps a callable with a flat-vector input
contract; CheckInput=false skips the per-call length check for AD hot paths
- `NamedTupleEvaluator{CheckInput}` — same for NamedTuple inputs with a
fixed prototype
Vectorisation utilities:
- `flatten_to!!(buf, x)` / `unflatten_to!!(x, buf)` — round-trip
vectorisation for Real, Complex, AbstractArray, Tuple, and NamedTuple;
@generated NamedTuple unflatten preserves type inferability
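A round-trip sketch of the contract above (variable names illustrative; the buffer length is assumed to equal the flat length of `x`):

```julia
using AbstractPPL.Evaluators: flatten_to!!, unflatten_to!!

x   = (a = 1.0, b = [2.0, 3.0])   # NamedTuple with Real and Array leaves
buf = zeros(3)                    # flat buffer of matching length
flatten_to!!(buf, x)              # buf now holds the flattened values
x2  = unflatten_to!!(x, buf)      # x2 == x, with typeof(x2) == typeof(x)
```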
LogDensityProblems extension (updated):
- logdensity, dimension, capabilities for Prepared and VectorEvaluator
- logdensity_and_gradient copies the gradient out of the aliasable !! buffer
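Correspondingly, a `VectorEvaluator`-backed `Prepared` can be driven through the standard LogDensityProblems entry points (sketch; `prep` and `x0` are assumptions):

```julia
using LogDensityProblems: logdensity, logdensity_and_gradient, dimension

d = dimension(prep)                       # flat input length
lp = logdensity(prep, x0)
lp, g = logdensity_and_gradient(prep, x0) # g is a fresh copy, safe to retain
```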
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@penelopeysm, this PR keeps only the vector-/NamedTuple-typed evaluator in #155 and removes all autograd-related extensions (which can be added later). This should be ready for a final look.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Codecov Report
❌ Patch coverage is
Additional details and impacted files

@@            Coverage Diff             @@
##             main     #157      +/-   ##
==========================================
+ Coverage   84.51%   85.10%   +0.58%
==========================================
  Files          10       13       +3
  Lines         562      705     +143
==========================================
+ Hits          475      600     +125
- Misses         87      105      +18

☔ View full report in Codecov by Sentry.
- docs/src/evaluators.md: use `Prepared{<:AutoForwardDiff}` instead of
`Prepared{AutoForwardDiff}` so dispatch matches the parametric instance type
- test/evaluators/Evaluators.jl: rename DummyProblem → DummyModel, rename
the testset and parameter names to drop ADProblems-era terminology
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@shravanngoswamii can you also help review this?

AbstractPPL.jl documentation for PR #157 is available at:
- Remove `export Prepared` and the `using .Evaluators: Prepared` re-import
- Update docs example to import `Prepared` from `AbstractPPL.Evaluators`
- Add a brief comment on the `evaluate!!` overloads
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…ties to order 0
- Dispatch all LDP methods on `Prepared{<:Any,<:VectorEvaluator}` so
`NamedTupleEvaluator`-backed `Prepared` does not match (it cannot satisfy
LDP's vector-input contract).
- Default `capabilities` returns `LogDensityOrder{0}`; AD-backend extensions
opt into `LogDensityOrder{1}` by overloading on their cache type. Without
the overload, `logdensity_and_gradient` would hit the `value_and_gradient!!`
stub and fail.
- Test: demonstrate the backend opt-in pattern (TestADType overloads
`capabilities`), and verify NamedTupleEvaluator-backed `Prepared` has no
LDP methods defined (`dimension` throws, `capabilities` falls through to
LDP's `nothing` default).
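A sketch of that opt-in pattern, assuming a hypothetical backend whose extension defines a cache type `MyCache` (names illustrative):

```julia
import LogDensityProblems: capabilities, LogDensityOrder
using AbstractPPL.Evaluators: Prepared, VectorEvaluator

struct MyCache end   # hypothetical backend-specific cache type

# Only Prepared objects carrying this backend's cache advertise order 1;
# everything else keeps the order-0 default described above.
capabilities(::Type{<:Prepared{<:Any,<:VectorEvaluator,<:MyCache}}) =
    LogDensityOrder{1}()
```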
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
I'll take a more thorough look later. For my own edification, what's the relationship between this and #155? Is this meant to be "Part 1" of that PR (in which case that could be closed?) or something else?
This PR should supersede #155. Later PRs will add autograd extensions. |
Reconstruct each leaf using `x`'s types rather than `eltype(buf)`, so a heterogeneous round-trip preserves `typeof(x)` instead of widening to the flat buffer's eltype. Add an opt-in `check_eltype` keyword that warns when `eltype(buf)` differs from `flat_eltype(x)`. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
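For instance, under that contract (sketch; the buffer length is assumed to equal the flat length of `x`):

```julia
using AbstractPPL.Evaluators: flatten_to!!, unflatten_to!!

x   = (a = 1.0f0, b = Float32[2, 3])  # an all-Float32 structure
buf = zeros(Float64, 3)               # wider flat buffer
flatten_to!!(buf, x)
x2 = unflatten_to!!(x, buf)           # leaves rebuilt as Float32, from x's types
unflatten_to!!(x, buf; check_eltype=true)  # warns: Float64 buffer, Float32 leaves
```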
The previous wording attributed the input-shape guarantee to AD libraries, but ForwardDiff/Mooncake do not validate user input. The actual contract is that the outer entry point validates `length(x)` once, and dual arrays derived from that `x` then flow through the AD loop without re-checking. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- check_dims=false is an opt-in trust mode where the caller takes responsibility for length(x); the AD inner-loop motivation is now framed as a typical use, not a guaranteed validation chain.
- The no-AD `prepare(problem, x)` form exists so callers can write `prepare(...)` uniformly without knowing whether a backend is loaded, and downstream primal-only consumers can accept the result either way.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
These helpers are unused inside this PR but are intended for AD-backend extensions to share, so each `value_and_gradient!!` / `value_and_jacobian!!` produces a uniform error rather than rolling its own. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- test/run_extras.jl: drop the label→subdir mapping, use the label as the
path directly, rename the included test file to main.jl
- src/evaluators/utils.jl:
- simplify flat_length tuple/namedtuple cases to sum(flat_length, ...)
- replace getfield with x[name] in the _unflatten generator
- reject Symmetric/Hermitian/Diagonal/Triangular/Tri/Bi-diagonal arrays
up front rather than emitting broken round-trip results
- src/evaluators/Evaluators.jl: fold integer-input rejection into the
VectorEvaluator method body (parametrise on T, branch on T<:Integer);
drop the unused first arg of _reject_integer_input
- docs/src/evaluators.md: fix the f=...\\ncache=... indentation typo;
expand the !! aliasing note with an example showing safe usage
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Thanks for the careful review! All comments have been addressed.
penelopeysm
left a comment
just a couple more things
- Drop `init=0` from the `flat_length` tuple/namedtuple sums and add explicit empty-case methods to mirror `flat_eltype`'s pattern.
- Reject non-one-based arrays via `Base.require_one_based_indexing` (previously `copyto!(buf, offset, x, 1, n)` failed with an unhelpful BoundsError on OffsetArrays). Checked at the user-facing buf entry points and at the array dispatches in `_flatten_to!` / `_unflatten`.
- Regression tests for both rejection paths.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
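A self-contained sketch of that entry-point check (function name hypothetical): `copyto!(buf, offset, x, 1, n)` assumes one-based indexing, so non-one-based arrays are rejected before the buffer is touched.

```julia
function flatten_checked!(buf::AbstractVector, x::AbstractArray)
    # Throws ArgumentError for arrays with offset axes (e.g. OffsetArrays),
    # instead of letting copyto! fail with an opaque BoundsError.
    Base.require_one_based_indexing(buf, x)
    copyto!(buf, 1, x, 1, length(x))   # column-major flatten into buf
    return buf
end
```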
@penelopeysm / @shravanngoswamii — this should be ready to merge if you are happy.

If it's fine, can you please wait until the end of the day? I am happy to approve it afterwards; let me look at it more carefully.

I can't, in good conscience, approve this PR, because I still don't agree with the direction. If you want to merge it, you have the permissions on GitHub to do so.
@penelopeysm, your input on Julia software engineering is appreciated. |
The previous shape check relied on `typeof` equality, which treats
`(b=zeros(2),)` and `(b=[1.0],)` as the same shape because both are
`Vector{Float64}`. Add a structural `_shapes_match` recursion that
compares `size` for array leaves (recursing into non-numeric eltypes)
and rejects unsupported leaf types up front, matching the
`Real`/`Complex`/`AbstractArray`/`Tuple`/`NamedTuple` contract used by
the flatten/unflatten utilities. Document the supported leaves in the
`NamedTupleEvaluator` docstring and the user-facing docs.
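A minimal self-contained sketch of such a structural recursion (names hypothetical; the PR's actual implementation may differ): `typeof` equality alone cannot distinguish `(b=zeros(2),)` from `(b=[1.0],)`, so array leaves are compared by `size`.

```julia
_shapes_match(a::Real, b::Real) = true
_shapes_match(a::Complex, b::Complex) = true
function _shapes_match(a::AbstractArray, b::AbstractArray)
    size(a) == size(b) || return false
    # Numeric leaves: size equality suffices; otherwise recurse element-wise.
    eltype(a) <: Union{Real,Complex} && return true
    return all(_shapes_match(x, y) for (x, y) in zip(a, b))
end
_shapes_match(a::Tuple, b::Tuple) =
    length(a) == length(b) && all(map(_shapes_match, a, b))
_shapes_match(a::NamedTuple{K}, b::NamedTuple{K}) where {K} =
    all(map(_shapes_match, values(a), values(b)))
_shapes_match(a, b) = false   # unsupported or mismatched leaf types
```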
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Replace the `_StructuredArray` reject-list with a `FlattableArray =
Union{Array,SubArray}` opt-in. This rejects `Adjoint`/`Transpose` (whose
`similar` strips the wrapper and silently breaks the type round-trip)
and any other custom `AbstractArray` whose round-trip we cannot
guarantee. Also clarify the `check_eltype` warning to mention the
`InexactError` that follows when narrowing conversions fail.
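The `Adjoint` case above can be demonstrated directly in Base Julia: `similar` on an adjoint returns a plain `Array`, so the wrapper type cannot survive a `similar`-based round-trip.

```julia
using LinearAlgebra

A = rand(2, 3)
W = A'                 # Adjoint{Float64, Matrix{Float64}}
S = similar(W)         # Matrix{Float64}: the Adjoint wrapper is stripped,
                       # so a typeof-preserving round-trip is impossible
```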
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
All perspectives are appreciated. This PR introduces a self-contained evaluator interface that is reusable across DynamicPPL, Bijectors, and AdvancedVI, and cleanly separates primal evaluation from automatic differentiation interfaces, which will be introduced in subsequent PRs.
- src/evaluators/utils.jl: drop SubArray from FlattableArray allowlist; views
silently lost their wrapper through similar(::SubArray) so the typeof round-trip
invariant did not hold.
- src/evaluators/Evaluators.jl: extend NamedTupleEvaluator docstring to mirror
VectorEvaluator's CheckInput=false guidance, calling out that the prototype
typeof check rejects dual/shadow leaves; narrow the AD-extension error hint
to fire only when no extension has registered an AD-aware prepare method;
add a brief comment on VectorEvaluator{false} noting the `T <: Integer`
branch resolves at compile time.
- ext/AbstractPPLLogDensityProblemsExt.jl: drop redundant instance-method
capabilities delegations now that we rely on LogDensityProblems' built-in
capabilities(x) = capabilities(typeof(x)).
- test/evaluators/utils.jl: extend rejection test with a SubArray case; drop
obsolete view round-trip edge case.
- test/evaluators/Evaluators.jl: replace the internal _assert_namedtuple_shape
direct-call asserts with public-call coverage.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- src/evaluators/Evaluators.jl: collapse three trivial evaluate!! overloads
into one Union method; remove unused _assert_jacobian_output and
_assert_supported_output helpers (will be reintroduced by AD-extension PRs
that actually call them); consolidate the integer-rejection compile-time
rationale onto the helper; document the m.nargs >= 4 gate in the error hint.
- src/evaluators/utils.jl: drop the single-use FlattableArray{T} = Array{T}
alias and inline Array{...} at its 4 use sites; tighten the allow-list
rationale onto the typeof round-trip contract; fold the heterogeneous
round-trip note into the unflatten_to!! docstring and drop the verbose
worked-example comment block (covered by the dedicated test).
- ext/AbstractPPLLogDensityProblemsExt.jl: add a one-line WHY for the copy(grad)
in logdensity_and_gradient (LDP requires a non-aliased gradient).
- test/evaluators/utils.jl: trim the seven-element non-Array rejection loop to
three representatives (structured wrapper, view, non-one-based) and drop the
unused Diagonal/UpperTriangular imports.
- test/run_extras.jl: collapse the two-step LABEL validation into a single
membership check.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>