
CompatHelper: bump compat for FastBroadcast to 1, (keep existing compat) #192

Open
github-actions[bot] wants to merge 2 commits into main from
compathelper/new_version/2026-03-31-00-44-53-208-03761993723

Conversation

@github-actions
Contributor

This pull request changes the compat entry for the FastBroadcast package from 0.3.5 to 0.3.5, 1.
This keeps the compat entries for earlier versions.

Note: I have not tested your package with this new compat entry.
It is your responsibility to make sure that your package tests pass before you merge this pull request.

@SKopecz force-pushed the compathelper/new_version/2026-03-31-00-44-53-208-03761993723 branch from 4df2438 to f2b9de0 on March 31, 2026
@codecov

codecov bot commented Mar 31, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@coveralls

coveralls commented Mar 31, 2026

Coverage Report for CI Build 23895902213

Coverage remained the same at 97.305%

Details

  • Coverage remained the same as the base build.
  • Patch coverage: No coverable lines changed in this PR.
  • No coverage regressions found.

Uncovered Changes

No uncovered changes found.

Coverage Regressions

No coverage regressions found.


Coverage Stats

Coverage Status
Relevant Lines: 1707
Covered Lines: 1661
Line Coverage: 97.31%
Coverage Strength: 336427679.32 hits per line

💛 - Coveralls

Member

@ranocha ranocha left a comment


@JoshuaLampert
Member

This requires oxfordcontrol/Clarabel.jl#218.

@JoshuaLampert
Member

CI is using FastBroadcast.jl v1 now. However, there are a few failing tests. They do not seem to be related to the update of FastBroadcast.jl, though; rather, it looks like something changed in the OrdinaryDiffEq*.jl packages.

@JoshuaLampert
Member

I have reduced the issue (for at least three of the five test failures) to the following MWE:

using PositiveIntegrators, OrdinaryDiffEqLowOrderRK
prod1! = (P, u, p, t) -> begin
    P[1, 1] = 0
    P[1, 2] = u[2]
    P[2, 1] = u[1]
    P[2, 2] = 0
    return nothing
end
dest1! = (D, u, p, t) -> begin
    fill!(D, 0)
    return nothing
end
u0 = [1.0, 0.0]
tspan = (0.0, 1.0)
prob_default = PDSProblem(prod1!, dest1!, u0, tspan)
solve(prob_default, Euler(); dt = 0.1)

With OrdinaryDiffEqCore.jl v3.11 this returns

retcode: Success
Interpolation: 3rd order Hermite
t: 11-element Vector{Float64}:
 0.0
 0.1
 0.2
 0.30000000000000004
 0.4
 0.5
 0.6
 0.7
 0.7999999999999999
 0.8999999999999999
 1.0
u: 11-element Vector{Vector{Float64}}:
 [1.0, 0.0]
 [0.9, 0.1]
 [0.8200000000000001, 0.18000000000000002]
 [0.756, 0.24400000000000002]
 [0.7048, 0.2952]
 [0.66384, 0.33616]
 [0.631072, 0.36892800000000003]
 [0.6048576, 0.3951424]
 [0.58388608, 0.41611392]
 [0.5671088639999999, 0.432891136]
 [0.5536870911999999, 0.4463129088]

and with OrdinaryDiffEqCore.jl v3.12 and newer it returns

retcode: Success
Interpolation: 3rd order Hermite
t: 12-element Vector{Float64}:
 0.0
 0.1
 0.2
 0.30000000000000004
 0.4
 0.5
 0.6
 0.7
 0.7999999999999999
 0.8999999999999999
 0.9999999999999999
 1.0
u: 12-element Vector{Vector{Float64}}:
 [1.0, 0.0]
 [0.9, 0.1]
 [0.8200000000000001, 0.18000000000000002]
 [0.756, 0.24400000000000002]
 [0.7048, 0.2952]
 [0.66384, 0.33616]
 [0.631072, 0.36892800000000003]
 [0.6048576, 0.3951424]
 [0.58388608, 0.41611392]
 [0.5671088639999999, 0.432891136]
 [0.5536870911999999, 0.4463129088]
 [0.5536870911999999, 0.4463129088]

i.e., there is an additional (unexpected) time step at 0.9999999999999999. Since OrdinaryDiffEqCore.jl is part of a monorepo that does not have tags and releases for its sublibraries, it is not easy (I don't know of an easy way) to see the diff between OrdinaryDiffEqCore.jl v3.11 and v3.12, or the PRs that were part of the v3.12 release. The downstream tests in OrdinaryDiffEq.jl for PositiveIntegrators.jl are also not really useful, because they usually don't run due to incompatibilities (like https://github.com/SciML/OrdinaryDiffEq.jl/actions/runs/24119695833/job/70370910063#step:6:19).
To conclude: the test failures are independent of this PR and only surfaced now because incompatibilities were resolved, so that for the first time we are running tests with newer versions of OrdinaryDiffEqCore.jl.
@ChrisRackauckas, can you help here?

@ChrisRackauckas
Contributor

oh, that's probably related to SciML/OrdinaryDiffEq.jl#2869. We made a change to make sure we very clearly hit every floating point value exactly for superdense time, so that multiple tstops at the same time and eps apart are handled well (this is required for some types of multi-event scenarios). But yeah, this looks like an odd side effect of that. I think the best solution is to simply expand tstops to tspan[1]:dt:tspan[2] here, since Julia's range semantics have some complex logic that takes floating point error into account and gives accurate tstops. Before, we were relying on a bit of a hack: if tspan[1] + dt + dt + dt + dt + ... got within 100eps(tstop) of a tstop, it would snap to it. That hack was removed (and there are ways to break it, e.g. just by going beyond 1e6 steps), so this should be strictly more robust. Let me turn this into a test case and get this in. I think this wasn't caught because there are tests that sol.t[end] is floating point exact, but no test that there isn't a small extra step at the end.
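
The ulp drift described above is language-agnostic, so a minimal Python sketch can show why repeated accumulation of dt = 0.1 produces the spurious 0.9999999999999999 grid point, while computing each grid point directly (as Julia's range semantics effectively do, with extra internal precision) lands on the endpoint exactly:

```python
# Accumulating t += dt ten times does NOT land exactly on 1.0:
# each addition rounds, and the rounding errors compound.
dt = 0.1
t = 0.0
for _ in range(10):
    t += dt
print(t)        # 0.9999999999999999, one ulp below 1.0
print(t == 1.0) # False

# Computing the grid point directly avoids the compounding error;
# Julia's tspan[1]:dt:tspan[2] ranges go further and use extra
# internal precision so every element is the correctly rounded k*dt.
print(10 * dt == 1.0)  # True
```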

ChrisRackauckas-Claude pushed a commit to ChrisRackauckas-Claude/OrdinaryDiffEq.jl that referenced this pull request Apr 15, 2026
When solving with a fixed-dt method (e.g. `solve(prob, Euler(); dt = 0.1)`),
the accumulated `t + dt + dt + ...` drifts and used to produce a spurious
trailing step at `1 - eps` followed by `1.0`. PR SciML#2869 removed the
`100eps(tstop)` snap hack that previously masked this in
`fixed_t_for_floatingpoint_error!`.

Two coordinated changes restore the expected 11-step result without
reintroducing the old hack:

1. `_ode_init` expands `tstops` to `tspan[1]:dt:tspan[end]` when the user
   specifies `dt` for a fixed-time method and did not supply their own
   `tstops`/`d_discontinuities`. Julia's range semantics use TwicePrecision
   internally, so each range element is the exact floating-point
   representative of `k*dt` and lands on `tspan[end]` cleanly. The
   expansion is skipped whenever the user supplied any tstops, which
   preserves the existing "continue at dtcache between user tstops"
   behavior that tests like `sol.t == [0, 1/3, 1/2, 5/6, 1]` depend on.

2. `modify_dt_for_tstops!` compares `dt` against `distance_to_tstop` with
   a small floating-point tolerance (`100 * eps(max(t, tstop))`). Without
   this tolerance, `dtcache = 0.1` reads as strictly less than a
   `distance_to_tstop` of `0.10000000000000009` and the integrator takes
   a full step that overshoots the tstop by one ulp, then takes a
   matching tiny corrective step. The tolerance guard is gated on
   `isfinite(tdir_tstop) && isfinite(integrator.t)` so that semi-infinite
   `tspan = (0.0, Inf)` integrations still work.

Adds a regression test covering forward, reverse, and non-evenly-dividing
`dt` on `tspan = (0.0, 1.0)`.

Fixes the PositiveIntegrators.jl CI failure noted in
NumericalMathematics/PositiveIntegrators.jl#192 where
`solve(prob, Euler(); dt = 0.1)` began returning 12 steps instead of 11.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
ChrisRackauckas-Claude pushed a commit to ChrisRackauckas-Claude/OrdinaryDiffEq.jl that referenced this pull request Apr 16, 2026
When solving with a fixed-dt method (e.g. `solve(prob, Euler(); dt = 0.1)`),
the accumulated `t + dt + dt + ...` drifts past `tspan[end]` by one ulp,
producing a spurious trailing micro-step. PR SciML#2869 removed the
`100eps(tstop)` snap hack that previously masked this.

Fix: for non-adaptive, dtchangeable algorithms with no user-supplied
tstops / d_discontinuities / callbacks, expand `tstops` to the range
`tspan[1]:dt:tspan[end]` whose TwicePrecision arithmetic gives exact
floating-point tstops, and inflate `dt` by 10 ulps so that
`modify_dt_for_tstops!` always takes the tstop branch and snaps `t`
exactly via `fixed_t_for_tstop_error!`.

The expansion is intentionally skipped for adaptive algorithms (even
when used with `adaptive=false`), `CompositeAlgorithm`, non-finite
tspans, and any solve with callbacks — preserving all existing
stepping semantics in those cases.

Adds a regression test covering forward, reverse, and non-evenly-
dividing `dt` on `tspan = (0.0, 1.0)`.

Fixes the PositiveIntegrators.jl CI failure noted in
NumericalMathematics/PositiveIntegrators.jl#192.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
ChrisRackauckas-Claude pushed a commit to ChrisRackauckas-Claude/OrdinaryDiffEq.jl that referenced this pull request Apr 16, 2026
When solving with a fixed-dt method (e.g. `solve(prob, Euler(); dt = 0.1)`),
the accumulated `t + dt + dt + ...` drifts past `tspan[end]` by one ulp,
producing a spurious trailing micro-step.  PR SciML#2869 removed the old
`100eps(tstop)` snap in `fixed_t_for_floatingpoint_error!` that masked this.

Fix: add a floating-point tolerance (`100 * eps(max(|t|, |tstop|))`) to
the `dt < distance_to_tstop` comparison in `modify_dt_for_tstops!`.
When `dt ≈ distance` within rounding the integrator now takes the tstop
branch, and `fixed_t_for_tstop_error!` snaps `t` to the exact tstop
value — eliminating the extra step.

Adds a regression test covering forward, reverse, and non-evenly-
dividing `dt` on `tspan = (0.0, 1.0)`.

Fixes the PositiveIntegrators.jl CI failure noted in
NumericalMathematics/PositiveIntegrators.jl#192.

Co-Authored-By: Chris Rackauckas <accounts@chrisrackauckas.com>
ChrisRackauckas added a commit to SciML/OrdinaryDiffEq.jl that referenced this pull request Apr 16, 2026
When solving with a fixed-dt method (e.g. `solve(prob, Euler(); dt = 0.1)`),
the accumulated `t + dt + dt + ...` drifts past `tspan[end]` by one ulp,
producing a spurious trailing micro-step.  PR #2869 removed the old
`100eps(tstop)` snap in `fixed_t_for_floatingpoint_error!` that masked this.

Fix: add a floating-point tolerance (`100 * eps(max(|t|, |tstop|))`) to
the `dt < distance_to_tstop` comparison in `modify_dt_for_tstops!`.
When `dt ≈ distance` within rounding the integrator now takes the tstop
branch, and `fixed_t_for_tstop_error!` snaps `t` to the exact tstop
value — eliminating the extra step.

Adds a regression test covering forward, reverse, and non-evenly-
dividing `dt` on `tspan = (0.0, 1.0)`.

Fixes the PositiveIntegrators.jl CI failure noted in
NumericalMathematics/PositiveIntegrators.jl#192.

Co-authored-by: ChrisRackauckas-Claude <accounts@chrisrackauckas.com>
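
The tolerance guard described in the commit messages above can be sketched in a few lines. This is a Python illustration of the comparison being described, not the actual OrdinaryDiffEq.jl code; `should_hit_tstop` is a hypothetical helper name:

```python
import math

def should_hit_tstop(t, dt, tstop):
    # Hypothetical sketch of the fix: treat the step as reaching the
    # tstop when dt is within ~100 ulps of the remaining distance,
    # instead of comparing dt < distance strictly.
    distance = tstop - t
    tol = 100 * math.ulp(max(abs(t), abs(tstop)))
    return not (dt < distance - tol)

# After nine steps of dt = 0.1, the accumulated t sits one ulp below
# 0.9, so the remaining distance is 0.10000000000000009: slightly
# MORE than dt, and a strict comparison misses the tstop.
t = 0.0
for _ in range(9):
    t += 0.1
print(1.0 - t)                        # 0.10000000000000009
print(0.1 < (1.0 - t))                # True: strict test overshoots
print(should_hit_tstop(t, 0.1, 1.0))  # True: tolerant test snaps
```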
@JoshuaLampert
Member

Thanks @ChrisRackauckas for the fix in OrdinaryDiffEqCore.jl, which fixed 3 of the 5 test failures here. For the two remaining ones here is an MWE:

using PositiveIntegrators
using OrdinaryDiffEqTsit5
using Test

prob = prob_pds_npzd
alg = Tsit5()

dt = (last(prob.tspan) - first(prob.tspan)) / 1e4
sol = solve(prob, alg; dt, isoutofdomain = isnegative) # use explicit f
sol2 = solve(ConservativePDSProblem(prob.f.p, prob.u0, prob.tspan), alg; dt,
             isoutofdomain = isnegative) # use p and d to compute f

@test sol.t ≈ sol2.t

This test passes with OrdinaryDiffEqCore.jl up to v3.9 and fails with OrdinaryDiffEqCore.jl v3.10 and up. Any ideas what might cause this and how to fix it? A similar test for other time integrators also passes with newer versions of OrdinaryDiffEqCore.jl, but the version change from v3.9 to v3.10 in OrdinaryDiffEqCore.jl makes the Tsit5() one fail.
The following gives the same as sol:

sol3 = solve(ODEProblem(prob.f.std_rhs, prob.u0, prob.tspan), alg; dt,
             isoutofdomain = isnegative) # use f to create ODEProblem

as expected. So it is sol2 that unexpectedly gives a different result.

@ChrisRackauckas
Contributor

Okay, this case is pretty well understood, though I don't know what you want to do with it, so I'll just give you the information. Your two problems are not identical: your fs are not identical. Problem 1 computes via dpn + dzn + ddn - dnp ≈ ((dpn + dzn) + ddn) - dnp, while Problem 2 uses (((-dnp) + dpn) + dzn) + ddn with @fastmath @inbounds @simd (which allows reordering). Because floating point addition is not associative, these differ by 2.22e-16, so for the stepper EEst1 = 7.012e-17 and EEst2 = 6.858e-17. That leads to different time steps. So in a strict sense, your test of "are they floating point the same" is simply false, because your f is not floating point the same.
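
The non-associativity at the heart of this can be demonstrated in a few lines; here is a minimal Python illustration using the classic 0.1/0.2/0.3 values rather than the actual dpn/dzn/ddn/dnp terms from the problem:

```python
# The same three terms summed in two different orders give results
# that differ by one ulp, because each intermediate sum rounds.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False
print(left - right)   # ~1.1e-16, the same order of magnitude as the
                      # 2.22e-16 difference between the two f's above
```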

So then your question will be: why were they the same before? Good question. Before the qmax acceleration PR, the maximum q was always 10, i.e. dt_new = q*dt could grow by at most a factor of 10 per step. On this problem, because it is trivial enough that the ODE solver solves it to effectively floating point tolerance (i.e. EEst < eps(Float64)), you effectively have zero error in each step (the solver is hitting the analytical solution), and so dt grows by a factor of 10 each step. Because the two runs match to 16 digits, dt would then always be the same: they match to effectively floating point accuracy plus one digit, and multiplying by 10 only shifts the digit, so dt stays exactly the same. But with qmax acceleration, the first step is now allowed to grow by up to 10_000. That shifts by 5 digits, and since the runs agree to at most 17 digits (the highest possible in 64-bit floats), dt is now only the same to 12 digits, meaning the difference between EEst1 = 7.012e-17 and EEst2 = 6.858e-17 shows up in the 12th digit.

So you could pass in controller = NewPIController(Float64, alg; qmax_first_step = 10) to remove qmax acceleration (and then update it in v7 to just PIController), and that would give you the old behavior. Though I'd argue that the real issue here is that you have a test that things are floating point exact, when it's not the ODE solver that causes the difference but your problem definitions, because they do not accumulate in the same order. So I'd either relax that to an approximate equality, understanding that they will differ in the 12th digit due to a fundamental accuracy limit of floating point accumulation, or flip the definition of one of the problems so that they associate in the same order and thus are actually floating point the same. But expecting the ODE solver to give floating point identical time steps when the problems themselves are not floating point identical is not going to be very robust, for what I hope are clear reasons.

@JoshuaLampert
Member

Thanks for the explanation! What do you think, @ranocha, @SKopecz?

@JoshuaLampert
Copy link
Copy Markdown
Member

JoshuaLampert commented Apr 19, 2026

Just to be clear, two clarifications:

So I'd either relax that to an approximate equality

We do that already (we use ≈, as you can see from the MWE I posted above).

understanding that they will differ in the 12th digit due to a fundamental accuracy limit in floating point accumulation

We are not talking about differences in the 12th digit, but differences of the order of 1e-4 to 1e-2.
From the MWE above:

julia> sol.t .- sol2.t
421-element Vector{Float64}:
  0.0
  0.0
 -0.0002446514639678071
 -0.0005306752357510658
 -0.0006660263933044863
 -0.0008960739422696484
  0.009118037912073884
  0.009250671172130298
  0.00860234178977426
  0.0032171897383779235
  0.0021411554781194386
  0.0016643597373147134
  0.0006729809782541896
 -2.0721123352496207e-5
  0.00032009719766801226
  4.709706773042832e-5
  0.00017986037279871248
  9.638448074444916e-5
  ⋮
  0.006162175210169707
  0.006898746292788083
  0.00779818506312413
  0.008905355941839943
  0.010282000114095524
  0.012011926086444191
  0.014213548895940953
  0.017046988673076413
  0.020728832292429722
  0.025517463366548476
  0.031671296450147324
  0.039366079933075504
  0.04866429536678396
  0.059558828030842115
  0.07190310256362853
  0.08538681438158413
  0.09985857665201259
  0.0

@ChrisRackauckas
Contributor

The first step is 1e-12 different, but yes over 1e4 steps that will compound until you have a point where adaptivity takes a different branch.

... Is there a reason to not just fix f if that's what you're trying to test?

@JoshuaLampert
Member

JoshuaLampert commented Apr 19, 2026

I didn't write this test. I'm just trying to help debugging. So I'll wait for @ranocha or @SKopecz to chime in.
The number of time steps (if that is what you mean by "over 1e4 steps") isn't of the order of 1e4, though: the whole time span is only (0, 10), and we have 421 time steps.
