
feat: add/fix PETSc KSP solver wrapper #894

Merged
ChrisRackauckas merged 22 commits into SciML:main from Iddingsite:main
Feb 24, 2026

Conversation

@Iddingsite
Contributor

@Iddingsite Iddingsite commented Feb 20, 2026

Add ksp() support from PETSc.jl with the new version of the wrapper (0.4.x, JuliaParallel/PETSc.jl#215). Closes #8. I believe Claude was mostly hallucinating, because PR #873 was not working. On master, this fails:

using PETSc, MPI, SparseArrays, LinearSolve, LinearAlgebra

A = sprand(100, 100, 0.1) + 10I
b = rand(100)
prob = LinearProblem(A, b)

# Basic GMRES
sol = solve(prob, PETScAlgorithm(:gmres))

# CG with ILU preconditioner
sol = solve(prob, PETScAlgorithm(:cg; pc_type = :ilu))

but works with this PR.

What is currently supported

  • All PETSc KSP types (:gmres, :cg, :bcgs, :richardson, :preonly, …)
  • All PETSc preconditioners (:jacobi, :ilu, :lu, :gamg, :hypre, …)
  • Serial and MPI-distributed solves via the comm keyword
  • Null-space specification (:constant or :custom) for singular systems
  • Transposed solves via transposed = true
  • Arbitrary PETSc Options Database flags via ksp_options::NamedTuple (for instance ksp_options = (ksp_monitor = "", ksp_view = "")).
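
A minimal sketch combining a few of these keywords (only flags named in the list above are used; passing pc_type, transposed, and ksp_options together in one constructor call is an assumption about how they compose):

using PETSc, SparseArrays, LinearSolve, LinearAlgebra

A = sprand(100, 100, 0.1) + 10I
b = rand(100)
prob = LinearProblem(A, b)

# GMRES with an ILU preconditioner, solving the transposed system,
# and forwarding raw Options Database flags straight to PETSc
alg = PETScAlgorithm(:gmres; pc_type = :ilu, transposed = true,
                     ksp_options = (ksp_monitor = "", ksp_view = ""))
sol = solve(prob, alg)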

What I am not sure/not happy with:

  • With PETSc.jl, the cache has to be freed explicitly after use to release the memory. Right now, I recommend calling PETScExt.cleanup_petsc_cache!(cache) to clear it after usage (works on a solve() or integ() object); see the sketch after this list. Most people will clear the memory by closing the Julia session, but I think it makes sense to expose such a function.
  • The MPI implementation was a bit clunky/naive: every rank held the full Julia matrix in memory and inserted its locally owned rows into PETSc. I could not come up with a better way because of how LinearSolve.jl expects the input from the users, and it would also need proper benchmarks. Update: I have removed MPI support for more than one process because the PR was getting too big.
  • PETSc is in general geared towards sparse matrices for large systems. Because of that, the handling of dense matrices was a bit clunky: I could not make the reuse of ksp() work with how it communicates with PETSc, so the KSP had to be rebuilt every time solve!() was called again, whereas reuse worked correctly for sparse matrices. This could potentially be improved in another PR. Update: this now works well for dense matrices too.
  • Not sure where to put the test files. There were petsctests_mpi.jl and petsctests.jl; I have removed petsctests_mpi.jl for now, so only petsctests.jl remains.
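
A short sketch of the cleanup flow mentioned above (the lookup via Base.get_extension and the extension name LinearSolvePETScExt are assumptions; the cleanup function name is the one from the bullet above):

using PETSc, SparseArrays, LinearSolve, LinearAlgebra

A = sprand(100, 100, 0.1) + 10I
b = rand(100)
cache = init(LinearProblem(A, b), PETScAlgorithm(:gmres))
sol = solve!(cache)

# PETSc objects are not freed by Julia's GC, so release them explicitly once done.
# The extension module name below is an assumption; adjust to however it is exposed.
PETScExt = Base.get_extension(LinearSolve, :LinearSolvePETScExt)
PETScExt.cleanup_petsc_cache!(cache)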

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes were updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
  • Any new documentation only uses public API

AI Disclosure: Claude was used to help write the code, comments, and docstrings.

@Iddingsite Iddingsite marked this pull request as draft February 22, 2026 14:48
@Iddingsite Iddingsite marked this pull request as ready for review February 24, 2026 13:56
@Iddingsite
Contributor Author

@ChrisRackauckas I think this is ready for review, but nothing urgent. Everything works for dense and sparse matrices with all the solvers and preconditioners, and with support for null spaces. I dropped the MPI support for now because the PR was getting too big to review, so it only works with 1 process. I am not sure LinearSolve.jl is a fit for that anyway.

The big limitation currently is that PETSc expects sparse matrices in CSR format, whereas Julia uses CSC. So, when reusing the cache for large-scale problems, the time is spent on converting from CSC to CSR format. That is also what the PETSc.jl package currently does when initialising the matrix the first time (see here). I could not come up with something cleaner, but I am happy to take suggestions.
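
For reference, the usual trick to get CSR-ordered data out of a Julia CSC matrix is to materialize the transpose, whose CSC storage is exactly the CSR storage of the original; this is only an illustration of where the conversion cost comes from, not necessarily what the PR does internally:

using SparseArrays

A = sprand(1000, 1000, 0.01)

# CSC storage of transpose(A) corresponds to CSR storage of A
At = sparse(transpose(A))   # materializing the transpose is the costly step
rowptr = At.colptr          # CSR row pointers of A
colval = At.rowval          # CSR column indices of A (sorted within each row)
nzval  = At.nzval           # CSR values of A, in row-major order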

That currently makes it hard to use as a standalone linear solver inside a non-linear solver where we want to reuse the same cache; I guess people should just use SNES for that.

The PR needs JuliaParallel/PETSc.jl#227, so we need at least PETSc.jl 0.4.6.

@ChrisRackauckas
Member

I dropped the MPI support for now because the PR was getting too big to review, so it only works with 1 process. I am not sure LinearSolve.jl is a fit for that anyway.

It is. We already have MPI support for HYPRE (https://github.com/SciML/LinearSolve.jl/blob/main/test/hypretests_mpi.jl), but agreed, it probably works better as a separate PR since this one is getting big.

The big limitation currently is that PETSc expects sparse matrices in CSR format, whereas Julia uses CSC. So, when reusing the cache for large-scale problems, the time is spent on converting from CSC to CSR format. That is also what the PETSc.jl package currently does when initialising the matrix the first time (see here). I could not come up with something cleaner, but I am happy to take suggestions.

Is there no SparseMatrixCSR that a user could define? I guess not. For now, let's get this in for correctness first. But yeah for benchmarking we probably want to have both versions of A and benchmark using the preferred form.

That currently makes it hard to use as a standalone linear solver inside a non-linear solver where we want to reuse the same cache; I guess people should just use SNES for that.

SNES benchmarks pretty poorly though 😅 hence why we really wanted the linear solvers directly.

@ChrisRackauckas
Member

Yeah oops haha, I saw "oh, all the tests passed, wild" but I guess that's because I forgot to add a test group for it 😅 😅 😅 😅 😅 😅 😅 😅 😅 😅 😅 😅 😅 😅

@Iddingsite
Contributor Author

Alright, that explains how the original PR was merged haha

The big limitation currently is that PETSc expects sparse matrices in CSR format, whereas Julia uses CSC. So, when reusing the cache for large-scale problems, the time is spent on converting from CSC to CSR format. That is also what the PETSc.jl package currently does when initialising the matrix the first time (see here). I could not come up with something cleaner, but I am happy to take suggestions.

Is there no SparseMatrixCSR that a user could define? I guess not. For now, let's get this in for correctness first. But yeah for benchmarking we probably want to have both versions of A and benchmark using the preferred form.

There is no default SparseMatrixCSR, no. There is https://github.com/gridap/SparseMatricesCSR.jl, which I believe was developed specifically to work with PETSc for https://github.com/gridap/GridapPETSc.jl. Maybe we could force users to use that one, or at least dispatch on it to allow its usage.
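
If we go that route, the input could look roughly like this; everything below is hypothetical, since this PR does not dispatch on SparseMatrixCSR, and the sparsecsr constructor signature is my recollection of the SparseMatricesCSR.jl API:

using SparseArrays, SparseMatricesCSR, LinearSolve

# Build a small CSR matrix directly (constructor assumed to mirror SparseArrays.sparse)
I_, J_, V_ = [1, 2, 3], [1, 2, 3], [10.0, 10.0, 10.0]
A_csr = sparsecsr(I_, J_, V_, 3, 3)
b = rand(3)

# Hypothetical: would require the PETSc extension to accept CSR input directly,
# avoiding the CSC -> CSR conversion on every cache reuse
prob = LinearProblem(A_csr, b)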

@ChrisRackauckas
Member

We can at least start here.

There is no default SparseMatrixCSR, no. There is https://github.com/gridap/SparseMatricesCSR.jl, which I believe was developed specifically to work with PETSc for https://github.com/gridap/GridapPETSc.jl. Maybe we could force users to use that one, or at least dispatch on it to allow its usage.

It might at least be good to support that as an input, so the benchmarking is more fair.

@ChrisRackauckas ChrisRackauckas merged commit 735ab4d into SciML:main Feb 24, 2026
162 of 165 checks passed
