
Commit e1f7d6f

Author: Sebastien Loisel (committed)

Update dependency HPCLinearAlgebra → HPCSparseArrays
1 parent 372fec6 commit e1f7d6f

58 files changed

Lines changed: 178 additions & 178 deletions


Project.toml

Lines changed: 3 additions & 3 deletions
@@ -8,7 +8,7 @@ BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
 CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
 CUDSS_jll = "4889d778-9329-5762-9fec-0578a5d30366"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
-HPCLinearAlgebra = "537374f1-5608-4525-82fb-641dce542540"
+HPCSparseArrays = "537374f1-5608-4525-82fb-641dce542540"
 MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
 MPIPreferences = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
 Metal = "dde4c033-4e86-420c-a63e-0dd931031962"
@@ -20,13 +20,13 @@ StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
 Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"

 [sources]
-HPCLinearAlgebra = {path = "../HPCLinearAlgebra.jl"}
+HPCSparseArrays = {path = "../HPCSparseArrays.jl"}

 [compat]
 BenchmarkTools = "1.6"
 CUDA = "5.9.6"
 CUDSS_jll = "0.7.1"
-HPCLinearAlgebra = "0.1"
+HPCSparseArrays = "0.1"
 MPI = "0.20"
 MPIPreferences = "0.1.11"
 Metal = "1.9.1"
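Since the package UUID (537374f1-…) is unchanged, this is purely a rename of the same dependency. The sketch below shows how a downstream environment might follow it; the `Pkg` commands are illustrative only and assume the renamed checkout sits next to the consuming project, matching the `[sources]` path above.

```julia
# Hypothetical migration of a downstream environment after the rename.
# Assumes ../HPCSparseArrays.jl exists, mirroring the [sources] entry above.
using Pkg

Pkg.activate(".")                          # the environment being migrated
Pkg.rm("HPCLinearAlgebra")                 # drop the dependency under its old name
Pkg.develop(path="../HPCSparseArrays.jl")  # re-add it under the new name
```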

README.md

Lines changed: 5 additions & 5 deletions
@@ -7,17 +7,17 @@

 **Author:** S. Loisel

-A Julia package that bridges MultiGridBarrier.jl and HPCLinearAlgebra.jl for distributed multigrid barrier computations using native MPI types.
+A Julia package that bridges MultiGridBarrier.jl and HPCSparseArrays.jl for distributed multigrid barrier computations using native MPI types.

 ## Overview

-MultiGridBarrierMPI.jl extends the MultiGridBarrier.jl package to work with HPCLinearAlgebra.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks without requiring PETSc.
+MultiGridBarrierMPI.jl extends the MultiGridBarrier.jl package to work with HPCSparseArrays.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks without requiring PETSc.

 ## Key Features

 - **1D, 2D, and 3D Support**: Full support for 1D, 2D triangular, and 3D hexahedral finite elements
 - **Seamless Integration**: Drop-in replacement for MultiGridBarrier's native types
-- **Pure Julia MPI**: Uses HPCLinearAlgebra.jl for distributed linear algebra (no external libraries required)
+- **Pure Julia MPI**: Uses HPCSparseArrays.jl for distributed linear algebra (no external libraries required)
 - **Type Conversion**: Easy conversion between native Julia arrays and MPI distributed types
 - **MPI-Aware**: All operations correctly handle MPI collective requirements
 - **MUMPS Solver**: Uses MUMPS direct solver for accurate Newton iterations
@@ -31,7 +31,7 @@ using MPI
 MPI.Init()

 using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCSparseArrays

 # Solve with MPI distributed types (L=3 refinement levels)
 sol_mpi = fem2d_mpi_solve(Float64; L=3, p=1.0, verbose=false)
@@ -92,7 +92,7 @@ julia --project make.jl
 This package is part of a larger ecosystem:

 - **[MultiGridBarrier.jl](https://github.com/sloisel/MultiGridBarrier.jl)**: Core multigrid barrier method implementation
-- **[HPCLinearAlgebra.jl](https://github.com/sloisel/HPCLinearAlgebra.jl)**: Pure Julia distributed linear algebra with MPI
+- **[HPCSparseArrays.jl](https://github.com/sloisel/HPCSparseArrays.jl)**: Pure Julia distributed linear algebra with MPI
 - **MPI.jl**: Julia MPI bindings for distributed computing

 ## Requirements
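For reference, the README fragments shown in this diff assemble into a complete driver script along the following lines. This is only a sketch: the keyword arguments are the ones visible in the hunks above, and `io0()` is the rank-0 printing helper documented elsewhere in this commit.

```julia
# Sketch of a full README-style run, assembled from the diff context above.
using MPI
MPI.Init()

using MultiGridBarrierMPI
using HPCSparseArrays

# Solve with MPI distributed types (L=3 refinement levels)
sol_mpi = fem2d_mpi_solve(Float64; L=3, p=1.0, verbose=false)

# Report once, from rank 0 only (io0() comes from HPCSparseArrays)
println(io0(), "2D multigrid barrier solve finished")
```

Launched with, e.g., `mpiexec -n 4 julia --project script.jl`, as in the installation notes further down.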

docs/Project.toml

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 [deps]
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 DocumenterTools = "35a29f4d-8980-5a13-9543-d66fff28ecb8"
-HPCLinearAlgebra = "537374f1-5608-4525-82fb-641dce542540"
+HPCSparseArrays = "537374f1-5608-4525-82fb-641dce542540"
 MultiGridBarrier = "9e2c1f1d-9131-4ad4-b32f-bd2a0b0ecd1e"
 MultiGridBarrierMPI = "abf18f27-d12f-4566-94ed-07bf0c385f70"

docs/src/api.md

Lines changed: 3 additions & 3 deletions
@@ -104,12 +104,12 @@ The `AMGBSOL` type from MultiGridBarrier contains the complete solution:

 ## MPI and IO Utilities

-### HPCLinearAlgebra.io0()
+### HPCSparseArrays.io0()

 Returns an IO stream that only writes on rank 0:

 ```julia
-using HPCLinearAlgebra
+using HPCSparseArrays

 println(io0(), "This prints once from rank 0")
 ```
@@ -132,7 +132,7 @@ using MPI
 MPI.Init()

 using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCSparseArrays
 using MultiGridBarrier
 using LinearAlgebra
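The `io0()` documentation above describes a stream that only writes on rank 0. A minimal sketch of the difference, assuming a multi-rank `mpiexec` launch (the rank query is plain MPI.jl and not part of this diff):

```julia
using MPI
MPI.Init()
using HPCSparseArrays

rank = MPI.Comm_rank(MPI.COMM_WORLD)

println("hello from rank $rank")        # every rank prints this line
println(io0(), "hello exactly once")    # io0() only writes on rank 0
```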

docs/src/guide.md

Lines changed: 10 additions & 10 deletions
Original file line numberDiff line numberDiff line change
@@ -30,7 +30,7 @@ using MPI
3030
MPI.Init()
3131

3232
using MultiGridBarrierMPI
33-
using HPCLinearAlgebra
33+
using HPCSparseArrays
3434
using MultiGridBarrier
3535

3636
# Step 1: Solve with MPI distributed types
@@ -135,7 +135,7 @@ using MPI
135135
MPI.Init()
136136

137137
using MultiGridBarrierMPI
138-
using HPCLinearAlgebra
138+
using HPCSparseArrays
139139
using MultiGridBarrier
140140

141141
# 1. Create native geometry with specific parameters
@@ -168,7 +168,7 @@ using MPI
168168
MPI.Init()
169169

170170
using MultiGridBarrierMPI
171-
using HPCLinearAlgebra
171+
using HPCSparseArrays
172172
using MultiGridBarrier
173173
using LinearAlgebra
174174

@@ -193,10 +193,10 @@ end
193193

194194
### Printing from One Rank
195195

196-
Use `io0()` from HPCLinearAlgebra to print from rank 0 only:
196+
Use `io0()` from HPCSparseArrays to print from rank 0 only:
197197

198198
```julia
199-
using HPCLinearAlgebra
199+
using HPCSparseArrays
200200

201201
# This prints once (from rank 0)
202202
println(io0(), "Hello from rank 0!")
@@ -268,7 +268,7 @@ using MPI
268268
MPI.Init()
269269

270270
using MultiGridBarrierMPI
271-
using HPCLinearAlgebra
271+
using HPCSparseArrays
272272

273273
# Solve a 1D problem with 4 multigrid levels (2^4 = 16 elements)
274274
sol = fem1d_mpi_solve(Float64; L=4, p=1.0, verbose=true)
@@ -297,7 +297,7 @@ using MPI
297297
MPI.Init()
298298

299299
using MultiGridBarrierMPI
300-
using HPCLinearAlgebra
300+
using HPCSparseArrays
301301

302302
# Solve a 2D problem
303303
sol = fem2d_mpi_solve(Float64; L=2, p=1.0, verbose=true)
@@ -327,7 +327,7 @@ using MPI
327327
MPI.Init()
328328

329329
using MultiGridBarrierMPI
330-
using HPCLinearAlgebra
330+
using HPCSparseArrays
331331

332332
# Solve a 3D problem with Q3 elements and 2 multigrid levels
333333
sol = fem3d_mpi_solve(Float64; L=2, k=3, p=1.0, verbose=true)
@@ -357,7 +357,7 @@ using MPI
357357
MPI.Init()
358358

359359
using MultiGridBarrierMPI
360-
using HPCLinearAlgebra
360+
using HPCSparseArrays
361361
using MultiGridBarrier
362362

363363
# Create MPI geometry
@@ -393,7 +393,7 @@ using MPI
393393
MPI.Init()
394394

395395
using MultiGridBarrierMPI
396-
using HPCLinearAlgebra
396+
using HPCSparseArrays
397397

398398
sol = fem2d_mpi_solve(Float64; L=3, p=1.0)
399399
sol_native = mpi_to_native(sol)
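The final guide hunk pairs a distributed solve with `mpi_to_native`. A minimal end-to-end sketch of that round trip follows; the fields of the returned solution object are not spelled out in this diff, so only the conversion step itself is shown.

```julia
using MPI
MPI.Init()

using MultiGridBarrierMPI
using HPCSparseArrays
using MultiGridBarrier

sol = fem2d_mpi_solve(Float64; L=3, p=1.0)   # solution built on HPC* distributed types
sol_native = mpi_to_native(sol)              # same solution, converted to native Julia arrays
println(io0(), "round trip complete")
```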

docs/src/index.md

Lines changed: 5 additions & 5 deletions
@@ -10,17 +10,17 @@ v = string(pkgversion(MultiGridBarrierMPI))
 md"# MultiGridBarrierMPI.jl $v"
 ```

-**A Julia package that bridges MultiGridBarrier.jl and HPCLinearAlgebra.jl for distributed multigrid barrier computations.**
+**A Julia package that bridges MultiGridBarrier.jl and HPCSparseArrays.jl for distributed multigrid barrier computations.**

 ## Overview

-MultiGridBarrierMPI.jl extends the MultiGridBarrier.jl package to work with HPCLinearAlgebra.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks using pure Julia distributed types (no PETSc required).
+MultiGridBarrierMPI.jl extends the MultiGridBarrier.jl package to work with HPCSparseArrays.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks using pure Julia distributed types (no PETSc required).

 ## Key Features

 - **1D, 2D, and 3D Support**: Full support for 1D elements, 2D triangular, and 3D hexahedral finite elements
 - **Seamless Integration**: Drop-in replacement for MultiGridBarrier's native types
-- **Pure Julia MPI**: Uses HPCLinearAlgebra.jl for distributed linear algebra
+- **Pure Julia MPI**: Uses HPCSparseArrays.jl for distributed linear algebra
 - **Type Conversion**: Easy conversion between native Julia arrays and MPI distributed types
 - **MPI-Aware**: All operations correctly handle MPI collective requirements
 - **MUMPS Solver**: Uses MUMPS direct solver for accurate Newton iterations
@@ -34,7 +34,7 @@ using MPI
 MPI.Init()

 using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCSparseArrays
 using MultiGridBarrier

 # Solve with MPI distributed types (L=3 refinement levels)
@@ -71,7 +71,7 @@ Depth = 2
 This package is part of a larger ecosystem:

 - **[MultiGridBarrier.jl](https://github.com/sloisel/MultiGridBarrier.jl)**: Core multigrid barrier method implementation (1D, 2D, and 3D)
-- **[HPCLinearAlgebra.jl](https://github.com/sloisel/HPCLinearAlgebra.jl)**: Pure Julia distributed linear algebra with MPI
+- **[HPCSparseArrays.jl](https://github.com/sloisel/HPCSparseArrays.jl)**: Pure Julia distributed linear algebra with MPI
 - **MPI.jl**: Julia MPI bindings for distributed computing

 ## Requirements

docs/src/installation.md

Lines changed: 4 additions & 4 deletions
@@ -10,7 +10,7 @@ For HPC environments, you may want to configure MPI.jl to use your system's MPI

 ### MUMPS

-The package uses MUMPS for sparse direct solves through HPCLinearAlgebra.jl. MUMPS is typically available through your system's package manager or HPC module system.
+The package uses MUMPS for sparse direct solves through HPCSparseArrays.jl. MUMPS is typically available through your system's package manager or HPC module system.

 ## Package Installation

@@ -75,7 +75,7 @@ using MPI
 MPI.Init()

 using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCSparseArrays

 # Your parallel code here
 sol = fem2d_mpi_solve(Float64; L=3, p=1.0)
@@ -89,7 +89,7 @@ mpiexec -n 4 julia --project my_program.jl
 ```

 !!! tip "Output from Rank 0 Only"
-    Use `io0()` from HPCLinearAlgebra for output to avoid duplicate messages:
+    Use `io0()` from HPCSparseArrays for output to avoid duplicate messages:
     ```julia
     println(io0(), "This prints once from rank 0")
     ```
@@ -106,7 +106,7 @@ using Pkg; Pkg.build("MPI")

 ### MUMPS Issues

-If MUMPS fails to load, ensure it's properly installed on your system and that HPCLinearAlgebra.jl can find it.
+If MUMPS fails to load, ensure it's properly installed on your system and that HPCSparseArrays.jl can find it.

 ### Test Failures
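The installation notes above mention configuring MPI.jl to use the system MPI and rebuilding with `Pkg.build("MPI")`. One common way to do that is via MPIPreferences, which is already listed in Project.toml; the exact call below is a typical pattern, not something this commit prescribes.

```julia
# Hedged sketch: select the cluster's MPI library for MPI.jl.
using MPIPreferences

MPIPreferences.use_system_binary()   # picks up the MPI found in the current environment/module

# Then, in a fresh session:
# using Pkg; Pkg.build("MPI")
```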

examples/basic_solve.jl

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ using MPI
 MPI.Init()

 using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCSparseArrays
 using MultiGridBarrier
 using LinearAlgebra

examples/roundtrip_conversion.jl

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ using MPI
 MPI.Init()

 using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCSparseArrays
 using MultiGridBarrier
 using LinearAlgebra
 using SparseArrays

src/MultiGridBarrierMPI.jl

Lines changed: 21 additions & 21 deletions
@@ -2,7 +2,7 @@
 MultiGridBarrierMPI

 A module that provides a convenient interface for using MultiGridBarrier with MPI
-distributed types through HPCLinearAlgebra.
+distributed types through HPCSparseArrays.

 # Exports
 - `fem1d_mpi`: Creates an MPI-based Geometry from fem1d parameters
@@ -44,24 +44,24 @@ sol_native = mpi_to_native(sol)
 module MultiGridBarrierMPI

 using MPI
-using HPCLinearAlgebra
-using HPCLinearAlgebra: HPCVector, HPCMatrix, HPCSparseMatrix, io0
-using HPCLinearAlgebra: HPCVector_local, HPCMatrix_local, HPCSparseMatrix_local
-using HPCLinearAlgebra: HPCBackend, backend_cpu_mpi, eltype_backend, indextype_backend
+using HPCSparseArrays
+using HPCSparseArrays: HPCVector, HPCMatrix, HPCSparseMatrix, io0
+using HPCSparseArrays: HPCVector_local, HPCMatrix_local, HPCSparseMatrix_local
+using HPCSparseArrays: HPCBackend, backend_cpu_mpi, eltype_backend, indextype_backend
 using LinearAlgebra
 using SparseArrays
 using MultiGridBarrier
 using MultiGridBarrier: Geometry, AMGBSOL, ParabolicSOL, fem1d, FEM1D, fem3d, FEM3D, parabolic_solve, amgb
 using PrecompileTools

 # ============================================================================
-# MultiGridBarrier API Implementation for HPCLinearAlgebra Types
+# MultiGridBarrier API Implementation for HPCSparseArrays Types
 # ============================================================================

 # Import the functions we need to extend
 import MultiGridBarrier: amgb_zeros, amgb_all_isfinite, amgb_diag, amgb_blockdiag, map_rows, map_rows_gpu, vertex_indices, _raw_array, _to_cpu_array, _rows_to_svectors

-# amgb_zeros: Create distributed zero matrices/vectors using Base.zeros from HPCLinearAlgebra
+# amgb_zeros: Create distributed zero matrices/vectors using Base.zeros from HPCSparseArrays
 # New API: zeros(T, Ti, HPCSparseMatrix, backend, m, n) - extract backend from existing matrix
 MultiGridBarrier.amgb_zeros(A::HPCSparseMatrix{T,Ti,B}, m, n) where {T,Ti,B} =
     zeros(T, Ti, HPCSparseMatrix, A.backend, m, n)
@@ -77,7 +77,7 @@ MultiGridBarrier.amgb_zeros(A::LinearAlgebra.Adjoint{T, <:HPCMatrix{T,B}}, m, n)
 # amgb_zeros for vectors (used in multigrid coarsening)
 # New API: zeros(T, HPCVector, backend, m) - need to map backend type to instance
 # This is a bit hacky but works for the known backend types
-using HPCLinearAlgebra: backend_cpu_serial
+using HPCSparseArrays: backend_cpu_serial

 # Cache for GPU backend instances (created lazily when needed)
 # Key: (T, Ti) -> backend instance
@@ -89,23 +89,23 @@ function _backend_instance_from_type(::Type{B}) where B
     Ti = B.parameters[2]
     device_type = B.parameters[3]

-    if device_type === HPCLinearAlgebra.DeviceCPU
+    if device_type === HPCSparseArrays.DeviceCPU
         comm_type = B.parameters[4]
-        if comm_type === HPCLinearAlgebra.CommSerial
+        if comm_type === HPCSparseArrays.CommSerial
             return backend_cpu_serial(T, Ti)
         else
             return backend_cpu_mpi(T, Ti)
         end
-    elseif device_type === HPCLinearAlgebra.DeviceCUDA
+    elseif device_type === HPCSparseArrays.DeviceCUDA
         cache_key = (T, Ti, device_type)
         if !haskey(_GPU_BACKEND_CACHE, cache_key)
-            _GPU_BACKEND_CACHE[cache_key] = HPCLinearAlgebra.backend_cuda_mpi(T, Ti)
+            _GPU_BACKEND_CACHE[cache_key] = HPCSparseArrays.backend_cuda_mpi(T, Ti)
         end
         return _GPU_BACKEND_CACHE[cache_key]
-    elseif device_type === HPCLinearAlgebra.DeviceMetal
+    elseif device_type === HPCSparseArrays.DeviceMetal
         cache_key = (T, Ti, device_type)
         if !haskey(_GPU_BACKEND_CACHE, cache_key)
-            _GPU_BACKEND_CACHE[cache_key] = HPCLinearAlgebra.backend_metal_mpi(T, Ti)
+            _GPU_BACKEND_CACHE[cache_key] = HPCSparseArrays.backend_metal_mpi(T, Ti)
         end
         return _GPU_BACKEND_CACHE[cache_key]
     else
@@ -149,24 +149,24 @@ MultiGridBarrier.amgb_diag(A::HPCMatrix{T,B}, z::Vector{T}, m=length(z), n=lengt
 # amgb_blockdiag: Block diagonal concatenation
 MultiGridBarrier.amgb_blockdiag(args::HPCSparseMatrix{T,Ti,AV}...) where {T,Ti,AV} = blockdiag(args...)

-# map_rows and map_rows_gpu: Delegate to HPCLinearAlgebra implementations
+# map_rows and map_rows_gpu: Delegate to HPCSparseArrays implementations
 # Use AbstractHPCVector/AbstractHPCMatrix union types for clean dispatch

 const AbstractHPCVector = HPCVector
 const AbstractHPCMatrix = Union{HPCMatrix, HPCSparseMatrix}
 const AnyMPI = Union{HPCVector, HPCMatrix, HPCSparseMatrix}

 # map_rows: single method that handles all MPI combinations
-# The key insight: if ANY argument is an MPI type, delegate to HPCLinearAlgebra
+# The key insight: if ANY argument is an MPI type, delegate to HPCSparseArrays
 function MultiGridBarrier.map_rows(f, A::AnyMPI, args...)
-    HPCLinearAlgebra.map_rows(f, A, args...)
+    HPCSparseArrays.map_rows(f, A, args...)
 end

-# map_rows_gpu: True GPU execution via HPCLinearAlgebra.map_rows_gpu
+# map_rows_gpu: True GPU execution via HPCSparseArrays.map_rows_gpu
 # Barrier functions now receive row data via broadcasting (no scalar indexing)
 # thanks to Q.args being splatted here. This enables true GPU kernel execution.
 function MultiGridBarrier.map_rows_gpu(f, A::AnyMPI, args...)
-    HPCLinearAlgebra.map_rows_gpu(f, A, args...) # True GPU path
+    HPCSparseArrays.map_rows_gpu(f, A, args...) # True GPU path
 end

 # _raw_array: Extract raw array from MPI wrappers
@@ -188,8 +188,8 @@ MultiGridBarrier._to_cpu_array(x::HPCMatrix) = Array(x.A)
 MultiGridBarrier._to_cpu_array(x::HPCVector) = Array(x.v)

 # vertex_indices for MPI types
-MultiGridBarrier.vertex_indices(A::HPCVector) = HPCLinearAlgebra.vertex_indices(A)
-MultiGridBarrier.vertex_indices(A::HPCMatrix) = HPCLinearAlgebra.vertex_indices(A)
+MultiGridBarrier.vertex_indices(A::HPCVector) = HPCSparseArrays.vertex_indices(A)
+MultiGridBarrier.vertex_indices(A::HPCMatrix) = HPCSparseArrays.vertex_indices(A)

 # ============================================================================
 # Type Conversion