README.md (5 additions & 5 deletions)
@@ -7,17 +7,17 @@

 **Author:** S. Loisel

-A Julia package that bridges MultiGridBarrier.jl and HPCLinearAlgebra.jl for distributed multigrid barrier computations using native MPI types.
+A Julia package that bridges MultiGridBarrier.jl and HPCSparseArrays.jl for distributed multigrid barrier computations using native MPI types.

 ## Overview

-MultiGridBarrierMPI.jl extends the MultiGridBarrier.jl package to work with HPCLinearAlgebra.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks without requiring PETSc.
+MultiGridBarrierMPI.jl extends the MultiGridBarrier.jl package to work with HPCSparseArrays.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks without requiring PETSc.

 ## Key Features

 - **1D, 2D, and 3D Support**: Full support for 1D, 2D triangular, and 3D hexahedral finite elements
 - **Seamless Integration**: Drop-in replacement for MultiGridBarrier's native types
-- **Pure Julia MPI**: Uses HPCLinearAlgebra.jl for distributed linear algebra (no external libraries required)
+- **Pure Julia MPI**: Uses HPCSparseArrays.jl for distributed linear algebra (no external libraries required)
 - **Type Conversion**: Easy conversion between native Julia arrays and MPI distributed types
 - **MPI-Aware**: All operations correctly handle MPI collective requirements
 - **MUMPS Solver**: Uses MUMPS direct solver for accurate Newton iterations

@@ -31,7 +31,7 @@ using MPI
 MPI.Init()

 using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCSparseArrays

 # Solve with MPI distributed types (L=3 refinement levels)
docs/src/index.md (5 additions & 5 deletions)
@@ -10,17 +10,17 @@ v = string(pkgversion(MultiGridBarrierMPI))
 md"# MultiGridBarrierMPI.jl $v"
 ```

-**A Julia package that bridges MultiGridBarrier.jl and HPCLinearAlgebra.jl for distributed multigrid barrier computations.**
+**A Julia package that bridges MultiGridBarrier.jl and HPCSparseArrays.jl for distributed multigrid barrier computations.**

 ## Overview

-MultiGridBarrierMPI.jl extends the MultiGridBarrier.jl package to work with HPCLinearAlgebra.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks using pure Julia distributed types (no PETSc required).
+MultiGridBarrierMPI.jl extends the MultiGridBarrier.jl package to work with HPCSparseArrays.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks using pure Julia distributed types (no PETSc required).

 ## Key Features

 - **1D, 2D, and 3D Support**: Full support for 1D elements, 2D triangular, and 3D hexahedral finite elements
 - **Seamless Integration**: Drop-in replacement for MultiGridBarrier's native types
-- **Pure Julia MPI**: Uses HPCLinearAlgebra.jl for distributed linear algebra
+- **Pure Julia MPI**: Uses HPCSparseArrays.jl for distributed linear algebra
 - **Type Conversion**: Easy conversion between native Julia arrays and MPI distributed types
 - **MPI-Aware**: All operations correctly handle MPI collective requirements
 - **MUMPS Solver**: Uses MUMPS direct solver for accurate Newton iterations

@@ -34,7 +34,7 @@ using MPI
 MPI.Init()

 using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCSparseArrays
 using MultiGridBarrier

 # Solve with MPI distributed types (L=3 refinement levels)

@@ -71,7 +71,7 @@ Depth = 2
 This package is part of a larger ecosystem:

 - **[MultiGridBarrier.jl](https://github.com/sloisel/MultiGridBarrier.jl)**: Core multigrid barrier method implementation (1D, 2D, and 3D)
-- **[HPCLinearAlgebra.jl](https://github.com/sloisel/HPCLinearAlgebra.jl)**: Pure Julia distributed linear algebra with MPI
+- **[HPCSparseArrays.jl](https://github.com/sloisel/HPCSparseArrays.jl)**: Pure Julia distributed linear algebra with MPI
 - **MPI.jl**: Julia MPI bindings for distributed computing
docs/src/installation.md (4 additions & 4 deletions)
@@ -10,7 +10,7 @@ For HPC environments, you may want to configure MPI.jl to use your system's MPI

 ### MUMPS

-The package uses MUMPS for sparse direct solves through HPCLinearAlgebra.jl. MUMPS is typically available through your system's package manager or HPC module system.
+The package uses MUMPS for sparse direct solves through HPCSparseArrays.jl. MUMPS is typically available through your system's package manager or HPC module system.

 ## Package Installation

@@ -75,7 +75,7 @@ using MPI
 MPI.Init()

 using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCSparseArrays

 # Your parallel code here
 sol = fem2d_mpi_solve(Float64; L=3, p=1.0)

@@ -89,7 +89,7 @@ mpiexec -n 4 julia --project my_program.jl
 ```

 !!! tip "Output from Rank 0 Only"
-    Use `io0()` from HPCLinearAlgebra for output to avoid duplicate messages:
+    Use `io0()` from HPCSparseArrays for output to avoid duplicate messages:
     ```julia
     println(io0(), "This prints once from rank 0")
     ```

@@ -106,7 +106,7 @@ using Pkg; Pkg.build("MPI")

 ### MUMPS Issues

-If MUMPS fails to load, ensure it's properly installed on your system and that HPCLinearAlgebra.jl can find it.
+If MUMPS fails to load, ensure it's properly installed on your system and that HPCSparseArrays.jl can find it.
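Assembled from the usage snippets shown across these diffs, the post-rename quick-start looks roughly like the sketch below. It only uses names that appear in the changed docs (`fem2d_mpi_solve`, `io0`); anything else about those APIs is an assumption, so treat this as illustrative rather than authoritative.

```julia
# Sketch of the post-rename usage pattern, assembled from the doc snippets above.
# Run under MPI, e.g.: mpiexec -n 4 julia --project example.jl
using MPI
MPI.Init()

using MultiGridBarrierMPI
using HPCSparseArrays

# Solve a 2D problem with L=3 refinement levels; all ranks must participate,
# since the solver performs MPI collective operations.
sol = fem2d_mpi_solve(Float64; L=3, p=1.0)

# io0() (from HPCSparseArrays after the rename) returns an IO that only
# writes on rank 0, avoiding one copy of the message per rank.
println(io0(), "Solve finished")
```

The only change a user of the old docs would make is replacing `using HPCLinearAlgebra` with `using HPCSparseArrays`; the solver and I/O calls are unchanged by this PR.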