A Julia package that bridges MultiGridBarrier.jl and HPCSparseArrays.jl for distributed multigrid barrier computations using native MPI types.
## Overview
HPCMultiGridBarrier.jl extends the MultiGridBarrier.jl package to work with HPCSparseArrays.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks without requiring PETSc.
## Key Features
- **1D, 2D, and 3D Support**: Full support for 1D, 2D triangular, and 3D hexahedral finite elements
- **Seamless Integration**: Drop-in replacement for MultiGridBarrier's native types
- **Pure Julia MPI**: Uses HPCSparseArrays.jl for distributed linear algebra (no external libraries required)
- **Type Conversion**: Easy conversion between native Julia arrays and MPI distributed types
- **MPI-Aware**: All operations correctly handle MPI collective requirements
- **MUMPS Solver**: Uses the MUMPS direct solver for accurate Newton iterations
Solve a 2D p-Laplace problem with distributed MPI types:

```julia
using MPI
MPI.Init()

using HPCMultiGridBarrier
using HPCSparseArrays

# Solve with MPI distributed types (L=3 refinement levels)
```
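A hypothetical completion of the snippet above: the exact signature of `fem2d_hpc_solve`, the `L` keyword, and the final conversion step are assumptions modeled on MultiGridBarrier's native `fem2d_solve` and on the conversion API in `docs/src/api.md`.

```julia
# Assumed call: fem2d_hpc_solve is exported by HPCMultiGridBarrier, but the
# positional element type and the L keyword shown here are guesses.
sol = fem2d_hpc_solve(Float64, L=3)

# Optionally bring the distributed solution back to native Julia arrays
# (hpc_to_native dispatches on AMGBSOL according to the API docs).
sol_native = hpc_to_native(sol)

MPI.Finalize()
```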
`docs/src/api.md`
# API Reference
This page provides detailed documentation for all exported functions in HPCMultiGridBarrier.jl.
!!! note "All Functions Are Collective"
    All functions documented here are **MPI collective operations**. Every MPI rank must call these functions together with the same parameters. Failure to do so will result in deadlock.
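A minimal illustration of the collective requirement; `fem2d_hpc_solve` and its arguments here are placeholders.

```julia
using MPI
MPI.Init()
using HPCMultiGridBarrier

# WRONG: only rank 0 enters the collective call, so the remaining ranks
# wait forever and the program deadlocks.
# if MPI.Comm_rank(MPI.COMM_WORLD) == 0
#     sol = fem2d_hpc_solve(Float64, L=2)
# end

# RIGHT: every rank calls the function with identical parameters.
sol = fem2d_hpc_solve(Float64, L=2)   # placeholder signature
```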
These functions provide the simplest interface for solving problems with MPI types.
### 1D Problems
```@docs
fem1d_hpc
fem1d_hpc_solve
```
### 2D Problems
```@docs
fem2d_hpc
fem2d_hpc_solve
```
### 3D Problems
```@docs
fem3d_hpc
fem3d_hpc_solve
```
## Type Conversion API
These functions convert between native Julia types and MPI distributed types. The `hpc_to_native` function dispatches on type, handling `Geometry`, `AMGBSOL`, and `ParabolicSOL` objects.
```@docs
native_to_hpc
hpc_to_native
```
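A sketch of a typical round trip; the argument types accepted by `native_to_hpc` and the solve call's signature are assumptions based on the dispatch behaviour described above.

```julia
using MPI
MPI.Init()
using HPCMultiGridBarrier, HPCSparseArrays

# Collective solve producing a distributed solution object (signature assumed).
sol_hpc = fem2d_hpc_solve(Float64, L=2)

# Bring the AMGBSOL back to native Julia arrays, e.g. for plotting on rank 0.
sol = hpc_to_native(sol_hpc)

# The reverse direction, e.g. distributing a natively built Geometry
# (the `geom` variable is purely illustrative):
# geom_hpc = native_to_hpc(geom)
```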
## Type Mappings Reference
The `AMGBSOL` type from MultiGridBarrier contains the complete solution.
## MPI and IO Utilities
### HPCSparseArrays.io0()
Returns an IO stream that only writes on rank 0:
```julia
using HPCSparseArrays

println(io0(), "This prints once from rank 0")
```
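Rank and size queries come directly from MPI.jl and combine naturally with `io0()`; a short sketch (variable names are illustrative):

```julia
using MPI
using HPCSparseArrays

MPI.Init()
myrank = MPI.Comm_rank(MPI.COMM_WORLD)  # This rank's id (0-based)
nranks = MPI.Comm_size(MPI.COMM_WORLD)  # Total number of ranks

# Printed once, from rank 0 only.
println(io0(), "Running on $nranks ranks")
```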