Commit a54d1b2

Author: Sebastien Loisel
Commit message: Update dependency HPCLinearAlgebra → HPCSparseArrays
Parent: aa55cef

61 files changed: 805 additions and 804 deletions (only a subset of the changed files is shown below)


Project.toml

Lines changed: 4 additions & 4 deletions
@@ -1,4 +1,4 @@
-name = "MultiGridBarrierMPI"
+name = "HPCMultiGridBarrier"
 uuid = "abf18f27-d12f-4566-94ed-07bf0c385f70"
 authors = ["S. Loisel"]
 version = "0.1.1"
@@ -8,7 +8,7 @@ BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
 CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
 CUDSS_jll = "4889d778-9329-5762-9fec-0578a5d30366"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
-HPCLinearAlgebra = "537374f1-5608-4525-82fb-641dce542540"
+HPCSparseArrays = "537374f1-5608-4525-82fb-641dce542540"
 MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
 MPIPreferences = "3da0fdf6-3ccc-4f1b-acd9-58baa6c99267"
 Metal = "dde4c033-4e86-420c-a63e-0dd931031962"
@@ -20,13 +20,13 @@ StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"
 Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"

 [sources]
-HPCLinearAlgebra = {path = "../HPCLinearAlgebra.jl"}
+HPCSparseArrays = {path = "../HPCSparseArrays.jl"}

 [compat]
 BenchmarkTools = "1.6"
 CUDA = "5.9.6"
 CUDSS_jll = "0.7.1"
-HPCLinearAlgebra = "0.1"
+HPCSparseArrays = "0.1"
 MPI = "0.20"
 MPIPreferences = "0.1.11"
 Metal = "1.9.1"
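
Note that the UUID `537374f1-5608-4525-82fb-641dce542540` is unchanged, so this is a pure rename. A minimal, hypothetical migration sketch for a downstream environment that tracked the old package by path, assuming the renamed checkout lives at `../HPCSparseArrays.jl` to mirror the `[sources]` entry above:

```julia
# Hypothetical migration for an environment that depended on the old name;
# assumes the renamed checkout exists at ../HPCSparseArrays.jl.
using Pkg
Pkg.activate(".")
Pkg.rm("HPCLinearAlgebra")                  # drop the old dependency name
Pkg.develop(path="../HPCSparseArrays.jl")   # re-add under the new name
Pkg.instantiate()
```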

README.md

Lines changed: 17 additions & 17 deletions
@@ -1,23 +1,23 @@
-# MultiGridBarrierMPI.jl
+# HPCMultiGridBarrier.jl

-[![Stable](https://img.shields.io/badge/docs-stable-blue.svg)](https://sloisel.github.io/MultiGridBarrierMPI.jl/stable/)
-[![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://sloisel.github.io/MultiGridBarrierMPI.jl/dev/)
-[![Build Status](https://github.com/sloisel/MultiGridBarrierMPI.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/sloisel/MultiGridBarrierMPI.jl/actions/workflows/CI.yml?query=branch%3Amain)
-[![codecov](https://codecov.io/gh/sloisel/MultiGridBarrierMPI.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/sloisel/MultiGridBarrierMPI.jl)
+[![Stable](https://img.shields.io/badge/docs-stable-blue.svg)](https://sloisel.github.io/HPCMultiGridBarrier.jl/stable/)
+[![Dev](https://img.shields.io/badge/docs-dev-blue.svg)](https://sloisel.github.io/HPCMultiGridBarrier.jl/dev/)
+[![Build Status](https://github.com/sloisel/HPCMultiGridBarrier.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/sloisel/HPCMultiGridBarrier.jl/actions/workflows/CI.yml?query=branch%3Amain)
+[![codecov](https://codecov.io/gh/sloisel/HPCMultiGridBarrier.jl/branch/main/graph/badge.svg)](https://codecov.io/gh/sloisel/HPCMultiGridBarrier.jl)

 **Author:** S. Loisel

-A Julia package that bridges MultiGridBarrier.jl and HPCLinearAlgebra.jl for distributed multigrid barrier computations using native MPI types.
+A Julia package that bridges MultiGridBarrier.jl and HPCSparseArrays.jl for distributed multigrid barrier computations using native MPI types.

 ## Overview

-MultiGridBarrierMPI.jl extends the MultiGridBarrier.jl package to work with HPCLinearAlgebra.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks without requiring PETSc.
+HPCMultiGridBarrier.jl extends the MultiGridBarrier.jl package to work with HPCSparseArrays.jl's distributed matrix and vector types. This enables efficient parallel computation of multigrid barrier methods across multiple MPI ranks without requiring PETSc.

 ## Key Features

 - **1D, 2D, and 3D Support**: Full support for 1D, 2D triangular, and 3D hexahedral finite elements
 - **Seamless Integration**: Drop-in replacement for MultiGridBarrier's native types
-- **Pure Julia MPI**: Uses HPCLinearAlgebra.jl for distributed linear algebra (no external libraries required)
+- **Pure Julia MPI**: Uses HPCSparseArrays.jl for distributed linear algebra (no external libraries required)
 - **Type Conversion**: Easy conversion between native Julia arrays and MPI distributed types
 - **MPI-Aware**: All operations correctly handle MPI collective requirements
 - **MUMPS Solver**: Uses MUMPS direct solver for accurate Newton iterations
@@ -30,14 +30,14 @@ Solve a 2D p-Laplace problem with distributed MPI types:
 using MPI
 MPI.Init()

-using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCMultiGridBarrier
+using HPCSparseArrays

 # Solve with MPI distributed types (L=3 refinement levels)
-sol_mpi = fem2d_mpi_solve(Float64; L=3, p=1.0, verbose=false)
+sol_hpc = fem2d_hpc_solve(Float64; L=3, p=1.0, verbose=false)

 # Convert to native types for visualization
-sol_native = mpi_to_native(sol_mpi)
+sol_native = hpc_to_native(sol_hpc)

 # Only rank 0 creates the plot
 rank = MPI.Comm_rank(MPI.COMM_WORLD)
@@ -60,21 +60,21 @@ mpiexec -n 4 julia --project example.jl

 ```julia
 using Pkg
-Pkg.add(url="https://github.com/sloisel/MultiGridBarrierMPI.jl")
+Pkg.add(url="https://github.com/sloisel/HPCMultiGridBarrier.jl")
 ```

 Or for development:

 ```bash
-git clone https://github.com/sloisel/MultiGridBarrierMPI.jl
-cd MultiGridBarrierMPI.jl
+git clone https://github.com/sloisel/HPCMultiGridBarrier.jl
+cd HPCMultiGridBarrier.jl
 julia --project -e 'using Pkg; Pkg.instantiate()'
 ```

 ## Running Tests

 ```bash
-cd MultiGridBarrierMPI.jl
+cd HPCMultiGridBarrier.jl
 mpiexec -n 2 julia --project test/runtests.jl
 ```

@@ -92,7 +92,7 @@ julia --project make.jl
 This package is part of a larger ecosystem:

 - **[MultiGridBarrier.jl](https://github.com/sloisel/MultiGridBarrier.jl)**: Core multigrid barrier method implementation
-- **[HPCLinearAlgebra.jl](https://github.com/sloisel/HPCLinearAlgebra.jl)**: Pure Julia distributed linear algebra with MPI
+- **[HPCSparseArrays.jl](https://github.com/sloisel/HPCSparseArrays.jl)**: Pure Julia distributed linear algebra with MPI
 - **MPI.jl**: Julia MPI bindings for distributed computing

 ## Requirements
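
For reference, the renamed quick-start hunks assemble into a complete runnable script. This is only a sketch built from the hunks above: the README's plotting code is truncated in this diff, so the rank-0 summary below is an illustrative stand-in.

```julia
# example.jl -- renamed quick-start, assembled from the README hunks above.
using MPI
MPI.Init()

using HPCMultiGridBarrier
using HPCSparseArrays

# Solve with MPI distributed types (L=3 refinement levels)
sol_hpc = fem2d_hpc_solve(Float64; L=3, p=1.0, verbose=false)

# Convert to native types for visualization
sol_native = hpc_to_native(sol_hpc)

# Only rank 0 reports (the README's plotting step is truncated in this diff)
if MPI.Comm_rank(MPI.COMM_WORLD) == 0
    println("solved on ", MPI.Comm_size(MPI.COMM_WORLD),
            " ranks: ", typeof(sol_native))
end
```

Run it as in the README's usage line: `mpiexec -n 4 julia --project example.jl`.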

docs/Project.toml

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 [deps]
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 DocumenterTools = "35a29f4d-8980-5a13-9543-d66fff28ecb8"
-HPCLinearAlgebra = "537374f1-5608-4525-82fb-641dce542540"
+HPCSparseArrays = "537374f1-5608-4525-82fb-641dce542540"
 MultiGridBarrier = "9e2c1f1d-9131-4ad4-b32f-bd2a0b0ecd1e"
-MultiGridBarrierMPI = "abf18f27-d12f-4566-94ed-07bf0c385f70"
+HPCMultiGridBarrier = "abf18f27-d12f-4566-94ed-07bf0c385f70"

docs/make.jl

Lines changed: 8 additions & 8 deletions
@@ -1,18 +1,18 @@
 using Documenter
-using MultiGridBarrierMPI
+using HPCMultiGridBarrier
 using Pkg

 # Compute version dynamically
-version = string(pkgversion(MultiGridBarrierMPI))
+version = string(pkgversion(HPCMultiGridBarrier))

 makedocs(;
-    modules=[MultiGridBarrierMPI],
+    modules=[HPCMultiGridBarrier],
     authors="Sebastien Loisel and contributors",
-    sitename="MultiGridBarrierMPI.jl $version",
+    sitename="HPCMultiGridBarrier.jl $version",
     format=Documenter.HTML(;
         prettyurls=get(ENV, "CI", "false") == "true",
-        canonical="https://sloisel.github.io/MultiGridBarrierMPI.jl",
-        repolink="https://github.com/sloisel/MultiGridBarrierMPI.jl",
+        canonical="https://sloisel.github.io/HPCMultiGridBarrier.jl",
+        repolink="https://github.com/sloisel/HPCMultiGridBarrier.jl",
         assets=String[],
     ),
     pages=[
@@ -21,11 +21,11 @@ makedocs(;
         "User Guide" => "guide.md",
         "API Reference" => "api.md",
     ],
-    repo=Documenter.Remotes.GitHub("sloisel", "MultiGridBarrierMPI.jl"),
+    repo=Documenter.Remotes.GitHub("sloisel", "HPCMultiGridBarrier.jl"),
     warnonly=true, # Don't fail on warnings during development
 )

 deploydocs(;
-    repo="github.com/sloisel/MultiGridBarrierMPI.jl",
+    repo="github.com/sloisel/HPCMultiGridBarrier.jl",
     devbranch="main",
 )

docs/src/api.md

Lines changed: 21 additions & 21 deletions
@@ -1,6 +1,6 @@
 # API Reference

-This page provides detailed documentation for all exported functions in MultiGridBarrierMPI.jl.
+This page provides detailed documentation for all exported functions in HPCMultiGridBarrier.jl.

 !!! note "All Functions Are Collective"
     All functions documented here are **MPI collective operations**. Every MPI rank must call these functions together with the same parameters. Failure to do so will result in deadlock.
@@ -12,32 +12,32 @@ These functions provide the simplest interface for solving problems with MPI typ
 ### 1D Problems

 ```@docs
-fem1d_mpi
-fem1d_mpi_solve
+fem1d_hpc
+fem1d_hpc_solve
 ```

 ### 2D Problems

 ```@docs
-fem2d_mpi
-fem2d_mpi_solve
+fem2d_hpc
+fem2d_hpc_solve
 ```

 ### 3D Problems

 ```@docs
-fem3d_mpi
-fem3d_mpi_solve
+fem3d_hpc
+fem3d_hpc_solve
 ```

 ## Type Conversion API

 These functions convert between native Julia types and MPI distributed types.
-The `mpi_to_native` function dispatches on type, handling `Geometry`, `AMGBSOL`, and `ParabolicSOL` objects.
+The `hpc_to_native` function dispatches on type, handling `Geometry`, `AMGBSOL`, and `ParabolicSOL` objects.

 ```@docs
-native_to_mpi
-mpi_to_native
+native_to_hpc
+hpc_to_native
 ```

 ## Type Mappings Reference
@@ -104,12 +104,12 @@ The `AMGBSOL` type from MultiGridBarrier contains the complete solution:

 ## MPI and IO Utilities

-### HPCLinearAlgebra.io0()
+### HPCSparseArrays.io0()

 Returns an IO stream that only writes on rank 0:

 ```julia
-using HPCLinearAlgebra
+using HPCSparseArrays

 println(io0(), "This prints once from rank 0")
 ```
@@ -131,23 +131,23 @@ nranks = MPI.Comm_size(MPI.COMM_WORLD) # Total number of ranks
 using MPI
 MPI.Init()

-using MultiGridBarrierMPI
-using HPCLinearAlgebra
+using HPCMultiGridBarrier
+using HPCSparseArrays
 using MultiGridBarrier
 using LinearAlgebra

 # Create native geometry
 g_native = fem2d(; L=2)

 # Convert to MPI
-g_mpi = native_to_mpi(g_native)
+g_hpc = native_to_hpc(g_native)

 # Solve with MPI types
-sol_mpi = amgb(g_mpi; p=2.0)
+sol_hpc = amgb(g_hpc; p=2.0)

 # Convert back to native
-sol_native = mpi_to_native(sol_mpi)
-g_back = mpi_to_native(g_mpi)
+sol_native = hpc_to_native(sol_hpc)
+g_back = hpc_to_native(g_hpc)

 # Verify round-trip accuracy
 @assert norm(g_native.x - g_back.x) < 1e-10
@@ -162,8 +162,8 @@ g_native = fem2d(; L=2)
 id_native = g_native.operators[:id] # SparseMatrixCSC

 # MPI geometry
-g_mpi = native_to_mpi(g_native)
-id_mpi = g_mpi.operators[:id] # HPCSparseMatrix
+g_hpc = native_to_hpc(g_native)
+id_mpi = g_hpc.operators[:id] # HPCSparseMatrix

 # Convert back if needed
 id_back = SparseMatrixCSC(id_mpi) # SparseMatrixCSC
@@ -177,7 +177,7 @@ All MultiGridBarrier functions work seamlessly with MPI types:
 using MultiGridBarrier: amgb

 # Create MPI geometry
-g = fem2d_mpi(Float64; L=3)
+g = fem2d_hpc(Float64; L=3)

 # Use MultiGridBarrier functions directly
 sol = amgb(g; p=1.0, verbose=true)