5 changes: 5 additions & 0 deletions docs/make.jl
@@ -82,6 +82,11 @@ makedocs(
# wants to run the linkchecks locally they can set the environment variable `CI=true`.
linkcheck = get(ENV, "CI", "false") == "true",
linkcheck_useragent = nothing,
+linkcheck_ignore = [
+    "https://mooseframework.inl.gov/help/troubleshooting.html",
+    r"https://docs\.open-mpi\.org/.*",
+    r"https://mpi4py\.readthedocs\.io/.*",
+],
)

deploydocs(
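As the comment in this hunk notes, the link checks only run when the `CI` environment variable is set. A minimal sketch of exercising them locally (the `--project=docs` invocation and the working directory are assumptions, not part of this PR):

```julia
# Hypothetical local docs build with link checking enabled.
# Start Julia from the repository root with the docs environment active,
# e.g. `julia --project=docs`, then:
ENV["CI"] = "true"       # makes `get(ENV, "CI", "false") == "true"` evaluate to true
include("docs/make.jl")  # runs makedocs() with linkcheck turned on
```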
4 changes: 2 additions & 2 deletions docs/src/configuration.md
@@ -51,11 +51,11 @@ This is the recommended way to use MPI.jl. By default, MPI.jl will use
`MPICH_jll` as jll MPI backend.

You can select from four different jll MPI binaries:
-- [`MPICH_jll`](http://www.mpich.org/), the default
+- [`MPICH_jll`](https://www.mpich.org/), the default
- [`OpenMPI_jll`](https://www.open-mpi.org/), an alternative to MPICH
- [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline), a
forwarding MPI implementation that uses another MPI implementation
-- [`MicrosoftMPI_jll`](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
+- [`MicrosoftMPI_jll`](https://learn.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
for Windows

For example, to switch to OpenMPI, you would first use MPIPreferences.jl to switch:
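The switching command itself lies outside this hunk; a plausible sketch, assuming the documented `MPIPreferences.use_jll_binary` API, is:

```julia
# Sketch: select the OpenMPI binary through MPIPreferences, then restart Julia
# so that MPI.jl picks up the new preference on its next load.
using MPIPreferences
MPIPreferences.use_jll_binary("OpenMPI_jll")
```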
8 changes: 4 additions & 4 deletions docs/src/usage.md
@@ -97,19 +97,19 @@ send and receive buffers for point-to-point and collective operations (they may

### CUDA

-Successfully running the [alltoall\_test\_cuda.jl](../examples/alltoall_test_cuda.jl)
+Successfully running the [alltoall\_test\_cuda.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_cuda.jl)
should confirm that your MPI implementation has CUDA support enabled. Moreover, successfully running the
-[alltoall\_test\_cuda\_multigpu.jl](../examples/alltoall_test_cuda_multigpu.jl) should confirm
+[alltoall\_test\_cuda\_multigpu.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_cuda_multigpu.jl) should confirm
that your CUDA-aware MPI implementation can use multiple Nvidia GPUs (one GPU per rank).

If using OpenMPI, the status of CUDA support can be checked via the
[`MPI.has_cuda()`](@ref) function.
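A brief sketch of that runtime query (the explicit `Init`/`Finalize` calls are only there to make it a standalone script):

```julia
# Sketch: report whether the underlying MPI library advertises CUDA support.
using MPI
MPI.Init()
@show MPI.has_cuda()
MPI.Finalize()
```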

### ROCm

-Successfully running the [alltoall\_test\_rocm.jl](../examples/alltoall_test_rocm.jl)
+Successfully running the [alltoall\_test\_rocm.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_rocm.jl)
should confirm that your MPI implementation has ROCm (AMDGPU) support enabled. Moreover, successfully running the
-[alltoall\_test\_rocm\_multigpu.jl](../examples/alltoall_test_rocm_multigpu.jl) should confirm
+[alltoall\_test\_rocm\_multigpu.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_rocm_multigpu.jl) should confirm
that your ROCm-aware MPI implementation can use multiple AMD GPUs (one GPU per rank).

If using OpenMPI, the status of ROCm support can be checked via the