
Commit 1c4b978

Merge pull request #937: Fix documentation build

2 parents a4ad6e7 + 7f27957

3 files changed: 11 additions & 6 deletions


docs/make.jl

Lines changed: 5 additions & 0 deletions
```diff
@@ -82,6 +82,11 @@ makedocs(
     # wants to run the linkchecks locally they can set the environment variable `CI=true`.
     linkcheck = get(ENV, "CI", "false") == "true",
     linkcheck_useragent = nothing,
+    linkcheck_ignore = [
+        "https://mooseframework.inl.gov/help/troubleshooting.html",
+        r"https://docs\.open-mpi\.org/.*",
+        r"https://mpi4py\.readthedocs\.io/.*",
+    ],
 )
 
 deploydocs(
```
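For context, Documenter.jl's `linkcheck_ignore` accepts both exact URL strings and `Regex` patterns, which is what the added lines rely on. A minimal sketch of the pattern this commit applies (the `sitename` is a placeholder, not taken from this repository):

```julia
# Sketch of the linkcheck setup added in this commit.
# `sitename` is a placeholder for illustration.
using Documenter

makedocs(
    sitename = "Example.jl",
    # Only run the (slow, network-dependent) link checks on CI.
    linkcheck = get(ENV, "CI", "false") == "true",
    linkcheck_ignore = [
        # Exact string: skips this one URL only.
        "https://mooseframework.inl.gov/help/troubleshooting.html",
        # Regex: skips every URL on this host.
        r"https://docs\.open-mpi\.org/.*",
    ],
)
```

Regex entries are useful when an entire host intermittently rejects the link checker's requests, while string entries pin down a single known-bad page.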

docs/src/configuration.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -51,11 +51,11 @@ This is the recommended way to use MPI.jl. By default, MPI.jl will use
 `MPICH_jll` as jll MPI backend.
 
 You can select from four different jll MPI binaries:
-- [`MPICH_jll`](http://www.mpich.org/), the default
+- [`MPICH_jll`](https://www.mpich.org/), the default
 - [`OpenMPI_jll`](https://www.open-mpi.org/), an alternative to MPICH
 - [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline), a
   forwarding MPI implementation that uses another MPI implementation
-- [`MicrosoftMPI_jll`](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
+- [`MicrosoftMPI_jll`](https://learn.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
   for Windows
 
 For example, to switch to OpenMPI, you would first use MPIPreferenes.jl to switch:
```
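For reference, the switch mentioned in that last context line goes through MPIPreferences.jl; a minimal sketch (run in the target project, then restart Julia so MPI.jl loads the new backend):

```julia
# Sketch: select OpenMPI_jll as the jll MPI backend via MPIPreferences.jl.
using MPIPreferences
MPIPreferences.use_jll_binary("OpenMPI_jll")
# Restart Julia afterwards; the preference is read when MPI.jl is loaded.
```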

docs/src/usage.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -97,19 +97,19 @@ send and receive buffers for point-to-point and collective operations (they may
 
 ### CUDA
 
-Successfully running the [alltoall\_test\_cuda.jl](../examples/alltoall_test_cuda.jl)
+Successfully running the [alltoall\_test\_cuda.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_cuda.jl)
 should confirm your MPI implementation to have the CUDA support enabled. Moreover, successfully running the
-[alltoall\_test\_cuda\_multigpu.jl](../examples/alltoall_test_cuda_multigpu.jl) should confirm
+[alltoall\_test\_cuda\_multigpu.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_cuda_multigpu.jl) should confirm
 your CUDA-aware MPI implementation to use multiple Nvidia GPUs (one GPU per rank).
 
 If using OpenMPI, the status of CUDA support can be checked via the
 [`MPI.has_cuda()`](@ref) function.
 
 ### ROCm
 
-Successfully running the [alltoall\_test\_rocm.jl](../examples/alltoall_test_rocm.jl)
+Successfully running the [alltoall\_test\_rocm.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_rocm.jl)
 should confirm your MPI implementation to have the ROCm support (AMDGPU) enabled. Moreover, successfully running the
-[alltoall\_test\_rocm\_multigpu.jl](../examples/alltoall_test_rocm_multigpu.jl) should confirm
+[alltoall\_test\_rocm\_multigpu.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_rocm_multigpu.jl) should confirm
 your ROCm-aware MPI implementation to use multiple AMD GPUs (one GPU per rank).
 
 If using OpenMPI, the status of ROCm support can be checked via the
```
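As a companion to those docs changes, here is a minimal sketch of verifying CUDA-aware MPI, assuming CUDA.jl is installed and the underlying MPI library was built with CUDA support (the ROCm case is analogous with AMDGPU.jl):

```julia
# Sketch: pass a GPU buffer directly to an MPI collective.
# Assumes a CUDA-aware MPI backend; run with e.g. `mpiexecjl -n 2 julia cuda_check.jl`.
using MPI, CUDA

MPI.Init()
comm = MPI.COMM_WORLD

# With OpenMPI, CUDA support can be queried directly, as the docs note:
@show MPI.has_cuda()

# Device buffers are handed straight to MPI -- no host staging.
sendbuf = CUDA.fill(Float32(MPI.Comm_rank(comm)), 4)
recvbuf = MPI.Allreduce(sendbuf, +, comm)

MPI.Finalize()
```

Per the docs text above, the `MPI.has_cuda()` query applies to OpenMPI; for other implementations, successfully running the all-to-all test scripts is itself the check.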
