```diff
@@ -82,6 +82,11 @@ makedocs(
     # wants to run the linkchecks locally they can set the environment variable `CI=true`.
     linkcheck = get(ENV, "CI", "false") == "true",
     linkcheck_useragent = nothing,
+    linkcheck_ignore = [
+        "https://mooseframework.inl.gov/help/troubleshooting.html",
+        r"https://docs\.open-mpi\.org/.*",
+        r"https://mpi4py\.readthedocs\.io/.*",
+    ],
 )
 
 deploydocs(
```
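The net effect: `linkcheck` still runs only in CI, and the new `linkcheck_ignore` entries (plain strings for exact URLs, regexes for whole domains) skip links that are flaky or block automated checkers. A minimal sketch of exercising the link check locally, assuming the standard Documenter.jl layout with the build script at `docs/make.jl` (a hypothetical path here):

```julia
# Mimic the CI environment so the link check runs locally:
# `makedocs` above reads ENV["CI"] and only enables `linkcheck` when it is "true".
ENV["CI"] = "true"
include("docs/make.jl")  # assumed location of the Documenter build script
```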
```diff
@@ -51,11 +51,11 @@ This is the recommended way to use MPI.jl. By default, MPI.jl will use
 `MPICH_jll` as the jll MPI backend.
 
 You can select from four different jll MPI binaries:
-- [`MPICH_jll`](http://www.mpich.org/), the default
+- [`MPICH_jll`](https://www.mpich.org/), the default
 - [`OpenMPI_jll`](https://www.open-mpi.org/), an alternative to MPICH
 - [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline), a
   forwarding MPI implementation that uses another MPI implementation
-- [`MicrosoftMPI_jll`](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
+- [`MicrosoftMPI_jll`](https://learn.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
   for Windows
 
 For example, to switch to OpenMPI, you would first use MPIPreferences.jl to switch:
```
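A minimal sketch of that switch, using the documented `MPIPreferences.use_jll_binary` API (the new preference only takes effect after restarting Julia):

```julia
# Select the OpenMPI jll binary as the MPI backend for MPI.jl.
# This records the choice in LocalPreferences.toml; restart Julia to apply it.
using MPIPreferences
MPIPreferences.use_jll_binary("OpenMPI_jll")
```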
```diff
@@ -97,10 +97,10 @@ send and receive buffers for point-to-point and collective operations (they may
 
 ### CUDA
 
-Successfully running the [alltoall\_test\_cuda.jl](../examples/alltoall_test_cuda.jl)
+Successfully running the [alltoall\_test\_cuda.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_cuda.jl)
 should confirm that your MPI implementation has CUDA support enabled. Moreover, successfully running the
-[alltoall\_test\_cuda\_multigpu.jl](../examples/alltoall_test_cuda_multigpu.jl) should confirm
+[alltoall\_test\_cuda\_multigpu.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_cuda_multigpu.jl) should confirm
 that your CUDA-aware MPI implementation can use multiple Nvidia GPUs (one GPU per rank).
 
 If using OpenMPI, the status of CUDA support can be checked via the
 [`MPI.has_cuda()`](@ref) function.
```
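A minimal sketch of that check, assuming an OpenMPI-backed build (`MPI.has_cuda()` queries the underlying library's CUDA support):

```julia
# Report whether the underlying MPI library was built with CUDA support.
using MPI
MPI.Init()
@show MPI.has_cuda()  # expected to print true for a CUDA-aware MPI
MPI.Finalize()
```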
```diff
@@ -108,9 +108,9 @@
 ### ROCm
 
-Successfully running the [alltoall\_test\_rocm.jl](../examples/alltoall_test_rocm.jl)
+Successfully running the [alltoall\_test\_rocm.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_rocm.jl)
 should confirm that your MPI implementation has ROCm (AMDGPU) support enabled. Moreover, successfully running the
-[alltoall\_test\_rocm\_multigpu.jl](../examples/alltoall_test_rocm_multigpu.jl) should confirm
+[alltoall\_test\_rocm\_multigpu.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_rocm_multigpu.jl) should confirm
 that your ROCm-aware MPI implementation can use multiple AMD GPUs (one GPU per rank).
 
 If using OpenMPI, the status of ROCm support can be checked via the
 [`MPI.has_rocm()`](@ref) function.
```
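To run one of the test scripts above across ranks, here is a sketch using the `mpiexecjl` wrapper that MPI.jl can install via `MPI.install_mpiexecjl()`; it assumes the script has been downloaded into the current directory:

```julia
# Launch the ROCm multi-GPU test on 2 MPI ranks (one AMD GPU per rank).
# Assumes `mpiexecjl` is on PATH and the script was downloaded locally.
run(`mpiexecjl -n 2 julia --project alltoall_test_rocm_multigpu.jl`)
```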