diff --git a/docs/make.jl b/docs/make.jl
index 9a1d84fe5..a13b8f367 100644
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -82,6 +82,11 @@ makedocs(
     # wants to run the linkchecks locally they can set the environment variable `CI=true`.
     linkcheck = get(ENV, "CI", "false") == "true",
     linkcheck_useragent = nothing,
+    linkcheck_ignore = [
+        "https://mooseframework.inl.gov/help/troubleshooting.html",
+        r"https://docs\.open-mpi\.org/.*",
+        r"https://mpi4py\.readthedocs\.io/.*",
+    ],
 )

 deploydocs(
diff --git a/docs/src/configuration.md b/docs/src/configuration.md
index 61e0fd83a..125a20bcc 100644
--- a/docs/src/configuration.md
+++ b/docs/src/configuration.md
@@ -51,11 +51,11 @@ This is the recommended way to use MPI.jl.

 By default, MPI.jl will use `MPICH_jll` as jll MPI backend. You can select from four different jll MPI binaries:

-- [`MPICH_jll`](http://www.mpich.org/), the default
+- [`MPICH_jll`](https://www.mpich.org/), the default
 - [`OpenMPI_jll`](https://www.open-mpi.org/), an alternative to MPICH
 - [`MPItrampoline_jll`](https://github.com/eschnett/MPItrampoline), a forwarding MPI
   implementation that uses another MPI implementation
-- [`MicrosoftMPI_jll`](https://docs.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
+- [`MicrosoftMPI_jll`](https://learn.microsoft.com/en-us/message-passing-interface/microsoft-mpi)
   for Windows

 For example, to switch to OpenMPI, you would first use MPIPreferenes.jl to switch:
diff --git a/docs/src/usage.md b/docs/src/usage.md
index a9e1876a1..ee14cc31e 100644
--- a/docs/src/usage.md
+++ b/docs/src/usage.md
@@ -97,9 +97,9 @@ send and receive buffers for point-to-point and collective operations (they may

 ### CUDA

-Successfully running the [alltoall\_test\_cuda.jl](../examples/alltoall_test_cuda.jl)
+Successfully running the [alltoall\_test\_cuda.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_cuda.jl)
 should confirm your MPI implementation to have the CUDA support enabled. Moreover, successfully running the
-[alltoall\_test\_cuda\_multigpu.jl](../examples/alltoall_test_cuda_multigpu.jl) should confirm
+[alltoall\_test\_cuda\_multigpu.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_cuda_multigpu.jl) should confirm
 your CUDA-aware MPI implementation to use multiple Nvidia GPUs (one GPU per rank).

 If using OpenMPI, the status of CUDA support can be checked via the
@@ -107,9 +107,9 @@ If using OpenMPI, the status of CUDA support can be checked via the

 ### ROCm

-Successfully running the [alltoall\_test\_rocm.jl](../examples/alltoall_test_rocm.jl)
+Successfully running the [alltoall\_test\_rocm.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_rocm.jl)
 should confirm your MPI implementation to have the ROCm support (AMDGPU) enabled. Moreover, successfully running the
-[alltoall\_test\_rocm\_multigpu.jl](../examples/alltoall_test_rocm_multigpu.jl) should confirm
+[alltoall\_test\_rocm\_multigpu.jl](https://github.com/JuliaParallel/MPI.jl/blob/master/docs/examples/alltoall_test_rocm_multigpu.jl) should confirm
 your ROCm-aware MPI implementation to use multiple AMD GPUs (one GPU per rank).

 If using OpenMPI, the status of ROCm support can be checked via the
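
The `configuration.md` hunk above touches the list of selectable jll MPI binaries; as a point of reference, a minimal sketch of the switch it alludes to ("to switch to OpenMPI, you would first use MPIPreferences.jl"), using the `MPIPreferences.use_jll_binary` call from MPIPreferences.jl:

```julia
# Minimal sketch: select the OpenMPI_jll backend for MPI.jl via MPIPreferences.
# After this call, Julia needs to be restarted so MPI.jl picks up the new preference.
using MPIPreferences

MPIPreferences.use_jll_binary("OpenMPI_jll")
```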
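For the `usage.md` hunks, a hedged sketch of the CUDA-support check the surrounding text refers to, assuming the installed MPI.jl version provides `MPI.has_cuda()` (the alltoall example links fixed above remain the authoritative test):

```julia
# Sketch: ask the underlying MPI library whether it was built with CUDA support
# before attempting the CUDA-aware alltoall examples.
using MPI

MPI.Init()
@show MPI.has_cuda()   # true if the selected MPI backend reports CUDA support
MPI.Finalize()
```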