MPI for 2D parabolic system on conforming P4est mesh #2886
ranocha merged 29 commits into trixi-framework:main
Conversation
Review checklist
This checklist is meant to assist creators of PRs (to let them know what reviewers will typically look for) and reviewers (to guide them in a structured review process). Items do not need to be checked explicitly for a PR to be eligible for merging.
- Purpose and scope
- Code quality
- Documentation
- Testing
- Performance
- Verification
Created with ❤️ by the Trixi.jl community.
Codecov Report
❌ Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
##             main    #2886   +/-  ##
========================================
  Coverage   97.08%   97.08%
========================================
  Files         621      622     +1
  Lines       48045    48222   +177
========================================
+ Hits        46642    46816   +174
- Misses       1403     1406     +3
Mmm, the tests seem to fail due to excessive allocations. Looking at the allocations from the analysis output, I would say the MPI allocations are similar on the parabolic and hyperbolic sides (with the parabolic side, of course, needing more communication). Thus, I would increase the acceptance threshold for these cases, but I may be misunderstanding the issue, so I am happy to receive a second opinion.
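For context, the allocation check in the Trixi.jl test suite follows roughly this pattern (sketch only: the elixir path and the 1000-byte threshold are illustrative, and this is the value such a change would relax for the MPI cases):

```julia
using Test
using Trixi

# Run a small elixir first so that `sol` and `semi` are defined in scope
# (elixir chosen purely for illustration).
trixi_include(joinpath(examples_dir(), "tree_2d_dgsem",
                       "elixir_advection_basic.jl"))

# The test suite calls rhs! once more after the run and asserts that it
# stays below an allocation threshold; hyperbolic-parabolic setups check
# the parabolic RHS analogously.
let t = sol.t[end], u_ode = copy(sol.u[end]), du_ode = similar(sol.u[end])
    @test (@allocated Trixi.rhs!(du_ode, u_ode, semi, t)) < 1000
end
```

Raising the bound for the MPI runs would amount to increasing the `1000` in such checks for the affected test cases.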
ranocha
left a comment
Thanks a lot for this contribution! We will review the implementation in detail later.
Please consider adding your name to https://github.com/trixi-framework/Trixi.jl/blob/main/AUTHORS.md.
…erbolic communication, now just 3 x communication)
DanielDoehring
left a comment
Looks already quite good!
So you can reuse the "hyperbolic" MPI interfaces, i.e., you need no new data structures? That would be very nice.
Did you test this implementation on a genuine distributed-memory system, i.e., on a cluster with truly separate sockets requested?
Removed dispatch to unsupported T8codeMeshParallel Co-authored-by: Daniel Doehring <doehringd2@gmail.com>
Yes, it reuses the "hyperbolic" MPI interfaces. I have tested the 3D version on multi-node simulations already and it works, but I still need to add a speed-up test to ensure it is working reasonably well. I can do that for one of the test cases.
ranocha
left a comment
Thanks a lot for your contribution!
adhere to mpi internal functions Co-authored-by: Hendrik Ranocha <ranocha@users.noreply.github.com>
Thanks a lot for this detailed investigation! Can you try one rank per node (that would be the configuration with the lowest amount of communication) and multiple ranks (say 2, 4, 8)?
To clarify: you want to see N = R (R/N = 1) with T fixed at, let's say, T = 16, and then vary N from 1 to 12. This way I would only saturate ~12.5% of the available cores per node. Furthermore, I would keep the problem size the same. To better understand your question: what information do you hope to gain from this comparison? I presume the idea is to differentiate between memory limits and compute limits more clearly?
So I do not see any good reason why, in a practical simulation, I would have more than one rank per shared-memory unit (node), so I am interested in that particular case. About problem size: I guess keeping the problem size fixed for the moment (strong scaling) is fine, although you can also try increasing the problem size (weak scaling), but maybe start with something smaller in that case :)
Okay, I am a bit surprised. Coming from pure MPI parallelization, I would always try to saturate the CPUs (or close to all of them). Comparing R/N = 8 to R/N = 16 suggests that R/N = 16 is always faster, and from the PID slopes I would not expect R/N = 8 to ever catch up with the other one. But I will try to see whether lower CPU utilization is offset by better communication in these cases.
Yeah, if you only have MPI parallelism, there is no way around having more than one rank per node. Concerning saturating a node: unless you request exclusive node access, you should only be billed for the cores/threads you request, right? So in that sense it seems natural to me to request more nodes. As a disclaimer: I have not run any simulations beyond 4 ranks or so with Trixi; it would be good to get some info from @sloede on this matter.
I'll try to take a look at this as soon as possible next week (though I have a proposal deadline on Thursday, I'll try my best).
sloede
left a comment
As far as I have checked, the code looks good to me, thanks a lot! This is a very nice extension indeed, and sets up nicely for 3D support as well 💪
Kudos also on the MPI cache reuse!
One additional question: Have you also run a comparison of at least one non-trivial setup in parallel and verified that you get exactly the same results as in the serial case, i.e., binary-identical error norms? IIRC, this should be the case at least for the hyperbolic MPI implementation, and probably also the BR1 implementation (since it's symmetric), but does it also work for LDG?
If you haven't, it would be good to at least run one of the simulations a bit longer than in the tests to ensure that there is no funny business going on that will manifest itself only after more than just a few time steps.
start_mpi_receive!(cache.mpi_cache)
end

@assert isempty(eachmortar(dg, cache)) "Nonconforming meshes are not yet supported on MPI parallel P4estMesh."
Is this check sufficient to guarantee no broken simulations? What if there are only mortars at an MPI interface - would this also be caught?
Note that if it is not, I think it would be OK to merge this anyway, but it should then be addressed in the next PR (which I assume #2888 will do).
If the mortars are not at the MPI interface, the code should still work, as it reuses the same serial or threaded mortar treatment. As the MPI mortars are already working in #2888, I would also not worry too much.
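A stricter variant of the guard could also reject MPI mortars explicitly. A sketch only, assuming an `eachmpimortar` iterator analogous to the hyperbolic one is available for these containers:

```julia
# Hypothetical extension of the existing assertion (not the PR's code):
# also reject nonconformity that appears only at MPI interfaces.
@assert isempty(eachmortar(dg, cache)) &&
        isempty(eachmpimortar(dg, cache)) "Nonconforming meshes (including MPI mortars) are not yet supported on MPI parallel P4estMesh."
```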
I have not run this 2D version for longer yet, but the implementation mirrors the 3D implementation from #2880. For the 3D cases I simulated the Taylor-Green vortex and compared K and epsilon, which matched perfectly back then:
The DNS data from Zirwes et al. (2023), which I had lying around, is just for reference and plausibility. The second row shows the total dissipation (-dK/dt), while the third shows the resolved and numerical dissipation. The curves match what I would expect for an implicit LES and, more importantly, both curves match perfectly. Regarding the LDG method, I did not try that one. As I am not very familiar with the DG method, could you suggest one of the examples for which both exist? Then I can try it. Maybe I can rerun the lid-driven cavity?
> If the mortars are not at the MPI interface, the code should still work, as it reuses the same serial or threaded mortar treatment. As the MPI mortars are already working in #2888, I would also not worry too much.
I agree!
> I have not run this 2D version for longer yet.

Could you maybe just run a comparison of, e.g., the 2D lid-driven cavity case: once in serial for 100 time steps, once in parallel with MPI, and then compare the L2/Linf errors? IMHO they should be identical.
(but even if they're not, this is not a merge stopper for me)
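Such a comparison could be scripted roughly as follows (a sketch: the elixir filename and the `tspan` override are assumptions):

```julia
using Trixi

# Run the 2D lid-driven cavity for a fixed short time span; the
# AnalysisCallback reports the final L2/Linf error norms.
trixi_include(joinpath(examples_dir(), "p4est_2d_dgsem",
                       "elixir_navierstokes_lid_driven_cavity.jl"),
              tspan = (0.0, 0.1))
# Running this script once plainly with `julia` and once via
# `mpiexec -n 4 julia ...` should print bit-identical error norms
# if the invariance discussed above holds.
```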
Awesome! This is great to see that the results are still invariant to the MPI & multithreading parallelizations!
@TJP-Karpowski Please also add your name to the authors list in
I had seen that in #2361 and #2284 there were discussions on the scaling of Trixi on a single node. I also timed the lid-driven cavity test cases (with less refinement than for the multi-node scaling), again on a single node, and got this scaling plot:
I utilize 8 CPUs per rank in these simulations, which corresponds to the NUMA domain size. The drop from 1T to 8T is similar to the plots in #2361 and #2284. Increasing the node utilization further improves the scaling with MPI. Near full utilization it drops again slightly, but to me that seems quite acceptable compared to the threading-only approach. I presume that for the case of fewer cores I could also utilize, e.g., 4 ranks with 2 threads each to get the scaling curve closer to linear.
return nothing
end

function calc_gradient_local!(gradients, u_transformed, t,
I really like this singled-out function. I also wondered whether it makes sense to do the same for the prolong2 and calc_flux building blocks. Maybe not in this PR, but something we could consider when the 3D version is added; it would save us some lines of code and make the important building blocks clearer.
Can you please open an issue for this (if Jeremy or you do not want to work on this immediately in the next PR)?
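The benefit of singling out the local part is that it can run while MPI messages are in flight. A generic MPI.jl toy example of that overlap pattern, independent of Trixi's internals (all variable names illustrative):

```julia
using MPI

# Overlap pattern: post nonblocking halo exchange, do the interior
# ("local") work while messages are in flight, then wait and add the
# boundary contributions. Run with e.g. `mpiexec -n 2 julia demo.jl`.
MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
n = MPI.Comm_size(comm)
left, right = mod(rank - 1, n), mod(rank + 1, n)

u = fill(Float64(rank + 1), 8)   # rank-local data
halo = zeros(2)                  # receive buffer: [from left, from right]

reqs = [MPI.Irecv!(view(halo, 1:1), comm; source = left, tag = 0),
        MPI.Irecv!(view(halo, 2:2), comm; source = right, tag = 1),
        MPI.Isend(u[end:end], comm; dest = right, tag = 0),
        MPI.Isend(u[1:1], comm; dest = left, tag = 1)]

interior = sum(@view u[2:(end - 1)])  # analogue of calc_gradient_local!

MPI.Waitall(reqs)                     # finish communication
total = interior + u[1] + u[end] + sum(halo)  # boundary contributions
MPI.Finalize()
```

Pulling `prolong2` and `calc_flux` building blocks apart the same way would make this interleaving explicit for each stage.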
DanielDoehring
left a comment
Thanks again, cool stuff!





MPI for 2D parabolic system on conforming P4est mesh
This PR adds MPI support for the parabolic RHS on conforming 2D P4est meshes. In contrast to #2880, the cache_parabolic is not extended; the hyperbolic cache is reused.
Multiple existing test cases are repeated within the MPI tests and return the same results. Most notably, the surface integrals of the
elixir_navierstokes_NACA0012airfoil_mach08 test cases return the same values as the local version. The analysis of surface integrals is extended to include an MPI reduce on parallel P4est (and T8code) grids, which enabled this analysis. Based on this PR and #2881, the method will be extended to allow for AMR and MPI mortars.
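The MPI reduction of the surface integrals could look roughly like this (a sketch with MPI.jl; the variable names are illustrative, not the PR's exact code):

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
# Each rank integrates only over the boundary faces it owns ...
local_integral = 1.0  # stand-in for the rank-local contribution
# ... then the partial sums are reduced; the result is valid on root only.
total_integral = MPI.Reduce(local_integral, +, comm)
if MPI.Comm_rank(comm) == 0
    println("surface integral: ", total_integral)
end
MPI.Finalize()
```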
Disclaimer
LLMs have been used to aid in the PR.
Funding Statement
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project Number 237267381 – TRR 150.