Parallel HDF5: Try fixing strict collective requirements of HDF5 >= 2.0#1862
franzpoeschel wants to merge 107 commits into openPMD:dev
Conversation
force-pushed from 11ba7aa to b6ea4b4
force-pushed from a5c1985 to bcb55ef
{
template <typename T, typename RecordType>
PostProcessConvertedAttributeImpl<T, RecordType>::
PostProcessConvertedAttributeImpl(RecordType record_in, handler_t reader_in)
Check notice — Code scanning / CodeQL: Large object passed by value (Note)
REQUIRE(r.numAttributes() == 0);
// TODO: unitSI
// REQUIRE(r["x"].numAttributes() == 0);
// REQUIRE(r["y"].numAttributes() == 0);
Check notice — Code scanning / CodeQL: Commented-out code (Note, test)
force-pushed from 48b4c2a to 07fb558
CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.
force-pushed from 4abee4d to 38d3852
test/ParallelIOTest.cpp (outdated) — Check notice — Code scanning / CodeQL: Unused static function (Note, test)
test/ParallelIOTest.cpp (outdated) — Check notice — Code scanning / CodeQL: Unused static function (Note, test)
force-pushed from 6b2454b to 4fcb328
This makes it easier to keep MPI processes in sync
These were unnecessary, but they snuck WRITE_ATT tasks into the skeleton flush.
Not so great, but let's keep that for now
This reverts commit 6a5c9f5.
This reverts commit 246609ff5fbe5edb68119b7f3b29a40e7bf23d2d.
This reverts commit 2df195749b53b0a54832585899bd469d35f81d6d.
This reverts commit 07fb558.
force-pushed from 4fcb328 to 3812746
{
    defaultAttribute(*this, "gridUnitSI")
        .template withSetter<Mesh>(1.0, &Mesh::setGridUnitSI)
        .withReader(float_types, require_type<std::vector<double>>())(wor);
Check notice — Code scanning / CodeQL: Commented-out code (Note)
// if (access::write(IOHandler()->m_frontendAccess))
// {
//     commitStructuralSetup();
// }
Check notice — Code scanning / CodeQL: Commented-out code (Note)
It seems that HDF5 versions 2.0 and 2.1 have become considerably pickier about metadata definitions in parallel setups, leading to hangs.
Previously it was enough to define metadata consistently across ranks; now we apparently have to keep the exact same order of operations on every rank.
This is bad for the Span API, which runs internal flushes for structural setup.
- `resetDataset()`
- `flushParticlesPath` and `flushMeshesPath` functions: these unnecessarily leaked attribute flushes into the structure setup
- `resetDataset()`. Best idea: add the new logic to a new API call `commitStructuralSetup()` or so.
- `defer_type`
- `commitStructuralSetup()`
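The ordering constraint described above can be sketched in plain parallel HDF5. This is a minimal, hedged illustration — not openPMD's actual code: it assumes MPI and a parallel HDF5 build, and the group handle and attribute names are invented for the example. It only shows the kind of rank-dependent metadata ordering that the stricter collective requirements forbid.

```c
/* Hedged sketch (not openPMD code): illustrates the stricter
 * collective-metadata ordering requirement of HDF5 >= 2.0.
 * Requires MPI and a parallel HDF5 build; error checking omitted. */
#include <hdf5.h>
#include <mpi.h>

static void create_scalar_attr(hid_t loc, const char *name)
{
    hid_t space = H5Screate(H5S_SCALAR);
    hid_t attr = H5Acreate2(
        loc, name, H5T_NATIVE_DOUBLE, space, H5P_DEFAULT, H5P_DEFAULT);
    double one = 1.0;
    H5Awrite(attr, H5T_NATIVE_DOUBLE, &one);
    H5Aclose(attr);
    H5Sclose(space);
}

void define_metadata(hid_t group, int mpi_rank)
{
    /* Previously sufficient: each rank defines the same attributes,
     * in whatever order. With the stricter collective requirements,
     * a rank-dependent order like this can hang inside HDF5: */
    if (mpi_rank == 0)
    {
        create_scalar_attr(group, "gridUnitSI");
        create_scalar_attr(group, "timeUnitSI");
    }
    else
    {
        create_scalar_attr(group, "timeUnitSI"); /* mismatched order! */
        create_scalar_attr(group, "gridUnitSI");
    }
    /* Every rank must now issue the identical sequence of metadata
     * operations, which is why internal flushes interleaved into user
     * code (as with the Span API) become problematic. */
}
```

This is also why collecting structural setup into one deterministic step (the proposed `commitStructuralSetup()`) helps: it gives every rank the same operation sequence regardless of what the user code does in between.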