Reduced memory footprint of spectral discretizations via block-diagonal sparse operators#565
Merged
Conversation
Member
How large are these matrices in general?
Contributor
Author
The matrix size is the number of degrees of freedom squared. For instance, in 3D RBC with 5 solution components and $N^3$ grid points, the full matrix has $(5 \times N^3)^2$ entries, which is very large. A block-diagonal matrix (diagonal in the components) has only $5 \times (N^3)^2$ entries, so a fifth of that. Now, since the matrices are sparse, I don't understand why you need less memory when using different scipy functions for generating the same sparse block-diagonal matrix. But that is the case in my tests.
Member
Allocation of memory in Python is not straightforward (and not efficient if done manually). Relying on libraries like scipy is strongly recommended over plain lists and the like.
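To make the sizes discussed above concrete, here is a minimal standalone sketch (not pySDC code; the sizes, density, and random blocks are made up purely for illustration) of assembling a component-block-diagonal operator with `scipy.sparse.block_diag`:

```python
import scipy.sparse as sp

N = 4        # illustrative grid points per dimension (tiny on purpose)
n = N**3     # degrees of freedom per component
ncomp = 5    # e.g. 5 solution components in 3D RBC

# One sparse operator per component (random entries, for illustration only)
blocks = [sp.random(n, n, density=0.05, format='csr', random_state=i) for i in range(ncomp)]

# Block-diagonal operator: no coupling between components is ever stored
A = sp.block_diag(blocks, format='csr')

# Bytes actually stored by the CSR representation
mem = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
print(A.shape, A.nnz, mem)
```

The stored entry count scales with $5 \times (N^3)^2$ at this density rather than $(5 \times N^3)^2$, which is the factor-of-five saving mentioned above.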
brownbaerchen added a commit to brownbaerchen/pySDC that referenced this pull request on Jan 20, 2026
#!!!!!! WARNING: RUFF FAILED !!!!!!:
# F811 Redefinition of unused `RBC2DG4R4ResARa1e5` from line 524
#  --> pySDC/projects/RayleighBenard/RBC3D_configs.py:529:7
#     |
# 529 | class RBC2DG4R4ResARa1e5(RBC2DResA):
#     |       ^^^^^^^^^^^^^^^^^^ `RBC2DG4R4ResARa1e5` redefined here
# 530 |     Tend = 100
# 531 |     res = 64
#     |
#  ::: pySDC/projects/RayleighBenard/RBC3D_configs.py:524:7
#     |
# 524 | class RBC2DG4R4ResARa1e5(RBC2DResA):
#     |       ------------------ previous definition of `RBC2DG4R4ResARa1e5` here
# 525 |     Tend = 100
# 526 |     res = 64
#     |
# help: Remove definition: `RBC2DG4R4ResARa1e5`
#
# F811 Redefinition of unused `RBC2DG4R4SDC23A10Ra1e6` from line 565
#  --> pySDC/projects/RayleighBenard/RBC3D_configs.py:578:7
#     |
# 578 | class RBC2DG4R4SDC23A10Ra1e6(RBC2DM2K3A):
#     |       ^^^^^^^^^^^^^^^^^^^^^^ `RBC2DG4R4SDC23A10Ra1e6` redefined here
# 579 |     Tend = 50
# 580 |     res = 128
#     |
#  ::: pySDC/projects/RayleighBenard/RBC3D_configs.py:565:7
#     |
# 565 | class RBC2DG4R4SDC23A10Ra1e6(RBC2DM2K3A):
#     |       ---------------------- previous definition of `RBC2DG4R4SDC23A10Ra1e6` here
# 566 |     Tend = 100
# 567 |     res = 64
#     |
# help: Remove definition: `RBC2DG4R4SDC23A10Ra1e6`
#
# Found 2 errors.
The spectral discretizations require a few operators that are block-diagonal across the degrees of freedom. Constructing them directly as block-diagonal matrices reduces the memory requirements considerably compared to constructing general block matrices that happen to have only diagonal blocks.
This is independent of the larger refactor, which is why I opened a separate PR.
I added a test that the resulting operators are the same and that memory use is indeed reduced.
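A minimal standalone sketch of this kind of equivalence check (a plain scipy example under assumed sizes, not the actual test added in the PR): build the same operator once with the dedicated block-diagonal constructor `scipy.sparse.block_diag` and once as a general block matrix via `scipy.sparse.bmat` with `None` off-diagonal blocks, then verify the two operators agree while comparing their stored bytes:

```python
import scipy.sparse as sp

n, ncomp = 64, 5   # illustrative sizes: DoFs per component, number of components
blocks = [sp.random(n, n, density=0.05, format='csr', random_state=i) for i in range(ncomp)]

# Variant 1: dedicated block-diagonal constructor
A = sp.block_diag(blocks, format='csr')

# Variant 2: general block matrix with only the diagonal blocks filled
grid = [[blocks[i] if i == j else None for j in range(ncomp)] for i in range(ncomp)]
B = sp.bmat(grid, format='csr')

# The resulting operators must be identical entry-for-entry
assert (A != B).nnz == 0

def csr_bytes(M):
    """Bytes stored by a CSR matrix (values plus index arrays)."""
    return M.data.nbytes + M.indices.nbytes + M.indptr.nbytes

# The stored footprints of the two construction paths can now be compared
print(csr_bytes(A), csr_bytes(B))
```

The equality check on the sparse difference is what the test described above boils down to; the byte counts make the memory comparison explicit rather than relying on profiler output.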