Conversation
Wow, this is really nice, thanks Lander! I can give some opinions right off the bat:
I will take a proper look tomorrow and add some more comments, but this definitely is a massive improvement.
@pbrehmer I don't know the details yet, but I remember that …
This was related to the fact that the gauge fixing routine in … I would still be in favor of removing …
If we can make sure the sign matrices are real at the end of the gauge fixing procedure for real environments, we should be able to use …
pbrehmer left a comment
Again I'll have to say that I really like this! This is a big step forward in cleaning up the codebase and making things more consistent. I only have some minor comments. One thing I'm still unsure about is how we should best handle the capitalization of the algorithms since I do feel a bit torn about mixing capitalized MAK-style algorithms with lowercase ones.
It seems the updated … As far as I can tell this was a perfectly working test before, so the new pullback somehow gives …
This is the case where we artificially create a degenerate spectrum, so it does make sense that the Sylvester solver is struggling, but it even seems to struggle in the presence of Lorentzian broadening (the second error), which is odd to me. I'm not sure why the previous Sylvester solver would produce a converging result and the current one doesn't.
It seems like the new … This probably means that the way we disable the broadening and cutoff in the SVD wrapper test (PEPSKit.jl/test/utility/svd_wrapper.jl, line 118 at 5a103dd) now has some unintended consequences in the actual pullback implementation.
Hahahah yes, we should pick a reasonable value there, which probably fixes the issues. I'm assuming that was there from before, when we had issues if this was chosen too small, but I think this might be better now?
It turns out that the … I updated the choice of degeneracies to be nicely commensurate with the truncation space, which gives sensible results for me locally.
I opened some issues on discussion points that aren't directly related to decomposition handling. I think this is ready for another round of review. |
Just a small note that the actual gauge dependence warning, when using `svd_trunc_pullback`, is about the components of the adjoint variable in the subspaces of the kept degenerate eigenvalues, which are subsequently set to zero by the rule, i.e. it is about the part that is being ignored/thrown away. The components of the cotangents that mix the kept versus discarded singular values of a degenerate pair cannot be assessed by … That is different for … (Just a clarification, as I myself miscommunicated this to Lander earlier today.)
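For intuition, the gauge freedom in a degenerate subspace can be seen with a tiny pure-Julia example (illustrative only, not related to the PEPSKit.jl implementation): for a matrix with a degenerate singular value, rotating the corresponding singular vectors by any orthogonal matrix yields an equally valid SVD, so cotangent components along that rotation direction are gauge dependent.

```julia
using LinearAlgebra

# The 2x2 identity has a doubly degenerate singular value 1, so rotating the
# singular vectors by any orthogonal Q gives an equally valid factorization
# A = (U*Q) * Diagonal(S) * (V*Q)'.
A = Matrix{Float64}(I, 2, 2)
F = svd(A)
Q = [cos(0.3) -sin(0.3); sin(0.3) cos(0.3)]
U2, V2 = F.U * Q, F.V * Q

# Both factorizations reconstruct the same A, yet the factors differ:
@assert U2 * Diagonal(F.S) * V2' ≈ A
@assert !(U2 ≈ F.U)
```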
pbrehmer left a comment
This is looking good to me now if all the tests run through! I only have a minor comment (you can call @set on nested structs) but that's just cosmetics really.
Will merge this as-is and deal with the actions later, to keep things moving. |
Updates the handling of decompositions after the updates of MatrixAlgebraKit.jl v0.6.5. The main purpose was to remove all explicit `LAPACK_*` specifications from the decomposition algorithms, now that the decomposition algorithms themselves have been decoupled from the storage type in MatrixAlgebraKit.jl. In the process, I also tried to streamline the way we use decompositions a bit, attempting to bring them more in line with MatrixAlgebraKit.jl while still allowing users to hook into the implementation and switch between different reverse-rule algorithms.

Decomposition algorithms & selection
The main change is an update in how decomposition algorithms are selected.

- All of the MatrixAlgebraKit.jl decompositions are now selected through symbols that exactly match their algorithm name, e.g. `alg = :DivideAndConquer`.
- We use `:DefaultAlgorithm` whenever we can, which seems like the most suitable option.
- The specification of the iterative algorithms is kept the same as it was before.
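As a sketch of what the symbol-based selection looks like from the user side (the validation helper `select_svd_alg` and the exact set of accepted symbols here are hypothetical; only `:DivideAndConquer` and `:DefaultAlgorithm` are taken from the description above):

```julia
# Hypothetical helper: accept only symbols that exactly match a
# MatrixAlgebraKit.jl algorithm name; anything else is rejected early.
const KNOWN_SVD_ALGS = (:DefaultAlgorithm, :DivideAndConquer)  # assumed subset

function select_svd_alg(alg::Symbol)
    alg in KNOWN_SVD_ALGS ||
        throw(ArgumentError("unknown SVD algorithm symbol: $alg"))
    return alg  # the real code would resolve this to an algorithm struct
end
```

With this scheme, `select_svd_alg(:DivideAndConquer)` passes, while a stale lowercase spelling like `:divideandconquer` is caught immediately instead of failing deep inside the decomposition call.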
- I updated the docstrings of the decomposition algorithm structs `SVDAdjoint`, `EighAdjoint` and `QRAdjoint` to reflect all of these changes, and also linked directly to the relevant docstrings so users can easily find what keyword algorithms they can provide.
- I removed the `rrule_alg` specification from the user-facing interface in `leading_boundary` and `fixedpoint`. The `decomposition_alg` kwarg of `leading_boundary` or `CTMRGAlgorithm` is interpreted as a forward algorithm. Specifying a particular rrule algorithm as well can now only be done by passing a full decomposition struct, e.g. an instantiated `SVDAdjoint`, and is considered expert usage.

Decomposition implementation
- I updated the decomposition implementation to remove the intermediate `_svd_trunc!`, `_eigh_trunc!` and `_left_orth!` methods, and to only use the MatrixAlgebraKit.jl methods directly. The call signature for `svd_trunc!`, `eigh_trunc!` and `left_orth!` using `SVDAdjoint`, `EighAdjoint` and `QRAdjoint` algorithms respectively now just matches the MatrixAlgebraKit.jl signature.
- We no longer return the full `info` struct for the truncated decompositions, but only return the truncation error. Additional info such as condition numbers is just computed manually afterwards and added directly to the CTMRG info. In particular, the full decomposition is no longer returned explicitly (see also below).
- I moved the `trunc` specification to the actual decomposition structs, since MatrixAlgebraKit doesn't allow mixing instantiated algorithm structs with keyword arguments. In the forward pass, the MatrixAlgebraKit implementation is then called directly using a `TruncatedAlgorithm(alg.fwd_alg, alg.trunc)`, which seemed like the cleanest way to do things.
- The reverse rules are still handled by manual interception at the moment, even though they just directly call the MatrixAlgebraKit.jl implementations.
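The shape of the forward pass, pairing a forward algorithm with a truncation strategy and returning only the truncation error, can be sketched in plain Julia (using `LinearAlgebra.svd` as a stand-in forward algorithm; `TruncRank` and `svd_trunc_sketch` are made-up names, not the PEPSKit.jl or MatrixAlgebraKit.jl API):

```julia
using LinearAlgebra

struct TruncRank  # stand-in truncation strategy, analogous to alg.trunc
    r::Int
end

# Pair the forward algorithm with the truncation strategy, and return only the
# truncated factors plus the truncation error instead of a full info struct.
function svd_trunc_sketch(A::AbstractMatrix, trunc::TruncRank)
    F = svd(A)                        # stand-in for the forward algorithm
    r = min(trunc.r, length(F.S))
    err = norm(F.S[(r + 1):end])      # weight of the discarded singular values
    return F.U[:, 1:r], F.S[1:r], F.V[:, 1:r], err
end
```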
- At some point, we might consider removing the `SVDAdjoint`, `EighAdjoint` and `QRAdjoint` structs altogether. Then we only have to handle reverse rules for the decompositions defined directly in PEPSKit.jl, i.e. the sparse iterative ones (until they are properly implemented elsewhere, of course).

Gauge fixing and fixed-point differentiation
I updated the implementation of the `:fixed` iteration scheme to not use fixed precomputed decompositions, but rather to fix relative and absolute phases after the core iteration, using signs precomputed in the gauge fixing of the environment itself.
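A minimal sketch of what sign fixing of a real decomposition can look like (`signfix` is a hypothetical helper, not the PEPSKit.jl routine, which uses signs precomputed during environment gauge fixing): flip each column of `U` so its largest-magnitude entry is positive, and absorb the same signs into `V`, which leaves `U * Diagonal(S) * V'` unchanged.

```julia
using LinearAlgebra

# Hypothetical sign-fixing helper: makes the SVD of a real matrix unique up to
# degeneracies, so decomposing the same input twice yields the same signs.
function signfix(U::AbstractMatrix, S::AbstractVector, V::AbstractMatrix)
    s = map(eachcol(U)) do col
        v = col[argmax(abs.(col))]    # largest-magnitude entry of the column
        iszero(v) ? one(v) : sign(v)
    end
    # s .^ 2 .== 1, so scaling the columns of both U and V preserves U*S*V'
    return U .* s', S, V .* s'
end
```

Applying `signfix` to two decompositions of the same matrix that differ only by column signs then yields identical factors, which is exactly the property needed to gauge fix with precomputed signs.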
- I managed to remove `FixedSVD`, `FixedEigh` and `FixedQR` completely, which in turn means we never have to return, store or gauge-fix full untruncated decompositions anymore.
- This also addresses the `info_ctmrg` fields in `leading_boundary` discussed in #354, since the full decompositions are only ever stored when using differentiation, completely removing them from the regular forward pass.
- To make this work properly, I had to ensure that running the same decomposition on the same input twice actually gives the same result; otherwise we end up gauge fixing with the wrong signs.
  - This requires an explicit `MatrixAlgebraKit.gaugefix!(...)` call when computing the decomposition in each block.
  - We now use `ones(...)` rather than `randn` to generate the start vector.
- I updated the settings in the tests and examples to no longer use `iterscheme = :diffgauge`, and also reduced its usage in the gradient tests directly. Things are quite a bit faster now.
- Since `iterscheme = :diffgauge` is so much slower in practice, maybe we should just consider removing it altogether. In practice, it's sometimes just unusable, while all of the initial stability issues with `iterscheme = :fixed` seem to be resolved now.

Further questions
Things to address before we can merge (and hopefully tag a release):
- The capitalization of the selection symbols is currently inconsistent: `:DivideAndConquer` for specifying an SVD algorithm versus `:gmres` for specifying a gradient linear solver. I would suggest to follow the MatrixAlgebraKit.jl convention: `alg = :DivideAndConquer` works directly in the MatrixAlgebraKit.jl interface itself, so it makes sense to just use the same. However, this means changing some current values (e.g. `:gmres -> :GMRES`, `:lbfgs -> :LBFGS`). Very breaking, but this seems like a good time to break.

Any other comments and questions are very welcome.