Commit 5bef397

Merge branch 'main' into jh/truncerr
2 parents aec6c91 + b825314 commit 5bef397

25 files changed

Lines changed: 349 additions & 156 deletions

.github/workflows/PR_comment.yml

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
+name: Docs preview comment
+on:
+  pull_request:
+    types: [labeled]
+
+permissions:
+  pull-requests: write
+jobs:
+  pr_comment:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Create PR comment
+        if: github.event_name == 'pull_request' && github.repository == github.event.pull_request.head.repo.full_name && github.event.label.name == 'documentation'
+        uses: thollander/actions-comment-pull-request@v3
+        with:
+          message: 'After the build completes, the updated documentation will be available [here](https://quantumkithub.github.io/MatrixAlgebraKit.jl/previews/PR${{ github.event.number }}/)'
+          comment-tag: 'preview-doc'

docs/make.jl

Lines changed: 1 addition & 0 deletions
@@ -52,6 +52,7 @@ makedocs(;
             "user_interface/compositions.md",
             "user_interface/decompositions.md",
             "user_interface/truncations.md",
+            "user_interface/properties.md",
             "user_interface/matrix_functions.md",
         ],
         "Developer Interface" => "dev_interface.md",

docs/src/index.md

Lines changed: 2 additions & 1 deletion
@@ -26,6 +26,7 @@ These operations typically follow some common skeleton, and here we go into a li
 
 ```@contents
 Pages = ["user_interface/compositions.md", "user_interface/decompositions.md",
-         "user_interface/truncations.md", "user_interface/matrix_functions.md"]
+         "user_interface/truncations.md", "user_interface/properties.md",
+         "user_interface/matrix_functions.md"]
 Depth = 2
 ```

docs/src/user_interface/decompositions.md

Lines changed: 14 additions & 11 deletions
@@ -55,6 +55,7 @@ Not all matrices can be diagonalized, and some real matrices can only be diagona
 In particular, the resulting decomposition can only be guaranteed to be real for real symmetric inputs `A`.
 Therefore, we provide `eig_` and `eigh_` variants, where `eig` always results in complex-valued `V` and `D`, while `eigh` requires symmetric inputs but retains the scalartype of the input.
 
+The full set of eigenvalues and eigenvectors can be computed using the [`eig_full`](@ref) and [`eigh_full`](@ref) functions.
 If only the eigenvalues are required, the [`eig_vals`](@ref) and [`eigh_vals`](@ref) functions can be used.
 These functions return the diagonal elements of `D` in a vector.
 
@@ -99,7 +100,7 @@ Filter = t -> t isa Type && t <: MatrixAlgebraKit.LAPACK_EigAlgorithm
 
 The [Schur decomposition](https://en.wikipedia.org/wiki/Schur_decomposition) transforms a complex square matrix `A` into a product `Q * T * Qᴴ`, where `Q` is unitary and `T` is upper triangular.
 It rewrites an arbitrary complex square matrix as unitarily similar to an upper triangular matrix whose diagonal elements are the eigenvalues of `A`.
-For real matrices, the same decomposition can be achieved with `T` being quasi-upper triangular, ie triangular with blocks of size `(1, 1)` and `(2, 2)` on the diagonal.
+For real matrices, the same decomposition can be achieved in real arithmetic by allowing `T` to be quasi-upper triangular, i.e. triangular with blocks of size `(1, 1)` and `(2, 2)` on the diagonal.
 
 This decomposition is also useful for computing the eigenvalues of a matrix, which is exposed through the [`schur_vals`](@ref) function.
 
@@ -117,12 +118,12 @@ Filter = t -> t isa Type && t <: MatrixAlgebraKit.LAPACK_EigAlgorithm
 
 ## Singular Value Decomposition
 
-The [Singular Value Decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition) transforms a matrix `A` into a product `U * Σ * Vᴴ`, where `U` and `V` are orthogonal, and `Σ` is diagonal, real and non-negative.
-For a square matrix `A`, both `U` and `V` are unitary, and if the singular values are distinct, the decomposition is unique.
+The [Singular Value Decomposition](https://en.wikipedia.org/wiki/Singular_value_decomposition) transforms a matrix `A` into a product `U * Σ * Vᴴ`, where `U` and `Vᴴ` are unitary, and `Σ` is diagonal, real and non-negative.
+For a square matrix `A`, both `U` and `Vᴴ` are unitary, and if the singular values are distinct, the decomposition is unique.
 
 For rectangular matrices `A` of size `(m, n)`, there are two modes of operation, [`svd_full`](@ref) and [`svd_compact`](@ref).
-The former ensures that the resulting `U` and `V` remain square unitary matrices, of size `(m, m)` and `(n, n)`, with rectangular `Σ` of size `(m, n)`.
-The latter creates an isometric `U` of size `(m, min(m, n))`, and `V` of size `(n, min(m, n))`, with a square `Σ` of size `(min(m, n), min(m, n))`.
+The former ensures that the resulting `U` and `Vᴴ` remain square unitary matrices, of size `(m, m)` and `(n, n)`, with rectangular `Σ` of size `(m, n)`.
+The latter creates an isometric `U` of size `(m, min(m, n))`, and `V = (Vᴴ)'` of size `(n, min(m, n))`, with a square `Σ` of size `(min(m, n), min(m, n))`.
 
 It is also possible to compute the singular values only, using the [`svd_vals`](@ref) function.
 This then returns a vector of the values on the diagonal of `Σ`.
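The shape bookkeeping for the two SVD modes can be illustrated with the stdlib `svd` (a hedged analogue of the package's `svd_full` and `svd_compact`), for a rectangular matrix of size `(m, n) = (4, 3)`:

```julia
using LinearAlgebra

# Stdlib sketch of the full vs. compact (thin) SVD shapes described above.
A = randn(4, 3)

F  = svd(A; full = true)   # "full" mode: square U of size (4, 4)
Fc = svd(A)                # "compact" (thin) mode: isometric U of size (4, 3)

size(F.U)                            # (4, 4)
size(Fc.U)                           # (4, 3)
Fc.U * Diagonal(Fc.S) * Fc.Vt ≈ A    # the compact form still reconstructs A
```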
@@ -147,21 +148,23 @@ Filter = t -> t isa Type && t <: MatrixAlgebraKit.LAPACK_SVDAlgorithm
 
 The [Polar Decomposition](https://en.wikipedia.org/wiki/Polar_decomposition) of a matrix `A` is a factorization `A = W * P`, where `W` is unitary and `P` is positive semi-definite.
 If `A` is invertible (and therefore square), the polar decomposition always exists and is unique.
-For non-square matrices, the polar decomposition is not unique, but `P` is.
-In particular, the polar decomposition is unique if `A` is full rank.
+For non-square matrices `A` of size `(m, n)`, the decomposition `A = W * P` with `P` positive semi-definite of size `(n, n)` and `W` isometric of size `(m, n)` exists only if `m >= n`, and is unique if `A`, and thus `P`, is full rank.
+For `m <= n`, we can analogously decompose `A` as `A = P * Wᴴ` with `P` positive semi-definite of size `(m, m)` and `Wᴴ` of size `(m, n)` such that `W = (Wᴴ)'` is isometric. Only in the case `m = n` do both decompositions exist.
 
-This decomposition can be computed for both sides, resulting in the [`left_polar`](@ref) and [`right_polar`](@ref) functions.
+The decompositions `A = W * P` and `A = P * Wᴴ` can be computed with the [`left_polar`](@ref) and [`right_polar`](@ref) functions, respectively.
 
 ```@docs; canonical=false
 left_polar
 right_polar
 ```
 
-These functions are implemented by first computing a singular value decomposition, and then constructing the polar decomposition from the singular values and vectors.
-Therefore, the relevant LAPACK-based implementation is the one for the SVD:
+These functions can be implemented by first computing a singular value decomposition, and then constructing the polar decomposition from the singular values and vectors.
+Alternatively, the polar decomposition can be computed using an iterative method based on Newton's method, which can be more efficient for large matrices, especially if they are already close to isometric.
 
 ```@docs; canonical=false
 PolarViaSVD
+PolarNewton
 ```
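The SVD-based construction mentioned in the hunk above can be sketched with stdlib routines only. This is a minimal illustration for `m >= n` (`left_polar_via_svd` is an illustrative name, not the package API): from `A = U * Σ * Vᴴ` one obtains the isometric factor `W = U * Vᴴ` and the positive semi-definite factor `P = V * Σ * Vᴴ`.

```julia
using LinearAlgebra

# Sketch of the "polar via SVD" construction for m >= n.
function left_polar_via_svd(A::AbstractMatrix)
    F = svd(A)                       # thin SVD
    W = F.U * F.Vt                   # isometric factor, size (m, n)
    P = F.V * Diagonal(F.S) * F.Vt   # Hermitian positive semi-definite, size (n, n)
    return W, P
end

A = randn(5, 3)
W, P = left_polar_via_svd(A)
W * P ≈ A      # reconstructs A
W' * W ≈ I     # W is isometric
P ≈ P'         # P is Hermitian
```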
 
 ## Orthogonal Subspaces
@@ -179,7 +182,7 @@ right_orth
 ## Null Spaces
 
 Similarly, it can be convenient to obtain an orthogonal basis for the kernel or cokernel of a matrix.
-These are the complements of the image and coimage, and can be computed using the [`left_null`](@ref) and [`right_null`](@ref) functions.
+These are the complements of the coimage and image, respectively, and can be computed using the [`left_null`](@ref) and [`right_null`](@ref) functions.
 Again, this is typically implemented through a combination of the decompositions mentioned above, and serves as a convenient interface to these operations.
 
 ```@docs; canonical=false
Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
+```@meta
+CurrentModule = MatrixAlgebraKit
+CollapsedDocStrings = true
+```
+
+# Matrix Properties
+
+MatrixAlgebraKit.jl provides a number of methods to check various properties of matrices.
+
+```@docs; canonical=false
+isisometric
+isunitary
+ishermitian
+isantihermitian
+```
+
+Furthermore, there are also methods to project a matrix onto the nearest matrix (in 2-norm distance) with a given property.
+
+```@docs; canonical=false
+project_isometric
+project_hermitian
+project_antihermitian
+```
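The Hermitian and anti-Hermitian projections added in this file have simple closed forms, which the following stdlib sketch illustrates (assumed semantics; `project_herm` and `project_antiherm` are illustrative names, while the package's own functions are `project_hermitian` and `project_antihermitian`): the nearest Hermitian matrix to `A` is `(A + A') / 2`, and the nearest anti-Hermitian matrix is `(A - A') / 2`.

```julia
using LinearAlgebra

# Nearest Hermitian / anti-Hermitian matrices to A (assumed semantics).
project_herm(A) = (A + A') / 2
project_antiherm(A) = (A - A') / 2

A = randn(3, 3)
H = project_herm(A)
S = project_antiherm(A)

H ≈ H'       # Hermitian part
S ≈ -S'      # anti-Hermitian part
H + S ≈ A    # the two parts sum back to A
```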

docs/src/user_interface/truncations.md

Lines changed: 110 additions & 6 deletions
@@ -5,7 +5,112 @@ CollapsedDocStrings = true
 
 # Truncations
 
-Currently, truncations are supported through the following different methods:
+Truncation strategies control which eigenvalues or singular values are kept when computing partial or truncated decompositions. They are used in the functions [`eigh_trunc`](@ref), [`eig_trunc`](@ref), and [`svd_trunc`](@ref) to reduce the size of the decomposition while retaining the most important information.
+
+## Using Truncations in Decompositions
+
+Truncation strategies can be used with truncated decomposition functions in two ways, as illustrated below.
+For concreteness, we use the following matrix as an example:
+
+```jldoctest truncations
+using MatrixAlgebraKit
+using MatrixAlgebraKit: diagview
+
+A = [2 1 0; 1 3 1; 0 1 4];
+D, V = eigh_full(A);
+
+diagview(D) ≈ [3 - √3, 3, 3 + √3]
+
+# output
+
+true
+```
+
+### 1. Using the `trunc` keyword with a `NamedTuple`
+
+The simplest approach is to pass a `NamedTuple` with the truncation parameters.
+For example, keeping only the two largest eigenvalues:
+
+```jldoctest truncations
+Dtrunc, Vtrunc = eigh_trunc(A; trunc = (maxrank = 2,));
+size(Dtrunc, 1) <= 2
+
+# output
+
+true
+```
+
+Note however that there are no guarantees on the order of the output values:
+
+```jldoctest truncations
+diagview(Dtrunc) ≈ diagview(D)[[3, 2]]
+
+# output
+
+true
+```
+
+You can also use tolerance-based truncation or combine multiple criteria:
+
+```jldoctest truncations
+Dtrunc, Vtrunc = eigh_trunc(A; trunc = (atol = 2.9,));
+all(>(2.9), diagview(Dtrunc))
+
+# output
+
+true
+```
+
+```jldoctest truncations
+Dtrunc, Vtrunc = eigh_trunc(A; trunc = (maxrank = 2, atol = 2.9));
+size(Dtrunc, 1) <= 2 && all(>(2.9), diagview(Dtrunc))
+
+# output
+
+true
+```
+
+In general, the supported keyword arguments can be found in the `TruncationStrategy` docstring:
+
+```@docs; canonical = false
+TruncationStrategy
+```
+
+### 2. Using explicit `TruncationStrategy` objects
+
+For more control, you can construct [`TruncationStrategy`](@ref) objects directly.
+This is also what the previous syntax ends up calling.
+
+```jldoctest truncations
+Dtrunc, Vtrunc = eigh_trunc(A; trunc = truncrank(2));
+size(Dtrunc, 1) <= 2
+
+# output
+
+true
+```
+
+```jldoctest truncations
+Dtrunc, Vtrunc = eigh_trunc(A; trunc = truncrank(2) & trunctol(; atol = 2.9));
+size(Dtrunc, 1) <= 2 && all(>(2.9), diagview(Dtrunc))
+
+# output
+
+true
+```
+
+## Truncation with SVD vs Eigenvalue Decompositions
+
+When using truncations with different decomposition types, keep in mind:
+
+- **`svd_trunc`**: Singular values are always real and non-negative, and sorted in descending order. Truncation by value typically keeps the largest singular values.
+
+- **`eigh_trunc`**: Eigenvalues of symmetric (Hermitian) matrices are real but can be negative. By default, `truncrank` sorts by absolute value, so `truncrank(k)` keeps the `k` eigenvalues with largest magnitude (positive or negative).
+
+- **`eig_trunc`**: For general (non-symmetric) matrices, eigenvalues can be complex. Truncation by absolute value considers the complex magnitude.
+
+## Truncation Strategies
+
+MatrixAlgebraKit provides several built-in truncation strategies:
 
 ```@docs; canonical=false
 notrunc
@@ -15,11 +120,10 @@ truncfilter
 truncerror
 ```
 
-It is additionally possible to combine truncation strategies by making use of the `&` operator.
-For example, truncating to a maximal dimension `10`, and discarding all values below `1e-6` would be achieved by:
+Truncation strategies can be combined using the `&` operator to create intersection-based truncation.
+When strategies are combined, only the values that satisfy all conditions are kept.
 
 ```julia
-maxdim = 10
-atol = 1e-6
-combined_trunc = truncrank(maxdim) & trunctol(; atol)
+combined_trunc = truncrank(10) & trunctol(; atol = 1e-6);
 ```
 
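The rank-based truncation described in this file can be sketched with only the stdlib `svd` (`svd_truncrank` is an illustrative name, not the package API): keep the `k` largest singular values and the corresponding singular vectors.

```julia
using LinearAlgebra

# Stdlib sketch of rank-based SVD truncation.
function svd_truncrank(A::AbstractMatrix, k::Integer)
    F = svd(A)                       # singular values come sorted descending
    r = min(k, length(F.S))
    return F.U[:, 1:r], F.S[1:r], F.Vt[1:r, :]
end

A = [3.0 0 0; 0 2.0 0; 0 0 1.0e-8]
U, S, Vt = svd_truncrank(A, 2)
length(S) == 2                              # rank capped at 2
norm(A - U * Diagonal(S) * Vt) <= 1.0e-7    # error is the dropped singular value
```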
ext/MatrixAlgebraKitChainRulesCoreExt.jl

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@ MatrixAlgebraKit.iszerotangent(::AbstractZero) = true
 @non_differentiable MatrixAlgebraKit.select_algorithm(args...)
 @non_differentiable MatrixAlgebraKit.initialize_output(args...)
 @non_differentiable MatrixAlgebraKit.check_input(args...)
-@non_differentiable MatrixAlgebraKit.isisometry(args...)
+@non_differentiable MatrixAlgebraKit.isisometric(args...)
 @non_differentiable MatrixAlgebraKit.isunitary(args...)
 
 function ChainRulesCore.rrule(::typeof(copy_input), f, A)

src/MatrixAlgebraKit.jl

Lines changed: 2 additions & 1 deletion
@@ -9,7 +9,7 @@ using LinearAlgebra: Diagonal, diag, diagind, isdiag
 using LinearAlgebra: UpperTriangular, LowerTriangular
 using LinearAlgebra: BlasFloat, BlasReal, BlasComplex, BlasInt
 
-export isisometry, isunitary, ishermitian, isantihermitian
+export isisometric, isunitary, ishermitian, isantihermitian
 
 export project_hermitian, project_antihermitian, project_isometric
 export project_hermitian!, project_antihermitian!, project_isometric!
@@ -65,6 +65,7 @@ export notrunc, truncrank, trunctol, truncerror, truncfilter
         :svd_pullback!, :svd_trunc_pullback!
     )
 )
+eval(Expr(:public, :is_left_isometric, :is_right_isometric))
 end
 
 include("common/defaults.jl")

src/common/matrixproperties.jl

Lines changed: 20 additions & 25 deletions
@@ -1,5 +1,5 @@
 """
-    isisometry(A; side=:left, isapprox_kwargs...) -> Bool
+    isisometric(A; side=:left, isapprox_kwargs...) -> Bool
 
 Test whether a linear map is an isometry, where the type of isometry is controlled by `side`:
 
@@ -8,13 +8,14 @@ Test whether a linear map is an isometry, where the type of isometry is controll
 
 The `isapprox_kwargs` are passed on to `isapprox` to control the tolerances.
 
-New specializations should overload [`is_left_isometry`](@ref) and [`is_right_isometry`](@ref).
+New specializations should overload [`MatrixAlgebraKit.is_left_isometric`](@ref) and
+[`MatrixAlgebraKit.is_right_isometric`](@ref).
 
 See also [`isunitary`](@ref).
 """
-function isisometry(A; side::Symbol = :left, isapprox_kwargs...)
-    side === :left && return is_left_isometry(A; isapprox_kwargs...)
-    side === :right && return is_right_isometry(A; isapprox_kwargs...)
+function isisometric(A; side::Symbol = :left, isapprox_kwargs...)
+    side === :left && return is_left_isometric(A; isapprox_kwargs...)
+    side === :right && return is_right_isometric(A; isapprox_kwargs...)
 
     throw(ArgumentError(lazy"Invalid isometry side: $side"))
 end
@@ -25,48 +26,42 @@ end
 
 Test whether a linear map is unitary, i.e. `A * A' ≈ I ≈ A' * A`.
 The `isapprox_kwargs` are passed on to `isapprox` to control the tolerances.
 
-See also [`isisometry`](@ref).
+See also [`isisometric`](@ref).
 """
 function isunitary(A; isapprox_kwargs...)
-    return is_left_isometry(A; isapprox_kwargs...) &&
-        is_right_isometry(A; isapprox_kwargs...)
+    return is_left_isometric(A; isapprox_kwargs...) &&
+        is_right_isometric(A; isapprox_kwargs...)
 end
 function isunitary(A::AbstractMatrix; isapprox_kwargs...)
     size(A, 1) == size(A, 2) || return false
-    return is_left_isometry(A; isapprox_kwargs...)
+    return is_left_isometric(A; isapprox_kwargs...)
 end
 
 @doc """
-    is_left_isometry(A; isapprox_kwargs...) -> Bool
+    is_left_isometric(A; isapprox_kwargs...) -> Bool
 
-Test whether a linear map is a left isometry, i.e. `A' * A ≈ I`.
+Test whether a linear map is a (left) isometry, i.e. `A' * A ≈ I`.
 The `isapprox_kwargs` can be used to control the tolerances of the equality.
 
-See also [`isisometry`](@ref) and [`is_right_isometry`](@ref).
-""" is_left_isometry
+See also [`isisometric`](@ref) and [`MatrixAlgebraKit.is_right_isometric`](@ref).
+""" is_left_isometric
 
-function is_left_isometry(A::AbstractMatrix; atol::Real = 0, rtol::Real = defaulttol(A), norm = LinearAlgebra.norm)
+function is_left_isometric(A::AbstractMatrix; atol::Real = 0, rtol::Real = defaulttol(A), norm = LinearAlgebra.norm)
     P = A' * A
     nP = norm(P) # isapprox would use `rtol * max(norm(P), norm(I))`
     diagview(P) .-= 1
     return norm(P) <= max(atol, rtol * nP) # assume that the norm of I is `sqrt(n)`
 end
 
 @doc """
-    is_right_isometry(A; isapprox_kwargs...) -> Bool
+    is_right_isometric(A; isapprox_kwargs...) -> Bool
 
-Test whether a linear map is a right isometry, i.e. `A * A' ≈ I`.
+Test whether a linear map is a (right) isometry, i.e. `A * A' ≈ I`.
 The `isapprox_kwargs` can be used to control the tolerances of the equality.
 
-See also [`isisometry`](@ref) and [`is_left_isometry`](@ref).
-""" is_right_isometry
-
-function is_right_isometry(A::AbstractMatrix; atol::Real = 0, rtol::Real = defaulttol(A), norm = LinearAlgebra.norm)
-    P = A * A'
-    nP = norm(P) # isapprox would use `rtol * max(norm(P), norm(I))`
-    diagview(P) .-= 1
-    return norm(P) <= max(atol, rtol * nP) # assume that the norm of I is `sqrt(n)`
-end
+See also [`isisometric`](@ref) and [`MatrixAlgebraKit.is_left_isometric`](@ref).
+""" is_right_isometric
+is_right_isometric(A; kwargs...) = is_left_isometric(A'; kwargs...)
 
 """
     ishermitian(A; isapprox_kwargs...)
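The left-isometry test implemented above can be reproduced with stdlib routines only, which makes the tolerance logic easy to see (`approx_left_isometric` is an illustrative name, not the package API): a map is a left isometry when `A' * A ≈ I`.

```julia
using LinearAlgebra

# Stdlib sketch of the left-isometry test: compare A' * A against I,
# using norm(A' * A) as the isapprox-style reference scale.
function approx_left_isometric(A; atol = 0.0, rtol = sqrt(eps(Float64)))
    P = A' * A
    nP = norm(P)
    P[diagind(P)] .-= 1          # subtract the identity in place
    return norm(P) <= max(atol, rtol * nP)
end

Q = Matrix(qr(randn(5, 3)).Q)    # thin 5×3 isometric factor from a QR decomposition
approx_left_isometric(Q)         # true
approx_left_isometric(2Q)        # false: columns no longer have unit norm
```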
