[NDTensors] [ITensors] Implement optional rank reduction for QR/RQ/QL/LQ decompositions #1099

Open

wants to merge 118 commits into base: main

Changes from 99 commits

Commits (118)
f10e2d3
Refactor: Breakout index fixing in qr() for empty left/right indices
JanReimers Nov 13, 2022
cd88e80
qr decomp, switch to using combiners for gathering indices.
JanReimers Nov 14, 2022
9de2ff6
Handle empty index collections on Q or R.
JanReimers Nov 16, 2022
dfb66e0
Fix known block sparse failing use case
JanReimers Nov 16, 2022
9752128
Add attribution for Niklas
JanReimers Nov 16, 2022
253b5ec
Add qr test on autoMPO generated block sparse tensors
JanReimers Nov 16, 2022
c43814f
Move block sparse QR into NDTensors layer where it belongs.
JanReimers Nov 16, 2022
24c2ff7
Run the formatter
JanReimers Nov 16, 2022
78c556f
Fix qr overload disambiguation problem in julia 1.6
JanReimers Nov 17, 2022
401eef3
Add flux checks for block sparse qr tests
JanReimers Nov 17, 2022
c6486eb
Implement RQ decomposition.
JanReimers Nov 18, 2022
6e84eb6
Add rq decomp for block sparse matrices
JanReimers Nov 18, 2022
3424827
Add test for positive=true and rq decomp
JanReimers Nov 18, 2022
ecd8931
Implement QL/LQ decompositions
JanReimers Nov 18, 2022
b75f40d
Add tests for ComplexF64 and upper /lower checks for R/L
JanReimers Nov 19, 2022
93c391f
Fix flakey fail of NDTensors unit tests
JanReimers Nov 19, 2022
fd8546c
Endless struggle to get julia to see symbols I *already* put in expor…
JanReimers Nov 19, 2022
3b4962e
Removed unused version qr()
JanReimers Nov 19, 2022
3464d7f
Implement rank reducing QR/RQ/QL/LQ
JanReimers Nov 20, 2022
2a2391e
Pass through kwargs for LQ/QL functions
JanReimers Nov 20, 2022
b9dbda7
Run the formatter
JanReimers Nov 20, 2022
0f8c092
Try and fix lq symbol clash exposed in julia 1.8
JanReimers Nov 21, 2022
53fc8a0
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Nov 21, 2022
d0989cc
Merge from RQQLLQ branch
JanReimers Nov 21, 2022
4b6ed1a
Merge branch 'ITensor:main' into main
JanReimers Nov 26, 2022
d14fb6d
Merge branch 'main' into QXRankReduction
JanReimers Nov 27, 2022
990f8ca
Run formatter on Rank Reveal code
JanReimers Nov 29, 2022
67d3594
Augment unit tests to check all index directions are QR decomp
JanReimers Nov 29, 2022
ba0bfdd
Fix QN direction of qx link for rq decomp.
JanReimers Nov 30, 2022
79adbd0
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Nov 30, 2022
46b7571
Fix flux tests to reflect new QN dir fix for rq decomp
JanReimers Nov 30, 2022
ac1e53c
Merge branch 'ITensor:main' into main
JanReimers Nov 30, 2022
3b49821
Merge branch 'main' into QXRankReduction
JanReimers Nov 30, 2022
ce7512c
Fix flux tests to reflect new QN dir fix for rq decomp
JanReimers Dec 2, 2022
ad726dc
Remove direction tests
JanReimers Dec 2, 2022
35690ec
Merge branch 'main' into RQQLLQ
mtfishman Dec 7, 2022
82cef17
Merge branch 'ITensor:main' into main
JanReimers Dec 7, 2022
d05220d
Improvements based on Matt's code review
JanReimers Dec 7, 2022
06bf618
Merge branch 'main' into QXRankReduction
JanReimers Dec 7, 2022
48c5164
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Dec 8, 2022
ce72646
Merge branch 'ITensor:main' into main
JanReimers Jan 11, 2023
5d7efd3
Merge branch 'main' into QXRankReduction
JanReimers Jan 11, 2023
7059446
Use map for testing zero rows.
JanReimers Jan 14, 2023
405014e
Change from using epsrr to rr_cutoff for zero row threshold.
JanReimers Jan 14, 2023
cf7ef37
Merge remote-tracking branch 'origin/main' into BlocksparseQR
JanReimers Mar 2, 2023
2fe0fa3
Merge branch 'BlocksparseQR' into RQQLLQ
JanReimers Mar 2, 2023
e937732
Handle changes to similar() function interface.
JanReimers Mar 2, 2023
01d86f4
Add QR/RQ code to ensure all flux is moved onto R
JanReimers Mar 2, 2023
3ac4267
Implement all but one of Matts code review recommendations
JanReimers Mar 2, 2023
0d0d120
Run the formatter
JanReimers Mar 2, 2023
2a93b6e
Remove NDTensors. qualifiers
JanReimers Mar 2, 2023
1e8830d
Put keyword arguments directly in signatures
JanReimers Mar 2, 2023
84a6f20
Fix flux requirements at the NDTensors level.
JanReimers Mar 3, 2023
cf722ef
Format
JanReimers Mar 4, 2023
7e39a55
Fix double swap of Q & L in the ql() function
JanReimers Mar 4, 2023
eb955be
Use generic function for most of the qr/ql operations.
JanReimers Mar 6, 2023
df6c07f
Implement core qx functions accepting both Linds and Rinds
JanReimers Mar 7, 2023
9223523
Format
JanReimers Mar 7, 2023
462ffbf
Strip out extra indices in Linds, not in A.
JanReimers Mar 7, 2023
9f80aa2
Move Heisenberg and Hubbards MPO QR tests over legacy area.
JanReimers Mar 7, 2023
f931cbe
Format
JanReimers Mar 7, 2023
a31fb5c
Strip out extra indices in Linds, not in A.
JanReimers Mar 7, 2023
492443c
Move Heisenberg and Hubbards MPO QR tests over legacy area.
JanReimers Mar 7, 2023
35e560f
Format
JanReimers Mar 7, 2023
9128de9
NDTensors unit, switch rq to ql decomp.
JanReimers Mar 8, 2023
82c18f1
Merge remote-tracking branch 'origin/RQQLLQ' into RQQLLQ
JanReimers Mar 8, 2023
9e86676
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Mar 8, 2023
cb1f0d1
Finish merge and get all decomp unit test working.
JanReimers Mar 8, 2023
32972cc
Use more likely to pass a String rather than a tagset
JanReimers Mar 8, 2023
aa29914
Format
JanReimers Mar 8, 2023
ca39f9c
Merge remote-tracking branch 'origin/main' into RQQLLQ
JanReimers Mar 8, 2023
d4709c4
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Mar 8, 2023
858e070
Fix bug in ql_positive routine
JanReimers Mar 8, 2023
885dfe9
Remove some debug code.
JanReimers Mar 9, 2023
5ee47bf
Fix: UndefVarError: tensor not defined
JanReimers Mar 9, 2023
420241d
Qualify tensor()
JanReimers Mar 9, 2023
8596e72
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Mar 9, 2023
5e72caa
Stop using Printf at the NDTensors level.
JanReimers Mar 10, 2023
2b10908
Enhance unit tests for qr/ql decomp
JanReimers Mar 11, 2023
3b5072d
Format
JanReimers Mar 12, 2023
59542df
Don't assume lapack qr/ql returns reals on the R/L diagonals
JanReimers Mar 12, 2023
042578b
Avoid type piracy in unit test code
JanReimers Mar 15, 2023
e6ff0e6
Remove unnecessary usage of where {ElT}
JanReimers Mar 15, 2023
af73844
Pass tags as a keyword argument. And format.
JanReimers Mar 15, 2023
e191c1d
Merge remote-tracking branch 'origin/main' into RQQLLQ
JanReimers Mar 15, 2023
6d3f6b3
Use new randomTensor(ElT,tuple) interface
JanReimers Mar 15, 2023
4a16f3d
Eliminate where{IndsT} for dense qr/ql
JanReimers Mar 15, 2023
11d43a4
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers Mar 15, 2023
49df0be
Merge remote-tracking branch 'origin/RQQLLQ' into QXRankReduction
JanReimers Mar 16, 2023
e53c85b
Handle zero pivots gracefully with the new sign(diag) code.
JanReimers Mar 16, 2023
04cccd7
Remove where {T} for low level ql routine.
JanReimers Mar 16, 2023
b22f8a3
Format
JanReimers Mar 16, 2023
43d914f
Merge branch 'RQQLLQ' into QXRankReduction
JanReimers Mar 16, 2023
3a8159c
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers Mar 17, 2023
0492071
Clean up and comment code.
JanReimers Mar 20, 2023
62ed9cc
Format and add rr-verbose flag for optional rank reduction output.
JanReimers Mar 20, 2023
0ec04b7
Remove the old test/decomp/jl file
JanReimers Mar 21, 2023
1961520
Set Random seed to keep tests deterministic.
JanReimers Mar 21, 2023
491b830
Can't use Random on the NDTensors CI machine
JanReimers Mar 21, 2023
9aff488
Merge branch 'main' into QXRankReduction
mtfishman Apr 3, 2023
f8976f9
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers Apr 13, 2023
fdce8fe
Fix names cutoff and verbose
JanReimers Apr 13, 2023
543d355
Unify positive gauge fix for qr/ql
JanReimers Apr 13, 2023
4d22400
Implement column pivoting with row removal
JanReimers Apr 13, 2023
1540aac
Format
JanReimers Apr 13, 2023
0cdc115
Fix some unit test fails.
JanReimers Apr 13, 2023
9da7b54
For column Pivot QR, return column permutation arrays.
JanReimers Apr 20, 2023
a954c54
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers Apr 20, 2023
be0074d
Support atol/rtol interface for column pivot qr
JanReimers Apr 24, 2023
a86f4cf
Support the new NoPivot, ColumnNorm types
JanReimers Apr 24, 2023
cde6dad
Support Matts proposed interface
JanReimers Apr 25, 2023
c790493
Support ql decomp using the MatrixFactorizations package
JanReimers Apr 25, 2023
9fa3dd3
Try to avoid warnings in julia 1.8.5
JanReimers Apr 26, 2023
afe67fd
Format
JanReimers Apr 26, 2023
99abd4d
Try and avoid breaking ITensorGPU
JanReimers Apr 26, 2023
6091157
Try and avoid breaking ITensorGPU
JanReimers Apr 26, 2023
e33dd47
Merge remote-tracking branch 'origin/main' into QXRankReduction
JanReimers May 22, 2023
6b747dd
Switch from returning permutations to returning Rp.
JanReimers May 24, 2023
77 changes: 67 additions & 10 deletions NDTensors/src/linearalgebra/linearalgebra.jl
@@ -366,11 +366,58 @@ function LinearAlgebra.eigen(
V = complex(tensor(Dense(vec(VM)), Vinds))
return D, V, spec
end
#
# Trim out zero rows of R/X within tolerance rr_cutoff. Also trim the corresponding columns
# of Q. X = R or L
#
function trim_rows(
Q::AbstractMatrix, X::AbstractMatrix, rr_cutoff::Float64; rr_verbose=false, kwargs...
Member commented:

I would just name these keyword arguments cutoff and verbose (i.e. drop the rr_).

Also, cutoff has a particular meaning in the context of SVD, where it is the sum of squares of the singular values.

I see here it is used as:

zeros = map((r) -> (maximum(abs.(X[r, 1:Xnc])) <= rr_cutoff), 1:Xnr)

so it is checking whether the maximum absolute value in the row is at or below the cutoff value/tolerance. Is that a standard approach for reducing the rank?

Maybe we can look at https://github.com/JuliaLinearAlgebra/LowRankApprox.jl and see what conventions they use for performing the rank reduction and what they call the keyword arguments (I see rtol and atol being used, which are used elsewhere in Julia, so they may be good choices).

@JanReimers (Contributor, Author) commented on Mar 25, 2023:

Issue 1) Do we need a distinct name for the rank-revealing cutoff? Right now I have the code set up so that the user can just call truncate!(InfiniteMPO) without first calling orthogonalize!(InfiniteMPO). The truncate function checks if the iMPO is already orthogonalized and, if not, calls orthogonalize before truncating with svd. So with this setup we need distinct control of the rank-revealing QR cutoff and the svd cutoff. rtol would work for this.
Issue 2) How to decide if a row is effectively zero? Here is what Daniel Parker says:
[screenshot of the relevant passage omitted]
I looked in his thesis and there is no more info on this topic.
I agree that testing rr_cutoff > sqrt(sum(X[r, 1:Xnc] .^ 2)) is closer to what svd truncation does.

Issue #3: Hornet's nest: conventional rank-reducing QR does not do what Daniel did. Conventional methods do column-pivoting QR, which is akin to saying "I am not going to solve A=QR for you, instead I am going to solve AP=QR, where P is a column permutation matrix." This puts all near-zero rows of R into the lower right corner. It actually orders the diagonals of R similarly to singular-value ordering. Maybe Parker purposely avoided this because he knew it would jumble up the MPOs and destroy his regular form. Maybe doing column-pivoting QR and writing A = Q(P^T P)R P^T = QRP^T (P^T = transpose(P)) would get us back to where we are now, but I haven't tried it. We could insist the library needs to support conventional rank-reducing QR with column pivoting, not Daniel's version, but I think users want to solve A=QR, not AP=QR; in other words they don't want to be told "Sorry, I am not solving your problem, I will only solve my choice of problem." Maybe there is an easy solution and I am just not seeing it.

Perhaps I should email Daniel and get his opinion on max(abs(row)) vs. rms(row), and whether column-pivoting QR was tested and rejected for some reason.
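For concreteness, here is a minimal sketch (not part of the PR; the helper names are illustrative only) contrasting the max-abs row test currently used in trim_rows with the row-norm test discussed above:

```julia
using LinearAlgebra

# Row r of X is "effectively zero" if its largest absolute entry is at or below the cutoff
# (this is what trim_rows does via `maximum(abs.(X[r, 1:Xnc])) <= rr_cutoff`).
is_zero_row_maxabs(X, r, cutoff) = maximum(abs.(X[r, :])) <= cutoff

# Alternative closer to SVD-style truncation: compare the row's 2-norm,
# sqrt(sum(X[r, :] .^ 2)), to the cutoff.
is_zero_row_norm(X, r, cutoff) = norm(X[r, :]) <= cutoff

X = [1.0 2.0; 1e-14 -3e-14; 0.0 0.0]
cutoff = 1e-12
[is_zero_row_maxabs(X, r, cutoff) for r in 1:size(X, 1)]  # [false, true, true]
[is_zero_row_norm(X, r, cutoff) for r in 1:size(X, 1)]    # [false, true, true]
```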

@mtfishman (Member) commented on Mar 28, 2023:

Within the qx functions, there is no fundamental reason why it needs a distinct name from cutoff. If we choose cutoff, we can choose different names in the higher-level functions like truncate!(::MPO) to distinguish between the qr truncation tolerance/cutoff and the svd truncation tolerance/cutoff, for example with a code pattern like:

function truncate!(::MPO; qr_cutoff, svd_cutoff)
  svd(T1; cutoff=svd_cutoff)
  qr(T2; cutoff=qr_cutoff)
end

However, I am just considering different options since the qx tolerance/cutoff seems to have a different meaning from the svd tolerance/cutoff, and it may be nice to keep consistency with packages like LowRankApprox.jl (or use that package directly for rank reducing QR).

Pivoted QR is a pretty standard technique and I think it is reasonable to support that. For example, Julia has a pivoted QR:

julia> using LinearAlgebra

julia> A = randn(4, 4);

julia> Q, R, p = qr(A, ColumnNorm())
QRPivoted{Float64, Matrix{Float64}, Vector{Float64}, Vector{Int64}}
Q factor:
4×4 LinearAlgebra.QRPackedQ{Float64, Matrix{Float64}, Vector{Float64}}:
 -0.312837  -0.842554   -0.43312   -0.0681411
  0.856948  -0.445627    0.228859   0.121162
  0.341897   0.29066    -0.853861   0.263714
 -0.225565  -0.0838845   0.175932   0.954532
R factor:
4×4 Matrix{Float64}:
 3.20644  -0.309247  -1.45869    1.8667
 0.0       2.13545    0.101833  -0.14863
 0.0       0.0        1.81113   -0.0989319
 0.0       0.0        0.0       -0.0716154
permutation:
4-element Vector{Int64}:
 1
 3
 4
 2

julia> norm(Q * R[:, invperm(p)] - A)
5.902547983198874e-16

julia> R[:, invperm(p)]
4×4 Matrix{Float64}:
 3.20644   1.8667     -0.309247  -1.45869
 0.0      -0.14863     2.13545    0.101833
 0.0      -0.0989319   0.0        1.81113
 0.0      -0.0716154   0.0        0.0

which would give better rank reduction than non-pivoted QR:

julia> Q2, R2 = qr(A)
LinearAlgebra.QRCompactWY{Float64, Matrix{Float64}, Matrix{Float64}}
Q factor:
4×4 LinearAlgebra.QRCompactWYQ{Float64, Matrix{Float64}, Matrix{Float64}}:
 -0.312837  -0.899079   0.232979   0.198774
  0.856948  -0.181496   0.48104   -0.0360516
  0.341897  -0.116376  -0.599446   0.714302
 -0.225565   0.381016   0.595807   0.670046
R factor:
4×4 Matrix{Float64}:
 3.20644   1.8667    -0.309247  -1.45869
 0.0      -0.192372   1.64988    1.01009
 0.0       0.0       -1.35575    1.06884
 0.0       0.0        0.0       -1.062

I think it would be better to use pivoting, if possible (i.e. it would be better in general, so we should have it anyway), since it is the standard way to do rank reducing QR. We can just document that if you use pivoting and/or rank reduction then R is no longer guaranteed to be upper triangular (at the ITensor level, it isn't anyway because of the combiner transformation).
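As an illustration of that point (a sketch, not part of this PR; the function name and tolerance handling are my own), rank truncation on top of Julia's pivoted QR could look like:

```julia
using LinearAlgebra

# Truncate a column-pivoted QR at tolerance `tol`. Pivoting orders the diagonal of R by
# decreasing magnitude, so the numerical rank is the number of diagonal entries above tol.
function truncated_pivoted_qr(A::AbstractMatrix, tol::Real)
  F = qr(A, ColumnNorm())
  k = count(>=(tol), abs.(diag(F.R)))   # numerical rank estimate
  Q = Matrix(F.Q)[:, 1:k]               # keep the first k columns of Q
  R = F.R[1:k, invperm(F.p)]            # undo the column permutation
  return Q, R                           # A ≈ Q * R; R is no longer upper triangular
end

A = randn(6, 4) * randn(4, 6)           # rank-4 6×6 matrix
Q, R = truncated_pivoted_qr(A, 1e-12)
size(Q, 2)                              # 4
norm(Q * R - A)                         # ≈ 0 up to roundoff
```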

I think fundamentally the question is, does the Parker et al. algorithm actually rely on the upper/lower triangular form, or can it be framed more generally that you can do any orthogonal decomposition that does some rank reduction and preserves the proper block structure of the MPO? I have a hard time believing that such a detail as the ordering of the states within the block should matter, since it just corresponds to a simple gauge transformation by a permutation matrix on the links of the MPO. Once you gauge to a center bond/site, the left and right orthogonality centers are combined in order to perform the SVD, which would destroy the upper/lower triangular structure you were trying to preserve anyway.

I believe @LHerviou said he found he could reformulate the algorithm in terms of any orthogonal decomposition of the MPO block, which makes intuitive sense to me based on other tensor network gauging algorithms I know of.

@LHerviou commented:

My statement was just:
The QR/QL does not seem important to me. What seems to matter is:

  • Q -> the left part is a unitary, similar to what happens in left-normalized MPS (assuming we are going for left-canonical for now)
  • We preserve the block structure of the canonical form of the InfiniteMPO,

        1  b  c
           A  d
              1

    using the convention of upper-triangular iMPOs in their paper.
    Depending on a left or right sweep, this means orthogonalizing b or d with respect to 1, and then doing a QR on (b, A) or (A, d).

@JanReimers (Contributor, Author) commented:

  1. rr_cutoff -> cutoff
  2. rr_verbose -> verbose
  3. Switch from row removal to column pivoting
    Done

)
#
# Find and count the zero rows. Bail out if there are none.
#
Xnr, Xnc = size(X)
Qnr, Qnc = size(Q)
@assert Xnr == Qnc #Sanity check.
zeros = map((r) -> (maximum(abs.(X[r, 1:Xnc])) <= rr_cutoff), 1:Xnr)
num_zero_rows = sum(zeros)
if num_zero_rows == 0
return Q, X
end
#
# Useful output for troubleshooting.
#
if rr_verbose
println(
"Rank Reveal removing $num_zero_rows rows with log10(rr_cutoff)=$(log10(rr_cutoff))"
)
end
#
# Create new Q & X matrices with reduced size.
#
X1nr = Xnr - num_zero_rows #new dim between Q & X
T = eltype(X)
X1 = Matrix{T}(undef, X1nr, Xnc)
Q1 = Matrix{T}(undef, Qnr, X1nr)
#
# Transfer non-zero rows of X and corresponding columns of Q.
#
r1 = 1 #Row/col counter in new reduced Q & X
for r in 1:Xnr
if zeros[r] == false
X1[r1, :] = X[r, :] #transfer row
Q1[:, r1] = Q[:, r] #transfer column
r1 += 1 #next row in rank reduced matrices.
end #if zero
end #for r
return Q1, X1
end

function qr(T::DenseTensor{<:Any,2}; positive=false, kwargs...)
qxf = positive ? qr_positive : qr
return qx(qxf, T; kwargs...)
end

function ql(T::DenseTensor{<:Any,2}; positive=false, kwargs...)
qxf = positive ? ql_positive : ql
return qx(qxf, T; kwargs...)
@@ -380,17 +427,27 @@ end
# Generic function for qr and ql decomposition of dense matrix.
# The X tensor = R or L.
#
function qx(qx::Function, T::DenseTensor{<:Any,2}; kwargs...)
QM, XM = qx(matrix(T))
# Be aware that if positive==false, then typeof(QM)=LinearAlgebra.QRCompactWYQ, not Matrix
# It gets converted to matrix below.
# Make the new indices to go onto Q and R
q, r = inds(T)
q = dim(q) < dim(r) ? sim(q) : sim(r)
IndsT = indstype(T) #get the index type
function qx(qx::Function, T::DenseTensor{<:Any,2}; rr_cutoff=-1.0, kwargs...)
QM1, XM = qx(matrix(T))
# When qx=qr typeof(QM1)==LinearAlgebra.QRCompactWYQ
# When qx=ql typeof(QM1)==Matrix and this should be a no-op
QM = Matrix(QM1)
#
# Do row removal for rank-revealing QR/QL. Probably not worth it to eliminate the if statement.
#
if rr_cutoff >= 0.0
QM, XM = trim_rows(QM, XM, rr_cutoff; kwargs...)
end
#
# Make the new indices to go onto Q and X
#
IndsT = indstype(T) #get the indices type
@assert IndsT.parameters[1] == IndsT.parameters[2] #they better be the same!
IndexT = IndsT.parameters[1] #establish the single index type.
q = IndexT(size(XM)[1]) #create the Q--X link index.
Qinds = IndsT((ind(T, 1), q))
Xinds = IndsT((q, ind(T, 2)))
Q = tensor(Dense(vec(Matrix(QM))), Qinds) #Q was strided
Q = tensor(Dense(vec(QM)), Qinds)
X = tensor(Dense(vec(XM)), Xinds)
return Q, X
end
Expand Down Expand Up @@ -449,7 +506,7 @@ function ql_positive(M::AbstractMatrix)
end

#
# Lapack replaces A with Q & R carefully packed together. So here we just copy a
# Lapack replaces A with Q & L carefully packed together. So here we just copy A
# before letting lapack overwrite it.
#
function ql(A::AbstractMatrix; kwargs...)
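Pulling the pieces of this file together, here is a minimal usage sketch of the new keyword interface (assumptions: this branch is checked out and `qr` resolves to the DenseTensor method added above; the duplicated-row setup mirrors the tests in the next file):

```julia
using NDTensors, LinearAlgebra

A = randomTensor(Float64, (4, 8))
A[2, :] = A[1, :]   # duplicate a row so one row of R is numerically zero
# rr_cutoff < 0 (the default) disables rank reduction; rr_cutoff >= 0 trims rows of R whose
# largest absolute entry is at or below the cutoff, plus the matching columns of Q.
Q, R = qr(A; positive=true, rr_cutoff=1e-12, rr_verbose=false)
dim(Q, 2)                                # expect 3 rather than 4 after rank reduction
norm(array(A) - array(Q) * array(R))     # ≈ 0 up to roundoff
```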
47 changes: 33 additions & 14 deletions NDTensors/test/linearalgebra.jl
@@ -2,6 +2,10 @@ using NDTensors
using LinearAlgebra
using Test

# Not available on the CI machine that tests NDTensors.
# using Random
# Random.seed!(314159)

@testset "random_orthog" begin
n, m = 10, 4
O1 = random_orthog(n, m)
@@ -20,58 +24,73 @@ end
@test norm(U2 * U2' - Diagonal(fill(1.0, m))) < 1E-14
end

@testset "Dense $qx decomposition, elt=$elt, positve=$positive, singular=$singular" for qx in
[
@testset "Dense $qx decomposition, elt=$elt, positve=$positive, singular=$singular, rank_reveal=$rank_reveal" for qx in
[
qr, ql
],
elt in [Float64, ComplexF64, Float32, ComplexF32],
positive in [false, true],
singular in [false, true]
singular in [false, true],
rank_reveal in [false, true],

eps = Base.eps(real(elt)) * 30 #this is set rather tight, so if you increase/change m,n you may have open up the tolerance on eps.
eps in Base.eps(real(elt)) * 30
# this is set rather tight, so if you increase/change m, n you may have to open up the tolerance on eps.
rr_cutoff = rank_reveal ? eps * 1.0 : -1.0
n, m = 4, 8
Id = Diagonal(fill(1.0, min(n, m)))
#
# Wide matrix (more columns than rows)
#
A = randomTensor(elt, (n, m))
# We want to test 0.0 on the diagonal. We need make all roaw equal to gaurantee this with numerical roundoff.
# We want to test 0.0 on the diagonal. We need to make all rows linearly dependent to
# guarantee this with numerical roundoff.
if singular
for i in 2:n
A[i, :] = A[1, :]
for i in 2:3
A[i, :] = A[1, :] * 1.05^n
end
end
Q, X = qx(A; positive=positive) #X is R or L.
# you can set rr_verbose=true if you want to get debug output on rank reduction.
Q, X = qx(A; positive=positive, rr_cutoff=rr_cutoff, rr_verbose=false) #X is R or L.
@test A ≈ Q * X atol = eps
@test array(Q)' * array(Q) ≈ Id atol = eps
@test array(Q) * array(Q)' ≈ Id atol = eps
@test array(Q)' * array(Q) ≈ Diagonal(fill(1.0, dim(Q, 2))) atol = eps
if dim(Q, 1) == dim(Q, 2)
@test array(Q) * array(Q)' ≈ Diagonal(fill(1.0, min(n, m))) atol = eps
end
if positive
nr, nc = size(X)
dr = qx == ql ? Base.max(0, nc - nr) : 0
diagX = diag(X[:, (1 + dr):end]) #location of diag(L) is shifted dr columns over the right.
@test all(real(diagX) .>= 0.0)
@test all(imag(diagX) .== 0.0)
end
if rr_cutoff > 0 && singular
@test dim(Q, 2) == 2 #make sure the rank revealing mechanism hacked off the columns of Q (and rows of X).
@test dim(X, 1) == 2 #Redundant?
end
#
# Tall matrix (more rows than cols)
#
A = randomTensor(elt, (m, n)) #Tall array
# We want to test 0.0 on the diagonal. We need to make all rows equal to guarantee this with numerical roundoff.
if singular
for i in 2:m
for i in 2:4
A[i, :] = A[1, :]
end
end
Q, X = qx(A; positive=positive)
Q, X = qx(A; positive=positive, rr_cutoff=rr_cutoff, rr_verbose=false)
@test A ≈ Q * X atol = eps
@test array(Q)' * array(Q) ≈ Id atol = eps
@test array(Q)' * array(Q) ≈ Diagonal(fill(1.0, dim(Q, 2))) atol = eps
#@test array(Q) * array(Q)' no such relationship for tall matrices.
if positive
nr, nc = size(X)
dr = qx == ql ? Base.max(0, nc - nr) : 0
diagX = diag(X[:, (1 + dr):end]) #location of diag(L) is shifted dr columns over the right.
@test all(real(diagX) .>= 0.0)
@test all(imag(diagX) .== 0.0)
end
if rr_cutoff > 0 && singular
@test dim(Q, 2) == 4 #make sure the rank revealing mechanism hacked off the columns of Q (and rows of X).
@test dim(X, 1) == 4 #Redundant?
end
end

nothing
60 changes: 59 additions & 1 deletion test/base/test_decomp.jl
@@ -1,4 +1,6 @@
using ITensors, LinearAlgebra, Test
using ITensors, LinearAlgebra, Test, Random

Random.seed!(314159)

#
# Decide if rank 2 tensor is upper triangular, i.e. all zeros below the diagonal.
@@ -67,6 +69,30 @@ function is_upper(A::ITensor, r::Index)::Bool
end
is_lower(A::ITensor, r::Index)::Bool = is_upper(r, A)

#
# Makes all columns linearly dependent but scaled differently.
#
function rank_fix(A::ITensor, Linds...)
Lis = commoninds(A, (Linds...))
Ris = uniqueinds(A, Lis)
#
# Use combiners to render A down to a rank 2 tensor, ready for the matrix QR routine.
#
CL, CR = combiner(Lis...), combiner(Ris...)
cL, cR = combinedind(CL), combinedind(CR)
AC = A * CR * CL
if inds(AC) != IndexSet(cL, cR)
AC = permute(AC, cL, cR)
end
At = NDTensors.tensor(AC)
nc = dim(At, 2)
@assert nc >= 2
for c in 2:nc
At[:, c] = At[:, 1] * 1.05^c
end
return itensor(At) * dag(CL) * dag(CR)
end

function diag_upper(l::Index, A::ITensor)
At = NDTensors.tensor(A * combiner(noncommoninds(A, l)...))
if size(At) == (1,)
Expand Down Expand Up @@ -362,6 +388,38 @@ end
@test A ≈ Q * L atol = 1e-13
end

@testset "Rank revealing QR/RQ/QL/LQ decomp on MPS dense $elt tensor" for ninds in
[1, 2, 3],
elt in [Float64, ComplexF64]

l = Index(5, "l")
s = Index(2, "s")
r = Index(5, "r")
A = randomITensor(elt, l, s, s', r)

Ainds = inds(A)
A = rank_fix(A, Ainds[1:ninds]) #make all columns linearly dependent on column 1, so rank==1.
Q, R, q = qr(A, Ainds[1:ninds]; rr_cutoff=1e-12)
@test dim(q) == 1 #check that we found rank==1
@test A ≈ Q * R atol = 1e-13
@test Q * dag(prime(Q, q)) ≈ δ(Float64, q, q') atol = 1e-13

R, Q, q = rq(A, Ainds[1:ninds]; rr_cutoff=1e-12)
@test dim(q) == 1 #check that we found rank==1
@test A ≈ Q * R atol = 1e-13 #With ITensors R*Q==Q*R
@test Q * dag(prime(Q, q)) ≈ δ(Float64, q, q') atol = 1e-13

L, Q, q = lq(A, Ainds[1:ninds]; rr_cutoff=1e-12)
@test dim(q) == 1 #check that we found rank==1
@test A ≈ Q * L atol = 1e-13 #With ITensors L*Q==Q*L
@test Q * dag(prime(Q, q)) ≈ δ(Float64, q, q') atol = 1e-13

Q, L, q = ql(A, Ainds[1:ninds]; rr_cutoff=1e-12)
@test dim(q) == 1 #check that we found rank==1
@test A ≈ Q * L atol = 1e-13
@test Q * dag(prime(Q, q)) ≈ δ(Float64, q, q') atol = 1e-13
end

@testset "factorize with QR" begin
l = Index(5, "l")
s = Index(2, "s")