
Improving infinite mpo matrix #77

Merged: 42 commits merged into ITensor:main on Sep 13, 2023

Conversation

LHerviou
Contributor

As described, this PR makes some redefinitions and improvements to the form of InfiniteMPOMatrix, along with the modifications that follow from them.
The core idea:

  • Arbitrarily filled InfiniteMPOMatrix objects are allowed.
  • Instead of waiting for nrange(mpo) before evaluating the terms in the local MPO, evaluation is no longer delayed.
  • Rows and columns in an InfiniteMPOMatrix have well-defined indices, shared by all elements.

All of this significantly simplifies manipulating and optimizing the InfiniteMPOMatrix and the corresponding InfiniteMPO.
I did not have time to fully debug, so I will keep updating over the next few days and confirm when it can be merged.
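For readers less familiar with the structure, here is a minimal, hypothetical sketch of what one unit cell of such a block MPO could look like as a plain Matrix{ITensor}. The transverse-field Ising layout, the helper name ising_block, and the use of empty ITensors for unused blocks are illustrative assumptions, not the actual constructors used in this package:

```julia
using ITensors

# One unit-cell tensor of a block MPO for H = -J Σ Sz_i Sz_{i+1} - h Σ Sx_i,
# written as a 3x3 lower-triangular Matrix{ITensor}. Illustrative sketch only.
function ising_block(s::Index; J=1.0, h=0.5)
  W = Matrix{ITensor}(undef, 3, 3)
  for I in eachindex(W)
    W[I] = ITensor()           # empty blocks, as allowed by this PR
  end
  W[1, 1] = op("Id", s)
  W[2, 1] = op("Sz", s)
  W[3, 1] = -h * op("Sx", s)
  W[3, 2] = -J * op("Sz", s)
  W[3, 3] = op("Id", s)
  return W
end

s = siteind("S=1/2")
W = ising_block(s)
```

In the actual InfiniteMPOMatrix, each row and column additionally carries a well-defined index shared by all elements of that row or column, which is what makes block-wise manipulation straightforward.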

Review thread on src/models/fqhe13.jl (outdated, resolved)
@LHerviou
Contributor Author

All tests work on my branch without any measurable slowdown.

Probably a lot of simplification/optimization can still be done. It is up to you whether to integrate it right now, or wait a bit until I can get an overview of possible changes.

Review threads on src/models/models.jl and src/infinitempomatrix.jl (outdated, resolved)
@mtfishman
Member

@LHerviou I'm fine waiting to merge this if you think there are simplifications or optimizations to make once you have the time, unless @JanReimers needs this for something and therefore it's useful to have merged.

@JanReimers
Contributor

I have a branch with:

  • InfiniteMPO compression outputting InfiniteCanonicalMPO
  • VUMPS support for InfiniteMPO
  • Simplification of the generic VUMPS code
  • VUMPS support for InfiniteCanonicalMPO (which may require some discussion)

I was volunteering to let Loic merge first, and then I can deal with any resulting conflicts.

@LHerviou
Contributor Author

On my computer, this runs nicely.

To work around some missing methods in NDTensors, I explicitly defined an addition rule for empty ITensors. I would be interested in someone having a look at it for type stability, as this touches parts of the package I am not familiar with.
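To illustrate the kind of addition rule being referred to, here is a hypothetical sketch under the assumption that an "empty" block is an ITensor with EmptyStorage (i.e. eltype EmptyNumber); this is not the actual code added in the PR:

```julia
using ITensors, NDTensors

# Treat an ITensor with EmptyStorage as an additive identity when summing
# blocks of a Matrix{ITensor}. The helper names here are hypothetical.
is_empty_block(T::ITensor) = eltype(T) === NDTensors.EmptyNumber

function add_blocks(A::ITensor, B::ITensor)
  is_empty_block(A) && return B
  is_empty_block(B) && return A
  return A + B
end
```

Type stability is the delicate part: the return type depends on which branch is taken, which is presumably what the request for review above is about.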

@LHerviou
Contributor Author

LHerviou commented Jun 20, 2023 via email

@mtfishman
Member

I don't seem to need to overload NDTensors.datatype for Matrix{ITensor} * Matrix{ITensor} to work. Is it possible another operation needed that?

@LHerviou
Contributor Author

LHerviou commented Jun 20, 2023 via email

@mtfishman
Member

@LHerviou the issues we discussed have been implemented in the latest version of ITensors.jl (ITensor/ITensors.jl#1135).

@LHerviou
Contributor Author

Apparently, there is still one definition missing in how you implemented zero.

julia> T = ITensor()
ITensor ord=0
NDTensors.EmptyStorage{NDTensors.EmptyNumber, NDTensors.Dense{NDTensors.EmptyNumber, Vector{NDTensors.EmptyNumber}}}

julia> zero(T)
ERROR: MethodError: no method matching fill!(::NDTensors.NoData, ::NDTensors.EmptyNumber)

Adding "Base.fill!(::NDTensors.NoData, ::Any) = 0 " is enough to solve the issue.

LHerviou and others added 2 commits July 3, 2023 15:34

  • Adding the small fix. It should just be integrated in ITensors.
  • …icant overhead in some cases for translatecell and index access
@mtfishman
Member

@LHerviou what do you think of changing the name of the type from InfiniteMPOMatrix to InfiniteBlockMPO? I think that is a bit more descriptive, and captures that the MPO is composed of block matrices.

> I do not mind that change, as long as I don't have to change it again after that.

Ideally we would come up with a name that works well and that we won't have to change going forward. Also, we could do that name change in a future PR.

@LHerviou
Contributor Author

I think InfiniteBlockMPO is a good name: it is an MPO made of blocks, in the same sense as a block matrix.
So I have no objections.

I have no idea how many people (beyond me and some of my collaborators) used the InfiniteMPOMatrix anyway.

Review threads on src/infinitempo.jl and src/infiniteblockmpo.jl (outdated, resolved)
@mtfishman
Member

@LHerviou this looks good to me now. Do you think this is ready to merge?

@LHerviou
Contributor Author

LHerviou commented Sep 5, 2023 via email

@LHerviou
Contributor Author

LHerviou commented Sep 8, 2023

For the future: I will let @JanReimers finally do his own updates, and then I will add the iDMRG and related features using his version of the compression.

@mtfishman
Member

Thanks again!

mtfishman merged commit d35574b into ITensor:main on Sep 13, 2023 (6 checks passed)
@mtfishman
Member

@LHerviou I'm not sure what's going on, but it looks like this PR broke some tests on main (https://github.com/ITensor/ITensorInfiniteMPS.jl/commits/main); could you take a look?

@LHerviou
Contributor Author

I will have a look either tonight or tomorrow.
The errors seem to imply that something is missing in ITensors.NDTensors, but it might be something else.

@mtfishman
Member

Ah, we have been changing some internals of NDTensors, and I guess this PR relied on some of those internals, so I could imagine there might be an issue there.

We are in the process of refactoring and simplifying NDTensors so hopefully it will mean some of the NDTensors overloads made in this PR can just be removed in the near future.

LHerviou deleted the ImprovingInfiniteMPOMatrix branch on September 26, 2023 at 14:15
@LHerviou
Contributor Author

LHerviou commented Sep 26, 2023

@mtfishman Ok, I found the source of the bug.

On lines 269 and 272 of [tensor_operations/tensor_algebra.jl](https://github.com/ITensor/ITensors.jl/blob/main/src/tensor_operations/tensor_algebra.jl), you changed the way directsum_projectors! is defined, from

`D1[...] = true`

to

`D1[...] = one(eltype(D1))`

In the end, the problem is that one(EmptyNumber) is not defined. I am testing some fixes right now.

@mtfishman
Member

Ok interesting, thanks for looking into it. Does that mean an EmptyStorage is getting direct-summed somewhere?

@LHerviou
Contributor Author

LHerviou commented Sep 26, 2023

Yes, when I am fusing some indices.
To be more precise: when moving from a big MPOMatrix to the InfiniteMPO.

@LHerviou
Contributor Author

LHerviou commented Sep 27, 2023

For info: defining

`NDTensors.convert(::Type{NDTensors.EmptyNumber}, x::Float64) = NDTensors.EmptyNumber()`

fixes the problem. Should I update ITensorInfiniteMPS, or do you want to wait and fix it in NDTensors?

I am not sure it is a good workaround though, as the resulting projectors are not actual projectors anymore, and they could be used at some point later.
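To make that concern concrete, here is a small illustration of why the convert definition above is lossy (a sketch that assumes the one-line overload quoted above has been loaded; it is not a proposed fix):

```julia
using NDTensors

# Workaround quoted above (an assumption for this illustration only):
NDTensors.convert(::Type{NDTensors.EmptyNumber}, x::Float64) = NDTensors.EmptyNumber()

# Any Float64 written into EmptyNumber storage is now silently discarded,
# which is why projectors built this way no longer contain actual ones.
convert(NDTensors.EmptyNumber, 1.0)  # returns EmptyNumber(); the value 1.0 is lost
```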

@mtfishman
Member

I'm not sure that is something we want to define, since conversion shouldn't lose information (i.e. Float64 -> ComplexF64 conversion is fine since it widens the type, but ComplexF64 -> Float64 is only ok if the imaginary part is zero).

I wonder if `one(::EmptyNumber) = true` would fix it? I think that makes some sense as a definition anyway, since EmptyNumber is kind of a universal zero value and true is the smallest one value in Julia.
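Since the failing call in directsum_projectors! is one(eltype(D1)), i.e. one applied to the type, a sketch of the suggested definition could cover both the type and instance forms (an assumption based on the discussion above, not merged code):

```julia
using NDTensors

# Make EmptyNumber behave like a universal zero whose multiplicative identity
# is Bool true, as suggested above. Hypothetical definitions, not merged code.
Base.one(::Type{NDTensors.EmptyNumber}) = true   # covers one(eltype(D1))
Base.one(::NDTensors.EmptyNumber) = true         # instance form, for completeness

one(NDTensors.EmptyNumber)  # -> true, so D1[...] = one(eltype(D1)) reduces to D1[...] = true
```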

@mtfishman
Member

We are replacing the EmptyStorage type with a better and simpler design in ITensor/ITensors.jl#1161 so hopefully things like this will be easier to deal with after that is merged.
