[ITensors] [BUG] Can't measure complex operator on real MPS in expect #79

Open
emstoudenmire opened this issue Nov 25, 2022 · 15 comments
Labels: bug (Something isn't working)
@emstoudenmire
Contributor

Description of bug

Using a complex operator on a real MPS in expect throws an InexactError. I think the issue is coming from the function adapt, which may be trying to convert the operator to a real-valued operator; that logic is likely incorrect for this code, since contracting a complex operator with a real MPS is allowed.

Minimal code demonstrating the bug or unexpected behavior


  using ITensors

  N = 10
  s = siteinds("S=1/2", N)
  psi = randomMPS(s; linkdims=4)
  vals = expect(psi, "Sy")

Version information

  • Output from versioninfo():
julia> versioninfo()
Julia Version 1.8.2
Commit 36034abf260 (2022-09-29 15:21 UTC)
Platform Info:
  OS: macOS (arm64-apple-darwin21.3.0)
  CPU: 10 × Apple M1 Max
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-13.0.1 (ORCJIT, apple-m1)
  Threads: 1 on 8 virtual cores
Environment:
  JULIA_DIR = /Applications/Julia-1.8.app/Contents/Resources/julia
  • Output from using Pkg; Pkg.status("ITensors"):
julia> using Pkg; Pkg.status("ITensors")
Status `~/.julia/environments/v1.8/Project.toml`
  [9136182c] ITensors v0.3.20 `~/.julia/dev/ITensors`
emstoudenmire added the bug label on Nov 25, 2022
@JanReimers

It looks like it is related to the GPU support enhancements. Presumably the purpose of calling adapt is to adapt the precision of the op's element type to match the precision of the MPS, but not to force Float<->Complex conversions. If this is correct, then we want this sort of behaviour:

adapt(ComplexF64, op{ComplexF64}) -> op{ComplexF64}
adapt(Float64,    op{ComplexF64}) -> op{ComplexF64}
adapt(ComplexF32, op{ComplexF64}) -> op{ComplexF32}
adapt(Float32,    op{ComplexF64}) -> op{ComplexF32}
adapt(ComplexF16, op{ComplexF64}) -> op{ComplexF16}
adapt(Float16,    op{ComplexF64}) -> op{ComplexF16}
adapt(ComplexF64, op{Float64})    -> op{Float64}
adapt(Float64,    op{Float64})    -> op{Float64}
adapt(ComplexF32, op{Float64})    -> op{Float32}
adapt(Float32,    op{Float64})    -> op{Float32}
adapt(ComplexF16, op{Float64})    -> op{Float16}
adapt(Float16,    op{Float64})    -> op{Float16}
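
A minimal sketch of that rule (a hypothetical helper, not ITensors code): the precision follows the requested type, while real vs. complex follows the op's existing element type.

  # Hypothetical helper illustrating the table above: precision follows the
  # requested type; real vs. complex follows the op's element type.
  promote_precision(request::Type{<:Real}, op_eltype::Type{<:Real}) = request
  promote_precision(request::Type{<:Complex}, op_eltype::Type{<:Real}) = real(request)
  promote_precision(request::Type{<:Real}, op_eltype::Type{<:Complex}) = Complex{request}
  promote_precision(request::Type{<:Complex}, op_eltype::Type{<:Complex}) = Complex{real(request)}

  promote_precision(Float64, ComplexF64)  # ComplexF64
  promote_precision(ComplexF32, Float64)  # Float32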

@mtfishman
Member

mtfishman commented Jan 9, 2023

@JanReimers's assessment is correct. adapt from Adapt.jl is being used to handle conversion to GPU when the MPS is on GPU, as well as element type conversion (for example, if the MPS is single precision, this makes sure the op is also single precision). However, the default conversion/adaptation from adapt is currently a bit too strict when the op is complex but the MPS is real.

One way to solve this would be to define a custom adaptor to handle this case, as described here: https://github.com/JuliaGPU/Adapt.jl/blob/4c07cccd182bd36a7cbb288d64c8281dd7be665d/src/Adapt.jl#L5-L39. It would mostly just do conversion of the element type, but also be less strict about adapting complex values to real values (i.e. in this case when the MPS is real but the ops are complex). The reason is that we don't want strict conversion of the element type, as Jan laid out. Maybe we can call it RealOrComplex{T}, with the interface:

adapt(RealOrComplex(Array{Float32}), ::Array{Float64})::Array{Float32}
adapt(RealOrComplex(Array{ComplexF32}), ::Array{Float64})::Array{ComplexF32}
adapt(RealOrComplex(Array{Float32}), ::Array{ComplexF64})::Array{ComplexF32} # Currently broken for `adapt(Array{Float32}, ::Array{ComplexF64})`

The name RealOrComplex would mean that the desired element type is understood to be real or complex, independent of the actual element type that is specified.

An alternative would be to construct the ops first and analyze their element types at the beginning, converting the MPS to complex if any of the ops are complex. But I think it would be nice to have a more general solution based on Adapt.jl, since this issue will come up in more places than just expect/correlation_matrix.
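
For concreteness, a rough sketch of that alternative (hypothetical helper name, not the actual expect code):

  using ITensors

  # Promote the MPS to complex if any of the operators to be measured is complex.
  function promote_state_for_op(psi::MPS, opname::String)
    s = siteinds(psi)
    ops = [op(opname, s[j]) for j in 1:length(psi)]
    return any(o -> eltype(o) <: Complex, ops) ? complex(psi) : psi
  end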

Ultimately, the op function should have some ability to specify the data type and element type, for example: op(Float32, "X", s, 3) or op(CuVector{Float64}, "X", s, 3). But I think that should be handled by adapt internally anyway.

@emstoudenmire
Contributor Author

I got it working, and am ready to make a PR. However, there is already a RealOrComplex defined in the ITensors module (itensor.jl), with the definition

const RealOrComplex{T} = Union{T,Complex{T}}

Any thoughts about what name to use for the Adapt one? It could be RealOrComplexAdaptor but that's long. Maybe that's ok though.

@mtfishman
Member

What about MaybeComplexAdaptor? Then it reads like adapt(MaybeComplexAdaptor(Float32), T), for example.

@emstoudenmire
Contributor Author

I'm good with that.

One other question is about the type that goes into the adaptor. (This is something that's still a bit magical about Adapt overall to me.)

For this bug, the argument on the right will be an ITensor formed by calling op, which may or may not have complex storage. The desired behavior here is to adapt, i.e. change, the storage container type (say from Vector to CuVector) or the element precision, without changing the "complexness" of the element type.

So for that specific example (adapting an ITensor) would we want the call to look like:

  1. adapt(MaybeComplexAdaptor(Vector{Float64}), O) <-- this is closer to what it is right now
  2. adapt(MaybeComplexAdaptor(Float64), O)

@mtfishman
Member

I think we can support both, but that is up to us. For this case we want the version adapt(MaybeComplexAdaptor(Vector{Float64}), O) but for other cases we might want adapt(MaybeComplexAdaptor(Float64), O) as well.

@emstoudenmire
Contributor Author

emstoudenmire commented Jan 12, 2023

Sounds good. Would those have different behavior and/or serve a different purpose?

Or would they be two ways of accomplishing the same result?

@mtfishman
Member

mtfishman commented Jan 12, 2023

They would have different behavior in general: adapt(MaybeComplexAdaptor(Float64), O) would only change the element type while preserving everything else about the data storage type (i.e. preserving whether O is on GPU or CPU), while adapt(MaybeComplexAdaptor(Vector{Float64}), O) would additionally move the data to CPU if it was on GPU.

@mtfishman
Member

mtfishman commented Jan 12, 2023

So for example adapt(MaybeComplexAdaptor(Float32), O) would be useful since it could be a way of specifying that you want single precision, without moving O on or off of GPU.
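
To illustrate the distinction with hypothetical calls (assuming the MaybeComplexAdaptor under discussion and an op O whose data lives on GPU):

  adapt(MaybeComplexAdaptor(Float32), O)          # stays on GPU; element precision becomes single
  adapt(MaybeComplexAdaptor(Vector{Float32}), O)  # additionally moves the data to a CPU Vector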

@emstoudenmire
Contributor Author

Great. I think I get it now - very helpful. I was just catching up a bit on the intended interface and behavior of adapt. It still has some magical aspects, but is nevertheless very useful in these contexts.

@mtfishman
Member

I think basically we can define our own "adaptors" to have any behavior, and you can think of adapt as a way to "reach down" into a nested data structure, where the adaptor works at the low level of the data structure. So for example our adaptors could work on an ITensor, Vector{ITensor}, ITensorNetwork, etc. And the point is that the adaptor only has to be defined on some low level data structure like Vector or CuVector and then it automatically works for all of those complicated nested data structures.

@mtfishman
Member

And specifically, you overload Adapt.adapt_storage on basic data types like numbers to get your new adaptor to work for any data structure.

Adapt.adapt_structure overloads are used to define how adapt should recreate a nested type once the data storage is adapted, so those are universal for all adaptors.
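
A minimal sketch of how such an adaptor could be wired up, assuming hypothetical names and overloads (not the final ITensors implementation):

  using Adapt

  # Hypothetical adaptor carrying a requested element type.
  struct MaybeComplexAdaptor{T}
    eltype::Type{T}
  end

  # adapt_storage overloads on basic array types: match the requested precision,
  # but keep the real/complex character of the input data.
  function Adapt.adapt_storage(to::MaybeComplexAdaptor, x::AbstractArray{<:Real})
    return convert(AbstractArray{real(to.eltype)}, x)
  end
  function Adapt.adapt_storage(to::MaybeComplexAdaptor, x::AbstractArray{<:Complex})
    return convert(AbstractArray{Complex{real(to.eltype)}}, x)
  end

  adapt(MaybeComplexAdaptor(Float32), [1.0, 2.0])     # Vector{Float32}
  adapt(MaybeComplexAdaptor(Float32), [1.0 + 2.0im])  # Vector{ComplexF32}
  adapt(MaybeComplexAdaptor(ComplexF32), [1.0, 2.0])  # Vector{Float32}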

@linhz0hz

linhz0hz commented Jan 11, 2024

Is there any update on this issue? Is there a possible workaround?
I am new to ITensor and it seems I am hitting this issue with a very simple use case:

using ITensors
N = 20
s = siteinds("S=1/2", N; conserve_qns=false)
psi = MPS(s, "Y+")

@mtfishman
Member

You should be able to use expect(complex(psi), "op") to circumvent this issue.
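
Applied to the minimal example from the top of the thread, the workaround looks like:

  using ITensors

  N = 10
  s = siteinds("S=1/2", N)
  psi = randomMPS(s; linkdims=4)
  vals = expect(complex(psi), "Sy")  # promote the MPS to complex before measuring Sy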

mtfishman transferred this issue from ITensor/ITensors.jl on Oct 25, 2024