Clearly document the basis "endianness" in tensor products and partial traces (and more), and make a comparison table to other Julia and Python tools and to kron in basic linear algebra and tensor network toolkits
#426
I didn't check, but my guess would be the fact that […]. Can you rewrite things so you use the built-in […]?
That is indeed the case. The "endianness" of how basis vectors are enumerated in various tools is not fixed: numpy, qutip, qiskit, QuantumOptics, etc. can all have differing ordering. Two examples where this comes in:
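To make the enumeration difference concrete, here is a minimal, dependency-free Python sketch of the two common conventions for mapping a multi-qubit basis state to a flat index. The function names are invented for illustration; they are not part of any of the libraries mentioned above.

```python
# Two conventions for enumerating the basis |b1 b2 ... bn> of n qubits.
# "Big-endian": the leftmost subsystem is the most significant digit.
# "Little-endian": the leftmost subsystem is the least significant digit.

def index_big(bits, d=2):
    """Flat index with the leftmost subsystem weighted highest."""
    i = 0
    for b in bits:
        i = i * d + b
    return i

def index_little(bits, d=2):
    """Flat index with the leftmost subsystem weighted lowest."""
    i = 0
    for b in reversed(bits):
        i = i * d + b
    return i

print(index_big([1, 0]))     # |10> -> index 2
print(index_little([1, 0]))  # |10> -> index 1
```

The same ket `|10>` lands at index 2 under one convention and index 1 under the other, which is exactly why raw matrices cannot be exchanged between tools without checking the layout.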
As a general design principle, all of these tools are set up not to expect the user to input raw matrices (because they are basis dependent), but rather to build things up from explicitly defined basis states. In both qutip and QuantumOptics it is most convenient to use initial defining operations like:
No matter the tool, in Python or in Julia, this approach makes things a bit safer. I will close the issue for now, as it is not a bug (although we can potentially document this more clearly -- PRs welcomed on that ;) ). Do not hesitate to post or reopen the issue if anything else is not working as expected.
Thanks for the clarification. The discrepancy indeed vanishes when I generate the unitaries from their corresponding interaction Hamiltonians instead. This still feels a little misleading, though: for example, one might simply want to use QuantumOptics to compute some tensor product of, say, […], yet […] despite […] and […].

At the very least, shouldn't […]? Likewise, if one simply wants to find the partial trace of some arbitrary operator, shouldn't the standard order also be used in its definition, so that we have […] instead of […]? While we can define […].
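The convention dependence of the partial trace can be made concrete with a small, dependency-free Python sketch. It assumes the row-major ("big-endian") composite index `i_total = i_first * d_second + i_second`; the helper names (`kron`, `ptrace_second`) are written just for this example and are not any library's API.

```python
# Partial trace over the second (rightmost, fastest-varying) subsystem,
# written for the row-major / "big-endian" composite-index convention.

def kron(a, b):
    """Kronecker product of two matrices given as nested lists."""
    return [
        [a[i][j] * b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
        for i in range(len(a)) for k in range(len(b))
    ]

def ptrace_second(rho, d1, d2):
    """Trace out the second subsystem of a (d1*d2) x (d1*d2) density matrix."""
    return [
        [sum(rho[i * d2 + k][j * d2 + k] for k in range(d2)) for j in range(d1)]
        for i in range(d1)
    ]

rho_a = [[0.5, 0.5], [0.5, 0.5]]  # |+><+|
rho_b = [[1.0, 0.0], [0.0, 0.0]]  # |0><0|
rho_ab = kron(rho_a, rho_b)
print(ptrace_second(rho_ab, 2, 2))  # recovers rho_a
```

Note the recovery of `rho_a` only works because the index formula inside `ptrace_second` matches the layout used by `kron`; pairing a row-major trace with a column-major product (or vice versa) silently traces out the wrong subsystem.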
Could you elaborate on what the problem is in the first example you are giving? That tensor product result seems appropriate to me. Same with the second example of a partial trace.

I can easily agree that this needs to be documented more clearly to avoid confusion, so I will reopen this with a changed title. Maybe a comparison table of all the different conventions in qutip/qiskit/quantumtoolbox/etc. would be nice to have.

Concerning your example of finding an arbitrary operator: if one finds a matrix somewhere in the literature but does not know in what basis that matrix is written, then it is indeed impossible to use that result in a numerical simulation reliably. The "endianness" of a basis is not standardized among papers and textbooks (to the extent that researchers in the pedagogy of QIS have to create surveys specifically agnostic to endianness choices). The same thing comes up in classical computing: if you receive some arbitrary binary data over the network, you do not actually know what the bytes you have found mean, as the endianness of the bytes is not specified -- this comes up all the time in BLE or UART comms.
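The classical byte-order analogy above fits in two lines of Python: the same pair of bytes decodes to two different integers depending on the endianness the receiver assumes.

```python
# The same two bytes, two different integers, depending on declared endianness
# (the BLE/UART situation described above).
payload = bytes([0x12, 0x34])

print(int.from_bytes(payload, "big"))     # 0x1234 = 4660
print(int.from_bytes(payload, "little"))  # 0x3412 = 13330
```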
Thanks for reopening the issue.
I expected the tensor product […].

As for the partial trace example, if I have […]. In the example I have […] and […]. In both examples, I am simply using the common computational basis, i.e., […].

Consider if a researcher constructed a unitary operation U from some interaction Hamiltonian in the computational basis, and they want to perform a controlled-U operation on a set of input states. So naturally they use […].

Let's look at the well-known CNOT gate, where there could be no uncertainty about basis or "endianness" (it's just the computational basis that everyone uses):

```julia
b = NLevelBasis(2)
pauli_x = Operator(b, [0 1; 1 0])
CNOT = tensor(dm(nlevelstate(b, 2)), pauli_x) + tensor(dm(nlevelstate(b, 1)), one(b))
```

```
Operator(dim=4x4)
  basis: [NLevel(N=2) ⊗ NLevel(N=2)]
 1.0+0.0im  0.0+0.0im  0.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im  0.0+0.0im  1.0+0.0im
 0.0+0.0im  0.0+0.0im  1.0+0.0im  0.0+0.0im
 0.0+0.0im  1.0+0.0im  0.0+0.0im  0.0+0.0im
```

This is clearly different from the standard CNOT definition in the computational basis of […].
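Both matrices can be reproduced without any quantum library: under a column-major ("little-endian") layout, the factor order in the underlying Kronecker product is reversed relative to the row-major convention, which is consistent with the output printed above. A minimal pure-Python sketch (the `kron` and `madd` helpers are ad-hoc for this example):

```python
# Reproducing both readings of |1><1| (x) X + |0><0| (x) I
# with a dependency-free Kronecker product.

def kron(a, b):
    """Kronecker product of two matrices given as nested lists."""
    return [
        [a[i][j] * b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
        for i in range(len(a)) for k in range(len(b))
    ]

def madd(a, b):
    """Elementwise matrix sum."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

P0, P1 = [[1, 0], [0, 0]], [[0, 0], [0, 1]]
X, I2 = [[0, 1], [1, 0]], [[1, 0], [0, 1]]

# Row-major / big-endian reading: control qubit is the left kron factor.
cnot_big = madd(kron(P1, X), kron(P0, I2))
# Column-major / little-endian layout reverses the kron factor order.
cnot_little = madd(kron(X, P1), kron(I2, P0))

print(cnot_big)     # the textbook CNOT matrix
print(cnot_little)  # the matrix printed in the QuantumOptics.jl session above
```

Running this, `cnot_big` is the textbook `[[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]`, while `cnot_little` reproduces the permuted matrix shown in the REPL output.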
One reason numpy, QuTiP, etc. have the "conventional convention" for tensor "endianness" (sometimes I call this "tensor layout") is that multi-dimensional arrays in numpy (and C) follow this convention in memory: if we walk through the block of memory where the array is stored, the rightmost index varies fastest. This is called "row-major" (or "C") ordering. Julia and Fortran use the opposite "column-major" (or "Fortran") convention for multi-dimensional arrays: the leftmost index varies fastest.

Multi-dimensional arrays come into this because they are a very convenient way of addressing a tensor-product space. We can just reshape e.g. an operator matrix for multiple subsystems into a multi-dimensional array so that indices correspond to subsystems. Then we can use operations like […].

The sticking point is that we want the reshape to be fast. Reshaping according to the tensor layout convention of the language or library is "free" -- it's just a metadata change. Using a convention mismatched with the language or library requires either reimplementing array indexing and reshaping, or putting up with overhead from shuffling data around in memory to match the desired convention.

I agree it's unfortunate that there is this difference between numpy and Julia, and that the Julia/Fortran array layout convention doesn't match the (a?) textbook definition (or indeed the Julia definition!) of the Kronecker product. It's however not a trivial matter to switch QO.jl to the numpy convention. I realize I am not proposing a solution here -- just hoping to make the situation more understandable.

I agree we should prominently highlight this difference in the documentation somewhere! We could even provide a wrapper around permutedims that puts operators or states into "C"/"row-major" order for easier cross-checking? Regarding […]
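The "which index varies fastest" distinction comes down to two flat-offset formulas. This is a dependency-free sketch with helper names invented for illustration; it only demonstrates the addressing arithmetic, not any library internals.

```python
# Flat memory offsets for a multi-dimensional array under the two layouts.

def c_offset(index, shape):
    """Row-major ("C") offset: rightmost index varies fastest."""
    off = 0
    for i, n in zip(index, shape):
        off = off * n + i
    return off

def f_offset(index, shape):
    """Column-major ("Fortran"/Julia) offset: leftmost index varies fastest."""
    off = 0
    for i, n in reversed(list(zip(index, shape))):
        off = off * n + i
    return off

shape = (2, 2, 2)
indices = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]

# Walk memory in order: which multi-index ticks first?
c_walk = sorted(indices, key=lambda ix: c_offset(ix, shape))
f_walk = sorted(indices, key=lambda ix: f_offset(ix, shape))
print(c_walk[:3])  # rightmost fastest: (0,0,0), (0,0,1), (0,1,0)
print(f_walk[:3])  # leftmost fastest:  (0,0,0), (1,0,0), (0,1,0)
```

This is also why a reshape that respects the native layout is a pure metadata change: the flat offsets already agree, so no element ever moves.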
I mostly agree on this topic, just wanted to provide a minor clarification:
There is indeed a lot of uncertainty about basis and endianness even in this case. Some people use the basis […].
Thanks for all the clarifications and for pointing out the existence of the uncertainty even with the CNOT gate. I understand more about the circumstances now. For the case of […]. Hence I wonder if it is possible to have something similar for […]. I'm a fairly new user, and I defined the CNOT, CZ, Toffoli gates, etc. simply by doing […].
Yes, such a helper function would certainly be a nice addition. Happy to help with reviewing a PR about this.

Going on a long tangent: we have two convenient ways to create these gates without copying over matrices. Using the small (incomplete) symbolics library we have (it can be used to express symbolic expressions in different formalisms, like state vectors or stabilizer tableaux): […]

Working strictly in QuantumOptics, one can express CNOT and family in terms of sums of tensor products of projectors (making the meaning of the matrices more explicit): […]
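As a sketch of what such a gate-constructing helper might look like, here is a pure-Python mock-up with an explicit endianness switch. The name `controlled` and the keyword `little_endian` are invented for illustration; this is not an existing or proposed QuantumOptics.jl API.

```python
# Hypothetical controlled-U helper with an explicit layout switch.

def kron(a, b):
    """Kronecker product of two matrices given as nested lists."""
    return [
        [a[i][j] * b[k][l] for j in range(len(a[0])) for l in range(len(b[0]))]
        for i in range(len(a)) for k in range(len(b))
    ]

def madd(a, b):
    """Elementwise matrix sum."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def controlled(u, little_endian=False):
    """Build |0><0| (x) I + |1><1| (x) U; with little_endian=True the
    factor order is swapped to match a column-major tensor layout."""
    P0, P1 = [[1, 0], [0, 0]], [[0, 0], [0, 1]]
    I2 = [[1, 0], [0, 1]]
    if little_endian:
        return madd(kron(I2, P0), kron(u, P1))
    return madd(kron(P0, I2), kron(P1, u))

X = [[0, 1], [1, 0]]
print(controlled(X))                      # the textbook CNOT
print(controlled(X, little_endian=True))  # the column-major variant
```

Making the layout an explicit, named argument (rather than an implicit convention) is the point of the sketch: the caller states which "endianness" they mean, so the resulting matrix is unambiguous.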
I've been struggling with differing results between QuantumOptics.jl and Python's QuTiP in a computation that features only simple applications of unitary operators on composite quantum systems, and I'm not sure if I am misunderstanding the workings of QuantumOptics.jl.

I'm sorry if this is not a minimal example; I've been unable to reproduce the discrepancy reliably (but it's just matrix multiplication, tensor products, and partial traces).

The operation that I'm performing is on a 5-qubit system (labelled ABCDE), after which I want the state of the first qubit (labelled A): […]

The values of $\rho_N$, $U_A$, and $U_B$ are shown in the code below. […]

Now in QuTiP (note that the code is the same except for syntax differences): […]
In QuantumOptics.jl, the cross term is 0.46473, but in QuTiP it's 0.43255551.
Why are they different?