🐛 Numerical instability in DD package for large-scale systems #575
Comments
It turns out that this issue is not limited to the matrix DD case and it's not solely the matrix normalization that is at fault.

```
OPENQASM 2.0;
include "qelib1.inc";
qreg q[81];
h q;
```

works as expected, but, starting from 82 qubits, it no longer does. The main cause for this issue is the tolerance that is being employed in the package. So, at […]

Two questions that pop up: […]

I'll try to address them in a follow-up comment.
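To see how a tolerance on its own can produce this kind of stalling, here is a minimal, self-contained sketch. It is not the package's code: the table, the `lookup` function, and the `TOLERANCE` value of `2e-13` are assumptions chosen purely for illustration. Repeatedly multiplying a weight by `1/sqrt(2)` and snapping the result to an existing table entry within the tolerance makes the weight get stuck at roughly `6.43e-13` from 82 steps onward, matching the value reported in the description below.

```cpp
// Illustration only -- NOT the package's implementation. A value table that
// reuses an existing entry whenever a new value lies within TOLERANCE of it.
#include <cmath>
#include <cstdio>
#include <vector>

constexpr double TOLERANCE = 2e-13; // assumed tolerance, for illustration only
std::vector<double> table;          // previously stored (unique) values

// Return an existing entry if one is within TOLERANCE of value; otherwise insert value.
double lookup(double value) {
    for (double entry : table) {
        if (std::abs(entry - value) < TOLERANCE) {
            return entry; // snap to the existing entry
        }
    }
    table.push_back(value);
    return value;
}

int main() {
    const double invSqrt2 = 1.0 / std::sqrt(2.0);
    double weight = 1.0;
    for (int n = 1; n <= 90; ++n) {
        weight = lookup(weight * invSqrt2); // ideally (1/sqrt(2))^n
        if (n >= 79) {
            std::printf("n = %d, weight = %.4e\n", n, weight);
        }
    }
    // The printed weight stalls at ~6.4311e-13 from n = 82 onward instead of continuing to shrink.
}
```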
Ok. Just noticed that this might, in fact, be only due to the matrix normalization.

```
OPENQASM 2.0;
include "qelib1.inc";
qreg q[81];
qreg k[1];
h q;
h k;
```

The reason for the circuit from the previous comment triggering the issue is that the statement `h q;` is translated to a […]

That eliminates the first question and reduces the scope for the error hunt.
As for why the error already shows at […]: this error could, in principle, be fixed by adopting a new normalization scheme that is similar to the vector normalization scheme and does not propagate the greatest common divisor upward. Even then, one has to be very careful as, during DD addition, all edge weights along a path to a terminal are multiplied together before the addition is carried out, and the resulting edge weight is put through normalization and a unique table lookup.
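As a rough illustration of what such a scheme could look like, here is a self-contained sketch that normalizes a node's four successor edge weights by the weight of largest magnitude, so every stored weight has magnitude at most one. Everything in it (the `Complex` alias, `normalizeByLargestMagnitude`, and the choice of divisor) is an assumption for illustration and not the package's actual vector or matrix normalization.

```cpp
// Hypothetical sketch -- not the package's code. Normalize a node's successor
// edge weights by the entry of largest magnitude, keeping stored weights bounded.
#include <algorithm>
#include <array>
#include <cmath>
#include <complex>
#include <cstdio>

using Complex = std::complex<double>;

struct NormalizationResult {
    std::array<Complex, 4> weights; // normalized successor edge weights (|w| <= 1)
    Complex topWeight;              // factor pulled out to the incoming edge
};

NormalizationResult normalizeByLargestMagnitude(std::array<Complex, 4> weights) {
    const auto maxIt = std::max_element(
        weights.begin(), weights.end(),
        [](const Complex& a, const Complex& b) { return std::abs(a) < std::abs(b); });
    const Complex divisor = *maxIt;
    if (std::abs(divisor) == 0.0) {
        return {weights, Complex{0.0, 0.0}}; // all-zero node: nothing to normalize
    }
    for (auto& w : weights) {
        w /= divisor;
    }
    return {weights, divisor};
}

int main() {
    const double s = 1.0 / std::sqrt(2.0);
    // Successor weights of a Hadamard-like node: (s, s, s, -s).
    const auto result = normalizeByLargestMagnitude({Complex{s}, Complex{s}, Complex{s}, Complex{-s}});
    std::printf("top weight: %g%+gi\n", result.topWeight.real(), result.topWeight.imag());
    for (const auto& w : result.weights) {
        std::printf("stored weight: %g%+gi\n", w.real(), w.imag());
    }
}
```

With this choice, the stored weights of the Hadamard-like node all have magnitude one and only the extracted `1/sqrt(2)` factor leaves the node, which is one way to keep stored edge weights away from the tolerance region; whether it interacts well with DD addition is exactly the concern raised above.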
Environment information
Dates back to the very origin of the (QM)DD package.
Description
The code down below demonstrates an instability in the matrix DD normalization.
The script just creates a bunch of Hadamards and then reverses them.
Up until (and including) 81 qubits, this produces results as expected: the result is equal to the identity and the top edge weight is one.

For `n = 81 + k`, the top edge weight will be `sqrt(2)^k`, meaning that, for example, at `n = 128` the top edge weight is equal to roughly a million 😅

The main reason for this is that the first bunch of H's generates a DD that has a top edge weight of `(1/sqrt(2))^n` until `n` hits `81`, where the default numerical tolerance kicks in and the value remains constant at `6.4311e-13`, which is just above the default tolerance. Now each qubit above `81` in the second series of H's induces an error of `sqrt(2)`.

In extreme cases, this may even overflow the range of `double`. In particular, for `n >= 2129`, `sqrt(2)^(n-81)` overflows the largest representable double-precision floating-point value (`1.7976931348623157e+308`).

Possible consequences of the above include `NaN` values after overflows.
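The overflow threshold can be checked directly: `sqrt(2)^(n-81)` first exceeds the largest finite `double` at `n = 2129` (exponent `2048`, i.e. `2^1024`), and any subsequent multiplication of the resulting infinity with zero yields `NaN`. A small stand-alone check (not part of the package):

```cpp
// Stand-alone numeric check of the overflow claim (illustration only).
#include <cmath>
#include <cstdio>
#include <limits>

int main() {
    std::printf("largest double    = %.17g\n", std::numeric_limits<double>::max());
    std::printf("sqrt(2)^(2128-81) = %.17g\n", std::pow(std::sqrt(2.0), 2128 - 81)); // still finite
    std::printf("sqrt(2)^(2129-81) = %.17g\n", std::pow(std::sqrt(2.0), 2129 - 81)); // inf
    std::printf("inf * 0           = %.17g\n", std::pow(std::sqrt(2.0), 2129 - 81) * 0.0); // nan
}
```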
Expected behavior
Matrix normalization should be stable and not induce these kinds of errors.
This will most surely require the development of a new normalization scheme for matrix DDs (probably inspired by the one currently used for vectors), which, from past experience, might not be that easy to get right.
How to Reproduce
Run the following test
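The test itself is not included in this excerpt. As a stand-in (an assumption, not the original test), the following small generator prints the circuit described above, n Hadamards followed by the same Hadamards again, which should simplify to the identity with a top edge weight of exactly one when simulated with the DD package:

```cpp
// Stand-in generator (assumption, not the original test): emits the OpenQASM
// circuit from the description -- n Hadamards followed by their reversal.
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    const int n = argc > 1 ? std::atoi(argv[1]) : 128; // e.g. 128 qubits to trigger the issue
    std::printf("OPENQASM 2.0;\n");
    std::printf("include \"qelib1.inc\";\n");
    std::printf("qreg q[%d];\n", n);
    std::printf("h q;\n"); // first series of Hadamards
    std::printf("h q;\n"); // reversed series (H is self-inverse)
}
```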