
upgrade qml.qsvt #6520

Open · wants to merge 7 commits into base branch angle_solver_qsp

Conversation

@KetpuntoG (Contributor) commented Nov 5, 2024

This PR upgrades qml.qsvt based on this PRD. The rendered documentation can be checked [here].

The main changes are:

  • The old qml.qsvt only worked with qml.BlockEncode. With the new qml.qsvt, the user can choose the specific block encoding.
  • The new function also accepts Hamiltonians as input, not just matrices.
  • In the old qml.qsvt, the user had to pass the appropriate angles to apply a polynomial. Now the user passes the polynomial directly.

These changes aim to make qml.qsvt the friendly (but less customizable) way to use QSVT.

[sc-72651] [sc-72650]

⚠️ The name of the function may change in the near future, but we can start reviewing with the current name.

@KetpuntoG KetpuntoG added the do not merge ⚠️ Do not merge the pull request until this label is removed label Nov 5, 2024

github-actions bot commented Nov 5, 2024

Hello. You may have forgotten to update the changelog!
Please edit doc/releases/changelog-dev.md with:

  • A one-to-two sentence description of the change. You may include a small working example for new features.
  • A link back to this PR.
  • Your name (or GitHub username) in the contributors section.

@Jaybsoni (Contributor) left a comment


Looks good, left some suggestions. Let me know once they are addressed and I am happy to review again!

Also missing change-log entry 😉

This circuit applies a polynomial transformation (:math:`Poly^{SV}`) to the singular values of
the block encoded matrix:
- ``"prepselprep"``: Embeds the Hamiltonian ``A`` using :class:`~pennylane.PrepSelPrep`. Default encoding for Hamiltonians.
- ``"qubitization"``: Embeds the Hamiltonian ``A`` using :class:`~pennylane.Qubitization`.
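The quoted docstring says the circuit applies :math:`Poly^{SV}` to the singular values of the block-encoded matrix. As a classical sanity check of that semantics (a plain NumPy sketch of the math only, not PennyLane's quantum implementation; `poly_singular_values` is a hypothetical helper), the transformation amounts to:

```python
import numpy as np

def poly_singular_values(A, coeffs):
    """Apply a polynomial to the singular values of A.

    coeffs are given in increasing order of degree (constant term first),
    matching the docstring convention: result = U @ poly(S) @ Vh.
    """
    U, S, Vh = np.linalg.svd(A)
    poly_S = sum(c * S**k for k, c in enumerate(coeffs))
    return U @ np.diag(poly_S) @ Vh

# Odd polynomial p(x) = x - 0.5 x^3 applied to singular values 0.3 and 0.5
A = np.diag([0.3, 0.5])
out = poly_singular_values(A, [0.0, 1.0, 0.0, -0.5])
# p(0.3) = 0.2865 and p(0.5) = 0.4375, so out ≈ diag(0.2865, 0.4375)
```

The quantum circuit produces this transformation on the encoded block without ever forming the SVD classically; the sketch is only the reference semantics.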
Contributor:

Isn't this just prepselprep again? What value does adding the reflection have for block encoding...?

Contributor Author:

It is a fair point. In terms of efficiency, it always makes more sense to use prepselprep. The reasons I include it are:

  • to emphasize the idea that qubitization is a block encoding (which is not widely appreciated).

  • to enable all the block encodings available in PennyLane, so that users have the power to do what they want. For example, one might want to verify that QSVT gives the same results with one method as with another.

If you think it is still better to remove it, we can discuss it :)

Comment on lines +60 to +61
poly (tensor_like): Polynomial coefficients defining the transformation, represented in increasing order of degree.
This means the first coefficient corresponds to the constant term, the second to the linear term, and so on.
Contributor:

What happens if poly isn't a fixed parity?

Contributor Author:

An error will be raised in poly_to_angles.
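As an illustration of the kind of parity validation being described (`check_poly_parity` is a hypothetical helper sketched here, not PennyLane's actual poly_to_angles code):

```python
def check_poly_parity(coeffs, tol=1e-12):
    """Return 'even' or 'odd' if the polynomial has definite parity.

    coeffs are given in increasing order of degree, so even-degree terms
    sit at even indices and odd-degree terms at odd indices.
    Raises ValueError for mixed parity, mirroring the behavior described.
    """
    has_even = any(abs(c) > tol for c in coeffs[0::2])
    has_odd = any(abs(c) > tol for c in coeffs[1::2])
    if has_even and has_odd:
        raise ValueError(
            "The polynomial must have definite parity "
            "(all nonzero terms of even degree, or all of odd degree)."
        )
    return "odd" if has_odd else "even"

check_poly_parity([0.0, 1.0, 0.0, -0.5])  # x - 0.5 x^3 -> 'odd'
check_poly_parity([0.1, 0.0, 0.3])        # 0.1 + 0.3 x^2 -> 'even'
```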

Comment on lines +193 to +195
if block_encoding not in ["embedding", "fable", None]:
    raise ValueError("block_encoding should take the value 'embedding' or 'fable'")

Contributor:

Can't we use Pauli decompose on the matrix and then use the PrepSelPrep or Qubitization methods?

Contributor Author:

Answered in a previous comment:

"""
By definition, one is a method for block-encoding arrays and the other for block-encoding Hamiltonians. I think that, to avoid confusion, if users want to mix things up, they can do it outside the function.
"""

Another problem that comes to mind is that we cannot predict the number of auxiliary qubits we will need, since we do not know in advance how many Pauli terms a given matrix will produce.

if global_phase:
    with qml.QueuingManager.stop_recording():
        global_phase_op = qml.GlobalPhase(-0.5 * np.pi * (4 - global_phase), wires=wires)
if block_encoding not in ["prepselprep", "qubitization", None]:
Contributor:

Why can't we use the other two methods here?

Contributor Author:

By definition, one is a method for block-encoding arrays and the other for block-encoding Hamiltonians. I think that, to avoid confusion, if users want to mix things up, they can do it outside the function:

A = qml.matrix(hamiltonian)

or

hamiltonian = qml.pauli_decompose(A)
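The conversion in the qml.pauli_decompose direction can be sketched by hand for the single-qubit case (a plain NumPy illustration of the trace-inner-product formula, not PennyLane's implementation; `pauli_decompose_1q` is a hypothetical helper):

```python
import numpy as np

# Single-qubit Pauli basis
PAULIS = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_decompose_1q(A):
    """Coefficients c_P with A = sum_P c_P * P, via c_P = Tr(P @ A) / 2.

    For a Hermitian A the coefficients are real, so we keep the real part.
    """
    return {name: np.trace(P @ A).real / 2 for name, P in PAULIS.items()}

A = np.array([[1.0, 0.5], [0.5, -1.0]])
coeffs = pauli_decompose_1q(A)
# A = 0.5 * X + 1.0 * Z, so coeffs ≈ {'I': 0.0, 'X': 0.5, 'Y': 0.0, 'Z': 1.0}
```

This also makes the auxiliary-qubit concern above concrete: the number of nonzero coefficients, and hence the cost of a prepselprep-style encoding, depends on the matrix and is not known until the decomposition is done.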

def qsvt(A, angles, wires, convention=None):
r"""Implements the
`quantum singular value transformation <https://arxiv.org/abs/1806.01838>`__ (QSVT) circuit.
def qsvt(A, poly, encoding_wires, block_encoding=None):
Contributor:

Might be outside the scope, but you could probably make A also accept a PauliSentence directly.

Contributor Author:

It could be an option :)
In that case it would be enough to update qml.PrepSelPrep to make qml.Qubitization and qsvt work. Let's leave it out of this PR

pennylane/templates/subroutines/qsvt.py (outdated; resolved)
Comment on lines +213 to +214
s = int(np.ceil(np.log2(max(len(A), len(A[0])))))
encoding = qml.FABLE(2**s * A, wires=encoding_wires)
Contributor:

It's not clear what this code is doing, even with the comment. Maybe rename the variable s and expand the comment?

Contributor Author:

I extracted this from the last cell in:

https://docs.pennylane.ai/en/stable/code/api/pennylane.FABLE.html

Basically, FABLE does a block encoding of A/2^s, which is not very intuitive, since what we want is a block encoding of A. Therefore, I am applying FABLE(2^s * A), so that the final result is a block encoding of A.
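The scale factor in the snippet under discussion can be shown in isolation (a stdlib sketch of the arithmetic only; `fable_scale` is a hypothetical helper, not part of PennyLane):

```python
import math

def fable_scale(rows, cols):
    """Compute s = ceil(log2(max dimension)) and the factor 2**s.

    This mirrors the line `s = int(np.ceil(np.log2(max(len(A), len(A[0])))))`:
    FABLE block-encodes A / 2**s, so pre-multiplying the input by 2**s
    makes the encoded block equal to A itself.
    """
    s = math.ceil(math.log2(max(rows, cols)))
    return s, 2**s

s, factor = fable_scale(3, 5)  # a 3x5 matrix pads to 8x8: s = 3, factor = 8
```

A more descriptive name for s along these lines (e.g. `num_system_wires`) might address the readability concern raised above.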

Three further comments on pennylane/templates/subroutines/qsvt.py were resolved (two outdated).
@KetpuntoG (Contributor Author) commented:
Thanks for the review @Jaybsoni 🙌
I answered your questions and included your suggestions.
I will add the changelog at the end, once it is clear when we are going to merge, to avoid early conflicts.

Labels
do not merge ⚠️ Do not merge the pull request until this label is removed
2 participants