diff --git a/doc/releases/changelog-0.39.0.md b/doc/releases/changelog-0.39.0.md
index 343976742be..489ae447183 100644
--- a/doc/releases/changelog-0.39.0.md
+++ b/doc/releases/changelog-0.39.0.md
@@ -11,7 +11,7 @@
   [(#6419)](https://github.com/PennyLaneAI/pennylane/pull/6419)

 Spin Hamiltonians 💞

- 
+
 * Function is added for generating the spin Hamiltonian for the [Kitaev](https://arxiv.org/abs/cond-mat/0506438) model on a lattice.
   [(#6174)](https://github.com/PennyLaneAI/pennylane/pull/6174)
@@ -42,7 +42,7 @@
 * `qml.matrix` now works with empty objects (such as empty tapes, `QNode`s and quantum functions
   that do not call operations, single operators with empty decompositions).
   [(#6347)](https://github.com/PennyLaneAI/pennylane/pull/6347)
- 
+
 * PennyLane is now compatible with NumPy 2.0.
   [(#6061)](https://github.com/PennyLaneAI/pennylane/pull/6061)
   [(#6258)](https://github.com/PennyLaneAI/pennylane/pull/6258)
@@ -58,8 +58,8 @@
   when possible, based on the `pauli_rep` of the relevant observables.
   [(#6113)](https://github.com/PennyLaneAI/pennylane/pull/6113/)

-* The `QuantumScript.copy` method now takes `operations`, `measurements`, `shots` and 
-  `trainable_params` as keyword arguments. If any of these are passed when copying a 
+* The `QuantumScript.copy` method now takes `operations`, `measurements`, `shots` and
+  `trainable_params` as keyword arguments. If any of these are passed when copying a
   tape, the specified attributes will replace the copied attributes on the new tape.
   [(#6285)](https://github.com/PennyLaneAI/pennylane/pull/6285)
   [(#6363)](https://github.com/PennyLaneAI/pennylane/pull/6363)
@@ -94,7 +94,7 @@

User-friendly decompositions 📠

-* `qml.transforms.decompose` is added for stepping through decompositions to a target gate set. 
+* `qml.transforms.decompose` is added for stepping through decompositions to a target gate set.
   [(#6334)](https://github.com/PennyLaneAI/pennylane/pull/6334)
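The idea behind the entry above, repeatedly expanding gates until only members of a target gate set remain, can be sketched with a toy recursion. All names and decomposition rules here are hypothetical illustrations, not the PennyLane implementation:

```python
def expand_to_gate_set(ops, gate_set, rules):
    """Recursively replace each op by its decomposition until only
    gates from ``gate_set`` remain (toy model with string-labelled gates)."""
    out = []
    for op in ops:
        if op in gate_set:
            out.append(op)
        else:
            out.extend(expand_to_gate_set(rules[op], gate_set, rules))
    return out


# Toy rules: each gate decomposes into a fixed list of sub-gates.
rules = {"Toffoli": ["H", "T", "CNOT"], "T": ["RZ"], "H": ["RZ", "RX", "RZ"]}
print(expand_to_gate_set(["Toffoli"], {"CNOT", "RX", "RZ"}, rules))
# -> ['RZ', 'RX', 'RZ', 'RZ', 'CNOT']
```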

Improvements 🛠

@@ -161,7 +161,7 @@
 * `FermiWord` and `FermiSentence` are now compatible with JAX arrays.
   [(#6324)](https://github.com/PennyLaneAI/pennylane/pull/6324)
- 
+

Quantum information measurements

 * Added `process_density_matrix` implementations to 5 `StateMeasurement` subclasses:
@@ -186,7 +186,7 @@
 * The quantum arithmetic templates are now QJIT compatible.
   [(#6307)](https://github.com/PennyLaneAI/pennylane/pull/6307)
- 
+
 * The `qml.Qubitization` template is now QJIT compatible.
   [(#6305)](https://github.com/PennyLaneAI/pennylane/pull/6305)

@@ -212,8 +212,11 @@
 * Module-level sandboxing added to `qml.labs` via pre-commit hooks.
   [(#6369)](https://github.com/PennyLaneAI/pennylane/pull/6369)

-* A new class `MomentumQNGOptimizer` is added. It inherits the basic `QNGOptimizer` class and requires one additional hyperparameter (the momentum coefficient) :math:`0 \leq \rho < 1`, the default value being :math:`\rho=0.9`. For :math:`\rho=0` Momentum-QNG reduces to the basic QNG.
+* A new class `MomentumQNGOptimizer` is added. It inherits the basic `QNGOptimizer` class and
+  requires one additional hyperparameter (the momentum coefficient) :math:`0 \leq \rho < 1`, the
+  default value being :math:`\rho=0.9`. For :math:`\rho=0` Momentum-QNG reduces to the basic QNG.
   [(#6240)](https://github.com/PennyLaneAI/pennylane/pull/6240)
+  [(#6471)](https://github.com/PennyLaneAI/pennylane/pull/6471)

 * A `has_sparse_matrix` property is added to `Operator` to indicate whether a sparse matrix is defined.
   [(#6278)](https://github.com/PennyLaneAI/pennylane/pull/6278)
@@ -222,7 +225,7 @@
 * `qml.matrix` now works with empty objects (such as empty tapes, `QNode`s and quantum functions
   that do not call operations, single operators with empty decompositions).
   [(#6347)](https://github.com/PennyLaneAI/pennylane/pull/6347)
- 
+
 * PennyLane is now compatible with NumPy 2.0.
   [(#6061)](https://github.com/PennyLaneAI/pennylane/pull/6061)
   [(#6258)](https://github.com/PennyLaneAI/pennylane/pull/6258)
@@ -238,8 +241,8 @@
   when possible, based on the `pauli_rep` of the relevant observables.
   [(#6113)](https://github.com/PennyLaneAI/pennylane/pull/6113/)

-* The `QuantumScript.copy` method now takes `operations`, `measurements`, `shots` and 
-  `trainable_params` as keyword arguments. If any of these are passed when copying a 
+* The `QuantumScript.copy` method now takes `operations`, `measurements`, `shots` and
+  `trainable_params` as keyword arguments. If any of these are passed when copying a
   tape, the specified attributes will replace the copied attributes on the new tape.
   [(#6285)](https://github.com/PennyLaneAI/pennylane/pull/6285)
   [(#6363)](https://github.com/PennyLaneAI/pennylane/pull/6363)
@@ -247,13 +250,13 @@
 * The `Hermitian` operator now has a `compute_sparse_matrix` implementation.
   [(#6225)](https://github.com/PennyLaneAI/pennylane/pull/6225)

-* When an observable is repeated on a tape, `tape.diagonalizing_gates` no longer returns the 
+* When an observable is repeated on a tape, `tape.diagonalizing_gates` no longer returns the
   diagonalizing gates for each instance of the observable. Instead, the diagonalizing gates
   of each observable on the tape are included just once.
   [(#6288)](https://github.com/PennyLaneAI/pennylane/pull/6288)

-* The number of diagonalizing gates returned in `qml.specs` now follows the `level` keyword argument 
-  regarding whether the diagonalizing gates are modified by device, instead of always counting 
+* The number of diagonalizing gates returned in `qml.specs` now follows the `level` keyword argument
+  regarding whether the diagonalizing gates are modified by device, instead of always counting
   unprocessed diagonalizing gates.
   [(#6290)](https://github.com/PennyLaneAI/pennylane/pull/6290)

@@ -265,7 +268,7 @@

Breaking changes 💔

-* `AllWires` validation in `QNode.construct` has been removed. 
+* `AllWires` validation in `QNode.construct` has been removed.
   [(#6373)](https://github.com/PennyLaneAI/pennylane/pull/6373)

 * The `simplify` argument in `qml.Hamiltonian` and `qml.ops.LinearCombination` has been removed.
@@ -397,16 +400,22 @@

Bug fixes 🐛

+* Fixes a bug where `QNSPSAOptimizer`, `QNGOptimizer` and `MomentumQNGOptimizer` calculate invalid
+  parameter updates if the metric tensor becomes singular.
+  [(#6471)](https://github.com/PennyLaneAI/pennylane/pull/6471)
+
 * The `default.qubit` device now supports parameter broadcasting with `qml.classical_shadow` and
   `qml.shadow_expval`.
   [(#6301)](https://github.com/PennyLaneAI/pennylane/pull/6301)

-* Fixes unnecessary call of `eigvals` in `qml.ops.op_math.decompositions.two_qubit_unitary.py` that was causing an error in VJP. Raises warnings to users if this essentially nondifferentiable module is used.
+* Fixes unnecessary call of `eigvals` in `qml.ops.op_math.decompositions.two_qubit_unitary.py` that
+  was causing an error in VJP. Raises warnings to users if this essentially nondifferentiable
+  module is used.
   [(#6437)](https://github.com/PennyLaneAI/pennylane/pull/6437)

 * Patches the `math` module to function with autoray 0.7.0.
   [(#6429)](https://github.com/PennyLaneAI/pennylane/pull/6429)

-* Fixes incorrect differentiation of `PrepSelPrep` when using `diff_method="parameter-shift"`. 
+* Fixes incorrect differentiation of `PrepSelPrep` when using `diff_method="parameter-shift"`.
   [(#6423)](https://github.com/PennyLaneAI/pennylane/pull/6423)

 * `default.tensor` can now handle mid circuit measurements via the deferred measurement principle.
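The Momentum-QNG update named in the changelog above can be sketched in plain NumPy as a standalone toy (not the PennyLane API): the velocity accumulates as a ← ρ·a + η·F⁺∇C and the parameters move by −a, where F⁺ is the pseudo-inverse of the metric tensor. With ρ = 0 this reduces to an ordinary QNG step, and the pseudo-inverse keeps the update finite when F is singular, mirroring the bug fix tracked in #6471:

```python
import numpy as np


def momentum_qng_step(params, grad, metric_tensor, velocity, stepsize=0.01, momentum=0.9):
    """One toy Momentum-QNG step: velocity <- momentum * velocity + stepsize * pinv(F) @ grad."""
    natural_grad = np.linalg.pinv(metric_tensor) @ grad  # finite even for singular F
    velocity = momentum * velocity + stepsize * natural_grad
    return params - velocity, velocity


params = np.array([1.0, 1.0])
velocity = np.zeros(2)
F = np.eye(2) / 4  # toy metric tensor of a single-qubit rotation layer
grad = np.array([0.5, 0.5])
params, velocity = momentum_qng_step(params, grad, F, velocity, stepsize=0.1, momentum=0.0)
# momentum=0.0 gives a plain QNG step: params - 0.1 * 4 * grad -> [0.8, 0.8]
```

Subsequent calls with a nonzero `momentum` reuse the returned `velocity`, which is what the real optimizer stores in its `accumulation` attribute.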
diff --git a/pennylane/optimize/momentum_qng.py b/pennylane/optimize/momentum_qng.py
index 922548f6931..0c928e4c1e7 100644
--- a/pennylane/optimize/momentum_qng.py
+++ b/pennylane/optimize/momentum_qng.py
@@ -102,7 +102,7 @@ class MomentumQNGOptimizer(QNGOptimizer):
     """

     def __init__(self, stepsize=0.01, momentum=0.9, approx="block-diag", lam=0):
-        super().__init__(stepsize)
+        super().__init__(stepsize, approx, lam)
         self.momentum = momentum
         self.accumulation = None

@@ -133,7 +133,7 @@ def apply_grad(self, grad, args):
             if getattr(arg, "requires_grad", False):
                 grad_flat = pnp.array(list(_flatten(grad[trained_index])))
                 # self.metric_tensor has already been reshaped to 2D, matching flat gradient.
-                qng_update = pnp.linalg.solve(metric_tensor[trained_index], grad_flat)
+                qng_update = pnp.linalg.pinv(metric_tensor[trained_index]) @ grad_flat

                 self.accumulation[trained_index] *= self.momentum
                 self.accumulation[trained_index] += self.stepsize * unflatten(
diff --git a/pennylane/optimize/qng.py b/pennylane/optimize/qng.py
index 08cb29157e8..a5c31e7218f 100644
--- a/pennylane/optimize/qng.py
+++ b/pennylane/optimize/qng.py
@@ -279,7 +279,7 @@ def apply_grad(self, grad, args):
             if getattr(arg, "requires_grad", False):
                 grad_flat = pnp.array(list(_flatten(grad[trained_index])))
                 # self.metric_tensor has already been reshaped to 2D, matching flat gradient.
-                update = pnp.linalg.solve(mt[trained_index], grad_flat)
+                update = pnp.linalg.pinv(mt[trained_index]) @ grad_flat

                 args_new[index] = arg - self.stepsize * unflatten(update, grad[trained_index])
                 trained_index += 1
diff --git a/pennylane/optimize/qnspsa.py b/pennylane/optimize/qnspsa.py
index 213544697dd..2caad34ded3 100644
--- a/pennylane/optimize/qnspsa.py
+++ b/pennylane/optimize/qnspsa.py
@@ -325,8 +325,8 @@ def _get_next_params(self, args, gradient):
         params_vec = pnp.concatenate([param.reshape(-1) for param in params])
         grad_vec = pnp.concatenate([grad.reshape(-1) for grad in gradient])

-        new_params_vec = pnp.linalg.solve(
-            self.metric_tensor,
+        new_params_vec = pnp.matmul(
+            pnp.linalg.pinv(self.metric_tensor),
             (-self.stepsize * grad_vec + pnp.matmul(self.metric_tensor, params_vec)),
         )
         # reshape single-vector new_params_vec into new_params, to match the input params
diff --git a/tests/optimize/test_momentum_qng.py b/tests/optimize/test_momentum_qng.py
index 75482f8ed31..e3153cadf3a 100644
--- a/tests/optimize/test_momentum_qng.py
+++ b/tests/optimize/test_momentum_qng.py
@@ -20,11 +20,35 @@
 from pennylane import numpy as np


+class TestBasics:
+    """Test basic properties of the MomentumQNGOptimizer."""
+
+    def test_initialization_default(self):
+        """Test that initializing MomentumQNGOptimizer with default values works."""
+        opt = qml.MomentumQNGOptimizer()
+        assert opt.stepsize == 0.01
+        assert opt.approx == "block-diag"
+        assert opt.lam == 0
+        assert opt.momentum == 0.9
+        assert opt.accumulation is None
+        assert opt.metric_tensor is None
+
+    def test_initialization_custom_values(self):
+        """Test that initializing MomentumQNGOptimizer with custom values works."""
+        opt = qml.MomentumQNGOptimizer(stepsize=0.05, momentum=0.8, approx="diag", lam=1e-9)
+        assert opt.stepsize == 0.05
+        assert opt.approx == "diag"
+        assert opt.lam == 1e-9
+        assert opt.momentum == 0.8
+        assert opt.accumulation is None
+        assert opt.metric_tensor is None
+
+
 class TestOptimize:
     """Test basic optimization integration"""

     @pytest.mark.parametrize("rho", [0.9, 0.0])
-    def test_step_and_cost_autograd(self, rho):
+    def test_step_and_cost(self, rho):
         """Test that the correct cost and step is returned after 8 optimization steps
         via the step_and_cost method for the MomentumQNG optimizer"""
         dev = qml.device("default.qubit", wires=1)
@@ -126,7 +150,7 @@ def circuit(params):
         stepsize = 0.05
         momentum = 0.7

-        # Create two optimizers so that the opt.accumulation state does not
+        # Create multiple optimizers so that the opt.accumulation state does not
         # interact between tests for step_and_cost and for step.
         opt1 = qml.MomentumQNGOptimizer(stepsize=stepsize, momentum=momentum)
         opt2 = qml.MomentumQNGOptimizer(stepsize=stepsize, momentum=momentum)
@@ -328,7 +352,6 @@ def gradient(params):
         grad = gradient(theta)
         dtheta *= rho
         dtheta += tuple(eta * g / e[0, 0] for e, g in zip(exp, grad))
-        print(circuit(*theta))

         assert np.allclose(dtheta, theta - theta_new)

         # check final cost
diff --git a/tests/optimize/test_qng.py b/tests/optimize/test_qng.py
index 6ab7838528a..3347fdca290 100644
--- a/tests/optimize/test_qng.py
+++ b/tests/optimize/test_qng.py
@@ -20,6 +20,90 @@
 from pennylane import numpy as np


+class TestBasics:
+    """Test basic properties of the QNGOptimizer."""
+
+    def test_initialization_default(self):
+        """Test that initializing QNGOptimizer with default values works."""
+        opt = qml.QNGOptimizer()
+        assert opt.stepsize == 0.01
+        assert opt.approx == "block-diag"
+        assert opt.lam == 0
+        assert opt.metric_tensor is None
+
+    def test_initialization_custom_values(self):
+        """Test that initializing QNGOptimizer with custom values works."""
+        opt = qml.QNGOptimizer(stepsize=0.05, approx="diag", lam=1e-9)
+        assert opt.stepsize == 0.05
+        assert opt.approx == "diag"
+        assert opt.lam == 1e-9
+        assert opt.metric_tensor is None
+
+
+class TestAttrsAffectingMetricTensor:
+    """Test that the attributes `approx` and `lam`, which affect the metric tensor
+    and its inversion, are used correctly."""
+
+    def test_no_approx(self):
+        """Test that the full metric tensor is used correctly for ``approx=None``."""
+        dev = qml.device("default.qubit")
+
+        @qml.qnode(dev)
+        def circuit(params):
+            qml.RY(eta, wires=0)
+            qml.RX(params[0], wires=0)
+            qml.RY(params[1], wires=0)
+            return qml.expval(qml.PauliZ(0))
+
+        opt = qml.QNGOptimizer(approx=None)
+        eta = 0.7
+        params = np.array([0.11, 0.412])
+        new_params_no_approx = opt.step(circuit, params)
+        opt_with_approx = qml.QNGOptimizer()
+        new_params_block_approx = opt_with_approx.step(circuit, params)
+        # Expected result, requires some manual calculation, compare analytic test cases page
+        x = params[0]
+        first_term = np.eye(2) / 4
+        vec_potential = np.array([-0.5j * np.sin(eta), 0.5j * np.sin(x) * np.cos(eta)])
+        second_term = np.real(np.outer(vec_potential.conj(), vec_potential))
+        exp_mt = first_term - second_term
+
+        assert np.allclose(opt.metric_tensor, exp_mt)
+        assert np.allclose(opt_with_approx.metric_tensor, np.diag(np.diag(exp_mt)))
+        assert not np.allclose(new_params_no_approx, new_params_block_approx)
+
+    def test_lam(self):
+        """Test that the regularization ``lam`` is used correctly."""
+        dev = qml.device("default.qubit")
+
+        @qml.qnode(dev)
+        def circuit(params):
+            qml.RY(eta, wires=0)
+            qml.RX(params[0], wires=0)
+            qml.RY(params[1], wires=0)
+            return qml.expval(qml.PauliZ(0))
+
+        lam = 1e-9
+        opt = qml.QNGOptimizer(lam=lam, stepsize=1.0)
+        eta = np.pi
+        params = np.array([np.pi / 2, 0.412])
+        new_params_with_lam = opt.step(circuit, params)
+        opt_without_lam = qml.QNGOptimizer(stepsize=1.0)
+        new_params_without_lam = opt_without_lam.step(circuit, params)
+        # Expected result, requires some manual calculation, compare analytic test cases page
+        x, y = params
+        first_term = np.eye(2) / 4
+        vec_potential = np.array([-0.5j * np.sin(eta), 0.5j * np.sin(x) * np.cos(eta)])
+        second_term = np.real(np.outer(vec_potential.conj(), vec_potential))
+        exp_mt = first_term - second_term
+
+        assert np.allclose(opt.metric_tensor, exp_mt + np.eye(2) * lam)
+        assert np.allclose(opt_without_lam.metric_tensor, np.diag(np.diag(exp_mt)))
+        # With regularization, y can be updated. Without regularization it can not.
+        assert np.isclose(new_params_without_lam[1], y)
+        assert not np.isclose(new_params_with_lam[1], y, atol=1e-11, rtol=0.0)
+
+
 class TestExceptions:
     """Test exceptions are raised for incorrect usage"""
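The behaviour these tests pin down can be seen directly in NumPy with toy numbers (this is not PennyLane code): an exact `solve` fails on a singular metric tensor, the pseudo-inverse used by the fixed optimizers simply leaves the singular direction unchanged, and Tikhonov-style regularization via `lam` makes the system solvable at the cost of a very large step along the nearly flat direction:

```python
import numpy as np

# Singular toy metric tensor: no quantum-geometric weight on the second parameter.
F = np.diag([0.25, 0.0])
grad = np.array([0.1, 0.2])

# np.linalg.solve rejects the exactly singular tensor.
try:
    np.linalg.solve(F, grad)
    raised = False
except np.linalg.LinAlgError:
    raised = True

# pinv returns the minimum-norm least-squares solution:
# the flat direction receives no update at all.
update = np.linalg.pinv(F) @ grad  # -> [0.4, 0.0]

# Regularizing as F + lam * I (what the `lam` attribute does) makes F invertible,
# so the second parameter does move, by roughly grad[1] / lam.
lam = 1e-9
update_lam = np.linalg.solve(F + lam * np.eye(2), grad)
```

This is why the tests above check both that `pinv`-based updates stay finite on singular tensors and that `lam` unfreezes directions the bare metric tensor cannot update.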