This repository has been archived by the owner on Feb 5, 2024. It is now read-only.

Rename qubitstatevector to stateprep #134

Merged
31 commits merged on Aug 23, 2023
Changes from all commits
31 commits
0dc6f13
rename qubitstatevector to stateprep
multiphaseCFD Aug 21, 2023
707c17d
default.qubit as bench
multiphaseCFD Aug 21, 2023
fd9e173
Trigger multi-GPU CI
multiphaseCFD Aug 21, 2023
3be2704
add changelog
multiphaseCFD Aug 21, 2023
db8690d
update pybind to 2.11.1 (test)
multiphaseCFD Aug 21, 2023
35d1da2
Fix typo in changelog.
vincentmr Aug 21, 2023
1a80268
Revert pybind11 version to match lightning. Fix couple imports. Add f…
vincentmr Aug 21, 2023
5a6535d
Deprecate python 3.8.
vincentmr Aug 21, 2023
be7787c
Build pennylane-lightning@update/nested_cmake_paths
vincentmr Aug 21, 2023
7a31b01
Build against update/nested_cmake_paths
vincentmr Aug 21, 2023
edd07cb
Revert to building against PLL@master
vincentmr Aug 21, 2023
294c477
WIP
vincentmr Aug 21, 2023
97e0b41
Change template input of templated OpsData.
vincentmr Aug 21, 2023
4343bc8
replace StateVectorRawCPU.hpp StateVectorLQubitRaw.hpp
vincentmr Aug 21, 2023
06625f5
Increase CUDA ver to 20.
vincentmr Aug 22, 2023
98e7a7c
Fix cpp tests headers and linking.
vincentmr Aug 22, 2023
47149b3
Update pennylane ver in docs.
vincentmr Aug 22, 2023
7f1204b
Update cuda ver 12.
vincentmr Aug 22, 2023
3de364e
Install devtoolset-11
vincentmr Aug 22, 2023
bef7a1d
Uninstall devtoolset-10
vincentmr Aug 22, 2023
f421cf1
Build against PLL bugfix/numbers.
vincentmr Aug 22, 2023
f7a9d6f
Build against pennylane-lightning.git@bugfix/numbers
vincentmr Aug 22, 2023
e4116ac
pull pennylane-lightning.git@bugfix/numbers
vincentmr Aug 22, 2023
cf09f02
Reintroduce QubitStateVector.
vincentmr Aug 22, 2023
da75e05
Reformat & fix pytest.raise.
vincentmr Aug 22, 2023
a2f38ac
Remove python 3.8 refs.
vincentmr Aug 22, 2023
03aa05d
Change Lightning build branch from bugfix/numbers to master.
vincentmr Aug 22, 2023
9ae8e7e
Update CHANGELOG.md
vincentmr Aug 22, 2023
3c39d88
Update devices.rst
vincentmr Aug 22, 2023
8abdced
Fix custatevec ver in reqs.
vincentmr Aug 22, 2023
dca175b
Parametrize couple parallel tests over stateprep.
vincentmr Aug 22, 2023
11 changes: 10 additions & 1 deletion .github/CHANGELOG.md
@@ -7,6 +7,15 @@

### Breaking changes

* Rename `QubitStateVector` to `StatePrep` in the `LightningGPU` class.
  [(#134)](https://github.com/PennyLaneAI/pennylane-lightning-gpu/pull/134)

* Deprecate Python 3.8.
  [(#134)](https://github.com/PennyLaneAI/pennylane-lightning-gpu/pull/134)

* Update PennyLane-Lightning imports following the [(#472)](https://github.com/PennyLaneAI/pennylane-lightning/pull/472) refactoring.
  [(#134)](https://github.com/PennyLaneAI/pennylane-lightning-gpu/pull/134)

### Improvements

* Optimizes the single qubit rotation gate by using a single cuStateVector API call instead of separate Pauli gate applications.
@@ -23,7 +32,7 @@

This release contains contributions from (in alphabetical order):

David Clark (NVIDIA), Shuli Shu
David Clark (NVIDIA), Vincent Michaud-Rioux, Shuli Shu

---

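For reference, a minimal sketch of what the rename above means on the user side (illustrative only, not part of the diff; assumes a working `lightning.gpu` installation and PennyLane 0.32+, where the old name is still accepted alongside the new one):

```python
import numpy as np
import pennylane as qml

dev = qml.device("lightning.gpu", wires=1)

@qml.qnode(dev)
def circuit():
    # previously written as qml.QubitStateVector(...)
    qml.StatePrep(np.array([1.0, -1.0]) / np.sqrt(2), wires=0)
    return qml.expval(qml.PauliZ(0))
```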
6 changes: 3 additions & 3 deletions .github/workflows/build_wheel_manylinux2014.yml
@@ -1,7 +1,7 @@
name: Wheel::Linux::x86_64

# **What it does**: Builds python wheels for Linux (ubuntu-latest) architecture x86_64 and store it as artifacts.
# Python versions: 3.8, 3.9, 3.10, 3.11.
# Python versions: 3.9, 3.10, 3.11.
# **Why we have it**: To build wheels for pennylane-lightning-gpu installation.
# **Who does it impact**: Wheels to be uploaded to PyPI.

@@ -27,7 +27,7 @@ jobs:
os: [ubuntu-latest]
arch: [x86_64]
cibw_build: ${{ fromJson(needs.set_wheel_build_matrix.outputs.python_version) }}
name: ${{ matrix.os }} (Python ${{ fromJson('{"cp38-*":"3.8","cp39-*":"3.9","cp310-*":"3.10","cp311-*":"3.11" }')[matrix.cibw_build] }})
name: ${{ matrix.os }} (Python ${{ fromJson('{ "cp39-*":"3.9","cp310-*":"3.10","cp311-*":"3.11" }')[matrix.cibw_build] }})
runs-on: ${{ matrix.os }}

steps:
@@ -42,7 +42,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.8'
python-version: '3.9'

- name: Install cibuildwheel
run: python -m pip install cibuildwheel~=2.11.0
2 changes: 1 addition & 1 deletion .github/workflows/format.yml
@@ -17,7 +17,7 @@ jobs:
- uses: actions/setup-python@v4
name: Install Python
with:
python-version: '3.8'
python-version: '3.9'

- name: Install dependencies
run:
6 changes: 3 additions & 3 deletions .github/workflows/tests_linux_x86.yml
@@ -69,7 +69,7 @@ jobs:
id: setup_python
name: Install Python
with:
python-version: '3.8'
python-version: '3.9'

# Since the self-hosted runner can be re-used. It is best to set up all package
# installations in a virtual environment that gets cleaned at the end of each workflow run
@@ -215,7 +215,7 @@ jobs:
id: setup_python
name: Install Python
with:
python-version: '3.8'
python-version: '3.9'

# Since the self-hosted runner can be re-used. It is best to set up all package
# installations in a virtual environment that gets cleaned at the end of each workflow run
@@ -259,7 +259,7 @@
python -m pip install pip~=22.0
python -m pip install ninja cmake custatevec-cu11 pytest pytest-mock flaky pytest-cov
# Sync with latest master branches
python -m pip install --index-url https://test.pypi.org/simple/ pennylane-lightning --pre --force-reinstall --no-deps
python -m pip install git+https://github.com/PennyLaneAI/pennylane-lightning.git@master --force-reinstall --no-deps
- name: Build and install package
env:
6 changes: 3 additions & 3 deletions .github/workflows/tests_linux_x86_mpich.yml
@@ -55,7 +55,7 @@ jobs:
id: setup_python
name: Install Python
with:
python-version: '3.8'
python-version: '3.9'

# Since the self-hosted runner can be re-used. It is best to set up all package
# installations in a virtual environment that gets cleaned at the end of each workflow run
@@ -225,7 +225,7 @@ jobs:
id: setup_python
name: Install Python
with:
python-version: '3.8'
python-version: '3.9'

# Since the self-hosted runner can be re-used. It is best to set up all package
# installations in a virtual environment that gets cleaned at the end of each workflow run
@@ -275,7 +275,7 @@
python -m pip install pip~=22.0
python -m pip install ninja cmake custatevec-cu11 pytest pytest-mock flaky pytest-cov mpi4py
# Sync with latest master branches
python -m pip install --index-url https://test.pypi.org/simple/ pennylane-lightning --pre --force-reinstall --no-deps
python -m pip install git+https://github.com/PennyLaneAI/pennylane-lightning.git@master --force-reinstall --no-deps
- name: Build and install package (MPICH backend)
env:
6 changes: 3 additions & 3 deletions .github/workflows/tests_linux_x86_openmpi.yml
@@ -38,7 +38,7 @@ jobs:
id: setup_python
name: Install Python
with:
python-version: '3.8'
python-version: '3.9'

# Since the self-hosted runner can be re-used. It is best to set up all package
# installations in a virtual environment that gets cleaned at the end of each workflow run
@@ -197,7 +197,7 @@ jobs:
id: setup_python
name: Install Python
with:
python-version: '3.8'
python-version: '3.9'

# Since the self-hosted runner can be re-used. It is best to set up all package
# installations in a virtual environment that gets cleaned at the end of each workflow run
@@ -247,7 +247,7 @@
python -m pip install pip~=22.0
python -m pip install ninja cmake custatevec-cu11 pytest pytest-mock flaky pytest-cov mpi4py
# Sync with latest master branches
python -m pip install --index-url https://test.pypi.org/simple/ pennylane-lightning --pre --force-reinstall --no-deps
python -m pip install git+https://github.com/PennyLaneAI/pennylane-lightning.git@master --force-reinstall --no-deps
- name: Build and install package (OpenMPI backend)
env:
14 changes: 7 additions & 7 deletions CMakeLists.txt
@@ -109,7 +109,7 @@ set(ENABLE_KOKKOS OFF)
FetchContent_Declare(
pennylane_lightning
GIT_REPOSITORY https://github.com/PennyLaneAI/pennylane-lightning.git
GIT_TAG "${LIGHTNING_RELEASE_TAG}"
GIT_TAG master
)
FetchContent_MakeAvailable(pennylane_lightning)

@@ -200,12 +200,12 @@ endif()

# Create binding module
if(PLLGPU_ENABLE_PYTHON)
pybind11_add_module(lightning_gpu_qubit_ops "pennylane_lightning_gpu/src/bindings/Bindings.cpp")
target_link_libraries(lightning_gpu_qubit_ops PRIVATE pennylane_lightning_gpu)
set_target_properties(lightning_gpu_qubit_ops PROPERTIES CXX_VISIBILITY_PRESET hidden)
set_target_properties(lightning_gpu_qubit_ops PROPERTIES INSTALL_RPATH "$ORIGIN/../cuquantum/lib:$ORIGIN/../cuquantum/lib64:$ORIGIN/")
target_compile_options(lightning_gpu_qubit_ops PRIVATE "$<$<CONFIG:RELEASE>:-W>")
target_compile_definitions(lightning_gpu_qubit_ops PRIVATE VERSION_INFO=${VERSION_STRING})
pybind11_add_module(lightning_gpu_qubit_ops "pennylane_lightning_gpu/src/bindings/Bindings.cpp")
target_link_libraries(lightning_gpu_qubit_ops PRIVATE lightning_algorithms lightning_qubit lightning_qubit_algorithms pennylane_lightning_gpu)
set_target_properties(lightning_gpu_qubit_ops PROPERTIES CXX_VISIBILITY_PRESET hidden)
set_target_properties(lightning_gpu_qubit_ops PROPERTIES INSTALL_RPATH "$ORIGIN/../cuquantum/lib:$ORIGIN/../cuquantum/lib64:$ORIGIN/")
target_compile_options(lightning_gpu_qubit_ops PRIVATE "$<$<CONFIG:RELEASE>:-W>")
target_compile_definitions(lightning_gpu_qubit_ops PRIVATE VERSION_INFO=${VERSION_STRING})
if(PLLGPU_ENABLE_MPI)
option(ENABLE_MPI "Enable MPI support" ON)
target_compile_definitions(lightning_gpu_qubit_ops PRIVATE ENABLE_MPI)
8 changes: 4 additions & 4 deletions Makefile
@@ -30,7 +30,7 @@ ifndef CUQUANTUM_SDK
@test $(CUQUANTUM_SDK)
endif
ifndef PYTHON3
@echo "To install PennyLane-Lightning-GPU you must have Python 3.8+ installed."
@echo "To install PennyLane-Lightning-GPU you must have Python 3.9+ installed."
endif
$(PYTHON) setup.py build_ext --cuquantum=$(CUQUANTUM_SDK) --verbose
$(PYTHON) setup.py install
@@ -85,13 +85,13 @@ ifndef CUQUANTUM_SDK
@test $(CUQUANTUM_SDK)
endif
rm -rf ./BuildTests
cmake . -BBuildTests -DBUILD_TESTS=1 -DPLLGPU_BUILD_TESTS=1 -DCUQUANTUM_SDK=$(CUQUANTUM_SDK)
cmake . -BBuildTests -G Ninja -DBUILD_TESTS=1 -DPLLGPU_BUILD_TESTS=1 -DCUQUANTUM_SDK=$(CUQUANTUM_SDK)
cmake --build ./BuildTests
./BuildTests/pennylane_lightning_gpu/src/tests/runner_gpu

test-cpp-mpi:
rm -rf ./BuildTests
cmake . -BBuildTests -DBUILD_TESTS=1 -DPLLGPU_BUILD_TESTS=1 -DPLLGPU_ENABLE_MPI=On -DCUQUANTUM_SDK=$(CUQUANTUM_SDK)
cmake . -BBuildTests -G Ninja -DBUILD_TESTS=1 -DPLLGPU_BUILD_TESTS=1 -DPLLGPU_ENABLE_MPI=On -DCUQUANTUM_SDK=$(CUQUANTUM_SDK)
cmake --build ./BuildTests
$(MPILAUNCHER) -np $(NUMPROCS) ./BuildTests/pennylane_lightning_gpu/src/tests/mpi_runner

@@ -121,5 +121,5 @@ endif
.PHONY: check-tidy
check-tidy:
rm -rf ./Build
cmake . -BBuild -DENABLE_CLANG_TIDY=ON -DBUILD_TESTS=1
cmake . -BBuild -G Ninja -DENABLE_CLANG_TIDY=ON -DBUILD_TESTS=1
cmake --build ./Build
4 changes: 2 additions & 2 deletions README.rst
@@ -37,7 +37,7 @@ Features
Installation
============

PennyLane-Lightning-GPU requires Python version 3.8 and above. It can be installed using ``pip``:
PennyLane-Lightning-GPU requires Python version 3.9 and above. It can be installed using ``pip``:

.. code-block:: console
@@ -82,7 +82,7 @@ To build using Docker, run the following from the project root directory:
docker build . -f ./docker/Dockerfile -t "lightning-gpu-wheels"
This will build a Python wheel for Python 3.8 up to 3.11 inclusive, and be manylinux2014 (glibc 2.17) compatible.
This will build a Python wheel for Python 3.9 up to 3.11 inclusive, and be manylinux2014 (glibc 2.17) compatible.
To acquire the built wheels, use:

.. code-block:: console
3 changes: 2 additions & 1 deletion doc/devices.rst
@@ -47,6 +47,7 @@ Supported operations and observables
~pennylane.PhaseShift
~pennylane.ControlledPhaseShift
~pennylane.QubitStateVector
~pennylane.StatePrep
~pennylane.Rot
~pennylane.RX
~pennylane.RY
@@ -241,4 +242,4 @@ To enable the memory-optimized adjoint method with MPI support, ``batch_obs`` sh
dev = qml.device('lightning.gpu', wires= n_wires, mpi=True, batch_obs=True)
For the adjoint method, each MPI process will provide the overall simulation results.
For the adjoint method, each MPI process will provide the overall simulation results.
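As a rough end-to-end sketch combining the device snippet above with the newly listed `~pennylane.StatePrep` operation (hypothetical wire count and parameters; requires an MPI-enabled `lightning.gpu` build and a launcher such as `mpirun`):

```python
from mpi4py import MPI  # initialize MPI before creating the device
import pennylane as qml
from pennylane import numpy as np

n_wires = 4
dev = qml.device("lightning.gpu", wires=n_wires, mpi=True, batch_obs=True)

@qml.qnode(dev, diff_method="adjoint")
def circuit(weights):
    # prepare a uniform superposition (non-trainable), then apply parametrized rotations
    qml.StatePrep(np.ones(2**n_wires, requires_grad=False) / 2 ** (n_wires / 2), wires=range(n_wires))
    qml.RX(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    return qml.expval(qml.PauliZ(0))

grad = qml.grad(circuit)(np.array([0.1, 0.2], requires_grad=True))
```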
2 changes: 1 addition & 1 deletion doc/requirements.txt
@@ -6,5 +6,5 @@ exhale
pybind11
cmake
ninja
pennylane
git+https://github.com/PennyLaneAI/pennylane.git@master
pennylane-sphinx-theme
12 changes: 7 additions & 5 deletions mpitests/test_adjoint_jacobian.py
@@ -174,7 +174,8 @@ def test_unsupported_hermitian_expectation(self, isBatch_obs):
@pytest.mark.parametrize("theta", np.linspace(-2 * np.pi, 2 * np.pi, 7))
@pytest.mark.parametrize("G", [qml.RX, qml.RY, qml.RZ])
@pytest.mark.parametrize("isBatch_obs", [False, True])
def test_pauli_rotation_gradient(self, G, theta, tol, isBatch_obs, request):
@pytest.mark.parametrize("stateprep", [qml.QubitStateVector, qml.StatePrep])
def test_pauli_rotation_gradient(self, stateprep, G, theta, tol, isBatch_obs, request):
"""Tests that the automatic gradients of Pauli rotations are correct."""

num_wires = 3
@@ -188,7 +189,7 @@ def test_pauli_rotation_gradient(self, G, theta, tol, isBatch_obs, request):
dev_cpu = qml.device("default.qubit", wires=3)

with qml.tape.QuantumTape() as tape:
qml.QubitStateVector(np.array([1.0, -1.0]) / np.sqrt(2), wires=0)
stateprep(np.array([1.0, -1.0]) / np.sqrt(2), wires=0)
G(theta, wires=[0])
qml.expval(qml.PauliZ(0))

@@ -202,7 +203,8 @@ def test_Rot_gradient(self, theta, tol, isBatch_obs, request):
@pytest.fixture(params=[np.complex64, np.complex128])
@pytest.mark.parametrize("theta", np.linspace(-2 * np.pi, 2 * np.pi, 7))
@pytest.mark.parametrize("isBatch_obs", [False, True])
def test_Rot_gradient(self, theta, tol, isBatch_obs, request):
@pytest.mark.parametrize("stateprep", [qml.QubitStateVector, qml.StatePrep])
def test_Rot_gradient(self, stateprep, theta, tol, isBatch_obs, request):
"""Tests that the device gradient of an arbitrary Euler-angle-parameterized gate is
correct."""
num_wires = 3
@@ -218,7 +220,7 @@ def test_Rot_gradient(self, stateprep, theta, tol, isBatch_obs, request):
params = np.array([theta, theta**3, np.sqrt(2) * theta])

with qml.tape.QuantumTape() as tape:
qml.QubitStateVector(np.array([1.0, -1.0]) / np.sqrt(2), wires=0)
stateprep(np.array([1.0, -1.0]) / np.sqrt(2), wires=0)
qml.Rot(*params, wires=[0])
qml.expval(qml.PauliZ(0))

@@ -760,7 +762,7 @@ def circuit_2(params, wires):

def circuit_ansatz(params, wires):
"""Circuit ansatz containing all the parametrized gates"""
qml.QubitStateVector(unitary_group.rvs(2**6, random_state=0)[0], wires=wires)
qml.StatePrep(unitary_group.rvs(2**6, random_state=0)[0], wires=wires)
qml.RX(params[0], wires=wires[0])
qml.RY(params[1], wires=wires[1])
qml.adjoint(qml.RX(params[2], wires=wires[2]))
18 changes: 9 additions & 9 deletions mpitests/test_apply.py
@@ -72,7 +72,7 @@ def apply_operation_gates_qnode_param(tol, operation, par, Wires):
)

def circuit(*params):
qml.QubitStateVector(state_vector, wires=range(num_wires))
qml.StatePrep(state_vector, wires=range(num_wires))
operation(*params, wires=Wires)
return qml.state()

@@ -105,7 +105,7 @@ def apply_operation_gates_apply_param(tol, operation, par, Wires):

@qml.qnode(dev_cpu)
def circuit(*params):
qml.QubitStateVector(state_vector, wires=range(num_wires))
qml.StatePrep(state_vector, wires=range(num_wires))
operation(*params, wires=Wires)
return qml.state()

@@ -153,7 +153,7 @@ def apply_operation_gates_qnode_nonparam(tol, operation, Wires):
)

def circuit():
qml.QubitStateVector(state_vector, wires=range(num_wires))
qml.StatePrep(state_vector, wires=range(num_wires))
operation(wires=Wires)
return qml.state()

@@ -186,7 +186,7 @@ def apply_operation_gates_apply_nonparam(tol, operation, Wires):

@qml.qnode(dev_cpu)
def circuit():
qml.QubitStateVector(state_vector, wires=range(num_wires))
qml.StatePrep(state_vector, wires=range(num_wires))
operation(wires=Wires)
return qml.state()

@@ -225,7 +225,7 @@ def expval_single_wire_no_param(tol, obs):
dev_gpumpi = qml.device("lightning.gpu", wires=num_wires, mpi=True, c_dtype=np.complex128)

def circuit():
qml.QubitStateVector(state_vector, wires=range(num_wires))
qml.StatePrep(state_vector, wires=range(num_wires))
return qml.expval(obs)

cpu_qnode = qml.QNode(circuit, dev_cpu)
@@ -252,7 +252,7 @@ def apply_probs_nonparam(tol, operation, GateWires, Wires):
dev_gpumpi = qml.device("lightning.gpu", wires=num_wires, mpi=True, c_dtype=np.complex128)

def circuit():
qml.QubitStateVector(state_vector, wires=range(num_wires))
qml.StatePrep(state_vector, wires=range(num_wires))
operation(wires=GateWires)
return qml.probs(wires=Wires)

@@ -292,7 +292,7 @@ def apply_probs_param(tol, operation, par, GateWires, Wires):
dev_gpumpi = qml.device("lightning.gpu", wires=num_wires, mpi=True, c_dtype=np.complex128)

def circuit():
qml.QubitStateVector(state_vector, wires=range(num_wires))
qml.StatePrep(state_vector, wires=range(num_wires))
operation(*par, wires=GateWires)
return qml.probs(wires=Wires)

@@ -501,7 +501,7 @@ def test_qubit_state_prep(self, tol, par, Wires):
dev_gpumpi = qml.device("lightning.gpu", wires=num_wires, mpi=True, c_dtype=np.complex128)

def circuit():
qml.QubitStateVector(par, wires=Wires)
qml.StatePrep(par, wires=Wires)
return qml.state()

cpu_qnode = qml.QNode(circuit, dev_cpu)
@@ -1152,7 +1152,7 @@ def test_prob_four_wire_param(self, tol, operation, par, GateWires, Wires):

def circuit_ansatz(params, wires):
"""Circuit ansatz containing all the parametrized gates"""
qml.QubitStateVector(unitary_group.rvs(2**numQubits, random_state=0)[0], wires=wires)
qml.StatePrep(unitary_group.rvs(2**numQubits, random_state=0)[0], wires=wires)
qml.RX(params[0], wires=wires[0])
qml.RY(params[1], wires=wires[1])
qml.adjoint(qml.RX(params[2], wires=wires[2]))
4 changes: 2 additions & 2 deletions pennylane_lightning_gpu/_serialize.py
@@ -24,7 +24,7 @@
PauliY,
PauliZ,
Identity,
QubitStateVector,
StatePrep,
Rot,
)
from pennylane.operation import Tensor
@@ -286,7 +286,7 @@ def _serialize_ops(
sv_py = LightningGPU_C64 if use_csingle else LightningGPU_C128

for o in tape.operations:
if isinstance(o, (BasisState, QubitStateVector)):
if isinstance(o, (BasisState, StatePrep)):
uses_stateprep = True
continue
elif isinstance(o, Rot):
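A small illustrative check of why the single `isinstance` test above also covers tapes that still use the old operation name (this assumes PennyLane 0.32+, where `QubitStateVector` is kept as a deprecated alias/subclass of `StatePrep`; it is not part of the diff):

```python
import numpy as np
import pennylane as qml

old_style = qml.QubitStateVector(np.array([1.0, 0.0]), wires=0)
new_style = qml.StatePrep(np.array([1.0, 0.0]), wires=0)

# Under the assumption above, both operations hit the same branch in
# _serialize_ops, so uses_stateprep is set either way.
print(isinstance(old_style, qml.StatePrep))  # expected: True
print(isinstance(new_style, qml.StatePrep))  # True
```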