Gradients with the primitives (#8528)

* added the gradients with the primitives

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* add run_options and supported gate

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* added unittests

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* lint

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* fix based on the comments

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* fix

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* add spsa gradient

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* simplify + async

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* added gradient variance

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* lint

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* added the run_options field

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* fix lint

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* fix based on comments

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* wip fix2

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* fix

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* lint

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* fix epsilon and doc

* lint

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* fix

* Update qiskit/algorithms/gradients/base_sampler_gradient.py

Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* Update qiskit/algorithms/gradients/base_sampler_gradient.py

Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* Update qiskit/algorithms/gradients/base_estimator_gradient.py

Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* Update qiskit/algorithms/gradients/base_sampler_gradient.py

Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* Update qiskit/algorithms/gradients/base_estimator_gradient.py

Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* Update qiskit/algorithms/gradients/base_estimator_gradient.py

Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* change epsilon error

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* Update qiskit/algorithms/gradients/estimator_gradient_result.py

Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* Update qiskit/algorithms/gradients/sampler_gradient_result.py

Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* add gradient test

* added batch size in spsa gradients

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* fix

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* lint

* Update qiskit/algorithms/gradients/lin_comb_estimator_gradient.py

Co-authored-by: Julien Gacon <gaconju@gmail.com>

* add operator tests

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* consistent name

* rewrite spsa

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>

* use algorithm job

Co-authored-by: Ikko Hamamura <ikkoham@users.noreply.github.com>
Co-authored-by: Takashi Imamichi <31178928+t-imamichi@users.noreply.github.com>
Co-authored-by: Julien Gacon <gaconju@gmail.com>
Co-authored-by: Takashi Imamichi <imamichi@jp.ibm.com>
commit b737fb5e64 (parent a4cf1483c8)
Author: a-matsuo
Date:   2022-09-05 21:05:35 +09:00 (committed by GitHub)
18 changed files with 2702 additions and 0 deletions

qiskit/algorithms/__init__.py

@@ -140,6 +140,17 @@ Algorithms to find factors of a number.
ShorResult
Gradients
----------
Algorithms to calculate the gradient of a quantum circuit.
.. autosummary::
:toctree: ../stubs/
gradients
Linear Solvers
--------------

qiskit/algorithms/gradients/__init__.py

@@ -0,0 +1,87 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
==============================================
Gradients (:mod:`qiskit.algorithms.gradients`)
==============================================
.. currentmodule:: qiskit.algorithms.gradients
Base Classes
============
.. autosummary::
:toctree: ../stubs/
BaseSamplerGradient
BaseEstimatorGradient
Estimator Gradients
===================
.. autosummary::
:toctree: ../stubs/
FiniteDiffEstimatorGradient
LinCombEstimatorGradient
ParamShiftEstimatorGradient
SPSAEstimatorGradient
Sampler Gradients
=================
.. autosummary::
:toctree: ../stubs/
FiniteDiffSamplerGradient
LinCombSamplerGradient
ParamShiftSamplerGradient
SPSASamplerGradient
Results
=======
.. autosummary::
:toctree: ../stubs/
EstimatorGradientResult
SamplerGradientResult
"""
from .base_estimator_gradient import BaseEstimatorGradient
from .base_sampler_gradient import BaseSamplerGradient
from .estimator_gradient_result import EstimatorGradientResult
from .finite_diff_estimator_gradient import FiniteDiffEstimatorGradient
from .finite_diff_sampler_gradient import FiniteDiffSamplerGradient
from .lin_comb_estimator_gradient import LinCombEstimatorGradient
from .lin_comb_sampler_gradient import LinCombSamplerGradient
from .param_shift_estimator_gradient import ParamShiftEstimatorGradient
from .param_shift_sampler_gradient import ParamShiftSamplerGradient
from .sampler_gradient_result import SamplerGradientResult
from .spsa_estimator_gradient import SPSAEstimatorGradient
from .spsa_sampler_gradient import SPSASamplerGradient
__all__ = [
"BaseEstimatorGradient",
"BaseSamplerGradient",
"EstimatorGradientResult",
"FiniteDiffEstimatorGradient",
"FiniteDiffSamplerGradient",
"LinCombEstimatorGradient",
"LinCombSamplerGradient",
"ParamShiftEstimatorGradient",
"ParamShiftSamplerGradient",
"SamplerGradientResult",
"SPSAEstimatorGradient",
"SPSASamplerGradient",
]
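
For orientation (not part of the diff), a minimal usage sketch of the interface added above, using the reference ``Estimator`` primitive; the circuit, observable, and parameter values are illustrative::

    from qiskit.circuit import Parameter, QuantumCircuit
    from qiskit.primitives import Estimator
    from qiskit.quantum_info import SparsePauliOp
    from qiskit.algorithms.gradients import ParamShiftEstimatorGradient

    theta = Parameter("theta")
    qc = QuantumCircuit(1)
    qc.ry(theta, 0)
    observable = SparsePauliOp.from_list([("Z", 1)])

    gradient = ParamShiftEstimatorGradient(Estimator())
    job = gradient.run([qc], [observable], [[0.5]])
    result = job.result()
    # One gradient array per circuit; here d<Z>/d(theta) = -sin(0.5) ~ -0.479.
    print(result.gradients)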

qiskit/algorithms/gradients/base_estimator_gradient.py

@@ -0,0 +1,165 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Abstract base class of gradient for ``Estimator``.
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from collections.abc import Sequence
from copy import copy
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.opflow import PauliSumOp
from qiskit.primitives import BaseEstimator
from qiskit.algorithms import AlgorithmJob
from qiskit.quantum_info.operators.base_operator import BaseOperator
from .estimator_gradient_result import EstimatorGradientResult
class BaseEstimatorGradient(ABC):
"""Base class for an ``EstimatorGradient`` to compute the gradients of the expectation value."""
def __init__(
self,
estimator: BaseEstimator,
**run_options,
):
"""
Args:
estimator: The estimator used to compute the gradients.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
"""
self._estimator: BaseEstimator = estimator
self._default_run_options = run_options
def run(
self,
circuits: Sequence[QuantumCircuit],
observables: Sequence[BaseOperator | PauliSumOp],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None] | None = None,
**run_options,
) -> AlgorithmJob:
"""Run the job of the estimator gradient on the given circuits.
Args:
circuits: The list of quantum circuits to compute the gradients.
observables: The list of observables.
parameter_values: The list of parameter values to be bound to the circuit.
parameters: The sequence of parameters to calculate only the gradients of
the specified parameters. Each sequence of parameters corresponds to a circuit in
``circuits``. Defaults to None, which means that the gradients of all parameters in
each circuit are calculated.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
Returns:
The job object of the gradients of the expectation values. The i-th result corresponds to
``circuits[i]`` evaluated with parameters bound as ``parameter_values[i]``. The j-th
element of the i-th result corresponds to the gradient of the i-th circuit with respect
to the j-th parameter.
Raises:
ValueError: Invalid arguments are given.
"""
# If ``parameters`` is None, all parameters in each circuit are differentiated.
if parameters is None:
parameters = [None for _ in range(len(circuits))]
# Validate the arguments.
self._validate_arguments(circuits, observables, parameter_values, parameters)
# The priority of run option is as follows:
# run_options in ``run`` method > gradient's default run_options > primitive's default setting.
run_opts = copy(self._default_run_options)
run_opts.update(run_options)
job = AlgorithmJob(
self._run, circuits, observables, parameter_values, parameters, **run_opts
)
job.submit()
return job
@abstractmethod
def _run(
self,
circuits: Sequence[QuantumCircuit],
observables: Sequence[BaseOperator | PauliSumOp],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> EstimatorGradientResult:
"""Compute the estimator gradients on the given circuits."""
raise NotImplementedError()
def _validate_arguments(
self,
circuits: Sequence[QuantumCircuit],
observables: Sequence[BaseOperator | PauliSumOp],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None] | None = None,
) -> None:
"""Validate the arguments of the ``run`` method.
Args:
circuits: The list of quantum circuits to compute the gradients.
observables: The list of observables.
parameter_values: The list of parameter values to be bound to the circuit.
parameters: The Sequence of Sequence of Parameters to calculate only the gradients of
the specified parameters. Each Sequence of Parameters corresponds to a circuit in
``circuits``. Defaults to None, which means that the gradients of all parameters in
each circuit are calculated.
Raises:
ValueError: Invalid arguments are given.
"""
# Validation
if len(circuits) != len(parameter_values):
raise ValueError(
f"The number of circuits ({len(circuits)}) does not match "
f"the number of parameter value sets ({len(parameter_values)})."
)
if len(circuits) != len(observables):
raise ValueError(
f"The number of circuits ({len(circuits)}) does not match "
f"the number of observables ({len(observables)})."
)
if parameters is not None:
if len(circuits) != len(parameters):
raise ValueError(
f"The number of circuits ({len(circuits)}) does not match "
f"the number of the specified parameter sets ({len(parameters)})."
)
for i, (circuit, parameter_value) in enumerate(zip(circuits, parameter_values)):
if not circuit.num_parameters:
raise ValueError(f"The {i}-th circuit is not parameterised.")
if len(parameter_value) != circuit.num_parameters:
raise ValueError(
f"The number of values ({len(parameter_value)}) does not match "
f"the number of parameters ({circuit.num_parameters}) for the {i}-th circuit."
)
for i, (circuit, observable) in enumerate(zip(circuits, observables)):
if circuit.num_qubits != observable.num_qubits:
raise ValueError(
f"The number of qubits of the {i}-th circuit ({circuit.num_qubits}) does "
f"not match the number of qubits of the {i}-th observable "
f"({observable.num_qubits})."
)

qiskit/algorithms/gradients/base_sampler_gradient.py

@@ -0,0 +1,136 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Abstract base class of gradient for ``Sampler``.
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from collections.abc import Sequence
from copy import copy
from qiskit.circuit import QuantumCircuit, Parameter
from qiskit.primitives import BaseSampler
from qiskit.algorithms import AlgorithmJob
from .sampler_gradient_result import SamplerGradientResult
class BaseSamplerGradient(ABC):
"""Base class for a ``SamplerGradient`` to compute the gradients of the sampling probability."""
def __init__(self, sampler: BaseSampler, **run_options):
"""
Args:
sampler: The sampler used to compute the gradients.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in `run` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
"""
self._sampler: BaseSampler = sampler
self._default_run_options = run_options
def run(
self,
circuits: Sequence[QuantumCircuit],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None] | None = None,
**run_options,
) -> AlgorithmJob:
"""Run the job of the sampler gradient on the given circuits.
Args:
circuits: The list of quantum circuits to compute the gradients.
parameter_values: The list of parameter values to be bound to the circuit.
parameters: The sequence of parameters to calculate only the gradients of
the specified parameters. Each sequence of parameters corresponds to a circuit in
``circuits``. Defaults to None, which means that the gradients of all parameters in
each circuit are calculated.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
Returns:
The job object of the gradients of the sampling probability. The i-th result
corresponds to ``circuits[i]`` evaluated with parameters bound as ``parameter_values[i]``.
The j-th quasi-probability distribution in the i-th result corresponds to the gradients of
the sampling probability for the j-th parameter in ``circuits[i]``.
Raises:
ValueError: Invalid arguments are given.
"""
# If ``parameters`` is None, all parameters in each circuit are differentiated.
if parameters is None:
parameters = [None for _ in range(len(circuits))]
# Validate the arguments.
self._validate_arguments(circuits, parameter_values, parameters)
# The priority of run option is as follows:
# run_options in `run` method > gradient's default run_options > primitive's default run_options.
run_opts = copy(self._default_run_options)
run_opts.update(run_options)
job = AlgorithmJob(self._run, circuits, parameter_values, parameters, **run_opts)
job.submit()
return job
@abstractmethod
def _run(
self,
circuits: Sequence[QuantumCircuit],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> SamplerGradientResult:
"""Compute the sampler gradients on the given circuits."""
raise NotImplementedError()
def _validate_arguments(
self,
circuits: Sequence[QuantumCircuit],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None] | None = None,
) -> None:
"""Validate the arguments of the ``run`` method.
Args:
circuits: The list of quantum circuits to compute the gradients.
parameter_values: The list of parameter values to be bound to the circuit.
parameters: The Sequence of Sequence of Parameters to calculate only the gradients of
the specified parameters. Each Sequence of Parameters corresponds to a circuit in
``circuits``. Defaults to None, which means that the gradients of all parameters in
each circuit are calculated.
Raises:
ValueError: Invalid arguments are given.
"""
# Validate the arguments.
if len(circuits) != len(parameter_values):
raise ValueError(
f"The number of circuits ({len(circuits)}) does not match "
f"the number of parameter value sets ({len(parameter_values)})."
)
if parameters is not None:
if len(circuits) != len(parameters):
raise ValueError(
f"The number of circuits ({len(circuits)}) does not match "
f"the number of the specified parameter sets ({len(parameters)})."
)
for i, (circuit, parameter_value) in enumerate(zip(circuits, parameter_values)):
if not circuit.num_parameters:
raise ValueError(f"The {i}-th circuit is not parameterised.")
if len(parameter_value) != circuit.num_parameters:
raise ValueError(
f"The number of values ({len(parameter_value)}) does not match "
f"the number of parameters ({circuit.num_parameters}) for the {i}-th circuit."
)
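
Not part of the diff: a sketch of the corresponding sampler-side usage with the reference ``Sampler`` primitive and a measured circuit (the circuit and values are illustrative)::

    from qiskit.circuit import Parameter, QuantumCircuit
    from qiskit.primitives import Sampler
    from qiskit.algorithms.gradients import ParamShiftSamplerGradient

    theta = Parameter("theta")
    qc = QuantumCircuit(1)
    qc.ry(theta, 0)
    qc.measure_all()

    gradient = ParamShiftSamplerGradient(Sampler())
    result = gradient.run([qc], [[0.5]]).result()
    # One entry per circuit, one quasi-distribution per parameter:
    # d p(0)/d(theta) = -sin(0.5)/2 and d p(1)/d(theta) = +sin(0.5)/2.
    print(result.gradients[0][0])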

qiskit/algorithms/gradients/estimator_gradient_result.py

@@ -0,0 +1,34 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Estimator gradient result class
"""
from __future__ import annotations
from dataclasses import dataclass
from typing import Any
import numpy as np
@dataclass(frozen=True)
class EstimatorGradientResult:
"""Result of EstimatorGradient."""
gradients: list[np.ndarray]
"""The gradients of the expectation values."""
metadata: list[dict[str, Any]]
"""Additional information about the job."""
run_options: dict[str, Any]
"""run_options for the estimator. Currently, estimator's default run_options is not
included."""

qiskit/algorithms/gradients/finite_diff_estimator_gradient.py

@@ -0,0 +1,99 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""Gradient of Sampler with Finite difference method."""
from __future__ import annotations
from typing import Sequence
import numpy as np
from qiskit.algorithms import AlgorithmError
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.opflow import PauliSumOp
from qiskit.primitives import BaseEstimator
from qiskit.quantum_info.operators.base_operator import BaseOperator
from .base_estimator_gradient import BaseEstimatorGradient
from .estimator_gradient_result import EstimatorGradientResult
class FiniteDiffEstimatorGradient(BaseEstimatorGradient):
"""
Compute the gradients of the expectation values by the finite difference method.
"""
def __init__(self, estimator: BaseEstimator, epsilon: float, **run_options):
"""
Args:
estimator: The estimator used to compute the gradients.
epsilon: The offset size for the finite difference gradients.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
Raises:
ValueError: If ``epsilon`` is not positive.
"""
if epsilon <= 0:
raise ValueError(f"epsilon ({epsilon}) should be positive.")
self._epsilon = epsilon
self._base_parameter_values_dict = {}
super().__init__(estimator, **run_options)
def _run(
self,
circuits: Sequence[QuantumCircuit],
observables: Sequence[BaseOperator | PauliSumOp],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> EstimatorGradientResult:
"""Compute the estimator gradients on the given circuits."""
jobs, metadata_ = [], []
for circuit, observable, parameter_values_, parameters_ in zip(
circuits, observables, parameter_values, parameters
):
# indices of parameters to be differentiated
if parameters_ is None:
indices = list(range(circuit.num_parameters))
else:
indices = [circuit.parameters.data.index(p) for p in parameters_]
metadata_.append({"parameters": [circuit.parameters[idx] for idx in indices]})
offset = np.identity(circuit.num_parameters)[indices, :]
plus = parameter_values_ + self._epsilon * offset
minus = parameter_values_ - self._epsilon * offset
n = 2 * len(indices)
job = self._estimator.run(
[circuit] * n, [observable] * n, plus.tolist() + minus.tolist(), **run_options
)
jobs.append(job)
# combine the results
try:
results = [job.result() for job in jobs]
except Exception as exc:
raise AlgorithmError("Estimator job failed.") from exc
gradients = []
for result in results:
n = len(result.values) // 2 # is always a multiple of 2
gradient_ = (result.values[:n] - result.values[n:]) / (2 * self._epsilon)
gradients.append(gradient_)
# TODO: include primitive's run_options as well
return EstimatorGradientResult(
gradients=gradients, metadata=metadata_, run_options=run_options
)
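
The class above evaluates a plain central difference per parameter; a classical sketch of the same construction (illustrative only, not from the diff)::

    import numpy as np

    def central_difference(f, values, epsilon=1e-6):
        """Central-difference gradient of a scalar function ``f`` at ``values``."""
        values = np.asarray(values, dtype=float)
        offsets = np.identity(values.size)  # one unit vector per parameter
        return np.array(
            [(f(values + epsilon * e) - f(values - epsilon * e)) / (2 * epsilon) for e in offsets]
        )

    # Gradient of cos(x0) + x1**2 at (0.3, 2.0) is approximately (-sin(0.3), 4.0).
    print(central_difference(lambda x: np.cos(x[0]) + x[1] ** 2, [0.3, 2.0]))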

qiskit/algorithms/gradients/finite_diff_sampler_gradient.py

@@ -0,0 +1,98 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""Gradient of Sampler with Finite difference method."""
from __future__ import annotations
from typing import Sequence
import numpy as np
from qiskit.algorithms import AlgorithmError
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.primitives import BaseSampler
from .base_sampler_gradient import BaseSamplerGradient
from .sampler_gradient_result import SamplerGradientResult
class FiniteDiffSamplerGradient(BaseSamplerGradient):
"""Compute the gradients of the sampling probability by finite difference method."""
def __init__(
self,
sampler: BaseSampler,
epsilon: float,
**run_options,
):
"""
Args:
sampler: The sampler used to compute the gradients.
epsilon: The offset size for the finite difference gradients.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
Raises:
ValueError: If ``epsilon`` is not positive.
"""
if epsilon <= 0:
raise ValueError(f"epsilon ({epsilon}) should be positive.")
self._epsilon = epsilon
super().__init__(sampler, **run_options)
def _run(
self,
circuits: Sequence[QuantumCircuit],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> SamplerGradientResult:
"""Compute the sampler gradients on the given circuits."""
jobs, metadata_ = [], []
for circuit, parameter_values_, parameters_ in zip(circuits, parameter_values, parameters):
# indices of parameters to be differentiated
if parameters_ is None:
indices = list(range(circuit.num_parameters))
else:
indices = [circuit.parameters.data.index(p) for p in parameters_]
metadata_.append({"parameters": [circuit.parameters[idx] for idx in indices]})
offset = np.identity(circuit.num_parameters)[indices, :]
plus = parameter_values_ + self._epsilon * offset
minus = parameter_values_ - self._epsilon * offset
n = 2 * len(indices)
job = self._sampler.run([circuit] * n, plus.tolist() + minus.tolist(), **run_options)
jobs.append(job)
# combine the results
try:
results = [job.result() for job in jobs]
except Exception as exc:
raise AlgorithmError("Sampler job failed.") from exc
gradients = []
for i, result in enumerate(results):
n = len(result.quasi_dists) // 2
gradient_ = []
for dist_plus, dist_minus in zip(result.quasi_dists[:n], result.quasi_dists[n:]):
grad_dist = np.zeros(2 ** circuits[i].num_qubits)
grad_dist[list(dist_plus.keys())] += list(dist_plus.values())
grad_dist[list(dist_minus.keys())] -= list(dist_minus.values())
grad_dist /= 2 * self._epsilon
gradient_.append(dict(enumerate(grad_dist)))
gradients.append(gradient_)
# TODO: include primitive's run_options as well
return SamplerGradientResult(
gradients=gradients, metadata=metadata_, run_options=run_options
)
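
For the sampler variant, each gradient entry is assembled from the plus/minus quasi-distributions exactly as in the loop above; a small numeric illustration with made-up one-qubit distributions::

    import numpy as np

    epsilon = 0.01
    dist_plus = {0: 0.70, 1: 0.30}   # quasi-distribution at theta + epsilon
    dist_minus = {0: 0.72, 1: 0.28}  # quasi-distribution at theta - epsilon

    grad_dist = np.zeros(2)  # 2 ** num_qubits entries
    grad_dist[list(dist_plus.keys())] += list(dist_plus.values())
    grad_dist[list(dist_minus.keys())] -= list(dist_minus.values())
    grad_dist /= 2 * epsilon
    print(dict(enumerate(grad_dist)))  # approximately {0: -1.0, 1: 1.0}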

qiskit/algorithms/gradients/lin_comb_estimator_gradient.py

@@ -0,0 +1,135 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Gradient of expectation values with the linear combination of unitaries (LCU) method
"""
from __future__ import annotations
from typing import Sequence
import numpy as np
from qiskit.algorithms import AlgorithmError
from qiskit.circuit import Parameter, ParameterExpression, QuantumCircuit
from qiskit.opflow import PauliSumOp
from qiskit.primitives import BaseEstimator
from qiskit.primitives.utils import init_observable
from qiskit.quantum_info import Pauli
from qiskit.quantum_info.operators.base_operator import BaseOperator
from .base_estimator_gradient import BaseEstimatorGradient
from .estimator_gradient_result import EstimatorGradientResult
from .utils import _make_lin_comb_gradient_circuit
Pauli_Z = Pauli("Z")
class LinCombEstimatorGradient(BaseEstimatorGradient):
"""Compute the gradients of the expectation values.
This method employs a linear combination of unitaries [1].
**Reference:**
[1] Schuld et al., Evaluating analytic gradients on quantum hardware, 2018
`arXiv:1811.11184 <https://arxiv.org/pdf/1811.11184.pdf>`_
"""
def __init__(self, estimator: BaseEstimator, **run_options):
"""
Args:
estimator: The estimator used to compute the gradients.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
"""
self._gradient_circuits = {}
super().__init__(estimator, **run_options)
def _run(
self,
circuits: Sequence[QuantumCircuit],
observables: Sequence[BaseOperator | PauliSumOp],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> EstimatorGradientResult:
"""Compute the estimator gradients on the given circuits."""
jobs, result_indices_all, coeffs_all, metadata_ = [], [], [], []
for circuit, observable, parameter_values_, parameters_ in zip(
circuits, observables, parameter_values, parameters
):
# Convert the observable to a :class:`~qiskit.quantum_info.SparsePauliOp`.
observable = init_observable(observable)
# a set of parameters to be differentiated
if parameters_ is None:
param_set = set(circuit.parameters)
else:
param_set = set(parameters_)
metadata_.append({"parameters": [p for p in circuit.parameters if p in param_set]})
# TODO: support measurement in different basis (Y and Z+iY)
observable_ = observable.expand(Pauli_Z)
gradient_circuits_ = self._gradient_circuits.get(id(circuit))
if gradient_circuits_ is None:
gradient_circuits_ = _make_lin_comb_gradient_circuit(circuit)
self._gradient_circuits[id(circuit)] = gradient_circuits_
# only compute the gradients for parameters in the parameter set
gradient_circuits, result_indices, coeffs = [], [], []
result_idx = 0
for i, param in enumerate(circuit.parameters):
if param in param_set:
gradient_circuits.extend(
grad.gradient_circuit for grad in gradient_circuits_[param]
)
result_indices.extend(result_idx for _ in gradient_circuits_[param])
result_idx += 1
for grad_data in gradient_circuits_[param]:
coeff = grad_data.coeff
# if the parameter is a parameter expression, we need to substitute
if isinstance(coeff, ParameterExpression):
local_map = {
p: parameter_values_[circuit.parameters.data.index(p)]
for p in coeff.parameters
}
bound_coeff = float(coeff.bind(local_map))
else:
bound_coeff = coeff
coeffs.append(bound_coeff)
n = len(gradient_circuits)
job = self._estimator.run(
gradient_circuits, [observable_] * n, [parameter_values_] * n, **run_options
)
jobs.append(job)
result_indices_all.append(result_indices)
coeffs_all.append(coeffs)
# combine the results
try:
results = [job.result() for job in jobs]
except Exception as exc:
raise AlgorithmError("Estimator job failed.") from exc
gradients = []
for i, result in enumerate(results):
gradient_ = np.zeros(len(metadata_[i]["parameters"]))
for grad_, idx, coeff in zip(result.values, result_indices_all[i], coeffs_all[i]):
gradient_[idx] += coeff * grad_
gradients.append(gradient_)
# TODO: include primitive's run_options as well
return EstimatorGradientResult(
gradients=gradients, metadata=metadata_, run_options=run_options
)
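
As a reading aid (not from the diff), the quantity these auxiliary-qubit circuits estimate is the standard identity behind the linear-combination approach of Schuld et al. [1]:

    \frac{\partial}{\partial\theta_j}\,\langle\psi(\theta)|H|\psi(\theta)\rangle
        = 2\,\mathrm{Re}\,\langle\partial_{\theta_j}\psi(\theta)\,|\,H\,|\,\psi(\theta)\rangle .

The ancilla added by ``_make_lin_comb_gradient_circuit`` selects this real part, which is why the observable is expanded with an extra ``Z`` (``observable.expand(Pauli_Z)``); ``coeff`` supplies the chain-rule factor when a gate parameter is a parameter expression.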

qiskit/algorithms/gradients/lin_comb_sampler_gradient.py

@@ -0,0 +1,126 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Gradient of probabilities with linear combination of unitaries (LCU)
"""
from __future__ import annotations
from typing import Sequence
import numpy as np
from qiskit.algorithms import AlgorithmError
from qiskit.circuit import Parameter, ParameterExpression, QuantumCircuit
from qiskit.primitives import BaseSampler
from .base_sampler_gradient import BaseSamplerGradient
from .sampler_gradient_result import SamplerGradientResult
from .utils import _make_lin_comb_gradient_circuit
class LinCombSamplerGradient(BaseSamplerGradient):
"""Compute the gradients of the sampling probability.
This method employs a linear combination of unitaries [1].
**Reference:**
[1] Schuld et al., Evaluating analytic gradients on quantum hardware, 2018
`arXiv:1811.11184 <https://arxiv.org/pdf/1811.11184.pdf>`_
"""
def __init__(self, sampler: BaseSampler, **run_options):
"""
Args:
sampler: The sampler used to compute the gradients.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
"""
self._gradient_circuits = {}
super().__init__(sampler, **run_options)
def _run(
self,
circuits: Sequence[QuantumCircuit],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> SamplerGradientResult:
"""Compute the sampler gradients on the given circuits."""
jobs, result_indices_all, coeffs_all, metadata_ = [], [], [], []
for circuit, parameter_values_, parameters_ in zip(circuits, parameter_values, parameters):
# a set of parameters to be differentiated
if parameters_ is None:
param_set = set(circuit.parameters)
else:
param_set = set(parameters_)
metadata_.append({"parameters": [p for p in circuit.parameters if p in param_set]})
# TODO: support measurement in different basis (Y and Z+iY)
gradient_circuits_ = self._gradient_circuits.get(id(circuit))
if gradient_circuits_ is None:
gradient_circuits_ = _make_lin_comb_gradient_circuit(circuit, add_measurement=True)
self._gradient_circuits[id(circuit)] = gradient_circuits_
# only compute the gradients for parameters in the parameter set
gradient_circuits, result_indices, coeffs = [], [], []
result_idx = 0
for i, param in enumerate(circuit.parameters):
if param in param_set:
gradient_circuits.extend(
grad.gradient_circuit for grad in gradient_circuits_[param]
)
result_indices.extend(result_idx for _ in gradient_circuits_[param])
result_idx += 1
for grad_data in gradient_circuits_[param]:
coeff = grad_data.coeff
# if the parameter is a parameter expression, we need to substitute
if isinstance(coeff, ParameterExpression):
local_map = {
p: parameter_values_[circuit.parameters.data.index(p)]
for p in coeff.parameters
}
bound_coeff = float(coeff.bind(local_map))
else:
bound_coeff = coeff
coeffs.append(bound_coeff)
n = len(gradient_circuits)
job = self._sampler.run(gradient_circuits, [parameter_values_] * n, **run_options)
jobs.append(job)
result_indices_all.append(result_indices)
coeffs_all.append(coeffs)
# combine the results
try:
results = [job.result() for job in jobs]
except Exception as exc:
raise AlgorithmError("Sampler job failed.") from exc
gradients = []
for i, result in enumerate(results):
n = 2 ** circuits[i].num_qubits
grad_dists = np.zeros((len(metadata_[i]["parameters"]), n))
for idx, coeff, dist in zip(result_indices_all[i], coeffs_all[i], result.quasi_dists):
grad_dists[idx][list(dist.keys())[:n]] += np.array(list(dist.values())[:n]) * coeff
grad_dists[idx][list(dist.keys())[:n]] -= np.array(list(dist.values())[n:]) * coeff
gradient_ = []
for grad_dist in grad_dists:
gradient_.append(dict(enumerate(grad_dist)))
gradients.append(gradient_)
# TODO: include primitive's run_options as well
return SamplerGradientResult(
gradients=gradients, metadata=metadata_, run_options=run_options
)

qiskit/algorithms/gradients/param_shift_estimator_gradient.py

@@ -0,0 +1,116 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Gradient of expectation values with the parameter shift rule
"""
from __future__ import annotations
from typing import Sequence
import numpy as np
from qiskit.algorithms import AlgorithmError
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.opflow import PauliSumOp
from qiskit.primitives import BaseEstimator
from qiskit.quantum_info.operators.base_operator import BaseOperator
from .base_estimator_gradient import BaseEstimatorGradient
from .estimator_gradient_result import EstimatorGradientResult
from .utils import _make_param_shift_parameter_values, _param_shift_preprocessing
class ParamShiftEstimatorGradient(BaseEstimatorGradient):
"""Compute the gradients of the expectation values by the parameter shift rule"""
def __init__(self, estimator: BaseEstimator, **run_options):
"""
Args:
estimator: The estimator used to compute the gradients.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
"""
self._gradient_circuits = {}
super().__init__(estimator, **run_options)
def _run(
self,
circuits: Sequence[QuantumCircuit],
observables: Sequence[BaseOperator | PauliSumOp],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> EstimatorGradientResult:
"""Compute the estimator gradients on the given circuits."""
jobs, result_indices_all, coeffs_all, metadata_ = [], [], [], []
for circuit, observable, parameter_values_, parameters_ in zip(
circuits, observables, parameter_values, parameters
):
# a set of parameters to be differentiated
if parameters_ is None:
param_set = set(circuit.parameters)
else:
param_set = set(parameters_)
metadata_.append({"parameters": [p for p in circuit.parameters if p in param_set]})
if self._gradient_circuits.get(id(circuit)):
gradient_circuit, base_parameter_values_all = self._gradient_circuits[id(circuit)]
else:
gradient_circuit, base_parameter_values_all = _param_shift_preprocessing(circuit)
self._gradient_circuits[id(circuit)] = (
gradient_circuit,
base_parameter_values_all,
)
(
gradient_parameter_values_plus,
gradient_parameter_values_minus,
result_indices,
coeffs,
) = _make_param_shift_parameter_values(
gradient_circuit_data=gradient_circuit,
base_parameter_values=base_parameter_values_all,
parameter_values=parameter_values_,
param_set=param_set,
)
n = 2 * len(gradient_parameter_values_plus)
job = self._estimator.run(
[gradient_circuit.gradient_circuit] * n,
[observable] * n,
gradient_parameter_values_plus + gradient_parameter_values_minus,
**run_options,
)
jobs.append(job)
result_indices_all.append(result_indices)
coeffs_all.append(coeffs)
# combine the results
try:
results = [job.result() for job in jobs]
except Exception as exc:
raise AlgorithmError("Estimator job failed.") from exc
gradients = []
for i, result in enumerate(results):
n = len(result.values) // 2 # is always a multiple of 2
gradient_ = result.values[:n] - result.values[n:]
values = np.zeros(len(metadata_[i]["parameters"]))
for grad_, idx, coeff in zip(gradient_, result_indices_all[i], coeffs_all[i]):
values[idx] += coeff * grad_
gradients.append(values)
# TODO: include primitive's run_options as well
return EstimatorGradientResult(
gradients=gradients, metadata=metadata_, run_options=run_options
)
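
The ±π/2 offsets built in ``utils`` below implement the two-term parameter-shift rule for rotation gates such as ``rx``/``ry``/``rz``; a quick classical check of the identity (illustrative only)::

    import numpy as np

    theta, shift = 0.7, np.pi / 2
    f = np.cos  # <Z> for Ry(theta)|0>
    # The exact derivative -sin(theta) is recovered from just two shifted evaluations.
    assert np.isclose((f(theta + shift) - f(theta - shift)) / 2, -np.sin(theta))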

qiskit/algorithms/gradients/param_shift_sampler_gradient.py

@@ -0,0 +1,121 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Gradient of probabilities with parameter shift
"""
from __future__ import annotations
from typing import Sequence
import numpy as np
from qiskit.algorithms import AlgorithmError
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.primitives import BaseSampler
from .base_sampler_gradient import BaseSamplerGradient
from .sampler_gradient_result import SamplerGradientResult
from .utils import _param_shift_preprocessing, _make_param_shift_parameter_values
class ParamShiftSamplerGradient(BaseSamplerGradient):
"""Compute the gradients of the sampling probability by the parameter shift rule."""
def __init__(self, sampler: BaseSampler, **run_options):
"""
Args:
sampler: The sampler used to compute the gradients.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
"""
self._gradient_circuits = {}
super().__init__(sampler, **run_options)
def _run(
self,
circuits: Sequence[QuantumCircuit],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> SamplerGradientResult:
"""Compute the sampler gradients on the given circuits."""
jobs, result_indices_all, coeffs_all, metadata_ = [], [], [], []
for circuit, parameter_values_, parameters_ in zip(circuits, parameter_values, parameters):
# a set of parameters to be differentiated
if parameters_ is None:
param_set = set(circuit.parameters)
else:
param_set = set(parameters_)
metadata_.append({"parameters": [p for p in circuit.parameters if p in param_set]})
if self._gradient_circuits.get(id(circuit)):
gradient_circuit, base_parameter_values_all = self._gradient_circuits[id(circuit)]
else:
gradient_circuit, base_parameter_values_all = _param_shift_preprocessing(circuit)
self._gradient_circuits[id(circuit)] = (
gradient_circuit,
base_parameter_values_all,
)
(
gradient_parameter_values_plus,
gradient_parameter_values_minus,
result_indices,
coeffs,
) = _make_param_shift_parameter_values(
gradient_circuit_data=gradient_circuit,
base_parameter_values=base_parameter_values_all,
parameter_values=parameter_values_,
param_set=param_set,
)
n = 2 * len(gradient_parameter_values_plus)
job = self._sampler.run(
[gradient_circuit.gradient_circuit] * n,
gradient_parameter_values_plus + gradient_parameter_values_minus,
**run_options,
)
jobs.append(job)
result_indices_all.append(result_indices)
coeffs_all.append(coeffs)
# combine the results
try:
results = [job.result() for job in jobs]
except Exception as exc:
raise AlgorithmError("Sampler job failed.") from exc
gradients = []
for i, result in enumerate(results):
n = len(result.quasi_dists) // 2
grad_dists = np.zeros((len(metadata_[i]["parameters"]), 2 ** circuits[i].num_qubits))
for idx, coeff, dist_plus, dist_minus in zip(
result_indices_all[i], coeffs_all[i], result.quasi_dists[:n], result.quasi_dists[n:]
):
grad_dists[idx][list(dist_plus.keys())] += (
np.array(list(dist_plus.values())) * coeff
)
grad_dists[idx][list(dist_minus.keys())] -= (
np.array(list(dist_minus.values())) * coeff
)
gradient_ = []
for grad_dist in grad_dists:
gradient_.append(dict(enumerate(grad_dist)))
gradients.append(gradient_)
# TODO: include primitive's run_options as well
return SamplerGradientResult(
gradients=gradients, metadata=metadata_, run_options=run_options
)

qiskit/algorithms/gradients/sampler_gradient_result.py

@@ -0,0 +1,31 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""
Sampler result class
"""
from __future__ import annotations
from typing import Any
from dataclasses import dataclass
@dataclass(frozen=True)
class SamplerGradientResult:
"""Result of SamplerGradient."""
gradients: list[list[dict[int, float]]]
"""The gradients of the sample probabilities."""
metadata: list[dict[str, Any]]
"""Additional information about the job."""
run_options: dict[str, Any]
"""run_options for the sampler. Currently, sampler's default run_options is not included"""

qiskit/algorithms/gradients/spsa_estimator_gradient.py

@@ -0,0 +1,125 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""Gradient of Sampler with Finite difference method."""
from __future__ import annotations
from typing import Sequence
import numpy as np
from qiskit.algorithms import AlgorithmError
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.opflow import PauliSumOp
from qiskit.primitives import BaseEstimator
from qiskit.quantum_info.operators.base_operator import BaseOperator
from .base_estimator_gradient import BaseEstimatorGradient
from .estimator_gradient_result import EstimatorGradientResult
class SPSAEstimatorGradient(BaseEstimatorGradient):
"""
Compute the gradients of the expectation value by the Simultaneous Perturbation Stochastic
Approximation (SPSA).
"""
def __init__(
self,
estimator: BaseEstimator,
epsilon: float,
batch_size: int = 1,
seed: int | None = None,
**run_options,
):
"""
Args:
estimator: The estimator used to compute the gradients.
epsilon: The offset size for the SPSA gradients.
batch_size: The number of gradients to average.
seed: The seed for a random perturbation vector.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in ``run`` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
Raises:
ValueError: If ``epsilon`` is not positive.
"""
if epsilon <= 0:
raise ValueError(f"epsilon ({epsilon}) should be positive.")
self._epsilon = epsilon
self._batch_size = batch_size
self._seed = np.random.default_rng(seed)
super().__init__(estimator, **run_options)
def _run(
self,
circuits: Sequence[QuantumCircuit],
observables: Sequence[BaseOperator | PauliSumOp],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> EstimatorGradientResult:
"""Compute the estimator gradients on the given circuits."""
jobs, offsets, metadata_ = [], [], []
for circuit, observable, parameter_values_, parameters_ in zip(
circuits, observables, parameter_values, parameters
):
# indices of parameters to be differentiated
if parameters_ is None:
indices = list(range(circuit.num_parameters))
else:
indices = [circuit.parameters.data.index(p) for p in parameters_]
metadata_.append({"parameters": [circuit.parameters[idx] for idx in indices]})
offset = [
(-1) ** (self._seed.integers(0, 2, len(circuit.parameters)))
for _ in range(self._batch_size)
]
plus = [parameter_values_ + self._epsilon * offset_ for offset_ in offset]
minus = [parameter_values_ - self._epsilon * offset_ for offset_ in offset]
offsets.append(offset)
job = self._estimator.run(
[circuit] * 2 * self._batch_size,
[observable] * 2 * self._batch_size,
plus + minus,
**run_options,
)
jobs.append(job)
# combine the results
try:
results = [job.result() for job in jobs]
except Exception as exc:
raise AlgorithmError("Estimator job failed.") from exc
gradients = []
for i, result in enumerate(results):
n = len(result.values) // 2 # is always a multiple of 2
diffs = (result.values[:n] - result.values[n:]) / (2 * self._epsilon)
# calculate the gradient for each batch. Note that (``diff`` / ``offset``) is the gradient
# since ``offset`` is a perturbation vector of 1s and -1s.
batch_gradients = np.array([diff / offset for diff, offset in zip(diffs, offsets[i])])
# take the average of the batch gradients
gradient = np.mean(batch_gradients, axis=0)
indices = [circuits[i].parameters.data.index(p) for p in metadata_[i]["parameters"]]
gradients.append(gradient[indices])
# TODO: include primitive's run_options as well
return EstimatorGradientResult(
gradients=gradients, metadata=metadata_, run_options=run_options
)
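
A classical sketch of the batched SPSA estimate implemented above (random ±1 perturbations; dividing by an entry of the perturbation equals multiplying by it); the function and values are illustrative::

    import numpy as np

    def spsa_gradient(f, values, epsilon=0.01, batch_size=10, seed=123):
        """Average ``batch_size`` SPSA gradient estimates of ``f`` at ``values``."""
        rng = np.random.default_rng(seed)
        values = np.asarray(values, dtype=float)
        estimates = []
        for _ in range(batch_size):
            delta = (-1.0) ** rng.integers(0, 2, values.size)  # random +/-1 vector
            diff = (f(values + epsilon * delta) - f(values - epsilon * delta)) / (2 * epsilon)
            estimates.append(diff / delta)
        return np.mean(estimates, axis=0)

    # Noisy approximation of (-sin(0.1), -sin(0.2)) for f = sum(cos(x)).
    print(spsa_gradient(lambda x: np.sum(np.cos(x)), [0.1, 0.2]))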

qiskit/algorithms/gradients/spsa_sampler_gradient.py

@@ -0,0 +1,126 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
"""Gradient of Sampler with Finite difference method."""
from __future__ import annotations
from typing import Sequence
import numpy as np
from qiskit.algorithms import AlgorithmError
from qiskit.circuit import Parameter, QuantumCircuit
from qiskit.primitives import BaseSampler
from .base_sampler_gradient import BaseSamplerGradient
from .sampler_gradient_result import SamplerGradientResult
class SPSASamplerGradient(BaseSamplerGradient):
"""
Compute the gradients of the sampling probability by the Simultaneous Perturbation Stochastic
Approximation (SPSA).
"""
def __init__(
self,
sampler: BaseSampler,
epsilon: float,
batch_size: int = 1,
seed: int | None = None,
**run_options,
):
"""
Args:
sampler: The sampler used to compute the gradients.
epsilon: The offset size for the SPSA gradients.
batch_size: The number of gradients to average.
seed: The seed for a random perturbation vector.
run_options: Backend runtime options used for circuit execution. The order of priority is:
run_options in `run` method > gradient's default run_options > primitive's default
setting. Higher priority setting overrides lower priority setting.
Raises:
ValueError: If ``epsilon`` is not positive.
"""
if epsilon <= 0:
raise ValueError(f"epsilon ({epsilon}) should be positive.")
self._batch_size = batch_size
self._epsilon = epsilon
self._seed = np.random.default_rng(seed)
super().__init__(sampler, **run_options)
def _run(
self,
circuits: Sequence[QuantumCircuit],
parameter_values: Sequence[Sequence[float]],
parameters: Sequence[Sequence[Parameter] | None],
**run_options,
) -> SamplerGradientResult:
"""Compute the sampler gradients on the given circuits."""
jobs, offsets, metadata_ = [], [], []
for circuit, parameter_values_, parameters_ in zip(circuits, parameter_values, parameters):
# indices of parameters to be differentiated
if parameters_ is None:
indices = list(range(circuit.num_parameters))
else:
indices = [circuit.parameters.data.index(p) for p in parameters_]
metadata_.append({"parameters": [circuit.parameters[idx] for idx in indices]})
offset = np.array(
[
(-1) ** (self._seed.integers(0, 2, len(circuit.parameters)))
for _ in range(self._batch_size)
]
)
plus = [parameter_values_ + self._epsilon * offset_ for offset_ in offset]
minus = [parameter_values_ - self._epsilon * offset_ for offset_ in offset]
offsets.append(offset)
job = self._sampler.run([circuit] * 2 * self._batch_size, plus + minus, **run_options)
jobs.append(job)
# combine the results
try:
results = [job.result() for job in jobs]
except Exception as exc:
raise AlgorithmError("Sampler job failed.") from exc
gradients = []
for i, result in enumerate(results):
dist_diffs = np.zeros((self._batch_size, 2 ** circuits[i].num_qubits))
for j, (dist_plus, dist_minus) in enumerate(
zip(result.quasi_dists[: self._batch_size], result.quasi_dists[self._batch_size :])
):
dist_diffs[j, list(dist_plus.keys())] += list(dist_plus.values())
dist_diffs[j, list(dist_minus.keys())] -= list(dist_minus.values())
dist_diffs /= 2 * self._epsilon
gradient = []
indices = [circuits[i].parameters.data.index(p) for p in metadata_[i]["parameters"]]
for j in range(circuits[i].num_parameters):
if j not in indices:
continue
# the gradient for jth parameter is the average of the gradients of the jth parameter
# for each batch.
batch_gradients = np.array(
[offset * dist_diff for dist_diff, offset in zip(dist_diffs, offsets[i][:, j])]
)
gradient_j = np.mean(batch_gradients, axis=0)
gradient.append(dict(enumerate(gradient_j)))
gradients.append(gradient)
# TODO: include primitive's run_options as well
return SamplerGradientResult(
gradients=gradients, metadata=metadata_, run_options=run_options
)
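
Not part of the diff: a usage sketch of the sampler-side SPSA gradient with the reference ``Sampler`` primitive (circuit and values are illustrative)::

    from qiskit.circuit import Parameter, QuantumCircuit
    from qiskit.primitives import Sampler
    from qiskit.algorithms.gradients import SPSASamplerGradient

    theta = Parameter("theta")
    qc = QuantumCircuit(1)
    qc.ry(theta, 0)
    qc.measure_all()

    gradient = SPSASamplerGradient(Sampler(), epsilon=0.01, batch_size=20, seed=42)
    result = gradient.run([qc], [[0.5]]).result()
    # Stochastic approximation of d p(0)/d(theta) = -sin(0.5)/2 and d p(1)/d(theta) = +sin(0.5)/2.
    print(result.gradients[0][0])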

qiskit/algorithms/gradients/utils.py

@@ -0,0 +1,379 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2022.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
# pylint: disable=invalid-name
"""
Utility functions for gradients
"""
from __future__ import annotations
from collections import defaultdict
from copy import deepcopy
from dataclasses import dataclass
import numpy as np
from qiskit import transpile
from qiskit.circuit import (
ClassicalRegister,
Gate,
Instruction,
Parameter,
ParameterExpression,
QuantumCircuit,
QuantumRegister,
)
from qiskit.circuit.library.standard_gates import (
CXGate,
CYGate,
CZGate,
RXGate,
RXXGate,
RYGate,
RYYGate,
RZGate,
RZXGate,
RZZGate,
)
@dataclass
class ParameterShiftGradientCircuit:
"""Stores gradient circuit data for the parameter shift method"""
circuit: QuantumCircuit
"""The original quantum circuit"""
gradient_circuit: QuantumCircuit
"""An internal quantum circuit used to calculate the gradient"""
gradient_parameter_map: dict[Parameter, Parameter]
"""A dictionary maps the parameters of ``circuit`` to the parameters of ``gradient_circuit``"""
gradient_virtual_parameter_map: dict[Parameter, Parameter]
"""A dictionary maps the parameters of ``gradient_circuit`` to the virtual parameter variables"""
coeff_map: dict[Parameter, float | ParameterExpression]
"""A dictionary maps the parameters of ``gradient_circuit`` to their coefficients"""
def _make_param_shift_gradient_circuit_data(
circuit: QuantumCircuit,
) -> ParameterShiftGradientCircuit:
"""Makes a gradient circuit data for the parameter shift method. This re-assigns each parameter in
``circuit`` to a unique parameter, and construct a new gradient circuit with those new
parameters. Also, it makes maps used in later calculations.
Args:
circuit: The original quantum circuit
Returns:
necessary data to calculate gradients with the parameter shift method.
"""
supported_gates = [
"x",
"y",
"z",
"h",
"rx",
"ry",
"rz",
"p",
"cx",
"cy",
"cz",
"ryy",
"rxx",
"rzz",
"rzx",
]
circuit2 = transpile(circuit, basis_gates=supported_gates, optimization_level=0)
g_circuit = circuit2.copy_empty_like(f"g_{circuit2.name}")
param_inst_dict = defaultdict(list)
g_parameter_map = defaultdict(list)
g_virtual_parameter_map = {}
num_virtual_parameter_variables = 0
coeff_map = {}
for inst in circuit2.data:
new_inst = deepcopy(inst)
qubit_indices = [circuit2.qubits.index(qubit) for qubit in inst[1]]
new_inst.qubits = tuple(g_circuit.qubits[qubit_index] for qubit_index in qubit_indices)
# Assign new unique parameters when the instruction is parameterized.
if inst.operation.is_parameterized():
parameters = inst.operation.params
new_inst_parameters = []
# For a gate with multiple parameters e.g. a U gate
for parameter in parameters:
subs_map = {}
# For a gate parameter with multiple parameter variables.
# e.g. ry(θ) with θ = (2x + y)
for parameter_variable in parameter.parameters:
if parameter_variable in param_inst_dict:
new_parameter_variable = Parameter(
f"g{parameter_variable.name}_{len(param_inst_dict[parameter_variable])+1}"
)
else:
new_parameter_variable = Parameter(f"g{parameter_variable.name}_1")
subs_map[parameter_variable] = new_parameter_variable
param_inst_dict[parameter_variable].append(inst)
g_parameter_map[parameter_variable].append(new_parameter_variable)
# Coefficient to calculate derivative i.e. dw/dt in df/dw * dw/dt
coeff_map[new_parameter_variable] = parameter.gradient(parameter_variable)
# Substitute the parameter variables with the corresponding new parameter
# variables in ``subs_map``.
new_parameter = parameter.subs(subs_map)
# If new_parameter is not a single parameter variable, then add a new virtual
# parameter variable. e.g. ry(θ) with θ = (2x + y) becomes ry(θ + virtual_variable)
if not isinstance(new_parameter, Parameter):
virtual_parameter_variable = Parameter(
f"vθ_{num_virtual_parameter_variables+1}"
)
num_virtual_parameter_variables += 1
for new_parameter_variable in new_parameter.parameters:
g_virtual_parameter_map[new_parameter_variable] = virtual_parameter_variable
new_parameter = new_parameter + virtual_parameter_variable
new_inst_parameters.append(new_parameter)
new_inst.operation.params = new_inst_parameters
g_circuit.append(new_inst)
# for global phase
subs_map = {}
if isinstance(g_circuit.global_phase, ParameterExpression):
for parameter_variable in g_circuit.global_phase.parameters:
if parameter_variable in param_inst_dict:
new_parameter_variable = g_parameter_map[parameter_variable][0]
else:
new_parameter_variable = Parameter(f"g{parameter_variable.name}_1")
subs_map[parameter_variable] = new_parameter_variable
g_circuit.global_phase = g_circuit.global_phase.subs(subs_map)
return ParameterShiftGradientCircuit(
circuit=circuit2,
gradient_circuit=g_circuit,
gradient_virtual_parameter_map=g_virtual_parameter_map,
gradient_parameter_map=g_parameter_map,
coeff_map=coeff_map,
)
def _make_param_shift_base_parameter_values(
gradient_circuit_data: ParameterShiftGradientCircuit,
) -> list[np.ndarray]:
"""Makes base parameter values for the parameter shift method. Each base parameter value will
be added to the given parameter values in later calculations.
Args:
gradient_circuit_data: gradient circuit data for the base parameter values.
Returns:
The base parameter values for the parameter shift method.
"""
# Make internal parameter values for the parameter shift
g_parameters = gradient_circuit_data.gradient_circuit.parameters
plus_offsets = []
minus_offsets = []
# Make base decomposed parameter values for each original parameter
for g_param in g_parameters:
if g_param in gradient_circuit_data.gradient_virtual_parameter_map:
g_param = gradient_circuit_data.gradient_virtual_parameter_map[g_param]
idx = g_parameters.data.index(g_param)
plus = np.zeros(len(g_parameters))
plus[idx] += np.pi / 2
minus = np.zeros(len(g_parameters))
minus[idx] -= np.pi / 2
plus_offsets.append(plus)
minus_offsets.append(minus)
return plus_offsets + minus_offsets
def _param_shift_preprocessing(
    circuit: QuantumCircuit,
) -> tuple[ParameterShiftGradientCircuit, list[np.ndarray]]:
    """Preprocessing for the parameter shift method.
    Args:
        circuit: The original quantum circuit.
    Returns:
        The gradient circuit data and the base parameter values needed to calculate gradients
        with the parameter shift method.
    """
gradient_circuit_data = _make_param_shift_gradient_circuit_data(circuit)
base_parameter_values = _make_param_shift_base_parameter_values(gradient_circuit_data)
return gradient_circuit_data, base_parameter_values
def _make_param_shift_parameter_values(
gradient_circuit_data: ParameterShiftGradientCircuit,
base_parameter_values: list[np.ndarray],
parameter_values: np.ndarray,
param_set: set[Parameter],
) -> tuple[list[np.ndarray], list[np.ndarray], list[int], list[float]]:
"""Makes parameter values for the parameter shift method. Each parameter value will be added to
the base parameter values in later calculations.
Args:
gradient_circuit_data: gradient circuit data for the parameter shift method.
base_parameter_values: base parameter values for the parameter shift method.
parameter_values: parameter values to be added to the base parameter values.
param_set: set of parameters to be used in the parameter shift method.
    Returns:
        The plus- and minus-shifted parameter values, the result indices, and the gradient
        coefficients for the parameter shift method.
"""
circuit = gradient_circuit_data.circuit
gradient_circuit = gradient_circuit_data.gradient_circuit
gradient_parameter_values = np.zeros(len(gradient_circuit_data.gradient_circuit.parameters))
plus_offsets, minus_offsets, result_indices, coeffs = [], [], [], []
result_idx = 0
for i, param in enumerate(circuit.parameters):
g_params = gradient_circuit_data.gradient_parameter_map[param]
indices = [gradient_circuit.parameters.data.index(g_param) for g_param in g_params]
gradient_parameter_values[indices] = parameter_values[i]
if param in param_set:
plus_offsets.extend(base_parameter_values[idx] for idx in indices)
minus_offsets.extend(
base_parameter_values[idx + len(gradient_circuit.parameters)] for idx in indices
)
result_indices.extend(result_idx for _ in range(len(indices)))
result_idx += 1
for g_param in g_params:
coeff = gradient_circuit_data.coeff_map[g_param]
# if coeff has parameters, we need to substitute
if isinstance(coeff, ParameterExpression):
local_map = {
p: parameter_values[circuit.parameters.data.index(p)]
for p in coeff.parameters
}
bound_coeff = float(coeff.bind(local_map))
else:
bound_coeff = coeff
coeffs.append(bound_coeff / 2)
# add the base parameter values to the parameter values
gradient_parameter_values_plus = [
gradient_parameter_values + plus_offset for plus_offset in plus_offsets
]
gradient_parameter_values_minus = [
gradient_parameter_values + minus_offset for minus_offset in minus_offsets
]
return gradient_parameter_values_plus, gradient_parameter_values_minus, result_indices, coeffs
@dataclass
class LinearCombGradientCircuit:
"""Gradient circuit for the linear combination of unitaries method."""
gradient_circuit: QuantumCircuit
"""A gradient circuit for the linear combination of unitaries method."""
coeff: float | ParameterExpression
"""A coefficient corresponds to the gradient circuit."""
def _make_lin_comb_gradient_circuit(
circuit: QuantumCircuit, add_measurement: bool = False
) -> dict[Parameter, list[LinearCombGradientCircuit]]:
"""Makes gradient circuits for the linear combination of unitaries method.
Args:
circuit: The original quantum circuit.
        add_measurement: If True, add a measurement of the auxiliary qubit to each gradient
            circuit. Defaults to False. ``LinCombSamplerGradient`` calls this function with
            ``add_measurement`` set to True.
Returns:
A dictionary mapping a parameter to the corresponding list of ``LinearCombGradientCircuit``
"""
supported_gates = [
"rx",
"ry",
"rz",
"rzx",
"rzz",
"ryy",
"rxx",
"cx",
"cy",
"cz",
"ccx",
"swap",
"iswap",
"h",
"t",
"s",
"sdg",
"x",
"y",
"z",
]
circuit2 = transpile(circuit, basis_gates=supported_gates, optimization_level=0)
qr_aux = QuantumRegister(1, "aux")
cr_aux = ClassicalRegister(1, "aux")
circuit2.add_register(qr_aux)
circuit2.add_bits(cr_aux)
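    # The H and S† below are appended and then moved to the front of the circuit data so that the
    # auxiliary qubit is prepared before the original instructions run.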
circuit2.h(qr_aux)
circuit2.data.insert(0, circuit2.data.pop())
circuit2.sdg(qr_aux)
circuit2.data.insert(1, circuit2.data.pop())
grad_dict = defaultdict(list)
for i, (inst, qregs, _) in enumerate(circuit2.data):
if inst.is_parameterized():
param = inst.params[0]
for p in param.parameters:
gate = _gate_gradient(inst)
circuit3 = circuit2.copy()
# insert `gate` to i-th position
circuit3.append(gate, [qr_aux[0]] + qregs, [])
circuit3.data.insert(i, circuit3.data.pop())
circuit3.h(qr_aux)
if add_measurement:
circuit3.measure(qr_aux, cr_aux)
grad_dict[p].append(LinearCombGradientCircuit(circuit3, param.gradient(p)))
return grad_dict
def _gate_gradient(gate: Gate) -> Instruction:
"""Returns the derivative of the gate"""
# pylint: disable=too-many-return-statements
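    # The returned instruction is the (controlled) generator of the rotation, e.g.
    # d/dθ RX(θ) = -(i/2) X · RX(θ), so the gradient circuit applies a controlled-X from the
    # auxiliary qubit; two-qubit rotations use the tensor product of the corresponding Paulis.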
if isinstance(gate, RXGate):
return CXGate()
if isinstance(gate, RYGate):
return CYGate()
if isinstance(gate, RZGate):
return CZGate()
if isinstance(gate, RXXGate):
cxx_circ = QuantumCircuit(3)
cxx_circ.cx(0, 1)
cxx_circ.cx(0, 2)
cxx = cxx_circ.to_instruction()
return cxx
if isinstance(gate, RYYGate):
cyy_circ = QuantumCircuit(3)
cyy_circ.cy(0, 1)
cyy_circ.cy(0, 2)
cyy = cyy_circ.to_instruction()
return cyy
if isinstance(gate, RZZGate):
czz_circ = QuantumCircuit(3)
czz_circ.cz(0, 1)
czz_circ.cz(0, 2)
czz = czz_circ.to_instruction()
return czz
if isinstance(gate, RZXGate):
czx_circ = QuantumCircuit(3)
czx_circ.cx(0, 2)
czx_circ.cz(0, 1)
czx = czx_circ.to_instruction()
return czx
raise TypeError(f"Unrecognized parameterized gate, {gate}")


@ -0,0 +1,15 @@
---
features:
- |
    New gradient algorithms using the primitives have been added. They internally
    use the primitives to calculate the gradients. There are four types of
    gradient classes (Finite Difference, Parameter Shift, Linear Combination of
    Unitaries, and SPSA), available for both the Sampler and the Estimator.

    Example:

    .. code-block:: python

        estimator = Estimator(...)
        gradient = ParamShiftEstimatorGradient(estimator)
        job = gradient.run(circuits, observables, parameter_values)
        gradients = job.result().gradients
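
    Sampler-based gradients follow the same pattern; an illustrative sketch (the
    circuits are assumed to contain measurements):

    .. code-block:: python

        sampler = Sampler(...)
        gradient = ParamShiftSamplerGradient(sampler)
        job = gradient.run(circuits, parameter_values)
        gradients = job.result().gradients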


@ -0,0 +1,379 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2019, 2021.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
# =============================================================================
""" Test Quantum Gradient Framework """
import unittest
from test import combine
import numpy as np
from ddt import ddt
from qiskit import QuantumCircuit
from qiskit.algorithms.gradients import (
FiniteDiffEstimatorGradient,
LinCombEstimatorGradient,
ParamShiftEstimatorGradient,
SPSAEstimatorGradient,
)
from qiskit.circuit import Parameter
from qiskit.circuit.library import EfficientSU2, RealAmplitudes
from qiskit.circuit.library.standard_gates import RXXGate, RYYGate, RZXGate, RZZGate
from qiskit.primitives import Estimator
from qiskit.quantum_info import SparsePauliOp, Operator
from qiskit.quantum_info.random import random_pauli_list
from qiskit.test import QiskitTestCase
@ddt
class TestEstimatorGradient(QiskitTestCase):
"""Test Estimator Gradient"""
@combine(
grad=[FiniteDiffEstimatorGradient, ParamShiftEstimatorGradient, LinCombEstimatorGradient]
)
def test_gradient_operators(self, grad):
"""Test the estimator gradient for different operators"""
estimator = Estimator()
a = Parameter("a")
qc = QuantumCircuit(1)
qc.h(0)
qc.p(a, 0)
qc.h(0)
if grad is FiniteDiffEstimatorGradient:
gradient = grad(estimator, epsilon=1e-6)
else:
gradient = grad(estimator)
op = SparsePauliOp.from_list([("Z", 1)])
correct_result = -1 / np.sqrt(2)
param = [np.pi / 4]
value = gradient.run([qc], [op], [param]).result().gradients[0]
self.assertAlmostEqual(value[0], correct_result, 3)
op = SparsePauliOp.from_list([("Z", 1)])
value = gradient.run([qc], [op], [param]).result().gradients[0]
self.assertAlmostEqual(value[0], correct_result, 3)
op = Operator.from_label("Z")
value = gradient.run([qc], [op], [param]).result().gradients[0]
self.assertAlmostEqual(value[0], correct_result, 3)
@combine(
grad=[FiniteDiffEstimatorGradient, ParamShiftEstimatorGradient, LinCombEstimatorGradient]
)
def test_gradient_p(self, grad):
"""Test the estimator gradient for p"""
estimator = Estimator()
a = Parameter("a")
qc = QuantumCircuit(1)
qc.h(0)
qc.p(a, 0)
qc.h(0)
if grad is FiniteDiffEstimatorGradient:
gradient = grad(estimator, epsilon=1e-6)
else:
gradient = grad(estimator)
op = SparsePauliOp.from_list([("Z", 1)])
param_list = [[np.pi / 4], [0], [np.pi / 2]]
correct_results = [[-1 / np.sqrt(2)], [0], [-1]]
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [op], [param]).result().gradients[0]
for j, value in enumerate(gradients):
self.assertAlmostEqual(value, correct_results[i][j], 3)
@combine(
grad=[FiniteDiffEstimatorGradient, ParamShiftEstimatorGradient, LinCombEstimatorGradient]
)
def test_gradient_u(self, grad):
"""Test the estimator gradient for u"""
estimator = Estimator()
a = Parameter("a")
b = Parameter("b")
c = Parameter("c")
qc = QuantumCircuit(1)
qc.h(0)
qc.u(a, b, c, 0)
qc.h(0)
if grad is FiniteDiffEstimatorGradient:
gradient = grad(estimator, epsilon=1e-6)
else:
gradient = grad(estimator)
op = SparsePauliOp.from_list([("Z", 1)])
param_list = [[np.pi / 4, 0, 0], [np.pi / 4, np.pi / 4, np.pi / 4]]
correct_results = [[-0.70710678, 0.0, 0.0], [-0.35355339, -0.85355339, -0.85355339]]
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [op], [param]).result().gradients[0]
for j, value in enumerate(gradients):
self.assertAlmostEqual(value, correct_results[i][j], 3)
@combine(
grad=[FiniteDiffEstimatorGradient, ParamShiftEstimatorGradient, LinCombEstimatorGradient]
)
def test_gradient_efficient_su2(self, grad):
"""Test the estimator gradient for EfficientSU2"""
estimator = Estimator()
qc = EfficientSU2(2, reps=1)
op = SparsePauliOp.from_list([("ZI", 1)])
if grad is FiniteDiffEstimatorGradient:
gradient = grad(estimator, epsilon=1e-6)
else:
gradient = grad(estimator)
param_list = [
            [np.pi / 4 for _ in qc.parameters],
            [np.pi / 2 for _ in qc.parameters],
]
correct_results = [
[
-0.35355339,
-0.70710678,
0,
0.35355339,
0,
-0.70710678,
0,
0,
],
[0, 0, 0, 1, 0, 0, 0, 0],
]
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [op], [param]).result().gradients[0]
np.testing.assert_allclose(gradients, correct_results[i], atol=1e-3)
@combine(
grad=[FiniteDiffEstimatorGradient, ParamShiftEstimatorGradient, LinCombEstimatorGradient],
)
def test_gradient_2qubit_gate(self, grad):
"""Test the estimator gradient for 2 qubit gates"""
estimator = Estimator()
for gate in [RXXGate, RYYGate, RZZGate, RZXGate]:
param_list = [[np.pi / 4], [np.pi / 2]]
correct_results = [
[-0.70710678],
[-1],
]
op = SparsePauliOp.from_list([("ZI", 1)])
for i, param in enumerate(param_list):
a = Parameter("a")
qc = QuantumCircuit(2)
if grad is FiniteDiffEstimatorGradient:
gradient = grad(estimator, epsilon=1e-6)
else:
gradient = grad(estimator)
if gate is RZZGate:
qc.h([0, 1])
qc.append(gate(a), [qc.qubits[0], qc.qubits[1]], [])
qc.h([0, 1])
else:
qc.append(gate(a), [qc.qubits[0], qc.qubits[1]], [])
gradients = gradient.run([qc], [op], [param]).result().gradients[0]
np.testing.assert_allclose(gradients, correct_results[i], atol=1e-3)
@combine(
grad=[FiniteDiffEstimatorGradient, ParamShiftEstimatorGradient, LinCombEstimatorGradient]
)
def test_gradient_parameter_coefficient(self, grad):
"""Test the estimator gradient for parameter variables with coefficients"""
estimator = Estimator()
qc = RealAmplitudes(num_qubits=2, reps=1)
qc.rz(qc.parameters[0].exp() + 2 * qc.parameters[1], 0)
qc.rx(3.0 * qc.parameters[0] + qc.parameters[1].sin(), 1)
qc.u(qc.parameters[0], qc.parameters[1], qc.parameters[3], 1)
qc.p(2 * qc.parameters[0] + 1, 0)
qc.rxx(qc.parameters[0] + 2, 0, 1)
if grad is FiniteDiffEstimatorGradient:
gradient = grad(estimator, epsilon=1e-6)
else:
gradient = grad(estimator)
param_list = [[np.pi / 4 for _ in qc.parameters], [np.pi / 2 for _ in qc.parameters]]
correct_results = [
[-0.7266653, -0.4905135, -0.0068606, -0.9228880],
[-3.5972095, 0.10237173, -0.3117748, 0],
]
op = SparsePauliOp.from_list([("ZI", 1)])
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [op], [param]).result().gradients[0]
np.testing.assert_allclose(gradients, correct_results[i], atol=1e-3)
@combine(
grad=[FiniteDiffEstimatorGradient, ParamShiftEstimatorGradient, LinCombEstimatorGradient]
)
def test_gradient_parameters(self, grad):
"""Test the estimator gradient for parameters"""
estimator = Estimator()
a = Parameter("a")
b = Parameter("b")
qc = QuantumCircuit(1)
qc.rx(a, 0)
qc.rx(b, 0)
if grad is FiniteDiffEstimatorGradient:
gradient = grad(estimator, epsilon=1e-6)
else:
gradient = grad(estimator)
param_list = [[np.pi / 4, np.pi / 2]]
correct_results = [
[-0.70710678],
]
op = SparsePauliOp.from_list([("Z", 1)])
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [op], [param], parameters=[[a]]).result().gradients[0]
np.testing.assert_allclose(gradients, correct_results[i], atol=1e-3)
@combine(
grad=[FiniteDiffEstimatorGradient, ParamShiftEstimatorGradient, LinCombEstimatorGradient]
)
def test_gradient_multi_arguments(self, grad):
"""Test the estimator gradient for multiple arguments"""
estimator = Estimator()
a = Parameter("a")
b = Parameter("b")
qc = QuantumCircuit(1)
qc.rx(a, 0)
qc2 = QuantumCircuit(1)
qc2.rx(b, 0)
if grad is FiniteDiffEstimatorGradient:
gradient = grad(estimator, epsilon=1e-6)
else:
gradient = grad(estimator)
param_list = [[np.pi / 4], [np.pi / 2]]
correct_results = [
[-0.70710678],
[-1],
]
op = SparsePauliOp.from_list([("Z", 1)])
gradients = gradient.run([qc, qc2], [op] * 2, param_list).result().gradients
np.testing.assert_allclose(gradients, correct_results, atol=1e-3)
c = Parameter("c")
qc3 = QuantumCircuit(1)
qc3.rx(c, 0)
qc3.ry(a, 0)
param_list2 = [[np.pi / 4], [np.pi / 4, np.pi / 4], [np.pi / 4, np.pi / 4]]
correct_results2 = [
[-0.70710678],
[-0.5],
[-0.5, -0.5],
]
gradients2 = (
gradient.run([qc, qc3, qc3], [op] * 3, param_list2, parameters=[[a], [c], None])
.result()
.gradients
)
np.testing.assert_allclose(gradients2[0], correct_results2[0], atol=1e-3)
np.testing.assert_allclose(gradients2[1], correct_results2[1], atol=1e-3)
np.testing.assert_allclose(gradients2[2], correct_results2[2], atol=1e-3)
@combine(
grad=[FiniteDiffEstimatorGradient, ParamShiftEstimatorGradient, LinCombEstimatorGradient]
)
def test_gradient_validation(self, grad):
"""Test estimator gradient's validation"""
estimator = Estimator()
a = Parameter("a")
qc = QuantumCircuit(1)
qc.rx(a, 0)
if grad is FiniteDiffEstimatorGradient:
gradient = grad(estimator, epsilon=1e-6)
with self.assertRaises(ValueError):
_ = grad(estimator, epsilon=-0.1)
else:
gradient = grad(estimator)
param_list = [[np.pi / 4], [np.pi / 2]]
op = SparsePauliOp.from_list([("Z", 1)])
with self.assertRaises(ValueError):
gradient.run([qc], [op], param_list)
with self.assertRaises(ValueError):
gradient.run([qc, qc], [op, op], param_list, parameters=[[a]])
with self.assertRaises(ValueError):
gradient.run([qc, qc], [op], param_list, parameters=[[a]])
with self.assertRaises(ValueError):
gradient.run([qc], [op], [[np.pi / 4, np.pi / 4]])
def test_spsa_gradient(self):
"""Test the SPSA estimator gradient"""
estimator = Estimator()
with self.assertRaises(ValueError):
_ = SPSAEstimatorGradient(estimator, epsilon=-0.1)
a = Parameter("a")
b = Parameter("b")
qc = QuantumCircuit(2)
qc.rx(b, 0)
qc.rx(a, 1)
param_list = [[1, 1]]
correct_results = [[-0.84147098, 0.84147098]]
op = SparsePauliOp.from_list([("ZI", 1)])
gradient = SPSAEstimatorGradient(estimator, epsilon=1e-6, seed=123)
gradients = gradient.run([qc], [op], param_list).result().gradients
np.testing.assert_allclose(gradients, correct_results, atol=1e-3)
# multi parameters
gradient = SPSAEstimatorGradient(estimator, epsilon=1e-6, seed=123)
param_list2 = [[1, 1], [1, 1], [3, 3]]
gradients2 = (
gradient.run([qc] * 3, [op] * 3, param_list2, parameters=[None, [b], None])
.result()
.gradients
)
correct_results2 = [[-0.84147098, 0.84147098], [0.84147098], [-0.14112001, 0.14112001]]
for grad, correct in zip(gradients2, correct_results2):
np.testing.assert_allclose(grad, correct, atol=1e-3)
# batch size
correct_results = [[-0.84147098, 0.1682942]]
gradient = SPSAEstimatorGradient(estimator, epsilon=1e-6, batch_size=5, seed=123)
gradients = gradient.run([qc], [op], param_list).result().gradients
np.testing.assert_allclose(gradients, correct_results, atol=1e-3)
@combine(grad=[ParamShiftEstimatorGradient, LinCombEstimatorGradient])
def test_gradient_random_parameters(self, grad):
"""Test param shift and lin comb w/ random parameters"""
rng = np.random.default_rng(123)
qc = RealAmplitudes(num_qubits=3, reps=1)
params = qc.parameters
qc.rx(3.0 * params[0] + params[1].sin(), 0)
qc.ry(params[0].exp() + 2 * params[1], 1)
qc.rz(params[0] * params[1] - params[2], 2)
qc.p(2 * params[0] + 1, 0)
qc.u(params[0].sin(), params[1] - 2, params[2] * params[3], 1)
qc.sx(2)
qc.rxx(params[0].sin(), 1, 2)
qc.ryy(params[1].cos(), 2, 0)
qc.rzz(params[2] * 2, 0, 1)
qc.crx(params[0].exp(), 1, 2)
qc.cry(params[1].arctan(), 2, 0)
qc.crz(params[2] * -2, 0, 1)
qc.dcx(0, 1)
qc.csdg(0, 1)
qc.toffoli(0, 1, 2)
qc.iswap(0, 2)
qc.swap(1, 2)
qc.global_phase = params[0] * params[1] + params[2].cos().exp()
size = 10
op = SparsePauliOp(random_pauli_list(num_qubits=qc.num_qubits, size=size, seed=rng))
op.coeffs = rng.normal(0, 10, size)
estimator = Estimator()
findiff = FiniteDiffEstimatorGradient(estimator, 1e-6)
gradient = grad(estimator)
num_tries = 10
param_values = rng.normal(0, 2, (num_tries, qc.num_parameters)).tolist()
np.testing.assert_allclose(
findiff.run([qc] * num_tries, [op] * num_tries, param_values).result().gradients,
gradient.run([qc] * num_tries, [op] * num_tries, param_values).result().gradients,
rtol=1e-4,
)
if __name__ == "__main__":
unittest.main()


@ -0,0 +1,519 @@
# This code is part of Qiskit.
#
# (C) Copyright IBM 2019, 2021.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
# =============================================================================
""" Test Quantum Gradient Framework """
import unittest
from test import combine
from typing import List
import numpy as np
from ddt import ddt
from qiskit import QuantumCircuit
from qiskit.algorithms.gradients import (
FiniteDiffSamplerGradient,
LinCombSamplerGradient,
ParamShiftSamplerGradient,
SPSASamplerGradient,
)
from qiskit.circuit import Parameter
from qiskit.circuit.library import EfficientSU2, RealAmplitudes
from qiskit.circuit.library.standard_gates import RXXGate, RYYGate, RZXGate, RZZGate
from qiskit.primitives import Sampler
from qiskit.result import QuasiDistribution
from qiskit.test import QiskitTestCase
@ddt
class TestSamplerGradient(QiskitTestCase):
"""Test Sampler Gradient"""
@combine(grad=[FiniteDiffSamplerGradient, ParamShiftSamplerGradient, LinCombSamplerGradient])
def test_gradient_p(self, grad):
"""Test the sampler gradient for p"""
sampler = Sampler()
a = Parameter("a")
qc = QuantumCircuit(1)
qc.h(0)
qc.p(a, 0)
qc.h(0)
qc.measure_all()
if grad is FiniteDiffSamplerGradient:
gradient = grad(sampler, epsilon=1e-6)
else:
gradient = grad(sampler)
param_list = [[np.pi / 4], [0], [np.pi / 2]]
correct_results = [
[{0: -0.5 / np.sqrt(2), 1: 0.5 / np.sqrt(2)}],
[{0: 0, 1: 0}],
[{0: -0.499999, 1: 0.499999}],
]
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [param]).result().gradients[0]
for j, quasi_dist in enumerate(gradients):
for k in quasi_dist:
self.assertAlmostEqual(quasi_dist[k], correct_results[i][j][k], 3)
@combine(grad=[FiniteDiffSamplerGradient, ParamShiftSamplerGradient, LinCombSamplerGradient])
def test_gradient_u(self, grad):
"""Test the sampler gradient for u"""
sampler = Sampler()
a = Parameter("a")
b = Parameter("b")
c = Parameter("c")
qc = QuantumCircuit(1)
qc.h(0)
qc.u(a, b, c, 0)
qc.h(0)
qc.measure_all()
if grad is FiniteDiffSamplerGradient:
gradient = grad(sampler, epsilon=1e-6)
else:
gradient = grad(sampler)
param_list = [[np.pi / 4, 0, 0], [np.pi / 4, np.pi / 4, np.pi / 4]]
correct_results = [
[{0: -0.5 / np.sqrt(2), 1: 0.5 / np.sqrt(2)}, {0: 0, 1: 0}, {0: 0, 1: 0}],
[{0: -0.176777, 1: 0.176777}, {0: -0.426777, 1: 0.426777}, {0: -0.426777, 1: 0.426777}],
]
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [param]).result().gradients[0]
for j, quasi_dist in enumerate(gradients):
for k in quasi_dist:
self.assertAlmostEqual(quasi_dist[k], correct_results[i][j][k], 3)
@combine(grad=[FiniteDiffSamplerGradient, ParamShiftSamplerGradient, LinCombSamplerGradient])
def test_gradient_efficient_su2(self, grad):
"""Test the sampler gradient for EfficientSU2"""
sampler = Sampler()
qc = EfficientSU2(2, reps=1)
qc.measure_all()
if grad is FiniteDiffSamplerGradient:
gradient = grad(sampler, epsilon=1e-6)
else:
gradient = grad(sampler)
param_list = [
            [np.pi / 4 for _ in qc.parameters],
            [np.pi / 2 for _ in qc.parameters],
]
correct_results = [
[
{
0: -0.11963834764831836,
1: -0.05713834764831845,
2: -0.21875000000000003,
3: 0.39552669529663675,
},
{
0: -0.32230339059327373,
1: -0.031250000000000014,
2: 0.2339150429449554,
3: 0.11963834764831843,
},
{
0: 0.012944173824159189,
1: -0.01294417382415923,
2: 0.07544417382415919,
3: -0.07544417382415919,
},
{
0: 0.2080266952966367,
1: -0.03125000000000002,
2: -0.11963834764831842,
3: -0.057138347648318405,
},
{
0: -0.11963834764831838,
1: 0.11963834764831838,
2: -0.21875000000000003,
3: 0.21875,
},
{
0: -0.2781092167691146,
1: -0.0754441738241592,
2: 0.27810921676911443,
3: 0.07544417382415924,
},
{0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0},
{0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0},
],
[
{
0: -4.163336342344337e-17,
1: 2.7755575615628914e-17,
2: -4.163336342344337e-17,
3: 0.0,
},
{0: 0.0, 1: -1.3877787807814457e-17, 2: 4.163336342344337e-17, 3: 0.0},
{
0: -0.24999999999999994,
1: 0.24999999999999994,
2: 0.24999999999999994,
3: -0.24999999999999994,
},
{
0: 0.24999999999999994,
1: 0.24999999999999994,
2: -0.24999999999999994,
3: -0.24999999999999994,
},
{
0: -4.163336342344337e-17,
1: 4.163336342344337e-17,
2: -4.163336342344337e-17,
3: 5.551115123125783e-17,
},
{
0: -0.24999999999999994,
1: 0.24999999999999994,
2: 0.24999999999999994,
3: -0.24999999999999994,
},
{0: 0.0, 1: 2.7755575615628914e-17, 2: 0.0, 3: 2.7755575615628914e-17},
{0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0},
],
]
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [param]).result().gradients[0]
for j, quasi_dist in enumerate(gradients):
for k in quasi_dist:
self.assertAlmostEqual(quasi_dist[k], correct_results[i][j][k], 3)
@combine(grad=[FiniteDiffSamplerGradient, ParamShiftSamplerGradient, LinCombSamplerGradient])
def test_gradient_2qubit_gate(self, grad):
"""Test the sampler gradient for 2 qubit gates"""
sampler = Sampler()
for gate in [RXXGate, RYYGate, RZZGate, RZXGate]:
param_list = [[np.pi / 4], [np.pi / 2]]
if gate is RZXGate:
correct_results = [
[{0: -0.5 / np.sqrt(2), 1: 0, 2: 0.5 / np.sqrt(2), 3: 0}],
[{0: -0.5, 1: 0, 2: 0.5, 3: 0}],
]
else:
correct_results = [
[{0: -0.5 / np.sqrt(2), 1: 0, 2: 0, 3: 0.5 / np.sqrt(2)}],
[{0: -0.5, 1: 0, 2: 0, 3: 0.5}],
]
for i, param in enumerate(param_list):
a = Parameter("a")
qc = QuantumCircuit(2)
qc.append(gate(a), [qc.qubits[0], qc.qubits[1]], [])
qc.measure_all()
if grad is FiniteDiffSamplerGradient:
gradient = grad(sampler, epsilon=1e-6)
else:
gradient = grad(sampler)
gradients = gradient.run([qc], [param]).result().gradients[0]
for j, quasi_dist in enumerate(gradients):
for k in quasi_dist:
self.assertAlmostEqual(quasi_dist[k], correct_results[i][j][k], 3)
@combine(grad=[FiniteDiffSamplerGradient, ParamShiftSamplerGradient, LinCombSamplerGradient])
def test_gradient_parameter_coefficient(self, grad):
"""Test the sampler gradient for parameter variables with coefficients"""
sampler = Sampler()
qc = RealAmplitudes(num_qubits=2, reps=1)
qc.rz(qc.parameters[0].exp() + 2 * qc.parameters[1], 0)
qc.rx(3.0 * qc.parameters[0] + qc.parameters[1].sin(), 1)
qc.u(qc.parameters[0], qc.parameters[1], qc.parameters[3], 1)
qc.p(2 * qc.parameters[0] + 1, 0)
qc.rxx(qc.parameters[0] + 2, 0, 1)
qc.measure_all()
if grad is FiniteDiffSamplerGradient:
gradient = grad(sampler, epsilon=1e-6)
else:
gradient = grad(sampler)
param_list = [[np.pi / 4 for _ in qc.parameters], [np.pi / 2 for _ in qc.parameters]]
correct_results = [
[
{
0: 0.30014831912265927,
1: -0.6634809704357856,
2: 0.343589357193753,
3: 0.019743294119373426,
},
{
0: 0.16470607453981906,
1: -0.40996282450610577,
2: 0.08791803062881773,
3: 0.15733871933746948,
},
{
0: 0.27036068339663866,
1: -0.273790986018701,
2: 0.12752010079553433,
3: -0.12408979817347202,
},
{
0: -0.2098616294167757,
1: -0.2515823946449894,
2: 0.21929102305386305,
3: 0.24215300100790207,
},
],
[
{
0: -1.844810060881004,
1: 0.04620532700836027,
2: 1.6367366426074323,
3: 0.16186809126521057,
},
{
0: 0.07296073407769421,
1: -0.021774869186331716,
2: 0.02177486918633173,
3: -0.07296073407769456,
},
{
0: -0.07794369186049102,
1: -0.07794369186049122,
2: 0.07794369186049117,
3: 0.07794369186049112,
},
{
0: 0.0,
1: 0.0,
2: 0.0,
3: 0.0,
},
],
]
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [param]).result().gradients[0]
for j, quasi_dist in enumerate(gradients):
for k in quasi_dist:
self.assertAlmostEqual(quasi_dist[k], correct_results[i][j][k], 2)
@combine(grad=[FiniteDiffSamplerGradient, ParamShiftSamplerGradient, LinCombSamplerGradient])
def test_gradient_parameters(self, grad):
"""Test the sampler gradient for parameters"""
sampler = Sampler()
a = Parameter("a")
b = Parameter("b")
qc = QuantumCircuit(1)
qc.rx(a, 0)
qc.rz(b, 0)
qc.measure_all()
if grad is FiniteDiffSamplerGradient:
gradient = grad(sampler, epsilon=1e-6)
else:
gradient = grad(sampler)
param_list = [[np.pi / 4, np.pi / 2]]
correct_results = [
[{0: -0.5 / np.sqrt(2), 1: 0.5 / np.sqrt(2)}],
]
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [param], parameters=[[a]]).result().gradients[0]
for j, quasi_dist in enumerate(gradients):
for k in quasi_dist:
self.assertAlmostEqual(quasi_dist[k], correct_results[i][j][k], 3)
@combine(grad=[FiniteDiffSamplerGradient, ParamShiftSamplerGradient, LinCombSamplerGradient])
def test_gradient_multi_arguments(self, grad):
"""Test the sampler gradient for multiple arguments"""
sampler = Sampler()
a = Parameter("a")
b = Parameter("b")
qc = QuantumCircuit(1)
qc.rx(a, 0)
qc.measure_all()
qc2 = QuantumCircuit(1)
qc2.rx(b, 0)
qc2.measure_all()
if grad is FiniteDiffSamplerGradient:
gradient = grad(sampler, epsilon=1e-6)
else:
gradient = grad(sampler)
param_list = [[np.pi / 4], [np.pi / 2]]
correct_results = [
[{0: -0.5 / np.sqrt(2), 1: 0.5 / np.sqrt(2)}],
[{0: -0.499999, 1: 0.499999}],
]
gradients = gradient.run([qc, qc2], param_list).result().gradients
for j, q_dists in enumerate(gradients):
quasi_dist = q_dists[0]
for k in quasi_dist:
self.assertAlmostEqual(quasi_dist[k], correct_results[j][0][k], 3)
c = Parameter("c")
qc3 = QuantumCircuit(1)
qc3.rx(c, 0)
qc3.ry(a, 0)
qc3.measure_all()
param_list2 = [[np.pi / 4], [np.pi / 4, np.pi / 4], [np.pi / 4, np.pi / 4]]
gradients = (
gradient.run([qc, qc3, qc3], param_list2, parameters=[[a], [c], None])
.result()
.gradients
)
correct_results = [
[{0: -0.5 / np.sqrt(2), 1: 0.5 / np.sqrt(2)}],
[{0: -0.25, 1: 0.25}],
[{0: -0.25, 1: 0.25}, {0: -0.25, 1: 0.25}],
]
for i, result in enumerate(gradients):
for j, q_dists in enumerate(result):
for k in q_dists:
self.assertAlmostEqual(q_dists[k], correct_results[i][j][k], 3)
@combine(grad=[FiniteDiffSamplerGradient, ParamShiftSamplerGradient, LinCombSamplerGradient])
def test_gradient_validation(self, grad):
"""Test sampler gradient's validation"""
sampler = Sampler()
a = Parameter("a")
qc = QuantumCircuit(1)
qc.rx(a, 0)
qc.measure_all()
if grad is FiniteDiffSamplerGradient:
gradient = grad(sampler, epsilon=1e-6)
with self.assertRaises(ValueError):
_ = grad(sampler, epsilon=-0.1)
else:
gradient = grad(sampler)
param_list = [[np.pi / 4], [np.pi / 2]]
with self.assertRaises(ValueError):
gradient.run([qc], param_list)
with self.assertRaises(ValueError):
gradient.run([qc, qc], param_list, parameters=[[a]])
with self.assertRaises(ValueError):
gradient.run([qc], [[np.pi / 4, np.pi / 4]])
def test_spsa_gradient(self):
"""Test the SPSA sampler gradient"""
sampler = Sampler()
with self.assertRaises(ValueError):
_ = SPSASamplerGradient(sampler, epsilon=-0.1)
a = Parameter("a")
b = Parameter("b")
qc = QuantumCircuit(2)
qc.rx(b, 0)
qc.rx(a, 1)
qc.measure_all()
param_list = [[1, 2]]
correct_results = [
[
{0: 0.2273244, 1: -0.6480598, 2: 0.2273244, 3: 0.1934111},
{0: -0.2273244, 1: 0.6480598, 2: -0.2273244, 3: -0.1934111},
],
]
gradient = SPSASamplerGradient(sampler, epsilon=1e-6, seed=123)
for i, param in enumerate(param_list):
gradients = gradient.run([qc], [param]).result().gradients[0]
for j, quasi_dist in enumerate(gradients):
for k in quasi_dist:
self.assertAlmostEqual(quasi_dist[k], correct_results[i][j][k], 3)
# multi parameters
param_list2 = [[1, 2], [1, 2], [3, 4]]
correct_results2 = [
[
{0: 0.2273244, 1: -0.6480598, 2: 0.2273244, 3: 0.1934111},
{0: -0.2273244, 1: 0.6480598, 2: -0.2273244, 3: -0.1934111},
],
[
{0: -0.2273244, 1: 0.6480598, 2: -0.2273244, 3: -0.1934111},
],
[
{0: -0.0141129, 1: -0.0564471, 2: -0.3642884, 3: 0.4348484},
{0: 0.0141129, 1: 0.0564471, 2: 0.3642884, 3: -0.4348484},
],
]
gradient = SPSASamplerGradient(sampler, epsilon=1e-6, seed=123)
gradients = (
gradient.run([qc] * 3, param_list2, parameters=[None, [b], None]).result().gradients
)
for i, result in enumerate(gradients):
for j, q_dists in enumerate(result):
for k in q_dists:
self.assertAlmostEqual(q_dists[k], correct_results2[i][j][k], 3)
# batch size
param_list = [[1, 1]]
gradient = SPSASamplerGradient(sampler, epsilon=1e-6, batch_size=4, seed=123)
gradients = gradient.run([qc], param_list).result().gradients
correct_results3 = [
[
{
0: -0.1620149622932887,
1: -0.25872053011771756,
2: 0.3723827084675668,
3: 0.04835278392088804,
},
{
0: -0.1620149622932887,
1: 0.3723827084675668,
2: -0.25872053011771756,
3: 0.04835278392088804,
},
]
]
for i, result in enumerate(gradients):
for j, q_dists in enumerate(result):
for k in q_dists:
self.assertAlmostEqual(q_dists[k], correct_results3[i][j][k], 3)
@combine(grad=[ParamShiftSamplerGradient, LinCombSamplerGradient])
def test_gradient_random_parameters(self, grad):
"""Test param shift and lin comb w/ random parameters"""
rng = np.random.default_rng(123)
qc = RealAmplitudes(num_qubits=3, reps=1)
params = qc.parameters
qc.rx(3.0 * params[0] + params[1].sin(), 0)
qc.ry(params[0].exp() + 2 * params[1], 1)
qc.rz(params[0] * params[1] - params[2], 2)
qc.p(2 * params[0] + 1, 0)
qc.u(params[0].sin(), params[1] - 2, params[2] * params[3], 1)
qc.sx(2)
qc.rxx(params[0].sin(), 1, 2)
qc.ryy(params[1].cos(), 2, 0)
qc.rzz(params[2] * 2, 0, 1)
qc.crx(params[0].exp(), 1, 2)
qc.cry(params[1].arctan(), 2, 0)
qc.crz(params[2] * -2, 0, 1)
qc.dcx(0, 1)
qc.csdg(0, 1)
qc.toffoli(0, 1, 2)
qc.iswap(0, 2)
qc.swap(1, 2)
qc.global_phase = params[0] * params[1] + params[2].cos().exp()
qc.measure_all()
sampler = Sampler()
findiff = FiniteDiffSamplerGradient(sampler, 1e-6)
gradient = grad(sampler)
num_qubits = qc.num_qubits
num_tries = 10
param_values = rng.normal(0, 2, (num_tries, qc.num_parameters)).tolist()
result1 = findiff.run([qc] * num_tries, param_values).result().gradients
result2 = gradient.run([qc] * num_tries, param_values).result().gradients
self.assertEqual(len(result1), len(result2))
for res1, res2 in zip(result1, result2):
array1 = _quasi2array(res1, num_qubits)
array2 = _quasi2array(res2, num_qubits)
np.testing.assert_allclose(array1, array2, rtol=1e-4)
def _quasi2array(quasis: List[QuasiDistribution], num_qubits: int) -> np.ndarray:
ret = np.zeros((len(quasis), 2**num_qubits))
for i, quasi in enumerate(quasis):
ret[i, list(quasi.keys())] = list(quasi.values())
return ret
if __name__ == "__main__":
unittest.main()