---
title: GradientDescent
description: API reference for qiskit.algorithms.optimizers.GradientDescent
in_page_toc_min_heading_level: 1
python_api_type: class
python_api_name: qiskit.algorithms.optimizers.GradientDescent
---

# GradientDescent

<Class id="qiskit.algorithms.optimizers.GradientDescent" isDedicatedPage={true} github="https://github.com/qiskit/qiskit/tree/stable/0.46/qiskit/algorithms/optimizers/gradient_descent.py" signature="qiskit.algorithms.optimizers.GradientDescent(maxiter=100, learning_rate=0.01, tol=1e-07, callback=None, perturbation=None)" modifiers="class">

Bases: [`SteppableOptimizer`](qiskit.algorithms.optimizers.SteppableOptimizer "qiskit.algorithms.optimizers.steppable_optimizer.SteppableOptimizer")

The gradient descent minimization routine.

For a function $f$ and an initial point $\vec\theta_0$, the standard (or “vanilla”) gradient descent method is an iterative scheme to find the minimum $\vec\theta^*$ of $f$ by updating the parameters in the direction of the negative gradient of $f$

$$
\vec\theta_{n+1} = \vec\theta_{n} - \eta_n \vec\nabla f(\vec\theta_{n}),
$$

for a small learning rate $\eta_n > 0$.

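For illustration, a single update of this rule in plain NumPy, with a fixed learning rate and a stand-in gradient (both hypothetical, not part of this API), might look like:

```python
import numpy as np

eta = 0.01  # fixed learning rate

def grad_f(theta):
    # Stand-in analytic gradient of f(theta) = ||theta||^2,
    # used only to illustrate the update rule.
    return 2 * theta

theta = np.array([1.0, 0.5, -0.2])
# One vanilla gradient descent update: theta_{n+1} = theta_n - eta * grad f(theta_n)
theta = theta - eta * grad_f(theta)
```
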
You can either provide the analytic gradient $\vec\nabla f$ as `jac` in the [`minimize()`](#qiskit.algorithms.optimizers.GradientDescent.minimize "qiskit.algorithms.optimizers.GradientDescent.minimize") method, or, if you do not provide it, use a finite difference approximation of the gradient. To adapt the size of the perturbation in the finite difference gradients, set the `perturbation` property in the initializer.

This optimizer supports a callback function. If provided in the initializer, the optimizer will call the callback in each iteration with the following information in this order: current number of function values, current parameters, current function value, norm of current gradient.

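For instance, a minimal sketch of a callback that records the optimization history (the names `history` and `callback` are illustrative, not part of the API):

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

history = []

def callback(nfev, x, fx, stepsize):
    # Arguments arrive in the documented order: number of function
    # evaluations, current parameters, current function value,
    # norm of the current gradient.
    history.append((nfev, np.copy(x), fx, stepsize))

optimizer = GradientDescent(maxiter=100, callback=callback)
```
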
**Examples**

A minimal example that uses finite difference gradients with a default perturbation of 0.01 and a default learning rate of 0.01.

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def f(x):
    return (np.linalg.norm(x) - 1) ** 2

initial_point = np.array([1, 0.5, -0.2])

optimizer = GradientDescent(maxiter=100)

result = optimizer.minimize(fun=f, x0=initial_point)

print(f"Found minimum {result.x} at a value "
      f"of {result.fun} using {result.nfev} evaluations.")
```

An example where the learning rate is supplied as a generator factory and we provide the analytic gradient. Note how much faster this converges (i.e. fewer `nfev`) compared to the previous example.

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def learning_rate():
    power = 0.6
    constant_coeff = 0.1

    def powerlaw():
        n = 0
        while True:
            yield constant_coeff * (n ** power)
            n += 1

    return powerlaw()

def f(x):
    return (np.linalg.norm(x) - 1) ** 2

def grad_f(x):
    return 2 * (np.linalg.norm(x) - 1) * x / np.linalg.norm(x)

initial_point = np.array([1, 0.5, -0.2])

optimizer = GradientDescent(maxiter=100, learning_rate=learning_rate)
result = optimizer.minimize(fun=f, jac=grad_f, x0=initial_point)

print(f"Found minimum {result.x} at a value "
      f"of {result.fun} using {result.nfev} evaluations.")
```

Another example, where the evaluation of the function has a chance of failing. The user, with specific knowledge about their function, can catch these errors and handle them before passing the result to the optimizer.

> ```python
> import random
> import numpy as np
> from qiskit.algorithms.optimizers import GradientDescent, TellData
>
> def objective(x):
>     if random.choice([True, False]):
>         return None
>     else:
>         return (np.linalg.norm(x) - 1) ** 2
>
> def grad(x):
>     if random.choice([True, False]):
>         return None
>     else:
>         return 2 * (np.linalg.norm(x) - 1) * x / np.linalg.norm(x)
>
>
> initial_point = np.random.normal(0, 1, size=(100,))
>
> optimizer = GradientDescent(maxiter=20)
> optimizer.start(x0=initial_point, fun=objective, jac=grad)
>
> while optimizer.continue_condition():
>     ask_data = optimizer.ask()
>     evaluated_gradient = None
>
>     # Retry the evaluation until the gradient call succeeds.
>     while evaluated_gradient is None:
>         evaluated_gradient = grad(ask_data.x_center)
>         optimizer.state.njev += 1
>
>     optimizer.state.nit += 1
>
>     tell_data = TellData(eval_jac=evaluated_gradient)
>     optimizer.tell(ask_data=ask_data, tell_data=tell_data)
>
> result = optimizer.create_result()
> ```

Users that aren’t dealing with complicated functions and who are more familiar with step-by-step optimization algorithms can use the [`step()`](#qiskit.algorithms.optimizers.GradientDescent.step "qiskit.algorithms.optimizers.GradientDescent.step") method, which wraps the [`ask()`](#qiskit.algorithms.optimizers.GradientDescent.ask "qiskit.algorithms.optimizers.GradientDescent.ask") and [`tell()`](#qiskit.algorithms.optimizers.GradientDescent.tell "qiskit.algorithms.optimizers.GradientDescent.tell") methods. In the same spirit, the method [`minimize()`](#qiskit.algorithms.optimizers.GradientDescent.minimize "qiskit.algorithms.optimizers.GradientDescent.minimize") will optimize the function and return the result.

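For instance, a minimal sketch of such a step-by-step loop, reusing the objective from the examples above, could look like:

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def f(x):
    return (np.linalg.norm(x) - 1) ** 2

optimizer = GradientDescent(maxiter=100)
optimizer.start(fun=f, x0=np.array([1, 0.5, -0.2]))

while optimizer.continue_condition():
    # Each step() composes ask(), evaluate() and tell().
    optimizer.step()

result = optimizer.create_result()
```
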
To see other libraries that use this interface, one can visit: [https://optuna.readthedocs.io/en/stable/tutorial/20\_recipes/009\_ask\_and\_tell.html](https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/009_ask_and_tell.html)

**Parameters**

* **maxiter** ([*int*](https://docs.python.org/3/library/functions.html#int "(in Python v3.12)")) – The maximum number of iterations.
* **learning\_rate** ([*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)") *|*[*list*](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.12)")*\[*[*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")*] | np.ndarray | Callable\[\[], Generator\[*[*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")*, None, None]]*) – A constant, list, array or factory of generators yielding learning rates for the parameter updates. See the docstring for an example.
* **tol** ([*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")) – If the norm of the parameter update is smaller than this threshold, the optimizer has converged.
* **perturbation** ([*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)") *| None*) – If no gradient is passed to [`minimize()`](#qiskit.algorithms.optimizers.GradientDescent.minimize "qiskit.algorithms.optimizers.GradientDescent.minimize"), the gradient is approximated with a forward finite difference scheme using a perturbation of size `perturbation` in both directions (defaults to 1e-2 if required). Ignored when an explicit function for the gradient is provided.

**Raises**

[**ValueError**](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.12)") – If `learning_rate` is an array and its length is less than `maxiter`.

## Attributes

### bounds\_support\_level

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.bounds_support_level">
Returns bounds support level
</Attribute>

### gradient\_support\_level

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.gradient_support_level">
Returns gradient support level
</Attribute>

### initial\_point\_support\_level

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.initial_point_support_level">
Returns initial point support level
</Attribute>

### is\_bounds\_ignored

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_bounds_ignored">
Returns is bounds ignored
</Attribute>

### is\_bounds\_required

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_bounds_required">
Returns is bounds required
</Attribute>

### is\_bounds\_supported

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_bounds_supported">
Returns is bounds supported
</Attribute>

### is\_gradient\_ignored

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_gradient_ignored">
Returns is gradient ignored
</Attribute>

### is\_gradient\_required

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_gradient_required">
Returns is gradient required
</Attribute>

### is\_gradient\_supported

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_gradient_supported">
Returns is gradient supported
</Attribute>

### is\_initial\_point\_ignored

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_initial_point_ignored">
Returns is initial point ignored
</Attribute>

### is\_initial\_point\_required

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_initial_point_required">
Returns is initial point required
</Attribute>

### is\_initial\_point\_supported

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_initial_point_supported">
Returns is initial point supported
</Attribute>

### perturbation

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.perturbation">
Returns the perturbation.

This is the perturbation used in the finite difference gradient approximation.
</Attribute>

### setting

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.setting">
Return setting
</Attribute>

### settings

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.settings" />

### state

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.state">
Return the current state of the optimizer.
</Attribute>

### tol

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.tol">
Returns the tolerance of the optimizer.

Any step with a stepsize smaller than this value will stop the optimization.
</Attribute>

## Methods

### ask

<Function id="qiskit.algorithms.optimizers.GradientDescent.ask" signature="ask()">
Returns an object with the data needed to evaluate the gradient.

If this object contains a gradient function, the gradient can be evaluated directly. Otherwise, it is approximated with a finite difference scheme.

**Return type**

[*AskData*](qiskit.algorithms.optimizers.AskData "qiskit.algorithms.optimizers.steppable_optimizer.AskData")
</Function>

### continue\_condition

<Function id="qiskit.algorithms.optimizers.GradientDescent.continue_condition" signature="continue_condition()">
Condition that indicates the optimization process should come to an end.

When the stepsize is smaller than the tolerance, the optimization process is considered finished.

**Returns**

`True` if the optimization process should continue, `False` otherwise.

**Return type**

[bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.12)")
</Function>

### create\_result

<Function id="qiskit.algorithms.optimizers.GradientDescent.create_result" signature="create_result()">
Creates a result of the optimization process.

This result contains the best point, the best function value, the number of function/gradient evaluations and the number of iterations.

**Returns**

The result of the optimization process.

**Return type**

[*OptimizerResult*](qiskit.algorithms.optimizers.OptimizerResult "qiskit.algorithms.optimizers.optimizer.OptimizerResult")
</Function>

### evaluate

<Function id="qiskit.algorithms.optimizers.GradientDescent.evaluate" signature="evaluate(ask_data)">
Evaluates the gradient.

It does so either by evaluating an analytic gradient or by approximating it with a finite difference scheme. It will either add `1` to the number of gradient evaluations or add `N+1` to the number of function evaluations (where `N` is the dimension of the gradient).

**Parameters**

**ask\_data** ([*AskData*](qiskit.algorithms.optimizers.AskData "qiskit.algorithms.optimizers.steppable_optimizer.AskData")) – It contains the point where the gradient is to be evaluated and the gradient function or, in its absence, the objective function to perform a finite difference approximation.

**Returns**

The data containing the gradient evaluation.

**Return type**

[*TellData*](qiskit.algorithms.optimizers.TellData "qiskit.algorithms.optimizers.steppable_optimizer.TellData")
</Function>

### get\_support\_level

<Function id="qiskit.algorithms.optimizers.GradientDescent.get_support_level" signature="get_support_level()">
Get the support level dictionary.
</Function>

### gradient\_num\_diff

<Function id="qiskit.algorithms.optimizers.GradientDescent.gradient_num_diff" signature="gradient_num_diff(x_center, f, epsilon, max_evals_grouped=None)" modifiers="static">
Compute the gradient around the point `x_center` by numeric differentiation, grouping function evaluations so they can be performed in parallel.

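That is, each component of the gradient is approximated with a forward difference of the form

$$
\left(\vec\nabla f(\vec x)\right)_i \approx \frac{f(\vec x + \epsilon \vec e_i) - f(\vec x)}{\epsilon},
$$

where $\vec e_i$ is the $i$-th standard basis vector and the function evaluations are grouped into batches of at most `max_evals_grouped` points.
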
**Parameters**

* **x\_center** (*ndarray*) – point around which we compute the gradient
* **f** (*func*) – the function of which the gradient is to be computed.
* **epsilon** ([*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")) – the epsilon used in the numeric differentiation.
* **max\_evals\_grouped** ([*int*](https://docs.python.org/3/library/functions.html#int "(in Python v3.12)")) – max evals grouped, defaults to 1 (i.e. no batching).

**Returns**

the gradient computed

**Return type**

grad
</Function>

### minimize

<Function id="qiskit.algorithms.optimizers.GradientDescent.minimize" signature="minimize(fun, x0, jac=None, bounds=None)">
Minimizes the function.

For well-behaved functions, the user can call this method to minimize a function. If the user wants more control over how the function is evaluated, a custom loop can be created using [`ask()`](#qiskit.algorithms.optimizers.GradientDescent.ask "qiskit.algorithms.optimizers.GradientDescent.ask") and [`tell()`](#qiskit.algorithms.optimizers.GradientDescent.tell "qiskit.algorithms.optimizers.GradientDescent.tell") and evaluating the function manually.

**Parameters**

* **fun** (*Callable\[\[POINT],* [*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")*]*) – Function to minimize.
* **x0** (*POINT*) – Initial point.
* **jac** (*Callable\[\[POINT], POINT] | None*) – Function to compute the gradient.
* **bounds** ([*list*](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.12)")*\[*[*tuple*](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.12)")*\[*[*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")*,* [*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")*]] | None*) – Bounds of the search space.

**Returns**

Object containing the result of the optimization.

**Return type**

[OptimizerResult](qiskit.algorithms.optimizers.OptimizerResult "qiskit.algorithms.optimizers.OptimizerResult")
</Function>

### print\_options

<Function id="qiskit.algorithms.optimizers.GradientDescent.print_options" signature="print_options()">
Print algorithm-specific options.
</Function>

### set\_max\_evals\_grouped

<Function id="qiskit.algorithms.optimizers.GradientDescent.set_max_evals_grouped" signature="set_max_evals_grouped(limit)">
Set max evals grouped
</Function>

### set\_options

<Function id="qiskit.algorithms.optimizers.GradientDescent.set_options" signature="set_options(**kwargs)">
Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

**Parameters**

**kwargs** ([*dict*](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.12)")) – options, given as name=value.
</Function>

### start

<Function id="qiskit.algorithms.optimizers.GradientDescent.start" signature="start(fun, x0, jac=None, bounds=None)">
Populates the state of the optimizer with the data provided and sets all the counters to 0.

**Parameters**

* **fun** (*Callable\[\[POINT],* [*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")*]*) – Function to minimize.
* **x0** (*POINT*) – Initial point.
* **jac** (*Callable\[\[POINT], POINT] | None*) – Function to compute the gradient.
* **bounds** ([*list*](https://docs.python.org/3/library/stdtypes.html#list "(in Python v3.12)")*\[*[*tuple*](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.12)")*\[*[*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")*,* [*float*](https://docs.python.org/3/library/functions.html#float "(in Python v3.12)")*]] | None*) – Bounds of the search space.
</Function>

### step

<Function id="qiskit.algorithms.optimizers.GradientDescent.step" signature="step()">
Performs one step in the optimization process.

This method composes [`ask()`](#qiskit.algorithms.optimizers.GradientDescent.ask "qiskit.algorithms.optimizers.GradientDescent.ask"), [`evaluate()`](#qiskit.algorithms.optimizers.GradientDescent.evaluate "qiskit.algorithms.optimizers.GradientDescent.evaluate"), and [`tell()`](#qiskit.algorithms.optimizers.GradientDescent.tell "qiskit.algorithms.optimizers.GradientDescent.tell") to make a “step” in the optimization process.
</Function>

### tell

<Function id="qiskit.algorithms.optimizers.GradientDescent.tell" signature="tell(ask_data, tell_data)">
Updates `x` by an amount proportional to the learning rate and value of the gradient at that point.

**Parameters**

* **ask\_data** ([*AskData*](qiskit.algorithms.optimizers.AskData "qiskit.algorithms.optimizers.steppable_optimizer.AskData")) – The data used to evaluate the function.
* **tell\_data** ([*TellData*](qiskit.algorithms.optimizers.TellData "qiskit.algorithms.optimizers.steppable_optimizer.TellData")) – The data from the function evaluation.

**Raises**

[**ValueError**](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.12)") – If the gradient passed doesn’t have the right dimension.
</Function>

### wrap\_function

<Function id="qiskit.algorithms.optimizers.GradientDescent.wrap_function" signature="wrap_function(function, args)" modifiers="static">
Wrap the function to implicitly inject the args at the call of the function.

**Parameters**

* **function** (*func*) – the target function
* **args** ([*tuple*](https://docs.python.org/3/library/stdtypes.html#tuple "(in Python v3.12)")) – the args to be injected

**Returns**

wrapper

**Return type**

function\_wrapper
</Function>
</Class>