---
title: GradientDescent
description: API reference for qiskit.algorithms.optimizers.GradientDescent
in_page_toc_min_heading_level: 1
python_api_type: class
python_api_name: qiskit.algorithms.optimizers.GradientDescent
---

# GradientDescent

<Class id="qiskit.algorithms.optimizers.GradientDescent" isDedicatedPage={true} github="https://github.com/qiskit/qiskit/tree/stable/0.20/qiskit/algorithms/optimizers/gradient_descent.py" signature="GradientDescent(maxiter=100, learning_rate=0.01, tol=1e-07, callback=None, perturbation=None)" modifiers="class">
Bases: `qiskit.algorithms.optimizers.optimizer.Optimizer`

The gradient descent minimization routine.

For a function $f$ and an initial point $\vec\theta_0$, the standard (or “vanilla”) gradient descent method is an iterative scheme to find the minimum $\vec\theta^*$ of $f$ by updating the parameters in the direction of the negative gradient of $f$

$$
\vec\theta_{n+1} = \vec\theta_{n} - \eta \nabla f(\vec\theta_{n}),
$$

for a small learning rate $\eta > 0$.

You can either provide the analytic gradient $\vec\nabla f$ as `gradient_function` in the `optimize` method, or, if you do not provide it, a finite difference approximation of the gradient is used. To adapt the size of the perturbation in the finite difference gradients, set the `perturbation` property in the initializer.

This optimizer supports a callback function. If provided in the initializer, the optimizer will call the callback in each iteration with the following information, in this order: current number of function evaluations, current parameters, current function value, norm of the current gradient.
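
For instance, a minimal callback sketch matching this ordering (the parameter names here are illustrative):

```python
from qiskit.algorithms.optimizers import GradientDescent

def callback(nfevs, params, value, gradient_norm):
    # nfevs: number of function evaluations so far
    # params: current parameter vector
    # value: current objective function value
    # gradient_norm: norm of the current gradient
    print(f"nfevs={nfevs}, f={value:.6f}, |grad|={gradient_norm:.6f}")

optimizer = GradientDescent(maxiter=100, callback=callback)
```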

**Examples**

A minimal example that uses finite difference gradients with a default perturbation of 0.01 and a default learning rate of 0.01.
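
A sketch of such a usage, based on the `optimize` signature documented below (the objective function and initial point are illustrative):

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def f(x):
    # illustrative objective: squared distance of the norm of x from 1
    return (np.linalg.norm(x) - 1) ** 2

initial_point = np.array([1.0, 0.5, -0.2])

# no gradient_function is passed, so finite difference gradients are used
optimizer = GradientDescent(maxiter=100)
point, value, nfevs = optimizer.optimize(
    initial_point.size, f, initial_point=initial_point
)
print(f"Converged to {point} with value {value} after {nfevs} evaluations.")
```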

An example where the learning rate is an iterator and we supply the analytic gradient. Note how much faster this converges (i.e., fewer `nfevs`) compared to the previous example.
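
A sketch with the same illustrative objective; the generator-based learning rate matches the `Callable[[], Iterator]` type listed in the parameters below, and the decay schedule is our own choice:

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def learning_rate():
    # an illustrative decaying schedule: eta_n = 0.1 / (n + 1)
    def schedule():
        n = 0
        while True:
            yield 0.1 / (n + 1)
            n += 1
    return schedule()

def f(x):
    return (np.linalg.norm(x) - 1) ** 2

def grad_f(x):
    # analytic gradient of f
    return 2 * (np.linalg.norm(x) - 1) * x / np.linalg.norm(x)

initial_point = np.array([1.0, 0.5, -0.2])

optimizer = GradientDescent(maxiter=100, learning_rate=learning_rate)
point, value, nfevs = optimizer.optimize(
    initial_point.size, f, gradient_function=grad_f, initial_point=initial_point
)
print(f"Converged to {point} with value {value} after {nfevs} evaluations.")
```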

**Parameters**

* **maxiter** (`int`) – The maximum number of iterations.
* **learning\_rate** (`Union`\[`float`, `Callable`\[\[], `Iterator`]]) – A constant or generator yielding learning rates for the parameter updates. See the docstring for an example.
* **tol** (`float`) – If the norm of the parameter update is smaller than this threshold, the optimizer has converged.
* **callback** (`Optional`\[`Callable`]) – A callback function invoked in each iteration, as described above.
* **perturbation** (`Optional`\[`float`]) – If no gradient is passed to `GradientDescent.optimize` the gradient is approximated with a symmetric finite difference scheme with `perturbation` perturbation in both directions (defaults to 1e-2 if required). Ignored if a gradient callable is passed to `GradientDescent.optimize`.

## Methods

### get\_support\_level

<Function id="qiskit.algorithms.optimizers.GradientDescent.get_support_level" signature="GradientDescent.get_support_level()">
Get the support level dictionary.
</Function>

### gradient\_num\_diff

<Function id="qiskit.algorithms.optimizers.GradientDescent.gradient_num_diff" signature="GradientDescent.gradient_num_diff(x_center, f, epsilon, max_evals_grouped=1)" modifiers="static">
Compute the gradient of the function f numerically, in a parallelized way, around the point x\_center.

**Parameters**

* **x\_center** (*ndarray*) – point around which we compute the gradient.
* **f** (*func*) – the function of which the gradient is to be computed.
* **epsilon** (*float*) – the epsilon used in the numeric differentiation.
* **max\_evals\_grouped** (*int*) – the maximum number of point evaluations to group into a single call to f.

**Returns**

the gradient computed

**Return type**

grad
</Function>
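
As a rough illustration of what a finite difference approximation of this kind computes, here is a standalone forward-difference sketch (not the library's implementation; the batching controlled by `max_evals_grouped` is omitted):

```python
import numpy as np

def num_gradient_sketch(x_center, f, epsilon):
    # forward difference: grad_i ≈ (f(x + epsilon * e_i) - f(x)) / epsilon
    x_center = np.asarray(x_center, dtype=float)
    f0 = f(x_center)
    grad = np.zeros_like(x_center)
    for i in range(x_center.size):
        x_shift = x_center.copy()
        x_shift[i] += epsilon
        grad[i] = (f(x_shift) - f0) / epsilon
    return grad
```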

### minimize

<Function id="qiskit.algorithms.optimizers.GradientDescent.minimize" signature="GradientDescent.minimize(fun, x0, jac=None, bounds=None)">
Minimize the scalar function.

**Parameters**

* **fun** (`Callable`\[\[`Union`\[`float`, `ndarray`]], `float`]) – The scalar function to minimize.
* **x0** (`Union`\[`float`, `ndarray`]) – The initial point for the minimization.
* **jac** (`Optional`\[`Callable`\[\[`Union`\[`float`, `ndarray`]], `Union`\[`float`, `ndarray`]]]) – The gradient of the scalar function `fun`.
* **bounds** (`Optional`\[`List`\[`Tuple`\[`float`, `float`]]]) – Bounds for the variables of `fun`. This argument might be ignored if the optimizer does not support bounds.

**Return type**

`OptimizerResult`

**Returns**

The result of the optimization, containing, for example, the final point as the attribute `x`.
</Function>
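
A sketch of calling `minimize` directly, with the same illustrative objective and gradient as in the examples above; the returned `OptimizerResult` exposes the final point as `x`:

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def f(x):
    return (np.linalg.norm(x) - 1) ** 2

def grad_f(x):
    return 2 * (np.linalg.norm(x) - 1) * x / np.linalg.norm(x)

optimizer = GradientDescent(maxiter=100)
result = optimizer.minimize(fun=f, x0=np.array([1.0, 0.5, -0.2]), jac=grad_f)
print(result.x)
```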

### optimize

<Function id="qiskit.algorithms.optimizers.GradientDescent.optimize" signature="GradientDescent.optimize(num_vars, objective_function, gradient_function=None, variable_bounds=None, initial_point=None)">
Perform optimization.

**Parameters**

* **num\_vars** (*int*) – Number of parameters to be optimized.
* **objective\_function** (*callable*) – A function that computes the objective function.
* **gradient\_function** (*callable*) – A function that computes the gradient of the objective function, or None if not available.
* **variable\_bounds** (*list\[(float, float)]*) – List of variable bounds, given as pairs (lower, upper). None means unbounded.
* **initial\_point** (*numpy.ndarray\[float]*) – Initial point.

**Returns**

**point, value, nfev**

* point: a 1D numpy.ndarray\[float] containing the solution
* value: a float with the objective function value
* nfev: the number of objective function calls made, if available, or None

**Raises**

**ValueError** – invalid input
</Function>

### print\_options

<Function id="qiskit.algorithms.optimizers.GradientDescent.print_options" signature="GradientDescent.print_options()">
Print algorithm-specific options.
</Function>

### set\_max\_evals\_grouped

<Function id="qiskit.algorithms.optimizers.GradientDescent.set_max_evals_grouped" signature="GradientDescent.set_max_evals_grouped(limit)">
Set the maximum number of objective function evaluations that may be grouped into a single call.
</Function>

### set\_options

<Function id="qiskit.algorithms.optimizers.GradientDescent.set_options" signature="GradientDescent.set_options(**kwargs)">
Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

**Parameters**

**kwargs** (*dict*) – options, given as name=value.
</Function>

### wrap\_function

<Function id="qiskit.algorithms.optimizers.GradientDescent.wrap_function" signature="GradientDescent.wrap_function(function, args)" modifiers="static">
Wrap the function so that the given args are implicitly injected when the function is called.

**Parameters**

* **function** (*func*) – the target function
* **args** (*tuple*) – the args to be injected

**Returns**

wrapper

**Return type**

function\_wrapper
</Function>
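
As an illustration of the wrapper's behavior (a sketch based on the description above, not necessarily the exact implementation):

```python
import numpy as np
from qiskit.algorithms.optimizers import GradientDescent

def scaled_objective(x, scale):
    # objective that takes an extra fixed argument
    return scale * np.sum(x ** 2)

# bind scale=2.0 so the wrapped function takes only x
wrapped = GradientDescent.wrap_function(scaled_objective, (2.0,))
print(wrapped(np.array([1.0, 2.0])))  # equivalent to scaled_objective(x, 2.0) -> 10.0
```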

## Attributes

### bounds\_support\_level

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.bounds_support_level">
Returns bounds support level
</Attribute>

### gradient\_support\_level

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.gradient_support_level">
Returns gradient support level
</Attribute>

### initial\_point\_support\_level

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.initial_point_support_level">
Returns initial point support level
</Attribute>

### is\_bounds\_ignored

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_bounds_ignored">
Returns is bounds ignored
</Attribute>

### is\_bounds\_required

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_bounds_required">
Returns is bounds required
</Attribute>

### is\_bounds\_supported

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_bounds_supported">
Returns is bounds supported
</Attribute>

### is\_gradient\_ignored

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_gradient_ignored">
Returns is gradient ignored
</Attribute>

### is\_gradient\_required

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_gradient_required">
Returns is gradient required
</Attribute>

### is\_gradient\_supported

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_gradient_supported">
Returns is gradient supported
</Attribute>

### is\_initial\_point\_ignored

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_initial_point_ignored">
Returns is initial point ignored
</Attribute>

### is\_initial\_point\_required

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_initial_point_required">
Returns is initial point required
</Attribute>

### is\_initial\_point\_supported

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.is_initial_point_supported">
Returns is initial point supported
</Attribute>

### setting

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.setting">
Return setting
</Attribute>

### settings

<Attribute id="qiskit.algorithms.optimizers.GradientDescent.settings">
**Return type**

`Dict`\[`str`, `Any`]
</Attribute>
</Class>