---
title: ADAM
description: API reference for qiskit.algorithms.optimizers.ADAM
in_page_toc_min_heading_level: 1
python_api_type: class
python_api_name: qiskit.algorithms.optimizers.ADAM
---

# ADAM

<Class id="qiskit.algorithms.optimizers.ADAM" isDedicatedPage={true} github="https://github.com/qiskit/qiskit/tree/stable/0.20/qiskit/algorithms/optimizers/adam_amsgrad.py" signature="ADAM(maxiter=10000, tol=1e-06, lr=0.001, beta_1=0.9, beta_2=0.99, noise_factor=1e-08, eps=1e-10, amsgrad=False, snapshot_dir=None)" modifiers="class">

Bases: `qiskit.algorithms.optimizers.optimizer.Optimizer`

Adam and AMSGRAD optimizers.

Adam \[1] is a gradient-based optimization algorithm that relies on adaptive estimates of lower-order moments. The algorithm requires little memory and is invariant to diagonal rescaling of the gradients. Furthermore, it is able to cope with non-stationary objective functions and noisy and/or sparse gradients.

AMSGRAD \[2] (a variant of Adam) uses a ‘long-term memory’ of past gradients and thereby improves convergence properties.

**References**

**\[1]: Kingma, Diederik & Ba, Jimmy (2014), Adam: A Method for Stochastic Optimization.**

[arXiv:1412.6980](https://arxiv.org/abs/1412.6980)

**\[2]: Sashank J. Reddi and Satyen Kale and Sanjiv Kumar (2018),**

On the Convergence of Adam and Beyond. [arXiv:1904.09237](https://arxiv.org/abs/1904.09237)

<Admonition title="Note" type="note">
Parts of this component are random by default. To reproduce its behavior, set the random number generator seed in the algorithm\_globals (`qiskit.utils.algorithm_globals.random_seed = seed`).
</Admonition>
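
For example, a run can be made reproducible by fixing the seed before constructing the optimizer (a minimal sketch; the seed value is illustrative):

```python
# Minimal sketch: fix the global seed so stochastic behavior is reproducible.
from qiskit.utils import algorithm_globals

algorithm_globals.random_seed = 42
```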

**Parameters**

* **maxiter** (`int`) – Maximum number of iterations
* **tol** (`float`) – Tolerance for termination
* **lr** (`float`) – Value >= 0, learning rate.
* **beta\_1** (`float`) – Value in range 0 to 1, generally close to 1.
* **beta\_2** (`float`) – Value in range 0 to 1, generally close to 1.
* **noise\_factor** (`float`) – Value >= 0, noise factor
* **eps** (`float`) – Value >= 0, epsilon to be used for finite differences if no analytic gradient method is given.
* **amsgrad** (`bool`) – True to use AMSGRAD, False if not
* **snapshot\_dir** (`Optional`\[`str`]) – If not None, save the optimizer’s parameters after every step to the given directory
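
As a quick illustration, the optimizer can be constructed with a subset of these arguments (a minimal sketch; the values shown are illustrative, not recommendations):

```python
# Minimal sketch: construct the optimizer with a few of the arguments above.
from qiskit.algorithms.optimizers import ADAM

adam = ADAM(maxiter=500, lr=0.01, tol=1e-06)          # plain Adam
amsgrad = ADAM(maxiter=500, lr=0.01, amsgrad=True)    # AMSGRAD variant
```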

## Methods

### get\_support\_level

<Function id="qiskit.algorithms.optimizers.ADAM.get_support_level" signature="ADAM.get_support_level()">
Return support level dictionary
</Function>

### gradient\_num\_diff

<Function id="qiskit.algorithms.optimizers.ADAM.gradient_num_diff" signature="ADAM.gradient_num_diff(x_center, f, epsilon, max_evals_grouped=1)" modifiers="static">
Compute the gradient with numeric differentiation, in parallel, around the point x\_center.

**Parameters**

* **x\_center** (*ndarray*) – point around which we compute the gradient
* **f** (*func*) – the function of which the gradient is to be computed.
* **epsilon** (*float*) – the epsilon used in the numeric differentiation.
* **max\_evals\_grouped** (*int*) – max evals grouped

**Returns**

the gradient computed

**Return type**

grad
</Function>
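
As a small sketch, the static helper can be called directly; the quadratic objective and the epsilon value below are illustrative:

```python
# Hedged sketch: finite-difference gradient of a simple quadratic.
import numpy as np
from qiskit.algorithms.optimizers import ADAM

def f(x):
    return float(np.sum(x ** 2))

grad = ADAM.gradient_num_diff(np.array([1.0, -2.0]), f, epsilon=1e-6)
print(grad)  # approximately [ 2., -4.]
```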

### load\_params

<Function id="qiskit.algorithms.optimizers.ADAM.load_params" signature="ADAM.load_params(load_dir)">
Load iteration parameters from a file called `adam_params.csv`.

**Parameters**

**load\_dir** (`str`) – The directory containing `adam_params.csv`.

**Return type**

`None`
</Function>
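
A hedged sketch of how this fits with the `snapshot_dir` constructor argument; the directory path is hypothetical:

```python
from qiskit.algorithms.optimizers import ADAM

# Writing: snapshot_dir makes the optimizer store its parameters after every step.
adam = ADAM(maxiter=1000, snapshot_dir="/tmp/adam_run")  # hypothetical path
# ... run an optimization with `adam` ...

# Reading: restore the stored iteration parameters into a fresh instance.
restored = ADAM()
restored.load_params("/tmp/adam_run")  # reads adam_params.csv written earlier
```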

### minimize

<Function id="qiskit.algorithms.optimizers.ADAM.minimize" signature="ADAM.minimize(fun, x0, jac=None, bounds=None, objective_function=None, initial_point=None, gradient_function=None)">
Minimize the scalar function.

**Parameters**

* **fun** (`Callable`\[\[`Union`\[`float`, `ndarray`]], `float`]) – The scalar function to minimize.
* **x0** (`Union`\[`float`, `ndarray`]) – The initial point for the minimization.
* **jac** (`Optional`\[`Callable`\[\[`Union`\[`float`, `ndarray`]], `Union`\[`float`, `ndarray`]]]) – The gradient of the scalar function `fun`.
* **bounds** (`Optional`\[`List`\[`Tuple`\[`float`, `float`]]]) – Bounds for the variables of `fun`. This argument might be ignored if the optimizer does not support bounds.
* **objective\_function** (`Optional`\[`Callable`\[\[`ndarray`], `float`]]) – DEPRECATED. A function handle to the objective function.
* **initial\_point** (`Optional`\[`ndarray`]) – DEPRECATED. The initial iteration point.
* **gradient\_function** (`Optional`\[`Callable`\[\[`ndarray`], `float`]]) – DEPRECATED. A function handle to the gradient of the objective function.

**Return type**

`OptimizerResult`

**Returns**

The result of the optimization, containing e.g. the result as attribute `x`.
</Function>
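
A minimal sketch of the non-deprecated call pattern, minimizing an illustrative quadratic with an analytic gradient:

```python
# Minimal sketch: objective, optional analytic gradient, and an initial point.
import numpy as np
from qiskit.algorithms.optimizers import ADAM

def fun(x):
    return float(np.sum((x - 1.0) ** 2))

def jac(x):
    return 2.0 * (x - 1.0)

result = ADAM(maxiter=200, lr=0.1).minimize(fun=fun, x0=np.zeros(2), jac=jac)
print(result.x, result.fun)  # solution point and objective value
```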

### optimize

<Function id="qiskit.algorithms.optimizers.ADAM.optimize" signature="ADAM.optimize(num_vars, objective_function, gradient_function=None, variable_bounds=None, initial_point=None)">
Perform optimization.

**Parameters**

* **num\_vars** (`int`) – Number of parameters to be optimized.
* **objective\_function** (`Callable`\[\[`ndarray`], `float`]) – Handle to a function that computes the objective function.
* **gradient\_function** (`Optional`\[`Callable`\[\[`ndarray`], `float`]]) – Handle to a function that computes the gradient of the objective function.
* **variable\_bounds** (`Optional`\[`List`\[`Tuple`\[`float`, `float`]]]) – deprecated
* **initial\_point** (`Optional`\[`ndarray`]) – The initial point for the optimization.

**Return type**

`Tuple`\[`ndarray`, `float`, `int`]

**Returns**

A tuple (point, value, nfev) where

> point: is a 1D numpy.ndarray\[float] containing the solution
>
> value: is a float with the objective function value
>
> nfev: is the number of objective function calls
</Function>
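
A hedged sketch of this tuple-returning interface; `minimize()` above covers the same use case with a result object instead of a tuple:

```python
# Hedged sketch: the same quadratic as above, via the optimize() interface.
import numpy as np
from qiskit.algorithms.optimizers import ADAM

point, value, nfev = ADAM(maxiter=200, lr=0.1).optimize(
    num_vars=2,
    objective_function=lambda x: float(np.sum((x - 1.0) ** 2)),
    initial_point=np.zeros(2),
)
```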

### print\_options

<Function id="qiskit.algorithms.optimizers.ADAM.print_options" signature="ADAM.print_options()">
Print algorithm-specific options.
</Function>

### save\_params

<Function id="qiskit.algorithms.optimizers.ADAM.save_params" signature="ADAM.save_params(snapshot_dir)">
Save the current iteration parameters to a file called `adam_params.csv`.

<Admonition title="Note" type="note">
The current parameters are appended to the file, if it exists already. The file is not overwritten.
</Admonition>

**Parameters**

**snapshot\_dir** (`str`) – The directory to store the file in.

**Return type**

`None`
</Function>

### set\_max\_evals\_grouped

<Function id="qiskit.algorithms.optimizers.ADAM.set_max_evals_grouped" signature="ADAM.set_max_evals_grouped(limit)">
Set the maximum number of objective function evaluations that may be grouped together.
</Function>

### set\_options

<Function id="qiskit.algorithms.optimizers.ADAM.set_options" signature="ADAM.set_options(**kwargs)">
Sets or updates values in the options dictionary.

The options dictionary may be used internally by a given optimizer to pass additional optional values for the underlying optimizer/optimization function used. The options dictionary may be initially populated with a set of key/values when the given optimizer is constructed.

**Parameters**

**kwargs** (*dict*) – options, given as name=value.
</Function>
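
A minimal sketch of adjusting these settings after construction; the option key shown mirrors a constructor argument and is an illustrative assumption about what the options dictionary accepts:

```python
from qiskit.algorithms.optimizers import ADAM

adam = ADAM()
adam.set_max_evals_grouped(4)     # allow up to 4 objective evaluations per batch
adam.set_options(maxiter=2000)    # set/update an entry in the options dictionary
adam.print_options()              # inspect the algorithm-specific options
```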

### wrap\_function

<Function id="qiskit.algorithms.optimizers.ADAM.wrap_function" signature="ADAM.wrap_function(function, args)" modifiers="static">
Wrap the function to implicitly inject the args at the call of the function.

**Parameters**

* **function** (*func*) – the target function
* **args** (*tuple*) – the args to be injected

**Returns**

wrapper

**Return type**

function\_wrapper
</Function>
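
A hedged sketch of the behavior: the wrapped callable appends the given args to every call (the objective below is illustrative):

```python
from qiskit.algorithms.optimizers import ADAM

def objective(x, scale):
    return scale * sum(v ** 2 for v in x)

wrapped = ADAM.wrap_function(objective, (10.0,))
print(wrapped([1.0, 2.0]))  # same as objective([1.0, 2.0], 10.0) -> 50.0
```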

## Attributes

### bounds\_support\_level

<Attribute id="qiskit.algorithms.optimizers.ADAM.bounds_support_level">
Returns bounds support level
</Attribute>

### gradient\_support\_level

<Attribute id="qiskit.algorithms.optimizers.ADAM.gradient_support_level">
Returns gradient support level
</Attribute>

### initial\_point\_support\_level

<Attribute id="qiskit.algorithms.optimizers.ADAM.initial_point_support_level">
Returns initial point support level
</Attribute>

### is\_bounds\_ignored

<Attribute id="qiskit.algorithms.optimizers.ADAM.is_bounds_ignored">
Returns whether bounds are ignored
</Attribute>

### is\_bounds\_required

<Attribute id="qiskit.algorithms.optimizers.ADAM.is_bounds_required">
Returns whether bounds are required
</Attribute>

### is\_bounds\_supported

<Attribute id="qiskit.algorithms.optimizers.ADAM.is_bounds_supported">
Returns whether bounds are supported
</Attribute>

### is\_gradient\_ignored

<Attribute id="qiskit.algorithms.optimizers.ADAM.is_gradient_ignored">
Returns whether the gradient is ignored
</Attribute>

### is\_gradient\_required

<Attribute id="qiskit.algorithms.optimizers.ADAM.is_gradient_required">
Returns whether the gradient is required
</Attribute>

### is\_gradient\_supported

<Attribute id="qiskit.algorithms.optimizers.ADAM.is_gradient_supported">
Returns whether the gradient is supported
</Attribute>

### is\_initial\_point\_ignored

<Attribute id="qiskit.algorithms.optimizers.ADAM.is_initial_point_ignored">
Returns whether the initial point is ignored
</Attribute>

### is\_initial\_point\_required

<Attribute id="qiskit.algorithms.optimizers.ADAM.is_initial_point_required">
Returns whether an initial point is required
</Attribute>

### is\_initial\_point\_supported

<Attribute id="qiskit.algorithms.optimizers.ADAM.is_initial_point_supported">
Returns whether an initial point is supported
</Attribute>

### setting

<Attribute id="qiskit.algorithms.optimizers.ADAM.setting">
Return setting
</Attribute>

### settings

<Attribute id="qiskit.algorithms.optimizers.ADAM.settings">
**Return type**

`Dict`\[`str`, `Any`]
</Attribute>
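
A minimal sketch of reading `settings`; the exact keys are an assumption based on the constructor signature above, useful e.g. for serialization:

```python
from qiskit.algorithms.optimizers import ADAM

print(ADAM(maxiter=500, amsgrad=True).settings)
# e.g. {'maxiter': 500, 'tol': 1e-06, 'lr': 0.001, ..., 'amsgrad': True, 'snapshot_dir': None}
```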

</Class>