qiskit/releasenotes/notes/0.22/steppable-optimizers-9d9b48...

---
features:
  - |
    The :class:`~.SteppableOptimizer` class is added. It allows one to perform classical
    optimizations step-by-step using the :meth:`~.SteppableOptimizer.step` method. These
    optimizers implement the "ask and tell" interface, which (optionally) allows the user to
    manually compute the required function or gradient evaluations and plug them back into the
    optimizer. For more information about this interface see: `ask and tell interface
    <https://optuna.readthedocs.io/en/stable/tutorial/20_recipes/009_ask_and_tell.html>`_.
    A very simple use case in which the user might want to run the optimization step by step is
    to read out intermediate information from the optimizer state at each iteration:

    .. code-block:: python

        import numpy as np

        from qiskit.algorithms.optimizers import GradientDescent

        def objective(x):
            return (np.linalg.norm(x) - 1) ** 2

        def grad(x):
            return 2 * (np.linalg.norm(x) - 1) * x / np.linalg.norm(x)

        initial_point = np.random.normal(0, 1, size=(100,))

        maxiter = 20
        optimizer = GradientDescent(maxiter=maxiter)
        optimizer.start(x0=initial_point, fun=objective, jac=grad)

        for _ in range(maxiter):
            state = optimizer.state
            # Here you can manually read out anything from the optimizer state.
            optimizer.step()

        result = optimizer.create_result()
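
    Each call to :meth:`~.SteppableOptimizer.step` is, roughly speaking, shorthand for one
    round trip of the "ask and tell" interface. As an illustrative sketch (not necessarily the
    exact internal implementation), continuing from the example above:

    .. code-block:: python

        # One step unrolled into its ask/evaluate/tell parts. ``evaluate`` uses the
        # fun/jac callables that were registered via ``start``.
        ask_data = optimizer.ask()
        tell_data = optimizer.evaluate(ask_data=ask_data)
        optimizer.tell(ask_data=ask_data, tell_data=tell_data)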

    A more complex case would be error handling. Imagine that the function you are evaluating has
    a random chance of failing. You can then catch the error and re-run the function until it
    yields the desired result before continuing the optimization process. This is where one would
    use the ask and tell interface directly:

    .. code-block:: python

        import random

        import numpy as np

        from qiskit.algorithms.optimizers import GradientDescent, TellData

        def objective(x):
            if random.choice([True, False]):
                return None
            else:
                return (np.linalg.norm(x) - 1) ** 2

        def grad(x):
            if random.choice([True, False]):
                return None
            else:
                return 2 * (np.linalg.norm(x) - 1) * x / np.linalg.norm(x)

        initial_point = np.random.normal(0, 1, size=(100,))

        optimizer = GradientDescent(maxiter=20)
        optimizer.start(x0=initial_point, fun=objective, jac=grad)

        while optimizer.continue_condition():
            ask_data = optimizer.ask()
            evaluated_gradient = None
            while evaluated_gradient is None:
                # Retry the gradient evaluation until it succeeds.
                evaluated_gradient = grad(ask_data.x_jac)
                optimizer.state.njev += 1
            optimizer.state.nit += 1
            tell_data = TellData(eval_jac=evaluated_gradient)
            optimizer.tell(ask_data=ask_data, tell_data=tell_data)

        result = optimizer.create_result()
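
    The :meth:`~.SteppableOptimizer.create_result` call above returns a standard
    :class:`~.OptimizerResult`, so the final point and the bookkeeping counters can be read off
    directly, for example:

    .. code-block:: python

        print(result.x)     # final parameter vector
        print(result.nit)   # number of iterations performed
        print(result.njev)  # number of gradient evaluations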

    Transitioned :class:`~.GradientDescent` to be a subclass of :class:`~.SteppableOptimizer`.
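
    Since :class:`~.GradientDescent` still implements the standard :meth:`~.Optimizer.minimize`
    interface, code that does not need the step-by-step control keeps working unchanged. A minimal
    sketch:

    .. code-block:: python

        import numpy as np

        from qiskit.algorithms.optimizers import GradientDescent

        def objective(x):
            return (np.linalg.norm(x) - 1) ** 2

        optimizer = GradientDescent(maxiter=20)
        # minimize() drives the whole ask/tell loop internally and returns an OptimizerResult.
        result = optimizer.minimize(fun=objective, x0=np.random.normal(0, 1, size=(100,)))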
fixes:
  - |
    :class:`.GradientDescent` will now correctly count the number of iterations, function
    evaluations, and gradient evaluations. The documentation now also correctly states that the
    gradient is approximated by a forward finite difference method.
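
    For reference, a forward finite difference approximates each partial derivative with a
    one-sided difference quotient. A minimal sketch of the scheme (``eps`` here is an illustrative
    step size, not necessarily the value used internally):

    .. code-block:: python

        import numpy as np

        def forward_difference_gradient(fun, x, eps=1e-8):
            """Approximate the gradient of ``fun`` at ``x`` by forward differences."""
            f0 = fun(x)
            gradient = np.zeros_like(x)
            for i in range(len(x)):
                x_shifted = x.copy()
                x_shifted[i] += eps
                gradient[i] = (fun(x_shifted) - f0) / eps
            return gradient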