Previously, the module of this class was set incorrectly (stemming from
its move in gh-9064), after which `__getstate__`/`__setstate__`
pickling no longer worked correctly. However, there was also no
`__getnewargs__`, and `__new__` had no zero-argument form, so pickling
would not have worked either.
* Finalise support for Numpy 2.0
This commit brings the Qiskit test suite to a passing state (with all
optionals installed) with Numpy 2.0.0b1, building on previous commits
that handled much of the rest of the changing requirements:
- gh-10890
- gh-10891
- gh-10892
- gh-10897
- gh-11023
Notably, this commit did not actually require a rebuild of Qiskit,
despite us compiling against Numpy; it appears that the parts of the
Numpy C API we use via `rust-numpy` (which loads the Numpy C extensions
dynamically during module initialisation) have not changed.
The main changes are:
- adapting to the changed `copy=None` and `copy=False` semantics in
`array` and `asarray`.
- making sure all our implementers of `__array__` accept both `dtype`
and `copy` arguments.
Co-authored-by: Lev S. Bishop <18673315+levbishop@users.noreply.github.com>
* Update `__array__` methods for Numpy 2.0 compatibility
As of Numpy 2.0, implementers of `__array__` are expected and required
to have a signature
def __array__(self, dtype=None, copy=None): ...
In Numpys before 2.0, the `copy` argument will never be passed, and the
expected signature was
def __array__(self, dtype=None): ...
Because of this, we have latitude to set `copy` in our implementations
to anything we like if we're running against Numpy 1.x, but we should
default to `copy=None` if we're running against Numpy 2.0.
The semantics of the `copy` argument to `np.array` changed in Numpy 2.0.
Now, `copy=False` means "raise a `ValueError` if a copy is required" and
`copy=None` means "copy only if required". In Numpy 1.x, `copy=False`
meant "copy only if required". In _both_ Numpy 1.x and 2.0,
`ndarray.astype` takes a `copy` argument, and in both, `copy=False`
means "copy only if required". In Numpy 2.0 only, `np.asarray` gained a
`copy` argument with the same semantics as the `np.array` copy argument
from Numpy 2.0.
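A minimal sketch of the difference (assuming only that Numpy is importable; which branch runs is version-dependent, as described above):

```python
import numpy as np

a = np.arange(3)

# "Copy only if required": an existing ndarray with a matching dtype needs
# no copy, so the very same object comes back (true on 1.x and 2.x alike).
assert np.asarray(a) is a

# Numpy 2.0: `copy=False` now means "never copy; raise if one is required".
# A Python list always needs a new buffer, so this raises ValueError on 2.x;
# Numpy 1.x silently copies instead.
try:
    np.array([1, 2, 3], copy=False)
    print("Numpy 1.x semantics: silent copy")
except ValueError:
    print("Numpy 2.x semantics: copy was required")
```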
Further, the semantics of the `__array__` method changed in Numpy 2.0,
particularly around copying. Now, Numpy will assume that it can pass
`copy=True` and the implementer will handle this. If `copy=False` is
given and a copy or calculation is required, then the implementer is
required to raise `ValueError`. We have a few places where the
`__array__` method may (or always does) calculate the array, so in all
these, we must forbid `copy=False`.
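As a hedged illustration of that contract (a toy class, not one of Qiskit's), an `__array__` that always computes its result must reject `copy=False`:

```python
import numpy as np

class Computed:
    """Toy object whose __array__ always builds a fresh array."""

    def __init__(self, n):
        self.n = n

    def __array__(self, dtype=None, copy=None):
        # The result is calculated on the fly, so a no-copy view is
        # impossible; per the Numpy 2.0 contract we must raise.
        if copy is False:
            raise ValueError("unable to avoid copy while creating an array")
        return np.arange(self.n, dtype=dtype)

assert np.asarray(Computed(3)).tolist() == [0, 1, 2]
```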
With all this in mind: this PR sets up all our implementers of
`__array__` to either default to `copy=None` if they will never actually
need to _use_ the `copy` argument within themselves (except perhaps to
test if it was set by Numpy 2.0 to `False`, as Numpy 1.x will never set
it), or to a compatibility shim `_numpy_compat.COPY_ONLY_IF_NEEDED` if
they do naturally want to use it with those semantics. The pattern
def __array__(self, dtype=None, copy=_numpy_compat.COPY_ONLY_IF_NEEDED):
dtype = self._array.dtype if dtype is None else dtype
return np.array(self._array, dtype=dtype, copy=copy)
using `array` instead of `asarray` lets us achieve all the desired
behaviour between the interactions of `dtype` and `copy` in a way that
is compatible with both Numpy 1.x and 2.x.
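The shim itself can be sketched like this (`COPY_ONLY_IF_NEEDED` below is a stand-in for `_numpy_compat.COPY_ONLY_IF_NEEDED`, and the wrapper class is hypothetical, not Qiskit code):

```python
import numpy as np

# `None` means "copy only if required" on Numpy 2.x; `False` means the same
# thing on 1.x, which never passes `copy` to `__array__` anyway.
COPY_ONLY_IF_NEEDED = (
    None if np.lib.NumpyVersion(np.__version__) >= "2.0.0" else False
)

class Wrapper:
    """Toy array container following the pattern above."""

    def __init__(self, data):
        self._array = np.asarray(data)

    def __array__(self, dtype=None, copy=COPY_ONLY_IF_NEEDED):
        dtype = self._array.dtype if dtype is None else dtype
        return np.array(self._array, dtype=dtype, copy=copy)

w = Wrapper([1.0, 2.0])
assert np.asarray(w).tolist() == [1.0, 2.0]
```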
* fixing numerical issues on mac-arm
* Change error to match Numpy
---------
Co-authored-by: Lev S. Bishop <18673315+levbishop@users.noreply.github.com>
Co-authored-by: Sebastian Brandhofer <148463728+sbrandhsn@users.noreply.github.com>
* Add support for returning a DAGCircuit to TwoQubitBasisDecomposer
This commit adds a new flag, use_dag, to the constructor for the
TwoQubitBasisDecomposer. When set to True, the __call__ method will
return a DAGCircuit instead of a QuantumCircuit. This is useful when the
two qubit basis decomposer is called from within a transpiler context,
as with the UnitarySynthesis pass, to avoid an extra conversion step.
* Pivot to argument on __call__ and add to XXDecomposer too
This commit moves the use_dag flag to the __call__ method directly
instead of storing it as an instance variable. To keep the interface
consistent between the two built-in decomposers, the flag is also added
to the XXDecomposer class's __call__ method. This is needed because the
unitary synthesis pass calls the decomposers interchangeably, and to use
them without type checking, both need to accept the flag.
It is hard to make this class maximally efficient, because the base
class is highly abstract, with many options and many private methods
that subclasses override. Still, in cases where, _at the point of
build_, we can detect that rotation or entanglement layers are simple
applications of a single standard-library gate, we can skip the entire
`copy` + `assign` + `compose` pipeline and construct the gates in place
with the correct parameters. This avoids the large overhead of using the
high-level, abstract interfaces (which are, somewhat necessarily,
optimised for large operands) in tight inner loops, giving roughly a 10x
improvement in build time.
`NLocal` is so abstract that it's hard to hit similar performance to an
idiomatic direct construction of the relevant structure, but to be fair,
the concept of a circuit library is not necessarily to make the absolute
fastest constructors for tight loops, but to make it much simpler to
just get a circuit that works as intended.
* Provider abstraction is not very useful
* update docs
* ignore deprecations
* not triggered on CI
* deprecation warnings in visual tests
* set up
* set up without warning?
* setUpClass
* more test adjust
* raise at setUpClass
* warms
* test_circuit_matplotlib_drawer.py
* skip Aer warning
* Apply suggestions from code review
Co-authored-by: Matthew Treinish <mtreinish@kortar.org>
* reno
* Run black
* Update release note
* linter
---------
Co-authored-by: Matthew Treinish <mtreinish@kortar.org>
* Increase heuristic effort for optimization level 2
This commit tweaks the heuristic effort in optimization level 2 to be
more of a middle ground between levels 1 and 3, with a better balance
between output quality and runtime. This makes it a better default for
the pass manager we use when one isn't specified. The tradeoff is that
the vf2layout and vf2postlayout search space is reduced to match level
1. There are diminishing returns on the vf2 layout search, especially
when there are a large number of qubit permutations for the mapping
found. The number of sabre trials is brought up to match optimization
level 3, as this can have a significant impact on output quality and the
extra runtime cost is minimal. The larger change is adopting the
optimization passes from level 3, which mainly amounts to the 2q
peephole optimization. With the performance improvements from #12010 and
#11946 and all the follow-on PRs, this is now fast enough to rely on in
optimization level 2.
* Add test workaround from level 3 to level 2 too
* Expand vf2 call limit on VF2Layout
For the initial VF2Layout call this commit expands the vf2 call limit
back to the previous level instead of reducing it to the same as level 1.
The idea behind making this change is that spending up to 10s to find a
perfect layout is a worthwhile tradeoff as that will greatly improve the
result from execution. But scoring multiple layouts to find the lowest
error rate subgraph has diminishing returns in most cases: there
typically aren't thousands of unique subgraphs, and often when we hit
the scoring limit we are just permuting the qubits inside a subgraph,
which doesn't provide much value.
For VF2PostLayout the lower call limit from level 1 is still used. This
is because the search for isomorphic subgraphs is typically much shorter
with the vf2++ node ordering heuristic, so we don't need to spend as
much time looking for alternative subgraphs.
* Move 2q peephole outside of optimization loop in O2
Due to potential instability in the 2q peephole optimization, we were
using the `MinimumPoint` pass to provide backtracking when we reach a
local minimum. However, this pass adds a significant amount of overhead
because it deep copies the circuit at every iteration of the
optimization loop that improves the output quality. This commit tweaks
the O2 pass manager construction to only run 2q peephole once, and then
updates the optimization loop to be what the previous O2 optimization
loop was.
* add mapping features to data_bin
* Update qiskit/primitives/containers/data_bin.py
Co-authored-by: Ian Hincks <ian.hincks@gmail.com>
* add values
* change iterable with list
* Update qiskit/primitives/containers/data_bin.py
Co-authored-by: Ian Hincks <ian.hincks@gmail.com>
* Apply suggestions from code review
Co-authored-by: Christopher J. Wood <cjwood@us.ibm.com>
* reno
* change return types
---------
Co-authored-by: Ian Hincks <ian.hincks@gmail.com>
Co-authored-by: Christopher J. Wood <cjwood@us.ibm.com>
This commit removes the use of intermediate DAGCircuit objects and calls
to substitute_node_with_dag() in the Optimize1qGatesDecomposition pass.
Since everything is 1q, it's easy to directly insert the new nodes into
the DAG before the run and then remove the old nodes. This avoids a lot
of extra operations and overhead from creating a second DAGCircuit for
each identified run and then substituting that dag in place of the run.
`num_custom_gates` is not actually part of the `CIRCUIT_HEADER` structs
in any version of QPY. The number of custom-instruction objects is
instead stored as a `uint64_t` inline in the `CUSTOM_DEFINITIONS` part
of the file, separately from the rest of the header.
This commit updates the documentation for the
generate_preset_pass_manager() function to clearly indicate that it
accepts an integer list for the initial_layout field. This was already
supported but wasn't documented. As it's now a documented part of the
API, a unit test is added to ensure we don't regress this functionality
in the future.
Fixes #11690
* add _evolve_ecr()
- Adds support for Pauli (and related classes) to evolve through ECR gates encountered in a quantum circuit.
- Also moved the dicts of special-case gates (`basis_1q`, `basis_2q`, `non-clifford`) outside the subroutine definition. They are now just after the `_evolve_*()` functions they reference.
* fix pauli.evolve bug for certain circuit names
- Should fix qiskit issue #12093
- Bug happened after converting circuit to instruction, which AFAICT was not necessary. Now if input is a QuantumCircuit, that part of the code is bypassed.
- Removed creation of a look-up dict of bit locations, since `QuantumCircuit.find_bit` already provides one.
* add ECRGate to `evolve()` tests
* mark gate-dicts as private
* add test evolving by circuit named 'cx'
Test showing issue #12093 is solved.
* add release note for pauli-evolve fixes
* Update test_pauli_list.py
We previously had not added the build-only requirements `setuptools`
and `setuptools-rust` to `requirements-dev.txt` because the majority of
contributors would not need to build the Rust components outside of the
regular build isolation provided by `pip` or other PEP-517 compatible
builders.
Since we are accelerating our use of Rust, and it's more likely that
more users will need to touch the Rust components, this adds the build
requirements to the developer environment, so it's easier for people to
do our recommended
python setup.py build_rust --inplace --release
for the cases that they want to test against optimised versions of the
Rust extensions while still maintaining a Python editable installation.
* Update gate dictionary in `two_local.py`
This commit replaces the hard-coded gate dictionary with the one
generated by the method `get_standard_gate_name_mapping`.
Resolves #1202
* Release note
* fix typo
* Tweak release-note wording
* Add explicit test
---------
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
* Refactor Rust crates to build a single extension module
Previously we were building three different Python extension modules out
of Rust space. This was convenient for logical separation of the code,
and for making the incremental compilations of the smaller crates
faster, but had the disadvantage that `qasm2` and `qasm3` couldn't
access the new circuit data structures directly from Rust space and had
to go via Python.
This modifies the crate structure to put the Rust-space acceleration
logic into crates that we only build as Rust libraries, then adds
another crate (`pyext`) that is the only one actually built as a shared
Python C extension. This lets the Rust crates interact with each other
(the Rust crates still contain PyO3 Python-binding logic internally) and
accept and output each others' Python types.
The one Python extension is still called `qiskit._accelerate`, but it's
built by the Rust crate `qiskit-pyext`. This necessitated some further
changes, since `qiskit._accelerate` now contains accelerators for lots
of parts of the Qiskit package hierarchy, but needs to have a single
low-level place to initialise itself:
- The circuit logic is separated out into its own `circuit` crate so
  that it can form the lowest part of the Rust hierarchy.
This is done so that `accelerate` with its grab-bag of accelerators
from all over the place does not need to be the lowest part of the
stack. Over time, it's likely that everything will start touching
`circuit` anyway.
- `qiskit._qasm2` and `qiskit._qasm3` respectively became
`qiskit._accelerate.qasm2` and `qiskit._accelerate.qasm3`, since they
no longer get their own extension modules.
- `qasm3` no longer stores a Python object on itself during module
initialisation that depended on `qiskit.circuit.library` to create.
Now, the Python-space `qiskit.qasm3` does that and the `qasm3` crate
just retrieves that at Python runtime, since that crate requires
Qiskit to be importable anyway.
* Fix and document Rust test harness
* Fix new clippy complaints
These don't appear to be new from this patch series, but changing the
access modifiers on some of the crates seems to make clippy complain
more vociferously about some things.
* Fix pylint import order
* Remove erroneous comment
* Fix typos
Co-authored-by: Kevin Hartman <kevin@hart.mn>
* Comment on `extension-module`
* Unify naming of `qiskit-pyext`
* Remove out-of-date advice on starting new crates
---------
Co-authored-by: Kevin Hartman <kevin@hart.mn>
* Fix typing-extensions
* Add dagdependency_v2 and converters
* Add V2 tests
* Cleanup testing
* Move import
* Add copy_empty_like and topological_op_nodes
* Lint and providers update
* Change to apply_operation_back
* Fix converter init
* Cleanup and remove indices methods
* Change to _get_node and cleanup
* Add reno
* Fix node_id bug in dag_drawer
* Lint
* Convert to using leading _ for class and functions
* Cleanup test file import
* Fix converters
* Restart CI
* Remove reno and remove_ops functions and fix typo
* Remove init entries for v2
* Remove meaningless methods
These `quantum_successors` etc have no meaning in `DAGDependency` (and
V2) since the edges don't carry the data that the `DAGCircuit` functions
check for; `DAGDependency` is in part a deliberate erasure of this data.
---------
Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
This commit bumps the faer version used to the latest release, 0.18.x.
There were some changes to the package structure in 0.18 that reduced
the number of crates used to build faer, as it's now a flatter project.
However, the converter to ndarray is now in a separate faer-ext crate,
which is added to the dependencies list.
* find_depreaction is pending-aware
* Apply suggestions from code review
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* taking the suggestions from Eric
* pending_arg
---------
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
The recent Sabre refactor (gh-11977, 3af3cf588) inadvertently switched
the order that physical qubits were considered when modifying the front
layer after a swap insertion. This could occasionally have an impact in
the swap-chooser and extended-set population, if a swap enabled two
separate gates at once.
In the control-flow builders, there are typically two ways the
individual blocks can be reconstructed to be unified over the qubit and
clbit resources. We generally attempt to avoid completely rebuilding
the circuits unless we have to. In cases where the resources are
visited in an incompatible order, however, we have to construct new
circuit objects, and in these cases, we were failing to transfer the
`Var` use over completely.
This commit is an overdue tidy up of the Sabre code, which had been
through a few growth spurts with the addition of the combined
layout-and-routing pass in Rust, and the support for control-flow
operations and directives. I have a few things I'd like to try with
Sabre, and the code was getting quite unwieldy to modify and extend.
This refactors the Sabre routing internals, encapsulating a "routing
target" into a single view object that is used to define the hardware
target, and the stateful components of the routing algorithm into a
formal `RoutingState` object. The functions that build up the routing
algorithm then become stateful instance methods, avoiding needing to
pass many things through several internal function calls.
In addition to the non-trivial lines-of-code savings, this also made it
clearer to me (while doing the refactor) that routing-state methods were
not all really at similar levels of abstraction, meaning that things
like the escape-valve mechanism took up oversized space in the
description of the main algorithm, and the control-flow block handling
was not as neatly split from the rest of the logic as it could have
been. This reorganises some of the methods to make the important
components of the algorithms clearer; the top level of the algorithm now
fits on one screen.
Lastly, this moves both layout and routing into a unified `sabre`
module, mostly just to simplify all the `use` statements and to put
logically grouped code in the same place.
* Add .apply_layout() to Pauli class
* Update qiskit/quantum_info/operators/symplectic/pauli.py
Co-authored-by: Matthew Treinish <mtreinish@kortar.org>
* Apply comment from code review
* Add consistency test
* Consistency test comparing SparsePauliOps
---------
Co-authored-by: Matthew Treinish <mtreinish@kortar.org>
* Reduce number of decomposers used during UnitarySynthesis default plugin
This commit reduces the number of decomposers we run in the default
synthesis plugin when we're in known targets. Previously the unitary
synthesis plugin was trying the product of all 1q and 2q bases for
every 2q pair being synthesized. This resulted in duplicated work for
several 1q bases where potential subsets were available as two different
target euler bases, mainly U321 and U3, or ZSX and ZSXX, if the basis
gates for a qubit were U3, U2, U1 or Rz, SX, and X respectively. This
reuses the logic from Optimize1qGatesDecomposition to make the euler
basis selection, which does the deduplication. Similarly, in the
presence of known 2q gates we can skip the XXDecomposer checks (or
potentially running the XXDecomposer), which should speed up the
selection of decomposers and also reduce the number of decomposers we
run.
* Update qiskit/transpiler/passes/synthesis/unitary_synthesis.py
---------
Co-authored-by: John Lapeyre <jlapeyre@users.noreply.github.com>
* Improve performance and randomness of `QuantumVolume`
This hugely improves the construction time of `QuantumVolume` circuits,
in part by removing the previous behaviour of using low-entropy
individual RNG instances for each SU4 matrix. Now that we need larger
circuits, this would already have been a non-trivial biasing of the
random outputs, but also, calling Scipy random variates in a loop is
_much_ less efficient than vectorising the entire process in one go.
Along with changes to the RNG, this commit also adds a faster path to
`UnitaryGate` to shave off some useless repeated calculations (this can
likely be used elsewhere in Qiskit) and reworks the `QuantumVolume`
builder to use more efficient circuit construction methods.
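The vectorisation idea can be sketched without Qiskit or Scipy (this is the standard QR-based Haar sampler with Mezzadri's phase correction, shown as an illustration of batching; it is not Qiskit's actual implementation, which draws its variates through Scipy):

```python
import numpy as np

def random_unitaries(num, dim, seed=None):
    """Draw `num` Haar-random dim x dim unitaries in one vectorised batch."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(num, dim, dim)) + 1j * rng.normal(size=(num, dim, dim))
    # Stacked QR decomposes all matrices at once; no Python-level loop.
    q, r = np.linalg.qr(z)
    # Normalise the phases of R's diagonal so the result is Haar-distributed,
    # not merely unitary.
    d = np.einsum("...ii->...i", r)
    q *= (d / np.abs(d))[:, None, :]
    return q

su4s = random_unitaries(100, 4, seed=2024)  # e.g. one batch of SU4 seeds
```

A single seeded generator shared across the whole batch also keeps the output reproducible, unlike one low-entropy RNG per matrix.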
* Make test explicitly test reproducibility
* Protect best-effort seed retrieval against old Numpy
We always copy compile-time parametric instructions during
`QuantumCircuit.append` to avoid accidental mutation of other references
in a call to `assign_parameters(inplace=True)`. However, mostly our
rotation gates are added through things like `QuantumCircuit.rz`, where
the only reference to the `RZGate` is internal to the `QuantumCircuit`,
so creating and immediately copying it is a waste of time.
This commit adds a `copy` argument to `QuantumCircuit.append`, so we can
set it to `False` in places where we know we own the instruction being
added.
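The sharing hazard and the new escape hatch can be illustrated with a toy model (hypothetical classes, not Qiskit's actual implementation):

```python
from copy import copy as _shallow_copy

class Gate:
    """Toy stand-in for a parametric instruction."""
    def __init__(self, param):
        self.param = param

class Circuit:
    """Toy circuit showing the copy-on-append policy."""
    def __init__(self):
        self.data = []

    def append(self, gate, copy=True):
        # Copy by default, so in-place parameter assignment on this circuit
        # cannot mutate a gate object the caller still holds elsewhere.
        self.data.append(_shallow_copy(gate) if copy else gate)

shared = Gate("theta")
qc = Circuit()
qc.append(shared)                  # safe default: qc owns its own copy
qc.append(Gate(0.5), copy=False)   # freshly created: skip the wasted copy
qc.data[0].param = 1.23            # models assign_parameters(inplace=True)
assert shared.param == "theta"     # the caller's gate is untouched
```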
* Add asv benchmarks for "utility scale" compilation
This commit adds new benchmarks that parse and compile "utility scale"
circuits to the asv suite. This scale of circuit is increasingly a user
workload of interest, so having nightly benchmarks that cover it is
important. This adds a few benchmarks to time circuit parsing and
compilation so we can track them over time.
Additionally, to better optimize our output binaries on release, a
variant of the same benchmarks is added to the PGO profiling to ensure
we have coverage of this scale of problem as part of our profiling data.
* Rename benchmark file
* Use qasm2.load() instead of QuantumCircuit.from_file()
* switched from sphinx.ext.viewcode to sphinx.ext.linkcode
* removed extra line
* Add section header for source code links
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* removed docstring
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* update return string
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* added back blank line
* Added a method to determine the GitHub branch
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* add blank line
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* remove print statement
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* Try to fix error for contextlib file
We got this stacktrace:
Traceback (most recent call last):
File "/home/vsts/work/1/s/.tox/docs/lib/python3.8/site-packages/sphinx/events.py", line 96, in emit
results.append(listener.handler(self.app, *args))
File "/home/vsts/work/1/s/.tox/docs/lib/python3.8/site-packages/sphinx/ext/linkcode.py", line 55, in doctree_read
uri = resolve_target(domain, info)
File "/home/vsts/work/1/s/docs/conf.py", line 216, in linkcode_resolve
file_name = PurePath(full_file_name).relative_to(repo_root)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/pathlib.py", line 908, in relative_to
raise ValueError("{!r} does not start with {!r}"
ValueError: '/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/contextlib.py' does not start with '/home/vsts/work/1/s'
We shouldn't even attempt to generate a link for the file contextlib.py
* Try to fix error for Jenkins run #20240221.52
New build failed with this stacktrace:
Traceback (most recent call last):
File "/home/vsts/work/1/s/.tox/docs/lib/python3.8/site-packages/sphinx/events.py", line 96, in emit
results.append(listener.handler(self.app, *args))
File "/home/vsts/work/1/s/.tox/docs/lib/python3.8/site-packages/sphinx/ext/linkcode.py", line 55, in doctree_read
uri = resolve_target(domain, info)
File "/home/vsts/work/1/s/docs/conf.py", line 215, in linkcode_resolve
full_file_name = inspect.getsourcefile(obj)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/inspect.py", line 696, in getsourcefile
filename = getfile(object)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/inspect.py", line 665, in getfile
raise TypeError('{!r} is a built-in class'.format(object))
TypeError: <class 'builtins.CustomClassical'> is a built-in class
So I added a condition that the obj should not be a built-in
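Both failure modes can be guarded in one helper (a sketch of the fix, not the exact `conf.py` code; `REPO_ROOT` is a placeholder for the checkout path):

```python
import inspect
from pathlib import PurePath

REPO_ROOT = PurePath("/home/vsts/work/1/s")  # placeholder repo checkout path

def repo_relative_source(obj):
    """Return obj's source path relative to the repo, or None if unlinkable."""
    try:
        file_name = inspect.getsourcefile(obj)  # TypeError for built-ins
    except TypeError:
        return None
    if file_name is None:
        return None
    try:
        return PurePath(file_name).relative_to(REPO_ROOT)
    except ValueError:
        # e.g. contextlib.py: resolved from the Python install, not the repo.
        return None

assert repo_relative_source(dict) is None  # built-in class: no link
```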
* Check that qiskit is in the module name sooner
* moved valid code object verification earlier
* added try except statement to getattr call
* added extra try/except block
* Also support Azure Pipelines
* removed unused import
* Revert Azure support to keep things simple
* added extra "/" to final URL
* Move GitHub branch logic to GitHub Action
* switched to importlib and removed redundant block of code
* Apply suggestions from code review
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>
* added back spaces
* Clarify docs_deploy GitHub logic
1. Remove misleading PR conditional. This workflow doesn't even run in PRs. It was bad copy-pasta from the runtime repo
2. Clarify why we point tag builds to their stable branch name
* Use pathlib for relativizing file name
* Fix relative_to() path
* Remove tox prefix
---------
Co-authored-by: Eric Arellano <14852634+Eric-Arellano@users.noreply.github.com>