Commit Graph

13 Commits

Author SHA1 Message Date
Matthew Treinish 75d66dd8ba
Add support for Python 3.11 (#9028)
* Add support for Python 3.11

Python 3.11.0 was released on 10-24-2022; this commit marks the start of
support for Python 3.11 in qiskit. It adds the supported Python version to
the package metadata and updates the CI configuration to run test jobs
on Python 3.11 and build Python 3.11 wheels on release.
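As an illustration of the packaging metadata side of this change, a hypothetical setup() excerpt (not the actual qiskit-terra setup.py) might advertise the new interpreter with a trove classifier:

```python
from setuptools import setup

# Hypothetical excerpt, not the real qiskit-terra configuration: a newly
# supported interpreter is advertised by adding its trove classifier.
setup(
    name="example-package",
    version="0.1.0",
    classifiers=[
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
    ],
    python_requires=">=3.7",
)
```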

* Fix inspect.Parameter usage for API change in 3.11

Per the Python 3.11.0 release notes, inspect.Parameter now raises a
ValueError if the name argument is a Python keyword. This was causing a
test failure in one case where a parameter named `lambda` was used.
This commit adjusts the parameter name in the tests to be `lam` to avoid
this issue.
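A minimal reproduction of the interpreter change (illustrative only; the real failure came from a circuit parameter named `lambda` in the tests):

```python
import inspect

# On Python 3.11+ a keyword such as "lambda" is rejected as a parameter name;
# older interpreters accepted it.
try:
    inspect.Parameter("lambda", inspect.Parameter.POSITIONAL_OR_KEYWORD)
except ValueError as exc:
    print(f"rejected: {exc}")

# Renaming the parameter, as the tests now do, avoids the error.
print(inspect.Parameter("lam", inspect.Parameter.POSITIONAL_OR_KEYWORD))
```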

* Set a version cap on the jax dev requirement

Currently jax doesn't publish Python 3.11 wheels, which is blocking test
runs with python 3.11. Since jax is an optional package only used for
the gradient package we can just skip it, as it isn't a full blocker for
using python 3.11. This commit sets an environment marker on the jax
dev requirements to only try to install it on Python < 3.11.
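How such a marker behaves can be checked with the `packaging` library; a small sketch (the exact requirements line used may differ):

```python
from packaging.markers import Marker

# A requirement like `jax; python_version < "3.11"` is only installed when its
# marker evaluates to True for the running interpreter.
marker = Marker('python_version < "3.11"')
print(marker.evaluate())  # True on Python 3.10 and older, False on 3.11
```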

* Set python version cap on cplex in CI

* DNM: Test wheel builds work

* Skip tests on i686/win32 wheel builds with python 3.11

* Revert "DNM: Test wheel builds work"

This reverts commit 725c21b465.

* Run QPY backwards compat tests on trailing edge Python version

This commit moves the qpy backwards compatibility testing from the
leading edge python version, which in this PR branch is Python 3.11, to
the trailing edge Python version which is currently 3.7. Trying to add
support for a new Python version has demonstrated that we can't use the
leading edge version as historical versions of Qiskit used to generate
old QPY payloads are not going to be generally installable with newer
Python versions. So by using the trailing edge version instead we can
install all the older versions of Qiskit as there is Python
compatibility for those Qiskit versions. Eventually we will need to
raise the minimum Qiskit version we use in the QPY tests, when Python
3.9 goes EoL in October 2025 and Qiskit Terra 0.18.0 no longer has any
supported versions of Python it was released for. We could probably get by
for another year, until Python 3.10 goes EoL in 2026; it just means building
0.18.x and 0.19.x from source for the testing. But when Python 3.11 becomes
our oldest supported version we'll likely have to bump the minimum version.

This does go a bit counter to the intent of the test matrix, which is to make
the first stage return fast and do a more thorough check in the second stage.
But in this case the extra runtime is worth the longer term stability
of the tests.

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2022-11-03 16:58:14 +00:00
Kevin Hartman 799caa7ad9
Try using Git executable for cargo fetch. (#8987) 2022-10-25 06:34:35 +00:00
Matthew Treinish dfca1fb90d
Revert "Pin setuptools in CI (#8526)" (#8530)
* Revert "Pin setuptools in CI (#8526)"

With the release of setuptools 64.0.1 the issues previously blocking CI
and editable installs more generally should now be fixed. This commit
reverts the pins previously introduced to unblock CI and work around the
broken release.

This reverts commit 82e38d1de0.

* Add back SETUPTOOLS_ENABLE_FEATURES env var for legacy editable install

Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2022-08-22 21:20:09 +00:00
Matthew Treinish 82e38d1de0
Pin setuptools in CI (#8526)
* Pin setuptools in CI

The recent setuptools 64.0.0 release introduced a regression
that prevents editable installs from working (see pypa/setuptools#3498).
This is blocking CI as we use editable installs to build and install
terra for testing. When there is an upstream release fixing this issue
we can remove the pins.

* Remove pip/setuptools/wheel manual install step

* Try venv instead of virtualenv

* Revert "Try venv instead of virtualenv"

This reverts commit 3ada819330.

* Revert "Remove pip/setuptools/wheel manual install step"

This reverts commit 831bc6e0db.

* Pin in constraints.txt too

* Lower version further

* Pin setuptools-rust too

* Set editable install to legacy mode via env var

* Set env variable correctly everywhere we build terra

* Add missing env variable setting for image tests
2022-08-12 09:13:59 +02:00
Jim Garrison a7d66f9aa7
Remove Cython as a build dependency (#7777)
This follows up on #7702, which removed the last of the Cython code.
The current change was presumably intended to be part of that
PR, based on the release note added there:

> Cython is no longer a build dependency of Qiskit Terra and is no
> longer required to be installed when building Qiskit Terra from
> source.
2022-03-14 22:18:57 +00:00
Matthew Treinish ccc371f8ff
Implement multithreaded stochastic swap in rust (#7658)
* Implement multithreaded stochastic swap in rust

This commit is a rewrite of the core swap trials functionality in the
StochasticSwap transpiler pass. Previously this core routine was written
using Cython (see #1789) which had great performance, but that
implementation was single threaded. The core of the stochastic swap
algorithm is by its nature well suited to parallel execution: it
attempts a number of random trials, then picks the best result from all
the trials and uses that for the layer. These trials can easily be run
in parallel as there is no data dependency between them (the shared
inputs are read-only). As the algorithm generally scales exponentially,
the speedup from running the trials in parallel can offset this and
improve the scaling of the pass. Running the pass in parallel was
previously tried in #4781 using Python multiprocessing, but the overhead
of launching an additional process and serializing the input arrays for
each trial was significantly larger than the speed gains. To run the
algorithm efficiently in parallel, multithreading is needed to leverage
shared memory on the shared inputs.
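A conceptual Python sketch of that trial structure (not the actual Rust implementation; in CPython the GIL limits the benefit for pure-Python work, which is part of why the real code lives in Rust):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_trial(seed, shared_inputs):
    # One randomized trial against read-only shared inputs; the cost here is a
    # stand-in for the depth/size metric the real pass computes.
    rng = random.Random(seed)
    return rng.random(), {"seed": seed, "swaps": []}

def best_of_trials(num_trials, shared_inputs):
    # Trials have no data dependency on each other, so they can run concurrently
    # and the lowest-cost result is selected at the end.
    with ThreadPoolExecutor() as pool:
        results = pool.map(run_trial, range(num_trials), [shared_inputs] * num_trials)
    return min(results, key=lambda result: result[0])

print(best_of_trials(8, shared_inputs={"coupling_map": [(0, 1), (1, 2)]}))
```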

This commit rewrites the cython routine using rust. This was done for
two reasons. The first is that rust's safety guarantees make dealing
with and writing parallel code much easier and safer. It's also
multiplatform because the rust language supports native threading
primitives in the language. The second is that writing parallel cython
code using OpenMP has limitations, mainly on windows. In
practice it was also difficult to write and maintain parallel cython
code as it has very strict requirements on python and c code
interactions. It was much faster and easier to port it to rust and the
performance for each iteration (outside of parallelism) is the same (in
some cases marginally faster) in rust. The implementation here reuses
the data structures that the previous cython implementation introduced
(mainly flattening all the terra objects into 1d or 2d numpy arrays for
efficient access from C).

The speedups from this PR can be significant: calling transpile() on a
400 qubit (with a depth of 10) QV model circuit targeting a 409 qubit heavy
hex coupling map goes from ~200 seconds with the single threaded cython
to ~60 seconds with this PR locally on a 32 core system, and transpiling
a 1000 qubit (also with a depth of 10) QV model circuit targeting a 1081
qubit heavy hex coupling map goes from ~6500 seconds to ~720
seconds.

The tradeoff with this PR is that for local qiskit-terra development a rust
compiler needs to be installed. This is made trivial using rustup
(https://rustup.rs/), but it is an additional burden and one that we
might not want to impose. If so we can look at turning this PR into a
separate repository/package that qiskit-terra can depend on. The
tradeoff there is that we'd be adding friction to the api boundary
between the pass and the core swap trials interface. But it does ease
the development dependency burden for qiskit-terra.

* Sanitize packaging to support future modules

This commit fixes how we package the compiled rust module in
qiskit-terra. As a single rust project only gives us a single compiled
binary output we can't use the same scheme we did previously with cython
with a separate dynamic lib file for each module. This shifts us to
making the rust code build a `qiskit._accelerate` module and in that we
have submodules for everything we need from compiled code. For this PR
there is only one submodule, `stochastic_swap`, so for example the
parallel swap_trials routine can be imported from
`qiskit._accelerate.stochastic_swap.swap_trials`. In the future we can
have additional submodules for other pieces of compiled code in qiskit.
For example, the likely next candidate is the pauli expectation value
cython module, which we'll likely port to rust and also make parallel
(for a sufficiently large number of qubits). In that case we'd add a new
submodule for that functionality.
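A usage sketch of the import path described above:

```python
# The compiled extension provides a single `qiskit._accelerate` module with one
# submodule per piece of native code, so the parallel routine is imported like
# any other Python name.
from qiskit._accelerate.stochastic_swap import swap_trials  # noqa: F401
```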

* Adjust random normal distribution to use correct mean

This commit corrects the use of the normal distribution to have the mean
set to 1.0. Previously we were doing this out of band for each value by
adding 1 to the random value which wasn't necessary because we could
just generate it with a mean of 1.0.
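The equivalence being relied on, sketched with numpy (the Rust code uses its own RNG, so this is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(2022)
# Shifting a standard normal by 1.0 and sampling with loc=1.0 directly draw
# from the same distribution; the fix simply generates with the correct mean.
shifted = 1.0 + rng.normal(loc=0.0, scale=1.0, size=5)
direct = rng.normal(loc=1.0, scale=1.0, size=5)
print(shifted.mean(), direct.mean())
```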

* Remove unnecessary extra scope from locked read

This commit removes an unnecessary extra scope around the locked read of
the location where we store the best solution. The scope was previously there to
release the lock after we check if there is a solution or not. However
this wasn't actually needed as we can just do the check inline and the
lock will release after the condition block.

* Remove unnecessary explicit type from opt_edges variable

* Fix indices typo in NLayout constructor

Co-authored-by: Jake Lishman <jake@binhbar.com>

* Remove explicit lifetime annotation from swap_trials

Previously the swap_trials() function had an explicit lifetime
annotation `'p` which wasn't necessary because the compiler can
determine this on its own. Normally when dealing with numpy views and a
Python object (i.e. a GIL handle) we need a lifetime annotation to tell
the rust compiler the numpy view and the python gil handle will have the
same lifetime. But since swap_trials doesn't take a gil handle and
operates purely in rust we don't need this lifetime and the rust
compiler can deal with the lifetime of the numpy views on their own.

* Use sum() instead of fold()

* Fix lint and add rust style and lint checks to CI

This commit fixes the python lint failures and also updates the ci
configuration for the lint job to also run rust's style and lint
enforcement.

* Fix returned layout mapping from NLayout

This commit fixes the output list from the `layout_mapping()`
method of `NLayout`. Previously, it would incorrectly return the
wrong indices; it should be a list of virtual -> physical
qubit pairs. This commit corrects that error.

Co-authored-by: georgios-ts <45130028+georgios-ts@users.noreply.github.com>

* Tweak tox configuration to try and reliably build rust extension

* Make swap_trials parallelization configurable

This commit makes the parallelization of the swap_trials() configurable.
This is done in two ways. First, a new argument parallel_threshold is
added which takes an optional int that is the number of qubits at which to
switch between the parallel and serial versions. The second is that it
takes into account the state of the QISKIT_IN_PARALLEL environment
variable. This variable is set to TRUE by parallel_map() when we're
running in a multiprocessing context. In those cases also running
stochastic swap in parallel will likely just cause too much load as
we're potentially oversubscribing work to the number of available CPUs.
So, if QISKIT_IN_PARALLEL is set to True we run swap_trials serially.

* Revert "Make swap_trials parallelization configurable"

This reverts commit 57790c84b0. That
commit attempted to solve some issues in test running, mainly around
multiple parallel dispatch causing excess load. But in practice it was
broken and caused more issues than it fixed. We'll investigate and add
control for the parallelization in a future commit separately after all
the tests are passing so we have a good baseline.

* Add docs to swap_trials() and remove unnecessary num_gates arg

* Fix race condition leading to non-deterministic behavior

Previously, in the case of circuits that had multiple best possible
depth == 1 solutions for a layer, there was a race condition in the fast
exit path between the threads which could lead to a non-deterministic
result even with a fixed seed. The output was always valid, but which
result was returned depended on which parallel thread with an ideal solution
finished last and wrote to the locked best result last. This was causing
weird non-deterministic test failures for some tests because of #1794, as
the exact match result would change between runs. This could be a bigger
issue because user expectations are that with a fixed seed set on the
transpiler the output circuit will be deterministically
reproducible.

To address this issue this commit trades off some performance to
ensure we're always returning a deterministic result in this case. This
is accomplished by checking, when a depth == 1 solution has been found by
another trial thread, whether that already found solution has a trial
number lower than this thread's trial number, and only acting (either
exiting early or updating the already found depth == 1 solution) when it
does. This does limit the effectiveness of the fast exit, but in practice
it should hopefully not affect the speed too much.
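A conceptual sketch of that tie-breaking rule in Python (the real logic lives in the Rust trial loop): a stored ideal solution is only replaced by one from a lower-numbered trial, so the winner no longer depends on thread scheduling.

```python
import threading

_lock = threading.Lock()
_best_ideal = None  # (trial_index, solution) for the best depth == 1 result

def record_ideal_solution(trial_index, solution):
    # Lower trial numbers always win ties, regardless of which thread finishes
    # first, so the selected result is deterministic for a fixed seed.
    global _best_ideal
    with _lock:
        if _best_ideal is None or trial_index < _best_ideal[0]:
            _best_ideal = (trial_index, solution)
```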

As part of this commit some tests are updated because the new
deterministic behavior is slightly different from the previous results
from the cython serial implementation. I manually verified that the
new output circuits are still valid (it also looks like the quality
of the results in some of those cases improved, but this is strictly
anecdotal and shouldn't be taken as a general trend with this PR).

* Apply suggestions from code review

Co-authored-by: georgios-ts <45130028+georgios-ts@users.noreply.github.com>

* Fix compiler errors in previous commit

* Revert accidental commit of parallel reduction in compute_cost

This was only for local testing to prove it was a bad idea and was
accidentally included in the branch. We should not nest the parallel
execution like this.

* Eliminate short circuit for depth == 1 swap_trial() result

This commit eliminates the short circuit fast return in swap_trial()
when another trial thread has found an ideal solution. Trying to do this
in a parallel context is tricky to make deterministic because in cases
of >1 depth == 1 solutions there is an inherent race condition between
the threads for writing out their depth == 1 result to the shared
location. Different strategies were tried to make this reliably
deterministic but there was still a race condition. Since this was just a
performance optimization to avoid doing unnecessary work this commit
removes this step. Weighing improved performance against repeatability
in the output of the compiler, the reproducible results are more
important. After we've adopted a multithreaded stochastic swap we can
investigate adding this back as a potential future optimization.

* Add missing docstrings

* Add section to contributing on installing from source

* Make rust python classes pickleable

* Add rust compiler install to linux wheel jobs

* Try more tox changes to fix docs builds

* Revert "Eliminate short circuit for depth == 1 swap_trial() result"

This reverts commit c510764a77. The
removal there was premature and we had a fix for the non-determinism in
place, aside from a typo which was preventing it from working.

Co-Authored-By: Georgios Tsilimigkounakis <45130028+georgios-ts@users.noreply.github.com>

* Fix submodule declaration and module attribute on rust classes

* Fix rust lint

* Fix docs job definition

* Disable multiprocessing parallelism in unit tests

This commit disables the multiprocessing based parallelism when running
unittest jobs in CI. We have historically defaulted to using
multiprocessing only in environments where the "fork" start method is
available because this has the best performance and has no caveats
around how it is used by users (you don't need an
`if __name__ == "__main__"` guard). However, the use of the "fork"
method isn't always 100% reliable (see
https://bugs.python.org/issue40379), which we saw on Python 3.9 in #6188.
In unittest CI (and tox) by default we use stestr which spawns (not using
fork) parallel workers to run tests in parallel. With this PR this means
in unittest we're now running multiple test runner subprocesses, which
are executing parallel dispatched code using multiprocessing's fork
start method, which is executing multithreaded rust code. These three layers
of nesting hang fairly reliably as Python's fork doesn't seem to
be able to handle this many layers of nested parallelism. There are two
ways I've been able to fix this. The first is to change the start method
used by `parallel_map()` to either "spawn" or "forkserver"; neither of
these suffers from random hanging. However, doing this in the
unittest context causes significant overhead and slows down test
execution. The other is to just disable the
multiprocessing, which fixes the hanging and doesn't impact runtime
performance significantly (and might actually help in CI so we're not
oversubscribing the limited resources).

As I have not been able to reproduce `parallel_map()` hanging in
a standalone context with multithreaded stochastic swap this commit opts
for just disabling multiprocessing in CI and documenting the known issue
in the release notes as this is the simpler solution. It's unlikely that
users will nest parallel processes as it typically hurts performance
(and parallel_map() actively guards against it); we only did it in
testing previously because the tests which relied on it were a small
portion of the test suite (roughly 65 tests) and typically did not have
a significant impact on the total throughput of the test suite.
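For reference, the start methods discussed above can be selected explicitly; a small illustration (this is not the CI change itself):

```python
import multiprocessing

if __name__ == "__main__":
    # "fork" is fast but fragile when nested under other parallelism, while
    # "spawn"/"forkserver" avoid the hang at the cost of per-task startup
    # overhead and of requiring this __main__ guard.
    print(multiprocessing.get_all_start_methods())
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool(processes=2) as pool:
        print(pool.map(abs, [-3, -2, -1]))
```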

* Fix typo in azure pipelines config

* Remove unnecessary extension compilation for image tests

* Add test script to explicitly verify parallel dispatch

In an earlier commit we disabled the use of parallel dispatch in
parallel_map() to avoid a bug in cpython associated with its fork()
based subprocess launch. Doing this works around the bug, which was
reliably triggered by running multiprocessing in parallel subprocesses.
It also has the side benefit of providing a ~2x speed up for test suite
execution in CI. However, this meant we lost our test coverage in CI for
running parallel_map() with actual multiprocessing based parallel
dispatch. To ensure we don't inadvertently regress this code path
moving forward this commit adds a dedicated test script which runs a
simple transpilation in parallel and verifies that everything works as
expected with the default parallelism settings.
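A minimal sketch of what such a verification script could look like, assuming the `qiskit.tools.parallel` location of `parallel_map()` at the time (the actual CI script may differ):

```python
from qiskit import QuantumCircuit, transpile
from qiskit.tools.parallel import parallel_map

def _compile(_index):
    # Build and transpile a small circuit in each parallel task.
    qc = QuantumCircuit(2)
    qc.h(0)
    qc.cx(0, 1)
    qc.measure_all()
    return transpile(qc, basis_gates=["u", "cx"], seed_transpiler=42)

if __name__ == "__main__":
    # With default settings this dispatches to multiprocessing where available.
    results = parallel_map(_compile, list(range(4)))
    assert len(results) == 4
    print("parallel dispatch OK")
```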

* Avoid multi-threading when run in a multiprocessing context

This commit adds a switch between running a single threaded and a
multithreaded variant of the swap_trials loop based on whether the
QISKIT_IN_PARALLEL flag is set. If QISKIT_IN_PARALLEL is set to TRUE
this means the `parallel_map()` function is running in the outer python
context and we're already running in multiprocessing. In that case we do
not want to be running in multiple threads generally, as that will lead
to potential resource exhaustion by spawning n processes each potentially
running with m threads, where `n` is `min(num_phys_cpus, num_tasks)` and
`m` is num_logical_cpus (although only
`min(num_logical_cpus, num_trials)` will be active); on the typical
system there aren't enough cores to leverage both multiprocessing and
multithreading. However, in case a user does have such an environment
they can set the `QISKIT_FORCE_THREADS` env variable to `TRUE` which
will use threading regardless of the status of `QISKIT_IN_PARALLEL`.
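The selection logic described above, sketched in Python (variable names are illustrative; the actual check happens around the call into the Rust code):

```python
import os

# Threads are used unless we're already inside a parallel_map() worker
# (QISKIT_IN_PARALLEL=TRUE), and QISKIT_FORCE_THREADS=TRUE overrides that.
in_parallel = os.getenv("QISKIT_IN_PARALLEL", "FALSE") == "TRUE"
force_threads = os.getenv("QISKIT_FORCE_THREADS", "FALSE") == "TRUE"
use_multithreading = force_threads or not in_parallel
```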

* Apply suggestions from code review

Co-authored-by: Jake Lishman <jake@binhbar.com>

* Minor fixes from review comments

This commit fixes some minor details found during code review. It
expands the section on building from source to explain how to build a
release optimized binary with editable mode, makes the QISKIT_PARALLEL
env variable usage consistent across all jobs, and adds a missing
shebang to the `install_rust.sh` script which is used to install rust in
the manylinux container environment.

* Simplify tox configuration

In earlier commits the tox configuration was changed to try and fix the
docs CI job by going to great effort to try and enforce that
setuptools-rust was installed in all situations, even before it was
actually needed. However, the problem with the docs ci job was unrelated
to the tox configuration and this reverts the configuration to something
that works with more versions of tox and setuptools-rust.

* Add missing pieces of cargo configuration

Co-authored-by: Jake Lishman <jake@binhbar.com>
Co-authored-by: georgios-ts <45130028+georgios-ts@users.noreply.github.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2022-02-28 21:49:54 +00:00
Matthew Treinish 3763e61f16
Bump black version and relax constraint (#7615)
This commit bumps the black version we pin to the latest release,
22.1.0. This release is also the first release not marked as beta and
with that black has introduced a stability policy where no formatting
changes will be introduced on a major version release (which is the
year). [1] With this new policy in place we no longer need to pin to a
single version and can instead constrain the requirement to just the
major version without worrying about a new release breaking ci or local
development. This commit does that and sets the black version
requirement to be any 22.x.y release so that we'll continue to get
bugfixes moving forward without having to manually bump a pinned version.

[1] https://black.readthedocs.io/en/latest/the_black_code_style/index.html#stability-policy
2022-02-03 18:07:22 +00:00
Jake Lishman 01913f41ae
Use manylinux2014 on all Linux builds (#7566)
* Use manylinux2014 on Linux

Numpy and Scipy are moving to drop manylinux2010 wheels on newer
platforms, which gives us some cover to do the same.  To test, we need
to ensure that we pull Numpy and Scipy in binary form only (forcing pip
to install slightly older versions like 1.21 instead of 1.22 on
unsupported Python versions), rather than attempting to build Numpy from
source as part of our testing process.

* Bump cibuildwheel version
2022-01-26 14:22:20 +00:00
Matthew Treinish 371a2bc217
Drop support for 32bit py310 wheel builds (#7553)
* Drop support for 32bit py310 wheel builds

In #7102 we added support for Python 3.10 to Qiskit. However, numpy and
scipy stopped publishing wheels for 32bit platforms in Python 3.10. This
means that to test Python 3.10 32bit wheels we will have to compile
numpy and scipy from source (which for scipy is prohibitively slow).
In response this commit updates our cibuildwheel configuration to skip
tests on 32bit Python 3.10 wheels since we can't reliably test the built
wheels in CI and it would require users to build them from source.
This is the same approach taken in PR #7549 for ppc64le and s390x. For
32 bit platform users on python 3.10 this means they'll need to locally
build numpy, scipy, and potentially other dependencies from source to use
Qiskit Terra with Python 3.10, but for the Qiskit components at least
precompiled binaries will be available.

* Update release note
2022-01-21 21:14:24 +00:00
Matthew Treinish b5f87a6f99
Add cross-build wheel jobs for ppc64le and s390x (#7549)
* Add cross-build wheel jobs for ppc64le and s390x

This commit adds a new CI job at release time to build precompiled
binaries for s390x and ppc64le linux platforms.  These platforms do not
have precompiled wheels for any upstream dependencies (except for
retworkx and tweedledum, which both publish binaries for both platforms).
Since the only publicly available CI service with native s390x and
ppc64le environments is travis, which no longer provides
sufficient quota to open source projects to make it usable, we
have to rely on either cross compilation or emulation. Cross compiling
python wheels, while possible, is quite tricky to set up and configure
in practice, so emulation is used here. This is a much simpler path to
configure, especially because cibuildwheel has support for emulating
non-x86 architectures via QEMU to build wheels. While QEMU emulation of
other architectures is exceedingly slow, we have a 6 hour job time limit
with github actions which should hopefully be sufficient to compile the
binary and build wheels for all our supported python versions.

This was previously attempted in #5844 but we hit a timeout issue
reliably every time trying to build scipy from source. Compiling scipy
is slow on a native system and it's an order of magnitude slower under
qemu emulation. In #5844 we exceeded the 6 hour job time limit just
compiling scipy once for testing. To avoid this issue (since scipy
doesn't publish s390x or ppc64le wheels) this commit skips the test
runs on these jobs. This means we're solely building the wheels and not
testing if they're valid. This also means installing scipy and other
dependencies is still an exercise for the s390x and ppc64le users. But,
if this works it will mean that at least for Qiskit there will not be a
need to compile anything from source.

* DNM: Test builds in CI

This commit should not be merged without being reverted first. It is
changing the CI trigger to run on open pull requests and removing the
twine upload step to trigger the wheel builds for testing and
iteratively fixing issues. Once the builds work this will be reverted
and the PR is ready for final review/merging.

* Skip musl libc wheel builds

* Fix ppc64le job name

* Revert "DNM: Test builds in CI"

Testing shows this works well and we can build and publish ppc64le and
s390x wheels on release.

This reverts commit f06c0b6176 and opens
up the PR for review.

* Add release note
2022-01-21 17:54:03 +00:00
Matthew Treinish 9a743fb2ea
Add support for Python 3.10 (#7102)
* Add support for Python 3.10

Python 3.10.0 was released on 10-04-2021; this commit marks the start of
support for Python 3.10 in qiskit-terra. It adds the supported python version to
the package metadata and updates the CI configuration to run test jobs
on Python 3.10 and build Python 3.10 wheels.

* Fix typo

* Update default envlist in tox.ini to include 3.10

* Bump cibuildwheel to the latest version

This also takes the opportunity to deduplicate the cibuildwheel
configuration using the pyproject.toml support in newer versions of
cibuildwheel. The common options for all builds are put there and per
build overrides (which are only for cross compiling arm wheels) are left
as environment variables in the CI configuration.

* Add missing cibuildwheel config to pyproject.toml

* Ignore internal deprecation warning emitted by jupyter in ci

* Fix black

Co-authored-by: Jake Lishman <jake.lishman@ibm.com>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2022-01-20 00:38:28 +00:00
Matthew Treinish 2eee56616d
Switch to using black for code formatting (#6361)
* make compatible with pylint and pycodestyle

* black config

* flake8 compatible config

* Missing newline

* Add black check to CI and dev requirements

This commit updates the CI configuration and local tox configuration to
leverage black instead of pycodestyle. It adds a check job to ci and the
tox lint job so we can quickly check if black has been run. A tox job
named 'black' is added to run black on all the code in the repo.

* Run black on everything

This commit reformats all the code in the qiskit-terra repository to use
black. It changes no functionality in the project and just adjusts the
code formatting to be consistent and automated. If you are looking at
this commit in the git log you can likely safely ignore any diff from
this commit as it is just the result of running '`black' on the repo
and instead you should look at any commits before or after this for
functional changes.

Co-authored-by: Lev S. Bishop <18673315+levbishop@users.noreply.github.com>
2021-05-05 09:53:39 -04:00
Matthew Treinish 48b6792f1a
Add pyproject.toml to declare cython build requirement (#2278)
Right now when you try to build terra >=0.8.0 and you don't have Cython
installed it will fail. This is because we rely on cython being present
to build the stochastic swap code. While we have a setuptools
setup_requires defining this dependency it doesn't work because the
dependency on cython is needed before setuptools can resolve that for
us. PEP518 provides a solution for this problem by adding the concept of
a pyproject.toml file which can be used to outline build requirements
which are build time depdencies needed before the setup.py is run. With
this file present when you run:

pip install git+https://github.com/Qiskit/qiskit-terra

(or installing from a local checkout)
and cython isn't installed, pip will acquire it and use it to build
the sdist.
2019-05-13 22:13:38 -04:00