* make compatible with pylint and pycodestyle
* black config
* flake8 compatible config
* Missing newline
* Add black check to CI and dev requirements
This commit updates the CI configuration and local tox configuration to
leverage black instead of pycodestyle. It adds a check job to CI and the
tox lint job so we can quickly check if black has been run. A tox job
named 'black' is added to run black on all the code in the repo.
* Run black on everything
This commit reformats all the code in the qiskit-terra repository to use
black. It changes no functionality in the project and just adjusts the
code formatting to be consistent and automated. If you are looking at
this commit in the git log you can likely safely ignore any diff from
this commit as it is just the result of running 'black' on the repo
and instead you should look at any commits before or after this for
functional changes.
Co-authored-by: Lev S. Bishop <18673315+levbishop@users.noreply.github.com>
With the pending support being added to qiskit-bot in
Qiskit/qiskit-bot#9 we're able to rename the default branch for the
repository to 'main'. However, when we do that several things will need
to be updated, most importantly the CI trigger was hardcoded to the
previous default branch 'master'. This commit fixes these references and
should be merged after we rename the branch to re-enable CI.
With the use of mergify to auto backport PRs for us, a new branch is
created by the mergify bot account and a PR opened from that branch
against the stable branch. This model, however, breaks an assumption
made by the report_ci_failure.py script: that it is only ever invoked
from a push event, since pull requests normally don't have credentials
to report back to GitHub. That is only true for pull requests from a
fork, not for pull requests from a branch. Because only trusted users
(or, in this case, bots) can create a branch on the repo, branches also
have access to secrets and therefore have credentials. This was causing
the QacoBot account to open issues every time CI failed on a mergify
backport PR, which is not the expected or desired behavior. This commit
fixes the issue by actually checking that we're in a push context
instead of inferring it from the presence of credentials, and only
running the script if we are. This is done using the
'TRAVIS_EVENT_TYPE' environment variable [1], which specifies what
event triggered the Travis run.
In the future we should look at migrating this CI job to run on azure
pipelines instead of using travis.
[1] https://docs.travis-ci.com/user/environment-variables/
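The push-event check described above can be sketched as follows (a minimal illustration with a hypothetical helper name, not the script's actual code):

```python
import os

def should_report_failure(environ=os.environ):
    """Return True only for Travis 'push' builds; pull_request and
    other event types must not open failure issues."""
    return environ.get("TRAVIS_EVENT_TYPE") == "push"
```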
In #5582 we added a script that is used to ensure we don't import
slow optional dependencies in the default 'import qiskit' path. Since
that PR merged we've managed to remove a few more packages from the
default import path (see #5619, #5485, and #5620) which we should add
the same checks to, so we don't regress and accidentally start
importing them again in the default path. This commit adds all these
packages to the list of imports not allowed as part of 'import qiskit'.
* Add script to verify slow optional imports aren't used in default path
This commit adds a new script that is run in CI to find whether slow
optional imports are in the default qiskit import path. We've recently
had several instances of PRs being pushed that added sympy to the
default import path which makes the overall qiskit import significantly
slower (see #5576 for the most recent occurrence), this script will
catch those situations and report the error.
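The core of the check described above can be sketched as follows; the package list and function name here are hypothetical, and the real script's logic may differ:

```python
# Hypothetical list of slow optional packages that must not load
# as a side effect of 'import qiskit'.
DISALLOWED = ["sympy", "matplotlib", "networkx"]

def find_slow_imports(loaded_modules, disallowed=DISALLOWED):
    """Given the module names present in sys.modules after running
    'import qiskit', return any disallowed optional packages that
    were pulled into the default import path."""
    top_level = {name.split(".")[0] for name in loaded_modules}
    return sorted(pkg for pkg in disallowed if pkg in top_level)
```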
* Use virtualenv in CI for running new script
The azure ci env was using system python instead of the venv where
qiskit-terra was installed for running the new script. This was causing
an error because to work the script imports qiskit to verify nothing
slow is getting imported. This commit fixes this by explicitly using the
venv python instead of the default system python to execute the script.
* Add script to makefile too
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Add fake backends for Bogota, Montreal, and Toronto
IBM Quantum has added 3 new backends recently: Bogota, Montreal, and
Toronto. This commit creates new fake backends FakeBogota, FakeMontreal,
and FakeToronto from snapshots of the real devices.
* Refactor QobjConfiguration transformations to happen in to_dict and from_dict
* Refactor PulseDefaults transformations to happen in to_dict and from_dict
* Remove conversions from update fake backends scripts.
* Update mock backends. Added ibmq_toronto and ibmq_montreal.
* fix bug in the parametric pulse code.
* Undo from_dict/init changes.
* Do conversion of complex number better.
* Update mock backend files.
* Linting.
* Add conversion tests.
Co-authored-by: Matthew Treinish <mtreinish@kortar.org>
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
* Have bot only report failures on maintained branches
Currently the QacoBot for reporting travis failures on commits is
reporting failures on temporary PR branches created on terra. There are
use cases for creating a quick branch to either test CI with secrets
(like to manually retrigger a failed wheel build) or a quick web ui
edit. However, failures on these branches should not have issues opened
against them because they are not maintained branches. This commit
updates the report_ci_failure.py script to only report a failure when
either master or a stable branch encounters a CI failure.
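The branch filter described above might look like the following sketch (function name hypothetical):

```python
def is_maintained_branch(branch_name):
    """Only 'master' and 'stable/*' branches should have CI failures
    reported as GitHub issues; temporary PR branches should not."""
    if branch_name == "master":
        return True
    return branch_name.startswith("stable")
```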
* Update tools/report_ci_failure.py
* elif branch_name.startswith(stable):
* remove re
Co-authored-by: Luciano Bello <luciano.bello@ibm.com>
* Add tools/verify_headers.py to check copyright headers
Currently we're relying on a pylint plugin to verify copyright headers
conform to the project guidelines. However this is problematic for a
number of reasons. The first is that it has proven unstable: there have
been several instances of both users and CI failing on file headers that
do not have any issues. This is pretty common with pylint in general
since it's very sensitive to differences in environment. Additionally,
it was using a regex to try and match the entire header string. This
provided no practical debuggability when it failed (especially when the
failure was unexpected) because we ended up with only a match error.
This commit addresses these issues by removing the use of the regex
based pylint plugin and adding a small script to verify the header in
every qiskit file and provide useful debugging if there is an issue with
a header in a python file.
Fixes #3127
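A line-by-line comparison like the one described above can be sketched as follows; the expected header content here is a hypothetical stand-in, and the real tools/verify_headers.py is more thorough:

```python
# Hypothetical expected header lines; the real project header differs.
EXPECTED_HEADER_LINES = [
    "# This code is part of Qiskit.",
]

def check_header(lines, expected=EXPECTED_HEADER_LINES):
    """Return (ok, message), where the message points at the exact
    offending line instead of a bare regex match failure."""
    for i, want in enumerate(expected):
        got = lines[i].rstrip() if i < len(lines) else "<missing>"
        if got != want:
            return False, "line %d: expected %r, got %r" % (i + 1, want, got)
    return True, "ok"
```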
* Add toxinidir to tox script call
* Explicitly set character encoding for windows
In manual testing running the verify_headers.py script on windows would
fail because it was trying to use the system default character encoding
instead of utf8 and would encounter characters outside the windows
charmap encoding. To avoid this issue this commit explicitly sets the
character encoding to utf8. We already declare that in the header (which
is part of what this script is verifying) so we can assume all files are
utf8 encoded and if they're not the script will catch it.
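The fix amounts to always passing the encoding explicitly, as in this sketch:

```python
# Always pass encoding='utf8' so Windows does not fall back to its
# locale charmap (e.g. cp1252) when reading source files.
def read_source(path):
    with open(path, encoding="utf8") as fd:
        return fd.read()
```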
Co-authored-by: Luciano Bello <luciano.bello@ibm.com>
* Update fake backend snapshots
This commit goes through and updates the snapshots for all the fake
backends. This should fix inconsistencies between the current state of
the backends and what is represented by the fakes prior to this commit.
The json files were generated using the script being added in #4327.
* Fix obvious test failures
* Revert updates to fake melbourne backend
The melbourne device has undergone significant changes since the current
snapshot used for the fake device. This includes the restoration of
previously disabled qubits, which changes the coupling map. This had the
unexpected side effect of breaking several test cases that were
dependent on it having 14 qubits instead of 15. To just simplify things
and not have to update numerous test cases this commit just reverts
melbourne to the previous snapshot.
* Adjust dt units in script and update pulse backends
The auto-update script was incorrectly converting the pulse units to
seconds instead of nanoseconds (which is the expected snapshot value).
This commit corrects that in the script and re-updates the json files
for the pulse backends.
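The unit correction itself is a simple scaling, assuming (per the description above) the API reports dt in seconds while the snapshots expect nanoseconds:

```python
def seconds_to_nanoseconds(dt_seconds):
    """Convert a dt value reported in seconds to the nanoseconds
    expected by the fake-backend json snapshots."""
    return dt_seconds * 1e9
```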
* Fix lint and remove dates from docstrings
* Rollback poughkeepsie updates
This adds a new script to the tools/ directory to update the json file
snapshots for the fake backends. This should be used in the future
periodically to go in and update the snapshots from the ibmq devices.
In writing this a couple of edge cases were found in the to_dict()
method of the backendproperties and backendconfiguration classes that
needed to be fixed for the script to function. These errors are
corrected at the same time.
The subunit_to_junit script was added in #2927 to leverage the test
results aggregation features in azure pipelines. It converts the subunit
stream result format that stestr natively generates and converts it to
a junitxml format that azure pipelines can understand. This script was
heavily based on the version of subunit2junitxml filter script included
in the upstream python-subunit script. However, the use case for the
filter script is different so some undesired behavior was ported over.
Mainly that the return code mirrors that of the test suite in the
result stream (i.e. if the test run fails the converter returns non-zero
too). The python-subunit script did this mainly because it was designed
to be run in a pipeline off the test runner as opposed to a separate
stage post-run. However, since our use case for the subunit_to_junit
script is different having this behavior causes result generation task
failures in azure pipelines UI after the test run failed. This is
undesirable because the junitxml generation task was actually
successful; it just included failed tests. This commit corrects this by
making the script always return 0 unless there is an internal failure.
At the same time this commit also fixes several code lint issues,
including dropping support for the subunit v1 protocol. The subunit v1
protocol won't ever come up in our use case so there was no reason to
include compat code for that here.
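The exit-code behavior described above can be sketched as follows, with a hypothetical `convert` callable standing in for the subunit-to-junitxml conversion:

```python
import sys

def main(convert):
    """Run the conversion; return 0 regardless of whether the converted
    stream contains failed tests, and non-zero only if the conversion
    itself raises (an internal failure)."""
    try:
        convert()
    except Exception as exc:
        print("conversion failed: %s" % exc, file=sys.stderr)
        return 1
    return 0
```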
* Stage azure pipelines jobs
This commit reworks our azure pipelines CI flow to use resources more
responsibly. Previously we ran all the CI jobs at once which means we
took 9 jobs from our capacity all at once for each commit. This is
particularly wasteful if we end up running 9 jobs when one can tell us
it won't work. To improve this situation this commit makes the jobs run
in 2 stages. The first stage runs the python 3.5 tests on linux and
macOS first. These jobs also include the lint checker. If these jobs
both succeed then it will continue on to run tests on windows and python
3.6 and 3.7 for macOS and Linux.
* Add test results data to azure ui
* Tweak results generation
* Only run second stage if the first works
* Fix typo
* Create custom script for junit
* Tweak condition for second stage
* Adjust windows result generation and update result names
* Add randomized testing stage to travis.
* Update report_ci_failure to create an issue for failed random tests.
* Remove randomized tests from unit test and coverage.
* Add hypothesis example database to travis cache.
* Utility for reporting a CI failure as a GitHub issue.
This PR adds a new tool for creating an issue on GitHub when the
CI fails. The utility identifies the report by the name of the
branch and the commit hash, and avoids opening two reports for the same
subject.
* Fixing the master is failing label
* Add automation in CI
* Remove documentation deploy
Remove documentation script and travis job.
* Remove deploy stage from travis
Remove the "deploy doc and pypi" stage entirely, and revise the
"subclassing" of the osx jobs.
* Fix travis script make invocation
* Fix extra lint script commands
* Pin numpy version to <1.16
Pin the numpy version to <1.16 until the potential incompatibilities
are fixed.
* Remove numpy from whitelist, ignore random instead
Remove `numpy` from the `extension-pkg-whitelist`, and instead manually
ignore the check for the members of `numpy.random`, as it was the
only place where the check was significant.
* Renaming QISKit to Qiskit
* Update qiskit/_qiskiterror.py
Co-Authored-By: jaygambetta <jay.gambetta@us.ibm.com>
* Add backwards compat exception class to __init__
This adds the backwards compatible exception class back to __init__ which is likely the point people are importing from.
* Update _instruction.py
* Update _dagcircuit.py
* Update _dagcircuit.py
* creating conf.py for German language
* initial German translation
* minor fixes and typos
* fixing typos and cleaning
* adding DE version
* fixing formatting issues
* fixing formatting issues
* formatting issues
* first DE version of index.rst
* fixing typos
* adding German translation de
* adding German translation
* adding German intro translation
* fixing minor typo
* adding German translation of real backend example
* minor typo
* adding German translation of getting started
* adding German translation of dev_introduction.rst
* fixing minor typos and beautification
* fixing title underline
* fixing image ref
* fixing broken links
* fixing broken reference
* fixing formatting issue
* add line in changelog
* remove note on downgrade requirements according to main doc
* Update CHANGELOG.rst
* minor fixes
Rename `_snake_case_to_camel_case()` to `_camel_case_to_snake_case()`
in order to match its functionality.
Unify the way random identifiers are generated, using `uuid4()` instead
of manually generated random strings.
* Resolves #262: add qiskit.backend methods...
for getting parameter, calibration, and status.
* linting quantumprogram
* Update _backendutils.py
* Update _backendutils.py
* Update _basebackend.py
* Update _qeremote.py
* Update _basebackend.py
* Update _quantumprogram.py
* Update _qeremote.py
* Warnings, updates to tests, and backends
1. Added warnings to the quantum program (may have done this incorrectly)
2. Removed old versions
3. In the configuration we lost the rewrite of the mapping. I agree we
want to update the API but first we need the backend configuration to
work
4. Made the tests use quantum program
Still to be done: check why some tests are failing, and add tests to
test_basebackends that exercise the new backend methods
* Fixing some spelling and linting
* Spelling fixes
* Lint and spelling
* Linter fixes and some spelling in backendutils
* Cleaning up the coupling_map
@ewinston we decided that all new code should use the new map and all
old code the older one. I have removed the complicated identity map we
inserted :-)
* Renaming configuration as config to fix scope error
* Linting in test
* Linting more
* Fixing the vqe
* More linting
* Update test_quantumprogram.py
* lint errors I found trying to debug the error
* Removing backend from quantum_program test
* Making the backend object based
* Adding tests for the backend object
* Linting tests
* Spelling and linting
* Removing some set_api that are not needed
* set log level to debug
* Removing the api from the tests and needed by the program
* Cleaning up and adding a discover
* correct travis.yml env variable specification
* Removing more old code
* Updated lengths with rename of q_name and c_name
* Lint errors found from travis fixed
* Moving import order to help travis linter
* Revise DeprecationWarnings, add note to docstrings
Add a context manager for ensuring the DeprecationWarnings are shown
regardless of the user configuration, as by default they are hidden for
end users (i.e. when not in an interactive interpreter).
Revise the strings for the warnings and add entries to the docstrings
for visually showing them on the online docs.
* Enable deprecation warnings during __init__
Enable the display of deprecation warnings during qiskit.__init__
instead of via a context manager, to simplify the deprecated calls
and help provide visual aids in editors.
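A minimal sketch of forcing deprecation warnings to display (the function names here are hypothetical, not the module's actual API):

```python
import warnings

def enable_deprecation_warnings():
    """Make DeprecationWarning visible even for end users, for whom
    Python hides it by default outside an interactive session."""
    warnings.simplefilter("always", category=DeprecationWarning)

def deprecated_call():
    """Example of a call site that emits a deprecation warning."""
    warnings.warn("old_api is deprecated", DeprecationWarning, stacklevel=2)
```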
* fix cyclic import and scope of configuration
These were pylint warnings. The configuration one was due to having a
function of the same name as a parameter in _backendutils.py.
* fix linter
* remove converting coupling_map to dict
* minor lint fix
* Small fix to the backends.
* Fix LookupErrors, style, comments
Fix several backendutils methods raising ConnectionError instead of
LookupError when the entities were not found.
Update the structure of those methods for simplicity and consistency.
Fix commented out code on tests.
* Revise the travis configuration to use `cmake` for the various
targets, and use "stages" instead of parallel jobs:
* define three stages that are executed if the previous one succeeds:
1. "linter and pure python test": executes the linter and a test
without compiling the binaries, with the idea of providing quick
feedback for PRs.
2. "test": launch the test, including the compilation of binaries,
under GNU/Linux Python 3.6 and 3.6; and osx Python 3.6.
3. "deploy doc and pypi": for the stable branch, deploy the docs
to the landing page, and when using a specific commit message,
build the GNU/Linux and osx wheels, uploading them to test.pypi.
* use yaml anchors and definitions to avoid repeating code (and
working around travis limitations).
* Modify the `cmake` configuration to accommodate the stages flow:
* allow conditional creation of compilation and QA targets, mainly
for saving some time in some jobs.
* move the tests to `cmake/tests.cmake`.
* Update the tests:
* add a `requires_qe_access` decorator that retrieves QE_TOKEN and
QE_URL and appends them to the parameters in an unified manner.
* add an environment variable `SKIP_ONLINE_TESTS` that allows
skipping the tests that need network access.
* replace `TRAVIS_FORK_PULL_REQUEST` with the previous two
mechanisms, adding support for AppVeyor as well.
* fix a problem with matplotlib under osx headless, effectively
skipping `test_visualization.py` during the travis osx jobs.
* Move Sphinx to `requirements-dev.txt`.
* Fix sphinx doc deploy script for Japanese
* Fix sphinx Japanese image links
* Fix sphinx Japanese warnings
Fix warnings during the compilation of the Sphinx documents for the
Japanese language.
* Add link to switch language on Sphinx docs header
* Added japanese doc
* Update Make and structure to allow multiple langs
Update the Makefile "doc" target to build both the English and the
Japanese versions, in separate directories, and the "clean" target to
cleanup the autodoc documentation.
Add a "conf.py" file to the "doc/ja" folder that modifies the relevant
variables, as Sphinx uses that folder as the root document folder when
building the Japanese version.
* Fix missing "install" reference on ja sphinx docs
* Update sphinx doc deploy script for Japanese
Add the Japanese sphinx produced directory to the Github pages deploy
script.
* Fix typo and extra line in Makefile
- concurrence was returning negative values for some state with zero
concurrence
- state_fidelity was returning NaN for pure state density matrices.
Fixed by adding a function funm_svd, which applies a scalar function to
the singular values of a matrix, and implementing the matrix square
root this way.
- fixed concurrence and purity to work for array_like inputs (i.e.
lists)
- added unit tests for qi functions
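The funm_svd approach can be sketched as follows; this is a minimal illustration, valid for positive semidefinite inputs such as density matrices, and the repository's actual implementation may differ:

```python
import numpy as np

def funm_svd(matrix, func):
    """Apply a scalar function to the singular values of a matrix:
    given A = U @ diag(s) @ Vh, return U @ diag(func(s)) @ Vh."""
    unitary1, singular_values, unitary2 = np.linalg.svd(matrix)
    return unitary1 @ np.diag(func(singular_values)) @ unitary2
```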
Updates:
- Fixed state and process tomography notebooks for new quantum program
structure.
- Added option for Pauli ordering by weight in vectorize, devectorize.
- Changed partial_trace to default qubit-0 to right of tensor product,
with option to reverse to ‘normal’ ordering.