Update example tests

parent 442feed40f
commit 48a1f18410

README.md | 14
@@ -88,7 +88,7 @@ SeleniumBase automatically handles common WebDriver actions such as spinning up
 ```
 pytest my_first_test.py --browser=chrome
 
-nosetests my_test_suite.py --browser=firefox
+nosetests test_suite.py --browser=firefox
 ```
 
 Python methods that start with ``test_`` will automatically be run when using ``pytest`` or ``nosetests`` on a Python file, (<i>or on folders containing Python files</i>).
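As an aside for readers of this diff: the context line above notes that ``pytest`` and ``nosetests`` automatically run methods whose names start with ``test_``. That name-based collection convention can be sketched in plain Python (the class and method names below are made up for illustration; real pytest collection is far more featureful):

```python
import inspect

class MyTests:
    """Hypothetical test class used only to demonstrate name-based collection."""
    def test_alpha(self):
        return "ran alpha"

    def helper(self):
        return "not collected (name lacks the test_ prefix)"

    def test_beta(self):
        return "ran beta"

def collect_tests(cls):
    # Toy model of pytest/nose collection: gather methods named test_*.
    return sorted(name for name, obj in inspect.getmembers(cls, inspect.isfunction)
                  if name.startswith("test_"))
```

Here ``collect_tests(MyTests)`` returns ``["test_alpha", "test_beta"]``: the ``helper`` method is skipped because its name lacks the prefix.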
@@ -248,14 +248,14 @@ Here are some other useful **nosetest**-specific arguments:
 --with-id # If -v is also used, will number the tests for easy counting.
 ```
 
-During test failures, logs and screenshots from the most recent test run will get saved to the ``latest_logs/`` folder. Those logs will get moved to ``archived_logs/`` if you have ARCHIVE_EXISTING_LOGS set to True in [settings.py](https://github.com/seleniumbase/SeleniumBase/blob/master/seleniumbase/config/settings.py), otherwise log files with be cleaned up at the start of the next test run. The ``my_test_suite.py`` collection contains tests that fail on purpose so that you can see how logging works.
+During test failures, logs and screenshots from the most recent test run will get saved to the ``latest_logs/`` folder. Those logs will get moved to ``archived_logs/`` if you have ARCHIVE_EXISTING_LOGS set to True in [settings.py](https://github.com/seleniumbase/SeleniumBase/blob/master/seleniumbase/config/settings.py), otherwise log files with be cleaned up at the start of the next test run. The ``test_suite.py`` collection contains tests that fail on purpose so that you can see how logging works.
 
 ```
 cd examples/
 
-pytest my_test_suite.py --browser=chrome
+pytest test_suite.py --browser=chrome
 
-pytest my_test_suite.py --browser=firefox
+pytest test_suite.py --browser=firefox
 ```
 
 If you want to run tests headlessly, use ``--headless``, which you'll need to do if your system lacks a GUI interface. Even if your system does have a GUI interface, it may still support headless browser automation.
@@ -311,7 +311,7 @@ pytest my_first_test.py --browser=chrome
 Using ``--html=report.html`` gives you a fancy report of the name specified after your test suite completes.
 
 ```
-pytest my_test_suite.py --html=report.html
+pytest test_suite.py --html=report.html
 ```
 
 ![](https://cdn2.hubspot.net/hubfs/100006/images/PytestReport.png "Example Pytest Report")
@@ -319,7 +319,7 @@ pytest my_test_suite.py --html=report.html
 You can also use ``--junitxml=report.xml`` to get an xml report instead. Jenkins can use this file to display better reporting for your tests.
 
 ```
-pytest my_test_suite.py --junitxml=report.xml
+pytest test_suite.py --junitxml=report.xml
 ```
 
 #### **Nosetest Reports:**
@@ -327,7 +327,7 @@ pytest my_test_suite.py --junitxml=report.xml
 The ``--report`` option gives you a fancy report after your test suite completes.
 
 ```
-nosetests my_test_suite.py --report
+nosetests test_suite.py --report
 ```
 <img src="https://cdn2.hubspot.net/hubfs/100006/images/Test_Report_2.png" title="Example Nosetest Report" height="420">
 
@@ -27,12 +27,12 @@ pytest my_first_test.py --browser=chrome --demo_mode
 
 Run the example test suite and generate an pytest report: (pytest-only)
 ```bash
-pytest basic_script.py --html=report.html
+pytest test_suite.py --html=report.html
 ```
 
 Run the example test suite and generate a nosetest report: (nosetests-only)
 ```bash
-nosetests my_test_suite.py --report --show_report
+nosetests test_suite.py --report --show_report
 ```
 
 Run a test using a nosetest configuration file: (nosetests-only)
@@ -1,8 +1,10 @@
+import pytest
 from seleniumbase import BaseCase


 class MyTestClass(BaseCase):

+    @pytest.mark.expected_failure
     def test_delayed_asserts(self):
         self.open('https://xkcd.com/993/')
         self.wait_for_element('#comic')
@@ -23,7 +23,7 @@ Reports are most useful when running large test suites. Pytest and Nosetest repo
 Using ``--html=report.html`` gives you a fancy report of the name specified after your test suite completes.
 
 ```bash
-pytest my_test_suite.py --html=report.html
+pytest test_suite.py --html=report.html
 ```
 ![](https://cdn2.hubspot.net/hubfs/100006/images/PytestReport.png "Example Pytest Report")
 
@@ -32,7 +32,7 @@ pytest my_test_suite.py --html=report.html
 The ``--report`` option gives you a fancy report after your test suite completes. (Requires ``--with-testing_base`` to also be set when ``--report`` is used because it's part of that plugin.)
 
 ```bash
-nosetests my_test_suite.py --report --browser=chrome
+nosetests test_suite.py --report --browser=chrome
 ```
 <img src="https://cdn2.hubspot.net/hubfs/100006/images/Test_Report_2.png" title="Example Nosetest Report" height="420">
 
@@ -52,7 +52,7 @@ class App:
             fg="blue").pack()
         self.run5 = Button(
             frame, command=self.run_5,
-            text=("nosetests my_test_suite.py --report --show_report")).pack()
+            text=("nosetests test_suite.py --report --show_report")).pack()
         self.title6 = Label(
             frame,
             text="Basic Failing Test Run showing the Multiple-Checks feature:",
@@ -66,7 +66,7 @@ class App:
             fg="blue").pack()
         self.run7 = Button(
             frame, command=self.run_7,
-            text=("nosetests my_test_suite.py"
+            text=("nosetests test_suite.py"
                   " --browser=chrome --with-db_reporting")).pack()
         self.end_title = Label(frame, text="", fg="black").pack()
         self.quit = Button(frame, text="QUIT", command=frame.quit).pack()
@@ -90,7 +90,7 @@ class App:
 
     def run_5(self):
         os.system(
-            'nosetests my_test_suite.py --report --show_report')
+            'nosetests test_suite.py --report --show_report')
 
     def run_6(self):
         os.system(
@@ -98,7 +98,7 @@ class App:
 
     def run_7(self):
         os.system(
-            'nosetests my_test_suite.py'
+            'nosetests test_suite.py'
             ' --browser=chrome --with-db_reporting')
 
 
@@ -1,11 +1,13 @@
 """ This test was made to fail on purpose to demonstrate the
     logging capabilities of the SeleniumBase Test Framework """

+import pytest
 from seleniumbase import BaseCase


 class MyTestClass(BaseCase):

+    @pytest.mark.expected_failure
     def test_find_army_of_robots_on_xkcd_desert_island(self):
         self.open("https://xkcd.com/731/")
         print("\n(This test fails on purpose)")
@@ -1,5 +1,6 @@
 ''' NOTE: This test suite contains 2 passing tests and 2 failing tests. '''

+import pytest
 from seleniumbase import BaseCase

 
@@ -7,25 +8,27 @@ class MyTestSuite(BaseCase):

     def test_1(self):
         self.open("https://xkcd.com/1663/")
-        self.find_text("Garden", "div#ctitle", timeout=3)
+        self.assert_text("Garden", "div#ctitle", timeout=3)
         for p in range(4):
             self.click('a[rel="next"]')
-        self.find_text("Algorithms", "div#ctitle", timeout=3)
+        self.assert_text("Algorithms", "div#ctitle", timeout=3)

+    @pytest.mark.expected_failure
     def test_2(self):
         # This test should FAIL
         print("\n(This test fails on purpose)")
         self.open("https://xkcd.com/1675/")
         raise Exception("FAKE EXCEPTION: This test fails on purpose.")

     def test_3(self):
         self.open("https://xkcd.com/1406/")
-        self.find_text("Universal Converter Box", "div#ctitle", timeout=3)
+        self.assert_text("Universal Converter Box", "div#ctitle", timeout=3)
         self.open("https://xkcd.com/608/")
-        self.find_text("Form", "div#ctitle", timeout=3)
+        self.assert_text("Form", "div#ctitle", timeout=3)

+    @pytest.mark.expected_failure
     def test_4(self):
         # This test should FAIL
         print("\n(This test fails on purpose)")
         self.open("https://xkcd.com/1670/")
-        self.find_element("FakeElement.DoesNotExist", timeout=0.5)
+        self.assert_element("FakeElement.DoesNotExist", timeout=0.5)
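The hunk above swaps ``find_text``/``find_element`` calls for ``assert_text``/``assert_element``. The new names make the failure behavior explicit: the call asserts, raising when the expected text never shows up. A toy model of that semantics (``TextAsserter`` is hypothetical; real SeleniumBase drives a browser and waits up to ``timeout`` seconds for the element):

```python
class TextAsserter:
    """Toy stand-in for a page under test; holds the visible page text."""
    def __init__(self, page_text):
        self.page_text = page_text

    def assert_text(self, text, selector="html", timeout=3):
        # Simplified: check the stored text instead of polling a live element.
        if text not in self.page_text:
            raise AssertionError("Expected %r within %r" % (text, selector))
        return True
```

The point of the rename is visible in the method body: an absent string is a test failure, not a ``None`` return to check by hand.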
@@ -28,17 +28,17 @@ pytest my_first_test.py --demo_mode --browser=chrome
 
 pytest my_first_test.py --browser=firefox
 
-pytest my_test_suite.py --html=report.html
+pytest test_suite.py --html=report.html
 
-nosetests my_test_suite.py --report --show_report
+nosetests test_suite.py --report --show_report
 
-pytest my_test_suite.py --headless -n 4
+pytest test_suite.py --headless -n 4
 
-pytest my_test_suite.py --server=IP_ADDRESS --port=4444
+pytest test_suite.py --server=IP_ADDRESS --port=4444
 
-pytest my_test_suite.py --proxy=IP_ADDRESS:PORT
+pytest test_suite.py --proxy=IP_ADDRESS:PORT
 
-pytest my_test_suite.py --proxy=USERNAME:PASSWORD@IP_ADDRESS:PORT
+pytest test_suite.py --proxy=USERNAME:PASSWORD@IP_ADDRESS:PORT
 
 pytest test_fail.py --pdb -s
 ```
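The ``--proxy`` lines in the hunk above accept either ``IP_ADDRESS:PORT`` or ``USERNAME:PASSWORD@IP_ADDRESS:PORT``. Parsing that flag value could look roughly like this (``parse_proxy`` is a hypothetical helper sketched for illustration, not SeleniumBase code):

```python
def parse_proxy(value):
    """Split a proxy string into credentials, host, and port."""
    creds = None
    if "@" in value:
        # Credentials come before the last "@"-free host:port pair.
        creds, value = value.split("@", 1)
    host, port = value.rsplit(":", 1)  # rsplit keeps IPv4 dots intact
    user = password = None
    if creds:
        user, password = creds.split(":", 1)
    return {"user": user, "password": password, "host": host, "port": int(port)}
```

For example, ``parse_proxy("user1:secret@127.0.0.1:8899")`` yields ``{"user": "user1", "password": "secret", "host": "127.0.0.1", "port": 8899}``.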
@@ -72,7 +72,7 @@ Or you can create your own Selenium Grid for test distribution. ([See this ReadM
 #### **Example tests using Logging:**
 
 ```bash
-pytest my_test_suite.py --browser=chrome
+pytest test_suite.py --browser=chrome
 ```
 (During test failures, logs and screenshots from the most recent test run will get saved to the ``latest_logs/`` folder. Those logs will get moved to ``archived_logs/`` if you have ARCHIVE_EXISTING_LOGS set to True in [settings.py](https://github.com/seleniumbase/SeleniumBase/blob/master/seleniumbase/config/settings.py), otherwise log files with be cleaned up at the start of the next test run.)
 
@@ -120,7 +120,7 @@ The code above will leave your browser window open in case there's a failure. (i
 Using ``--html=report.html`` gives you a fancy report of the name specified after your test suite completes.
 
 ```bash
-pytest my_test_suite.py --html=report.html
+pytest test_suite.py --html=report.html
 ```
 ![](https://cdn2.hubspot.net/hubfs/100006/images/PytestReport.png "Example Pytest Report")
 
@@ -129,7 +129,7 @@ pytest my_test_suite.py --html=report.html
 The ``--report`` option gives you a fancy report after your test suite completes.
 
 ```bash
-nosetests my_test_suite.py --report
+nosetests test_suite.py --report
 ```
 <img src="https://cdn2.hubspot.net/hubfs/100006/images/Test_Report_2.png" title="Example Nosetest Report" height="420">
 
@@ -79,7 +79,7 @@ sudo pip install -r requirements.txt --upgrade
 sudo python setup.py develop
 ```
 
-#### Step 13. Run an [example test](https://github.com/seleniumbase/SeleniumBase/blob/master/examples/my_first_test.py) in Chrome to verify installation (Takes ~10 seconds)
+#### Step 13. Run an [example test](https://github.com/seleniumbase/SeleniumBase/blob/master/examples/my_first_test.py) in Chrome to verify installation (May take up to 10 seconds)
 
 ![](http://cdn2.hubspot.net/hubfs/100006/images/gcp_bitnami.png "Linux SSH Terminal")
 
@@ -119,7 +119,7 @@ nosetests examples/my_first_test.py --headless --browser=firefox
 * Under "Build", click the "Add build step" dropdown and then select "Execute shell".
 * For the "Command", put:
 ```bash
-nosetests examples/my_first_test.py --headless --browser=chrome
+pytest examples/my_first_test.py --headless --browser=chrome
 ```
 * Click "Save" when you're done.
 
@@ -179,7 +179,7 @@ If you have a web application that you want to test, you'll be able to create Se
 * For the "Execute shell", use the following as your updated "Command":
 
 ```bash
-nosetests examples/my_test_suite.py --headless --browser=chrome --with-db_reporting --with-testing_base
+pytest examples/test_suite.py --headless --browser=chrome --with-db_reporting --with-testing_base
 ```
 
 * Click "Save" when you're done.
@@ -32,7 +32,7 @@ When the Grid Hub Console is up and running, you'll be able to find it here: [ht
 Now you can run your tests on the Selenium Grid:
 
 ```
-pytest my_test_suite.py --server=IP_ADDRESS --port=4444
+pytest test_suite.py --server=IP_ADDRESS --port=4444
 ```
 
 You can also run your tests on [BrowserStack](https://www.browserstack.com/automate#)'s Selenium Grid server (and not worry about managing your own Selenium Grid):
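The ``--server``/``--port`` flags in the hunk above point tests at a Selenium Grid hub. The hub endpoint those flags resolve to is conventionally ``http://SERVER:PORT/wd/hub`` (``grid_url`` below is an illustrative helper, not part of SeleniumBase):

```python
def grid_url(server, port=4444):
    """Build the Selenium Grid hub endpoint for a remote WebDriver connection.
    /wd/hub is the standard hub path; 4444 is the default Grid port."""
    return "http://{}:{}/wd/hub".format(server, port)
```

A remote WebDriver client would then connect to ``grid_url("IP_ADDRESS")`` instead of launching a local browser.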