The implementation that used warn(DeprecationWarning) wasn't printing to output or being logged (Python hides DeprecationWarning by default). Reimplement it with more straightforward logging messages, and change the task name to buildbitstream.
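A minimal sketch of the replacement (the rootLogger pattern matches the manager's style, but the task signature here is illustrative):

    import logging

    rootLogger = logging.getLogger()

    def buildbitstream(build_config_file):
        ...  # stand-in for the real task

    def buildafi(build_config_file):
        # deprecation notice that actually shows up in output and the log,
        # unlike warnings.warn(..., DeprecationWarning), which Python
        # suppresses by default
        rootLogger.warning("buildafi is deprecated; use buildbitstream instead")
        return buildbitstream(build_config_file)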
It doesn't make sense to have a method in BuildConfigFile that has expectations about
being called after methods of a completely different class. It also seems like a better
design to keep all of the decisions about whether Fabric is running something in parallel
or serially together in the firesim task method main.build_bitstream.
The newer conda toolchain has --enable-default-pie, and the kernel driver build
on CentOS doesn't like that. Adding -fno-pic to the build causes string.h
to not be found. Will look into this more later.
* move get_local_shared_libraries into runtools.utils so that the same helper
  can be used for the switch and the driver
* stop setting LD_LIBRARY_PATH for FireSim-f1 when running; instead,
  add $ORIGIN to the beginning of RPATH for both the driver and the switch
* cleaned up a TODO that was TODONE by Sagar a long time ago
* updated the switch makefile to use standard env vars (so that it is easier
  to tweak the build in standard ways). The only actual change is the addition
  of $ORIGIN at the front of the rpath (see the sketch after this list)
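A quick way to confirm the RPATH change landed, assuming binutils' readelf is on the PATH (the binary path is illustrative):

    import subprocess

    # after the change, the RPATH/RUNPATH entry should begin with $ORIGIN
    out = subprocess.run(["readelf", "-d", "FireSim-f1"],
                         capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        if "RPATH" in line or "RUNPATH" in line:
            print(line.strip())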
propagate storing args in runtime_config: the terminatesome parameters are
no longer needed on the runtime_conf.terminate_run_farm() method,
per Abe's PR feedback
* switch to using the pytest-mock mocker fixture
* create a task_mocker that understands how to mock firesim.TASKS correctly
* write the remaining tests and fix the ones I didn't have correct in the initial PR
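A rough sketch of the task_mocker idea (the fixture body here is illustrative, not the real one):

    import pytest

    @pytest.fixture
    def task_mocker(mocker):
        def _mock_task(name):
            # swap one entry of the module-level firesim.TASKS dict for a
            # MagicMock; pytest-mock undoes the patch when the test ends
            mock = mocker.MagicMock(name=name)
            mocker.patch.dict("firesim.TASKS", {name: mock})
            return mock
        return _mock_task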
Abe was curious whether boto would error if you give it a bogus instance
type when launching an instance. I added a test that shows what moto will
do. The boto test might be a good example of how to un-munge the environment
for actually using AWS in unit tests that don't mock it out.
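The moto test looks roughly like this (the AMI id is a dummy value; moto versions of that era accepted arbitrary image ids and did not validate instance types, whereas real AWS would raise a ClientError):

    import boto3
    from moto import mock_ec2

    @mock_ec2
    def test_bogus_instance_type_is_accepted_by_moto():
        ec2 = boto3.client("ec2", region_name="us-east-1")
        # moto happily launches this; boto3 against real AWS would error
        resp = ec2.run_instances(ImageId="ami-12345678",
                                 InstanceType="not.a-real-type",
                                 MinCount=1, MaxCount=1)
        assert resp["Instances"][0]["InstanceType"] == "not.a-real-type"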
created a check_env function and moved the check from main() into
if __name__ == '__main__' because it made testing less annoying.
If we decide that we still want it in main(), I'd like the function to
exist anyway, so that a fixture early in unit testing can easily munge
the environment checks.
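The shape of the change (the body of check_env is illustrative; the real check verifies the FireSim environment):

    import os
    import sys

    def check_env():
        # illustrative: bail out early if the FireSim environment was not
        # sourced (the real check may look at different variables)
        if "RISCV" not in os.environ:
            sys.exit("FireSim env not set up; source sourceme-f1-manager.sh")

    def main(argv):
        ...

    if __name__ == "__main__":
        # keeping the check out of main() lets a unit-test fixture munge
        # the environment and then call main() directly
        check_env()
        main(sys.argv[1:])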
When patching for testing, I was concerned about where to do the patching,
and making the symbol-table lookups throw seemed like a sensible thing. However,
now that I'm done writing the tests, this seems like it might be overkill,
which is why I'm leaving it in a separate commit in case I want to revert it.
* make it a decorator because that's what it really is
* unify BUILD and RUN tasks by introspecting the config class needed at registration time
* unify special cases by allowing tasks to not take a config class, and tweak terminaterunfarm
so that it only takes the RuntimeConfig (the args are stuffed into it like we do with BuildConfigFile); see the sketch below
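An illustrative sketch of the registration decorator (names are stand-ins; the real code lives in firesim.py):

    class RuntimeConfig: ...        # stand-in for the real config class
    class BuildConfigFile: ...      # stand-in for the real config class

    TASKS = {}

    def register_task(config_cls=None):
        # record the task plus which config class main() should construct
        # for it; config_cls=None covers the tasks that need no config
        def wrap(fn):
            TASKS[fn.__name__] = (fn, config_cls)
            return fn
        return wrap

    @register_task(RuntimeConfig)
    def terminaterunfarm(conf):
        ...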
There are some tests that depend on setup done earlier in the GH workflow (i.e. managerinit having been run).
Others are unclear as to whether they are supposed to fail before buildafi has been run.
I haven't written most of the RUN_TASK tests yet.
* don't use globals() to dispatch the tasks; create a module-level dict for that purpose
* the pytest top-level conftest.py will drop a firesim.py -> firesim symlink for test importing
* construct_argument_parser() should return the parser without actually
parsing, so that the test harness can use it with arglists (see the sketch after this list)
* mv deploy/tests/test_amis{_snake,}.json
_snake is related to the janky way I manually created MOTO_AMIS_PATH the first time.
I'm replacing that with a customized version of the moto/scripts/get_amis.py script.
* add scripts/update_test_amis.py as copied from moto
just so that the diff is inspectable later...
* Customize scripts/update_test_amis.py for FireSim
And use it to update deploy/tests/test_amis.json
And add a test to check whether test_amis.json needs to be updated
And make all of the tests that would otherwise fail confusingly depend on that test
* add comment about running script when you update the AMI
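With the parser returned un-parsed, the test harness can feed it arglists directly. A sketch (the --forceterminate flag comes from the existing terminaterunfarm CLI, but treat the exact attribute names as illustrative):

    from firesim import construct_argument_parser  # works via the conftest symlink

    def test_parser_accepts_forceterminate():
        parser = construct_argument_parser()
        args = parser.parse_args(["terminaterunfarm", "--forceterminate"])
        assert args.forceterminate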
Co-authored-by: Tim Snyder <snyder.tim@gmail.com>
* make the default timeout 0 to match legacy behavior when the new
key is not present in runtime.ini
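The gist, assuming configparser (the section and key names here are illustrative):

    import configparser

    config = configparser.ConfigParser()
    config.read("runtime.ini")
    # fallback=0 reproduces the legacy behavior when the key is absent
    timeout = config.getint("targetconfig", "timeout", fallback=0)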
* Add test requirements
* moving scripts/aws-setup.py into a loadable module
* adding `sure` to machine-launch-script.sh for a nicer assertion style
* Generalize awstools.get_instances_by_tag_type to not assume it is
being called with the 'fsimcluster' value; it uses a tags dict now. Created
awstools.get_run_instances_by_tag_type, which uses the old semantics and calls
the general function, similar to launch_run_instances()
* Add `additive` parameter to `awstools.launch_instances()` that controls
whether `count` is the number to be launched in that call or a total number
of instances that should be reached including ones already launched that
match the `type` and `tags`
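The core of the `additive` semantics (a sketch; the polarity assumed here is that additive=True launches `count` more, while additive=False treats `count` as the total to reach):

    def num_to_launch(count, additive, already_running):
        # additive: launch `count` instances in this call, regardless of
        # what already exists; total mode: only launch enough matching
        # instances to bring the fleet up to `count`
        return count if additive else max(0, count - already_running)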
* First pass at porting to python3
* Fix import errors | Setup user argcomplete
* Update awstools CLI with user data file | Bump CI to use it
* Wait until launch is complete
* Add userdata as string | Use sudo for machine-launch-script
* Remove execute permissions on machine-launch-script
* Better match on machine-launch-script complete
* Revert python-devel removal
* Use python3 for pytests
* Update more python3 items
* Remove extra shebang
* Port docs to python3 and add to CI
* Add ISCA experiments to CI build check
* Use yum not apt-get
* Add make to doc section
* Bump mutilate-loadgen for sysroot fix
* For BW test build don't use shebang
* Fix docs Makefile options
* Fix more doc warnings
* Add first set of regression tests
* Fix raw_input
* Regression bump | Run workload fix
* Add functools to topology
* Fix linux poweroff test (nic still has issues)
* Update regression scripts
* Ignore machine-launch-script.sh in regression area
* Fix map python3 issues
* Get rid of shebangs
* Fix more regressions
* Print machine-launch.log on fail | More clarification on user_data
* Transfer to CI some shorter regressions
* Add a manual approval to fpga based tests
* Fix indentation in config.yml
* Fix test symlink
* Use spot for CI manager instance | Try to use python3 for aws CI container | Version all pip packages
* Make run-ini-api-tests an executable
* Fix CI terminaterunfarm arg
* Add firesim.pem file to manager
* Bump python in CI instance
* Bump pip in CI container
* Remove pip sudo in CI container
* Fix launch script pip version equals
* Convert ini values into strings
* Properly pass test_dir to opts in CI
* First pass at GH-A
* Round 2 CI fixes
* Try changes
* Remove CircleCI | Switch to fancy GH-A
* Rename self-host setup script
* Update chmod
* Use - instead of _ for env vars
* Rename some defs | Remove extra imports
* Small comment updates
* Forgot to import in ini-api tests | Small comment on Fabric timeouts
* Add sys to linux poweroff
* Update linux timeout, fix small imports
* Update comment
* Fix-up workflow-monitor.py
* Avoid excessive logging in run-linux | Terminate spot instances after max-runtime
* Add more workflow-monitor states | Add pty=False to running workloads
* Update CI documentation | Add CI badge [ci skip]
* Don't use spot instances
* Update CI readme
* Determine runner version from remote repo and check for runner setup
* Address PR comments
* Update CI_README location of where to find IPs | Forgot ret_code
* Only run CI on prs/pushes to dev/main/master
* Fix terminate_workflow_instances in init-manager.py
* Clean up FireSim repo cloning | Only run CI on PRs (since it runs on the merge commit)
* Expose cmdline to launch/terminate instances
* Move instance liveness into the wait function
* More complete cmdline
* Have CI use API
* Forgot import
* Forgot another import
* Debug
* Use yaml parser instead of json
* Use safe yaml loading | Use default amt of instances
* Make sure to use aws-tools for cull-ci-insts
* Bump to AMI 1.11 / Vivado 2021.1
* pip2 removed on new AMIs; install it explicitly
* Knock BOOM AGFI frequency down
* Add a script to run linux boot on some default targets
* [docs] Tell users to use 1.11.0 AMI
Co-authored-by: Cloud User <centos@ip-192-168-3-37.ec2.internal>
Rather than exiting with an unhandled exception when SNS-related permissions failures happen at the end of buildafi, check whether the topic exists (or can be created) early in buildafi and warn the user that the script will be unable to send email (logging the details of the exception), but continue to finish buildafi, because a failure to send the notification should not be fatal for the manager.
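The shape of the early check (a sketch; the topic name and surrounding code are illustrative, and the real version lives in awstools):

    import logging
    import boto3
    from botocore.exceptions import ClientError

    rootLogger = logging.getLogger()

    def get_snsname_arn():
        try:
            # create_topic is idempotent: it returns the ARN of the
            # existing topic if one is already there
            return boto3.client("sns").create_topic(Name="FireSim")["TopicArn"]
        except ClientError as err:
            rootLogger.warning("Unable to create/get SNS topic; the manager "
                               "will not be able to send email, but the "
                               "build will continue.")
            rootLogger.debug(err)
            return None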
This PR also introduces pytest-driven unit tests for the firesim manager. The tests make use of moto to mock the backend of boto and prevent tests from actually calling out to AWS APIs. They also use unittest.mock and botocore.stub.Stubber to inject the desired testing stimulus into the code under test.
Adding two tests for the new awstools.get_snsname_arn() function.
To run them:
* make sure you have the new deps in machine-launch-script.sh
* cd deploy
* pytest
Useful primers on pytest & testing AWS:
* https://tensoriot.medium.com/unit-testing-with-pytest-and-moto-e94fc2eefe7a
* https://github.com/boto/boto3/issues/2485
Useful primer on unittest.mock (formerly the standalone 'mock' package, not to be confused with pymock):
* https://www.fugue.co/blog/2016-02-11-python-mocking-101
'mock' became part of the stdlib as unittest.mock in Python 3.3 and is
available for 2.7 as the backported 'mock' package. Of course, python
being python, there is also a pymock and that's totally different.
Detailed walkthrough of credential protection while using moto:
* https://blog.codecentric.de/en/2020/01/testing-aws-python-code-with-moto/
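One of the two tests looks roughly like this (a sketch; assumes moto's mock_sns and the deploy/awstools module layout, with the region supplied by conftest/env setup):

    from moto import mock_sns

    @mock_sns
    def test_get_snsname_arn():
        # import inside the test so moto's mock is active before any boto3
        # clients get created
        from awstools.awstools import get_snsname_arn
        arn = get_snsname_arn()
        assert arn.startswith("arn:aws:sns:")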
* [ci] Add manager pytests
* [ci] Provide aws credentials for pytest
* [ci] Remove AWS creds registration, and use conftest to provide a region
Co-authored-by: Tim Snyder <snyder.tim@gmail.com>
Co-authored-by: Tim Snyder <timothy.snyder@sifive.com>
Co-authored-by: David Biancolin <david.biancolin@gmail.com>
Enables one to run 'firesim tar2afi --launchtime <timestamp>' to run only
the tar->afi portion of buildafi for the configs listed in config_build.ini.
The behavior of buildafi is unchanged.
When using FireSim as a library in another project, it is useful to keep
configuration files in the repo of the toplevel project. This is a minimal
change that enables a user to provide a path like ../../../toplevel-configs/somegreat.conf.
infrasetup already copies the file correctly; this change only modifies
RuntimeHWConfig.get_local_runtimeconf_binaryname() to explicitly strip the
path with os.path.basename.
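The gist of the change (the path is the example from above):

    import os

    # infrasetup already copied the file next to the driver, so locally
    # only the filename matters
    assert os.path.basename("../../../toplevel-configs/somegreat.conf") == "somegreat.conf"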