mirror of https://github.com/abinit/abipy.git

Exclude abipy/gw from tests if nose

This commit is contained in:
parent 38e6557cb8
commit 8c1f069379

README.rst (28 changed lines)
@@ -57,16 +57,12 @@ that can be installed with::

    pip install abipy

Note that you may need to install pymatgen and other critical dependencies manually.
In this case, please consult the detailed installation instructions provided by the
`pymatgen howto <http://pymatgen.org/index.html#standard-install>`_ to install pymatgen
and then follow the instructions in `our howto <http://pythonhosted.org/abipy/installation.html>`_.

The installation process is greatly simplified if you install the required
python packages through one of the following python distributions:

* `Anaconda <https://continuum.io/downloads>`_
@@ -94,7 +90,7 @@ Optional libraries that are required if you need certain features:

``ipython``

    Required to interact with the AbiPy/Pymatgen objects in the ipython shell
    (strongly recommended, already provided by ``conda``).

``jupyter`` and ``nbformat``
@@ -102,12 +98,6 @@ Optional libraries that are required if you need certain features:

    Install these two packages with ``conda install jupyter nbformat`` or use ``pip``.
    Recommended, but you will also need a web browser to open the notebook.

``wxPython`` and ``wxmplot`` for the GUI

    Use ``conda install wxpython``
@@ -135,7 +125,7 @@ or alternately::

to install the package in developmental mode
(this is the recommended approach, especially if you are planning to implement new features).

The documentation of the **developmental** version is hosted on `github pages <http://abinit.github.io/abipy>`_.

The Github version includes test files for complete unit testing.
To run the suite of unit tests, make sure you have ``py.test`` (recommended)
@@ -152,6 +142,16 @@ Unit tests require two additional packages that can be installed with::

    $ pip install nose-exclude scripttest

Note that several unit tests check the integration between AbiPy and Abinit.
In order to run the tests, you need a working set of Abinit executables and
a ``manager.yml`` configuration file.
A pre-compiled sequential version of Abinit for Linux and OSx can be installed directly from the anaconda cloud with::

    $ conda install abinit -c gmatteo

For further information on the syntax of the configuration file, please consult the
`workflows <http://pythonhosted.org/abipy/workflows.html>`_ section.

Contributing to AbiPy is relatively easy.
Just send us a `pull request <https://help.github.com/articles/using-pull-requests/>`_.
When you send your request, make ``develop`` the destination branch on the repository
@@ -23,7 +23,7 @@ AbiPy supports both Python 2.7 as well as Python >= 3.4.

Note however that Python 2.7 is more intensively tested than py3k, especially at the level of workflows,
so we still recommend py2.7 if you plan to run automatic calculations with AbiPy.

Note also that the majority of the post-processing tools available in AbiPy require output files in
``netcdf`` format, so we strongly suggest to compile Abinit with netcdf support
(use ``--with_trio_flavor="netcdf-fallback"`` at configure time to activate the internal netcdf library;
to link Abinit against an external netcdf library please consult the configuration examples
@@ -16,7 +16,7 @@ In this case, please consult the detailed installation instructions provided by

`pymatgen howto <http://pymatgen.org/index.html#standard-install>`_ to install pymatgen
and then follow the instructions in the :ref:`netcdf4_installation` section.

The installation process is greatly simplified if you install the required
python packages through one of the following python distributions:

* `Anaconda <https://continuum.io/downloads>`_
@@ -49,7 +49,7 @@ Optional libraries that are required if you need certain features:

``ipython``

    Required to interact with the AbiPy/Pymatgen objects in the ipython shell
    (strongly recommended, already provided by ``conda``).

``jupyter`` and ``nbformat``
@@ -57,12 +57,6 @@ Optional libraries that are required if you need certain features:

    Install these two packages with ``conda install jupyter nbformat`` or use ``pip``.
    Recommended, but you will also need a web browser to open the notebook.

``wxPython`` and ``wxmplot`` for the GUI

    Use ``conda install wxpython``
@@ -149,6 +143,8 @@ or alternately::

to install the package in developmental mode
(this is the recommended approach, especially if you are planning to implement new features).

The documentation of the **developmental** version is hosted on `github pages <http://abinit.github.io/abipy>`_.

The Github version includes test files for complete unit testing.
To run the suite of unit tests, make sure you have ``py.test`` (recommended)
or ``nose`` installed and then just type::
@@ -164,6 +160,15 @@ Unit tests require two additional packages that can be installed with::

    $ pip install nose-exclude scripttest

Note that several unit tests check the integration between AbiPy and Abinit.
In order to run the tests, you need a working set of Abinit executables and
a ``manager.yml`` configuration file.
For further information on the syntax of the configuration file, please consult the :ref:`workflows` section.

A pre-compiled sequential version of Abinit for Linux and OSx can be installed directly from the anaconda cloud with::

    $ conda install abinit -c gmatteo

Contributing to AbiPy is relatively easy.
Just send us a `pull request <https://help.github.com/articles/using-pull-requests/>`_.
When you send your request, make ``develop`` the destination branch on the repository
@@ -16,4 +16,4 @@ Mar 10 2017

This is the first stable release in which we have reached a relatively stable API
and a well-defined interface with the netcdf files produced by Abinit.
We recommend Abinit >= 8.0.8b; version 8.2.2 is required to analyze the electronic fatbands
saved in the ``FATBANDS.nc`` file.
@@ -20,6 +20,12 @@ environment variables, load modules with ``module load``, run MPI applications w

It's also a very good idea to run the Abinit test suite with the `runtest.py script <https://asciinema.org/a/40324>`_
before running production calculations.

.. TIP::

    A pre-compiled sequential version of Abinit for Linux and OSx can be installed directly from the anaconda cloud with::

        $ conda install abinit -c gmatteo

.. _task_manager:

--------------------------------
@@ -243,7 +249,7 @@ to have a summary with the status of the different tasks and::

    $ abirun.py flow_si_ebands deps

to print the dependencies of the tasks in textual format.

.. code-block:: console
@@ -268,7 +274,7 @@ There are two commands that can be used to launch tasks: ``single`` and ``rapid``

The ``single`` command executes the first task in the flow that is in the ``READY`` state, that is, a task
whose dependencies have been fulfilled.
``rapid``, on the other hand, submits **all tasks** of the flow that are in the ``READY`` state.
Let's try to run the flow with the ``rapid`` command...

.. code-block:: console
@@ -381,7 +387,7 @@ At this point, AbiPy starts to look at the output files produced by the task to

When the first task completes, the status of the second task is automatically changed to ``READY``,
the ``irdden`` input variable is added to the input file of the second task, and a symbolic link to
the ``DEN`` file produced by ``w0/t0`` is created in the ``indata`` directory of ``w0/t1``.
Another auto-parallel run is executed for the NSCF calculation and the second task is finally submitted.

The command line interface is very flexible and sometimes it's the only tool available.
However, there are cases in which we would like to have a global view of what's happening.
@@ -487,11 +493,11 @@ You should see the following output on the terminal

    PyFlowScheduler, Pid: 72038
    Scheduler options: {'seconds': 10, 'hours': 0, 'weeks': 0, 'minutes': 0, 'days': 0}

``Pid`` is the process identifier associated with the scheduler (also saved in the ``_PyFlowScheduler.pid`` file).
We will see that the scheduler pid is extremely important when we start to run large flows on clusters.

.. IMPORTANT::

    A ``_PyFlowScheduler.pid`` file in ``FLOWDIR`` means that there's a scheduler running the flow.
    Note that there must be only one scheduler associated to a given flow.

As you can easily understand, the scheduler brings additional power to the AbiPy flow because
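The pid-file logic described here can be sketched in a few lines of Python. This is an illustrative sketch, not the AbiPy API: the helper name and the assumption that the file contains only the process id are mine.

```python
import os

def scheduler_is_running(flowdir):
    """Return the scheduler pid if a _PyFlowScheduler.pid file exists in
    flowdir and that process is still alive, else None.
    Hypothetical helper for illustration only."""
    pid_path = os.path.join(flowdir, "_PyFlowScheduler.pid")
    if not os.path.exists(pid_path):
        return None
    with open(pid_path) as fh:
        pid = int(fh.read().strip())  # assume the file holds just the pid
    try:
        os.kill(pid, 0)  # signal 0 checks existence without killing
    except OSError:
        return None
    return pid
```

A check like this is what makes a single-scheduler-per-flow policy enforceable: before starting a new scheduler one can refuse to run if a live pid is found.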
@@ -517,92 +523,127 @@ Configuring AbiPy on a cluster

------------------------------

In this section we discuss how to configure the manager to run flows on a cluster.
The configuration depends on the specific queue management system (Slurm, PBS, etc.), hence
we assume that you are already familiar with job submissions and you know the options
that must be specified in the submission script in order to have your job accepted
and executed by the management system (username, name of the queue, memory ...).

Let's assume that our computing center uses ``Slurm`` and our jobs must be submitted to the ``default_queue`` partition.
Hopefully, the system administrator of our cluster already provides an ``Abinit module`` that can be loaded
directly with ``module load`` before invoking the code.
To make things a little bit more difficult, however, we assume that we had to compile our own version of Abinit
inside the build directory ``${HOME}/git_repos/abinit/build_impi`` using the following two modules
already installed by the system administrator::

    compiler/intel/composerxe/2013_sp1.1.106
    intelmpi

In this case, we have to be careful with the configuration of our environment because the Slurm submission
script should load the modules and modify our ``$PATH`` so that our version of Abinit can be found.
A ``manager.yml`` with a single ``qadapter`` looks like:
.. code-block:: yaml

    qadapters:
    - priority: 1

      queue:
        qtype: slurm
        qname: default_queue
        qparams: # Slurm options added to job.sh
          mail_type: FAIL
          mail_user: john@doe

      job:
        modules:
        - compiler/intel/composerxe/2013_sp1.1.106
        - intelmpi
        shell_env:
          PATH: ${HOME}/git_repos/abinit/build_impi/src/98_main:$PATH
        pre_run:
        - ulimit -s unlimited
        mpi_runner: mpirun

      limits:
        timelimit: 0:20:0
        max_cores: 16
        min_mem_per_proc: 1Gb

      hardware:
        num_nodes: 120
        sockets_per_node: 2
        cores_per_socket: 8
        mem_per_node: 64Gb
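Once parsed, a ``manager.yml`` file is just a nested dictionary. The following sketch shows how the four sections of a ``qadapter`` could be sanity-checked before use; the helper and the required-key choices are illustrative assumptions, not AbiPy's actual validation code.

```python
REQUIRED_QUEUE_KEYS = {"qtype", "qname"}

def validate_qadapter(qad):
    """Return a list of problems found in a qadapter dict (empty = OK).
    Minimal illustrative sketch; real AbiPy performs stricter validation."""
    problems = []
    for section in ("queue", "job", "limits", "hardware"):
        if section not in qad:
            problems.append("missing section: " + section)
    # The queue section must at least identify the manager and the queue name.
    queue = qad.get("queue", {})
    for key in sorted(REQUIRED_QUEUE_KEYS - set(queue)):
        problems.append("queue: missing " + key)
    return problems

# A dict mirroring the manager.yml structure used in this tutorial.
manager = {
    "qadapters": [{
        "priority": 1,
        "queue": {"qtype": "slurm", "qname": "default_queue"},
        "job": {"modules": ["intelmpi"], "mpi_runner": "mpirun"},
        "limits": {"timelimit": "0:20:0", "max_cores": 16},
        "hardware": {"num_nodes": 120, "mem_per_node": "64Gb"},
    }]
}
```

Catching a malformed configuration this way is cheaper than discovering it when the first submission script fails on the cluster.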
.. TIP::

    ``abirun.py FLOWDIR doc_manager script`` prints to screen the submission script
    that will be generated by AbiPy at runtime.

Let's discuss the different options in more detail, starting from the ``queue`` section:
``qtype``
    String specifying the resource manager. This option tells AbiPy how to generate the submission
    script, submit it, kill jobs in the queue, and how to interpret the other options passed by the user.

``qname``
    Name of the submission queue (string, MANDATORY).

``qparams``
    Dictionary with the parameters passed to the resource manager.
    We use the *normalized* version of the options, i.e. dashes in the official name of the parameter
    are replaced by underscores, e.g. ``--mail-type`` becomes ``mail_type``.
    For the list of supported options use the ``doc_manager`` command.
    Use ``qverbatim`` to pass additional options that are not included in the template.

Note that we are not specifying the number of cores in ``qparams`` because AbiPy will find an appropriate value
at run-time.

The ``job`` section is the most critical one because it defines how to configure the environment
before executing the application and how to run the code.
The ``modules`` entry specifies the list of modules to load, while ``shell_env`` allows us to modify the
``$PATH`` environment variable so that the OS can find our Abinit executable.
We also increase the size of the stack with ``ulimit`` before running the code, and we run Abinit
with the ``mpirun`` provided by the modules.

The ``limits`` section defines the constraints that must be fulfilled in order to run on this queue,
while ``hardware`` is a dictionary with info on the hardware available on this queue.
Every job will have a ``timelimit`` of 20 minutes, cannot use more than ``max_cores`` cores,
and the first job submission will request 1 Gb of memory.
Note that the actual number of cores will be determined at runtime by calling Abinit in ``autoparal`` mode
to get all parallel configurations up to ``max_cores``.
If the job is killed due to insufficient memory, AbiPy will resubmit the task with increased resources
and it will stop when it reaches the maximum amount given by ``mem_per_node``.

Note that there are more advanced options supported by ``limits``, and other options
will be added as time goes by.

To get the complete list of options supported by the Slurm ``qadapter`` use:

.. command-output:: abirun.py . doc_manager slurm

.. IMPORTANT::

    If you need to cancel all tasks that have been submitted to the resource manager, use::

        $ abirun.py FLOWDIR cancel

    Note that the script will ask for confirmation before killing all the jobs belonging to the flow.
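The normalization rule for ``qparams`` keys (drop leading dashes, turn the remaining dashes into underscores) fits in one line of Python. The helper name is hypothetical, for illustration only:

```python
def normalize_qparam(option):
    """Convert a resource-manager flag such as '--mail-type' to the
    normalized key used in qparams ('mail_type'):
    strip leading dashes, then replace '-' with '_'."""
    return option.lstrip("-").replace("-", "_")
```

For example, Slurm's ``--ntasks-per-node`` would be written as ``ntasks_per_node`` in ``qparams``.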
Once you have a ``manager.yml`` properly configured for your cluster, you can start
to use the scheduler to automate job submission.
Very likely your flows will require hours or even days to complete and, in principle,
you should maintain an active connection to the machine in order to keep your scheduler alive
(if your session expires, all subprocesses launched within your terminal,
including the python scheduler, will be automatically killed).
Fortunately there is a standard Unix tool called ``nohup`` that comes to our rescue.

For long-running jobs, we strongly suggest to start the scheduler with::

    $ nohup abirun.py FLOWDIR scheduler > sched.stdout 2> sched.stderr &

This command executes the scheduler in background and redirects the ``stdout`` and ``stderr``
to ``sched.stdout`` and ``sched.stderr``, respectively.
The process identifier of the scheduler is saved in the ``_PyFlowScheduler.pid`` file inside ``FLOWDIR``
and this file is removed automatically when the scheduler completes its execution.
@@ -744,44 +785,53 @@ by the event handlers.

New error handlers will be added in the new versions of Abipy/Abinit.
Please, let us know if you need handlers for errors commonly occurring in your calculations.

.. _task_policy:

-----------
TaskPolicy
-----------

At this point, you may wonder why we need to specify all these parameters in the configuration file.
The reason is that, before submitting a job to a resource manager, AbiPy will use the autoparal
feature of ABINIT to get all the possible parallel configurations with ``ncpus <= max_cores``.
On the basis of these results, AbiPy selects the "optimal" one, and changes the ABINIT input file
and the submission script accordingly
(this is a very useful feature, especially for calculations done with ``paral_kgb=1`` that require
the specification of ``npkpt``, ``npfft``, ``npband``, etc.).
If more than one ``QueueAdapter`` is specified, AbiPy will first compute all the possible
configurations and then select the "optimal" ``QueueAdapter`` according to some kind of policy.

In some cases, you may want to enforce some constraint on the "optimal" configuration.
For example, you may want to select only those configurations whose parallel efficiency is greater than 0.7
and whose number of MPI nodes is divisible by 4.
One can easily enforce this constraint via the ``condition`` dictionary, whose syntax is similar to
the one used in ``mongodb``:

.. code-block:: yaml

    policy:
        autoparal: 1
        max_ncpus: 10
        condition: {$and: [ {"efficiency": {$gt: 0.7}}, {"tot_ncpus": {$divisible: 4}} ]}

The parallel efficiency is defined as $\epsilon = \dfrac{T_1}{T_N * N}$ where $N$ is the number
of MPI processes and $T_j$ is the wall time needed to complete the calculation with $j$ MPI processes.
For a perfect scaling implementation $\epsilon$ is equal to one.
The parallel speedup with $N$ processors is given by $S = T_1 / T_N$.
Note that ``autoparal = 1`` will automatically change your ``job.sh`` script as well as the input file
so that we can run the job in parallel with the optimal configuration required by the user.
For example, you can use ``paral_kgb = 1`` in GS calculations and AbiPy will automatically set the values
of ``npband``, ``npfft``, ``npkpt`` ... for you!
Note that if no configuration fulfills the given condition, AbiPy will use the optimal configuration
that leads to the highest parallel speedup (not necessarily the most efficient one).

``policy``
    This section governs the automatic parallelization of the run: in this case AbiPy will use
    the ``autoparal`` capabilities of Abinit to determine an optimal configuration with
    **maximum** ``max_ncpus`` MPI nodes. Setting ``autoparal`` to 0 disables the automatic parallelization.
    Other values of ``autoparal`` are not supported.

.. _flow_troubeshooting:
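The definitions of parallel efficiency and speedup, and the effect of a ``condition`` filter like the one in the ``policy`` example, can be checked with a short Python sketch. The timings below are made up for illustration:

```python
def efficiency(t1, tn, n):
    """Parallel efficiency: eps = T1 / (TN * N)."""
    return t1 / (tn * n)

def speedup(t1, tn):
    """Parallel speedup: S = T1 / TN."""
    return t1 / tn

# Hypothetical autoparal-like results: (tot_ncpus, wall time in seconds).
t1 = 100.0
confs = [(1, 100.0), (2, 55.0), (4, 30.0), (8, 20.0)]

# Keep configurations with efficiency > 0.7 and tot_ncpus divisible by 4,
# mimicking the $gt / $divisible condition dictionary.
ok = [(n, t) for (n, t) in confs
      if efficiency(t1, t, n) > 0.7 and n % 4 == 0]
# With these timings only the 4-core configuration survives the filter.
```

Note how the 8-core run has the highest speedup (5x) but an efficiency of only 0.625, so the filter rejects it; this is exactly the trade-off the ``condition`` dictionary lets you control.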
@@ -13,3 +13,4 @@ with-doctest=1

# Use `pip install nose-exclude`
# if nosetest fails with error: Error reading config file 'setup.cfg': no such option 'exclude-dir'
exclude-dir=abipy/gui
exclude-dir=abipy/gw