# UtilXlib

This library provides basic facilities such as timing, tracing, and optimized memory access, as well as an abstraction layer for the MPI subroutines.

The following preprocessor directives can be used to enable or disable optional features (a minimal sketch of how they are typically used at build time follows the list):

* `__MPI`: activates MPI support.
* `__TRACE`: activates verbose output for debugging purposes.
* `__CUDA`: activates CUDA Fortran based interfaces.
* `__GPU_MPI`: uses CUDA-aware MPI calls instead of the standard synchronize-send-update method (experimental).
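
As a rough illustration, these macros are normally passed to the preprocessor at build time (for example through the `DFLAGS` variable of a Quantum ESPRESSO `make.inc`; the exact mechanism depends on your build setup) and tested in the sources with standard CPP guards. The sketch below is not taken from the library; it only shows the pattern that `__MPI` enables, with a wrapper that compiles to an MPI call or to a no-op:

```fortran
! Illustrative sketch only (not library code): the same wrapper becomes an
! MPI broadcast when compiled with -D__MPI and a no-op otherwise.
subroutine demo_bcast_integer( val, root, comm )
#if defined(__MPI)
  use mpi
#endif
  implicit none
  integer, intent(inout) :: val
  integer, intent(in)    :: root, comm
#if defined(__MPI)
  integer :: ierr
  call MPI_Bcast( val, 1, MPI_INTEGER, root, comm, ierr )
#endif
  ! without __MPI there is nothing to do: the single task already holds val
end subroutine demo_bcast_integer
```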

## Usage of wrapper interfaces for MPI

This library offers a number of interfaces that abstract the MPI APIs and optionally relax the dependency on an MPI library.

The `mp_*` interfaces provided by the library can only be called after the initialization performed by the subroutine `mp_start` and before the finalization performed by `mp_end`. The exceptions are the subroutines `mp_count_nodes`, `mp_type_create_column_section` and `mp_type_free`, which can also be called outside this window.
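
A minimal usage sketch is given below. It assumes the `mp_start( nproc, mpime, comm )` and `mp_end( comm )` argument lists used elsewhere in Quantum ESPRESSO and that MPI itself (if enabled) has been initialized by the hosting code; check `mp.f90` for the exact interfaces before relying on it:

```fortran
! Hypothetical sketch of the mp_start/mp_end call window; argument lists are
! assumed from typical Quantum ESPRESSO usage and should be checked in mp.f90.
program mp_window_demo
  use util_param, only : DP
  use mp,         only : mp_start, mp_end, mp_bcast, mp_sum
  use parallel_include                   ! MPI definitions (or serial stubs)
  implicit none
  integer  :: nproc, mpime, comm
  real(DP) :: x(4)

  comm = MPI_COMM_WORLD                  ! assumes __MPI; use any valid communicator
  call mp_start( nproc, mpime, comm )    ! mp_* calls are legal from here on

  x = real( mpime, DP )
  call mp_bcast( x, 0, comm )            ! broadcast from the root task
  call mp_sum( x, comm )                 ! in-place reduction over comm

  call mp_end( comm )                    ! no mp_* calls after this point
end program mp_window_demo
```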

If CUDA Fortran support is enabled, almost all interfaces also accept input data declared with the `device` attribute. Note, however, that CUDA Fortran support should be considered experimental.
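
For example, in a `__CUDA` build a device-resident buffer can be handed directly to the wrappers. The sketch below is illustrative (the subroutine and variable names are not part of the library) and assumes the host-side `mp_sum( msg, comm )` interface carries over to device arguments, as described above:

```fortran
! Illustrative sketch: passing a buffer declared with the device attribute
! to mp_sum in a __CUDA build; names other than mp_sum are hypothetical.
subroutine reduce_on_device( n, comm )
  use cudafor
  use util_param, only : DP
  use mp,         only : mp_sum
  implicit none
  integer, intent(in) :: n, comm
  real(DP), device, allocatable :: buf_d(:)

  allocate( buf_d(n) )
  buf_d = 1.0_DP
  call mp_sum( buf_d, comm )   ! same interface as the host version
  deallocate( buf_d )
end subroutine reduce_on_device
```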

## CUDA-specific notes

All calls to the message-passing interfaces are synchronous with respect to both MPI and CUDA streams. The code synchronizes the device before starting the communication, even in cases where the communication could be avoided (for example, in the serial version). A different behaviour may be observed when the default-stream synchronization behaviour is overridden by the user (see `cudaStreamCreateWithFlags`).
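
In practice this means that work launched on a user-created non-blocking stream is not covered by the library's default-stream synchronization and must be synchronized explicitly before communicating. The sketch below is illustrative (subroutine and variable names are hypothetical) and uses the CUDA Fortran `cudafor` API:

```fortran
! Illustrative sketch: results produced on a non-blocking stream must be
! synchronized by the caller before invoking the communication wrappers,
! because the library only synchronizes with respect to the default stream.
subroutine sum_from_stream( buf_d, n, comm )
  use cudafor
  use util_param, only : DP
  use mp,         only : mp_sum
  implicit none
  integer,  intent(in)            :: n, comm
  real(DP), device, intent(inout) :: buf_d(n)
  integer(kind=cuda_stream_kind)  :: stream
  integer                         :: istat

  istat = cudaStreamCreateWithFlags( stream, cudaStreamNonBlocking )
  ! ... launch kernels on `stream` that write buf_d ...
  istat = cudaStreamSynchronize( stream )   ! make buf_d ready for communication
  call mp_sum( buf_d, comm )
  istat = cudaStreamDestroy( stream )
end subroutine sum_from_stream
```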

Be careful when using CUDA-aware MPI: some implementations are not complete. The library does not check for the CUDA-aware MPI APIs during initialization, but it may report failure codes during execution. If you encounter problems after adding the `__GPU_MPI` flag, it may be that the MPI library does not support some of the CUDA-aware APIs.

## Known Issues

Owing to the use of the `source` option in data allocations, PGI versions older than 17.10 may fail with arrays whose initial index is different from 1.
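
The affected pattern is a sourced allocation whose source array has a lower bound other than 1, roughly as in the following illustrative snippet (not taken from the library):

```fortran
! Illustrative only: sourced allocation from an array with lower bound 0.
program source_bounds_demo
  implicit none
  real(8), allocatable :: a(:), b(:)
  allocate( a(0:9) )             ! lower bound 0, not 1
  a = 1.0d0
  allocate( b, source = a )      ! b should inherit the bounds (0:9)
  print *, lbound(b), ubound(b)  ! expected: 0 9; older PGI could mishandle this
end program source_bounds_demo
```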

## Testing

Partial unit testing is available in the tests sub-directory. See the README in that directory for further information.