\documentclass[12pt,a4paper]{article}
\def\version{6.6}
\def\QE{{\sc Quantum ESPRESSO}}
\def\qe{QE}
\textwidth = 17cm
\textheight = 25cm
\topmargin =-1 cm
\oddsidemargin = 0 cm
\usepackage{html}
\usepackage[colorlinks=true,urlcolor=blue,unicode]{hyperref}
% BEWARE: don't revert from graphicx for epsfig, because latex2html
% doesn't handle epsfig commands !!!
\usepackage{graphicx}
\def\configure{\texttt{configure}}
\def\configurac{\texttt{configure.ac}}
\def\autoconf{\texttt{autoconf}}
\def\make.inc{\texttt{make.inc}}
\def\Makefile{\texttt{Makefile}}
\def\qeImage{quantum_espresso}
\begin{document}
\author{}
\date{}
\title{
\includegraphics[width=5cm]{\qeImage} \\
% title
\Huge Developer's Manual for \\ \QE\ (v.\version)
}
\maketitle
\tableofcontents
\newpage
\section{Introduction}
{\em Important notice: due to the lack of time and of manpower,
this manual does not cover all the topics it should and
may occasionally contain outdated or incorrect information.}
\subsection{Who should read (and who should {\em write}) this guide}
The intended audience of this guide is everybody who wants to:
\begin{itemize}
\item know how \QE\ (from now on, \qe) works internally;
\item modify/customize/add/extend/improve/clean up \qe;
\item know how to read and use data produced by \qe.
\end{itemize}
The same category of people should also {\em write} this guide, of course.
\subsection{Who may read this guide but will not necessarily profit from it}
People who want to know about the capabilities of \qe,
or who want just to use it, should read the User Guide
instead of (or in addition to) this guide. In addition
to the general User Guide, there are also package-specific
guides.
People who want to know about the methods or the physics
behind \qe\ should read first the relevant
literature (some pointers in the User Guide).
\subsection{How to contribute to \qe\ as a user}
You can contribute to a better \qe, even as an ordinary user, by:
\begin{itemize}
\item Answering other people's questions on the users' mailing list
(correct answers are strongly preferred to wrong ones).
\item Porting to new/unsupported architectures or configurations: see
Sect. \ref{SubSec:Inst}, "Installation mechanism". You should
not need to add new preprocessing flags, but if you do,
see Sect. \ref{SubSec:CPP}, "Preprocessing".
\item Pointing out bugs in the software and in the documentation
(reports of real bugs are strongly preferred to reports of
nonexistent bugs). See Sect. \ref{SubSec:Bugs}, "Guidelines
for reporting bugs".
\item Improving the documentation (generic complaints or suggestions
that "there should be this and that" do not qualify as improvements).
\item Suggesting changes: contact the developers by submitting an ``Issue'' on
GitLab\footnote{\tt git.quantum-espresso.org}, or by posting to the
developers' mailing list\footnote{\tt developers@lists.quantum-espresso.org}.
Unless there are technical reasons not to follow your suggestion,
we will try to make you happy. Note however that suggestions requiring
a significant amount of work are more welcome if accompanied by
implementation or by a promise of future implementation (fulfilled
promises are strongly preferred to forgotten ones).
\item Adding new features to the code: see Sect.\ref{SubSec:BeDev},
"How to become a developer", in particular Sect.\ref{SubSec:Contrib},
"Contributing new developments".
\end{itemize}
\newpage
\section{\qe\ as a distribution}
\qe\ is not a monolithic code, but a {\em distribution}
(an integrated suite) of ``packages'', with
varying degrees of integration, that can be installed on demand,
or sometimes independently. The core distribution includes:
\begin{itemize}
\item scripts, installation tools, libraries, common source files;
\item basic packages
\begin{itemize}
\item \texttt{PWscf}: self-consistent calculations, structural optimization,
molecular dynamics on the ground state;
\item \texttt{CP}: Car-Parrinello molecular dynamics;
\item \texttt{PostProc}: data analysis and plotting (requires \texttt{PWscf}).
\end{itemize}
\item additional packages, using routines from the basic packages,
developed and packaged together with the core distribution:
\begin{itemize}
\item \texttt{atomic}: pseudopotential generation
\item \texttt{PHonon}: Density-Functional Perturbation Theory
\item \texttt{NEB}: reaction pathways and energy barriers
\item \texttt{PWCOND}: ballistic conductance
\item \texttt{XSPECTRA}: calculation of X-ray spectra
\item \texttt{TDDFPT}: Time-dependent DFPT
\item \texttt{EPW}: electron-phonon coupling coefficients
\item \texttt{GWL}: GW and BSE using Lanczos chains
\end{itemize}
\end{itemize}
There are also external (separately developed) packages that make use
of \qe\ routines:
\begin{itemize}
\item \texttt{GIPAW}: NMR coefficients and chemical shifts,
\item \texttt{West}: Many-body perturbation corrections to DFT.
\end{itemize}
or that just read data produced by \qe\ but do not need it to work:
\begin{itemize}
\item \texttt{Yambo}: Many-Body Perturbation Theory
\item \texttt{Wannier90}: Wannier Functions utilities
\item \texttt{WanT}: Transport with Wannier Functions
\end{itemize}
Most of them can be automatically downloaded and installed from the core
distribution using \texttt{make}.
Finally there are {\em plugins}: these modify \qe\ packages, adding
new functionalities. Currently the following plugins are available:
\begin{itemize}
\item \texttt{Plumed}, v.1.3 only, for metadynamics;
\item \texttt{Environ}, for calculations with a solvent.
\end{itemize}
\section{How to become a developer}
\label{SubSec:BeDev}
If you want to get involved as a developer and contribute serious
or nontrivial stuff (or even simple and trivial stuff), you should
first of all register on GitLab.com, following the instructions
(you may even use other pre-existing accounts).
All \qe\ developers are {\em strongly} invited to subscribe to the
developers' mailing list using the link
in \texttt{https://lists.quantum-espresso.org/mailman/listinfo/developers}.
Those who don't, i) miss the opportunity to follow what is going on,
ii) lose the right to complain if something has gone into a direction
they don't like.
{\bf Important notice:} the development model of \qe\ has undergone
significant changes after release 6.2.1. The development has moved to
GitLab. The official git repository is visible at
\texttt{git.quantum-espresso.org} and can be downloaded as follows:
\begin{verbatim}
git clone https://gitlab.com/QEF/q-e.git
\end{verbatim}
There is also a GitHub mirror \texttt{github.com/QEF/q-e}, only for
``pull'' (i.e., read) operations, automatically kept aligned with
the official GitLab repository. The GitHub repository can be downloaded
as follows: \texttt{git clone https://github.com/QEF/q-e.git}.
See Sect.\ref{Sec:git}, ``Using git'', and file \texttt{CONTRIBUTING.md},
for instructions on how to use \texttt{git}.
\subsection{Contributing new developments}
\label{SubSec:Contrib}
It is possible to contribute:
\begin{itemize}
\item a small, or large, piece of code to an existing package; or
\item a new package that uses \qe\ as a library; or
\item a ``plugin'' that modifies \qe, adding a new functionality; or
\item a new ``external'' package that just reads data files produced by \qe.
\end{itemize}
The ideal procedure depends upon the kind of project you have in mind.
As a rule: if you plan to make a public release of your work, you
should always keep your work aligned to the current development version
of \qe. This is especially important if your project
\begin{itemize}
\item involves major or extensive, or even small but critical or numerous,
changes to existing \qe\ routines,
\item makes usage of existing (modified or unmodified) \qe\ routines.
\end{itemize}
Modifying the latest stable version is not a good idea. Modifying
an {\em old} stable version is an {\em even worse idea}. New code
based on old versions will invariably be obsolete after a few months,
{\em very} obsolete after a few years. Experience shows that new projects
may take a long time before reaching a public release, and that the
major stumbling block is the alignment to the newer \qe\ distribution.
The sole exception is when your changes are either relatively small, or
localized to a small part of \qe, or they are quite independent anyway
from the rest of \qe. In that case, you may just send a patch or the
modified routine(s) to an expert developer who will review it and
take the appropriate steps. The preferred path is however a ``merge request''
on GitLab (see Sect.\ref{Sec:git}).
{\em Important:} keep your modified copy of the distribution aligned to the
repository. Don't work for years, or even for months, without keeping an eye
on what is going on in the repository. This is especially true for projects
that modify or use \qe\ code and routines. Update your copy frequently,
verify if changes made meanwhile by other developers conflict with your
changes.
If your project just uses the \qe\ installation procedure and/or data files,
it is less likely to run into problems, since major incompatible changes are
quite rare. You may still need to verify from time to time that everything keeps
working, though.
\subsection{Hints, Caveats, Do's and Dont's for developers}
\begin{itemize}
\item Before doing anything, inquire whether it is already there,
or under development. In particular, check (and update) the "Road Map"
page \texttt{www.quantum-espresso.org/road-map}, and/or send a message to
\texttt{developers@lists.quantum-espresso.org}.
\item Before starting writing code, inquire whether you can reuse
code that is already available in the distribution. Avoid redundancy:
{\em the only bug-free software line is the one that doesn't exist}
(citation adapted from Henry Ford).
\item When you make some changes:
\begin{itemize}
\item Check that you are not spoiling other people's work. In particular,
search the distribution for codes using the routine or module you are
modifying, and update its usage or its calling arguments everywhere.
Use the commit message to notify all developers if you introduce any
``dangerous'' change (i.e. susceptible to break some features or
packages, including external packages using \qe).
\item Do not forget that your changes must work on many different
combinations of hardware and software, in both serial and parallel execution.
\item Do not forget that your changes must work for a wide variety of
different cases: if you implement something that works only in some
selected cases, that's fine, as long as the code stops (or at least
issues a warning) in all other cases. There is something worse than
no results: wrong results.
\item Do not forget that your changes must work on systems of wildly
different computational complexity: a piece of code that works fine for
crystal silicon may gobble a disproportionate amount of time and/or
memory in a 1000-atom cell.
\end{itemize}
\item Document your contributions:
\begin{itemize}
\item If you modify what a code can do, or introduce
incompatibilities with previous versions (e.g. old data file
no longer readable, old input no longer valid), {\em please}
report it in file \texttt{Doc/release-notes}.
\item If you add/modify/remove input variables, document
it in the appropriate \texttt{INPUT\_*.def} file;
update tests and examples accordingly.
\item All newly introduced features or variables must be
accompanied by an example or a test or both (either a
new one or a modified existing test or example).
\end{itemize}
\item Please do not include files (of any kind, including
pseudopotential files) with MS-DOS \^{}M characters or
tabulators \^{}I. If needed, remove them, e.g. with
\texttt{cat file | tr -d "\^{}V\^{}M"}.
\item When you modify the program sources, run the
\texttt{install/makedeps.sh} script, or type \texttt{make depend}
to update files \texttt{make.depend} in the various
subdirectories. These files are in the repository:
if modified, they should be saved there.
\end{itemize}
\subsection{Guidelines for reporting bugs}
\label{SubSec:Bugs}
\begin{itemize}
\item Before deciding that a problem is due to a bug in the codes,
verify if it is reproducible on different machines/architectures/phases
of the moon: erratic or irreproducible problems, especially in parallel
execution, are often an indication of buggy compilers or libraries.
\item Bug reports should preferably be filed as "Issues" on GitLab:
\texttt{GitLab.com/QEF/q-e/Issues},
or reported to the developers' mailing list:
\texttt{developers@lists.quantum-espresso.org}.
\item Bug reports should include enough information to be reproduced:
the error message alone is seldom a sufficient piece of information.
Typically, one should report
\begin{itemize}
\item QE version, hardware/software combination(s) for which
the problem arises (most important: compiler information)
\item whether it happens in serial or parallel execution or both
(if in parallel only, how executed),
\item an output for a test case showing the presumed bug
\item all the needed info and data to re-run the test case showing
the bug
\end{itemize}
The provided input should be simple and quick to execute.
\item If a bug is found in a stable (released) version of \qe,
he/she who fixes it must report it in the \texttt{Doc/release-notes}
file.
\end{itemize}
\section{Stable releases and development cycle}
Stable releases are usually labelled as $N.M.p$, where $N$=major,
$M$=minor, $p$=bugfix (the latter may occasionally be absent).
The logic goes more or less as follows:
\begin{itemize}
\item {\em Major}: when something really important changes, e.g.
\begin{enumerate}
\item[v.1] First public release of PWscf
\item[v.2] Conversion from f77 to f90
\item[v.3] Merge with the CP and FPMD codes (beginning of \qe)
\item[v.4] New XML-based data file format
\item[v.5] Major package and directory reorganization
\item[v.6] New I/O once again
\end{enumerate}
(the above numbers are a slightly idealized version of how things have
gone until now)
\item {\em Minor}: when some important new functionality is being added
\item {\em Bugfix}: only bug fixes; occasionally, minor new functionalities
that don't break any existing ones are allowed to sneak into a bugfix release.
\end{itemize}
Since Release 6.2.1, releases are distributed as ``Tags'' on GitLab.
V. 6.2.1 is also the last release distributed as ``tarballs'' on qe-forge.org.
The automatic downloading of packages is implemented in file
\texttt{install/plugins\_makefile} and configured in file
\texttt{install/plugins\_list}. For independently released packages,
it is sufficient to update links.
\paragraph{Preparing for a release}
When the release date approaches, development of new stuff is temporarily
stopped: nothing new or potentially ``dangerous'' is added, and all
attention is dedicated to fix bugs and to stabilize the distribution.
This manual and the user manual have to be updated. After the release
is tagged, the documentation produced by \texttt{make doc} must be copied
to directory:\\
\texttt{quantumespresso@qe.safevps.it:/storage/vhosts/quantum-espresso.org/htdocs/Doc}.
\section{Structure of the distribution}
The directory structure of \qe\ reflects its organization
into packages. Each package is stored into a specific subdirectory.
In addition, there is a set of directories, common to all packages,
containing common code,
libraries, installation utilities, general documentation.
The most important files and directories in the root (\texttt{q-e/})
directory are:
\begin{itemize}
\item {\em Installation} (i.e. compilation and linking):\\
\texttt{install/}, \texttt{dev-tools/}, \texttt{archive/},
\configure, \make.inc
\item {\em Testing} (running tests and examples):\\
\texttt{pseudo/}, \texttt{environment\_variables}, \texttt{test-suite/}
\item {\em General documentation} (not package-specific):\\
\texttt{Doc/}, \texttt{License}, \texttt{README.md}, \texttt{CONTRIBUTING.md}
\item {\em Libraries: FFT, Linear Algebra, Utility, Solvers, DFT-D3}:\\
\texttt{FFTXlib/}, \texttt{LAXlib/}, \texttt{UtilXlib/},
\texttt{KS\_Solvers/}, \texttt{dft-d3/}
\item {\em C and Fortran sources}:\\
\texttt{include/}, \texttt{clib/}, \texttt{Modules/}
\item {\em Utilities to call \qe\ programs from external codes}:\\
\texttt{COUPLE/}
\item {\em Linear-response specific modules}:\\
\texttt{LR\_Modules/}.
\end{itemize}
The core distribution also contains package-specific directories,
e.g., \texttt{PW/}, \texttt{PP/}, \texttt{CPV/}, for
\texttt{PWscf}, \texttt{PostProc}, \texttt{CP}, respectively.
The typical subdirectory structure of a directory containing a package
is
\begin{verbatim}
Makefile
examples/
Doc/
src/
...
\end{verbatim}
but some packages have a slightly different structure (e.g.,
\texttt{PHonon} has three directories for sources and none
is called \texttt{src/} ).
\subsection{Installation Mechanism}
\label{SubSec:Inst}
Let us review the files related to compilation and linking:
\begin{itemize}
\item[--] \texttt{install/}: documentation and utilities for compilation
and linking
\item[--] \configure: wrapper for \texttt{install/configure} script
\item[--] \make.inc: produced by \texttt{configure}, contains
machine-specific compilation and linking options
\item[--] \Makefile: contains dependencies and targets used by
command \texttt{make}.
\item[--] \texttt{include/}: files to be included into sources, to be
pre-processed.
\end{itemize}
\texttt{./configure} {\em options} cleans executables,
runs \texttt{install/configure}, produces file \make.inc.
See Sec.\ref{SubSec:conf} for some details on how to change the
behavior of \configure.
\texttt{make} {\em target} checks for dependencies, recursively goes
into subdirectories executing \texttt{make} again. The behavior of
\texttt{make} is thus
determined by many \Makefile's in the various directories. The
most important files are \Makefile's in the directories containing
sources, e.g. \texttt{Modules/Makefile}, \texttt{PW/src/Makefile}.
Dependencies of Fortran files are contained in \texttt{make.depend} files
in each source directory. These files {\em must be updated} if you change
the sources, running script \texttt{install/makedeps.sh} or using command
\texttt{make depend}.
\paragraph{make.inc}
This file is produced by \configure\ using the template in
\texttt{install/make.inc.in} and contains all system-specific
information on
\begin{itemize}
\item C and Fortran compilers name, pre-processing and compilation options
\item whether the Fortran compiler performs C-style preprocessing or not
(likely obsolete)
\item whether compiling for parallel or serial execution
\item available optimized mathematical libraries, libraries to be downloaded
\item Miscellaneous stuff
\end{itemize}
The \make.inc\ file is included in all \Makefile's
via the \texttt{include} directive. The best documentation for the
\make.inc\ file is the file itself. Note that if you want to make
permanent changes or to add more documentation to this file,
you have to modify the template file \texttt{install/make.inc.in}.
\paragraph{Makefile}
The top-level \Makefile\ contains the instructions to download,
unpack, compile and link what is required. Sample contents
(comments in italic):
\begin{verbatim}
include make.inc
\end{verbatim}
{\em Contains machine- and \qe-specific definitions}
\begin{verbatim}
default :
@echo 'to install, type at the shell prompt:'
...
\end{verbatim}
{\em If no target specified, ask for one, giving a list of possibilities}
\begin{verbatim}
pw : pwlibs
if test -d PW ; then \
( cd PW ; $(MAKE) TLDEPS= all || exit 1) ; fi
\end{verbatim}
{\em Target {\tt pw}: first check the list of dependencies ({\tt pwlibs} in this case),
do what is needed; then go into {\tt PW/} and give command
{\tt make all}. Note the use of {\tt exit 1}, which is required to forward
the exit status of the sub-directory make to this makefile, since the section
in parentheses is run in a subshell and the {\tt if / fi} block would otherwise
``hide'' its return status, so that ``make'' would continue in case of errors. }
See below for the meaning of TLDEPS.
\begin{verbatim}
gipaw : pwlibs
( cd install ; $(MAKE) -f plugins_makefile $@ || exit 1 )
\end{verbatim}
{\em Target {\tt gipaw}: do target {\tt pwlibs}, then go into directory
{\tt install/}, execute {\tt make gipaw} using {\tt plugins\_makefile}
as Makefile. This will check if GIPAW is there, download from the network
if not, compile and link it}
\begin{verbatim}
libblas :
cd install ; $(MAKE) -f extlibs_makefile $@
\end{verbatim}
{\em Target {\tt libblas}: this is an external library, that may or may
not be needed, depending upon what is written in {\tt make.inc}. If
needed, go into directory {\tt install/} where {\tt make libblas} using
{\tt extlibs\_makefile} as Makefile will check if BLAS are there, download
from the network if not,
compile and build the library}
\paragraph{PW/Makefile}
Second-level \Makefile\ contains only targets related to a given
subdirectory or package. Sample contents:
\begin{verbatim}
sinclude ../make.inc
default : all
all: pw pwtools
pw:
( cd src ; $(MAKE) all || exit 1 )
pwtools: pw
( cd tools ; $(MAKE) all || exit 1 )
...
\end{verbatim}
{\em Target {\tt pw}: go into {\tt src/} and (apart
from \texttt{make} wizardry) give command {\tt make all}. It is important
that {\tt pwtools} explicitly depends upon {\tt pw}, or else this
makefile will break when calling parallel make ({\tt make -j N}).
Other targets are quite similar: go into a subdirectory, e.g.
{\tt Doc/}, and '{\tt make} something', e.g. {\tt make clean}.}
\paragraph{PW/src/Makefile}
The most important and most complex Makefile is the one in the
source directory. It is also the one you need to modify if you
add something.
\begin{verbatim}
include ../../make.inc
\end{verbatim}
{\em Contains machine- and \qe-specific definitions}
\begin{verbatim}
MODFLAGS= $(BASEMOD_FLAGS) \
$(MOD_FLAG)../../dft-d3/
\end{verbatim}
{\em Locations of the needed Fortran modules;
\texttt{BASEMOD\_FLAGS} and \texttt{MOD\_FLAG} are defined in \texttt{make.inc}}
\begin{verbatim}
PWOBJS = \
pwscf.o
\end{verbatim}
{\em Object file containing main program (this is actually redundant)}
\begin{verbatim}
PWLIBS = \
a2fmod.o \
...
wannier_enrg.o
\end{verbatim}
{\em List of objects - add new objects here, or delete from this list. Do not
forget the backslash! It ensures continuation of the line}
\begin{verbatim}
QEMODS=../../Modules/libqemod.a \
../../KS_Solvers/libks_solvers.a \
...
\end{verbatim}
{\em F95 module objects needed for compiling and linking}
\begin{verbatim}
TLDEPS=bindir libs mods libks_solvers dftd3
\end{verbatim}
{\em TLDEPS=Top-Level DEPendencieS: a machinery to ensure proper
compilation with correct dependencies also if compiling from inside
a package directory and not from top level. In the top-level Makefile,
TLDEPS is set to an empty string (TLDEPS=) and transmitted to Makefiles
in the subdirectories. These contain a target "tldeps" that does nothing
if TLDEPS is empty (i.e. executing from the top level); it goes to the
top-level directory and executes TLDEPS otherwise. TLDEPS in subdirectory
Makefiles must be set to the same dependencies as those of the intended
target (e.g. pw.x in the following example) in the top-level Makefile.
}
\begin{verbatim}
all : tldeps pw.x ...
\end{verbatim}
{\em Targets that will be built - add new executables here}
\begin{verbatim}
pw.x : $(PWOBJS) libpw.a $(LIBOBJS) $(QEMODS)
$(LD) $(LDFLAGS) -o $@ \
$(PWOBJS) libpw.a $(QEMODS) $(LIBOBJS) $(QELIBS)
- ( cd ../../bin; ln -fs ../PW/src/$@ . ; \
ln -fs ../PW/src/$@ dist.x ; ln -fs ../PW/src/$@ manypw.x ; )
\end{verbatim}
{\em Target {\tt pw.x} - produces executable with the same name.
It also produces a link to the executable in {\tt espresso/bin/}
and two more links with different names (and different functionalities).
Do not forget tabulators even if you do not see them!
All variables (introduced by \$) are either defined locally
in {\tt Makefile} or imported from {\tt make.inc}}
\begin{verbatim}
libpw.a : $(PWLIBS)
$(AR) $(ARFLAGS) $@ $?
$(RANLIB) $@
\end{verbatim}
{\em This builds the library libpw.a - again, do not forget tabulators}
\begin{verbatim}
tldeps:
test -n "$(TLDEPS)" && ( cd ../.. ; \
$(MAKE) $(TLDEPS) || exit 1 ) || :
\end{verbatim}
{\em second part of the TLDEPS machinery}
\begin{verbatim}
clean :
- /bin/rm -f *.x *.o *.a *~ *_tmp.f90 *.d *.mod *.i *.L
\end{verbatim}
{\em There should always be a ``clean'' target, removing all compiled (*.o)
or preprocessed (*\_tmp.f90) stuff - compiled F95 modules may have different
filenames: the last four items cover most cases}
\begin{verbatim}
include make.depend
\end{verbatim}
{\em Contains dependencies of objects upon other objects. Sample
content of file {\tt make.depend} (can be produced by {\tt install/makedeps.sh}):}
\begin{verbatim}
a2fmod.o : ../../Modules/io_files.o
a2fmod.o : ../../Modules/io_global.o
a2fmod.o : ../../Modules/ions_base.o
a2fmod.o : ../../Modules/kind.o
a2fmod.o : pwcom.o
a2fmod.o : start_k.o
a2fmod.o : symm_base.o
\end{verbatim}
{\em tells us that the listed objects must have been compiled
prior to compilation of a2fmod.o - {\tt make} will take care of this.}
{\bf BEWARE:} the Makefile system is in a stable but delicate equilibrium,
resulting from many years of experiments on many different machines.
Handle with care: what works for you may break other cases.
{\bf BEWARE 2:} parallel make (\texttt{make -j N}) works only if all
needed dependencies are in place. Note that parallel make does not
necessarily execute targets in the order they appear, e.g.: if you have
\begin{verbatim}
all : a b c d
\end{verbatim}
and "d" depends upon "a", that dependency must be explicitly present
in the Makefiles, or else parallel make will not work (or will work
only erratically).
\subsubsection{Preprocessing}
\label{SubSec:CPP}
Fortran source code contains preprocessing options with
the same syntax used by the C preprocessor \texttt{cpp}.
Most Fortran compilers understand preprocessing options \texttt{-D ...}
or some similar form. Some old compilers however do not support
or do not properly implement preprocessing. In this case the
preprocessing is done using \texttt{cpp}.
Normally, \configure\ takes care of this, by selecting the
appropriate rule \texttt{@f90rule@} below, in this section
of file \texttt{make.inc.in}:
\begin{verbatim}
.f90.o:
@f90rule@
\end{verbatim}
and producing the appropriate file \make.inc.
Preprocessing is useful to
\begin{itemize}
\item account for machine dependency in a unified source tree
\item distinguish between parallel and serial execution when they
follow different paths (i.e. there is a substantial difference between
serial execution and parallel execution on a single processor)
\item introduce experimental or special-purpose stuff
\end{itemize}
Use with care and {\em only when needed}. See file
\texttt{include/defs.README} for a list of preprocessing
options. Please {\em keep that list updated}.
The following capabilities of the C preprocessor are used:
\begin{itemize}
\item assign a value to a given symbol. For instance, the command
\texttt{\#define THIS that}, or the command-line option
\texttt{-DTHIS=that}, will replace all occurrences of \texttt{THIS}
with \texttt{that}.
\item include file (command \texttt{\#include})
\item expand macros (command \texttt{\#define})
\item execute conditional expressions such as
\begin{verbatim}
#if defined (__expression)
...code A...
#else
...code B...
#endif
\end{verbatim}
If \texttt{\_\_expression} is defined (with a \texttt{\#define} command
or from the command line with option \texttt{-D\_\_expression}),
then \texttt{...code A...} is sent to output; otherwise
\texttt{...code B...} is sent to output.
\end{itemize}
In order to make preprocessing options
easy to see, preprocessing variables should start with
two underscores, as \texttt{\_\_expression} in the above
example. Traditionally, ``preprocessed'' variables are also written in
uppercase. Please use \verb|#if defined (XXX)|, not
\verb|#if defined XXX| or \verb|#ifdef XXX|.
\subsubsection{\configure}
\label{SubSec:conf}
The \configure\ script in the root directory of \qe\ is a wrapper
that calls install/\configure. This is in turn generated, using the
\autoconf\ GNU utility (\texttt{http://www.gnu.org/software/autoconf/}),
from its source file \configurac\ and the m4 files \texttt{install/m4/*.m4}.
Don't edit install/\configure\ directly: whenever it gets regenerated, your
changes will be lost. Instead, in the \texttt{install/} directory, edit
\configurac\ and files \texttt{install/m4/*.m4}, then run \autoconf.
If you want to keep the old \configure, make a copy first.
GNU \autoconf\ is installed by default on most Unix/Linux systems. If
you don't have it on your system, you'll have to install it. You will
need \autoconf\ v.2.69 or later.
\configurac\ is a regular Bourne shell script (i.e., "sh" -- not csh!),
except that:
\begin{itemize}
\item[--] \texttt{AC\_QE\_SOMETHING} is a m4 macro, defined in file
\texttt{install/m4/x\_ac\_qe\_something.m4}. This is what you should
normally modify.
\item[--] all other capitalized names starting with \texttt{AC\_} are
\autoconf\ macros. Normally you shouldn't have to touch them.
\item[--] square brackets are normally removed by the macro processor.
If you need a square bracket (that should be very rare), you'll have
to write two.
\end{itemize}
You may refer to the GNU \autoconf\ Manual for more info.
\texttt{make.inc.in} is the source file for \make.inc, that
\configure\ generates: you might want to edit that file as well.
The generation procedure is as follows: if \configurac\ contains the macro
"AC\_SUBST(name)", then every occurrence of "@name@" in the source
file will be substituted with the value of the shell variable "name"
at the point where AC\_SUBST was called.
Similarly, \configure\texttt{.msg} is generated from \configure\texttt{.msg.in}: this
file is only used by \configure\ to print its final report, and isn't
needed for the compilation. We did it this way so that our
\configure\ may also be used by other projects, just by replacing the
\qe-specific \configure\texttt{.msg.in} by your own.
\configure\ writes a detailed log of its operation to \texttt{config.log}.
When any configuration step fails, you may look there for the relevant
error messages. Note that it is normal for some checks to fail.
\subsection{Libraries}
Subdirectory \texttt{clib/} contains libraries written in C
(\texttt{*.c}). To ensure that Fortran can call C routines,
use the Fortran 2003 intrinsic \texttt{iso\_c\_binding} module.
See \texttt{Modules/wrappers.f90} for inspiration and examples.
Reference documentation can be found for instance here:\\
{\tt
https://gcc.gnu.org/onlinedocs/gfortran/Interoperable-Subroutines-and-Functions.html}
% \subsection{Adding new directories or routines}
\section{Algorithms}
\subsection{$G$-vectors and plane waves}
$G$-vectors are generated in the \texttt{ggen} and \texttt{ggens}
subroutines of \texttt{Modules/recvec\_subs.f90}. You may also have
a look at routine \texttt{PW/src/n\_plane\_waves.f90} to understand
how things work. For the simple case of a single grid,
$G$-vectors are determined by the condition
\begin{equation}
{ \hbar^2 G^2 \over 2m_e} \le E^\rho_c = 4 E^w_c
\end{equation}
(without the k point; the code always uses Rydberg atomic units unless
otherwise specified). This is a sphere in reciprocal space centered
around (0,0,0).
Plane waves used in the expansion of orbitals at a specific k point
are determined by the condition
\begin{equation}
{ \hbar^2 ({\bf k}+{\bf G})^2 \over 2m_e} \le E^w_c
\end{equation}
In this case the $G$ vectors are a subset of the vectors used for the
density and form a sphere in reciprocal space shifted from the origin.
Depending on ${\bf k}$, a different set of $G$-vectors is included
in the sphere, and their number may also differ.
In order to manage the $G$-vectors for each $k$-point, you can use the arrays
\texttt{ngk} (number of $G$-vectors for each $k$-point) and \texttt{igk\_k}
(index of $G$ corresponding to a given index of $k+G$: basically, an index
that allows you to identify the $G$-vectors corresponding to a given $k$
and to order them).
For example the kinetic energy corresponding to a given $k$-point \texttt{ik}
is
\begin{verbatim}
g2kin(1:ngk(ik)) = ( ( xk(1,ik) + g(1,igk_k(1:ngk(ik),ik)) )**2 + &
( xk(2,ik) + g(2,igk_k(1:ngk(ik),ik)) )**2 + &
( xk(3,ik) + g(3,igk_k(1:ngk(ik),ik)) )**2 ) * tpiba2
\end{verbatim}
where \texttt{tpiba2} $= (2\pi/a)^2$.
There is only one FFT for the wavefunctions so the grid does not depend
upon the k-points; however, for a given wavefunction, only the components
corresponding to $G$-vectors that satisfy
$ { \hbar^2 ({\bf k}+{\bf G})^2 / 2m_e} \le E^w_c$
are different from 0.
(adapted from an answer by Dario Rocca).
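As a minimal illustration of the selection criterion above, here is a
sketch (not the actual \texttt{n\_plane\_waves} code; all names except
\texttt{xk}, \texttt{g} and \texttt{tpiba2} are illustrative) counting the
plane waves at $k$-point \texttt{ik}:
\begin{verbatim}
! count plane waves satisfying (k+G)^2 <= ecutwfc (cutoff in Ry)
npw = 0
DO ig = 1, ngm
   q2 = ( ( xk(1,ik) + g(1,ig) )**2 + ( xk(2,ik) + g(2,ig) )**2 + &
          ( xk(3,ik) + g(3,ig) )**2 ) * tpiba2
   IF ( q2 <= ecutwfc ) npw = npw + 1
END DO
\end{verbatim}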
\subsection{Gamma tricks}
In calculations using only the $\Gamma$ point (k=0),
the Kohn-Sham orbitals can be chosen to be real functions in
real space, so that
$
\psi(G) = \psi^*(-G).
$
This allows us to store only half of the Fourier components.
Moreover, two real FFTs can be performed as a single complex FFT.
The auxiliary complex function $\Phi$ is introduced:
$
\Phi(r) = \psi_j(r)+ i \psi_{j+1}(r)
$
whose Fourier transform $\Phi(G)$ yields
$
\psi_j (G) = {\Phi(G) + \Phi^*(-G)\over 2},
\psi_{j+1}(G) = {\Phi(G) - \Phi^*(-G)\over 2i}.
$
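As a minimal sketch (not actual QE code: \texttt{nl} and \texttt{nlm} are
assumed index maps from $G$ and $-G$ to FFT-grid locations; the other
names are illustrative), the two orbitals could be unpacked in reciprocal
space as follows:
\begin{verbatim}
DO ig = 1, ngw
   fp = phi(nl(ig)) + CONJG( phi(nlm(ig)) )   ! Phi(G) + Phi*(-G)
   fm = phi(nl(ig)) - CONJG( phi(nlm(ig)) )   ! Phi(G) - Phi*(-G)
   psi_j  (ig) = 0.5_dp * fp
   psi_jp1(ig) = CMPLX(0.0_dp,-0.5_dp,KIND=dp) * fm   ! i.e. fm/(2i)
END DO
\end{verbatim}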
A side effect on parallelization is that $G$ and $-G$ must
reside on the same processor. As a consequence, pairs of columns
with $G_{n'_1,n'_2,n'_3}$ and $G_{-n'_1,-n'_2,n'_3}$
(with the exception of the case $n'_1=n'_2=0$),
must be assigned to the same processor.
\subsection{Format of arrays containing charge density, potential, etc.}
The index of arrays used to store functions defined on 3D meshes is
actually a shorthand for three indices, following the FORTRAN convention
("leftmost index runs faster"). An example will explain this better.
Suppose you have a 3D array \texttt{psi(nr1x,nr2x,nr3x)}. FORTRAN
compilers store this array sequentially in the computer RAM in the
following way:
\begin{verbatim}
psi( 1, 1, 1)
psi( 2, 1, 1)
...
psi(nr1x, 1, 1)
psi( 1, 2, 1)
psi( 2, 2, 1)
...
psi(nr1x, 2, 1)
...
...
psi(nr1x,nr2x, 1)
...
psi(nr1x,nr2x,nr3x)
etc
\end{verbatim}
Let \texttt{ind} be the position of the \texttt{(i,j,k)} element in the above list:
the following relation
\begin{verbatim}
ind = i + (j - 1) * nr1x + (k - 1) * nr2x * nr1x
\end{verbatim}
holds. This should clarify the relation between 1D and 3D indexing. In real
space, the \texttt{(i,j,k)} point of the FFT grid with dimensions
\texttt{nr1} ($\le$\texttt{nr1x}),
\texttt{nr2} ($\le$\texttt{nr2x}), \texttt{nr3} ($\le$\texttt{nr3x}), is
$$
r_{ijk}=\frac{i-1}{nr1} \tau_1 + \frac{j-1}{nr2} \tau_2 +
\frac{k-1}{nr3} \tau_3
$$
where the $\tau_i$ are the basis vectors of the Bravais lattice.
The latter are stored as columns of the \texttt{at} array:
$\tau_1 = $ \texttt{at(:, 1)},
$\tau_2 = $ \texttt{at(:, 2)},
$\tau_3 = $ \texttt{at(:, 3)}.
The distinction between the dimensions of the FFT grid,
\texttt{(nr1,nr2,nr3)} and the physical dimensions of the array,
\texttt{(nr1x,nr2x,nr3x)} is done only because it is computationally
convenient in some cases that the two sets are not the same.
In particular, it may be convenient to have \texttt{nr1x}=\texttt{nr1}+1
to reduce memory conflicts. Note however that this possibility is no
longer relevant for most modern FFT libraries, so it may be considered
obsolescent.
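As a minimal sketch (using the variables defined above; \texttt{r} and the
loop indices are illustrative), the 1D index and the position of each grid
point can be recovered as follows:
\begin{verbatim}
DO k = 1, nr3
   DO j = 1, nr2
      DO i = 1, nr1
         ind  = i + (j-1)*nr1x + (k-1)*nr2x*nr1x
         r(:) = DBLE(i-1)/nr1 * at(:,1) + &
                DBLE(j-1)/nr2 * at(:,2) + &
                DBLE(k-1)/nr3 * at(:,3)
      END DO
   END DO
END DO
\end{verbatim}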
\subsection{Restart}
The two main packages, \texttt{PWscf} and \texttt{CP}, support
restarting from interrupted calculations. Restarting is trivial
in \texttt{CP}: it is sufficient to save from time to time a
restart file containing wavefunctions, orthogonality matrix,
forces, atomic positions, at the current and previous time step.
Restarting is much more complicated in \texttt{PWscf}. Since v.5.1,
restarting from interrupted calculations is possible ONLY if the code
has been explicitly stopped by the user. It is not practical to try to
restart from any other situation, e.g. after a crash: this would
imply saving lots of data all the time. With modern machines, this is
not a good idea. Restart in \texttt{PWscf} currently works as follows:
\begin{itemize}
\item Each loop calls \texttt{check\_stop\_now} just before the end.
If a user request to stop is found, a small file \texttt{restart\_*}
is created, containing only loop-specific local variables;
files used by the loop, if any, are closed and saved;
variable \texttt{conv\_elec} is set to .false.;
the loop is exited and the routine returns.
\item When a routine containing a loop returns, a check is done on
whether the code was stopped there or no convergence was achieved;
if so, data for the current loop, if needed, is saved, and the
routine returns.
\item Return after return, all loops are exited and control is
transferred to the main program, which must save the needed global
variables to file. The only difference with a normal exit is that
temporary files are kept in their internal format, not in portable format.
\item If variable \texttt{restart\_mode} is set in input to
\texttt{'restart'}:
\begin{itemize}
\item starting potential and wavefunctions are read from file
\item each routine containing a loop checks for the existence of a
\texttt{restart\_*} file before starting its own loop.
\end{itemize}
\end{itemize}
Since April 2013, all electronic loops are organized this way. Loops
on nuclear positions will be organized in the same manner once their
re-organization is completed.
% To be done:
% \begin{itemize}
% \item wg and et should be read from data file
% \item rho(+paw/U/metagga info) should be written to and read from
% unformatted data file similar to the file used in \texttt{mix\_rho};
% portable format should be written only at convergence.
% \end{itemize}
%\section{Structure of the code}
% \subsection{Modules and global variables}
% \subsection{Meaning of the most important variables}
% \subsection{Conventions for indices}
% \subsection{Diagonalization}
% \subsection{Self-consistency}
% \subsection{Structural optimization}
% \subsection{Symmetrization}
% \subsection{Performance issues}
% \subsection{Portability issues}
\section{Parallelization (MPI)}
In MPI parallelization, a number of independent processes are started
on as many processors, communicating via calls to MPI libraries
(the code will work even with more than one process per processor,
but this is not a smart thing to do). Each process has its own
set of variables and knows nothing about other processes' variables.
Variables that take little memory are replicated on all processors,
those that take a lot of memory (wavefunctions, G-vectors, R-space grid)
are distributed.
\subsection{General rules}
Calls to MPI libraries should be confined to a few selected places,
not scattered everywhere into the source code. The vast majority
of parallel operations consist either in broadcasts from one processor
to all others, or in global operations: parallel sums and transpose.
All you need is the MPI communicator (plus the ID of the root processor
for broadcasts), and the appropriate call to wrapper routines, contained
in {\tt UtilXlib/mp.f90} and {\tt UtilXlib/mp\_base.f90}.
For instance: {\tt mp\_sum} is a wrapper to {\tt mpi\_reduce},
{\tt mp\_bcast} to {\tt mpi\_bcast}.
For efficiency reasons (latency is very significant), performing many
parallel operations on a small amount of data each must be avoided.
If you can, store a sizable amount of data and transmit it in a single
MPI call. An example of REALLY BAD code:
\begin{verbatim}
COMPLEX, ALLOCATABLE :: wfc(:,:), swfc(:,:)
ALLOCATE (wfc(npwx,m), swfc(npwx,m))
DO i=1,m
   DO j=1,m
      ps = zdotc(npw,wfc(1,i),1,swfc(1,j),1)
      CALL mp_sum(ps,intra_bgrp_comm)
   END DO
END DO
\end{verbatim}
MUCH better code, both for serial and parallel speed:
\begin{verbatim}
COMPLEX, ALLOCATABLE :: ps(:,:), wfc(:,:), swfc(:,:)
ALLOCATE (ps(m,m), wfc(npwx,m), swfc(npwx,m))
CALL zgemm ('c', 'n', m, m, npw, (1.d0, 0.d0), wfc, &
            npwx, swfc, npwx, (0.d0, 0.d0), ps, m)
CALL mp_sum(ps,intra_bgrp_comm)
\end{verbatim}
\subsubsection{Preprocessing for parallel usage}
Calls to MPI libraries require variables contained in a
\texttt{mpif.h} file, or in a \texttt{mpi} module in more
recent implementations, which is usually absent on serial machines.
In order to prevent compilation problems on serial machines,
the following rules {\em must} be followed:
\begin{itemize}
\item Direct calls to MPI library routines must be replaced by
calls to wrapper routines like those in module \texttt{mp.f90}.
If this is not possible or not convenient, use \verb|#if defined (__MPI)|
to prevent compilation and usage in the serial case. Note that
some compilers do not like empty files or modules containing nothing!
\item Wrapper routines do not need to be conditionally called:
preprocessing is done inside them. Keep the difference between serial
and parallel code to a minimum: \verb|#if defined (__MPI)| guards are
needed only where the flow of parallel and serial execution differs
(see the sketch after this list).
\item Unneeded preprocessing may be removed if already present;
obsolete preprocessing option \verb|__PARA| must not be used.
\end{itemize}
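The following minimal sketch illustrates both rules (all variable names
are illustrative): the wrapped call needs no guard, while logic that is
meaningful only when data are actually distributed does:
\begin{verbatim}
! no guard needed: preprocessing is done inside the wrapper
CALL mp_sum( etot, comm )   ! comm = the relevant MPI communicator
#if defined (__MPI)
! parallel-only logic: distribute nr_total rows among nproc processes
nr_local = nr_total / nproc
IF ( me_proc < MOD( nr_total, nproc ) ) nr_local = nr_local + 1
#else
nr_local = nr_total
#endif
\end{verbatim}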
\subsection{Parallelization levels and communicators}
\texttt{mp\_world.f90} is the module describing the set of all processors
on which QE is running. \texttt{world\_comm}
is the communicator between all such processors. In QE, its usage
should be confined to parallel environment initialization. It
should not be used in source code, unless the latter is used only by
stand-alone executables that perform simple auxiliary tasks
and do not allow for multiple parallelization levels.
Unless QE is started from an external code, \texttt{world\_comm}
will in practice coincide with MPI\_COMM\_WORLD.
\texttt{mp\_image.f90} is the module containing information about
``image" parallelization, i.e. division into quasi-independent similar
calculations, each taking care of a different set of atomic
positions (NEB, PWscf) or of different irreps/phonon wavevectors
(PHonon). \texttt{intra\_image\_comm} is the communicator between
processors of the same image (most of the action will happen here);
\texttt{inter\_image\_comm} is the communicator between processors
belonging to different images (should be used only when communication
between images is necessary).
{\tt intra\_image\_comm} and {\tt world\_comm} coincide if there
is just one image running.
\texttt{mp\_pools.f90} is the module containing information about k-point
(``pool") parallelization. \texttt{intra\_pool\_comm} is the communicator
between processors working on the same group (``pool") of k-points;
\texttt{inter\_pool\_comm} is the communicator between different
k-point pools. Note that:
\begin{quote}
\framebox{$\sum_{\bf k}\equiv$ sum over local {\bf k}-points +
{\tt mp\_sum} on {\tt inter\_pool\_comm}}
\end{quote}
{\tt intra\_pool\_comm} and {\tt intra\_image\_comm} coincide if there
is just one k-point pool.
\texttt{mp\_bands.f90} is the module containing information about band
parallelization. \texttt{intra\_bgrp\_comm} is the communicator between
processors of the same group of bands; \texttt{inter\_bgrp\_comm} is the
communicator between processors belonging to different groups of bands.
Note that band parallelization is currently implemented only in CP and
for hybrid functionals in PW. When a sum over all bands is needed:
\begin{quote}
\framebox{$\sum_i\equiv$ sum over local bands + {\tt mp\_sum} on
{\tt inter\_bgrp\_comm}}
\end{quote}
{\tt intra\_bgrp\_comm} and {\tt intra\_pool\_comm} coincide
if there is just one band group.
Plane waves (${\bf k}+{\bf G}$ or ${\bf G}$ vectors up to the specified
kinetic energy cutoff) are distributed across processors of the
{\tt intra\_bgrp\_comm} communicators. Sums over all plane waves
or {\bf G}-vectors (as e.g. in scalar products $\langle\phi_i|\phi_j\rangle$)
should be performed as follows:
\begin{quote}
\framebox{$\sum_{\bf G} \equiv$ {\tt mp\_sum} on {\tt intra\_bgrp\_comm}}
\end{quote}
The same holds for the real-space FFT grid.
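Putting these rules together, here is a minimal sketch (all names except
\texttt{mp\_sum}, \texttt{ngk}, \texttt{nks} and the communicators are
illustrative) of a quantity accumulated over both plane waves and
$k$-points:
\begin{verbatim}
ekin = 0.0_dp
DO ik = 1, nks                ! k-points local to this pool
   DO ig = 1, ngk(ik)         ! plane waves local to this band group
      ekin = ekin + contrib(ig,ik)
   END DO
END DO
CALL mp_sum( ekin, intra_bgrp_comm )   ! complete the sum over G
CALL mp_sum( ekin, inter_pool_comm )   ! complete the sum over k-points
\end{verbatim}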
\subsection{Tricks and pitfalls}
\begin{itemize}
\item
Replicated calculations may either be performed independently on
each processor, or performed on one processor and broadcast to all
others. The first approach requires less programming, but it is unsafe:
in principle all processors should yield exactly the same results, if
they work on the same data, but sometimes they don't (depending on the
machine, compiler, and libraries). Even a tiny difference in the last
significant digit can eventually cause serious trouble if allowed to
build up, especially when a replicated check is performed (in which
case the code may ``hang'' if the check yields different results on
different processors). Never assume that the value of a variable produced
by replicated calculations is exactly the same on all processors: when in
doubt, broadcast the value calculated on a specific processor (the ``root''
processor) to all others.
\item
Routine \texttt{errore} should be called in parallel by all processors,
or else it will hang.
\item
I/O operations: file opening, closing, and so on, are as a rule performed
only on processor \texttt{ionode}. The correct way to check for errors is
the following:
\begin{verbatim}
IF ( ionode ) THEN
OPEN ( ..., IOSTAT=ierr )
...
END IF
CALL mp_bcast( ierr, ... , intra_image_comm )
CALL errore( 'routine','error', ierr )
\end{verbatim}
The same applies to all operations performed on a single processor,
or a subgroup of processors: any error code must be broadcast before
the check.
\end{itemize}
\subsection{Data distribution}
Quantum ESPRESSO employs arrays whose memory requirements fall
into three categories.
\begin{itemize}
\item {\em Fully Scalable}:
Arrays that are distributed across processors of a pool.
Fully scalable arrays are typically large to very large and contain one
of the following dimensions:
\begin{itemize}
\item number of plane waves, npw (or max number, npwx)
\item number of $G$-vectors, ngm
\item number of grid points in the R space, dfft\%nnr
\end{itemize}
Their size decreases linearly with the number of processors in a pool
(see the sketch after this list).
\item {\em Partially Scalable}:
Arrays that are distributed across processors of the
ortho or diag group. Typically they are much smaller than fully scalable
arrays, and small in absolute terms for moderate-size systems. Their size
however increases quadratically with the number of atoms in the system,
so they have to be distributed for large systems (hundreds to thousands
of atoms). Partially scalable arrays contain none of the dimensions listed
above, but two of the following dimensions:
\begin{itemize}
\item number of states, nbnd
\item number of atomic states, natomwfc
\item number of projectors, nkb
\end{itemize}
Their size decreases linearly with the number of processors in an ortho
or diag group.
\item
{\em Nonscalable}: All the remaining arrays, which are not distributed
across processors. These are typically small arrays, with dimensions like,
for instance:
\begin{itemize}
\item number of atoms, nat
\item number of species of atoms, nsp
\end{itemize}
The size of these arrays is independent of the number of processors.
\end{itemize}
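As a minimal sketch of the first category (\texttt{rho\_loc} is an
illustrative name), a fully scalable array is allocated with its local,
distributed dimension rather than with the global grid size:
\begin{verbatim}
REAL(dp), ALLOCATABLE :: rho_loc(:)
! dfft%nnr = number of R-space grid points local to this processor
ALLOCATE( rho_loc( dfft%nnr ) )
rho_loc(:) = 0.0_dp
\end{verbatim}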
% \subsubsection{Parallel fft}
\section{File Formats}
\subsection{Data file(s)}
\subsubsection{Rationale}
Requirements: the data file should be
\begin{itemize}
\item efficient (quick to read and write)
\item easy to read, parse and write without special libraries
\item easy to understand (self-documented)
\item portable across different software packages
\item portable across different computer architectures
\end{itemize}
Solutions:
\begin{itemize}
\item use binary I/O for large records
\item exploit the file system for organizing data
\item use XML
\item use a specialized library (FoX) to read, parse, write
\item ensure the possibility to convert to a portable formatted file
\end{itemize}
Integration with other packages:
\begin{itemize}
\item provide a self-standing (code-independent) library to read/write this format
\item the use of this library is intended to be at high level, hiding low-level details
\end{itemize}
\subsection{Restart files}
\section{Modifying/adding/extending \qe}
\subsection{Programming style (or lack of it)}
There are currently no strict guidelines for developers. You
should however follow at least the following loose ones:
\begin{itemize}
\item Preprocessing options should be capitalized and start with
two underscores. Examples: \_\_MPI, \_\_LINUX, ... Use
preprocessing syntax \verb|#if defined (XXX)|, not \verb|#if defined XXX|
or \verb|#ifdef XXX|
\item Fortran commands should be capitalized:
CALL something( )
\item Variable names should be lowercase: \texttt{foo = bar/2}
\item Indent DO's and IF's with three white spaces (editors like emacs will do this automatically for you)
\item Do not write crammed code: leave spaces, insert empty separation lines
\item Use comments (introduced by a !) to explain what is not obvious from
the code. Remember that what is obvious to you may not be obvious to other
people. It is especially important to document what a routine does, what
it needs on input, what it produces on output. A few words of comment
may save hours of searching into the code for a piece of missing information.
\item do not use non-standard machine-dependent extensions or sloppy syntax.
An example: standard Fortran requires a \& both at the end of a
line AND at the beginning of the continuation line if a character
string (inside ' ' or " ") spans the two lines. Some compilers do not
complain if the latter \& is missing, others do.
% Another example: empty strings are nonstandard,
% use \texttt{empty='~'}, not \texttt{empty=''}.
\item try to stick to F2003 standard: \qe\ must work even if you do not have
the latest and the greatest compiler. Use F2008 syntax only if really
useful, and after verifying that it doesn't break too many compilers.
\item use "dp" (defined in module ''kinds'') to define the type of real and
complex variables
\item all constants should be defined to be of kind "dp". Preferred syntax:
0.0\_dp.
\item use "generic" intrinsic functions: SIN, COS, etc.
\item conversions should be explicitly indicated. For conversions to real,
use DBLE, or else REAL(...,KIND=dp). For conversions to complex, use
CMPLX(...,...,KIND=dp). For the complex conjugate, use CONJG. For the
imaginary part, use AIMAG. IMPORTANT: do not use REAL or CMPLX without
KIND=dp, or else you will lose precision (except when you take the real
part of a double precision complex number). See the sketch after this list.
\item Do not use automatic arrays (e.g. \texttt{REAL(dp) :: A(N)} with
\texttt{N} defined at run time) unless you are sure that the array is
small in all cases: large arrays may easily exceed the stack size,
or the memory size.
\item Do not use pointers unless you have a good reason to:
pointers may hinder optimization. Allocatable arrays should be used instead.
\item If you use pointers, nullify them before performing tests on their
status.
\item Beware fancy constructs like structures: they look great on paper,
but they also have the potential to make a code unreadable, or inefficient,
or unusable with some compilers. Avoid nested structures unless you have a
valid reason to use them.
\item Be careful with array syntax and in particular with
array sections. Passing an array section to a routine may look elegant
but it may turn out to be inefficient: a copy will be silently done
if the section is not contiguous in memory (or if the compiler
decides it is the right thing to do), increasing the memory footprint.
\item Do not pass unallocated arrays or pointers as non-optional arguments,
even in those cases where they are not actually used inside the subroutine:
some compilers don't like it. Also note that an unallocated array or
pointer passed as an optional argument (provided the dummy argument does
not have the POINTER or ALLOCATABLE attribute) is interpreted as not
present (this is an F2008 feature, already used since v.6.4).
\item Do not use any construct that is susceptible to be flagged as
out-of-bounds error, even if no actual out-of-bound error takes place.
\item Always use IMPLICIT NONE and declare all local variables.
All variables passed as arguments to a routine should be declared as
INTENT (IN), (OUT), or (INOUT). All variables from modules should be
explicitly specified via USE module, ONLY : variable. Variables used
in an array declaration must be declared first, as in the following
example:
\begin{verbatim}
INTEGER, INTENT(IN) :: N
REAL(dp), INTENT(OUT) :: A(N)
\end{verbatim}
in this order (some compilers complain if you put the second line
before the first).
\end{itemize}
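The following short, artificial sketch collects several of the rules above
(kinds, IMPLICIT NONE, INTENT, declaration order, explicit conversions):
\begin{verbatim}
SUBROUTINE sketch( n, z, a )
   USE kinds, ONLY : dp
   IMPLICIT NONE
   INTEGER,     INTENT(IN)  :: n
   COMPLEX(dp), INTENT(IN)  :: z(n)
   REAL(dp),    INTENT(OUT) :: a(n)
   INTEGER :: i
   DO i = 1, n
      ! explicit conversions, constants of kind dp
      a(i) = DBLE( z(i) ) + 0.5_dp * AIMAG( z(i) )
   END DO
END SUBROUTINE sketch
\end{verbatim}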
The \texttt{dev-tools/} directory contains some useful tools for developers;
see the \texttt{dev-tools/README.md} file.
\subsection{Adding or modifying input variables}
New input variables should be added to
\texttt{Modules/input\_parameters.f90},
then copied to the code's internal variables in the \texttt{input.f90}
subroutine. The namelist and card parsers are in
\texttt{Modules/read\_namelists.f90} and \texttt{Modules/read\_cards.f90}.
Files \texttt{input\_parameters.f90}, \texttt{read\_namelists.f90},
\texttt{read\_cards.f90} are shared by all codes, while each code
has its own version of \texttt{input.f90}, used to copy input values
into internal variables (a sketch of this step follows the example below).
EXAMPLE: suppose you need to add a new input variable called \texttt{pippo}
to the namelist \texttt{control}. Then:
\begin{enumerate}
\item add pippo to the input\_parameters.f90 file containing the
namelist control
\begin{verbatim}
INTEGER :: pippo = 0
NAMELIST / control / ....., pippo
\end{verbatim}
Remember: always set an initial value!
\item add pippo to the control\_default subroutine (contained in
module read\_namelists.f90 )
\begin{verbatim}
subroutine control_default( prog )
...
IF( prog == 'PW' ) pippo = 10
...
end subroutine
\end{verbatim}
This routine sets the default value for pippo (can be different in
different codes)
\item add pippo to the control\_bcast subroutine (contained in module
read\_namelists.f90 )
\begin{verbatim}
subroutine control_bcast( )
...
call mp_bcast( pippo, intra_image_comm )
...
end subroutine
\end{verbatim}
\end{enumerate}
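The copy of \texttt{pippo} from \texttt{input\_parameters} to the code's
internal variables, performed in each code's own \texttt{input.f90}, is
not shown above. A minimal sketch (the internal module name and the
renaming are illustrative):
\begin{verbatim}
subroutine copy_control_variables()
   ! sketch only: "some_internal_module" stands for the module
   ! where the code keeps its internal copy of pippo
   USE input_parameters, ONLY : pippo_in => pippo
   USE some_internal_module, ONLY : pippo
   IMPLICIT NONE
   pippo = pippo_in
end subroutine
\end{verbatim}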
\subsection{Updating documentation}
Input variable documentation for most codes is contained in a
\texttt{*/Doc/INPUT\_*.def} file. Simple utilities may instead have
the input documentation in the header of the code source.
The .def files are processed to produce .xml, .txt and .html files.
The latter is the most important, being the one that is available
online on the web site.
The documentation must be processed with command ``make doc'' before
the release. Note that:
\begin{itemize}
\item in order to produce .xml, .txt, .html file,
"tcl", "tcllib", "xsltproc" are needed;
\item in order to build .pdf files from LaTeX, "pdflatex" is needed;
\item in order to build html files for user guide and developer manual,
"latex2html" and "convert" (from Image-Magick) are needed.
\end{itemize}
\section{Using git}
\label{Sec:git}
The following notes cover the \qe-specific work organization, plus the
essential git commands. Those interested in mastering git may consult
one of the many available on-line guides and tutorials, e.g.,
\verb|git-scm.com/book/en/v2|.
The git repository is hosted on GitLab: \verb|https://gitlab.com/QEF/q-e|.
A mirror, automatically aligned every week, is available
on GitHub: \verb|https://github.com/QEF/q-e|. To download the repository:
\begin{quote}
\verb|git clone https://gitlab.com/QEF/q-e.git| or\\
\verb|git clone git@gitlab.com:QEF/q-e.git|
\end{quote}
Registration on GitLab is not needed at this stage but it is useful anyway.
GitLab accepts a number of other accounts (Google, GitHub, ...) to sign in.
The repository contains a ``master'' and a ``develop'' branch,
plus other branches for specific goals. The ``develop'' branch is the
default and is where development goes on. The ``master'' branch is
aligned to the ``develop'' branch from time to time, when ``develop''
looks stable enough. No other changes are allowed in ``master'' except
for serious bugs.
\subsection{Developing with git}
Development can proceed in different ways:
\begin{enumerate}
\item Via ``merge'' requests from a branch in the developer's repository
forked from the q-e repository ({\em recommended})
\item Via ``merge'' requests from a branch of the q-e repository
({\em not recommended})
\item Directly into the ``develop'' branch, or in a ``backport'' branch
if existing ({\em strongly discouraged}).
\end{enumerate}
The first option is the recommended one. To start:
\begin{itemize}
\item Register on gitlab.com, {\em save your public ssh keys}
on your GitLab account (in this page: \verb|https://gitlab.com/profile/keys|)
\item {\em fork} the QEF/q-e project: point your browser to
\verb|https://gitlab.com/QEF/q-e|, use the ``fork'' button
\item {\em clone} your GitLab fork on your workstation, e.g.:\\
\verb|git clone git@gitlab.com:<your-username>/q-e.git|
\item optionally, switch to the ``develop'' branch of your fork
(just to keep the symmetry):
\verb|git checkout --track origin/develop|
\end{itemize}
Every time you start to do some work, align your fork to the
``develop'' branch:
\begin{quote}
\verb|git pull git@gitlab.com:QEF/q-e.git develop|
\end{quote}
then create a new branch and work on it:
\begin{quote}
\verb|git checkout -b| {\em my-new-branch}
\end{quote}
(if the branch already exists, use \verb|git checkout | {\em my-branch} to
switch to it; if your local repository contains unsaved changes,
commit or ``stash'' them before switching branches).
Once you have made your changes, {\em commit} (save) them:
\begin{quote}
\verb|git add| {\em list-of-changed-or-added-files} , then
\verb| git commit | , or \\
\verb|git commit| {\em list-of-changed-files}, or \\
\verb|git commit -a| (commits all modified files)
\end{quote}
When committing, please BE EXTREMELY CAREFUL not to add files that do
not belong to the repository, e.g.: data files, executables, objects.
Such files MUST NOT BE COMMITTED.
Once you are finished with changes, you have to {\em push} (publish)
to the remote repository:
\begin{quote}
\verb|git push origin | {\em my-new-branch}
\end{quote}
or just \verb|git push| if the remote branch {\em my-new-branch} already
exists. The reply message should contain a link that allows you to open
a ``merge request'' (the link works only if you are signed in to gitlab.com).
You can also use the GitLab web interface to that end.
The branch may be removed automatically after the merge (set the appropriate
option when you approve your merge) or manually, using \verb|git branch -d |
{\em my-new-branch}.
Note: if a merge request is pending and you push further changes to the
branch to be merged, the merge request will be automatically updated.
If this is not what you want, commit to a different branch, or wait
until the merge is done.
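Putting the pieces together, a typical cycle might look like this (the
branch name is just an example):
\begin{verbatim}
git pull git@gitlab.com:QEF/q-e.git develop  # align to upstream "develop"
git checkout -b fix-doc-typos                # create a work branch
# ... edit files ...
git commit -a                                # commit all modified files
git push origin fix-doc-typos                # publish, then open a merge request
\end{verbatim}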
It may be a good idea to align your branch to the current development
version before making the merge request. If you have modified a file that has
meanwhile been modified in the repository, a conflict will arise. You
can use \texttt{git stash} to work around the conflict:
\begin{quote}
\verb|git stash save | (save and remove modified files)\\
\verb|git pull ... | (update files)\\
\verb|git stash apply| (overwrite with locally modified files)
\end{quote}
Beware: you may need to manually merge files that have been modified both
by you and in the repository. \texttt{git stash list} lists all stashed
changes in reverse chronological order. The stash can be cleared using
\texttt{git stash clear}.
Note: if you do not change sources but only documentation, tests, or examples,
add \verb|[skip ci]| to the commit message. This prevents the execution of the
automatic compilation on GitLab (which sometimes fails for no good reason,
blocking a subsequent merge request).
Note: a commit message containing \verb|[fixes issue #N]| should
automatically mark issue N as solved once the commit is merged.
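For instance (both messages are hypothetical):
\begin{verbatim}
git commit -m "improve TDDFPT documentation [skip ci]"
git commit -m "fix sign error in stress [fixes issue #123]"
\end{verbatim}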
\subsection{Working directly into the develop branch}
NOT RECOMMENDED - only for people knowing what they are doing
(or ready to fix the mess in case they didn't know what they were doing):
\begin{itemize}
\item \verb|git clone git@gitlab.com:QEF/q-e.git| if not already done
\item Ensure you switch to the ``develop'' branch:
\verb|git checkout --track origin/develop|
\item Work on it as in the previous subsection
\item When you are ready, commit and push.
\end{itemize}
\subsection{A few useful commands}
\begin{itemize}
\item \texttt{git status}: information on the current state of the repository
(use option \texttt{-uno} to get rid of messages about untracked files)
\item \texttt{git diff}: difference between the local copy and the repository
\item \texttt{git fetch}:
inspects the remote repository and signals whether new files or conflicts
will be present when the local copy is updated
\item \texttt{git pull}:
downloads updates from the remote repository, applies them to the local one.
If conflicts arise that cannot be solved automatically, they are flagged
and one has to proceed with a manual merge
\item \texttt{git add}: adds files or directories to the "stage area", a pool
of files to be committed
\item \texttt{git commit}: commits files in the "stage area". Unlike
\texttt{svn commit}, files are committed only in the local repository
\item \texttt{git push}: publishes the new commits of the local repository
to one or more remote repositories
\item \texttt{git merge}: merges two branches (typically the local one and
the remote one). A merge may be easy or complex, depending upon the type of
conflicts.
\item \texttt{git branch}: information and operations on branches
\end{itemize}
\section{The \qe\ test-suite}
\label{Sec:testfarm}
The \qe\ test-suite is used to ensure continuous integration and long-term numerical stability of the code.
The test-suite is also used by several Buildbot test-farms to automatically test the code nightly, after a commit to the repository has been detected.
The currently active test-farm machine can be accessed at
\texttt{test-farm.quantum-espresso.org} (it points to a machine at CINECA:
\verb|http://130.186.13.169:8010/#/|).
\subsection{How to add tests for a new executable}
Let us take the example of adding a new test for the TDDFPT module.
\begin{description}
\item [extract-PROG\_NAME.x] This script extracts the physical quantities from the output files and parses them into a format suitable for the testcode.py script. The script needs to cover all the different outputs you want to parse (for chained calculations). For example, in this case we want to parse the output of \texttt{pw.x}, \texttt{turbo\_lanczos.x} and \texttt{turbo\_spectrum.x}. It is crucial to add as many parameters to be tested as possible, to increase code coverage.
\item [run-PROG\_NAME.sh] This bash script contains the paths of the different programs and sources the \texttt{ENVIRONMENT} file.
\item [jobconfig] You need to edit this file to add all the new tests, as well as the new program. You can chain different programs with different outputs in one test. In this case we added
\begin{verbatim}
[tddfpt_*/]
program = TDDFPT
\end{verbatim}
This means that all the new tests related to TDDFPT must be placed in a folder whose name starts with \texttt{tddfpt\_}. You can also add them to a new category.
\item[userconfig.tmp] This file contains the accepted accuracies for the different physical quantities defined in \texttt{extract-PROG\_NAME.x}. You need to add a new section for your program. In the tolerance variable, the first column is the accepted absolute deviation, the second one the accepted relative deviation, and the third column contains the name of the physical quantity as defined earlier (a schematic entry is sketched after this list). Note that you need to add values for all the codes that you intend to test: in our case we need to add the variables of \texttt{pw.x} as well, although they are already defined for other programs.
To estimate an acceptable tolerance, it is advisable to start with a very strict one (a very low value, e.g. 1d-6 or so), run some local tests (for example, comparing results obtained in serial and in parallel), and then raise the accepted tolerance slightly.
\item[Makefile] One needs to add a line that checks for pseudopotentials to be downloaded, i.e.\ in our case \texttt{@./check\_pseudo.sh tddfpt\_}.
\item[PROG\_NAME\_TEST\_NAME] Create one folder for each new test you want to add, following the convention prog\_name and test\_name: in our case we create a folder named \texttt{tddfpt\_CH4}. The folder must contain all the input files, the pseudopotentials needed for that test, and the reference files. The reference files must have names starting with \texttt{benchmark.out.git.inp=}. The easiest way, however, is to run the test suite for that test: the code will tell you the name it expects, and you can then rename your reference output accordingly. In our case we therefore run
\begin{verbatim}
make run-custom-test testdir=tddfpt_CH4
\end{verbatim}
We can then rename the output by doing
\begin{verbatim}
cp test.out.030117.inp=CH4.pw-in.args=1 benchmark.out.git.inp=CH4.pw-in.args=1
\end{verbatim}
We now have a reference file for the first step of the calculation, and we can do the same for the two other steps.
Once this is done, we can remove all the unwanted files and we should be left with a clean folder that can be committed to the git repository. In our case the test folder contains the following files:
\begin{verbatim}
benchmark.out.git.inp=CH4.pw-in.args=1
benchmark.out.git.inp=CH4.tddfpt_pp-in.args=3
benchmark.out.git.inp=CH4.tddfpt-in.args=2
CH4.tddfpt-in
CH4.pw-in
CH4.tddfpt_pp-in
\end{verbatim}
It is very important to then re-run the tests in parallel (e.g.\ on 4 cores), to make sure that the results stay within the accepted tolerance.
\end{description}
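For reference, a tolerance entry in \texttt{userconfig.tmp} might look
schematically like this (quantity names and values are purely
illustrative; copy the exact syntax from an existing section):
\begin{verbatim}
[TDDFPT]
...
tolerance = ( (1.0e-6, None,   'e1'),
              (1.0e-4, 1.0e-3, 'average') )
\end{verbatim}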
\subsection{How to add tests for an existing executable}
You have to create a new folder, following the convention prog\_name and test\_name, and then follow the structure outlined above.
If you want to test new physical quantities, you need to parse them in the script \texttt{extract-PROG\_NAME.x}. Finally, the new test should
be added to \texttt{jobconfig}.
\section{OBSOLETE STUFF}
\subsection{How to add support for a new architecture}
In order to support a previously unsupported architecture, first you
have to figure out which compilers, compilation flags, libraries
etc. should be used on that architecture.
In other words, you have to write a \make.inc\ that works: you may use
the manual configuration procedure for that (see the
User Guide). Then, you have to modify \configure\ so that it can
generate that \make.inc\ automatically.
To do that, you have to add the case for your architecture in several
places throughout \configurac:
\begin{enumerate}
\item Detect the architecture.
Look for these lines:
\begin{verbatim}
if test "$arch" = ""
then
case $host in
ia64-*-linux-gnu ) arch=ia64 ;;
x86_64-*-linux-gnu ) arch=x86_64 ;;
*-pc-linux-gnu ) arch=ia32 ;;
etc.
\end{verbatim}
Here you must add an entry corresponding to your architecture and
operating system. Run \texttt{config.guess} to obtain the string identifying
your system.
For instance, on a PC it may be "i686-pc-linux-gnu", while on an IBM SP4
it is "powerpc-ibm-aix5.1.0.0". It is convenient to put some asterisks to
account for small variations of the string across different machines of
the same family: for instance, it could be "aix4.3" instead of
"aix5.1", or "athlon" instead of "i686".
\item Select compilers.
Look for these lines:
\begin{verbatim}
# candidate compilers and flags based on architecture
case $arch in
ia64 | x86_64 )
...
ia32 )
...
aix )
...
etc.
\end{verbatim}
Add an entry for your value of \$arch, and set there the appropriate
values for several variables, if needed (all variables are assigned
some reasonable default value, defined before the "case" block):
\begin{itemize}
\item "try\_f90" should contain the list of candidate Fortran 90 compilers,
in order of decreasing preference (i.e. configure will use the first
one it finds). If your system has parallel compilers, you should list
them in "try\_mpif90".
\item "try\_ar", "try\_arflags": for these, the values "ar" and "ruv" should
always be fine, unless some special flag is required (e.g., -X64
with SP4).
\item "try\_dflags" should be defined if there is any preprocessing
option specific to your machine: for instance, on IBM machines,
"try\_dflags=-D\_\_AIX". A list of such flags can be found in file
\texttt{include/defs.h.README}.
\item "try\_iflags" should be set to the appropriate "-I" option(s)
needed by the preprocessor or by the compiler to locate *.h files
to be included; you should not normally need to change the default,
try\_iflags="-I../include", which is good for most cases.
\end{itemize}
For example, here's the entry for IBM machines running AIX:
\begin{verbatim}
aix )
try_mpif90="mpxlf90_r mpxlf90"
try_f90="xlf90_r xlf90 $try_f90"
try_arflags="-X64 ruv"
try_arflags_dynamic="-X64 ruv"
try_dflags="-D__AIX -D__XLF"
;;
\end{verbatim}
The next step is to look for both serial and parallel Fortran
compilers:
\begin{verbatim}
# check serial Fortran 90 compiler...
...
AC_PROG_F77($f90)
...
# check parallel Fortran 90 compiler
...
AC_PROG_F77($mpif90)
...
echo setting F90... $f90
echo setting MPIF90... $mpif90
\end{verbatim}
A few compilers require some extra work here: for instance, if the
Intel Fortran compiler was selected, you need to know which version
because different versions need different flags.
At the end of the test:
\begin{itemize}
\item \$mpif90 is the parallel compiler, if any; if no parallel compiler
is found, or if \texttt{--disable-parallel} was specified, \$mpif90
is the serial compiler;
\item \$f90 is the serial compiler.
\end{itemize}
Next step: the choice of (serial) C and Fortran 77 compilers.
Look for these lines:
\begin{verbatim}
# candidate C and f77 compilers good for all cases
try_cc="cc gcc"
try_f77="$f90"
case "$arch:$f90" in
*:f90 )
....
etc.
\end{verbatim}
Here you have to add an entry for your architecture; since the
correct choice of C and Fortran 77 compilers may depend on the Fortran 90
compiler, you may need to specify the f90 compiler as well.
Again, specify the compilers in try\_cc and try\_f77 in order of
decreasing preference. At the end of the test:
\begin{itemize}
\item \$cc is the C compiler;
\item \$f77 is the Fortran 77 compiler, used to compile *.f files
(it may coincide with \$f90).
\end{itemize}
\item Specify compilation flags.
Look for these lines:
\begin{verbatim}
# check Fortran compiler flags
...
case "$arch:$f90" in
ia64:ifort* | x86_64:ifort* )
...
ia64:ifc* )
...
etc.
\end{verbatim}
Add an entry for your case and define:
\begin{itemize}
\item "try\_fflags": flags for the Fortran 77 compiler.
\item "try\_f90flags": flags for the Fortran 90 compiler.
In most cases they will be the same as in Fortran 77 plus some
others. In that case, define them as "\$(FFLAGS) -something\_else".
\item "try\_fflags\_noopt": flags for Fortran 77 with all optimizations
turned off: this is usually "-O0". These flags used to be needed to
compile LAPACK dlamch.f; they are likely obsolete.
\item "try\_ldflags": flags for the linking phase (not including the list
of libraries: this is decided later).
\item "try\_ldflags\_static": additional flags to select static compilation
(i.e., don't use shared libraries).
\item "try\_dflags": must be defined if there is in the code any preprocessing
option specific to your compiler (for instance, -D\_\_INTEL for Intel
compilers). Define it as "\$try\_dflags -D..." so that pre-existing
flags, if any, are preserved.
\item if the Fortran compiler is not able to invoke the C preprocessor
automatically before compiling, set "have\_cpp=0" (the opposite case
is the default). The appropriate compilation rules will be generated
accordingly. If the compiler requires that any flags be specified in
order to invoke the preprocessor (for example, "-fpp " -- note the
space), specify them in "pre\_fdflags".
\end{itemize}
For example, here's the entry for ifort on Linux PC:
\begin{verbatim}
ia32:ifort* )
try_fflags="-O2 -tpp6 -assume byterecl"
try_f90flags="\$(FFLAGS) -nomodule"
try_fflags_noopt="-O0 -assume byterecl"
try_ldflags=""
try_ldflags_static="-static"
try_dflags="$try_dflags -D__INTEL"
pre_fdflags="-fpp "
;;
\end{verbatim}
Next step: flags for the C compiler. Look for these lines:
\begin{verbatim}
case "$arch:$cc" in
*:icc )
...
*:pgcc )
...
etc.
\end{verbatim}
Add an entry for your case and define:
\begin{itemize}
\item "try\_cflags": flags for the C compiler.
\item "c\_ldflags": flags for linking, when using the C compiler as linker.
This is needed to check for libraries written in C, such as FFTW.
\item if you need a different preprocessor from the standard one (\$CC -E),
define it in "try\_cpp".
\end{itemize}
For example, for XLC on AIX:
\begin{verbatim}
aix:mpcc* | aix:xlc* | aix:cc )
try_cflags="-q64 -O2"
c_ldflags="-q64"
;;
\end{verbatim}
Finally, if you have to use a nonstandard preprocessor, look for these
lines:
\begin{verbatim}
echo $ECHO_N "setting CPPFLAGS... $ECHO_C"
case $cpp in
cpp) try_cppflags="-P -traditional" ;;
fpp) try_cppflags="-P" ;;
...
\end{verbatim}
and set "try\_cppflags" as appropriate.
\item Search for libraries.
To instruct \configure\ to search for libraries, you must tell it two
things: the names of libraries it should search for, and where it
should search.
The following libraries are searched for:
\begin{itemize}
\item BLAS or equivalent.
Some vendor replacements for BLAS that are supported by \qe\ are:
\begin{quote}
MKL on Linux, 32- and 64-bit Intel CPUs\\
ACML on Linux, 64-bit AMD CPUs\\
ESSL on AIX\\
SCSL on SGI Altix\\
SUNperf on SPARC
\end{quote}
Moreover, ATLAS is used over BLAS if available.
\item LAPACK or equivalent. Some vendor replacements for LAPACK are supported
by \qe, e.g.: Intel MKL, IBM ESSL.
\item FFTW (version 3) or another supported FFT library (e.g.\ Intel DFTI,
IBM ESSL).
\item the IBM MASS vector math library.
\item an MPI library; this is often automatically linked by the compiler.
\end{itemize}
If you have another replacement for the above libraries, you will have
to insert a new entry in the appropriate place.
This is unfortunately a little too complex to explain in detail here.
Basic info: \\
"AC\_SEARCH\_LIBS(function, name, ...)" looks for symbol
"function" in library "libname.a". If that is found, "-lname" is
appended to the LIBS environment variable (initially empty).
The real thing is more complicated than just that, because the
"-Ldirectory" option must be added to search in a nonstandard
directory, and because a given library may require other libraries as
prerequisites (for example, LAPACK requires BLAS). A schematic example
is given after this list.
\end{enumerate}
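A minimal sketch of such a check (the library name and search-path
variable are hypothetical; real entries in \configurac\ are more
elaborate):
\begin{verbatim}
# prepend a nonstandard search path, then look for symbol dgemm
LDFLAGS="-L$MYBLAS_DIR $LDFLAGS"
AC_SEARCH_LIBS(dgemm, myblas, have_blas=1)
# on success, "-lmyblas" has been appended to $LIBS
\end{verbatim}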
\subsection{\qe\ restart file specifications}
Written by Paolo Giannozzi 2005-11-11,
Last modified by Andrea Ferretti 2006-10-29
Format name: QEXML \\
Format version: 1.4.0 \\
The "restart file" is actually a "restart directory", containing
several files and sub-directories. For CP/FPMD, the restart directory
is created as "\$prefix\_\$ndw/", where \$prefix is the value of the
variable "prefix". \$ndw the value of variable ndw, both read in
input; it is read from "\$prefix\_\$ndr/", where \$ndr the value of
variable ndr, read from input. For PWscf, both input and output
directories are called "\$prefix.save/".
The content of the restart directory is as follows:
\begin{verbatim}
data-file.xml which contains:
- general information that doesn't require large data set:
atomic structure, lattice, k-points, symmetries,
parameters of the run, ...
- pointers to other files or directories containing bulkier
data: grids, wavefunctions, charge density, potentials, ...
charge_density.dat contains the charge density
spin_polarization.dat contains the spin polarization (rhoup-rhodw) (LSDA case)
magnetization.x.dat
magnetization.y.dat contain the spin polarization along x,y,z
magnetization.z.dat (noncollinear calculations)
lambda.dat contains occupations (Car-Parrinello dynamics only)
mat_z.1 contains occupations (ensemble-dynamics only)
<pseudopotentials> A copy of all pseudopotential files given in input
<k-point dirs> Subdirectories K00001/, K00002/, etc, one per k-point.
\end{verbatim}
Each k-point directory contains:
\begin{verbatim}
evc.dat wavefunctions for spin-unpolarized calculations, OR
evc1.dat
evc2.dat spin-up and spin-down wavefunctions, respectively,
for spin polarized (LSDA) calculations;
gkvectors.dat the details of specific k+G grid;
eigenval.xml eigenvalues for the corresponding k-point
for spin-unpolarized calculations, OR
eigenval1.xml spin-up and spin-down eigenvalues,
eigenval2.xml for spin-polarized calculations;
\end{verbatim}
in a molecular dynamics run, also wavefunctions at the preceding time step:
\begin{verbatim}
evcm.dat for spin-unpolarized calculations OR
evcm1.dat
evcm2.dat for spin polarized calculations;
\end{verbatim}
\begin{itemize}
\item All files "*.xml" are XML-compliant, formatted file;
\item Files "mat\_z.1", "lambda.dat" are unformatted files, containing a single record;
\item All other files "*.dat", are XML-compliant files, but they
contain an unformatted record.
\end{itemize}
\subsubsection{Structure of file "data-file.xml"}
\begin{verbatim}
XML Header: whatever is needed to have a well-formed XML file
Body: introduced by <Root>, terminated by </Root>. Contains first-level tags
only. These contain only other tags, not values. XML syntax applies.
First-level tags: contain either
second-level tags, OR
data tags: tags containing data (values for a given variable), OR
file tags: tags pointing to a file
\end{verbatim}
Data tag syntax ([...] = optional):
\begin{verbatim}
<TAG type="vartype" size="n" [UNIT="units"] [LEN="k"]>
values (in appropriate units) for variable corresponding to TAG:
n elements of type vartype (if character, of length k)
</TAG>
\end{verbatim}
where TAG describes the variable into which data must be read;\\
"vartype" may be "integer", "real", "character", "logical";\\
if type="character", LEN="k" must be used to specify the length
of the character variable; size="n" is the dimension.\\
Acceptable values for "units" depend on the specific tag.
Short syntax, used only in a few cases:
\begin{verbatim}
<TAG attribute="something"/>
\end{verbatim}
For instance:
\begin{verbatim}
<FFT_GRID nr1="NR1" nr2="NR2" nr3="NR3"/>
\end{verbatim}
defines the value of the FFT grid parameters nr1, nr2, nr3
for the charge density.
\subsubsection{Sample}
Header:
\begin{verbatim}
<?xml version="1.0"?>
<?iotk version="1.0.0test"?>
<?iotk file_version="1.0"?>
<?iotk binary="F"?>
\end{verbatim}
These are meant to be used only by iotk (actually they aren't).
First-level tags:
\begin{verbatim}
- <HEADER> (global information about fmt version)
- <CONTROL> (miscellanea of internal information)
- <STATUS> (information about the status of the CP simulation)
- <CELL> (lattice vector, unit cell, etc)
- <IONS> (type and positions of atoms in the unit cell etc)
- <SYMMETRIES> (symmetry operations)
- <ELECTRIC_FIELD> (details for an eventual applied electric field)
- <PLANE_WAVES> (basis set, cutoffs etc)
- <SPIN> (info on spin polarization)
- <MAGNETIZATION_INIT> (info about starting or constrained magnetization)
- <EXCHANGE_CORRELATION>
- <OCCUPATIONS> (occupancy of the states)
- <BRILLOUIN_ZONE> (k-points etc)
- <PARALLELISM> (specialized info for parallel runs)
- <CHARGE-DENSITY>
- <TIMESTEPS> (positions, velocities, nose' thermostats)
- <BAND_STRUCTURE_INFO> (dimensions and basic data about band structure)
- <EIGENVALUES> (eigenvalues and related data)
- <EIGENVECTORS> (eigenvectors and related data)
* Tag description
<HEADER>
<FORMAT> (name and version of the format)
<CREATOR> (name and version of the code generating the file)
</HEADER>
<CONTROL>
<PP_CHECK_FLAG> (whether file is complete and suitable for post-processing)
<LKPOINT_DIR> (whether kpt-data are written in sub-directories)
<Q_REAL_SPACE> (whether augmentation terms are used in real space)
<BETA_REAL_SPACE> (whether projectors are used in real space, not implemented)
</CONTROL>
<STATUS> (optional, written only by CP)
<STEP> (number $n of steps performed, i.e. we are at step $n)
<TIME> (total simulation time)
<TITLE> (a job descriptor)
<ekin> (kinetic energy)
<eht> (hartree energy)
<esr> (Ewald term, real-space contribution)
<eself> (self-interaction of the Gaussians)
<epseu> (pseudopotential energy, local)
<enl> (pseudopotential energy, nonlocal)
<exc> (exchange-correlation energy)
<vave> (average of the potential)
<enthal> (enthalpy: E+PV)
</STATUS>
<CELL>
<NON-PERIODIC_CELL_CORRECTION>
<BRAVAIS_LATTICE>
<LATTICE_PARAMETER>
<CELL_DIMENSIONS> (cell parameters)
<DIRECT_LATTICE_VECTORS>
<UNITS_FOR_DIRECT_LATTICE_VECTORS>
<a1>
<a2>
<a3>
<RECIPROCAL_LATTICE_VECTORS>
<UNITS_FOR_RECIPROCAL_LATTICE_VECTORS>
<b1>
<b2>
<b3>
</CELL>
<MOVING_CELL> (optional, PW only)
<CELL_FACTOR>
<IONS>
<NUMBER_OF_ATOMS>
<NUMBER_OF_SPECIES>
<UNITS_FOR_ATOMIC_MASSES>
For each $n-th species $X:
<SPECIE.$n>
<ATOM_TYPE>
<MASS>
<PSEUDO>
</SPECIE.$n>
<PSEUDO_DIR>
<UNITS_FOR_ATOMIC_POSITIONS>
For each atom $n of species $X:
<ATOM.$n SPECIES="$X" INDEX=nt tau=(x,y,z) if_pos=...>
</IONS>
<SYMMETRIES> (optional, PW only)
<NUMBER_OF_SYMMETRIES>
<NUMBER_OF_BRAVAIS_SYMMETRIES>
<INVERSION_SYMMETRY>
<DO_NOT_USE_TIME_REVERSAL>
<TIME_REVERSAL_FLAG>
<NO_TIME_REV_OPERATIONS>
<NUMBER_OF_ATOMS>
<UNITS_FOR_SYMMETRIES>
For each symmetry $n:
<SYMM.$n>
<INFO>
<ROTATION>
<FRACTIONAL_TRANSLATION>
<EQUIVALENT_IONS>
</SYMM.$n>
For the remaining bravais symmetries:
<SYMM.$n>
<INFO>
<ROTATION>
</SYMM.$n>
</SYMMETRIES>
<ELECTRIC_FIELD> (optional, sawtooth field in PW only)
<HAS_ELECTRIC_FIELD>
<HAS_DIPOLE_CORRECTION>
<FIELD_DIRECTION>
<MAXIMUM_POSITION>
<INVERSE_REGION>
<FIELD_AMPLITUDE>
</ELECTRIC_FIELD>
<PLANE_WAVES>
<UNITS_FOR_CUTOFF>
<WFC_CUTOFF>
<RHO_CUTOFF>
<MAX_NUMBER_OF_GK-VECTORS>
<GAMMA_ONLY>
<FFT_GRID>
<GVECT_NUMBER>
<SMOOTH_FFT_GRID>
<SMOOTH_GVECT_NUMBER>
<G-VECTORS_FILE> link to file "gvectors.dat"
<SMALLBOX_FFT_GRID>
</PLANE_WAVES>
<SPIN>
<LSDA>
<NON-COLINEAR_CALCULATION>
<SPIN-ORBIT_CALCULATION>
<SPINOR_DIM>
<SPIN-ORBIT_DOMAG>
</SPIN>
<MAGNETIZATION_INIT>
<CONSTRAINT_MAG>
<NUMBER_OF_SPECIES>
For each species X:
<SPECIE.$n>
<STARTING_MAGNETIZATION>
<ANGLE1>
<ANGLE2>
<CONSTRAINT_1,2,3>
</SPECIE.$n>
<FIXED_MAGNETIZATION_1,2,3>
<MAGNETIC_FIELD_1,2,3>
<TWO_FERMI_ENERGIES>
<UNITS_FOR_ENERGIES>
<FIXED_MAGNETIZATION>
<ELECTRONS_UP>
<ELECTRONS_DOWN>
<FERMI_ENERGY_UP>
<FERMI_ENERGY_DOWN>
<LAMBDA>
</MAGNETIZATION_INIT>
<EXCHANGE_CORRELATION>
<DFT>
<LDA_PLUS_U_CALCULATION>
if LDA_PLUS_U_CALCULATION
<NUMBER_OF_SPECIES>
<HUBBARD_LMAX>
<HUBBARD_L>
<HUBBARD_U>
<LDA_PLUS_U_KIND>
<U_PROJECTION_TYPE>
<HUBBARD_J>
<HUBBARD_J0>
<HUBBARD_ALPHA>
<HUBBARD_BETA>
endif
if <DFT_D2>
<SCALING_FACTOR>
<CUTOFF_RADIUS>
<C6>
<RADIUS_VDW>
if <XDM>
if <TKATCHENKO-SCHEFFLER>
<ISOLATED_SYSTEM>
</EXCHANGE_CORRELATION>
if hybrid functional
<EXACT_EXCHANGE>
<x_gamma_extrapolation>
<nqx1>
<nqx2>
<nqx3>
<exxdiv_treatment>
<yukawa>
<ecutvcut>
<exx_fraction>
<screening_parameter>
</EXACT_EXCHANGE>
endif
<OCCUPATIONS>
<SMEARING_METHOD>
if gaussian smearing
<SMEARING_TYPE>
<SMEARING_PARAMETER>
endif
<TETRAHEDRON_METHOD>
if use tetrahedra
<NUMBER_OF_TETRAHEDRA>
for each tetrahedron $t
<TETRAHEDRON.$t>
endif
<FIXED_OCCUPATIONS>
if using fixed occupations
<INFO>
<INPUT_OCC_UP>
if lsda
<INPUT_OCC_DOWN>
endif
endif
</OCCUPATIONS>
<BRILLOUIN_ZONE>
<NUMBER_OF_K-POINTS>
<UNITS_FOR_K-POINTS>
<MONKHORST_PACK_GRID>
<MONKHORST_PACK_OFFSET>
For each k-point $n:
<K-POINT.$n>
<STARTING_K-POINTS>
For each starting k-point $n:
<K-POINT_START.$n> kx, ky, kz, wk
<NORM-OF-Q>
</BRILLOUIN_ZONE>
<PARALLELISM>
<GRANULARITY_OF_K-POINTS_DISTRIBUTION>
<NUMBER_OF_PROCESSORS>
<NUMBER_OF_PROCESSORS_PER_POOL>
<NUMBER_OF_PROCESSORS_PER_IMAGE>
<NUMBER_OF_PROCESSORS_PER_TASKGROUP>
<NUMBER_OF_PROCESSORS_PER_POT>
<NUMBER_OF_PROCESSORS_PER_BAND_GROUP>
<NUMBER_OF_PROCESSORS_PER_DIAGONALIZATION>
</PARALLELISM>
<CHARGE-DENSITY>
link to file "charge_density.rho"
</CHARGE-DENSITY>
<TIMESTEPS> (optional)
For each time step $n=0,M
<STEP$n>
<ACCUMULATORS>
<IONS_POSITIONS>
<stau>
<svel>
<taui>
<cdmi>
<force>
<IONS_NOSE>
<nhpcl>
<nhpdim>
<xnhp>
<vnhp>
<ekincm>
<ELECTRONS_NOSE>
<xnhe>
<vnhe>
<CELL_PARAMETERS>
<ht>
<htve>
<gvel>
<CELL_NOSE>
<xnhh>
<vnhh>
</CELL_NOSE>
</TIMESTEPS>
<BAND_STRUCTURE_INFO>
<NUMBER_OF_BANDS>
<NUMBER_OF_K-POINTS>
<NUMBER_OF_SPIN_COMPONENTS>
<NON-COLINEAR_CALCULATION>
<NUMBER_OF_ATOMIC_WFC>
<NUMBER_OF_ELECTRONS>
<UNITS_FOR_K-POINTS>
<UNITS_FOR_ENERGIES>
<FERMI_ENERGY>
</BAND_STRUCTURE_INFO>
<EIGENVALUES>
For all kpoint $n:
<K-POINT.$n>
<K-POINT_COORDS>
<WEIGHT>
<DATAFILE> link to file "./K$n/eigenval.xml"
</K-POINT.$n>
</EIGENVALUES>
<EIGENVECTORS>
<MAX_NUMBER_OF_GK-VECTORS>
For all kpoint $n:
<K-POINT.$n>
<NUMBER_OF_GK-VECTORS>
<GK-VECTORS> link to file "./K$n/gkvectors.dat"
for all spin $s
<WFC.$s> link to file "./K$n/evc.dat"
<WFCM.$s> link to file "./K$n/evcm.dat" (optional)
containing wavefunctions at preceding step
</K-POINT.$n>
</EIGENVECTORS>
\end{verbatim}
\section{Bibliography}
Fortran books:
\begin{itemize}
\item
M. Metcalf, J. Reid, Fortran 95/2003 Explained, Oxford University Press (2004)
\item
S. J. Chapman, Fortran 95/2003 for Scientists and Engineers, McGraw Hill (2007)
\item
J. C. Adams, W. S. Brainerd, R. A. Hendrickson, R. E. Maine, J. T. Martin,
B. T. Smith, The Fortran 2003 Handbook, Springer (2009)
\item
W. S. Brainerd, Guide to Fortran 2003 Programming, Springer (2009)
\end{itemize}
On-line tutorials:
\begin{itemize}
\item Fortran:
http://www.cs.mtu.edu/\~{}shene/COURSES/cs201/NOTES/fortran.html
\item Make:
http://en.wikipedia.org/wiki/Make\_(software)
\item Configure script:
http://en.wikipedia.org/wiki/Configure\_script
\end{itemize}
(info courtesy of Goranka Bilalbegovic)
\end{document}