From 5E-3 to 1E-2 absolute. This is due to the recent change: one no longer
divides by epsilon_HEG, so the results are 12 times larger, hence the larger
absolute error.
Relaxation may terminate in a different number of steps, hence we compare only
the first few (3) steps to check for consistency,
then we compare the final result (structure/cell).
This solves the issues with PGI17 from the test-farm.
Courtesy of M. Schlipf
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@13891 c92efa57-630b-4861-b058-cf58834340f0
- el-ph coefficients of degenerate phonons
- very small eigenvalues giving large relative errors on small absolute errors
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@13838 c92efa57-630b-4861-b058-cf58834340f0
after the last round of changes, leading to discrepancies with old results
and, in some cases, between serial and parallel execution. USPP EXX test
with k-points re-added: some discrepancies remain, but they are very small
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@13835 c92efa57-630b-4861-b058-cf58834340f0
- "make compare" compares phonons as well
- add possibility to use pools for pw.x as well
- commented out annoying printout of execution line
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@13726 c92efa57-630b-4861-b058-cf58834340f0
- Update of some EPW reference files because of the removal of the electron DOS
- Small tweaking of some PW tolerances
- Consider only the first 5 bands for the eig test-suite
- Addition of some pw folders to jobconfig (useful when doing custom tests)
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@13682 c92efa57-630b-4861-b058-cf58834340f0
In real space, processors are organized in a 2D pattern.
Each processor owns data from a subset of Z-planes and a subset of Y-planes.
In reciprocal space, each processor owns the Z-columns that belong to a subset of
X-values. This allows splitting the processors into two sets for communication
in the YZ and XY planes.
Alternatively, if the situation allows for it, a task-group parallelization is used
(with ntg=nyfft), where complete XY planes of ntg wavefunctions are collected and Fourier
transformed in G space by different task groups. This is preferable to the Z-proc + Y-proc
parallelization when task groups can be used, because fewer, larger blocks of
data are transferred. Hence three types of FFT are implemented:
!
!! ... isgn = +-1 : parallel 3d fft for rho and for the potential
!
!! ... isgn = +-2 : parallel 3d fft for wavefunctions
!
!! ... isgn = +-3 : parallel 3d fft for wavefunctions with task group
!
!! ... isgn = + : G-space to R-space, output = \sum_G f(G)exp(+iG*R)
!! ... fft along z using pencils (cft_1z)
!! ... transpose across nodes (fft_scatter_yz)
!! ... fft along y using pencils (cft_1y)
!! ... transpose across nodes (fft_scatter_xy)
!! ... fft along x using pencils (cft_1x)
!
!! ... isgn = - : R-space to G-space, output = \int_R f(R)exp(-iG*R)/Omega
!! ... fft along x using pencils (cft_1x)
!! ... transpose across nodes (fft_scatter_xy)
!! ... fft along y using pencils (cft_1y)
!! ... transpose across nodes (fft_scatter_yz)
!! ... fft along z using pencils (cft_1z)
!
! If task_group_fft_is_active, the FFT acts on a number of wfcs equal to
! dfft%nproc2, the number of Y-sections into which a plane is divided.
! Data are reshuffled by the fft_scatter_tg routine so that each of the
! dfft%nproc2 subgroups (made of dfft%nproc3 procs) deals with whole planes
! of a single wavefunction.
!
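The isgn = +/- pipelines listed above can be simulated serially with per-axis 1D
FFTs; in a single-process sketch the node-to-node transposes (fft_scatter_yz,
fft_scatter_xy) become no-ops. This is an illustrative numpy sketch, not the
FFTXlib code; the 1/Omega factor of the forward transform is approximated here
by the discrete 1/N normalization.

```python
import numpy as np

def invfft_by_pencils(fg):
    """isgn = + : G-space to R-space, output = sum_G f(G) exp(+iG*R).
    Serial sketch of the pencil pipeline; each step is a 1D FFT along
    one axis, local to a processor once the data have been transposed."""
    fr = np.fft.ifft(fg, axis=2, norm='forward')  # fft along z (cft_1z)
    # ... transpose across nodes (fft_scatter_yz) would happen here ...
    fr = np.fft.ifft(fr, axis=1, norm='forward')  # fft along y (cft_1y)
    # ... transpose across nodes (fft_scatter_xy) would happen here ...
    fr = np.fft.ifft(fr, axis=0, norm='forward')  # fft along x (cft_1x)
    return fr

def fwfft_by_pencils(fr):
    """isgn = - : R-space to G-space, same pipeline in reverse order;
    norm='forward' applies the 1/N scaling (discrete analogue of 1/Omega)."""
    fg = np.fft.fft(fr, axis=0, norm='forward')   # fft along x (cft_1x)
    # ... transpose across nodes (fft_scatter_xy) would happen here ...
    fg = np.fft.fft(fg, axis=1, norm='forward')   # fft along y (cft_1y)
    # ... transpose across nodes (fft_scatter_yz) would happen here ...
    fg = np.fft.fft(fg, axis=2, norm='forward')   # fft along z (cft_1z)
    return fg
```

Composing the three 1D transforms reproduces the full 3D FFT, which is why the
decomposition only changes who owns which pencils, not the result.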
The fft_type module was heavily modified: a number of variables were renamed with more
intuitive names (at least to me), and several new variables were introduced for the
Y-proc parallelization.
The Task_group module was made void; task-group management is now reduced to the logical
component fft_desc%have_task_groups of the fft_type_descriptor type variable fft_desc.
In terms of interfaces, the 'easy' calling sequences are
SUBROUTINE invfft/fwfft( grid_type, f, dfft, howmany )
!! where:
!!
!! **grid_type = 'Dense'** :
!! inverse/direct fourier transform of potentials and charge density f
!! on the dense grid (dfftp). On output, f is overwritten
!!
!! **grid_type = 'Smooth'** :
!! inverse/direct fourier transform of potentials and charge density f
!! on the smooth grid (dffts). On output, f is overwritten
!!
!! **grid_type = 'Wave'** :
!! inverse/direct fourier transform of wave functions f
!! on the smooth grid (dffts). On output, f is overwritten
!!
!! **grid_type = 'tgWave'** :
!! inverse/direct fourier transform of wave functions f with task group
!! on the smooth grid (dffts). On output, f is overwritten
!!
!! **grid_type = 'Custom'** :
!! inverse/direct fourier transform of potentials and charge density f
!! on a custom grid (dfft_exx). On output, f is overwritten
!!
!! **grid_type = 'CustomWave'** :
!! inverse/direct fourier transform of wave functions f
!! on a custom grid (dfft_exx). On output, f is overwritten
!!
!! **dfft = FFT descriptor**, IMPORTANT NOTICE: grid is specified only by dfft.
!! No check is performed on the correspondence between dfft and grid_type.
!! grid_type is now used only to distinguish cases 'Wave' / 'CustomWave'
!! from all other cases
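The bookkeeping described above can be summarized in a small sketch (illustrative
Python, not the Fortran source): the descriptor alone selects the grid, and
grid_type only flags the wavefunction code path.

```python
# Which descriptor the caller is expected to pass for each grid_type, per the
# documentation above. Note dfft itself is what actually selects the grid;
# no consistency check between dfft and grid_type is performed.
EXPECTED_DESCRIPTOR = {
    'Dense':      'dfftp',     # dense grid: potentials / charge density
    'Smooth':     'dffts',     # smooth grid: potentials / charge density
    'Wave':       'dffts',     # smooth grid: wavefunctions
    'tgWave':     'dffts',     # smooth grid: wavefunctions, task groups
    'Custom':     'dfft_exx',  # custom grid: potentials / charge density
    'CustomWave': 'dfft_exx',  # custom grid: wavefunctions
}

def is_wave_case(grid_type):
    """grid_type is now used only to distinguish the cases 'Wave' /
    'CustomWave' from all other cases."""
    return grid_type in ('Wave', 'CustomWave')
```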
Many more files modified.
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@13676 c92efa57-630b-4861-b058-cf58834340f0
truncated to a small number of significant digits. This is completely
irrelevant in terms of results but still sufficient to yield very small but
visible discrepancies with respect to other XC implementations. I have
converted to full precision all such constants I have spotted. There might
be more cases like these.
PW tests updated: a number of small changes, fixes and corrections
affecting the numerical results had accumulated.
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@13592 c92efa57-630b-4861-b058-cf58834340f0