(and energies of charged cells as well) in line with those calculated by PW.
For the time being, the shift is just printed (see line "Delta V(G=0)=" )
in subroutine formf but not used. Some cleanup here and there.
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@4675 c92efa57-630b-4861-b058-cf58834340f0
lambda matrices on a square mesh PxP of processors.
The number of processors used in the run need not be a perfect
square PxP; when distributing lambda, the code tries to use an optimal
(for performance) square PxP less than or equal to the number of
procs used.
- the size (Np=PxP) of the processor mesh to be used in distributing lambda
and ortho, can be suggested using the namelist keyword
ortho_para = Np
in the electrons namelist
- the distribution of lambda matrices is required to save
memory in runs with a high number of bands.
In a system with 2800 bands, the memory saved is about 200 Mbyte
per proc/core if a sufficient number of procs (some hundreds)
is used.
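The mesh selection above can be sketched as follows (a minimal Python illustration, assuming the optimal mesh is simply the largest perfect square not exceeding the processor count; `ortho_mesh_side` is a hypothetical name, and the actual code may apply further performance heuristics):

```python
import math

def ortho_mesh_side(nprocs):
    """Side P of the largest square mesh with P*P <= nprocs."""
    return math.isqrt(nprocs)

# With 150 procs, lambda would be distributed over a 12x12 = 144-proc mesh,
# and the remaining 6 procs take no part in the lambda/ortho distribution.
```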
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@3643 c92efa57-630b-4861-b058-cf58834340f0
- correct an unquoted string (iosys) in PW/input.f90:483
(this was the cause of tonight's compilation failure)
- correct a few incorrect format strings
- make more use of the constants module and thus
provide more consistent units. NOTE: this causes some
numerical changes in the outputs, as in some places
rather low-precision and inconsistent numbers were
used for unit conversion.
- convert all(?) single precision constants to double
using the attached little perl program.
exceptions: efermi.f90 (as it is supposed to be rewritten
anyway), plotbands.f90 (it uses single precision everywhere,
which may result in saving a significant amount of memory,
so I converted the two double-precision constants to single).
Unused routine 'set_fft_grid' removed
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@3602 c92efa57-630b-4861-b058-cf58834340f0
bad only for l > 0 terms in q functions: same sign problem as in NCPP).
There are still minor discrepancies between the stress in PW and CP/FPMD,
also in the norm-conserving case. The discrepancies are small but not
so small to be negligible. More investigation is needed...
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@3008 c92efa57-630b-4861-b058-cf58834340f0
was carrying a wrong sign. Now checked and working with NCPP,
to be further checked with USPP. The stress was wrong when using
PP with nonlocality P or greater.
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@2987 c92efa57-630b-4861-b058-cf58834340f0
first step needed to parallelize NEB over images for CP as well.
Next we need to add the right communicator to all communications.
- subroutine reduce replaced everywhere by mp_sum
- mp_sum for arrays with 4 dims added in mp.f90
- workaround for the xlf compiler: it has problems compiling files with
initialization of a large array on the declaration line;
see Modules/input_parameters.f90 , the initialization was moved to
Modules/read_cards.f90 .
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@2946 c92efa57-630b-4861-b058-cf58834340f0
- the logic of the combined index for US PP is now the same everywhere
(i.e. in PW, CP, and in the pseudopotential format):
    do iv=1,N
       do jv=iv,N
          ijv = jv*(jv-1)/2 + iv
(in PW the indices are called nb, mb). In order to get ijv from (iv,jv):
    if (iv > jv) then
       ijv = iv*(iv-1)/2 + jv
    else
       ijv = jv*(jv-1)/2 + iv
    end if
- the above change also fixes a serious bug affecting Vanderbilt US PP
in UPF format (only half of the qfcoef array was present, and not the
right half)
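The packed-index rule above can be cross-checked with a small Python transcription of the Fortran loops (the function name `ijv_index` is only for illustration):

```python
def ijv_index(iv, jv):
    """Packed index for the symmetric pair (iv, jv), 1-based as in the Fortran code."""
    if iv > jv:
        iv, jv = jv, iv          # the formula assumes iv <= jv
    return jv * (jv - 1) // 2 + iv

# The loops  do iv=1,N / do jv=iv,N  visit each packed index exactly once:
N = 4
indices = sorted(ijv_index(iv, jv)
                 for iv in range(1, N + 1) for jv in range(iv, N + 1))
# indices runs over 1 .. N*(N+1)/2 with no gaps or repeats
```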
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@2916 c92efa57-630b-4861-b058-cf58834340f0
PLEASE NOTE: the logic for the packed index is as follows everywhere
    do i=1,N
       do j=1,i
          ij = (i-1)*i/2 + j
This is equivalent to
    ij = 0
    do i=1,N
       do j=1,i
          ij = ij + 1
This is not (yet) the same as used in PW, though.
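The stated equivalence of the two loop forms can be verified with a quick Python transcription (a minimal check; `packed_indices` is only an illustrative name):

```python
def packed_indices(N):
    """Return (formula, counter) packed indices for the two loop forms above."""
    formula, counter = [], []
    ij = 0
    for i in range(1, N + 1):
        for j in range(1, i + 1):
            formula.append((i - 1) * i // 2 + j)   # closed-form index
            ij += 1                                # running counter
            counter.append(ij)
    return formula, counter

# For any N the two lists coincide and run over 1 .. N*(N+1)/2.
```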
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@2871 c92efa57-630b-4861-b058-cf58834340f0
of beta functions is smaller than the fixed maximum nbrx
- routines for Herman-Skillman integration moved together with the other
integration routines into flib/ . We should one day decide which one of
these routines should be used: they all do basically the same thing
- routine reading ultrasoft PP in the old Vanderbilt format moved to
Modules/. More USPP cleanup coming soon.
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@2868 c92efa57-630b-4861-b058-cf58834340f0
archaic USPP format with Herman-Skillman grid, removed. The integration
is now performed using the same logic (but not yet the same routine) as
the other cases.
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@2536 c92efa57-630b-4861-b058-cf58834340f0
conversion to real => DBLE
(including real part of a complex number)
conversion to complex => CMPLX
complex conjugate => CONJG
imaginary part => AIMAG
All functions are uppercase.
CMPLX is preprocessed by f_defs.h and performs an explicit cast:
#define CMPLX(a,b) cmplx(a,b,kind=DP)
This implies that 1) f_defs.h must be included whenever CMPLX is present,
2) CMPLX should stay on a single line, 3) DP must be defined.
All occurrences of real, float, dreal, dfloat, dconjg, dimag, dcmplx
removed - please do not reintroduce any of them.
Tested only with ifc7 and g95 - beware unintended side effects
Maybe not the best solution (explicit casts everywhere would be better)
but it can be easily changed with a script if the need arises.
The following code might be used to test for possible trouble:
program test_intrinsic
  implicit none
  integer, parameter :: dp = selected_real_kind(14,200)
  real (kind=dp)    :: a = 0.123456789012345_dp
  real (kind=dp)    :: b = 0.987654321098765_dp
  complex (kind=dp) :: c = ( 0.123456789012345_dp, 0.987654321098765_dp )
  print *, ' A = ', a
  print *, ' DBLE(A)= ', DBLE(a)
  print *, ' C = ', c
  print *, 'CONJG(C)= ', CONJG(c)
  print *, 'DBLE(c),AIMAG(C) = ', DBLE(c), AIMAG(c)
  print *, 'CMPLX(A,B,kind=dp)= ', CMPLX( a, b, kind=dp)
end program test_intrinsic
Note that CMPLX and REAL without a cast yield single precision numbers on
ifc7 and g95 !!!
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@2133 c92efa57-630b-4861-b058-cf58834340f0
dion, beta, bec everywhere.
- subroutines formfn, compute_beta, nlsm1, nlsm2, etc. are now common
between FPMD and CPV; a lot of cleanup!
- Changes in stdout: relevant physical quantities (positions, velocities, and cell)
are now printed with the same format as the corresponding input card,
like in PW, as was suggested by SdG.
- example23 updated to reflect the new input namelist "wannier"
- Subroutine init_run is now used in FPMD too.
- WARNING: in the stress computed with CP, for a pseudo with core corrections,
a contribution is missing! Not yet fixed; I need to talk with PG about the
box stuff.
- WARNING: the example reference outputs are not updated; I'm on the IBM SP, and
I prefer to update them from a Linux machine.
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@2110 c92efa57-630b-4861-b058-cf58834340f0
by both CP and FPMD
- Now FPMD and CP use the same random wave function initialization,
which is also independent of the number of processors;
very useful for debugging.
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@1836 c92efa57-630b-4861-b058-cf58834340f0
- Standard output hopefully made clearer and common between CP/FPMD
- common CP/FPMD initialization
- fix for nat checking in cploop
git-svn-id: http://qeforge.qe-forge.org/svn/q-e/trunk/espresso@1775 c92efa57-630b-4861-b058-cf58834340f0