quantum-espresso/Doc/BUGS

* init_paw_1 fails for atom without non-local part
* electron-phonon calculation was not working if called directly
after a phonon calculation
* PWCOND + FFTW + parallel execution = not good
* cell minimization with steepest descent was not working (CP/FPMD)
* various Alpha and SUN compilation problems
Fixed in version 2.1:
* various T3E compilation problems
* cpmd2upf was yielding incorrect DFT if converting BLYP PPs
* some variables not properly written and read in restart file
* The value of gamma_only was not correctly set when restarting or
reading from file with option __NEW_PUNCH enabled
* Incorrect calculation of eloc in pw2casino
* Two serious bugs in the local-TF screening:
possible occurrence of division by zero (present since v1.2),
wrong mixing for spin-polarized systems
* cpmd2upf failed with some files due to bad check
* Intel compiler v.8: wavefunction files four times bigger than needed
* compilation problems with some versions of the SGI compiler
* non-collinear code was not working with insulators and nbnd > nelec/2
* multiple writes to file in parallel execution when calculating
electron-phonon coefficients
* various bugs in LBFGS
* NEB + LDA+U = crash
* compilation problems with __NEW_PUNCH
* planar average crashed if used with a cubic system
* Gamma-only phonon code not working for Raman calculations
in some cases
* yet another bug in phonon and k-point parallelization when
reading namelist (phq_readin)
* options startingwfc and startingpot were ignored if restarting
from a previous calculation
* pw2casino interface didn't work properly in spin-polarized case
and didn't use variable "outdir"
* minor bug in pwtools/pwo2xsf.sh
* serious bug in the path interpolator
* phonon, post_processing, various other auxiliary codes were
not working with k-point parallelization (pools) due to
double call to init_pool
Fixed in version 2.0:
* wrong results when running Berry-phase calculations in parallel
execution: parallelization was not implemented, but no warning was issued
* variable-cell code was subject to overflow and floating-point errors
* phonon + nosym=.true. was not properly done
* out-of-bound error in Berry Phase calculation
* out-of-bound errors in phonon if 4-dimensional irreps were present
(also d3.x was not working properly in this case)
* Berry-phase calculation had problems in low-symmetry cases
* phonon with k-point parallelization (pools) was yielding wrong
results in some cases (since v. 1.2 included)
* upftools/cpmd2upf.f90: wrong conversion due to Rydberg-Hartree mess
* PW/input.f90: lattice parameter a converted to wrong units if input
is given as a,b,c,cos(ab),cos(ac),cos(bc) instead of celldm(:)
* Wrong coordinates written if atomic_positions='crystal'
(thanks to Francois Willaime)
Fixed in version 1.3.0:
* PH/elphon.f90 : el-ph calculation in the US case did not work
correctly in v.1.2.0 (it was not implemented in previous versions).
A US term in the calculation of deltaV * psi_v was missing.
Fixed by M. Wierzbowska and SdG
* various problems caused by too short file names fixed:
file and directory names up to 80 characters are allowed
(thanks to Serguei Patchkovskii and others)
* LAPACK routines DSYTRF and DSYTRI require some character arguments
(such as 'U', 'L'). While most LAPACK implementations accept both
lowercase and uppercase arguments, the standard allows uppercase only.
Lowercase arguments caused various anomalies in self-consistency.
* Incorrect Make.pc_abs fixed
* PGI compiler v.3.3-2 on Linux: PP/chdens.x coredump fixed
* various T3E glitches in v.1.2.0 fixed
* PP/work_functions.f90 : STM maps did not work in version 1.2.0
(undefined variable lscf was used, call to sum_band no longer needed)
* PP/projwave.f90: symmetrization of the projected DOS was incorrectly
performed using d1, d2, or d3 instead of their transposes.
(affects all previous versions)
* PW/new_ns.f90: symmetrization of the occupation matrix ns needed for LDA+U
calculations incorrectly used d2 matrices instead of their transposes.
Thanks to Lixin He for finding the problem and the solution.
(affects all previous versions)
Fixed in version 1.2.0 (f90):
* dynmat.f90: out-of-bound error fixed
* pplib/chdens.F90, pplib/projwave.F90 : compilation problems
for alpha (found by Giovanni Cantele)
* postprocessing routines: problems with unallocated pointers
passed to subroutine plot_io fixed (found by various people)
* postprocessing with ibrav=0 was not working properly
* rather serious bug in cinitcgg (used by conjugate-gradient
diagonalization) could produce mysterious crashes. The bug
appeared in version 1.1.1.
* pplib/dos.f90 was not plotting the expected energy window
* pplib/chdens.F90, pplib/average.F90 : wrong call to setv
could cause an out-of-bound error
Fixed in version 1.1.2 (f90):
* a check on the number of command-line arguments in parallel
execution was added - the Intel compiler crashes when attempting to
read a nonexistent argument
* tmp_dir was incorrectly truncated to 35 characters in
parallel execution
* variable "kfac" was not deallocated in stres_knl. A crash in
variable-cell MD could result.
* an inconsistent check between the calling program (gen_us_dj)
and the routine calculating j_l(r) (sph_bes) could result in
error stop when calculating stress or dielectric properties
* errors at file close in pw.x and phonon.x in some cases
* tetrahedra now work in parallel execution
(ltetra is now distributed in bcast_input)
* fixed some problems in automatic dependencies (Giovanni Cantele)
Fixed in version 1.1.1 (f90) and 1.0.3 (f77):
* LSDA calculations need either gaussian broadening or tetrahedra
but no input check was performed
* restarting from a run interrupted at the end of self-consistency
yielded wrong forces
* projwave.F (projection over atomic functions) was not working
with atoms having semicore states (found by Seungwu Han)
* stm.F : option stm_wfc_matching was not working properly
if symmetry was present (no symmetrization was performed)
* dynmat.x : displacement patterns in "molden" format were
incorrectly divided by the square root of atomic masses
* d3: misc. problems in parallel execution fixed
Fixed in version 1.1.0 (f90) and 1.0.2 (f77):
* an inconsistency in the indexing of pseudopotential arrays could
yield bad dielectric tensors and effective charges if atoms were
not listed as first all atoms of type 1, then all atoms of type 2,
and so on (found by Nathalie Vast)
* phonon with ibrav=0 was not working (info on symm_type was lost:
found by Michele Lazzeri)
* the generation of the two random matrices needed in the calculation
of third order derivatives was incorrect because the random seed
was not reset. This produced crazy results for q<>0 calculations.
* the check on existence of tmp_dir did not work properly on
Compaq (formerly Dec) alphas (thanks to Guido Roma and Alberto
Debernardi).
* a system containing local pseudopotentials only (e.g. H)
produced a segmentation fault error
* getenv was incorrectly called on PC's using Absoft compiler:
the default pseudopotential directory was incorrect
* out-of-bound bug in pplib/dosg.f fixed. It could have caused
mysterious crashes or weird results in DOS calculations using
gaussian broadening. Thanks to Gun-Do Lee for fixing the bug.
* a missing initialization to zero in gen_us_dy.F could have
yielded a wrong stress in some cases
* phonons in an insulator did not work if more bands (nbnd)
were specified than the filled valence bands alone
* electron-phonon calculation was incorrect if nonlocal PPs
were used (that is, almost always)
* The real-space term in the third-order derivative of the Ewald energy
was missing (not exactly a bug, but it introduced a small error
that could be non-negligible in some cases)
* bad call in dynmat.f corrected
* compilation problems for PC clusters fixed (thanks to Nicola Marzari)
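The random-seed issue in the third-order derivative calculation above can
be sketched as follows (a minimal Python illustration, not the actual
Fortran code): unless the generator is explicitly reseeded, repeated runs
do not reproduce the same "random" matrices, whereas a fixed seed makes
them identical.

```python
import random

def random_matrix(n, seed):
    """Generate an n x n matrix of uniform random entries from a
    freshly seeded generator (illustrative sketch only)."""
    rng = random.Random(seed)
    return [[rng.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(n)]

# Resetting the seed makes the two matrices reproducible
a = random_matrix(3, seed=1234)
b = random_matrix(3, seed=1234)
assert a == b
```

Without the explicit seed, the two matrices would differ from run to run,
which is the kind of inconsistency that produced the wrong q<>0 results.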
Fixed in version 1.0.1:
* recovering from a previous run in pw.x did not work on PC's
* recovering from a previous run in pw.x did not work for stress
calculation
* poolrecover did not compile on some machines (thanks to Eric Wu)
* PCs with the Absoft compiler (and possibly other cases as well):
bad type conversions for REAL and CMPLX resulted in poor
convergence in some test cases. DCMPLX and DREAL are used instead.
* Asymptotic high- and low-density formulae used in the PW91 and PBE
unpolarized functionals gave a small but non-negligible error,
leading to poor convergence of structural optimization
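The precision loss behind the REAL/CMPLX conversion bug above can be
demonstrated with a short sketch (Python and the struct module rather
than Fortran; the compiler behavior is only paraphrased): rounding a
double-precision value to IEEE single precision keeps only about seven
significant digits, which is enough to spoil tight self-consistency
convergence.

```python
import struct

def to_single(x):
    """Round a double-precision float to IEEE single precision,
    mimicking an unwanted REAL()/CMPLX() down-conversion
    (illustrative sketch only)."""
    return struct.unpack('f', struct.pack('f', x))[0]

third = 1.0 / 3.0
error = abs(third - to_single(third))
# The rounding error is many orders of magnitude above
# double-precision accuracy
assert 0.0 < error < 1e-7
```

DREAL and DCMPLX avoid this by keeping the result in double precision.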