quantum-espresso/test-suite/epw_tdbe/benchmark.out.git.inp=epw2....

( EPW ASCII-art banner )
Lee, H., Poncé, S., Bushick, K., Hajinazar, S., Lafuente-Bartolome, J., Leveillee, J.,
Lian, C., Lihm, J., Macheda, F., Mori, H., Paudyal, H., Sio, W., Tiwari, S.,
Zacharias, M., Zhang, X., Bonini, N., Kioupakis, E., Margine, E.R., and Giustino F.,
npj Comput Mater 9, 156 (2023)
Program EPW v.5.9 starts on 2May2025 at 11:17:24
This program is part of the open-source Quantum ESPRESSO suite
for quantum simulation of materials; please cite
"P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
"P. Giannozzi et al., J. Phys.:Condens. Matter 29 465901 (2017);
"P. Giannozzi et al., J. Chem. Phys. 152 154105 (2020);
URL http://www.quantum-espresso.org",
in publications or presentations arising from this work. More details at
http://www.quantum-espresso.org/quote
Parallel version (MPI), running on 1 processor
MPI processes distributed on 1 node
121796 MiB available memory on the printing compute node when the environment starts
Reading input from epw2.in
Reading supplied temperature list.
------------------------------------------------------------------------
RESTART - RESTART - RESTART - RESTART
Restart is done without reading PWSCF save file.
Be aware that some consistency checks are therefore not done.
------------------------------------------------------------------------
--
bravais-lattice index = 0
lattice parameter (a_0) = 0.0000 a.u.
unit-cell volume = 0.0000 (a.u.)^3
number of atoms/cell = 0
number of atomic types = 0
kinetic-energy cut-off = 0.0000 Ry
charge density cut-off = 0.0000 Ry
Exchange-correlation= not set
( -1 -1 -1 -1 -1 -1 -1)
celldm(1)= 0.00000 celldm(2)= 0.00000 celldm(3)= 0.00000
celldm(4)= 0.00000 celldm(5)= 0.00000 celldm(6)= 0.00000
crystal axes: (cart. coord. in units of a_0)
a(1) = ( 0.0000 0.0000 0.0000 )
a(2) = ( 0.0000 0.0000 0.0000 )
a(3) = ( 0.0000 0.0000 0.0000 )
reciprocal axes: (cart. coord. in units 2 pi/a_0)
b(1) = ( 0.0000 0.0000 0.0000 )
b(2) = ( 0.0000 0.0000 0.0000 )
b(3) = ( 0.0000 0.0000 0.0000 )
Atoms inside the unit cell:
Cartesian axes
site n. atom mass positions (a_0 units)
No symmetry!
G cutoff = 0.0000 ( 0 G-vectors) FFT grid: ( 0, 0, 0)
number of k points= 0
cart. coord. in units 2pi/a_0
EPW : 0.00s CPU 0.00s WALL
EPW : 0.00s CPU 0.00s WALL
-------------------------------------------------------------------
Using si.ukk from disk
-------------------------------------------------------------------
HDF5 is NOT used in the current build. Exciton-phonon coupling calculations are disabled.
Symmetries of Bravais lattice: 48
Symmetries of crystal: 48
Do not need to read .epb files; read .fmt files
Band disentanglement is used: nbndsub = 8
Construct the Wigner-Seitz cell using Wannier centers and atomic positions
Number of WS vectors for electrons 339
Number of WS vectors for phonons 63
Number of WS vectors for electron-phonon 63
Maximum number of cores for efficient parallelization 126
Reading Hamiltonian, Dynamical matrix and EP vertex in Wann rep from file
Finished reading Wann rep data from file
===================================================================
Memory usage: VmHWM = 131Mb
VmPeak = 606Mb
===================================================================
Using uniform q-mesh: 15 15 15
Size of q point mesh for interpolation: 3375
Using uniform MP k-mesh: 15 15 15
Size of k point mesh for interpolation: 240
Max number of k points per pool: 240
Fermi energy coarse grid = 6.250723 eV
===================================================================
Fermi energy is read from the input file: Ef = 7.100000 eV
===================================================================
ibndmin = 5 ebndmin = 6.844 eV
ibndmax = 6 ebndmax = 7.396 eV
Number of ep-matrix elements per pool : 2880 ~= 22.50 Kb (@ 8 bytes/ DP)
A selecq.fmt file was found but re-created because selecqread == .FALSE.
We only need to compute 886 q-points
Nr. of irreducible k-points on the uniform grid: 120
Finish mapping k+sign*q onto the fine irreducible k-mesh and writing .ikmap file
Nr irreducible k-points within the Fermi shell = 14 out of 120
Fermi level (eV) = 0.710000000000000D+01
DOS(states/spin/eV/Unit Cell) = -0.327190751855028D-33
Electron smearing (eV) = 0.500000000000000D-02
Fermi window (eV) = 0.300000000000000D+00
Finish writing .ephmat files
===================================================================
Memory usage: VmHWM = 139Mb
VmPeak = 632Mb
===================================================================
Unfolding on the coarse grid
elphon_wrap : 0.00s CPU 0.32s WALL ( 1 calls)
INITIALIZATION:
Electron-Phonon interpolation
ep-interp : 110.70s CPU 127.75s WALL ( 886 calls)
wigner_seitz : 0.30s CPU 0.31s WALL ( 1 calls)
DynW2B : 0.02s CPU 0.03s WALL ( 886 calls)
HamW2B : 4.32s CPU 5.01s WALL ( 213000 calls)
ephW2Bp : 100.34s CPU 116.75s WALL ( 886 calls)
ephW2B : 0.30s CPU 0.30s WALL ( 3499 calls)
vmeW2B : 2.85s CPU 2.92s WALL ( 6998 calls)
Total program execution
EPW : 1m51.27s CPU 2m 9.34s WALL
% Copyright (C) 2016-2023 EPW-Collaboration
===============================================================================
Please consider citing the following papers.
% Paper describing the method on which EPW relies
F. Giustino and M. L. Cohen and S. G. Louie, Phys. Rev. B 76, 165108 (2007)
% Papers describing the EPW software
H. Lee et al., npj Comput. Mater. 9, 156 (2023)
S. Poncé, E.R. Margine, C. Verdi and F. Giustino, Comput. Phys. Commun. 209, 116 (2016)
J. Noffsinger et al., Comput. Phys. Commun. 181, 2140 (2010)
For your convenience, this information is also reported in the
functionality-dependent EPW.bib file.
===============================================================================