Re: Timeouts/DAOS rendered useless when running IOR with SX/default object class

Steffen Christgau

Hi again,

On 3/26/21 5:14 PM, Steffen Christgau wrote:
On 3/26/21 4:49 PM, Oganezov, Alexander A wrote:
Could you enable OFI level logs by setting FI_LOG_LEVEL=warn on the client side and provide stdout/stderr output from runs that result in mercury errors/timeouts?
Thanks for that input, we'll try to reproduce the issue with those settings and provide them ASAP.
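For reference, here is a minimal sketch of how we enabled the OFI logging for the runs below (assuming the exported variable is forwarded to all MPI ranks by the launcher; the mpirun/mpiexec options in the comments are only alternatives for launchers that do not forward plain exports):

    # enable libfabric warning-level logging on the DAOS client side
    export FI_LOG_LEVEL=warn
    # if the MPI launcher does not forward exported environment variables:
    #   Open MPI: mpirun -x FI_LOG_LEVEL=warn ...
    #   MPICH:    mpiexec -genv FI_LOG_LEVEL warn ...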
Here is the output of a failed attempt to run IOR. It now crashed for 48 processes on a single client. For smaller process counts, IOR succeeds with the same messages/warnings from libfabric.

$ export FI_LOG_LEVEL=warn
$ mpiexec -n 48 --map-by socket --bind-to core /home/bemschri/opt/local/ior/github/bin/ior -F -r -w -t 1m -b 1g -i 3 -o /ior_file -a DFS --dfs.pool=... --dfs.cont=... --dfs.destroy --dfs.group=daos_server --dfs.oclass=SX
libfabric:607767:core:core:fi_getinfo_():1019<warn> fi_getinfo: provider usnic returned -61 (No data available)
libfabric:607767:core:core:fi_getinfo_():1019<warn> fi_getinfo: provider ofi_rxm returned -61 (No data available)
libfabric:607767:core:core:fi_getinfo_():1019<warn> fi_getinfo: provider ofi_rxd returned -61 (No data available)
libfabric:607767:ofi_mrail:fabric:mrail_get_core_info():289<warn> OFI_MRAIL_ADDR_STRC env variable not set!
[repeats for each MPI process]

libfabric:607767:core:core:ofi_ns_add_local_name():370<warn> Cannot add local name - name server uninitialized
[repeats again]
IOR-3.4.0+dev: MPI Coordinated Test of Parallel I/O
Began : Mon Mar 29 10:47:36 2021
Command line : /home/bemschri/opt/local/ior/github/bin/ior -F -r -w -t 1m -b 1g -i 3 -o /ior_file -a DFS --dfs.pool=... --dfs.cont=... --dfs.destroy --dfs.group=daos_server --dfs.oclass=SX
Machine : Linux bcn1031
TestID : 0
StartTime : Mon Mar 29 10:47:36 2021
Path : /ior_file.00000000
FS : 4607.9 TiB Used FS: 100.0% Inodes: 192512.0 Mi Used Inodes: 38.3%
Options:
api : DFS
apiVersion : DAOS
test filename : /ior_file
access : file-per-process
type : independent
segments : 1
ordering in a file : sequential
ordering inter file : no tasks offsets
nodes : 1
tasks : 48
clients per node : 48
repetitions : 3
xfersize : 1 MiB
blocksize : 1 GiB
aggregate filesize : 48 GiB
Results:

access bw(MiB/s) IOPS Latency(s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---- ---------- ---------- --------- -------- -------- -------- -------- ----
^C
And in the DAOS client log we have the following:

03/29-10:47:36.48 bcn1031 DAOS[607790/607790] crt INFO src/cart/crt_init.c:151 data_init() Disabling MR CACHE (FI_MR_CACHE_COUNT=0)
03/29-10:47:36.63 bcn1031 DAOS[607790/607790] mem WARN src/gurt/hash.c:763 d_hash_table_create_inplace() The d_hash_table_ops_t->hop_rec_hash() callback is not provided! Therefore the whole hash table locking will be used for backward compatibility.
03/29-10:48:38.41 bcn1031 DAOS[607798/607798] rpc ERR src/cart/crt_context.c:806 crt_context_timeout_check(0x1311b60) [opc=0x4020000 (DAOS) rpcid=0x7f90350400000033 rank:tag=14:7] ctx_id 0, (status: 0x38) timed out (60 seconds), target (14:7)
03/29-10:48:38.41 bcn1031 DAOS[607798/607798] rpc ERR src/cart/crt_context.c:755 crt_req_timeout_hdlr(0x1311b60) [opc=0x4020000 (DAOS) rpcid=0x7f90350400000033 rank:tag=14:7] aborting to group daos_server, rank 14, tgt_uri ofi+sockets://10.246.101.33:20007
03/29-10:48:38.41 bcn1031 DAOS[607798/607798] hg ERR src/cart/crt_hg.c:1050 crt_hg_req_send_cb(0x1311b60) [opc=0x4020000 (DAOS) rpcid=0x7f90350400000033 rank:tag=14:7] RPC failed; rc: DER_TIMEDOUT(-1011): 'Time out'
03/29-10:48:38.41 bcn1031 DAOS[607798/607798] object ERR src/object/cli_shard.c:631 dc_rw_cb() RPC 0 failed, DER_TIMEDOUT(-1011): 'Time out'
Regards, Steffen
