Re: Increasing FIO performance


Lombardi, Johann
 

Hi Eeheet,

 

You can use the fio DAOS engine (https://daos-stack.github.io/devbranch/admin/performance_tuning/#fio).
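For reference, a sketch of what invoking fio with the DAOS DFS engine can look like (the pool and container labels here are placeholders you would replace with your own; see the linked performance tuning page for the exact options supported by your fio build):

```shell
# Sketch: run fio directly against DAOS via the dfs ioengine,
# bypassing dfuse/FUSE entirely.
# <pool_label> and <cont_label> are placeholders for your pool
# and POSIX container.
fio --name=random-write \
    --ioengine=dfs \
    --pool=<pool_label> \
    --cont=<cont_label> \
    --rw=randwrite --bs=4k --size=128M \
    --numjobs=8 --iodepth=16 \
    --runtime=60 --time_based
```

Because this path talks to DAOS directly, it avoids the FUSE write serialization mentioned below.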

 

If you want to stick with dfuse, then I would advise using the posixaio engine instead of pvsync. pvsync is synchronous/blocking and cannot submit more than one I/O at a time (in the “IO depths” section of your fio output, you have “1 = 100%”, so your --iodepth=16 setting has no effect). Please also note that the FUSE kernel module takes a per-file mutex on every write (but not on reads), so all writes to a given file are effectively serialized. I haven’t checked whether this has been improved in recent Linux kernels, though.
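Concretely, that would mean changing only the ioengine in your command below (this is a sketch of the suggested change, not something I have benchmarked on your setup):

```shell
# Sketch: same workload as before, but with an asynchronous engine
# so that --iodepth=16 can actually keep multiple I/Os in flight.
fio --name=random-write \
    --ioengine=posixaio \
    --rw=randwrite --bs=4k --size=128M --nrfiles=4 \
    --directory=/tmp/daos_test1 \
    --numjobs=8 --iodepth=16 \
    --runtime=60 --time_based \
    --direct=1 --buffered=0 \
    --randrepeat=0 --norandommap --refill_buffers
```

Even with posixaio, writes to the same file still serialize on the FUSE mutex, so spreading the load across files (as --nrfiles=4 already does) helps.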

 

Cheers,

Johann

 

From: <daos@daos.groups.io> on behalf of "Hayer, Eeheet" <eeheet.hayer@...>
Reply-To: "daos@daos.groups.io" <daos@daos.groups.io>
Date: Tuesday 27 July 2021 at 17:57
To: "daos@daos.groups.io" <daos@daos.groups.io>
Subject: [daos] Increasing FIO performance

 

Hi,

 

I’m an intern on the HPC team, and I’m trying to improve FIO performance on my server (wolf-169) and admin/client (wolf-57). I’ve attached a screencap of the numbers I’m currently getting; I’m told they should be much higher.

 

My server has 2 SSDs and some non-Optane pmem.

 

The command I’m running:

$ /usr/bin/fio --name=random-write --ioengine=pvsync --rw=randwrite --bs=4k --size=128M --nrfiles=4 --directory=/tmp/daos_test1 --numjobs=8 --iodepth=16 --runtime=60 --time_based --direct=1 --buffered=0 --randrepeat=0 --norandommap --refill_buffers

 

-Eeheet

