
FW: [External] [gpfsug-discuss] IO500 SC20 Call for Submission

Lombardi, Johann
 

FYI. If you need any help, feel free to ping us 😊

 


Subject: IO500 SC20 Call for Submission

 

Call for IO500 Submission

Deadline: 30 October 2020 AoE

Stabilization period: 1st October -- 9th October 2020 AoE

The IO500 is now accepting and encouraging submissions for the upcoming 7th IO500 list, to be revealed at the IO500 Virtual BoF during SC20. Once again, we are also accepting submissions to the 10 Node I/O Challenge to encourage the submission of small-scale results. The new ranked lists will be announced at our Virtual SC20 BoF. We hope to see you, and your results, there.

New for the upcoming submission procedure is the introduction of a stabilization period that aims to harden the benchmark; the final benchmark will be released at the end of this period. During the stabilization period we encourage the community to test the proper execution of the benchmark and provide us with feedback. We will apply bug fixes to the code base and expect that results obtained during this period will remain valid as full submissions. We also continue to maintain a separate list for the Student Cluster Competition, since the IO500 is used during that competition.

Also new this year is that we have partnered with Anthony Kougkas’ team at Illinois Institute of Technology to evaluate the submission metadata describing the storage system on which the test was run to improve the quality and usefulness of the data IO500 collects. You may be contacted by one of his students to clarify one or more of the metadata items from your submission(s). We would appreciate, but do not require, your cooperation to help improve the submission metadata quality. Results from their work will be fed back to improve our submission process for future lists.

The IO500 benchmark suite is designed to be easy to run, and the community has multiple active support channels to help with any questions. Please submit results from your system, and we look forward to seeing many of you at SC20! Please note that submissions of all sizes are welcome, including multiple submissions from different storage systems/tiers at a single site.  The website has customizable sorting so it is possible to submit on a small system and still get a very good per-client score, for example. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below.
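
For anyone who has not run the suite before, the flow is short. A rough sketch follows (the process count, config file name, and paths are illustrative; consult the README of the SC20 release in the repository for the exact procedure):

git clone https://github.com/IO500/io500.git
cd io500
./prepare.sh                          # fetches and builds ior, mdtest, and pfind
# copy one of the example .ini files and point it at your file system
mpirun -np 64 ./io500 my-config.ini   # run under MPI across your client nodes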

Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017, published its first list at SC17, and has grown continuously since then. The need for such an initiative has long been known within High-Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking.

The multi-fold goals of the benchmark suite are as follows:

1.      Maximizing simplicity in running the benchmark suite

2.      Encouraging complexity in tuning for performance

3.      Allowing submitters to highlight their "hero run" performance numbers

4.      Forcing submitters to simultaneously report performance for challenging I/O patterns

Specifically, the benchmark suite includes a hero run of both IOR and mdtest, configured however the submitter wishes in order to maximize performance and establish an upper bound. It also includes an IOR and mdtest run with highly prescribed parameters in an attempt to determine a lower bound on performance. Finally, it includes a namespace search, as this has been determined to be a highly sought-after feature in HPC storage systems that has historically not been well measured. Submitters are encouraged to share their tuning insights for publication.
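
To make the upper and lower bounds concrete, the following hand-driven sketch contrasts the two styles of run (process counts, paths, and the tunable sizes are placeholders; the io500 driver sets the exact flags itself, and the prescribed runs fix parameters such as 47008-byte transfers to a single shared file):

# hero ("easy") IOR run: transfer/block sizes and file layout are free to tune
mpirun -np 320 ior -w -r -C -e -F -t 2m -b 8g -o /mnt/dfs/ior_easy
# prescribed ("hard") IOR run: small fixed-size unaligned transfers, one shared file
mpirun -np 320 ior -w -r -C -e -t 47008 -b 47008 -s 100000 -o /mnt/dfs/ior_hard
# metadata hero run: empty files, one unique working directory per process
mpirun -np 320 mdtest -n 10000 -F -u -d /mnt/dfs/mdt_easy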

The goals of the community are also multi-fold:

1.      Gather historical data for the sake of analysis and to aid predictions of storage futures

2.      Collect tuning information to share valuable performance optimizations across the community

3.      Encourage vendors and designers to optimize for workloads beyond "hero runs"

4.      Establish bounded expectations for users, procurers, and administrators

10 Node I/O Challenge

The 10 Node Challenge is conducted using the regular IO500 benchmark, with the additional rule that exactly 10 client nodes must be used to run it. You may use any shared storage with, e.g., any number of servers. When submitting to the IO500 list, you can opt in to "Participate in the 10 compute node challenge only", in which case we will not include the results in the ranked list. Other 10-node submissions will be included in the full list and in the ranked list. We will announce the results in a separate derived list and in the full list, but not on the ranked IO500 list at https://io500.org/
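
In practice the only difference from a regular run is the host list. For example, with Open MPI syntax and hypothetical counts and file names:

# ten_clients lists exactly 10 client nodes; processes per node are unrestricted
mpirun -np 160 --hostfile ten_clients ./io500 my-config.ini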

Birds-of-a-feather

Once again, we encourage you to submit [1], to join our community, and to attend our virtual BoF "The IO500 and the Virtual Institute of I/O" at SC20, where we will announce the new IO500 list, the 10 node challenge list, and the Student Cluster Competition list. We look forward to answering any questions or concerns you might have.

[1] http://www.vi4io.org/io500/submission

 

Thanks,

 

The IO500 Committee <committee@...>

 



Re: Webex meeting changed: DAOS User Group 2020 (DUG'20)

Lombardi, Johann
 

Really sorry for the spam, but I used the wrong timezone when creating the invite ☹

The last one that was sent to the mailing list (attached for reference) is the correct one.

 

Cheers,

Johann

 



Webex meeting changed: DAOS User Group 2020 (DUG'20)

Johann Lombardi <messenger@...>
 

 

Johann Lombardi changed the Webex meeting information.

 

When it's time, join the Webex meeting here.

 
Meeting number (access code): 130 258 2165
Meeting password: DAOS@2020y
 
Thursday, November 19, 2020
9:30 am  |  (UTC-05:00) Central Time (US & Canada)  |  3 hrs 30 mins
 
Join meeting
 
Tap to join from a mobile device (attendees only)
+1-210-795-1110,,1302582165## US Toll
+1-866-662-9987,,1302582165## US Toll Free
 
Join by phone
+1-210-795-1110 US Toll
+1-866-662-9987 US Toll Free
Global call-in numbers  |  Toll-free calling restrictions
 
Join from a video system or application
Dial 1302582165@...
You can also dial 173.243.2.68 and enter your meeting number.
 
Join using Microsoft Lync or Microsoft Skype for Business
Dial 1302582165.intel@...
 
 
Need help? Go to http://help.webex.com
 


Re: Webex meeting changed: DAOS User Group 2020 (DUG'20)

Lombardi, Johann
 

Hi there,

 

Please find below the Webex meeting invite for the DUG’20.

The agenda will be posted there: https://wiki.hpdd.intel.com/display/DC/DUG20

 

Cheers,

Johann

 

From: <daos@daos.groups.io> on behalf of "Johann Lombardi via groups.io" <messenger@...>
Reply-To: <daos@daos.groups.io>
Date: Thursday 8 October 2020 at 12:30
To: <daos@daos.groups.io>
Subject: [daos] Webex meeting changed: DAOS User Group 2020 (DUG'20)

 

Johann Lombardi changed the Webex meeting information.

When it's time, join the Webex meeting here.

Meeting number (access code): 130 258 2165
Meeting password: DAOS@2020y

Thursday, November 19, 2020
11:30 am  |  (UTC-05:00) Central Time (US & Canada)  |  3 hrs 30 mins

Join meeting

Tap to join from a mobile device (attendees only)
+1-210-795-1110,,1302582165## US Toll
+1-866-662-9987,,1302582165## US Toll Free

Join by phone
+1-210-795-1110 US Toll
+1-866-662-9987 US Toll Free
Global call-in numbers  |  Toll-free calling restrictions

Join from a video system or application
Dial 1302582165@...
You can also dial 173.243.2.68 and enter your meeting number.

Join using Microsoft Lync or Microsoft Skype for Business
Dial 1302582165.intel@...

Need help? Go to http://help.webex.com



Webex meeting changed: DAOS User Group 2020 (DUG'20)

Johann Lombardi <messenger@...>
 

 

Johann Lombardi changed the Webex meeting information.

 

When it's time, join the Webex meeting here.

 
Meeting number (access code): 130 258 2165
Meeting password: DAOS@2020y
 
Thursday, November 19, 2020
11:30 am  |  (UTC-05:00) Central Time (US & Canada)  |  3 hrs 30 mins
 
Join meeting
 
Tap to join from a mobile device (attendees only)
+1-210-795-1110,,1302582165## US Toll
+1-866-662-9987,,1302582165## US Toll Free
 
Join by phone
+1-210-795-1110 US Toll
+1-866-662-9987 US Toll Free
Global call-in numbers  |  Toll-free calling restrictions
 
Join from a video system or application
Dial 1302582165@...
You can also dial 173.243.2.68 and enter your meeting number.
 
Join using Microsoft Lync or Microsoft Skype for Business
Dial 1302582165.intel@...
 
 
Need help? Go to http://help.webex.com
 


Webex meeting invitation: DAOS User Group 2020 (DUG'20)

Johann Lombardi <messenger@...>
 

 
Johann Lombardi invites you to join this Webex meeting.
 
Meeting number (access code): 130 258 2165
Meeting password: DAOS@2020y
 
Thursday, November 19, 2020
11:30 am  |  (UTC-05:00) Central Time (US & Canada)  |  3 hrs 30 mins
 
Join meeting
 
Tap to join from a mobile device (attendees only)
+1-210-795-1110,,1302582165## US Toll
+1-866-662-9987,,1302582165## US Toll Free
 
Join by phone
+1-210-795-1110 US Toll
+1-866-662-9987 US Toll Free
Global call-in numbers  |  Toll-free calling restrictions
 
Join from a video system or application
Dial 1302582165@...
You can also dial 173.243.2.68 and enter your meeting number.
 
Join using Microsoft Lync or Microsoft Skype for Business
Dial 1302582165.intel@...
 
 
Need help? Go to http://help.webex.com
 


Re: DAOS with NVMe-over-Fabrics

anton.brekhov@...
 

On Thu, Sep 17, 2020 at 12:56 AM, Lombardi, Johann wrote:
adrfam:IPv4 traddr:10.9.1.118 trsvcid:4420 subnqn:test

I've tried changing daos_nvme.conf both while the daos server was running and before starting it, in order to connect a disk over RDMA. Either way, I cannot see it in the DAOS system, although nvme discover does see the exported disk.

When does SPDK take this disk into the system? Or should I write this in other files? My daos_nvme.conf:
[Nvme]
    TransportID "trtype:PCIe traddr:0000:b1:00.0" Nvme_apache512_0
    TransportID "trtype:PCIe traddr:0000:b2:00.0" Nvme_apache512_1
    TransportID "trtype:PCIe traddr:0000:b3:00.0" Nvme_apache512_2
    TransportID "trtype:PCIe traddr:0000:b4:00.0" Nvme_apache512_3
    TransportID "trtype:rdma adrfam:IPv4 traddr:10.0.1.2 trsvcid:4420 subnqn:nvme-subsystem-name" Nvme_apache512_4
    RetryCount 4
    TimeoutUsec 0
    ActionOnTimeout None
    AdminPollRate 100000
    HotplugEnable No
    HotplugPollRate 0

And nvme discover output:

[root@apache512 ~]# nvme discover -t rdma -a 10.0.1.2 -s 4420

Discovery Log Number of Records 1, Generation counter 2
=====Discovery Log Entry 0======
trtype:  rdma
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  1
trsvcid: 4420
subnqn:  nvme-subsystem-name
traddr:  10.0.1.2
rdma_prtype: not specified
rdma_qptype: connected
rdma_cms:    rdma-cm
rdma_pkey: 0x0000
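
One way to sanity-check the exported target from the same host, independent of SPDK, is a kernel-side attach with stock nvme-cli, reusing the values from the discover output above:

nvme connect -t rdma -a 10.0.1.2 -s 4420 -n nvme-subsystem-name
nvme list        # the remote namespace should appear as a local /dev/nvmeXnY
nvme disconnect -n nvme-subsystem-name

Note this only proves connectivity: SPDK attaches NVMe-oF targets through its own user-space initiator via the TransportID line above, not through the kernel device.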


Re: Error attempting to mount via DFUSE

Pittman, Ashley M
 

 

This specific issue is different and appears to be at the dfuse/fuse kernel module level. I note you're using a newer fuse driver than I am (7.31 vs 7.23) and also a newer libfuse3, as evidenced by the "Unknown flags 0x3000000" error. Purely from the output you've included, we are not explicitly disabling SPLICE_READ although we should not be using it; however, the error from fuse seems to indicate that its use is attempted.

 

We do have Ubuntu 20.04 in our CI, so I know we test on this; I'll see if I can find any CI results or, if not, try Ubuntu 20.04 on a test machine here.

 

Ashley.

 

From: <daos@daos.groups.io> on behalf of Gert Pauwels <gert.pauwels@...>
Reply to: "daos@daos.groups.io" <daos@daos.groups.io>
Date: Monday, 5 October 2020 at 09:09
To: "daos@daos.groups.io" <daos@daos.groups.io>
Subject: Re: [daos] Error attempting to mount via DFUSE

 



Re: Error attempting to mount via DFUSE

Pittman, Ashley M
 

 

This case is different: the mount point here has not been used to create a container, so dfuse is attempting to use the pool and container given on the command line.  The error reported is a DAOS error rather than a dfuse one, but I suspect this is a server error, either because the wrong svc value was specified or because one of the servers isn't running or contactable.

 

Ashley,

 

From: <daos@daos.groups.io> on behalf of Peter <magpiesaresoawesome@...>
Reply to: "daos@daos.groups.io" <daos@daos.groups.io>
Date: Monday, 5 October 2020 at 07:30
To: "daos@daos.groups.io" <daos@daos.groups.io>
Subject: Re: [daos] Error attempting to mount via DFUSE

 



Re: Error attempting to mount via DFUSE

Pittman, Ashley M
 

 

Hi,

 

This is the same issue as Gert hit last week, specifically that you're providing the pool/container UUIDs twice to dfuse, once via the path and once on the command line.  The fix in this case would be not to provide the --pool or --container options on the command line.

 

It would however make sense for dfuse to support this usage where the UUIDs match, so I've filed DAOS-5778 to allow this.
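
So, for a container created with --path=/tmp/mycontainer as in the original message, the mount would simply be:

dfuse --mountpoint=/tmp/mycontainer --svc=0 --foreground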

 

Ashley.

 

 

From: <daos@daos.groups.io> on behalf of Peter <magpiesaresoawesome@...>
Reply to: "daos@daos.groups.io" <daos@daos.groups.io>
Date: Monday, 5 October 2020 at 06:48
To: "daos@daos.groups.io" <daos@daos.groups.io>
Subject: [daos] Error attempting to mount via DFUSE

 



Re: Error attempting to mount via DFUSE

Gert Pauwels
 

I've been experiencing (most likely) the same issue for a week or more on the master branch.

I reproduced it on today's master branch on Ubuntu 20.04, and verified it a few days ago on CentOS 7.

root@intel-S2600WFD:~/daos-step-by-step# dmg -i pool create --scm-size=30G --nvme-size=300G
Creating DAOS pool with 30 GB SCM and 300 GB NVMe storage (10.00 % ratio)
Pool-create command SUCCEEDED: UUID: fe6475b3-74dc-464f-b8b0-cac50778a9f9, Service replicas: 0

root@intel-S2600WFD:~/daos-step-by-step# daos container create  --svc=0 --path=/mnt/mycontainer --chunk_size=4K --type=POSIX --pool=fe6475b3-74dc-464f-b8b0-cac50778a9f9
fi   INFO src/gurt/fault_inject.c:481 d_fault_inject_init() No config file, fault injection is OFF.
daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=3
crt  INFO src/cart/crt_init.c:269 crt_init_opt() libcart version 4.8.0 initializing
crt  WARN src/cart/crt_init.c:161 data_init() FI_UNIVERSE_SIZE was not set; setting to 2048
client INFO src/utils/daos.c:153 cmd_args_print() DAOS system name: daos_server
client INFO src/utils/daos.c:154 cmd_args_print() pool UUID: fe6475b3-74dc-464f-b8b0-cac50778a9f9
client INFO src/utils/daos.c:155 cmd_args_print() cont UUID: 00000000-0000-0000-0000-000000000000
client INFO src/utils/daos.c:157 cmd_args_print() pool svc: parsed 1 ranks from input 0
client INFO src/utils/daos.c:161 cmd_args_print() attr: name=NULL, value=NULL
client INFO src/utils/daos.c:165 cmd_args_print() path=/mnt/mycontainer, type=POSIX, oclass=UNKNOWN, chunk_size=4096
client INFO src/utils/daos.c:168 cmd_args_print() snapshot: name=NULL, epoch=0, epoch range=NULL (0-0)
client INFO src/utils/daos.c:174 cmd_args_print() oid: 0.0
daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=18
daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=18
Successfully created container 0912dece-00e1-4e8f-8d7f-ac63603f52ac type POSIX

root@intel-S2600WFD:~/daos-step-by-step# dfuse --pool=fe6475b3-74dc-464f-b8b0-cac50778a9f9 --cont=0912dece-00e1-4e8f-8d7f-ac63603f52ac --mountpoint=/mnt/1 --svc=0 --foreground
fi   INFO src/gurt/fault_inject.c:481 d_fault_inject_init() No config file, fault injection is OFF.
daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=3
crt  INFO src/cart/crt_init.c:269 crt_init_opt() libcart version 4.8.0 initializing
crt  WARN src/cart/crt_init.c:161 data_init() FI_UNIVERSE_SIZE was not set; setting to 2048
duns INFO src/client/dfs/duns.c:393 duns_resolve_path() Path does not represent a DAOS link
dfuse INFO src/client/dfuse/dfuse_main.c:436 main(0x55639bbfb100) duns_resolve_path() returned 61 No data available
daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=18
daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=18
dfuse INFO src/client/dfuse/dfuse_fuseops.c:72 dfuse_fuse_init(0x55639bc6eb30) Fuse configuration
dfuse INFO src/client/dfuse/dfuse_fuseops.c:74 dfuse_fuse_init(0x55639bc6eb30) Proto 7 31
dfuse INFO src/client/dfuse/dfuse_fuseops.c:84 dfuse_fuse_init(0x55639bc6eb30) max read 0x400000
dfuse INFO src/client/dfuse/dfuse_fuseops.c:85 dfuse_fuse_init(0x55639bc6eb30) max write 0x400000
dfuse INFO src/client/dfuse/dfuse_fuseops.c:86 dfuse_fuse_init(0x55639bc6eb30) readahead 0x20000
dfuse INFO src/client/dfuse/dfuse_fuseops.c:88 dfuse_fuse_init(0x55639bc6eb30) Capability supported 0x31fffdb
dfuse INFO src/client/dfuse/dfuse_fuseops.c:39 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_ASYNC_READ enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:40 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_POSIX_LOCKS enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:41 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_ATOMIC_O_TRUNC enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:42 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_EXPORT_SUPPORT enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:43 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_DONT_MASK enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:44 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_SPLICE_WRITE enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:45 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_SPLICE_MOVE enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:46 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_SPLICE_READ enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:47 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_FLOCK_LOCKS enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:48 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_IOCTL_DIR enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:49 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_AUTO_INVAL_DATA enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:50 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_READDIRPLUS enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:51 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_READDIRPLUS_AUTO enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:52 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_ASYNC_DIO enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:53 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_WRITEBACK_CACHE enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:54 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_NO_OPEN_SUPPORT enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:55 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_PARALLEL_DIROPS enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:56 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_POSIX_ACL enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:57 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_HANDLE_KILLPRIV enabled
dfuse ERR  src/client/dfuse/dfuse_fuseops.c:60 dfuse_show_flags(0x55639bc6eb30) Unknown flags 0x3000000
dfuse INFO src/client/dfuse/dfuse_fuseops.c:92 dfuse_fuse_init(0x55639bc6eb30) Capability requested 0x149a09
dfuse INFO src/client/dfuse/dfuse_fuseops.c:39 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_ASYNC_READ enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:40 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_POSIX_LOCKS disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:41 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_ATOMIC_O_TRUNC enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:42 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_EXPORT_SUPPORT disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:43 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_DONT_MASK disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:44 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_SPLICE_WRITE disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:45 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_SPLICE_MOVE disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:46 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_SPLICE_READ enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:47 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_FLOCK_LOCKS disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:48 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_IOCTL_DIR enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:49 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_AUTO_INVAL_DATA enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:50 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_READDIRPLUS disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:51 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_READDIRPLUS_AUTO disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:52 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_ASYNC_DIO enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:53 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_WRITEBACK_CACHE disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:54 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_NO_OPEN_SUPPORT disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:55 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_PARALLEL_DIROPS enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:56 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_POSIX_ACL disabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:57 dfuse_show_flags(0x55639bc6eb30) Flag FUSE_CAP_HANDLE_KILLPRIV enabled
dfuse INFO src/client/dfuse/dfuse_fuseops.c:99 dfuse_fuse_init(0x55639bc6eb30) max_background 16
dfuse INFO src/client/dfuse/dfuse_fuseops.c:100 dfuse_fuse_init(0x55639bc6eb30) congestion_threshold 8
fuse: splice from device: Invalid argument
dfuse ERR  src/client/dfuse/dfuse_main.c:191 ll_loop_fn(0x55639bbfb100) Fuse loop exited with return code: -22
dfuse ERR  src/client/dfuse/dfuse_core.c:384 dfuse_start(0x55639bc6eb30) Unable to register FUSE fs
dfuse ERR  src/client/dfuse/dfuse_core.c:399 dfuse_start(0x55639bc6eb30) Failed to start dfuse, rc: -1003
dfuse ERR  src/client/dfuse/dfuse_main.c:519 main(0x55639bc8c2f0) DFP left at the end
dfuse ERR  src/client/dfuse/dfuse_main.c:522 main(0x55639bc8c3f0) DFS left at the end
dfuse INFO src/client/dfuse/dfuse_main.c:561 main() Exiting with status -1003


When --pool and --cont are not specified and --mountpoint points to the path specified when creating the container, you get the same output as above.
Calling dfuse --mountpoint=/mnt/mycontainer --svc=0 --foreground gives the same output as dfuse --pool=fe6475b3-74dc-464f-b8b0-cac50778a9f9 --cont=0912dece-00e1-4e8f-8d7f-ac63603f52ac --mountpoint=/mnt/1 --svc=0 --foreground

Gert,


Re: Error attempting to mount via DFUSE

Peter
 

Thanks for the response, I tried again with:
dfuse  --mountpoint=/home/daos/container/ --svc=0  --pool=[pool_id] --container=[container_id] --foreground

This led to the following error:
10/05-15:25:36.12 master-node DAOS[442/442] fi   INFO src/gurt/fault_inject.c:481 d_fault_inject_init() No config file, fault injection is OFF.

10/05-15:25:36.12 master-node DAOS[442/442] daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=3
10/05-15:25:36.12 master-node DAOS[442/442] crt  INFO src/cart/crt_init.c:269 crt_init_opt() libcart version 4.8.0 initializing
10/05-15:25:36.12 master-node DAOS[442/442] crt  WARN src/cart/crt_init.c:161 data_init() FI_UNIVERSE_SIZE was not set; setting to 2048
10/05-15:25:36.14 master-node DAOS[442/442] duns INFO src/client/dfs/duns.c:301 duns_resolve_path() Path does not represent a DAOS link
10/05-15:25:36.14 master-node DAOS[442/442] dfuse INFO src/client/dfuse/dfuse_main.c:436 main(0x561ee77ad980) duns_resolve_path() returned 61 No data available
10/05-15:25:36.14 master-node DAOS[442/442] daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=24
10/05-15:25:36.15 master-node DAOS[442/442] daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=24
10/05-15:25:36.15 master-node DAOS[442/442] common ERR  src/common/rsvc.c:141 rsvc_client_process_error() removed rank 0 from replica list due to DER_NOTREPLICA(-2020): 'Not a service replica'
10/05-15:25:36.15 master-node DAOS[442/442] common WARN src/common/rsvc.c:102 rsvc_client_choose() replica list empty
10/05-15:25:36.15 master-node DAOS[442/442] pool ERR  src/pool/cli.c:471 dc_pool_connect() 30d0e9d9: cannot find pool service: DER_NOTREPLICA(-2020): 'Not a service replica'
Failed to connect to pool (-1005)
10/05-15:25:36.15 master-node DAOS[442/442] dfuse ERR  src/client/dfuse/dfuse_main.c:519 main(0x561ee784ffe0) DFP left at the end
10/05-15:25:36.15 master-node DAOS[442/442] dfuse ERR  src/client/dfuse/dfuse_main.c:522 main(0x561ee78500e0) DFS left at the end
10/05-15:25:36.15 master-node DAOS[442/442] dfuse INFO src/client/dfuse/dfuse_main.c:561 main() Exiting with status 0



Re: Error attempting to mount via DFUSE

Yunjae Lee
 

Hello!

Could you try using a mountpoint different from the container path?
Afaik, the path given to `daos cont create` is used as an alias to the container.
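
For example, reusing the command shapes from earlier in this thread (the IDs and the /mnt/dfuse path are placeholders):

daos cont create --pool=[pool_id] --svc=0 --type=POSIX --path=/tmp/mycontainer
mkdir -p /mnt/dfuse
dfuse --mountpoint=/mnt/dfuse --svc=0 --pool=[pool_id] --container=[container_id] --foreground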

Yunjae


Error attempting to mount via DFUSE

Peter
 

Hello!


I am trying to use dfuse to create a POSIX-enabled mount.

My container is created thusly:
daos cont create --pool=[pool_id] --svc=0 --type=POSIX --path=/tmp/mycontainer

I then try to mount like this:
dfuse --mountpoint=/tmp/mycontainer --svc=0 --pool=[pool_id] --container=[container_id] --foreground


And this is the error I receive:
09/25-14:46:17.18 master-node DAOS[849/849] fi   INFO src/gurt/fault_inject.c:481 d_fault_inject_init() No config file, fault injection is OFF.
09/25-14:46:17.18 master-node DAOS[849/849] daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=3
09/25-14:46:17.18 master-node DAOS[849/849] crt  INFO src/cart/crt_init.c:269 crt_init_opt() libcart version 4.8.0 initializing
09/25-14:46:17.18 master-node DAOS[849/849] crt  WARN src/cart/crt_init.c:161 data_init() FI_UNIVERSE_SIZE was not set; setting to 2048
09/25-14:46:17.27 master-node DAOS[849/849] dfuse INFO src/client/dfuse/dfuse_main.c:436 main(0x55c5e3a8c980) duns_resolve_path() returned 0 Success
UNS configured on mount point but pool provided
09/25-14:46:17.27 master-node DAOS[849/849] dfuse ERR  src/client/dfuse/dfuse_main.c:519 main(0x55c5e3b31410) DFP left at the end
09/25-14:46:17.27 master-node DAOS[849/849] dfuse ERR  src/client/dfuse/dfuse_main.c:522 main(0x55c5e3b31510) DFS left at the end
09/25-14:46:17.27 master-node DAOS[849/849] dfuse INFO src/client/dfuse/dfuse_main.c:561 main() Exiting with status -1003

I would much appreciate any guidance in solving this.

Thank you,

Peter


Re: pool creation failed in recent master commits

Zhang, Jiafu
 

The issue is gone after adopting Kenneth's suggestion to set "crt_timeout: 1200" in the global section of daos_server.yml instead of under "servers/env_vars".
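
In daos_server.yml terms, the change is roughly the following (a sketch; see daos/utils/config/daos_server.yml for the authoritative layout):

# global section: takes effect
crt_timeout: 1200
servers:
  -
    env_vars:
      # CRT_TIMEOUT set here no longer takes effect for this purpose
      # - CRT_TIMEOUT=1200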

 

@Cain, Kenneth C, thanks!

 

 

 



Re: pool creation failed in recent master commits

Cain, Kenneth C
 

Hello Jiafu,

 

Can you try setting the server RPC timeout with the crt_timeout setting in the daos_server.yml file (and not via the env_vars section with the CRT_TIMEOUT variable)? See daos/utils/config/daos_server.yml. Also take a look near the beginning of the daos_io_server log at the dump_envariables() output, checking the CRT_TIMEOUT value printed. I think a change has been made on the daos server to configure RPC timeouts using this new crt_timeout interface. I suspect your configuration tries to set the CRT_TIMEOUT environment variable via the env_vars section of daos_server.yml and it is not taking effect, resulting in pool create timeouts in all cases.

 

The master commit 68ddb557753cf4bbf657347d28baa7bed15d09ef (Aug 10) and later should be useful for large pool creates if they do happen to fail due to timeouts.

 

Thanks,

 

Ken

 



Re: pool creation failed in recent master commits

Zhang, Jiafu
 

The most recent working commit I can track is 681b827527a0587d8496d3adbbd77a175370766c (Feb 28).

 



Re: pool creation failed in recent master commits

Zhang, Jiafu
 

I just recalled that I re-opened the ticket on Aug 10. The issue has existed for a long time. Please see the detailed info in the ticket.

 



Re: pool creation failed in recent master commits

Oganezov, Alexander A
 

Hi Jiafu,

 

What was the previous commit that you know of that works in your setup?

 

Thanks,

~~Alex.

 

From: daos@daos.groups.io <daos@daos.groups.io> On Behalf Of Zhang, Jiafu
Sent: Monday, September 28, 2020 3:05 AM
To: daos@daos.groups.io
Subject: [daos] pool creation failed in recent master commits

 

Hi Guys,

 

I failed to create a pool with recent master commits, going back to 6726e272e2a0e821c0676838c39a2b133a7e0612 (Sep 9). The error in the terminal is:

 

Pool-create command FAILED: pool create failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
ERROR: dmg: pool create failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded.

 

After enabling debug, I didn't see more valuable info beyond the errors below about timeouts.

 

09/28-17:33:42.25  DAOS[285589/285602] swim ERR  src/cart/swim/swim.c:659 swim_progress() The progress callback was not called for too long: 11515 ms after expected.
09/28-17:33:42.25  DAOS[285589/285602] rdb  WARN src/rdb/rdb_raft.c:1980 rdb_timerd() 64616f73[0]: not scheduled for 12.683030 second
09/28-17:33:42.29  DAOS[285589/285602] mgmt ERR  src/mgmt/srv_pool.c:515 ds_mgmt_create_pool() creating pool on ranks cf7aa844 failed: rc DER_TIMEDOUT(-1011)
09/28-17:33:42.29  DAOS[285589/285602] mgmt ERR  src/mgmt/srv_drpc.c:496 ds_mgmt_drpc_pool_create() failed to create pool: DER_TIMEDOUT(-1011)
09/28-17:33:42.29  DAOS[285589/285603] daos INFO src/iosrv/drpc_progress.c:409 process_session_activity() Session 664 connection has been terminated
09/28-17:33:42.29  DAOS[285589/285603] daos INFO src/common/drpc.c:717 drpc_close() Closing dRPC socket fd=664
09/28-17:33:43.80  DAOS[285589/285602] daos INFO src/iosrv/drpc_progress.c:295 drpc_handler_ult() dRPC handler ULT for module=2 method=207
09/28-17:33:43.80  DAOS[285589/285602] mgmt INFO src/mgmt/srv_drpc.c:468 ds_mgmt_drpc_pool_create() Received request to create pool
09/28-17:34:43.80  DAOS[285589/285602] rpc  ERR  src/cart/crt_context.c:790 crt_context_timeout_check(0x7f61017447d0) [opc=0x1010007 rpcid=0x32444975000000ba rank:tag=1:0] ctx_id 0, (status: 0x38) timed out, tgt rank 1, tag 0
09/28-17:34:43.80  DAOS[285589/285602] rpc  ERR  src/cart/crt_context.c:748 crt_req_timeout_hdlr(0x7f61017447d0) [opc=0x1010007 rpcid=0x32444975000000ba rank:tag=1:0] aborting to group daos_server, rank 1, tgt_uri (null)
09/28-17:34:43.80  DAOS[285589/285602] hg   ERR  src/cart/crt_hg.c:1031 crt_hg_req_send_cb(0x7f61017447d0) [opc=0x1010007 rpcid=0x32444975000000ba rank:tag=1:0] RPC failed; rc: -1011
09/28-17:34:43.80  DAOS[285589/285602] corpc ERR  src/cart/crt_corpc.c:646 crt_corpc_reply_hdlr() RPC(opc: 0x1010007) error, rc: -1011.

 

Any idea?

 

Thanks.


Re: Any method to check object location: SCM or NVMe?

Yunjae Lee
 

Thanks for the quick reply, Patrick.

I was also wondering how small the I/O size should be to go to SCM rather than NVMe.
I'll test performance following your advice.
It helped me a lot.
