vos_iterate unexpectedly repeats executing func fill_rec
Hi, As the code comment mentioned, we avoided reading NVMe data in fill_rec() since it doesn’t support yield, so fill_rec() won’t call copy_data_cb() over NVMe data. (You can see there is an assert on
By Niu, Yawei · #1731

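To make that constraint concrete, here is a minimal sketch (not the actual VOS source; rec_t, MEDIA_NVME and the helper layout are hypothetical stand-ins) of how a fill_rec()-style callback can defer NVMe-resident payloads instead of copying them inline, so that copy_data_cb() only ever sees SCM data:

```c
/* Illustrative sketch only, not the actual VOS source: fill_rec()-style
 * packing defers NVMe-resident payloads instead of copying them inline,
 * because an NVMe read would need to yield inside the iterator. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

enum media_type { MEDIA_SCM, MEDIA_NVME };

typedef struct {
    enum media_type  r_media;    /* where the payload lives            */
    const void      *r_payload;  /* inline payload, valid only for SCM */
    size_t           r_size;
} rec_t;

/* copy_data_cb-style callback: must never be handed NVMe-resident data,
 * which is what the assert mentioned in the message enforces. */
static int copy_data_cb(const rec_t *rec, void *buf)
{
    assert(rec->r_media != MEDIA_NVME);
    memcpy(buf, rec->r_payload, rec->r_size);
    return 0;
}

/* fill_rec-style packing: for NVMe data, only note that a deferred fetch is
 * needed (done later, where yielding is allowed); copy SCM data right away. */
static int fill_rec(const rec_t *rec, void *buf, bool *needs_nvme_fetch)
{
    if (rec->r_media == MEDIA_NVME) {
        *needs_nvme_fetch = true;
        return 0;
    }
    *needs_nvme_fetch = false;
    return copy_data_cb(rec, buf);
}

int main(void)
{
    char  buf[8] = {0};
    bool  deferred;
    rec_t scm_rec  = { MEDIA_SCM,  "hello", 6 };
    rec_t nvme_rec = { MEDIA_NVME, NULL,    0 };

    fill_rec(&scm_rec, buf, &deferred);
    printf("SCM record copied inline: %s\n", buf);
    fill_rec(&nvme_rec, buf, &deferred);
    printf("NVMe record deferred: %s\n", deferred ? "yes" : "no");
    return 0;
}
```

The deferred records can then be fetched after the iteration returns, where yielding for the NVMe read is safe.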
Questions about daos evolution and design
I see, so it looks to me the same as question 4 of "append write"? Hope my answer is helpful. Thanks -Niu
By Niu, Yawei · #1704

Questions about daos evolution and design
1. DAOS is an actual product. 2. It looks to me that longer I/O latency is inevitable when a node/target/SSD is being taken offline. 3. I'm not quite sure what the "redirect design" you are referring
By Niu, Yawei · #1700

DPI_SPACE query after extending pool
Yes, that’s the reservation I mentioned, and the NVMe reservation has been removed in master and 2.2. Thanks -Niu
By Niu, Yawei · #1624

DPI_SPACE query after extending pool
Could you double-check whether creating a container causes NVMe free space to drop? If so, please open a ticket for further investigation. I can’t think of why container creation could consume N
By Niu, Yawei · #1622

DPI_SPACE query after extending pool
Hi, Chuck The reserved space is per pool and isn’t related to container creation, so I think the space change you observed after container creation isn’t caused by space reservation. FYI, we’ve jus
By Niu, Yawei · #1620

DPI_SPACE query after extending pool
Hi, Chuck The “used space” (total – free) is a kind of over-provisioning (OP); the DAOS server has to reserve some space on both SCM and NVMe to ensure punch, container/object destroy, GC and
By Niu, Yawei · #1616

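To illustrate the “used = total − free” arithmetic discussed in this thread, here is a small sketch that queries the space statistics through the pool query API; it assumes an already-connected pool handle and the DPI_SPACE bit and field names from recent libdaos headers, so verify them against your DAOS version:

```c
/* Sketch: query pool space and report used = total - free per media tier.
 * Assumes an already-connected pool handle; field names follow daos_pool.h
 * in recent DAOS releases and may differ in older versions. */
#include <stdio.h>
#include <daos.h>

static int print_pool_space(daos_handle_t poh)
{
    daos_pool_info_t info = {0};
    int              rc;

    info.pi_bits = DPI_SPACE;          /* ask only for space statistics */
    rc = daos_pool_query(poh, NULL, &info, NULL, NULL);
    if (rc != 0)
        return rc;

    struct daos_space *sp = &info.pi_space.ps_space;

    for (int m = 0; m < DAOS_MEDIA_MAX; m++) {
        uint64_t total = sp->s_total[m];
        uint64_t free_ = sp->s_free[m];

        /* "used" includes the per-pool reservation (punch, destroy, GC,
         * aggregation), so it is non-zero even on a freshly created pool. */
        printf("%s: total=%lu free=%lu used=%lu\n",
               m == DAOS_MEDIA_SCM ? "SCM" : "NVMe",
               (unsigned long)total, (unsigned long)free_,
               (unsigned long)(total - free_));
    }
    return 0;
}
```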
Is there any problem at blobstore load err
Right, that’s probably something that could be improved in the future. Maybe we could move the blobstore creation from the io engine (on first start) to the control plane (on storage format?). Thanks for pointing
By Niu, Yawei · #1588

Questions about ULT Schedule
Hi, In your example, the ULT will be tracked in the wait list of the “ABT_future” while waiting; once it’s woken up, it’ll be pushed back to a runnable ULT FIFO list (per ABT pool, maintained by Argobots i
By Niu, Yawei · #1522

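For readers unfamiliar with Argobots, the standalone sketch below (plain Argobots, not DAOS code) shows the wait/wake path described above: a ULT parks on ABT_future_wait() and becomes runnable again, back in its pool’s FIFO, once another ULT completes the future:

```c
/* Minimal Argobots sketch: a ULT blocks on an ABT_future and is pushed back
 * to the runnable FIFO of its pool once another ULT sets the future.
 * Build (assuming Argobots is installed): gcc future_demo.c -labt */
#include <stdio.h>
#include <abt.h>

static ABT_future fut;

static void waiter(void *arg)
{
    (void)arg;
    printf("waiter: blocking on future\n");
    ABT_future_wait(fut);              /* parked on the future's wait list */
    printf("waiter: woken up, runnable again\n");
}

static void setter(void *arg)
{
    (void)arg;
    printf("setter: completing the future\n");
    ABT_future_set(fut, NULL);         /* moves the waiter back to its pool */
}

int main(void)
{
    ABT_xstream xstream;
    ABT_pool    pool;
    ABT_thread  ults[2];

    ABT_init(0, NULL);
    ABT_xstream_self(&xstream);
    ABT_xstream_get_main_pools(xstream, 1, &pool);
    ABT_future_create(1 /* one compartment */, NULL, &fut);

    ABT_thread_create(pool, waiter, NULL, ABT_THREAD_ATTR_NULL, &ults[0]);
    ABT_thread_create(pool, setter, NULL, ABT_THREAD_ATTR_NULL, &ults[1]);

    ABT_thread_join(ults[0]);
    ABT_thread_join(ults[1]);
    ABT_thread_free(&ults[0]);
    ABT_thread_free(&ults[1]);
    ABT_future_free(&fut);
    ABT_finalize();
    return 0;
}
```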
Questions about ULT Schedule
Hi, The design is to ensure that all IO requests from different pools are processed in FIFO order, and that space pressure from one pool doesn’t interfere with request processing for other pools, but the implem
By Niu, Yawei · #1510

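As a purely illustrative sketch of the stated design goal (this is not the DAOS scheduler, and all names are made up): one way to keep FIFO ordering within each pool while preventing one pool’s space pressure from stalling others is a separate FIFO per pool that the scheduler drains independently, backing off only on the throttled pool:

```c
/* Illustrative sketch only (not DAOS code): one FIFO request queue per pool,
 * drained independently, so a pool that is throttled for space pressure does
 * not delay requests queued for other pools. */
#include <stdbool.h>
#include <stddef.h>

#define MAX_POOLS 4
#define QUEUE_LEN 64

struct req { int opcode; };

struct pool_queue {
    struct req ring[QUEUE_LEN];   /* FIFO per pool */
    size_t     head, tail;
    bool       space_pressure;    /* set while the pool is low on space */
};

static struct pool_queue pools[MAX_POOLS];

/* One scheduling pass: within a pool, requests leave in arrival order; a pool
 * under space pressure is simply skipped this round instead of blocking all. */
static void drain_one_round(void (*process)(struct req *))
{
    for (int p = 0; p < MAX_POOLS; p++) {
        struct pool_queue *pq = &pools[p];

        if (pq->space_pressure)
            continue;             /* back off on this pool only */
        while (pq->head != pq->tail) {
            process(&pq->ring[pq->head]);
            pq->head = (pq->head + 1) % QUEUE_LEN;
        }
    }
}
```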
dmg pool operation stuck
Hi, Allen The log showed it was stuck on creating the blobstore. It looks like your device isn’t well supported by SPDK; could you collect some device information with the SPDK ‘identify’ tool? Also, there is a
By Niu, Yawei · #1503

CPU NUMA node bind error
Hi, Huijun Sorry for the confusion I caused here. I wasn’t referring to the NUMA question (which I believe was answered by others); I was just asking you to create a ticket for the particular assert erro
By Niu, Yawei · #1269

CPU NUMA node bind error
The assert on “bdh_io_channel != NULL” fires because a bio poll is called after the context is freed during error cleanup; could you open a ticket for it? Thanks -Niu
By Niu, Yawei · #1263

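A generic sketch of the ordering problem described above (hypothetical names, not the real bio code): the poll callback asserts that the io channel is still there, so error cleanup has to stop the poller before releasing the channel rather than after:

```c
/* Illustrative sketch only (hypothetical names, not the DAOS bio internals):
 * a periodic health poll must never run against a context whose io channel
 * has already been released during error cleanup. */
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct io_context {
    void *bdh_io_channel;     /* set to NULL once the channel is released */
    bool  poller_registered;
};

/* Poll callback: the assert mentioned in the message corresponds to this
 * invariant being violated when cleanup frees the channel first. */
static int health_poll(void *arg)
{
    struct io_context *ctx = arg;

    assert(ctx->bdh_io_channel != NULL);
    /* ... poll device health / completions over the channel ... */
    return 0;
}

/* Error cleanup: stop the poller *before* dropping the channel, so no further
 * poll can observe a freed context; the reversed order trips the assert. */
static void cleanup_on_error(struct io_context *ctx)
{
    ctx->poller_registered = false;   /* conceptually: unregister the poller */
    free(ctx->bdh_io_channel);
    ctx->bdh_io_channel = NULL;
}
```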
DAOS with NVMe-over-Fabrics
So far the DAOS server supports only local PCIe-attached NVMe, but it won’t be very difficult to support NVMe-oF in the future; it requires only server configuration changes, and everything else is transparent to
By Niu, Yawei · #1177

Dkeys and NULL Akey
Hi, Colin Unfortunately daos_perf supports only the DAOS fetch/update APIs so far; there are IOR and FIO plugins that run over the DAOS array API, but I’m not aware of any benchmark using daos_kv_put(). Thanks -Ni
By Niu, Yawei · #858

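Since no daos_kv_put()-based benchmark seems to exist, a rough sketch of a timed put loop over the DAOS KV API is shown below; it assumes an already-opened KV object handle and the daos_kv_put() signature from recent daos_kv.h (older releases lacked the flags argument), so treat it as a starting point rather than a maintained tool:

```c
/* Rough sketch of a daos_kv_put() micro-benchmark loop. Assumes the caller
 * already connected to a pool, opened a container, and opened a KV object
 * handle (oh); the signature matches recent daos_kv.h and may differ on
 * older builds. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <daos.h>
#include <daos_kv.h>

static int kv_put_bench(daos_handle_t oh, unsigned int nkeys, size_t val_size)
{
    char            key[32];
    char           *val = calloc(1, val_size);
    struct timespec t0, t1;
    int             rc = 0;

    if (val == NULL)
        return -1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned int i = 0; i < nkeys && rc == 0; i++) {
        snprintf(key, sizeof(key), "key-%u", i);
        /* synchronous put: no transaction (DAOS_TX_NONE), no flags, no event */
        rc = daos_kv_put(oh, DAOS_TX_NONE, 0, key, val_size, val, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("rc=%d  %u puts of %zu bytes in %.3fs (%.0f ops/s)\n",
           rc, nkeys, val_size, secs, nkeys / secs);
    free(val);
    return rc;
}
```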
daos_perf
Yes, I agree with you. I’ve pasted your findings into DAOS-4521 and we’ll fix it along with the problem of verification failure in SV mode. Thanks a lot! Thanks -Niu
By Niu, Yawei · #853

NVMe/SPDK disk IO traffic monitor.
The env is only read once on server start (actually, you can put it in the server yaml file just like other env variables), so it can’t be set dynamically so far. Thanks -Niu
By Niu, Yawei · #852

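For reference, a plausible way to express this in daos_server.yml is the per-engine env_vars list (the section layout varies between releases, so check it against your own config):

```yaml
# Sketch only: setting IO_STAT_PERIOD through the per-engine env_vars list in
# daos_server.yml, instead of exporting it by hand before starting the server.
# Verify the section name (engines vs. servers) against your DAOS release.
engines:
  - targets: 8
    env_vars:
      - IO_STAT_PERIOD=10   # print SPDK IO statistics every 10 seconds
```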
daos_perf
Hi, Colin The current implementation is that the NVMe size option is ignored when NVMe isn’t configured (if NVMe is configured but not available for some other reason, the pool creation will fail), t
By Niu, Yawei · #838

NVMe/SPDK disk IO traffic monitor.
Hi, Colin To verify whether IO goes properly to the NVMe SSD, set the env “IO_STAT_PERIOD=10” on the server; SPDK io statistics will then be printed on the server console every 10 seconds. As far as I know, there isn’
By Niu, Yawei · #832

daos_perf
Hi, Colin You could double-check whether your NVMe is configured properly on the server side; if no NVMe is configured, all data will land on SCM. Thanks -Niu
By Niu, Yawei · #830