FW: [External] [gpfsug-discuss] IO500 SC20 Call for Submission
FYI. If you need any help, feel free to ping us 😊
Call for IO500 Submission
Deadline: 30 October 2020 AoE
Stabilization period: 1-9 October 2020 AoE
The IO500 is now accepting and encouraging submissions for the upcoming 7th IO500 list. Once again, we are also accepting submissions to the 10 Node I/O Challenge to encourage submission of small-scale results. The new ranked lists will be announced at the IO500 Virtual BoF during SC20. We hope to see you, and your results, there.
New for this submission cycle is a stabilization period that aims to harden the benchmark; the final benchmark will be released at the end of this period. During stabilization, we encourage the community to test that the benchmark executes properly and to provide us with feedback. We will apply bug fixes to the code base and expect that results obtained during this period will remain valid as full submissions. We will also continue to publish a separate list for the Student Cluster Competition, since the IO500 benchmark is used in that competition.
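For anyone who wants to exercise the benchmark during the stabilization period, here is a minimal sketch of fetching, building, and running the suite, driven from Python for illustration. The repository URL is the official one; the config file name and process count are assumptions and may differ for the SC20 release:

    import subprocess

    def sh(*cmd, cwd=None):
        # Echo the command, run it, and fail loudly on error.
        print("+", " ".join(cmd))
        subprocess.run(cmd, cwd=cwd, check=True)

    # Fetch and build the suite (prepare.sh downloads and compiles the
    # bundled ior, mdtest, and pfind).
    sh("git", "clone", "https://github.com/IO500/io500.git", "io500")
    sh("./prepare.sh", cwd="io500")

    # Run the full suite under MPI. config-minimal.ini and -np 8 are
    # assumptions; edit the config's data/result directories for your
    # file system before a real run.
    sh("mpirun", "-np", "8", "./io500", "config-minimal.ini", cwd="io500")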
Also new this year is that we have partnered with Anthony Kougkas’ team at Illinois Institute of Technology to evaluate the submission metadata describing the storage system on which the test was run, with the goal of improving the quality and usefulness of the data the IO500 collects. You may be contacted by one of his students to clarify one or more of the metadata items from your submission(s). We would appreciate, but do not require, your cooperation in helping to improve the quality of the submission metadata. Results from their work will be fed back to improve our submission process for future lists.
The IO500 benchmark suite is designed to be easy to run, and the community has multiple active support channels to help with any questions. Please submit results from your system; we look forward to seeing many of you at SC20! Note that submissions of all sizes are welcome, including multiple submissions from different storage systems or tiers at a single site. The website offers customizable sorting, so it is possible, for example, to submit from a small system and still achieve a very good per-client score (see the sketch below). Additionally, the list is about much more than raw rank; every submission helps the community by widening the corpus of collected and published data. More details below.
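To make the per-client sorting point concrete, here is a tiny Python sketch; the systems and numbers are hypothetical, not taken from io500.org:

    # Hypothetical submissions; all fields and values are illustrative only.
    submissions = [
        {"system": "BigSite",   "score": 120.0, "client_nodes": 100},
        {"system": "SmallSite", "score": 18.0,  "client_nodes": 10},
    ]

    # Sorted by score per client node rather than raw score, the small
    # system ranks first (1.8 vs. 1.2 per node).
    for s in sorted(submissions, key=lambda s: s["score"] / s["client_nodes"],
                    reverse=True):
        print(f'{s["system"]}: {s["score"] / s["client_nodes"]:.2f} per client node')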
Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017, published its first list at SC17, and has grown continuously since then. The need for such an initiative has long been recognized within High-Performance Computing; however, defining appropriate benchmarks proved challenging. Despite this, the community, after long and spirited discussion, reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking.
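For reference, the single-ranking metric combines the bandwidth and metadata results via geometric means. In LaTeX, and assuming the four IOR bandwidth phases and eight metadata (mdtest/find) phases used on recent lists (see io500.org for the normative definition):

    \[
      \mathrm{Score} = \sqrt{\mathrm{BW} \cdot \mathrm{MD}},
      \qquad
      \mathrm{BW} = \Bigl(\prod_{i=1}^{4} b_i\Bigr)^{1/4} \ \text{GiB/s},
      \qquad
      \mathrm{MD} = \Bigl(\prod_{j=1}^{8} m_j\Bigr)^{1/8} \ \text{kIOPS}
    \]

where the b_i are the measured bandwidths and the m_j the measured metadata rates of the individual phases.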
The multi-fold goals of the benchmark suite are as follows:
1. Maximizing simplicity in running the benchmark suite
2. Encouraging complexity in tuning for performance
3. Allowing submitters to highlight their “hero run” performance numbers
4. Forcing submitters to simultaneously report performance for challenging I/O patterns
Specifically, the benchmark suite includes a hero run of both IOR and mdtest, configured in whatever way maximizes performance, to establish an upper bound. It also includes IOR and mdtest runs with highly prescribed parameters, in an attempt to determine a lower bound on performance. Finally, it includes a namespace search (find), as this has been identified as a highly sought-after capability of HPC storage systems that historically has not been well measured. Submitters are encouraged to share their tuning insights for publication.
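As a rough illustration of the hero versus prescribed split, the sketch below drives IOR directly; in practice the io500 harness runs these phases for you, and the process counts, paths, and flag values here are assumptions chosen for illustration:

    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Hero run: submitters may tune transfer size, block size, file
    # layout, and so on to maximize performance.
    run(["mpirun", "-np", "64", "ior", "-w", "-a", "POSIX", "-F",
         "-t", "2m", "-b", "16g", "-o", "/mnt/fs/ior-easy/testfile"])

    # Prescribed run: parameters are fixed by the harness, e.g. small,
    # oddly sized writes into a single shared file (values illustrative).
    run(["mpirun", "-np", "64", "ior", "-w", "-a", "POSIX",
         "-t", "47008", "-b", "47008", "-s", "100000",
         "-o", "/mnt/fs/ior-hard/file"])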
The goals of the community are also multi-fold:
1. Gather historical data for the sake of analysis and to aid predictions of storage futures
2. Collect tuning information to share valuable performance optimizations across the community
3. Encourage vendors and designers to optimize for workloads beyond “hero runs”
4. Establish bounded expectations for users, procurers, and administrators
10 Node I/O Challenge
The 10 Node Challenge is conducted using the regular IO500 benchmark, but with the rule that exactly 10 client nodes must be used to run the benchmark. You may use any shared storage with any number of servers. When submitting to the IO500 list, you can opt in to “Participate in the 10 compute node challenge only”, in which case we will announce your results in a separate derived list and in the full list, but not on the ranked IO500 list at https://io500.org/. Other 10-node submissions will be included in both the full list and the ranked list.
Once again, we encourage you to submit, to join our community, and to attend our virtual BoF “The IO500 and the Virtual Institute of I/O” at SC20, where we will announce the new IO500 list, the 10 Node Challenge list, and the Student Cluster Competition list. We look forward to answering any questions or concerns you might have.
The IO500 Committee <committee@...>