diff --git a/submission_rules.adoc b/submission_rules.adoc
index ad392a9..fea8cbd 100644
--- a/submission_rules.adoc
+++ b/submission_rules.adoc
@@ -732,9 +732,18 @@ An _Available_ software component must be well supported for general use. For op
 
 #### Preview Systems
 
-A _Preview_ system is a system which did not qualify as an _Available_ system as of the previous MLPerf submission date, but will qualify in the next submission after 140 days of the current submission date, or by the next MLPerf submission date, whichever is more, and which the submitter commits to submitting as an _Available_ system by that time. If it is not submitted in that submission round with equal or better performance (allowing for noise), the _Preview_ benchmark will be marked as invalid. A _Preview_ submission must include performance on at least one benchmark which will be considered _MLPerf Compatible_ (xref:MLPerf_Compatibility_Table.adoc[see the MLPerf Compatibility Table]) in the upcoming round where transition to _Available_ is made (consult SWG for Benchmark Roadmap). On each of the benchmarks that are previewed and are _Compatible_, the _Available_ submission must show equal or better performance (allowing for noise, for any changes to the benchmark definition) on all systems for Inference and across at least the smallest and the largest scale of the systems used for _Preview_ submission on that benchmark for Training (e.g. _Available_ Training submissions can be on scales smaller than the smallest and larger than the largest scale used for _Preview_ submission). For submissions accompanied by power measurements, "equal or better" must use power-normalized performance rather than absolute performance.
-
-* Training: For an _Available_ system that is larger than the _Preview_ system, absolute performance must be better. For an _Available_ system that is smaller than the _Preview_ system, efficiency (time-to-train * number of chips) must be better.
+A _Preview_ system is a system which did not qualify as an _Available_ system as of the previous MLPerf submission date, but will qualify in the next submission after 140 days of the current submission date, or by the next MLPerf submission date, whichever is more, and which the submitter commits to submitting as an _Available_ system by that time. If it is not submitted in that submission round with equal or better performance (allowing for noise), the _Preview_ benchmark will be marked as invalid. A _Preview_ submission must include performance on at least one benchmark which will be considered _MLPerf Compatible_ (xref:MLPerf_Compatibility_Table.adoc[see the MLPerf Compatibility Table]) in the upcoming round where transition to _Available_ is made (consult SWG for Benchmark Roadmap).
+
+On each of the benchmarks that are previewed and are _Compatible_, the _Available_ submission must show _equal or better performance_ than the _Preview_ submission, allowing for noise, for changes in the benchmark definition, or for changes in the system scale (defined as the number of system components principally determining performance, e.g. accelerator chips):
+* Training: An _Available_ submission can be on a system larger than the largest system used for _Preview_, or smaller than the smallest system used for _Preview_:
+** For an _Available_ system that is larger than the _Preview_ system, absolute performance must be equal or better.
+** For an _Available_ system that is smaller than the _Preview_ system, efficiency (time-to-train * number of accelerators) must be equal or better.
+** Performance must be equal or better at least across the smallest and the largest of the systems used for _Preview_.
+* Inference without Power measurements: An _Available_ submission can be on a system larger than the largest system used for _Preview_.
+** For an _Available_ system that is larger than the _Preview_ system, performance per accelerator must be equal or better.
+* Inference with Power measurements: An _Available_ submission must be on a system of the same scale as used for _Preview_.
+** Power-normalized performance (not absolute performance) must be equal or better.
+Any other changes must be approved by the relevant Working Group prior to submission. In the rare event that none of the _Preview_ benchmarks are MLPerf _Compatible_ in the upcoming round where the transition to _Available_ is made, a submitter may have their performance validated in the upcoming round by making a submission on the old/retired benchmark to the Results WG during the review period (such a submission will not show up on the Results table but will only be used by the Results WG to validate a past _Preview_ submission).
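
As a rough illustration of how the scale and performance criteria in the added text compose, the sketch below encodes the Training and Inference checks in Python. It is not part of the rules or of any MLPerf tooling; the `Result` layout, the function names, and the `tol` noise-tolerance parameter are assumptions made for the example (the rules allow for noise but do not define a numeric tolerance).

```python
# Illustrative sketch only: a possible pre-submission sanity check of the
# "equal or better" Preview-to-Available criteria. Data layout and the
# noise tolerance `tol` are hypothetical assumptions, not defined by the rules.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Result:
    accelerators: int                        # system scale, e.g. accelerator chips
    time_to_train: Optional[float] = None    # Training metric (lower is better)
    throughput: Optional[float] = None       # Inference metric (higher is better)
    power_watts: Optional[float] = None      # measured power, if any


def _leq(new: float, old: float, tol: float) -> bool:
    """Lower-is-better 'equal or better', with a hypothetical relative tolerance."""
    return new <= old * (1.0 + tol)


def _geq(new: float, old: float, tol: float) -> bool:
    """Higher-is-better 'equal or better', with a hypothetical relative tolerance."""
    return new >= old * (1.0 - tol)


def training_ok(preview: list[Result], available: list[Result], tol: float = 0.0) -> bool:
    """Training: larger Available systems need equal-or-better absolute
    time-to-train; smaller ones need equal-or-better efficiency
    (time-to-train * number of accelerators); both the smallest and the
    largest Preview scales must be covered."""
    smallest = min(preview, key=lambda r: r.accelerators)
    largest = max(preview, key=lambda r: r.accelerators)
    covers_small = covers_large = False
    for a in available:
        if a.accelerators >= largest.accelerators:
            if not _leq(a.time_to_train, largest.time_to_train, tol):
                return False
            covers_large = True
        if a.accelerators <= smallest.accelerators:
            eff_a = a.time_to_train * a.accelerators
            eff_p = smallest.time_to_train * smallest.accelerators
            if not _leq(eff_a, eff_p, tol):
                return False
            covers_small = True
    return covers_small and covers_large


def inference_ok(preview: Result, available: Result, tol: float = 0.0) -> bool:
    """Inference: with power, same scale and equal-or-better power-normalized
    performance; without power, equal or larger scale and equal-or-better
    per-accelerator performance."""
    if preview.power_watts is not None:
        if available.accelerators != preview.accelerators or available.power_watts is None:
            return False
        return _geq(available.throughput / available.power_watts,
                    preview.throughput / preview.power_watts, tol)
    if available.accelerators < preview.accelerators:
        return False
    return _geq(available.throughput / available.accelerators,
                preview.throughput / preview.accelerators, tol)
```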