Neural networks must be trained to be useful. However, training is a resource-intensive task, often demanding extensive compute and energy. To promote faster training algorithms, the MLCommons® Algorithms Working Group is delighted to present the AlgoPerf: Training Algorithms benchmark. This benchmark competition is designed to measure neural network training speedups due to algorithmic improvements. We welcome submissions that implement both novel and existing training algorithms, including, but not limited to:
- Optimizer update rules (a minimal sketch follows this list)
- Hyperparameter tuning protocols, search spaces, or schedules
- Data sampling strategies
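As a purely illustrative example of the first category, the sketch below implements SGD with momentum as a standalone update rule in NumPy. The function name `sgd_momentum_update` and its signature are hypothetical, chosen only for this illustration; they are not the benchmark's actual submission API, which is specified in our technical documentation.

```python
import numpy as np

def sgd_momentum_update(params, grads, velocity, lr=0.1, momentum=0.9):
    """One SGD-with-momentum step (illustrative only, not the AlgoPerf API).

    params, grads, and velocity are lists of NumPy arrays with matching shapes.
    Returns the updated (params, velocity).
    """
    new_params, new_velocity = [], []
    for p, g, v in zip(params, grads, velocity):
        v_next = momentum * v + g   # accumulate an exponentially decayed gradient history
        p_next = p - lr * v_next    # step against the accumulated direction
        new_params.append(p_next)
        new_velocity.append(v_next)
    return new_params, new_velocity

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
params = [np.array([3.0, -2.0])]
velocity = [np.zeros(2)]
for step in range(200):
    grads = [2.0 * params[0]]
    params, velocity = sgd_momentum_update(params, grads, velocity)
print(params[0])  # approaches [0, 0]
```

A submission to the benchmark would express logic like this through the benchmark's submission interface and would be timed against the baseline across all workloads; see the technical documentation for the required functions and their exact signatures.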
Submissions can compete under two hyperparameter tuning rulesets (with separate prizes and awards): an external tuning ruleset, meant to simulate tuning with a fixed amount of parallel resources, and a self-tuning ruleset for hyperparameter-free algorithms.
Important dates:
- Call for submissions: November 28th, 2023
- Registration deadline to express non-binding intent to submit: February 28th, 2024. Please fill out the (mandatory but non-binding) registration form.
- Submission deadline: March 28th, 2024
- Deadline for self-reporting preliminary results: May 28th, 2024
- [tentative] Announcement of all results: July 15th, 2024
For a detailed and up-to-date timeline, see the Competition Rules.
For details on how to participate in the competition, please refer to our Competition Rules. To learn more about the benchmark, see our technical documentation. The benchmark is further motivated, explained, and justified in the accompanying paper. We require all submissions to be provided under the open-source Apache 2.0 license.
MLCommons has provided a total of $50,000 in prize money for eligible winning submissions. We are also grateful to Google for generously providing the computational resources to score the top submissions, as well as additional compute to help score promising submissions from submitters with more limited resources.