Collective Knowledge (CK) is an educational project for learning how to run AI, ML, and other emerging workloads in the most efficient and cost-effective way across diverse models, data sets, software, and hardware: [ white paper ].
It includes the following sub-projects.
The CMX framework facilitates the decomposition of complex software systems and benchmarks such as MLPerf into portable, reusable, and interconnected automation recipes for MLOps and DevOps. These recipes are developed and continuously improved by the community.
Starting in 2025, CMX V4.0.0 serves as a drop-in, backward-compatible replacement for the earlier Collective Mind framework (CM) and other MLCommons automation prototypes, while providing a simpler and more robust interface.
CMX is a lightweight, Python-based toolset that provides a unified command-line interface (CLI), a Python API, and minimal dependencies. It is designed to help researchers and engineers automate repetitive, time-consuming tasks such as building, running, benchmarking, and optimizing AI, machine learning, and other applications across diverse and constantly evolving models, data, software, and hardware.
CMX is continuously enhanced through public and private Git repositories, providing automation recipes and artifacts that are seamlessly accessible via its unified interface.
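For example, a typical first step might look like this (a minimal sketch assuming CMX keeps CM's `pull repo` action, per the backward compatibility noted above; the repository name is illustrative):

```bash
# Install the CMX toolset (a small Python package with minimal dependencies).
pip install cmind

# Pull a public Git repository with automation recipes so that they become
# available through the unified CLI and Python API (the repository name below
# is the one used by the earlier CM framework and is shown as an illustration).
cmx pull repo mlcommons@cm4mlops
```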
We have developed a collection of portable, extensible and technology-agnostic automation recipes with a common CLI and Python API (CM scripts) to unify and automate all the manual steps required to compose, run, benchmark and optimize complex ML/AI applications on diverse platforms with any software and hardware.
The two key automations are script and cache: see the online catalog at the CK playground and the online MLCommons catalog (a short usage sketch follows after the next paragraph).
CM scripts extend the concept of cmake with simple Python automations, native scripts, and JSON/YAML meta descriptions. They require Python 3.7+ with minimal dependencies and are continuously extended by the community and MLCommons members to run natively on Ubuntu, macOS, Windows, RHEL, Debian, and Amazon Linux, as well as any other operating system, in the cloud or inside automatically generated containers, while keeping backward compatibility.
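As a rough sketch of these two automations in action (command layout follows the earlier CM conventions with the `cm` -> `cmx` substitution described below; exact flags may differ):

```bash
# Run a portable automation recipe (CM script) selected by its tags; its
# JSON/YAML meta description is used to resolve OS-specific dependencies
# before the underlying native scripts are executed.
cmx run script --tags=detect,os

# List cached results produced by previous runs via the cache automation.
cmx show cache --tags=detect,os
```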
See the online documentation at MLCommons to run MLPerf inference benchmarks across diverse systems using CMX. Just install it via `pip install cmind` and substitute the following commands and flags:
```
cm    ->  cmx
mlc   ->  cmx run mlc
mlcr  ->  cmxr
-v    ->  --v
```
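For instance, an existing CM command would map to CMX as follows (the command shown is illustrative):

```bash
# Old CM command:
cm run script --tags=detect,os -v

# Equivalent CMX command after applying the substitutions above:
cmx run script --tags=detect,os --v
```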
CM4MLPerf-results powered by CM - a simplified and unified representation of past MLPerf results in the CM format for further visualization and analysis using CK graphs.
Collective Knowledge Playground - a unified, open-source platform designed to index all CM scripts (similarly to PyPI) and assist users in preparing CM commands to:
- aggregate, process, visualize, and compare MLPerf benchmarking results for AI and ML systems
- run MLPerf benchmarks
- organize open and reproducible optimization challenges and tournaments.
These initiatives aim to help academia and industry collaboratively enhance the efficiency and cost-effectiveness of AI systems.
Artifact Evaluation automation - a community-driven initiative leveraging the Collective Mind framework to automate artifact evaluation and support reproducibility efforts at ML and systems conferences.
- CM (2022-2024)
- CM-MLOps (2021)
- CM4MLOps (2022-2024)
- CK automation framework v1 and v2
Copyright (c) 2021-2025 MLCommons
Grigori Fursin, the cTuning foundation and OctoML donated this project to MLCommons to benefit everyone.
Copyright (c) 2014-2021 cTuning foundation
- Grigori Fursin (FlexAI, cTuning)
- CM, CM4MLOps and MLPerf automations: MLCommons infra WG
- CMX (the next generation of CM since 2025): Grigori Fursin
To learn more about the motivation behind this project, please explore the following presentations:
- "Enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and reproducible optimization tournaments": [ ArXiv ]
- ACM REP'23 keynote about the MLCommons CM automation framework: [ slides ]
- ACM TechTalk'21 about Collective Knowledge project: [ YouTube ] [ slides ]
- Journal of the Royal Society'20: [ paper ]
TBD
This open-source project was created by Grigori Fursin and sponsored by cTuning.org, OctoAI and HiPEAC. Grigori donated this project to MLCommons to modularize and automate MLPerf benchmarks, benefit the community, and foster its development as a collaborative, community-driven effort.
We thank MLCommons, FlexAI and cTuning for supporting this project, as well as our dedicated volunteers and collaborators for their feedback and contributions!
If you found the CM automations helpful, please cite this article: [ ArXiv ], [ BibTeX ].