Releases · mlcommons/cm4mlops
cm4mlperf November 2024
What's Changed
- Fixed several URLs (all tests passed) by @gfursin in #342
- Changes for supporting model and dataset download to host - Mixtral by @anandhu-eng in #346
- Support custom git clone branch in docker by @anandhu-eng in #343
- Merge from go, fixes #337 by @arjunsuresh in #348
- Includes cuda version to run suffix by @anandhu-eng in #354
- Fixes for const in script by @arjunsuresh in #355
- Fix docker image naming SCC24, extended CM script tests by @arjunsuresh in #356
- Fixes for MLPerf Inference Github Actions by @arjunsuresh in #362
- Fix typo in gh action by @arjunsuresh in #363
- Fix CUDA num_devices by @arjunsuresh in #365
- Support cleaning of Nvidia SDXL model by @arjunsuresh in #366
- Improvements to Nvidia MLPerf interface by @arjunsuresh in #367
- Fixes to pull changes for Nvidia implementation by @arjunsuresh in #369
- Improvements to MLPerf inference final report generation by @arjunsuresh in #371
- Support get-platform-details for mlperf-inference by @arjunsuresh in #373
- Support system_info.txt in MLPerf inference submission generation by @arjunsuresh in #374
- Cleanups for mlperf inference get-platform-details by @arjunsuresh in #375
- Improvements to get-platform-details by @arjunsuresh in #376
- Improvements for reproducing AMD implementation by @anandhu-eng in #379
- Improvements to amd LLAMA2 70B command generation - Added server scenario by @anandhu-eng in #383
- Build wheels and release them to PyPI by @anandhu-eng in #385
- Do not pass mlperf_conf for inference-src >= 4.1.1 by @arjunsuresh in #404
- Fix version check for mlperf-inference-src by @arjunsuresh in #405
- Added no-compilation-warning variation for loadgen by @arjunsuresh in #406
- Support 8G Nvidia GPUs for MLPerf Inference by @arjunsuresh in #411
- Fix bug on benchmark-program exit check by @arjunsuresh in #412
- Improve the benchmark-program-mlperf run command by @arjunsuresh in #413
- CM4MLOps snapshot with MLPerf inference: 20241024 by @gfursin in #415
- Fix batch size duplication issue by @anandhu-eng in #416
- Update cm repo branch - docker by @anandhu-eng in #422
- Fixes for Nvidia MLPerf inference SS and MS by @arjunsuresh in #423
- Improvements for Nvidia MLPerf inference by @arjunsuresh in #428
- Added compressed_tools module by @anandhu-eng in #430
- Updated logic for mounting non cache folder by @anandhu-eng in #427
- Fixes for latest MLPerf inference submission checker changes by @arjunsuresh in #431
- Fixes for latest MLPerf inference changes by @arjunsuresh in #432
- Fixes for latest MLPerf inference changes by @arjunsuresh in #433
- Submission generation fixes by @anandhu-eng in #424
- Support custom path for saving platform details by @anandhu-eng in #418
- Add getting started to cm4mlops docs by @anandhu-eng in #435
- Merge from MLPerf inference by @anandhu-eng in #436
- Fixes for docker detached mode by @arjunsuresh in #438
- Fixes for get-platform-details by @arjunsuresh in #441
- Testing CM Test automation by @arjunsuresh in #442
- Added github action for individual CM tests by @arjunsuresh in #443
- Fix Individual CM script test by @arjunsuresh in #445
- Fix individual CM script test by @arjunsuresh in #446
- Fix gh action for individual CM script tests by @arjunsuresh in #447
- Initial PR - gh actions for submission generation for non-CM-based benchmarks by @anandhu-eng in #440
- Fix GH action for individual CM script testing by @arjunsuresh in #449
- Enable docker run for individual CM script tests by @arjunsuresh in #450
- Fixes huggingface downloader by @arjunsuresh in #452
- Capture framework version from cm_sut_info.json by @anandhu-eng in #451
- Sync: MLPerf inference by @arjunsuresh in #444
- Support docker_base_image and docker_cm_repo for CM tests by @arjunsuresh in #453
- Improved docker meta for cm test script by @arjunsuresh in #454
- Support test_input_index and test_input_id to customise CM test script by @arjunsuresh in #458
- Improvements to version-detect in get-generic-sys-util by @arjunsuresh in #460
- Use pkg-config deps for get-generic-sys-util by @arjunsuresh in #464
- Fix pstree version detect on macos by @arjunsuresh in #466
- Fix Nvidia mlperf inference retinanet | onnx version by @arjunsuresh in #468
- Support detached mode for nvidia-mlperf-inference-gptj by @arjunsuresh in #469
- Cleanups for get-generic-sys-util by @arjunsuresh in #470
- Fix tmp-run-env.out name by @arjunsuresh in #471
- Add Version RE for g++-11 by @arjunsuresh in #472
- Use berkeley link for imagenet-aux by default by @arjunsuresh in #473
- Code cleanup by @anandhu-eng in #475
- Fixes for SDXL MLPerf inference by @arjunsuresh in #481
- Added google dns to mlperf-inference docker by @arjunsuresh in #484
- Enable submission generation gh action test globally by @anandhu-eng in #487
- Enables docker run in inference submission generation by @anandhu-eng in #486
- Added docker detached option by @anandhu-eng in #477
- Fixes #261 partially by @Oseltamivir in #426
- Enabled docker run - gh action submission generation by @anandhu-eng in #489
- Fixes for MLPerf inference, intel conda URL by @arjunsuresh in #491
- Don't use '-dt' for Nvidia ml-model-gptj by @arjunsuresh in #492
- Update starting weights filename for SDXL MLPerf inference by @arjunsuresh in #494
- Implements #455: Copy local repo to docker instead of git clone by @Oseltamivir in #467
- Fixes for Nvidia MLPerf inference gptj,sdxl by @arjunsuresh in #495
- Cleanups to MLPerf inference preprocess script by @arjunsuresh in #496
- Support sample-ids for coco2014 accuracy script by @arjunsuresh in #497
- Support download to host - amd llama2 by @anandhu-eng in #480
- Added a retry for git clone failure by @arjunsuresh in #499
- Use custom version for dev branch of inference-src by @arjunsuresh in #500
- Fix path error shown to user by @anandhu-eng in #502
- Fixes for the MLPerf inference nightly test failures by @arjunsuresh in #506
- pip install cm4mlops - handle systems where sudo is absent by @anandhu-eng in #504
- Updated sudo-detection logic by @anandhu-eng in #508
- Fix Nvidia MLPerf inference gptj model name suffix by @arjunsuresh in #509
- Skip sys-utils-install when no...
r20241005a: snapshot
Snapshot of the current repository for reproducibility, with fixed URLs.
r20241005
Stable release
Merge pull request #144 from mlcommons/dev (Stable rev-20240729)
Stable CM4MLOPS v2.3.4 release supporting MLPerf, artifact evaluation, reproducibility challenges, etc.
A stable v2.3.4 release of the open-source CM scripts developed by the community to automate and unify upcoming reproducibility initiatives and student competitions (including ACM/IEEE SCC and MICRO), MLPerf benchmarks, and other open science and education projects.
Stable release for CM v2.2.0
Stable release to support CM v2.2.0
Stable release 20240416
First release after moving cm-mlops directory from mlcommons@ck here.