Files
cs249r_book/mlperf-edu/tests/test_cli.py
Vijay Janapa Reddi a9878ad6bd feat: import mlperf-edu pedagogical benchmark suite
Snapshot of the standalone /Users/VJ/GitHub/mlperf-edu/ repo as of
2026-04-16, brought into MLSysBook as a parked feature branch for
backup and iteration. Not for merge to dev.

Contents (88 files, ~2.3 MB):
- 16 reference workloads (cloud / edge / tiny / agent divisions)
- LoadGen proxy harness + SUT plugin protocol
- Compliance checker, autograder, hardware fingerprint
- Paper draft (paper.tex) with TikZ/SVG figure sources
- Three lab examples + practitioner workflow configs
- Workload + dataset YAML registries (single source of truth)

Excluded (per mlperf-edu/.gitignore + size constraints):
- Datasets (6.6 GB), checkpoints (260 MB), gpt2 weights (523 MB)
- Generated PDFs, .venv, build artifacts
2026-04-16 14:15:05 -04:00


import os
import subprocess
import sys


def test_cli_help():
    result = subprocess.run(
        [sys.executable, "-m", "mlperf_edu.cli", "--help"],
        capture_output=True,
        text=True,
        env={**os.environ, "PYTHONPATH": "."},
    )
    assert result.returncode == 0
    assert "MLPerf EDU" in result.stdout


def test_cli_run_cloud_help():
    result = subprocess.run(
        [sys.executable, "-m", "mlperf_edu.cli", "run", "cloud", "--help"],
        capture_output=True,
        text=True,
        env={**os.environ, "PYTHONPATH": "."},
    )
    assert result.returncode == 0
    assert "--task" in result.stdout
    assert "llm-inference" in result.stdout


def test_cli_run_cloud_inference_offline():
    # Keep the run small so the test stays fast. The CLI does not yet
    # expose a flag to lower total_samples, so this only verifies that
    # a full run completes and reports the expected scenario.
    result = subprocess.run(
        [
            sys.executable, "-m", "mlperf_edu.cli",
            "run", "cloud", "--task", "llm-inference", "--scenario", "Offline",
        ],
        capture_output=True,
        text=True,
        env={**os.environ, "PYTHONPATH": "."},
    )
    assert result.returncode == 0
    assert "MLPerf EDU Cloud Benchmark Report" in result.stdout
    assert "Scenario: Offline" in result.stdout