HPO Benchmarks

To democratize research on Hyperparameter Optimization (HPO) and to obtain comparable empirical results, benchmarks are essential.

HPOBench

To enable reproducible research in HPO, HPOBench provides more than 100 real, tabular, and surrogate benchmarks, each available in a containerized version.
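As a minimal sketch of how such a benchmark is queried (the import path, constructor arguments, and task id below are assumptions for illustration; the exact modules are listed in the HPOBench documentation):

```python
# Minimal sketch of querying an HPOBench benchmark. The import path and
# constructor arguments are assumptions; consult the HPOBench documentation
# for the benchmarks available in the installed version.
from hpobench.container.benchmarks.ml.xgboost_benchmark import XGBoostBenchmark

benchmark = XGBoostBenchmark(task_id=167119)   # example OpenML task id

# Sample a configuration from the benchmark's search space (a ConfigSpace object).
config = benchmark.get_configuration_space(seed=1).sample_configuration()

# Evaluate it; with fidelity=None the benchmark is queried at its highest fidelity,
# while multi-fidelity benchmarks also accept a dictionary of fidelity values.
result = benchmark.objective_function(configuration=config, fidelity=None, rng=1)
print(result["function_value"], result["cost"])
```

Because the containerized benchmarks run inside their own environment, the same query yields the same result independent of the local software stack.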

JAHS-Bench-201

JAHS-Bench-201 is a benchmark for Joint Architecture and Hyperparameter Search (JAHS), the combination of neural architecture search and hyperparameter optimization. It is the first collection of surrogate benchmarks for JAHS and is built to facilitate research on multi-objective, cost-aware, and multi-fidelity optimization algorithms. JAHS-Bench-201 covers a larger space than previous benchmarks along both the architectural and the hyperparameter axes, with some of the hyperparameters being continuous, and stores 20 metrics per evaluation, more than any previous benchmark. It also provides full support for multiple fidelities, objectives, and tasks. JAHS-Bench-201 aims to democratize research on JAHS and to lower the barrier to entry of an extremely compute-intensive field.

Bansal, A., Stoll, D., Janowski, M., Zela, A., & Hutter, F. (2022). JAHS-Bench-201: A Foundation For Research On Joint Architecture And Hyperparameter Search. In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022) [pdf]
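An illustrative sketch of querying the surrogate at a chosen epoch fidelity (the argument and metric names follow the jahs_bench README and may differ in the installed version):

```python
# Sketch of querying the JAHS-Bench-201 surrogate; argument and metric names
# are taken from the project README and should be verified against the
# installed jahs_bench version.
import jahs_bench

benchmark = jahs_bench.Benchmark(task="cifar10", download=True)  # fetches surrogate data once

config = benchmark.sample_config()          # random point in the joint architecture + HP space
results = benchmark(config, nepochs=200)    # query the surrogate at the 200-epoch fidelity

# The result maps the queried epoch to a dictionary of metrics
# (validation accuracy, runtime, and other recorded quantities).
print(results[200]["valid-acc"])
```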

LCBench

The Learning Curve Benchmark (LCBench) consists of multiple tabular benchmarks containing per-epoch learning curves from training neural networks with different hyperparameter configurations on various OpenML datasets.
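A rough sketch of reading such learning curves from one of the released data files (the api module, file name, and tag string are assumptions based on the LCBench repository):

```python
# Sketch of querying LCBench's tabular data; the api module, file name, and
# tag string are assumptions based on the LCBench repository.
from api import Benchmark

bench = Benchmark("six_datasets_lw.json")   # path to a downloaded LCBench data file

datasets = bench.get_dataset_names()        # OpenML datasets covered by this file
# Per-epoch validation accuracy of configuration 0 on the first dataset:
curve = bench.query(dataset_name=datasets[0], tag="Train/val_accuracy", config_id=0)
print(curve)
```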

ACLib

ACLib is a benchmark collection for algorithm configuration.