HPOBench: Compare Multi-fidelity Optimization Algorithms with Ease

When researching and developing new hyperparameter optimization (HPO) methods, a good collection of benchmark problems, ideally relevant, realistic, and cheap to evaluate, is a valuable resource. While such collections exist for synthetic problems (COCO) or simple HPO problems (Bayesmark), to the best of our knowledge no comparable collection exists for multi-fidelity benchmarks. With ever-growing machine …
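To make the multi-fidelity idea concrete, here is a minimal, self-contained sketch (not HPOBench's actual API; the function names and the toy objective are invented for illustration) of the kind of benchmark such a collection provides: an objective that can be queried cheaply at low fidelity (e.g. few training epochs) and accurately at full fidelity.

```python
import math

def toy_multifidelity_objective(x: float, fidelity: int, max_fidelity: int = 100) -> float:
    """Toy multi-fidelity benchmark (illustrative only, not from HPOBench).

    The full-fidelity loss is (x - 0.5)^2; lower fidelities return a
    cheaper but biased approximation, mimicking partial training.
    """
    true_loss = (x - 0.5) ** 2
    # Bias shrinks to zero as fidelity approaches max_fidelity.
    bias = (1 - fidelity / max_fidelity) * 0.1 * math.cos(8 * x)
    return true_loss + bias

def evaluate_grid(fidelity: int, n: int = 11) -> dict:
    """Evaluate a small grid of configurations x in [0, 1] at one fidelity."""
    return {i / (n - 1): toy_multifidelity_objective(i / (n - 1), fidelity)
            for i in range(n)}

# A multi-fidelity optimizer would screen configurations cheaply at low
# fidelity and promote promising ones; here we just query full fidelity.
full = evaluate_grid(fidelity=100)
best_x = min(full, key=full.get)
```

A benchmark suite like the one the post argues for would wrap many such objectives (real, tabular, or surrogate-based) behind one interface, so different multi-fidelity algorithms can be compared on identical queries.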