
Experience-Driven Algorithm Selection: Making better and cheaper selection decisions

Authors: Tim Ruhkopf, Aditya Mohan, Difan Deng, Alexander Tornede, Frank Hutter, Marius Lindauer

TL;DR: We augment classical algorithm selection with multi-fidelity information, which we make non-myopic through meta-learning. This enables us, for the first time, to jointly interpret partial learning curves of varying lengths and make good algorithm recommendations at low cost. Why should […]
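To make the idea of jointly comparing partial learning curves of different lengths concrete, here is a minimal sketch. It replaces the meta-learned, non-myopic scorer described in the post with a simple log-linear extrapolation; the function names, curves, and candidate algorithms are purely illustrative and not part of the actual method.

```python
import numpy as np

def score_partial_curve(curve: np.ndarray, horizon: int) -> float:
    """Toy stand-in for a meta-learned scorer: extrapolate a partial
    validation-accuracy curve to the full budget with a log-linear fit
    and return the predicted final performance."""
    steps = np.arange(1, len(curve) + 1)
    # Fit acc ~ a * log(step) + b on whatever prefix has been observed so far.
    a, b = np.polyfit(np.log(steps), curve, deg=1)
    return a * np.log(horizon) + b

def recommend(partial_curves: dict[str, np.ndarray], horizon: int = 50) -> str:
    """Compare partial curves of varying lengths and pick the algorithm
    whose predicted final performance is highest."""
    scores = {algo: score_partial_curve(c, horizon)
              for algo, c in partial_curves.items()}
    return max(scores, key=scores.get)

# Curves of different lengths: cheap, low-fidelity observations per algorithm.
curves = {
    "random_forest": np.array([0.62, 0.66, 0.69]),
    "mlp":           np.array([0.55, 0.63, 0.70, 0.74, 0.76]),
    "svm":           np.array([0.71, 0.72]),
}
print(recommend(curves))
```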

Read More

Learning Activation Functions for Sparse Neural Networks: Improving Accuracy in Sparse Models

Authors: Mohammad Loni, Aditya Mohan, Mehdi Asadi, and Marius Lindauer

TL;DR: Optimizing the activation functions and hyperparameters of sparse neural networks helps us squeeze more performance out of them, which in turn helps with deploying models in resource-constrained scenarios. We propose a two-stage optimization pipeline to achieve this.

Motivation: Sparse Neural Networks (SNNs) – the greener and […]
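As a rough illustration of what a two-stage pipeline could look like, the sketch below first searches over activation functions and then tunes a learning rate for the best one. The objective is a placeholder and the structure is an assumption for illustration only; the paper's actual pipeline may stage the search differently.

```python
import random

random.seed(0)

ACTIVATIONS = ["relu", "swish", "tanh", "elu"]

def evaluate(activation: str, lr: float) -> float:
    """Placeholder objective. In practice this would train the sparse
    (pruned) network with the given activation and learning rate and
    return its validation accuracy."""
    base = {"relu": 0.70, "swish": 0.73, "tanh": 0.68, "elu": 0.71}[activation]
    return base - abs(lr - 1e-3) * 50  # toy penalty away from a "good" lr

# Stage 1: pick the most promising activation at a default learning rate.
best_act = max(ACTIVATIONS, key=lambda a: evaluate(a, lr=1e-3))

# Stage 2: tune the learning rate for that activation via random search.
best_lr = max((10 ** random.uniform(-4, -2) for _ in range(20)),
              key=lambda lr: evaluate(best_act, lr))

print(best_act, best_lr)
```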

Read More

Understanding AutoRL Hyperparameter Landscapes

Authors: Aditya Mohan, Carolin Benjamins, Konrad Wienecke, Alexander Dockhorn, and Marius Lindauer

TL;DR: We investigate hyperparameters in RL by building landscapes of algorithm performance for different hyperparameter values at different stages of training. Using these landscapes, we empirically demonstrate that adjusting hyperparameters during training can improve performance, which opens up new avenues to build better […]
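The landscape idea can be pictured as a grid: one axis is a hyperparameter value, the other is the training stage at which performance is measured. The sketch below builds such a grid with a placeholder evaluation function; none of the names or values come from the paper.

```python
import numpy as np

learning_rates = np.logspace(-4, -2, num=5)   # hyperparameter axis
checkpoints = [10_000, 50_000, 100_000]       # training-stage axis (env steps)

def mean_return(lr: float, steps: int) -> float:
    """Placeholder: in practice, train an RL agent with this learning rate
    for `steps` environment steps and report its mean evaluation return."""
    return float(-abs(np.log10(lr) + 3) * 10 + steps / 10_000)

# Landscape: rows are hyperparameter values, columns are training stages.
landscape = np.array([[mean_return(lr, s) for s in checkpoints]
                      for lr in learning_rates])

# The best hyperparameter value can change as training progresses, which is
# the kind of effect that motivates adjusting hyperparameters during training.
best_lr_per_stage = learning_rates[landscape.argmax(axis=0)]
print(best_lr_per_stage)
```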

Read More

CARL: A benchmark to study generalization in Reinforcement Learning

TL;DR: CARL is a benchmark for contextual RL (cRL), where the aim is to generalize over different contexts. With CARL we saw that varying the context makes learning more difficult, and that making the context explicit can facilitate learning. CARL makes the context that defines the environment's behavior visible and configurable. This […]
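To illustrate what a visible, configurable context means in practice, here is a minimal Gymnasium-based sketch that exposes CartPole's physics parameters as a context dictionary. This only illustrates the cRL idea under assumed parameter choices; it is not CARL's actual API.

```python
import gymnasium as gym

def make_contextual_cartpole(context: dict) -> gym.Env:
    """Create a CartPole environment whose dynamics are defined by an
    explicit, configurable context (illustrative only, not CARL's API)."""
    env = gym.make("CartPole-v1")
    # CartPole's physics parameters live on the unwrapped environment.
    for name, value in context.items():
        setattr(env.unwrapped, name, value)
    return env

# Varying the context changes the dynamics the agent must generalize over.
contexts = [
    {"gravity": 9.8, "length": 0.5},    # default physics
    {"gravity": 15.0, "length": 0.8},   # stronger gravity, longer pole
]

for ctx in contexts:
    env = make_contextual_cartpole(ctx)
    obs, info = env.reset(seed=0)
    # An agent that also observes `ctx` can, in principle, adapt its policy.
```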

Read More