Auto-Sklearn – What happened in 2020

2020 is over. Time to look back at the major features we introduced to Auto-Sklearn. 1. Auto-sklearn 2.0: Obviously, the biggest innovation was the release of the next generation of our software, Auto-sklearn 2.0. We already described the details in an earlier blog post and paper, but here is the short summary: better meta-learning; we […]

Read More

NAS-Bench-301 and the Case for Surrogate NAS Benchmarks

By Julien Siems, Lucas Zimmer, Arber Zela, Jovita Lukasik, Margret Keuper & Frank Hutter

The Need for Realistic NAS Benchmarks

Neural Architecture Search (NAS) is a logical next step in representation learning as it removes human bias from architecture design, similar to deep learning removing human bias from feature engineering. As such, NAS has experienced […]

Read More

Learning Step-Size Adaptation in CMA-ES

In a Nutshell

In CMA-ES, the step size controls how fast or slow a population traverses through a search space. Large steps allow you to quickly skip over uninteresting areas (exploration), whereas small steps allow a more focused traversal of interesting areas (exploitation). Handcrafted heuristics usually trade off small and large steps given some measure […]
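The exploration/exploitation trade-off described above can be illustrated with a toy step-size heuristic. The sketch below is a plain (1+1)-ES using the classic 1/5th-success rule, one of the handcrafted heuristics the post alludes to; it is *not* the actual CMA-ES step-size mechanism, nor the learned adaptation the post is about, and all names and constants here are our own.

```python
import math
import random


def sphere(x):
    # simple test objective to minimize: sum of squares
    return sum(v * v for v in x)


def one_plus_one_es(f, x, sigma=1.0, iters=200, seed=0):
    """(1+1)-ES with the 1/5th-success-rule step-size heuristic.

    On success the step size grows (favoring exploration); on failure it
    shrinks (favoring exploitation). The factors are chosen so that sigma
    is stationary exactly when one in five mutations succeeds.
    """
    rng = random.Random(seed)
    fx = f(x)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = f(y)
        if fy <= fx:                    # success: accept, take larger steps
            x, fx = y, fy
            sigma *= math.exp(1 / 3)
        else:                           # failure: keep x, take smaller steps
            sigma *= math.exp(-1 / 12)
    return x, fx, sigma


best, fbest, sigma = one_plus_one_es(sphere, [5.0, -3.0])
```

Because the update is elitist, `fbest` never gets worse; the interesting part is how `sigma` automatically shrinks as the search closes in on the optimum, which is exactly the behavior the post proposes to learn rather than hand-design.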

Read More

Playing Games with Progressive Episode Lengths

Evolutionary Strategies for Reinforcement Learning

A framework for ES with limited episode lengths. Recently, evolutionary strategies (ES) showed surprisingly good performance as an alternative to deep reinforcement learning algorithms for playing Atari games [1, 2, 3]. ES directly optimizes the weights of deep policy networks, which encode a mapping from states to actions. Thus, an ES approach […]
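As a rough sketch of how ES can optimize policy weights from episode returns alone, here is a minimal OpenAI-ES-style loop on a toy stand-in objective. This is our own illustration, not the framework from the post: the function names are hypothetical, the "episode" is a quadratic stand-in rather than an Atari game, and there is no progressive episode-length schedule.

```python
import random


def train_es(episode_return, n_weights, iters=100, pop_size=50,
             sigma=0.1, lr=0.05, seed=0):
    """OpenAI-ES-flavoured training loop (a sketch, not the post's method).

    Each iteration perturbs the policy weights with Gaussian noise,
    scores every perturbation by its episode return, and moves the
    weights toward the perturbations that scored above average.
    """
    rng = random.Random(seed)
    w = [0.0] * n_weights
    for _ in range(iters):
        noises, returns = [], []
        for _ in range(pop_size):
            eps = [rng.gauss(0, 1) for _ in range(n_weights)]
            noises.append(eps)
            returns.append(
                episode_return([wi + sigma * e for wi, e in zip(w, eps)]))
        baseline = sum(returns) / pop_size   # variance-reducing baseline
        for i in range(n_weights):
            g = sum((r - n[i]) * 0 + (r - baseline) * n[i]
                    for r, n in zip(returns, noises))
            w[i] += lr * g / (pop_size * sigma)
    return w


# Toy stand-in for an episode: the return is higher the closer the
# (hypothetical) policy weights are to a hidden optimum.
target = [0.5, -1.0, 2.0]
episode = lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))

w = train_es(episode, n_weights=3)
```

Note that only the scalar returns are needed, never gradients of the policy, which is why each episode (and hence its length) dominates the cost of the search.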

Read More

Auto-Sklearn 2.0: The Next Generation

Since our initial release of auto-sklearn 0.0.1 in May 2016 and the publication of the NeurIPS paper “Efficient and Robust Automated Machine Learning” in 2015, we have spent a lot of time on maintaining, refactoring and improving code, but also on new research. Now, we’re finally ready to share the next version of our flagship AutoML system: Auto-Sklearn 2.0.

This new version is based on our experience from winning the second ChaLearn AutoML challenge@PAKDD’18 (see also the respective chapter in the AutoML book) and integrates improvements we thoroughly studied in our upcoming paper. Here are the main insights:


Read More


Dynamic Algorithm Configuration

Motivation

When designing algorithms, we want them to be as flexible as possible so that they can solve a wide range of problems. To solve a specific family of problems well, finding well-performing hyperparameter configurations requires either extensive domain knowledge or extensive resources. The latter is especially true if we want […]

Read More


Understanding and Robustifying Differentiable Architecture Search

By Arber Zela and Frank Hutter

Searching over neural network architectures was initially defined as a discrete optimization problem, which intrinsically required training and evaluating thousands of networks. This, of course, required a huge amount of computational power, which was available only to a few institutions. One-shot neural […]

Read More