In particular for AI algorithms (including machine learning, deep learning, evolutionary algorithms, and solvers for hard combinatorial problems such as SAT), it is crucial to determine a good configuration of parameter settings to achieve peak performance. Since an in-depth understanding of these high-dimensional, abstract black-box problems is often not available, manually tuning parameter configurations is a tedious, time-consuming, and error-prone task. Automated algorithm configuration (AC) systems were therefore proposed to free users from this task; they have demonstrated their practical applicability by achieving substantial speedups in a wide range of applications (e.g., up to 3,000x in SAT solving, 50x in MIP, 300x in AI planning, and 14x in ASP).
To this end, traditional AC systems first optimize the configuration on a set of problem instances with respect to a given performance metric; the resulting configurations are then applied to new instances. Although applying any kind of optimization approach (including evolutionary algorithms and model-based optimization) to AC might seem straightforward, AC systems face many challenges in practice: (i) configuration spaces may contain categorical and continuous parameters with hierarchical dependencies, (ii) a single evaluation can be very expensive, (iii) it is often infeasible to evaluate many configurations on all instances, and thus (iv) we may only observe partially censored performance estimates, since we cannot afford to run all evaluations to completion. In the tutorial, we will give an overview of (i) how to address these problems, (ii) how recent advances in the field have led to more efficient systems than ever before, (iii) how to apply these to different applications, including automated machine learning (AutoML) and SAT solving, and (iv) open challenges in the field.
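To make challenge (i) concrete, a configuration space with a hierarchical (conditional) parameter can be sketched in plain Python. The solver parameters below are illustrative assumptions, not taken from any real solver:

```python
import random

# Hypothetical configuration space for a SAT solver (names and ranges are
# illustrative): one categorical, one continuous, and one conditional
# integer parameter that is only active for a specific parent value.
SPACE = {
    "restart_strategy": ["luby", "geometric", "none"],  # categorical
    "luby_factor": (1, 512),       # integer; only active if restart_strategy == "luby"
    "clause_decay": (0.8, 0.999),  # continuous
}

def sample_configuration(rng: random.Random) -> dict:
    """Draw a random configuration, respecting the hierarchical dependency."""
    config = {
        "restart_strategy": rng.choice(SPACE["restart_strategy"]),
        "clause_decay": rng.uniform(*SPACE["clause_decay"]),
    }
    # 'luby_factor' is a child parameter: it only exists in configurations
    # where its parent 'restart_strategy' takes the value "luby".
    if config["restart_strategy"] == "luby":
        config["luby_factor"] = rng.randint(*SPACE["luby_factor"])
    return config
```

In practice, libraries such as ConfigSpace provide exactly this kind of declarative space definition with conditions; the sketch above only illustrates why an AC system cannot treat the space as a flat box of numbers.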
PPSN 2020 in Leiden, Netherlands & IJCAI 2020 in Yokohama, Japan (the latter was postponed to early 2021)
- Motivation of the algorithm configuration problem, with applications from different fields of AI, including machine learning, SAT solving, AI planning, and evolutionary algorithms.
- Problem Definition will cover the formal definition of the algorithm configuration problem, as well as the challenges it poses compared to standard optimization problems.
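Formally, the AC problem is usually stated as finding a configuration that minimizes the expected cost over a distribution (or set) of problem instances:

```latex
\theta^{*} \in \operatorname*{arg\,min}_{\theta \in \Theta} \; \mathbb{E}_{i \sim \mathcal{D}}\left[ c(\theta, i) \right]
```

where $\Theta$ is the configuration space, $\mathcal{D}$ the instance distribution, and $c(\theta, i)$ the cost (e.g., runtime or solution quality) of running the target algorithm with configuration $\theta$ on instance $i$.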
- Approaches for algorithm configuration will give a brief but comprehensive overview of state-of-the-art approaches.
- Hands-On on how to use the state-of-the-art algorithm configurator SMAC in practice. We will provide a Colab Jupyter notebook to offer an easy and fast way to gain hands-on experience.
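As a rough stand-in for what such a notebook does, the basic AC loop can be sketched in a few lines: evaluate each candidate configuration across a set of instances and keep the one with the best mean cost. Random search replaces SMAC's model-based search here, and the target function is a toy simulation rather than a real solver:

```python
import random

def target_function(config: dict, instance_seed: int) -> float:
    """Toy stand-in for running a solver with `config` on one instance.
    Returns a simulated runtime (lower is better); purely illustrative.
    Seeding per instance gives common random numbers across configurations."""
    rng = random.Random(instance_seed)
    base = (config["clause_decay"] - 0.95) ** 2 * 100 + 1.0
    return base + rng.uniform(0.0, 0.5)  # instance-dependent noise

def random_search(n_configs: int, instances: list, seed: int = 0) -> dict:
    """The AC loop minus the surrogate model: sample configurations,
    evaluate each on all instances, keep the best by mean cost."""
    rng = random.Random(seed)
    best_config, best_cost = None, float("inf")
    for _ in range(n_configs):
        config = {"clause_decay": rng.uniform(0.8, 0.999)}
        cost = sum(target_function(config, i) for i in instances) / len(instances)
        if cost < best_cost:
            best_config, best_cost = config, cost
    return best_config
```

SMAC replaces the blind sampling above with a random-forest surrogate model and intensification over instances, but the interface (a target function mapping a configuration and instance to a cost) is the same idea.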
- Best Practices on how to run algorithm configuration experiments in a scientific way (based on Eggensperger et al. 2019), with a short outlook on similar problems in automated machine learning.
- (Short Break — for IJCAI)
- Insights into the optimization landscapes will show the properties of these landscapes on real algorithm configuration benchmarks, revealing that the optimization problem might be simpler than expected (based on insights from Pushak et al. 2018).
- Warmstarting of algorithm configurators will show that we can improve AC performance by learning from previous applications (based on Lindauer et al. 2018).
- New racing mechanisms allow, for the first time, configurations with optimal runtime performance to be found in a bottom-up fashion, instead of top-down; in addition, they come with theoretical performance guarantees (based on Kleinberg et al. 2017-19).
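The bottom-up idea can be illustrated with a toy sketch in the spirit of these methods (not the actual algorithms from the papers): start with a small runtime cap, race all configurations under it, discard the slower half, and double the cap, so that expensive long runs are only spent on promising configurations:

```python
import random

def simulate_run(speed: float, instance_seed: int, cap: float) -> float:
    """Toy solver run: true runtime depends on the configuration's `speed`
    (lower is better); the run is censored at `cap`, so we only learn that
    the runtime is at least `cap` when it is exceeded."""
    runtime = speed * (1.0 + random.Random(instance_seed).random())
    return min(runtime, cap)

def race(speeds: list, instances: list, cap0: float = 1.0, rounds: int = 4) -> int:
    """Bottom-up racing sketch: evaluate all survivors under the current
    runtime cap, keep the faster half, then double the cap. Censored runs
    simply count as `cap`, so even cheap caps can rank configurations."""
    survivors = list(range(len(speeds)))
    cap = cap0
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        totals = {
            j: sum(simulate_run(speeds[j], i, cap) for i in instances)
            for j in survivors
        }
        survivors.sort(key=lambda j: totals[j])
        survivors = survivors[: max(1, len(survivors) // 2)]
        cap *= 2.0
    return survivors[0]
```

The design choice to grow the cap from below is what makes this "bottom-up": total time spent on bad configurations is bounded by the small early caps, instead of every candidate being run to a large global cutoff.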
- Dynamic Algorithm Configuration proposes the idea of dynamically updating a parameter configuration during the run of an algorithm, which can be solved by reinforcement learning (based on Biedenkapp et al. 2020)
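A minimal illustration of this reinforcement-learning framing (a toy problem of our own, not a benchmark from the paper): treat the current step of the algorithm run as the state, and learn with tabular Q-learning which of two parameter values to use at each step:

```python
import random

# Toy DAC setting (illustrative): an iterative algorithm runs for 10 steps,
# and at each step we pick a parameter value from {0, 1} (e.g. a low or high
# mutation rate). Early in the run the high value helps; later, the low one.
STEPS, ACTIONS = 10, [0, 1]

def reward(step: int, action: int) -> float:
    good = 1 if step < STEPS // 2 else 0  # the best action flips mid-run
    return 1.0 if action == good else 0.0

def q_learning(episodes: int = 2000, eps: float = 0.1, alpha: float = 0.2,
               seed: int = 0) -> list:
    """Tabular Q-learning over (step, action): the state is simply the
    current step of the run, as in the simplest dynamic-configuration setups."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(STEPS)]
    for _ in range(episodes):
        for step in range(STEPS):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[step][x])
            r = reward(step, a)
            nxt = max(q[step + 1]) if step + 1 < STEPS else 0.0
            q[step][a] += alpha * (r + nxt - q[step][a])
    # The learned dynamic schedule: one parameter value per step of the run.
    return [max(ACTIONS, key=lambda a: q[s][a]) for s in range(STEPS)]
```

A static configurator would have to commit to a single value for the whole run; the learned schedule above adapts the parameter over time, which is exactly the gain DAC aims for.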
- Open challenges will cover open research questions, such as how to integrate new empirical performance models based on deep neural networks (based on Eggensperger et al. 2018), or whether we can learn a good ordering of the instances on the fly (similar to self-paced learning).
Prerequisites for Attendees
- hands-on experience in running empirical evaluations and experimenting with parameters of their algorithms (e.g., the learning rate of a deep neural network, the step size of an evolutionary algorithm, the heuristic selection of an AI planning algorithm, …)
- basics of machine learning
- (optional) basics of reinforcement learning
- Colab notebook
- Video on YouTube
- Selected software:
- SMAC (Python)
- Golden Parameter Search (Python)
- irace (R)
- Benchmark Libraries: