
Review of the Year 2023 – AutoML Hannover

By the AutoML Hannover Team

The year 2023 was our most successful yet as a (still relatively young) AutoML group in Hannover. With the start of several big projects, including the ERC Starting Grant on interactive and explainable AutoML and a BMUV-funded project on Green AutoML, the group has grown and we were able […]

New Horizons in Parameter Regularization: A Constraint Approach

Authors: Jörg Franke, Michael Hefenbrock, Gregor Koehler, Frank Hutter

Introduction

In our recent preprint, we present a novel approach to parameter regularization for deep learning: Constrained Parameter Regularization (CPR), an alternative to traditional weight decay. Instead of applying a constant penalty uniformly to all parameters, we enforce an upper bound on a statistical […]
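
The excerpt only hints at the mechanism, so here is a minimal, illustrative sketch of constrained regularization in PyTorch. It assumes the constrained statistic is the squared L2 norm per parameter tensor and uses a plain multiplier update; the preprint's exact formulation (its choice of statistic and update rule) may differ.

```python
import torch

def cpr_step(params, lambdas, kappa=1.0, lr_lambda=0.1):
    """Simplified constrained-regularization step (illustrative only).

    Instead of a uniform weight-decay penalty, each parameter tensor p
    is penalized only via a multiplier that grows while its statistic
    s(p) = ||p||^2 exceeds the bound kappa, and shrinks otherwise.
    """
    reg = torch.zeros(())
    for i, p in enumerate(params):
        violation = p.pow(2).sum() - kappa          # s(p) - kappa
        # Lagrange-multiplier-style update, kept non-negative
        lambdas[i] = max(0.0, lambdas[i] + lr_lambda * violation.item())
        reg = reg + lambdas[i] * violation          # added to the loss
    return reg
```

The returned term would be added to the task loss before the backward pass, so a tensor is only pulled back while its statistic violates the bound, unlike weight decay, which penalizes all parameters at all times.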

Rethinking Performance Measures of RNA Secondary Structure Problems

TL;DR

In our NeurIPS workshop paper, we analyze different performance measures for the evaluation of RNA secondary structure prediction algorithms, showing that commonly used measures are flawed in certain settings. We then propose the Weisfeiler-Lehman graph kernel as a competent measure for performance assessment in the field.

RNA Secondary Structure Prediction

Ribonucleic acid (RNA) […]
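
As a rough illustration of how such a measure can be used, the sketch below builds graphs from dot-bracket structures (backbone edges plus base-pair edges) and compares them via Weisfeiler-Lehman subtree hashes in networkx. The graph encoding and the hash-overlap similarity are simplifying assumptions of ours, not the paper's exact kernel computation.

```python
from collections import Counter
import networkx as nx

def rna_graph(dot_bracket):
    """Graph of an RNA secondary structure in dot-bracket notation:
    backbone edges between neighboring bases plus base-pair edges."""
    g, stack = nx.Graph(), []
    for i, c in enumerate(dot_bracket):
        g.add_node(i, label=c)
        if i > 0:
            g.add_edge(i - 1, i)        # backbone
        if c == "(":
            stack.append(i)
        elif c == ")":
            g.add_edge(stack.pop(), i)  # base pair
    return g

def wl_similarity(g1, g2, iterations=3):
    """Crude WL-kernel-style similarity: overlap between the multisets
    of Weisfeiler-Lehman subtree hashes of the two graphs."""
    def bag(g):
        hashes = nx.weisfeiler_lehman_subgraph_hashes(
            g, node_attr="label", iterations=iterations)
        return Counter(h for hs in hashes.values() for h in hs)
    b1, b2 = bag(g1), bag(g2)
    return sum((b1 & b2).values()) / max(sum((b1 | b2).values()), 1)

pred = rna_graph("((..((...))..))")
truth = rna_graph("((..((....)).))")
print(wl_similarity(pred, truth))   # close to 1.0 for similar structures
```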

LC-PFN: Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks

Authors: Steven Adriaensen*, Herilalaina Rakotoarison*, Samuel Müller, and Frank Hutter

TL;DR

In our paper, we propose LC-PFN, a novel method for Bayesian learning curve extrapolation. LC-PFN is a prior-data-fitted network (PFN): a transformer, trained on synthetic learning curve data, that is capable of Bayesian learning curve extrapolation in a single forward pass. We show that our […]
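
The key to the single-forward-pass property is the training setup: the transformer is meta-trained on curves sampled from a synthetic prior, each split into an observed prefix (the in-context data) and future values to predict. A toy version of that data generation, assuming a simple saturating power-law prior rather than the paper's actual curve prior, might look as follows.

```python
import numpy as np

def sample_curve(rng, T=50):
    """Draw one synthetic learning curve from a toy prior: a
    saturating power law  f(t) = a - b * t^(-c)  plus noise."""
    a = rng.uniform(0.5, 1.0)          # asymptotic performance
    b = rng.uniform(0.1, a)            # initial gap to the asymptote
    c = rng.uniform(0.3, 2.0)          # convergence speed
    t = np.arange(1, T + 1)
    return a - b * t ** (-c) + rng.normal(0, 0.01, T)

def make_training_example(rng, T=50):
    """Split a sampled curve into an observed prefix (context) and the
    future values the network must predict -- one supervised example."""
    y = sample_curve(rng, T)
    cut = rng.integers(5, T - 5)
    return y[:cut], y[cut:]            # context, extrapolation targets

rng = np.random.default_rng(0)
context, targets = make_training_example(rng)
```

A PFN trained on a large number of such (context, targets) pairs learns to output an approximate posterior predictive distribution over the targets directly, so extrapolating a real curve at inference time costs just one forward pass.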

Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars

Authors: Simon Schrodi, Danny Stoll, Binxin Ru, Rhea Sanjay Sukthanker, Thomas Brox, and Frank Hutter

TL;DR

We take a functional view of neural architecture search that allows us to construct highly expressive search spaces based on context-free grammars, and show that we can efficiently find well-performing architectures.

NAS is great, but…

The neural architecture plays […]
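
To make the construction concrete, here is a toy example with a made-up grammar, far smaller than anything in the paper: nonterminals expand via production rules, and sampling a derivation yields one architecture expression from the search space.

```python
import random

# Toy context-free grammar over architecture terms (not the paper's):
# nonterminals map to production rules; other symbols are terminals.
GRAMMAR = {
    "ARCH":  [["BLOCK"], ["seq", "BLOCK", "ARCH"]],
    "BLOCK": [["conv3x3"], ["conv1x1"], ["res", "BLOCK", "BLOCK"]],
}

def sample(symbol="ARCH", rng=random):
    """Sample one derivation (an architecture expression) from the grammar."""
    if symbol not in GRAMMAR:            # terminal symbol
        return symbol
    rule = rng.choice(GRAMMAR[symbol])
    parts = [sample(s, rng) for s in rule]
    return parts[0] if len(parts) == 1 else "(" + " ".join(parts) + ")"

print(sample())   # e.g. (seq (res conv3x3 conv1x1) conv3x3)
```

Because every derivation is a tree over a fixed set of rules, such spaces are hierarchical by construction and can be searched and compared systematically.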

Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition

Deep learning is applied to a wide variety of socially consequential domains, e.g., credit scoring, fraud detection, hiring decisions, criminal recidivism, loan repayment, and face recognition, with many of these applications impacting people's lives more than ever, often in biased ways. Dozens of formal definitions of fairness have been proposed, and many algorithmic […]

Symbolic Explanations for Hyperparameter Optimization

Authors: Sarah Segel, Helena Graf, Alexander Tornede, Bernd Bischl, and Marius Lindauer

TL;DR

We propose to apply symbolic regression in a hyperparameter optimization setting to obtain explicit formulas that provide simple and interpretable explanations of the effects of hyperparameters on model performance.

HPO is great, but…

In the field of machine learning, hyperparameter optimization (HPO) […]
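
As a toy illustration of the idea, the sketch below fits a symbolic regressor (here gplearn, one possible library) on hyperparameter/performance pairs and prints the learned formula. The synthetic data and settings are our assumptions for illustration; the paper's setup, e.g. which data the symbolic regression is fit to, may differ.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Toy HPO data: X = (log10 learning rate, dropout), y = validation error.
rng = np.random.default_rng(1)
X = rng.uniform([-4, 0.0], [-1, 0.5], size=(200, 2))
y = (X[:, 0] + 2.5) ** 2 + 0.5 * X[:, 1] + rng.normal(0, 0.01, 200)

# Genetic-programming search for a short, interpretable formula;
# the parsimony coefficient penalizes overly long expressions.
sr = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul"),
                       parsimony_coefficient=0.01, random_state=0)
sr.fit(X, y)
print(sr._program)   # an explicit formula, e.g. add(mul(X0, X0), ...)
```

The payoff is the printed expression itself: unlike a black-box surrogate, it states in closed form how each hyperparameter drives performance.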

Self-Adjusting Bayesian Optimization with SAWEI

By Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, and Marius Lindauer

TL;DR

In BO, we self-adjust the exploration-exploitation trade-off in the acquisition function online, adapting to the problem landscape at hand.

Motivation

Bayesian optimization (BO) encompasses a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. However, BO itself has numerous design […]
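
For intuition, here is a simplified sketch of the mechanism: the Weighted Expected Improvement acquisition interpolates between an exploitation and an exploration term via a weight alpha, and that weight is adjusted online. The stagnation-based adjustment rule below is a crude stand-in for SAWEI's actual signal, which is based on the upper-bound regret.

```python
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, best, alpha):
    """Weighted Expected Improvement (minimization): alpha in [0, 1]
    trades off exploitation (improving on the incumbent `best`)
    against exploration (posterior uncertainty)."""
    z = (best - mu) / np.maximum(sigma, 1e-12)
    exploit = (best - mu) * norm.cdf(z)
    explore = sigma * norm.pdf(z)
    return alpha * exploit + (1 - alpha) * explore

def adjust_alpha(alpha, stagnating, step=0.1):
    """Crude stand-in for the self-adjustment: shift toward exploration
    while progress stalls, back toward exploitation otherwise."""
    return float(np.clip(alpha - step if stagnating else alpha + step, 0, 1))
```

With alpha fixed at 0.5 this reduces to standard EI; the point of the method is that no single fixed trade-off works across problem landscapes, hence the online adjustment.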

Experience-Driven Algorithm Selection: Making better and cheaper selection decisions

Authors: Tim Ruhkopf, Aditya Mohan, Difan Deng, Alexander Tornede, Frank Hutter, Marius Lindauer

TL;DR

We augment classical algorithm selection with multi-fidelity information, which we make non-myopic through meta-learning. This enables us, for the first time, to jointly interpret partial learning curves of varying lengths and to make good algorithm recommendations at low cost.

Why should […]
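
A minimal stand-in for the selection interface, under strong simplifying assumptions: each partial learning curve, whatever its length, is encoded into fixed-size features, and a hypothetical meta-model (trained offline on completed curves from other datasets) predicts final performance to rank the candidate algorithms. The paper's meta-learned, non-myopic approach is considerably richer than this.

```python
import numpy as np

def curve_features(curve):
    """Encode a partial learning curve of any length into fixed-size
    features: last value, slope over the tail, and observed length."""
    c = np.asarray(curve, dtype=float)
    tail = c[-min(5, len(c)):]
    slope = (tail[-1] - tail[0]) / max(len(tail) - 1, 1)
    return np.array([c[-1], slope, len(c)])

def select(partial_curves, meta_model):
    """Rank candidate algorithms by predicted final performance
    (higher is better) and return the most promising one's index."""
    X = np.stack([curve_features(c) for c in partial_curves])
    return int(np.argmax(meta_model.predict(X)))
```

Even this crude version shows the appeal: decisions are made from cheap partial evaluations rather than fully trained candidates.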

PFNs4BO: In-Context Learning for Bayesian Optimization

Can we replace the GP in BO with in-context learning? Absolutely. We achieve strong real-world performance on a variety of benchmarks with a PFN that relies solely on in-context learning. This is what we found in our ICML ’23 paper PFNs4BO: In-Context Learning for Bayesian Optimization. Our models are trained only […]
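
Schematically, the BO loop stays the same and only the surrogate changes: instead of refitting a GP each iteration, the observed points are fed to a trained PFN as context, and a single forward pass yields the predictive distribution at the candidate points. The pfn_posterior function below is a hypothetical stand-in for such a network, not an API from the paper's code.

```python
import numpy as np
from scipy.stats import norm

def bo_step(X_obs, y_obs, X_cand, pfn_posterior):
    """One BO iteration with a PFN surrogate (schematic, minimization).

    pfn_posterior is a hypothetical trained PFN: one forward pass on
    the observed data (the in-context examples) returns a predictive
    mean and standard deviation per candidate -- no GP fitting at all.
    """
    mu, sigma = pfn_posterior(X_obs, y_obs, X_cand)
    z = (y_obs.min() - mu) / np.maximum(sigma, 1e-12)
    ei = (y_obs.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return X_cand[np.argmax(ei)]   # next point to evaluate
```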
