ixAutoML: Interactive and Explainable AutoML

Automating machine learning helps users, developers, and researchers build new ML applications quickly. The output of AutoML tools, however, cannot always be explained by human intuition or expert knowledge, and experts therefore sometimes lack trust in them. We develop methods that improve the transparency and explainability of AutoML systems, increasing trust in AutoML tools and generating valuable insights into otherwise opaque optimization processes. Ways of explaining AutoML include:

  • Hyperparameter Importance: Which hyperparameters (or other design decisions) are globally important for improving the performance of ML systems? [Hutter et al. 2014, Watanabe et al. 2023] (See the sketch after this list.)
  • Automatic Ablation Studies: If an AutoML tool started from a given configuration (e.g., defined by the user or by the original developer of the ML algorithm at hand), which of the changes leading to the configuration returned by the AutoML tool were important for the observed performance improvement? [Biedenkapp et al. 2017]
  • Visualization of Hyperparameter Effects: How can we visualize the effect of changing hyperparameter settings, both locally and globally? [Hutter et al. 2014, Biedenkapp et al. 2018, Moosbauer et al. 2021]
  • Visualization of the Sampling Process: Which areas of the configuration space did an AutoML tool sample, when, and why? What performance can we expect there? [Biedenkapp et al. 2018]
  • Symbolic Explanations of Hyperparameter Effects: How can we obtain simple, interpretable formulas describing the effects of hyperparameters on the performance of the final model? [Segel et al. 2023]
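
To illustrate the first point, the following sketch fits a surrogate model on a set of evaluated configurations and estimates how strongly each hyperparameter influences performance. This is a simplified stand-in: the cited fANOVA approach [Hutter et al. 2014] decomposes the variance of the surrogate's predictions, whereas the sketch uses permutation importance on a random-forest surrogate as a rough proxy; the data and hyperparameter names are purely illustrative.

```python
# Illustrative sketch (not the fANOVA implementation from Hutter et al. 2014):
# fit a random-forest surrogate on observed (configuration, performance) pairs
# and use permutation importance as a rough proxy for global importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical optimization history: columns are hyperparameters
# (learning_rate, num_layers, dropout), rows are evaluated configurations.
X = rng.uniform(low=[1e-4, 1, 0.0], high=[1e-1, 8, 0.5], size=(200, 3))
# Synthetic validation error: depends strongly on learning_rate, weakly on dropout.
y = (np.log10(X[:, 0]) + 2.5) ** 2 + 0.1 * X[:, 2] + rng.normal(0, 0.05, 200)

# Surrogate model of performance as a function of the configuration.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling one hyperparameter
# degrade the surrogate's predictions?
result = permutation_importance(surrogate, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["learning_rate", "num_layers", "dropout"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In practice, such importance scores are computed from the optimization history that the AutoML tool collects anyway, so they come at little extra cost.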

Complementary to gaining insights from AutoML is interacting with AutoML, allowing users to provide further guidance on how to find good solutions. For example, some experts have developed an intuition for promising regions of the hyperparameter space.

  • Human Prior Knowledge of the Optimum: How can Bayesian optimization make efficient use of users’ prior knowledge about the location of the optimum to guide its search? [Souza et al. 2021, Hvarfner et al. 2022] Can such knowledge also be used efficiently and robustly in multi-fidelity optimization? [Mallik et al. 2023] (See the sketch below.)
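
As a rough illustration of how such prior knowledge can enter the search, the sketch below multiplies a standard Expected Improvement acquisition function by a user-specified prior over the optimum’s location, with an influence that decays as observations accumulate, in the spirit of the prior-weighted acquisition of [Hvarfner et al. 2022]. The toy objective, the prior location, and the constant beta are illustrative assumptions, not the cited method’s actual setup.

```python
# Minimal sketch of a prior-weighted acquisition function: the standard
# acquisition value is multiplied by the user's prior over the optimum,
# with an influence that decays as more observations arrive.
# Toy 1D problem; all names and constants are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                      # unknown function to minimize
    return np.sin(3 * x) + 0.5 * x

# A few initial observations of the objective.
X_obs = np.array([[0.2], [1.5], [2.8]])
y_obs = objective(X_obs).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_obs, y_obs)

X_cand = np.linspace(0, 3, 500).reshape(-1, 1)   # candidate configurations
mu, sigma = gp.predict(X_cand, return_std=True)

# Expected improvement for minimization.
best = y_obs.min()
z = (best - mu) / np.maximum(sigma, 1e-9)
ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# User prior over the location of the optimum, e.g. "the optimum is near x = 2.5".
prior = norm.pdf(X_cand.ravel(), loc=2.5, scale=0.3)

# Prior-weighted acquisition: the exponent beta / n shrinks the prior's
# influence as the number of observations n grows.
beta, n = 2.0, len(y_obs)
weighted = ei * prior ** (beta / n)

x_next = X_cand[np.argmax(weighted)]
print("next configuration to evaluate:", x_next)
```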
