
Beat auto-sklearn

[Image: Uncle Sam points at you and says "I want you to beat auto-sklearn"]

In this game, we challenge you to beat auto-sklearn, the winner of the recent ChaLearn AutoML challenge. We want to see whether a human (you) can rival an automated procedure for supervised classification.

In a nutshell, auto-sklearn uses state-of-the-art Bayesian optimization to configure a flexible machine learning pipeline implemented in scikit-learn. Your job is to configure that same pipeline yourself, using the form below.
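
For reference, this is roughly what letting auto-sklearn do the job looks like in code. This is a minimal sketch under assumptions: the autosklearn package is installed, the example dataset is arbitrary, and the 5-minute budget simply mirrors the evaluation time limit mentioned below; it is not the exact setup used by this challenge.

    # Minimal sketch: auto-sklearn searches the pipeline space on its own.
    # The dataset and time budgets below are illustrative assumptions.
    import autosklearn.classification
    from sklearn.datasets import load_digits
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    automl = autosklearn.classification.AutoSklearnClassifier(
        time_left_for_this_task=300,  # overall budget: 5 minutes
        per_run_time_limit=30,        # budget for each candidate pipeline
    )
    automl.fit(X_train, y_train)
    print(accuracy_score(y_test, automl.predict(X_test)))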

Please find further instructions at the bottom of this page.

Good luck!

1. Your Machine Learning Pipeline Configuration
2. Submit Configuration
3. View Leaderboard

Further remarks

DO NOT CHANGE THE DOWNLOADED FILE. Upload it to Codalab exactly as it is.

Your configuration will be automatically evaluated with a time limit of 5 minutes. Although we carefully reduced the space of configurations, it is still possible to hit the time or memory limit. Such runs are scored with -1; in that case, check the log files for error messages.

This website was built and tested with Firefox and does not work with Safari or other WebKit-based browsers. To see the hyperparameter information, you need a device with a mouse.

Instructions

  1. Get a Codalab ID here.
  2. Configure the machine learning pipeline with the form below (the info icons will display the meaning of the hyperparameters).
  3. Click the button to download the submission file.
  4. Upload the submission file to Codalab (here).
  5. Check the score on the leaderboard.
  6. If you have any questions, do not hesitate to ask in the forum.

The pipeline

In this game you are tuning a simple machine learning pipeline with three steps:

  1. Rescaling of the data, for example scaling each feature to a range between zero and one.
  2. Preprocessing, such as PCA or univariate feature selection.
  3. A classification algorithm, such as an SVM or a random forest.

All algorithms except xgradient_boosting come from the scikit-learn library, and you can learn more about them in scikit-learn's documentation. The eXtreme Gradient Boosting algorithm comes from the xgboost package.
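
To make the three steps concrete, here is a minimal, hand-configured sketch of such a pipeline in scikit-learn. The particular scaler, preprocessor, classifier, hyperparameter values, and example dataset are illustrative assumptions, not the challenge's actual search space.

    # One hand-picked instance of the three-step pipeline:
    # rescaling -> preprocessing -> classification.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler

    pipeline = Pipeline([
        ("rescaling", MinMaxScaler()),              # step 1: scale each feature to [0, 1]
        ("preprocessing", PCA(n_components=0.95)),  # step 2: keep 95% of the variance
        ("classifier", RandomForestClassifier(      # step 3: classifier and its hyperparameters
            n_estimators=100, random_state=1)),
    ])

    # Score this configuration with cross-validation on an example dataset.
    X, y = load_digits(return_X_y=True)
    print(cross_val_score(pipeline, X, y, cv=5).mean())

Because xgboost's XGBClassifier follows the scikit-learn estimator interface, it can be dropped into the last pipeline step in the same way.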

Organizers

The Machine Learning for Automated Algorithm Design group and ChaLearn.