no code implementations • 18 Feb 2025 • Difan Deng, Marius Lindauer
(ii) local tokens survive until the next global token appears.
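A minimal sketch of this eviction rule (our illustration, not the paper's implementation): local entries stay in the cache only until the next global token arrives, while global entries persist.

```python
# Hypothetical illustration of the stated rule: local tokens are kept
# only until the next global token appears; global tokens persist.

def update_cache(cache, token, is_global):
    """Append `token`; if it is global, evict all surviving local tokens."""
    if is_global:
        cache = [(t, g) for (t, g) in cache if g]  # keep global tokens only
    cache.append((token, is_global))
    return cache

cache = []
stream = [("t0", True), ("t1", False), ("t2", False), ("t3", True), ("t4", False)]
for tok, g in stream:
    cache = update_cache(cache, tok, g)

# After "t3" (global), the local tokens "t1" and "t2" have been evicted.
print(cache)  # [('t0', True), ('t3', True), ('t4', False)]
```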
no code implementations • 7 Jun 2024 • Difan Deng, Marius Lindauer
In this work, we propose a novel hierarchical neural architecture search approach for time series forecasting tasks.
no code implementations • 13 Jun 2023 • Alexander Tornede, Difan Deng, Theresa Eimer, Joseph Giovanelli, Aditya Mohan, Tim Ruhkopf, Sarah Segel, Daphne Theodorakopoulos, Tanja Tornede, Henning Wachsmuth, Marius Lindauer
The fields of both Natural Language Processing (NLP) and Automated Machine Learning (AutoML) have achieved remarkable results over the past years.
1 code implementation • 11 May 2022 • Difan Deng, Florian Karl, Frank Hutter, Bernd Bischl, Marius Lindauer
In contrast to common NAS search spaces, we designed a novel neural architecture search space covering various state-of-the-art architectures, allowing for an efficient macro-search over different DL approaches.
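As a hedged sketch of what such a macro-level search space can look like, the ConfigSpace fragment below uses a top-level categorical to select the architecture family, with family-specific hyperparameters active only under that choice. The family names and ranges are illustrative assumptions, not the paper's actual space, and the classic (pre-1.0) ConfigSpace API is assumed.

```python
from ConfigSpace import ConfigurationSpace
from ConfigSpace.hyperparameters import (
    CategoricalHyperparameter,
    UniformIntegerHyperparameter,
)
from ConfigSpace.conditions import EqualsCondition

cs = ConfigurationSpace(seed=0)

# Top-level macro choice over architecture families (illustrative names).
family = CategoricalHyperparameter("architecture", ["mlp", "rnn", "transformer"])
cs.add_hyperparameter(family)

# Family-specific hyperparameters, only active when their family is chosen.
rnn_layers = UniformIntegerHyperparameter("rnn:num_layers", 1, 4)
attn_heads = UniformIntegerHyperparameter("transformer:num_heads", 1, 8)
cs.add_hyperparameters([rnn_layers, attn_heads])
cs.add_condition(EqualsCondition(rnn_layers, family, "rnn"))
cs.add_condition(EqualsCondition(attn_heads, family, "transformer"))

# Sampling first decides the macro choice, then its conditional children.
print(cs.sample_configuration())
```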
no code implementations • 10 Nov 2021 • Difan Deng, Marius Lindauer
Because of its sample efficiency, Bayesian optimization (BO) has become a popular approach for dealing with expensive black-box optimization problems, such as hyperparameter optimization (HPO).
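A generic sketch of the BO loop the abstract refers to (standard expected improvement with a Gaussian-process surrogate; not the method proposed in this paper):

```python
# Generic Bayesian optimization sketch: fit a Gaussian-process surrogate
# to past evaluations and pick the next point by expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_blackbox(x):  # stands in for e.g. a full training run
    return np.sin(3 * x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))  # small initial design
y = np.array([expensive_blackbox(x[0]) for x in X])

candidates = np.linspace(-2, 2, 500).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement for minimization, guarding against sigma == 0.
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_blackbox(x_next[0]))

print(f"best value found: {y.min():.3f}")
```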
1 code implementation • 20 Sep 2021 • Marius Lindauer, Katharina Eggensperger, Matthias Feurer, André Biedenkapp, Difan Deng, Carolin Benjamins, Tim Ruhkopf, René Sass, Frank Hutter
Algorithm parameters, in particular hyperparameters of machine learning algorithms, can substantially impact their performance.
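Since this entry ships a code implementation (SMAC3), a rough usage sketch may help; it assumes the SMAC3 v2.x Python API, and the toy objective and parameter names are ours rather than from the paper:

```python
# Hedged SMAC3 usage sketch (v2.x API assumed); the objective is a toy
# quadratic standing in for an expensive model training run.
from ConfigSpace import ConfigurationSpace
from smac import HyperparameterOptimizationFacade, Scenario

def train(config, seed: int = 0) -> float:
    """Return the cost of a configuration; minimum at x = 2."""
    x = config["x"]
    return (x - 2.0) ** 2

# A single float hyperparameter "x" in [-5, 5].
cs = ConfigurationSpace({"x": (-5.0, 5.0)})

scenario = Scenario(cs, deterministic=True, n_trials=50)
smac = HyperparameterOptimizationFacade(scenario, train)
incumbent = smac.optimize()
print(incumbent)  # configuration with the best observed cost
```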
no code implementations • 13 Jul 2021 • Bernd Bischl, Martin Binder, Michel Lang, Tobias Pielok, Jakob Richter, Stefan Coors, Janek Thomas, Theresa Ullmann, Marc Becker, Anne-Laure Boulesteix, Difan Deng, Marius Lindauer
Most machine learning algorithms are configured by one or several hyperparameters that must be carefully chosen and often considerably impact performance.
1 code implementation • ICML Workshop AutoML 2021 • Julia Guerrero-Viu, Sven Hauns, Sergio Izquierdo, Guilherme Miotto, Simon Schrodi, André Biedenkapp, Thomas Elsken, Difan Deng, Marius Lindauer, Frank Hutter
Neural architecture search (NAS) and hyperparameter optimization (HPO) make deep learning accessible to non-experts by automatically finding a suitable deep neural network architecture and tuning the hyperparameters of the training pipeline.
1 code implementation • 15 Dec 2020 • Noor Awad, Gresa Shala, Difan Deng, Neeratyoy Mallik, Matthias Feurer, Katharina Eggensperger, André Biedenkapp, Diederick Vermetten, Hao Wang, Carola Doerr, Marius Lindauer, Frank Hutter
In this short note, we describe our submission to the NeurIPS 2020 BBO challenge.