Search Results for author: Sebastian Pineda Arango

Found 11 papers, 7 papers with code

Dynamic Post-Hoc Neural Ensemblers

1 code implementation • 6 Oct 2024 • Sebastian Pineda Arango, Maciej Janowski, Lennart Purucker, Arber Zela, Frank Hutter, Josif Grabocka

In this study, we explore employing neural networks as ensemble methods, emphasizing the significance of dynamic ensembling to adaptively leverage diverse model predictions (see the sketch after this entry).

Diversity
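To make the idea concrete, below is a minimal sketch of a dynamic (per-instance) neural ensembler in PyTorch. Everything here, including the DynamicEnsembler name, the layer sizes, and the softmax weighting, is an illustrative assumption rather than the paper's implementation:

```python
import torch
import torch.nn as nn

class DynamicEnsembler(nn.Module):
    """Hypothetical per-instance neural ensembler (illustrative sketch)."""

    def __init__(self, n_models: int, n_classes: int, hidden: int = 64):
        super().__init__()
        # Maps an instance's concatenated base predictions to one weight per model.
        self.weight_net = nn.Sequential(
            nn.Linear(n_models * n_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_models),
        )

    def forward(self, base_preds: torch.Tensor) -> torch.Tensor:
        # base_preds: (batch, n_models, n_classes) class probabilities.
        weights = torch.softmax(self.weight_net(base_preds.flatten(1)), dim=-1)
        # Instance-dependent convex combination of the base predictions.
        return (weights.unsqueeze(-1) * base_preds).sum(dim=1)

# Usage: combine the probabilistic outputs of 5 models on a 10-class task.
ensembler = DynamicEnsembler(n_models=5, n_classes=10)
base_preds = torch.softmax(torch.randn(32, 5, 10), dim=-1)
combined = ensembler(base_preds)  # (32, 10)
```

Because the mixture weights are computed from each instance's base predictions, the ensemble can rely on different base models for different inputs, which is the "dynamic" behavior the abstract emphasizes.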

Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How

no code implementations • 6 Jun 2023 • Sebastian Pineda Arango, Fabio Ferreira, Arlind Kadra, Frank Hutter, Josif Grabocka

With the ever-increasing number of pretrained models, machine learning practitioners are continuously faced with the questions of which pretrained model to use and how to finetune it for a new dataset.

Hyperparameter Optimization • Image Classification

Deep Pipeline Embeddings for AutoML

1 code implementation • 23 May 2023 • Sebastian Pineda Arango, Josif Grabocka

As a remedy, this paper proposes a novel neural architecture that captures the deep interaction between the components of a Machine Learning pipeline.

Automatic Machine Learning • Model Selection • Bayesian Optimization +2

Interpretable Mesomorphic Networks for Tabular Data

1 code implementation • 22 May 2023 • Arlind Kadra, Sebastian Pineda Arango, Josif Grabocka

Even though neural networks have long been deployed in applications involving tabular data, existing neural architectures are still not explainable by design.

Deep Learning

Deep Ranking Ensembles for Hyperparameter Optimization

no code implementations • 27 Mar 2023 • Abdus Salam Khazi, Sebastian Pineda Arango, Josif Grabocka

Automatically optimizing the hyperparameters of Machine Learning algorithms is one of the primary open questions in AI.

Hyperparameter Optimization • Learning-To-Rank

Transformers Can Do Bayesian Inference

1 code implementation • ICLR 2022 • Samuel Müller, Noah Hollmann, Sebastian Pineda Arango, Josif Grabocka, Frank Hutter

Our method restates the objective of posterior approximation as a supervised classification problem with a set-valued input: it repeatedly draws a task (or function) from the prior, draws a set of data points and their labels from it, and masks one of the labels. It then learns to make probabilistic predictions for the masked label based on the set-valued input of the remaining data points (see the sketch after this entry).

AutoML • Bayesian Inference +3
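The training procedure described above is straightforward to sketch. The toy prior, the mean-pooled set encoder, and all names below are assumptions made for brevity (the paper itself trains a Transformer on the set-valued input); the loop only mirrors the stated recipe: draw a task from the prior, draw labeled points, mask one label, and learn to predict it from the rest:

```python
import torch
import torch.nn as nn

def sample_task(n_points: int):
    """Toy prior (assumption): random linear decision boundaries in 2-D."""
    w = torch.randn(2)
    x = torch.randn(n_points, 2)
    y = (x @ w > 0).long()  # binary labels induced by the sampled function
    return x, y

class SetPredictor(nn.Module):
    """Predicts a masked label from the remaining labeled points.
    Mean-pooling stands in for the paper's Transformer to keep this short."""

    def __init__(self, d: int = 64):
        super().__init__()
        self.embed = nn.Linear(2 + 1, d)  # encode (x, y) pairs
        self.query = nn.Linear(2, d)      # encode the point with the masked label
        self.head = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 2))

    def forward(self, ctx_x, ctx_y, qry_x):
        ctx = self.embed(torch.cat([ctx_x, ctx_y.float().unsqueeze(-1)], -1)).mean(0)
        return self.head(torch.cat([ctx, self.query(qry_x)], -1))

model = SetPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):
    x, y = sample_task(n_points=16)        # draw a task from the prior
    logits = model(x[:-1], y[:-1], x[-1])  # mask the last label
    loss = loss_fn(logits.unsqueeze(0), y[-1:])  # learn to predict it
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training on many sampled tasks, a single forward pass produces probabilistic predictions for a new point conditioned on its labeled context.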

Transfer Learning for Bayesian HPO with End-to-End Meta-Features

no code implementations • 29 Sep 2021 • Hadi Samer Jomaa, Sebastian Pineda Arango, Lars Schmidt-Thieme, Josif Grabocka

As a result, our novel DKLM can learn contextualized dataset-specific similarity representations for hyperparameter configurations (see the sketch after this entry).

Hyperparameter Optimization • Transfer Learning
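For intuition, a deep kernel that contextualizes hyperparameter-configuration similarity with dataset meta-features might look like the sketch below. The DeepKernel name, the joint embedding, and the RBF form are hypothetical choices, not the paper's DKLM architecture:

```python
import torch
import torch.nn as nn

class DeepKernel(nn.Module):
    """Hypothetical deep kernel: similarity of HP configurations,
    contextualized by dataset meta-features."""

    def __init__(self, hp_dim: int, meta_dim: int, d: int = 32):
        super().__init__()
        # Embeds a configuration jointly with the dataset's meta-features.
        self.phi = nn.Sequential(
            nn.Linear(hp_dim + meta_dim, d), nn.ReLU(), nn.Linear(d, d)
        )
        self.log_lengthscale = nn.Parameter(torch.zeros(()))

    def forward(self, x1, x2, meta):
        # meta: (1, meta_dim) meta-features of the current dataset.
        z1 = self.phi(torch.cat([x1, meta.expand(x1.size(0), -1)], dim=-1))
        z2 = self.phi(torch.cat([x2, meta.expand(x2.size(0), -1)], dim=-1))
        # RBF kernel in the learned, dataset-conditioned embedding space.
        sq_dist = torch.cdist(z1, z2).pow(2)
        return torch.exp(-0.5 * sq_dist / self.log_lengthscale.exp())

# Usage: a (10, 10) similarity matrix over 10 configurations of 4 hyperparameters.
kernel = DeepKernel(hp_dim=4, meta_dim=8)
K = kernel(torch.rand(10, 4), torch.rand(10, 4), meta=torch.rand(1, 8))
```

Conditioning the embedding on meta-features is what makes the resulting similarities dataset-specific.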

Multimodal Meta-Learning for Time Series Regression

no code implementations • 5 Aug 2021 • Sebastian Pineda Arango, Felix Heinrich, Kiran Madhusudhanan, Lars Schmidt-Thieme

Recent work has shown the effectiveness of deep learning models such as Fully Convolutional Networks (FCNs) or Recurrent Neural Networks (RNNs) in dealing with Time Series Regression (TSR) problems.

Meta-Learning regression +2

HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML

1 code implementation • 11 Jun 2021 • Sebastian Pineda Arango, Hadi S. Jomaa, Martin Wistuba, Josif Grabocka

Hyperparameter optimization (HPO) is a core problem for the machine learning community and remains largely unsolved due to the significant computational resources required to evaluate hyperparameter configurations.

Hyperparameter Optimization • Transfer Learning
