1 code implementation • 6 Jan 2025 • Shi Bin Hoo, Samuel Müller, David Salinas, Frank Hutter
Foundation models have become popular in forecasting due to their ability to make accurate predictions, even with minimal fine-tuning on specific datasets.
no code implementations • 15 Nov 2024 • Kai Helli, David Schnurr, Noah Hollmann, Samuel Müller, Frank Hutter
To model shifts in these causal models, we use a secondary SCM that specifies changes in the parameters of the primary model.
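A minimal sketch of this two-level idea, assuming a toy linear primary SCM and a constant-drift secondary SCM (both illustrative choices, not the paper's implementation):

```python
# A toy two-level SCM: a secondary SCM (here, constant drift plus noise)
# evolves the parameters of a primary linear SCM between time steps, so
# the mechanism generating the data shifts over time. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sample_drifting_dataset(n_steps=5, n_per_step=100):
    w = rng.normal(size=2)                 # primary SCM: x2 = w[0]*x1 + w[1] + eps
    drift = rng.normal(scale=0.1, size=2)  # secondary SCM: how w changes per step
    rows = []
    for t in range(n_steps):
        x1 = rng.normal(size=n_per_step)
        x2 = w[0] * x1 + w[1] + rng.normal(scale=0.1, size=n_per_step)
        rows.append(np.stack([np.full(n_per_step, t), x1, x2], axis=1))
        w = w + drift + rng.normal(scale=0.01, size=2)  # apply the shift
    return np.concatenate(rows)            # columns: time step, cause, effect

print(sample_drifting_dataset().shape)     # (500, 3)
```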
1 code implementation • 2 Oct 2024 • Samuel Müller, Noah Hollmann, Frank Hutter
In this paper, we argue that a more useful interpretation of neural network behavior in this era is as an approximation of the true posterior, as defined by the data-generating process.
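To make this concrete, here is a toy example with an assumed discrete prior over three candidate functions; the posterior predictive that a trained network would ideally approximate can then be computed exactly by Bayes' rule:

```python
# Exact posterior predictive under a hypothetical three-function prior;
# a network trained on data from this prior should approximate this value.
import numpy as np

candidates = [lambda x: x, lambda x: x**2, np.sin]  # assumed prior support
prior = np.array([1/3, 1/3, 1/3])
noise = 0.1                                          # assumed observation noise

def posterior_predictive(xs, ys, x_query):
    # Gaussian log-likelihood of the observations under each candidate
    logliks = np.array([-0.5 * np.sum((ys - f(xs)) ** 2) / noise**2
                        for f in candidates])
    post = prior * np.exp(logliks - logliks.max())   # Bayes' rule
    post /= post.sum()
    return sum(p * f(x_query) for p, f in zip(post, candidates))

xs, ys = np.array([0.5, 1.0]), np.array([0.25, 1.0])  # consistent with x**2
print(posterior_predictive(xs, ys, 2.0))              # ~3.9, dominated by x**2
```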
1 code implementation • 27 May 2023 • Samuel Müller, Matthias Feurer, Noah Hollmann, Frank Hutter
In this paper, we use Prior-data Fitted Networks (PFNs) as a flexible surrogate for Bayesian Optimization (BO).
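A minimal sketch of the BO loop under this view, with a stand-in surrogate in place of a real PFN forward pass (the nearest-neighbour heuristic and all names here are illustrative, not the PFNs4BO interface):

```python
# BO loop with a set-to-distribution surrogate in place of a PFN forward
# pass; the surrogate here is a deliberately crude nearest-neighbour
# heuristic, purely to make the loop runnable.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def surrogate(X_obs, y_obs, X_cand):
    """Stand-in for a PFN: observed set in, predictive mean/std out."""
    d = np.abs(X_cand[:, None] - X_obs[None, :])
    mean = y_obs[d.argmin(axis=1)]       # value of the nearest observation
    std = d.min(axis=1) + 1e-3           # more uncertainty far from data
    return mean, std

def expected_improvement(mean, std, best):   # for minimisation
    z = (best - mean) / std
    return (best - mean) * norm.cdf(z) + std * norm.pdf(z)

f = lambda x: (x - 0.3) ** 2             # black-box objective
X = rng.uniform(0, 1, size=3)            # initial design
y = f(X)
for _ in range(10):
    cand = rng.uniform(0, 1, size=256)
    mean, std = surrogate(X, y, cand)
    x_next = cand[expected_improvement(mean, std, y.min()).argmax()]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print(X[y.argmin()])                     # should approach 0.3
```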
1 code implementation • NeurIPS 2023 • Noah Hollmann, Samuel Müller, Frank Hutter
Specifically, we introduce Context-Aware Automated Feature Engineering (CAAFE), a feature engineering method for tabular datasets that uses an LLM to iteratively generate additional, semantically meaningful features based on a description of the dataset.
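A minimal sketch of one iteration of such a loop, where `query_llm` is a hypothetical stand-in for the actual LLM call (this is not the released CAAFE implementation):

```python
# CAAFE-style accept/reject loop: the LLM proposes feature code from the
# dataset description, and a proposal is kept only if cross-validated
# accuracy improves.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def query_llm(description, columns):     # hypothetical LLM call
    return "df['bmi'] = df['weight_kg'] / df['height_m'] ** 2"

def caafe_step(df, target, description):
    def score(frame):
        return cross_val_score(RandomForestClassifier(random_state=0),
                               frame.drop(columns=[target]), frame[target],
                               cv=2).mean()
    base_score = score(df)
    candidate = df.copy()
    code = query_llm(description, list(df.columns))
    exec(code, {}, {"df": candidate})    # run the generated feature code
    new_score = score(candidate)
    return (candidate, new_score) if new_score > base_score else (df, base_score)

toy = pd.DataFrame({"weight_kg": [70, 80, 60, 90], "height_m": [1.7, 1.8, 1.6, 1.9],
                    "label": [0, 1, 0, 1]})
toy, acc = caafe_step(toy, "label", "patient records with weight and height")
```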
no code implementations • 16 Jul 2022 • Diane Wagner, Fabio Ferreira, Danny Stoll, Robin Tibor Schirrmeister, Samuel Müller, Frank Hutter
Self-Supervised Learning (SSL) has become a very active area of Deep Learning research, where it is widely used as a pre-training method for classification and other tasks.
6 code implementations • 5 Jul 2022 • Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter
We present TabPFN, a trained Transformer that performs supervised classification on small tabular datasets in less than a second, requires no hyperparameter tuning, and is competitive with state-of-the-art classification methods.
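For illustration, usage via the released `tabpfn` package follows the scikit-learn interface (shown as in early package versions; check the current documentation if it has changed since):

```python
# Fitting TabPFN on a small tabular classification task.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier

X, y = load_breast_cancer(return_X_y=True)          # 569 samples, 30 features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabPFNClassifier()     # pre-trained; no hyperparameter tuning needed
clf.fit(X_train, y_train)    # "fit" essentially stores the data as context
print(clf.predict_proba(X_test)[:3])
```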
1 code implementation • ICLR 2022 • Samuel Müller, Noah Hollmann, Sebastian Pineda Arango, Josif Grabocka, Frank Hutter
Our method restates the objective of posterior approximation as a supervised classification problem with a set-valued input: we repeatedly draw a task (or function) from the prior, draw a set of data points and their labels from it, mask one of the labels, and learn to make probabilistic predictions for it based on the remaining data points.
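A minimal sketch of this training loop, assuming a toy linear-function prior and a small permutation-invariant set encoder standing in for the paper's Transformer:

```python
# Prior-data fitting: draw a function from the prior, draw points, hold
# one out, and train to predict its label from the rest. Illustrative only.
import torch
import torch.nn as nn

gen = torch.Generator().manual_seed(0)

def sample_task(n=10):
    w = torch.randn(1, generator=gen)                # draw a function from the prior
    x = torch.randn(n, generator=gen)                # draw data points
    y = w * x + 0.1 * torch.randn(n, generator=gen)  # and their (noisy) labels
    return x, y

enc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Sequential(nn.Linear(65, 64), nn.ReLU(), nn.Linear(64, 2))  # mean, log-std
opt = torch.optim.Adam([*enc.parameters(), *head.parameters()], lr=1e-3)

for step in range(2000):
    x, y = sample_task()
    ctx = enc(torch.stack([x[:-1], y[:-1]], dim=1)).mean(0)  # summarise the rest
    mean, log_std = head(torch.cat([ctx, x[-1:]]))           # predict masked label
    loss = 0.5 * ((y[-1] - mean) / log_std.exp()) ** 2 + log_std  # Gaussian NLL
    opt.zero_grad(); loss.backward(); opt.step()
```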
no code implementations • 5 Feb 2021 • Samuel Müller, André Biedenkapp, Frank Hutter
To do this, we optimize the loss of the next training step.
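One way to read this, sketched below on a toy quadratic with the learning rate as the adapted hyperparameter (an assumed instantiation for illustration, not the paper's setup):

```python
# Adapting a hyperparameter (here, the learning rate) by differentiating
# through one training step and minimising the loss after that step.
import torch

w = torch.tensor(5.0, requires_grad=True)         # model parameter
log_lr = torch.tensor(-3.0, requires_grad=True)   # hyperparameter to adapt
hyper_opt = torch.optim.Adam([log_lr], lr=0.05)
loss_fn = lambda w: (w - 1.0) ** 2

for _ in range(100):
    g, = torch.autograd.grad(loss_fn(w), w, create_graph=True)
    w_next = w - log_lr.exp() * g        # one differentiable training step
    next_loss = loss_fn(w_next)          # loss of the *next* step
    hyper_opt.zero_grad(); next_loss.backward(); hyper_opt.step()
    w = w_next.detach().requires_grad_() # commit the step
print(w.item(), log_lr.exp().item())
```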
1 code implementation • 20 Oct 2019 • Samuel Müller, Andreas Vlachos
In this work, we adapt BPE for text-to-SQL generation.
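For reference, plain BPE itself works as follows (the general algorithm, not the paper's SQL-specific adaptation):

```python
# Byte-pair encoding: repeatedly merge the most frequent adjacent symbol
# pair across the corpus, recording the merges as the learned vocabulary.
from collections import Counter

def merge_pair(word, pair):
    out, i = [], 0
    while i < len(word):
        if word[i:i + 2] == pair:
            out.append(word[i] + word[i + 1]); i += 2
        else:
            out.append(word[i]); i += 1
    return tuple(out)

def learn_bpe(words, n_merges):
    vocab = Counter(tuple(w) for w in words)   # each word as a symbol tuple
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[a, b] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():
            new_vocab[merge_pair(word, best)] += freq
        vocab = new_vocab
    return merges

print(learn_bpe(["select", "selected", "selection"], 4))
```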
1 code implementation • 5 Jun 2017 • Jochen Scheuer, Ilai Schwartz, Samuel Müller, Qiong Chen, Ish Dhand, Martin B. Plenio, Boris Naydenov, Fedor Jelezko
Here we demonstrate the polarization and read out of a nuclear spin bath consisting of $^{13}$C nuclear spins in diamond by using a single nitrogen-vacancy (NV) center.
Quantum Physics