no code implementations • 12 Mar 2025 • Richard D. Paul, Johannes Seiffarth, David Rügamer, Hanno Scharr, Katharina Nöh
Cell tracking is a key computational task in live-cell microscopy, but fully automated analysis of high-throughput imaging requires reliable and, thus, uncertainty-aware data analysis tools, as the amount of data recorded within a single experiment exceeds what humans are able to review manually.
1 code implementation • 5 Mar 2025 • Daniel Dold, Julius Kobialka, Nicolai Palm, Emanuel Sommer, David Rügamer, Oliver Dürr
Understanding the structure of neural network loss surfaces, particularly the emergence of low-loss tunnels, is critical for advancing neural network theory and practice.
1 code implementation • 10 Feb 2025 • Emanuel Sommer, Jakob Robnik, Giorgi Nozadze, Uros Seljak, David Rügamer
To tackle these challenges, we introduce an ensembling approach that leverages strategies from optimization and a recently proposed sampler called Microcanonical Langevin Monte Carlo (MCLMC) for efficient, robust and predictable sampling performance.
1 code implementation • 10 Feb 2025 • Yawei Li, David Rügamer, Bernd Bischl, Mina Rezaei
Fine-tuned large language models (LLMs) often exhibit overconfidence, particularly when trained on small datasets, resulting in poor calibration and inaccurate uncertainty estimates.
no code implementations • 4 Feb 2025 • Chris Kolb, Tobias Weber, Bernd Bischl, David Rügamer
Sparse regularization techniques are well-established in machine learning, yet their application in neural networks remains challenging due to the non-differentiability of penalties like the $L_1$ norm, which is incompatible with stochastic gradient descent.
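For context, here is a minimal numpy sketch of the classical proximal-gradient (ISTA) update, a standard workaround that replaces plain SGD when the objective carries an $L_1$ penalty. This is background, not the method proposed in the paper; the data and hyperparameters are illustrative.

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1: shrinks weights toward zero,
    # setting small entries exactly to zero.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def ista_step(w, X, y, lam, lr):
    # Gradient step on the smooth squared-error part ...
    grad = X.T @ (X @ w - y) / len(y)
    # ... followed by the proximal step for the non-smooth L1 penalty.
    return soft_threshold(w - lr * grad, lr * lam)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.normal(size=100)

w = np.zeros(20)
for _ in range(500):
    w = ista_step(w, X, y, lam=0.1, lr=0.01)
print(np.round(w, 2))  # most entries end up exactly zero
```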
1 code implementation • 28 Jan 2025 • Arik Reuter, Tim G. J. Rudner, Vincent Fortuin, David Rügamer
Transformers have emerged as the dominant architecture in the field of deep learning, with a broad range of applications and remarkable in-context learning (ICL) capabilities.
1 code implementation • 7 Oct 2024 • David Rügamer, Bernard X. W. Liew, Zainab Altai, Almond Stöcker
Semi-structured networks (SSNs) merge the structures familiar from additive models with deep neural networks, allowing the modeling of interpretable partial feature effects while capturing higher-order non-linearities at the same time.
no code implementations • 26 Jul 2024 • David Köhler, David Rügamer, Matthias Schmid
Our method is based on a novel concept termed stacked orthogonality, which ensures that the main effects capture as much functional behavior as possible and do not contain information explained by higher-order interactions.
no code implementations • 8 May 2024 • Lucas Kook, Chris Kolb, Philipp Schiele, Daniel Dold, Marcel Arpogaus, Cornelius Fritz, Philipp F. Baumann, Philipp Kopper, Tobias Pielok, Emilio Dorigatti, David Rügamer
Neural network representations of simple models, such as linear regression, are being studied increasingly to better understand the underlying principles of deep learning algorithms.
1 code implementation • 3 May 2024 • David Rügamer, Chris Kolb, Tobias Weber, Lucas Kook, Thomas Nagler
The complexity of black-box algorithms can lead to various challenges, including the introduction of biases.
no code implementations • 3 May 2024 • Moritz Herrmann, F. Julian D. Lange, Katharina Eggensperger, Giuseppe Casalicchio, Marcel Wever, Matthias Feurer, David Rügamer, Eyke Hüllermeier, Anne-Laure Boulesteix, Bernd Bischl
We warn against a common but incomplete understanding of empirical research in machine learning that leads to non-replicable results, makes findings unreliable, and threatens to undermine progress in the field.
1 code implementation • 15 Apr 2024 • Tobias Weber, Jakob Dexl, David Rügamer, Michael Ingrisch
The application of Tucker decomposition to the TS model substantially reduced the model parameters and FLOPs across various compression rates, with limited loss in segmentation accuracy.
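A hedged sketch of the basic mechanism with tensorly (the kernel shape and Tucker ranks are illustrative, not those from the paper): decomposing a 4D tensor into a small core plus factor matrices is what reduces the parameter count.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

# Illustrative 4D convolution kernel: (out_ch, in_ch, kH, kW).
W = np.random.randn(256, 128, 3, 3)

# Tucker ranks below the original dimensions yield the compression.
core, factors = tucker(tl.tensor(W), rank=[64, 32, 3, 3])

full = W.size
compressed = core.size + sum(f.size for f in factors)
print(f"parameters: {full} -> {compressed} "
      f"({compressed / full:.1%} of the original)")
```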
no code implementations • 19 Mar 2024 • Philipp Kopper, David Rügamer, Raphael Sonabend, Bernd Bischl, Andreas Bender
Empirical comparisons on synthetic and real-world data indicate that scoring rules can be successfully incorporated into model training and yield competitive predictive performance with established time-to-event models.
1 code implementation • 16 Mar 2024 • David Rundel, Julius Kobialka, Constantin von Crailsheim, Matthias Feurer, Thomas Nagler, David Rügamer
The recently developed Prior-Data Fitted Networks (PFNs) have shown very promising results for applications in low-data regimes.
no code implementations • 2 Feb 2024 • David Rügamer
In the current era of vast data and transparent machine learning, it is essential for techniques to operate at a large scale while providing a clear mathematical understanding of the method's internal workings.
1 code implementation • 2 Feb 2024 • Emanuel Sommer, Lisa Wimmer, Theodore Papamarkou, Ludwig Bothmann, Bernd Bischl, David Rügamer
A major challenge in sample-based inference (SBI) for Bayesian neural networks is the size and structure of the networks' parameter space.
no code implementations • 1 Feb 2024 • Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets.
no code implementations • 23 Jan 2024 • Daniel Dold, David Rügamer, Beate Sick, Oliver Dürr
To this end, we extend subspace inference for joint posterior sampling from a full parameter space for structured effects and a subspace for unstructured effects.
1 code implementation • 2 Nov 2023 • Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer
Methods: An orthogonalization is utilized to remove the influence of protected features (e.g., age, sex, race) in CXR embeddings, ensuring feature-independent results.
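A minimal sketch of the generic linear orthogonalization idea, residualizing embeddings against protected attributes via least squares; whether the paper uses exactly this construction is not stated in the snippet, and all names and shapes are illustrative.

```python
import numpy as np

def orthogonalize(Z, A):
    """Remove the part of embeddings Z that is linearly
    explained by protected attributes A (with intercept)."""
    A1 = np.column_stack([np.ones(len(A)), A])     # add intercept
    beta, *_ = np.linalg.lstsq(A1, Z, rcond=None)  # least-squares fit
    return Z - A1 @ beta                           # residual embeddings

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 3))                 # e.g., coded age, sex, race
Z = A @ rng.normal(size=(3, 64)) + rng.normal(size=(500, 64))
Z_orth = orthogonalize(Z, A)
# Residuals are (numerically) uncorrelated with the protected features.
print(np.abs((A - A.mean(0)).T @ Z_orth).max())
```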
1 code implementation • 3 Aug 2023 • Zheyu Zhang, Han Yang, Bolei Ma, David Rügamer, Ercong Nie
Large Language Models (LLMs) demonstrate remarkable performance on a variety of natural language understanding (NLU) tasks, primarily due to their in-context learning ability.
no code implementations • 7 Jul 2023 • Chris Kolb, Christian L. Müller, Bernd Bischl, David Rügamer
Additionally, our theory establishes results of independent interest regarding matching local minima for arbitrary, potentially unregularized, objectives.
no code implementations • 1 Jun 2023 • David Rügamer
Recent advances to combine structured regression models and deep neural networks for better interpretability, more expressiveness, and statistically valid uncertainty quantification demonstrate the versatility of semi-structured neural networks (SSNs).
1 code implementation • 25 May 2023 • Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer
Undersampling is a common method in Magnetic Resonance Imaging (MRI) to subsample the number of data points in k-space, reducing acquisition times at the cost of decreased image quality.
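A small numpy illustration of the general idea (the mask pattern and sizes are illustrative, not the paper's): keep only a subset of k-space lines and reconstruct with the inverse FFT, trading sampled fraction against image quality.

```python
import numpy as np

# Toy "image" standing in for an MRI slice.
img = np.zeros((128, 128)); img[32:96, 48:80] = 1.0

k = np.fft.fftshift(np.fft.fft2(img))      # k-space representation

# Keep every 4th phase-encoding line plus the low-frequency center
# (a common Cartesian undersampling pattern; details are illustrative).
mask = np.zeros_like(k, dtype=bool)
mask[::4, :] = True
mask[60:68, :] = True

recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))
print(f"sampled fraction: {mask.mean():.2f}")  # ~0.30 of k-space kept
```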
no code implementations • 14 Apr 2023 • Felix Ott, Lucas Heublein, David Rügamer, Bernd Bischl, Christopher Mutschler
In this work, we propose recurrent fusion networks to optimally align absolute and relative pose predictions, improving the final absolute pose estimate.
no code implementations • 6 Apr 2023 • Jonas Gregor Wiese, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Günnemann, David Rügamer
Bayesian inference in deep neural networks is challenging due to the high-dimensional, strongly multi-modal parameter posterior density landscape.
2 code implementations • 20 Mar 2023 • Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer
While recent advances in large-scale foundational models show promising results, their application to the medical domain has not yet been explored in detail.
no code implementations • 16 Jan 2023 • Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler
The goal of domain adaptation (DA) is to mitigate this domain shift problem by searching for an optimal feature transformation to learn a domain-invariant representation.
no code implementations • 4 Nov 2022 • Patrick Kaiser, Christoph Kern, David Rügamer
Both industry and academia have made considerable progress in developing trustworthy and responsible machine learning (ML) systems.
1 code implementation • 24 Oct 2022 • Ingo Ziegler, Bolei Ma, Ercong Nie, Bernd Bischl, David Rügamer, Benjamin Schubert, Emilio Dorigatti
While direct identification of proteasomal cleavage in vitro is cumbersome and low-throughput, it is possible to implicitly infer cleavage events from the termini of MHC-presented epitopes, which can be detected in large amounts thanks to recent advances in high-throughput MHC ligandomics.
1 code implementation • 14 Oct 2022 • Daniel Schalk, Bernd Bischl, David Rügamer
In this paper, we propose an algorithm for a distributed, privacy-preserving, and lossless estimation of generalized additive mixed models (GAMM) using component-wise gradient boosting (CWB).
2 code implementations • 31 Aug 2022 • Philipp Schiele, Christoph Berninger, David Rügamer
In this work, we introduce the ARMA cell, a simpler, modular, and effective approach for time series modeling in neural networks.
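A toy sketch of the ARMA(1,1) recursion behind such a cell: the next prediction depends on the previous input and the previous prediction error. The paper's actual parameterization and training setup differ; coefficients here are fixed, illustrative values.

```python
import numpy as np

class ARMACell:
    """Toy ARMA(1,1)-style recurrent unit (illustrative only)."""
    def __init__(self, alpha=0.5, beta=0.3, activation=np.tanh):
        self.alpha, self.beta, self.act = alpha, beta, activation

    def forward(self, x):
        preds, err = [], 0.0
        for t in range(1, len(x)):
            # AR term on the last input, MA term on the last error.
            y_hat = self.act(self.alpha * x[t - 1] + self.beta * err)
            err = x[t] - y_hat          # feed the residual back in
            preds.append(y_hat)
        return np.array(preds)

series = np.sin(np.linspace(0, 8 * np.pi, 200))
print(ARMACell().forward(series)[:5])
```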
no code implementations • 1 Aug 2022 • Felix Ott, Nisha Lakshmana Raichur, David Rügamer, Tobias Feigl, Heiko Neumann, Bernd Bischl, Christopher Mutschler
We show accuracy improvements for the APR-RPR and RPR-RPR tasks on aerial vehicles and hand-held devices.
no code implementations • 17 Jun 2022 • Andreas Klaß, Sven M. Lorenz, Martin W. Lauer-Schmaltz, David Rügamer, Bernd Bischl, Christopher Mutschler, Felix Ott
For many applications, analyzing the uncertainty of a machine learning model is indispensable.
no code implementations • 28 May 2022 • David Rügamer
One of the main limitations is the absence of interactions in these models, which are omitted not only for the sake of better interpretability but also due to untenable computational costs.
no code implementations • 25 May 2022 • David Rügamer, Andreas Bender, Simon Wiegrebe, Daniel Racek, Bernd Bischl, Christian L. Müller, Clemens Stachl
Here, we propose Factorized Structured Regression (FaStR) for scalable varying coefficient models.
1 code implementation • 7 Apr 2022 • Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler
To mitigate this domain shift problem, domain adaptation (DA) techniques search for an optimal transformation that maps the (current) input data from a source domain to a target domain, learning a domain-invariant representation that reduces domain discrepancy.
no code implementations • 16 Feb 2022 • Felix Ott, David Rügamer, Lucas Heublein, Bernd Bischl, Christopher Mutschler
We perform extensive evaluations on synthetic image and time-series data, as well as on offline handwriting recognition (HWR) data and online HWR data from sensor-enhanced pens for classifying written words.
no code implementations • 14 Feb 2022 • Felix Ott, David Rügamer, Lucas Heublein, Tim Hamann, Jens Barth, Bernd Bischl, Christopher Mutschler
While many offline HWR datasets exist, little data is available for developing OnHWR methods on paper, as collecting it requires hardware-integrated pens.
no code implementations • 12 Feb 2022 • Philipp Kopper, Simon Wiegrebe, Bernd Bischl, Andreas Bender, David Rügamer
Survival analysis (SA) is an active field of research that is concerned with time-to-event outcomes and is prevalent in many domains, particularly biomedical applications.
no code implementations • 9 Nov 2021 • Magdalena Mittermeier, Maximilian Weigert, David Rügamer
Europe was hit by several disastrous heat and drought events in recent summers.
no code implementations • 21 Oct 2021 • Tobias Weber, Michael Ingrisch, Matthias Fabritius, Bernd Bischl, David Rügamer
We propose a hazard-regularized variational autoencoder that supports straightforward interpretation of deep neural architectures in the context of survival analysis, a field highly relevant in healthcare.
no code implementations • 21 Oct 2021 • Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer
The application of deep learning in survival analysis (SA) allows utilizing unstructured and high-dimensional data types uncommon in traditional survival methods.
no code implementations • 15 Oct 2021 • David Rügamer, Philipp F. M. Baumann, Thomas Kneib, Torsten Hothorn
Probabilistic forecasting of time series is important in many applications and research fields.
no code implementations • 7 Oct 2021 • Daniel Schalk, Bernd Bischl, David Rügamer
Componentwise boosting (CWB), also known as model-based boosting, is a variant of gradient boosting that builds on additive models as base learners to ensure interpretability.
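A minimal numpy sketch of the base CWB procedure with univariate linear base learners (this illustrates the generic algorithm, not this paper's contribution): each iteration fits every feature to the current residuals and updates only the single best-fitting component, which is what yields sparse, interpretable additive fits.

```python
import numpy as np

def cwb(X, y, n_iter=200, lr=0.1):
    """Minimal componentwise L2-boosting with linear base learners."""
    n, p = X.shape
    coef, f = np.zeros(p), np.zeros(n)
    for _ in range(n_iter):
        r = y - f                                   # pseudo-residuals
        betas = X.T @ r / (X ** 2).sum(axis=0)      # per-feature LS fits
        sse = ((r[:, None] - X * betas) ** 2).sum(axis=0)
        j = np.argmin(sse)                          # best component wins
        coef[j] += lr * betas[j]
        f += lr * betas[j] * X[:, j]
    return coef

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
y = 2 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=300)
print(np.round(cwb(X, y), 2))  # mass concentrates on features 0 and 3
```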
no code implementations • 12 Sep 2021 • Stefan Coors, Daniel Schalk, Bernd Bischl, David Rügamer
Despite its restriction to an interpretable model space, our system is competitive in terms of predictive performance on most data sets while being more user-friendly and transparent.
no code implementations • 28 Jul 2021 • Ludwig Bothmann, Sven Strickroth, Giuseppe Casalicchio, David Rügamer, Marius Lindauer, Fabian Scheipl, Bernd Bischl
It should be openly accessible to everyone, with as few barriers as possible; even more so for key technologies such as Machine Learning (ML) and Data Science (DS).
2 code implementations • 6 Apr 2021 • David Rügamer, Chris Kolb, Cornelius Fritz, Florian Pfisterer, Philipp Kopper, Bernd Bischl, Ruolin Shen, Christina Bukas, Lisa Barros de Andrade e Sousa, Dominik Thalmeier, Philipp Baumann, Lucas Kook, Nadja Klein, Christian L. Müller
In this paper, we describe the implementation of semi-structured deep distributional regression, a flexible framework to learn conditional distributions based on the combination of additive regression models and deep networks.
1 code implementation • 6 Feb 2021 • Jann Goschenhofer, Rasmus Hvingelby, David Rügamer, Janek Thomas, Moritz Wagner, Bernd Bischl
Based on these adaptations, we explore the potential of deep semi-supervised learning in the context of time series classification by evaluating our methods on large public time series classification problems with varying amounts of labelled samples.
no code implementations • 3 Jan 2021 • Cornelius Fritz, Emilio Dorigatti, David Rügamer
The results corroborate the necessity of including mobility data and showcase the flexibility and interpretability of our approach.
no code implementations • 11 Nov 2020 • Philipp Kopper, Sebastian Pölsterl, Christian Wachinger, Bernd Bischl, Andreas Bender, David Rügamer
We propose a versatile framework for survival analysis that combines advanced concepts from statistics with deep learning.
no code implementations • 15 Oct 2020 • Philipp F. M. Baumann, Torsten Hothorn, David Rügamer
Learning the cumulative distribution function (CDF) of an outcome variable conditional on a set of features remains challenging, especially in high-dimensional settings.
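For context, the conditional transformation model idea behind this line of work, as standardly formulated in the CTM literature (not a claim about this paper's exact parameterization), is

$$F_{Y \mid X = x}(y) = F_Z\big(h(y \mid x)\big),$$

where $F_Z$ is a fixed reference CDF (e.g., the standard logistic) and $h(\cdot \mid x)$ is a transformation function monotone in $y$, so that learning the conditional CDF reduces to learning $h$.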
no code implementations • 14 Oct 2020 • David Rügamer, Florian Pfisterer, Bernd Bischl
We present neural mixture distributional regression (NMDR), a holistic framework to estimate complex finite mixtures of distributional regressions defined by flexible additive predictors.
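A short sketch of the likelihood such a framework optimizes; in NMDR the per-observation mixture weights, means, and scales would come from additive predictors, whereas here they are fixed toy values (all names illustrative).

```python
import numpy as np
from scipy.stats import norm

def mixture_nll(y, weights, means, sds):
    """Negative log-likelihood of a finite Gaussian mixture;
    weights (n, K) rows sum to 1, means/sds are (n, K)."""
    dens = norm.pdf(y[:, None], loc=means, scale=sds)    # (n, K) densities
    return -np.log((weights * dens).sum(axis=1)).sum()   # mix, then log

rng = np.random.default_rng(3)
y = np.concatenate([rng.normal(-2, 1, 100), rng.normal(3, 0.5, 100)])
n = len(y)
weights = np.full((n, 2), 0.5)
means = np.tile([-2.0, 3.0], (n, 1))
sds = np.tile([1.0, 0.5], (n, 1))
print(mixture_nll(y, weights, means, sds))
```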
1 code implementation • 27 Jun 2020 • Andreas Bender, David Rügamer, Fabian Scheipl, Bernd Bischl
The modeling of time-to-event data, also known as survival analysis, requires specialized methods that can deal with censoring and truncation, time-varying features and effects, and that extend to settings with multiple competing events.
2 code implementations • 13 Feb 2020 • David Rügamer, Chris Kolb, Nadja Klein
We propose a general framework to combine structured regression models and deep neural networks into a unifying network architecture.
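A minimal PyTorch sketch of the general idea, assuming only what the snippet states: an additive predictor that sums a linear (structured) part and a deep (unstructured) part. Layer sizes are illustrative; the published framework additionally uses an orthogonalization step to keep the structured effects identifiable, which this sketch omits.

```python
import torch
import torch.nn as nn

class SemiStructuredNet(nn.Module):
    """Additive combination of a structured linear model and a DNN."""
    def __init__(self, p_struct, p_deep):
        super().__init__()
        self.structured = nn.Linear(p_struct, 1)   # interpretable effects
        self.deep = nn.Sequential(                 # flexible remainder
            nn.Linear(p_deep, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, x_struct, x_deep):
        return self.structured(x_struct) + self.deep(x_deep)

model = SemiStructuredNet(p_struct=5, p_deep=20)
out = model(torch.randn(8, 5), torch.randn(8, 20))
print(out.shape)  # torch.Size([8, 1])
```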
1 code implementation • 4 May 2018 • David Rügamer, Sonja Greven
We propose a statistical inference framework for the component-wise functional gradient descent algorithm (CFGD), also known as $L_2$-Boosting, under a normality assumption for model errors.
1 code implementation • 30 May 2017 • Sarah Brockhaus, David Rügamer, Sonja Greven
In addition to mean regression, quantile regression models as well as generalized additive models for location scale and shape can be fitted with FDboost.
1 code implementation • 20 Sep 2016 • David Rügamer, Sarah Brockhaus, Kornelia Gentsch, Klaus Scherer, Sonja Greven
The link between different psychophysiological measures during emotion episodes is not well understood.