no code implementations • 12 Sep 2024 • Ivan Ovinnikov, Eugene Bykovets, Joachim M. Buhmann
Inverse reinforcement learning methods aim to retrieve the reward function of a Markov decision process based on a dataset of expert demonstrations.
no code implementations • 4 Jul 2024 • Robin C. Geyer, Alessandro Torcinovich, João B. Carvalho, Alexander Meyer, Joachim M. Buhmann
Our findings suggest that representation quality is more closely related to the orthogonality of independent generative processes than to their disentanglement, offering a new direction for evaluating and improving unsupervised learning models.
no code implementations • 6 Jun 2024 • Omar G. Younis, Luca Corinzia, Ioannis N. Athanasiadis, Andreas Krause, Joachim M. Buhmann, Matteo Turchetta
Crop breeding is crucial in improving agricultural productivity while potentially decreasing land usage, greenhouse gas emissions, and water consumption.
1 code implementation • 18 Apr 2024 • Mengyuan Liu, Zhongbin Fang, Xia Li, Joachim M. Buhmann, Xiangtai Li, Chen Change Loy
With the emergence of large-scale models trained on diverse datasets, in-context learning has become a promising paradigm for multitasking, notably in natural language processing and image processing.
no code implementations • 10 Feb 2024 • Marc Bartholet, Taehyeon Kim, Ami Beuret, Se-Young Yun, Joachim M. Buhmann
Our study contributes significantly to the under-explored field of Federated Domain Generalization (FDG), setting a new benchmark for performance in this area.
1 code implementation • NeurIPS 2023 • João B. S. Carvalho, Mengtao Zhang, Robin Geyer, Carlos Cotrini, Joachim M. Buhmann
In this work, by leveraging tools from causal inference we attempt to increase the resilience of anomaly detection models to different kinds of distribution shifts.
no code implementations • 17 Aug 2023 • Ivan Ovinnikov, Joachim M. Buhmann
Imitation learning methods are used to infer a policy in a Markov decision process from a dataset of expert demonstrations by minimizing a divergence measure between the empirical state occupancy measures of the expert and the policy.
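The occupancy-matching objective described above can be illustrated with a minimal sketch (a hypothetical toy setup, not the paper's method): estimate discounted empirical state occupancy measures from expert and policy trajectories, then compute a divergence between them.

```python
import numpy as np

def empirical_occupancy(trajectories, n_states, gamma=0.99):
    """Discounted empirical state occupancy measure from sampled trajectories."""
    rho = np.zeros(n_states)
    for traj in trajectories:
        for t, s in enumerate(traj):
            rho[s] += gamma ** t
    rho /= rho.sum()  # normalize to a distribution over states
    return rho

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) with smoothing to avoid division by zero."""
    p = p + eps
    q = q + eps
    return float(np.sum(p * np.log(p / q)))

# toy example: the expert visits state 2 often, the learner policy is closer to uniform
expert_trajs = [[0, 1, 2, 2, 2], [0, 2, 2, 2, 2]]
policy_trajs = [[0, 1, 2, 0, 1], [1, 0, 2, 1, 0]]
rho_e = empirical_occupancy(expert_trajs, n_states=3)
rho_pi = empirical_occupancy(policy_trajs, n_states=3)
d = kl_divergence(rho_e, rho_pi)  # the divergence an imitation objective would minimize
```

In practice the divergence is minimized over policy parameters; KL stands in here for whichever divergence measure a given method uses.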
1 code implementation • 15 Jun 2023 • Lukas Klein, João B. S. Carvalho, Mennatallah El-Assady, Paolo Penna, Joachim M. Buhmann, Paul F. Jaeger
We propose a framework that utilizes interpretable disentangled representations for downstream-task prediction.
2 code implementations • NeurIPS 2023 • Zhongbin Fang, Xiangtai Li, Xia Li, Joachim M. Buhmann, Chen Change Loy, Mengyuan Liu
With the rise of large-scale models trained on broad data, in-context learning has become a new learning paradigm that has demonstrated significant potential in natural language processing and computer vision tasks.
no code implementations • 7 Oct 2022 • Eugene Bykovets, Yannick Metz, Mennatallah El-Assady, Daniel A. Keim, Joachim M. Buhmann
To overcome this, we formulate a Pareto optimization problem in which we simultaneously optimize for reward and OOD detection performance.
no code implementations • 26 Sep 2022 • Đorđe Miladinović, Kumar Shridhar, Kushal Jain, Max B. Paulus, Joachim M. Buhmann, Mrinmaya Sachan, Carl Allen
In principle, applying variational autoencoders (VAEs) to sequential data offers a method for controlled sequence generation, manipulation, and structured representation learning.
no code implementations • 22 Aug 2022 • Eugene Bykovets, Yannick Metz, Mennatallah El-Assady, Daniel A. Keim, Joachim M. Buhmann
Robustness to adversarial perturbations has been explored in many areas of computer vision.
1 code implementation • 24 Jun 2022 • Simon Föll, Alina Dubatovka, Eugen Ernst, Siu Lun Chau, Martin Maritsch, Patrik Okanovic, Gudrun Thäter, Joachim M. Buhmann, Felix Wortmann, Krikamol Muandet
To address this problem, we postulate that real-world distributions are composed of latent Invariant Elementary Distributions (I.E.D.) across different domains.
no code implementations • 29 Sep 2021 • Ivan Ovinnikov, Eugene Bykovets, Joachim M. Buhmann
Inverse reinforcement learning methods aim to retrieve the reward function of a Markov decision process based on a dataset of expert demonstrations.
no code implementations • 22 Mar 2021 • João B. S. Carvalho, João A. Santinha, Đorđe Miladinović, Joachim M. Buhmann
In clinical practice, regions of interest in medical imaging often need to be identified through a process of precise image segmentation.
1 code implementation • ICLR 2021 • Đorđe Miladinović, Aleksandar Stanić, Stefan Bauer, Jürgen Schmidhuber, Joachim M. Buhmann
We show that augmenting the decoder of a hierarchical VAE by spatial dependency layers considerably improves density estimation over baseline convolutional architectures and the state-of-the-art among the models within the same class.
no code implementations • 25 Jan 2021 • Luca Corinzia, Paolo Penna, Wojciech Szpankowski, Joachim M. Buhmann
The result follows from two main technical points: (i) the connection established between the MLE and the MMSE, using the first- and second-moment methods in the constrained signal space; (ii) a recovery regime for the MMSE that is stricter than the simple error-vanishing characterization given in the standard AoN, which is proved here as a general result.
no code implementations • 23 Nov 2020 • Luca Corinzia, Paolo Penna, Wojciech Szpankowski, Joachim M. Buhmann
In this work, we consider the problem of recovering a planted $k$-densest sub-hypergraph on $d$-uniform hypergraphs.
no code implementations • 13 Aug 2020 • Luca Corinzia, Fabian Laumer, Alessandro Candreva, Maurizio Taramasso, Francesco Maisano, Joachim M. Buhmann
The segmentation of the mitral valve annulus and leaflets constitutes a crucial first step in establishing a machine learning pipeline that can support physicians in performing multiple tasks, e.g., diagnosis of mitral valve diseases, surgical planning, and intraoperative procedures.
no code implementations • 24 Jun 2020 • Yatao Bian, Joachim M. Buhmann, Andreas Krause
We start with a thorough characterization of the class of continuous submodular functions, and show that continuous submodularity is equivalent to a weak version of the diminishing returns (DR) property.
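For reference, the two properties this entry relates can be sketched in standard notation (a paraphrase of the usual definitions in the continuous-submodularity literature, not text from the paper), for a function $f$ on a box $\mathcal{X} = \prod_i \mathcal{X}_i$:

```latex
% Continuous submodularity (\vee / \wedge are coordinate-wise max / min):
f(x) + f(y) \;\ge\; f(x \vee y) + f(x \wedge y) \qquad \forall\, x, y \in \mathcal{X}.

% Weak diminishing-returns (weak DR) property: for all x \le y, every
% coordinate i with x_i = y_i, and every k > 0,
f(x + k\, e_i) - f(x) \;\ge\; f(y + k\, e_i) - f(y).
```

The full DR property drops the restriction $x_i = y_i$; the equivalence claimed in this entry concerns the weaker variant.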
no code implementations • ICML 2020 • Aytunc Sahin, Yatao Bian, Joachim M. Buhmann, Andreas Krause
Submodular functions have been studied extensively in machine learning and data mining.
no code implementations • 14 Jun 2019 • Luca Corinzia, Ami Beuret, Joachim M. Buhmann
Although federated multi-task learning has been shown to be an effective paradigm for real-world datasets, it has so far been applied only to convex models.
no code implementations • 7 Jun 2019 • Đorđe Miladinović, Muhammad Waleed Gondal, Bernhard Schölkopf, Joachim M. Buhmann, Stefan Bauer
Sequential data often originates from diverse domains across which statistical regularities and domain specifics exist.
no code implementations • ICLR Workshop DeepGenStruct 2019 • Đorđe Miladinović, Waleed Gondal, Bernhard Schölkopf, Joachim M. Buhmann, Stefan Bauer
To learn robust cross-environment descriptions of sequences, we introduce disentangled state space models (DSSM).
1 code implementation • 3 Feb 2019 • Patrick Schwab, Lorenz Linhardt, Stefan Bauer, Joachim M. Buhmann, Walter Karlen
Estimating what would be an individual's potential response to varying levels of exposure to a treatment is of high practical relevance for several important fields, such as healthcare, economics and public policy.
no code implementations • 19 May 2018 • An Bian, Joachim M. Buhmann, Andreas Krause
Mean field inference in probabilistic models is generally a highly nonconvex problem.
3 code implementations • 12 Apr 2018 • Philippe Wenk, Alkis Gotovos, Stefan Bauer, Nico Gorbach, Andreas Krause, Joachim M. Buhmann
Parameter identification and comparison of dynamical systems is a challenging task in many fields.
no code implementations • NeurIPS 2017 • Stefan Bauer, Nico S. Gorbach, Djordje Miladinovic, Joachim M. Buhmann
Many real world dynamical systems are described by stochastic differential equations.
1 code implementation • NeurIPS 2017 • An Bian, Kfir Y. Levy, Andreas Krause, Joachim M. Buhmann
Concretely, we first devise a "two-phase" algorithm with $1/4$ approximation guarantee.
1 code implementation • NeurIPS 2017 • Nico S. Gorbach, Stefan Bauer, Joachim M. Buhmann
That is why, despite the high computational cost, numerical integration is still the gold standard in many applications.
1 code implementation • ICML 2017 • Andrew An Bian, Joachim M. Buhmann, Andreas Krause, Sebastian Tschiatschek
Our guarantees are characterized by a combination of the (generalized) curvature $\alpha$ and the submodularity ratio $\gamma$.
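Guarantees combining these two quantities typically take the following form in the greedy-maximization literature (a sketch in standard notation; the exact constants should be checked against the paper itself). For the greedy solution $S$ under a cardinality constraint and an optimal solution $S^{*}$:

```latex
f(S) \;\ge\; \frac{1}{\alpha}\left(1 - e^{-\alpha\gamma}\right) f(S^{*}),
```

where the curvature $\alpha \in [0, 1]$ quantifies how far $f$ is from being modular and the submodularity ratio $\gamma \in [0, 1]$ quantifies how far it is from being submodular; for $\alpha = \gamma = 1$ this recovers the classic $1 - 1/e$ bound.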
no code implementations • NeurIPS 2016 • Gabriel Krummenacher, Brian McWilliams, Yannic Kilcher, Joachim M. Buhmann, Nicolai Meinshausen
We show that the regret of Ada-LR is close to the regret of full-matrix AdaGrad, whose dependence on the dimension can be up to exponentially smaller than that of the diagonal variant.
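To make the contrast concrete, here is a minimal sketch of the two update rules being compared (generic AdaGrad mechanics on a toy quadratic, not the Ada-LR algorithm itself):

```python
import numpy as np

def adagrad_diagonal(grad_fn, x0, lr=0.5, steps=200, eps=1e-8):
    """Diagonal AdaGrad: per-coordinate step sizes from accumulated squared gradients."""
    x, s = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = grad_fn(x)
        s += g ** 2
        x -= lr * g / (np.sqrt(s) + eps)
    return x

def adagrad_full(grad_fn, x0, lr=0.5, steps=200, eps=1e-8):
    """Full-matrix AdaGrad: preconditions with the inverse square root of G_t = sum g g^T."""
    x = x0.copy()
    G = np.zeros((x0.size, x0.size))
    for _ in range(steps):
        g = grad_fn(x)
        G += np.outer(g, g)
        vals, vecs = np.linalg.eigh(G)  # G is symmetric PSD
        inv_sqrt = vecs @ np.diag(1.0 / (np.sqrt(np.maximum(vals, 0.0)) + eps)) @ vecs.T
        x -= lr * inv_sqrt @ g
    return x

# ill-conditioned quadratic f(x) = 0.5 x^T A x with minimizer at the origin
A = np.diag([100.0, 1.0])
grad = lambda x: A @ x
x_diag = adagrad_diagonal(grad, np.array([1.0, 1.0]))
x_full = adagrad_full(grad, np.array([1.0, 1.0]))
```

The full-matrix variant adapts to correlated gradient directions at $O(d^2)$ memory and an eigendecomposition per step, which is the cost that randomized approximations like Ada-LR aim to reduce.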
no code implementations • 21 Oct 2016 • Nico S. Gorbach, Stefan Bauer, Joachim M. Buhmann
The essence of gradient matching is to model the prior over state variables as a Gaussian process, which implies that the joint distribution given the ODEs and GP kernels is also Gaussian distributed.
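A minimal sketch of this construction (illustrative, with assumed kernel hyperparameters): place a GP with an RBF kernel over state observations; because differentiation is a linear operator, the derivative of the GP mean is available in closed form and can then be compared against the ODE right-hand side.

```python
import numpy as np

def rbf(t1, t2, sigma=1.0, ell=1.0):
    """RBF kernel matrix k(t, t') = sigma^2 exp(-(t - t')^2 / (2 ell^2))."""
    d = t1[:, None] - t2[None, :]
    return sigma ** 2 * np.exp(-d ** 2 / (2 * ell ** 2))

def rbf_dt1(t1, t2, sigma=1.0, ell=1.0):
    """Derivative of the RBF kernel with respect to its first argument."""
    d = t1[:, None] - t2[None, :]
    return -(d / ell ** 2) * rbf(t1, t2, sigma, ell)

# observations of a toy state trajectory x(t) = sin(t)
t = np.linspace(0.0, 2 * np.pi, 50)
y = np.sin(t)

K = rbf(t, t) + 1e-6 * np.eye(len(t))  # jitter for numerical stability
alpha = np.linalg.solve(K, y)

t_star = np.linspace(1.0, 5.0, 20)     # interior test points
dx_dt = rbf_dt1(t_star, t) @ alpha     # GP estimate of the state derivative
# gradient matching would compare dx_dt against f(x(t*), theta) from the ODE
```

The true derivative here is cos(t), so the GP-derived gradient can be checked directly; in gradient matching the same quantity is matched against the parametric ODE instead of a known ground truth.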
no code implementations • 4 Oct 2016 • Benjamin Fischer, Nico Gorbach, Stefan Bauer, Yatao Bian, Joachim M. Buhmann
Gaussian processes are powerful, yet analytically tractable models for supervised learning.
no code implementations • 17 Jun 2016 • Andrew An Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, Andreas Krause
Submodular continuous functions are a category of (generally) non-convex/non-concave functions with a wide spectrum of applications.
no code implementations • 2 Jun 2016 • Stefan Bauer, Nicolas Carion, Peter Schüffler, Thomas Fuchs, Peter Wild, Joachim M. Buhmann
Accurate and robust cell nuclei classification is the cornerstone for a wide range of tasks in digital and computational pathology.
1 code implementation • CVPR 2016 • Dmitry Laptev, Nikolay Savinov, Joachim M. Buhmann, Marc Pollefeys
This more efficient use of training data yields better performance on popular benchmark datasets with fewer parameters, compared to standard convolutional neural networks with dataset augmentation and to other baselines.
no code implementations • 31 Dec 2015 • Thomas J. Fuchs, Joachim M. Buhmann
The histological assessment of human tissue has emerged as the key challenge for detection and treatment of cancer.
no code implementations • CVPR 2015 • Dmitry Laptev, Joachim M. Buhmann
Many computer vision problems arise from information processing of data sources with nuisance variations such as scale, orientation, contrast, perspective foreshortening or, in medical imaging, staining and local warping.
no code implementations • NeurIPS 2014 • Brian McWilliams, Gabriel Krummenacher, Mario Lucic, Joachim M. Buhmann
Subsampling methods have been recently proposed to speed up least squares estimation in large scale settings.
no code implementations • NeurIPS 2013 • Brian McWilliams, David Balduzzi, Joachim M. Buhmann
Random views are justified by recent theoretical and empirical work showing that regression with random features closely approximates kernel regression, implying that random views can be expected to contain accurate estimators.
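The random-feature approximation referred to here is commonly instantiated with random Fourier features in the style of Rahimi and Recht (a sketch under the assumption of a Gaussian kernel, not code from the paper): the inner product of randomized cosine features approximates the exact kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, n_features=4000, sigma=1.0, rng=rng):
    """Random Fourier features z(x) with E[z(x)^T z(y)] = exp(-||x - y||^2 / (2 sigma^2))."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # frequencies ~ N(0, 1/sigma^2)
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)         # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = rff_features(X)
K_approx = Z @ Z.T  # randomized kernel estimate

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq / 2.0)  # exact Gaussian kernel, sigma = 1
err = np.abs(K_approx - K_exact).max()
```

Linear regression on Z then approximates Gaussian-kernel regression, which is the sense in which random views "can be expected to contain accurate estimators."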