Search Results for author: Joachim M. Buhmann

Found 38 papers, 12 papers with code

Point-In-Context: Understanding Point Cloud via In-Context Learning

1 code implementation • 18 Apr 2024 • Mengyuan Liu, Zhongbin Fang, Xia Li, Joachim M. Buhmann, Xiangtai Li, Chen Change Loy

With the rise of large-scale models trained on diverse datasets, in-context learning has emerged as a promising paradigm for multitasking, notably in natural language processing and image processing.

In-Context Learning

Non-linear Fusion in Federated Learning: A Hypernetwork Approach to Federated Domain Generalization

no code implementations • 10 Feb 2024 • Marc Bartholet, Taehyeon Kim, Ami Beuret, Se-Young Yun, Joachim M. Buhmann

We propose an innovative federated algorithm, hFedF (hypernetwork-based Federated Fusion), designed to bridge the performance gap between generalization and personalization and to handle varying degrees of domain shift.

Domain Generalization • Federated Learning
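
The snippet does not spell out the hypernetwork mechanism, so the following is a minimal, hypothetical sketch of the general idea of hypernetwork-based personalization: a shared hypernetwork generates client-specific weights from a learned client embedding (all names and sizes are illustrative assumptions, not the authors' hFedF implementation):

```python
# Hypothetical sketch (not the authors' hFedF): a shared hypernetwork maps
# a learned client embedding to the weights of that client's linear head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNet(nn.Module):
    def __init__(self, num_clients, embed_dim=32, in_dim=10, out_dim=2):
        super().__init__()
        self.client_embed = nn.Embedding(num_clients, embed_dim)
        n_params = in_dim * out_dim + out_dim        # weight matrix + bias
        self.generator = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(), nn.Linear(128, n_params))
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, client_id, x):
        theta = self.generator(self.client_embed(client_id))
        w = theta[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = theta[self.in_dim * self.out_dim:]
        return F.linear(x, w, b)                     # client-specific prediction

# usage: predictions for client 0 on a batch of 8 examples
net = HyperNet(num_clients=5)
logits = net(torch.tensor(0), torch.randn(8, 10))
```

Because every client shares the hypernetwork's parameters, server-side aggregation fuses knowledge through the non-linear generator rather than by directly averaging client weights, which is the intuition behind "non-linear fusion" in the title.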

Invariant Anomaly Detection under Distribution Shifts: A Causal Perspective

1 code implementation • NeurIPS 2023 • João B. S. Carvalho, Mengtao Zhang, Robin Geyer, Carlos Cotrini, Joachim M. Buhmann

In this work, by leveraging tools from causal inference, we attempt to increase the resilience of anomaly detection models to different kinds of distribution shifts.

Anomaly Detection • Causal Inference

Regularizing Adversarial Imitation Learning Using Causal Invariance

no code implementations • 17 Aug 2023 • Ivan Ovinnikov, Joachim M. Buhmann

Imitation learning methods are used to infer a policy in a Markov decision process from a dataset of expert demonstrations by minimizing a divergence measure between the empirical state occupancy measures of the expert and the policy.

Imitation Learning
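
For readers unfamiliar with this formulation, the divergence-minimization view can be written out as follows (standard notation, reconstructed here rather than quoted from the paper):

```latex
% Occupancy measure of a policy \pi, and the occupancy-matching objective
\rho_\pi(s, a) \;=\; (1 - \gamma) \sum_{t=0}^{\infty} \gamma^{t}\, \Pr(s_t = s,\, a_t = a \mid \pi),
\qquad
\min_{\pi}\; D\!\big(\rho_\pi \,\|\, \rho_E\big),
```

where $\rho_E$ is the expert's empirical occupancy measure and $D$ is a divergence, e.g. the Jensen-Shannon divergence used in adversarial imitation learning.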

Explore In-Context Learning for 3D Point Cloud Understanding

1 code implementation • NeurIPS 2023 • Zhongbin Fang, Xiangtai Li, Xia Li, Joachim M. Buhmann, Chen Change Loy, Mengyuan Liu

With the rise of large-scale models trained on broad data, in-context learning has become a new learning paradigm that has demonstrated significant potential in natural language processing and computer vision tasks.

In-Context Learning

Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs

no code implementations • 26 Sep 2022 • Đorđe Miladinović, Kumar Shridhar, Kushal Jain, Max B. Paulus, Joachim M. Buhmann, Mrinmaya Sachan, Carl Allen

In principle, applying variational autoencoders (VAEs) to sequential data offers a method for controlled sequence generation, manipulation, and structured representation learning.

Representation Learning

Gated Domain Units for Multi-source Domain Generalization

1 code implementation • 24 Jun 2022 • Simon Föll, Alina Dubatovka, Eugen Ernst, Siu Lun Chau, Martin Maritsch, Patrik Okanovic, Gudrun Thäter, Joachim M. Buhmann, Felix Wortmann, Krikamol Muandet

To address this problem, we postulate that real-world distributions are composed of latent Invariant Elementary Distributions (I.E.D.s) across different domains.

Domain Generalization • Transfer Learning
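
A rough formalization of the stated postulate (notation mine, not necessarily the paper's): each domain's distribution is a mixture of shared invariant elementary distributions,

```latex
% Hypothesized decomposition of domain d into invariant elementary distributions
P_d(x, y) \;=\; \sum_{k=1}^{K} \beta_{d,k}\, P_k^{\mathrm{I.E.D.}}(x, y),
\qquad \beta_{d,k} \ge 0, \quad \sum_{k} \beta_{d,k} = 1,
```

so a gated unit can, in principle, express an unseen domain by re-weighting the same elementary components.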

Learning Invariant Reward Functions through Trajectory Interventions

no code implementations • 29 Sep 2021 • Ivan Ovinnikov, Eugene Bykovets, Joachim M. Buhmann

Inverse reinforcement learning methods aim to retrieve the reward function of a Markov decision process based on a dataset of expert demonstrations.

Reinforcement Learning (RL)

Spatially Dependent U-Nets: Highly Accurate Architectures for Medical Imaging Segmentation

no code implementations • 22 Mar 2021 • João B. S. Carvalho, João A. Santinha, Đorđe Miladinović, Joachim M. Buhmann

In clinical practice, regions of interest in medical imaging often need to be identified through a process of precise image segmentation.

Image Segmentation • Inductive Bias • +3

Spatial Dependency Networks: Neural Layers for Improved Generative Image Modeling

1 code implementation • ICLR 2021 • Đorđe Miladinović, Aleksandar Stanić, Stefan Bauer, Jürgen Schmidhuber, Joachim M. Buhmann

We show that augmenting the decoder of a hierarchical VAE with spatial dependency layers considerably improves density estimation over baseline convolutional architectures and over the state of the art among models of the same class.

Density Estimation
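
As an illustration of what a spatial dependency layer might look like, here is a deliberately simplified raster-scan recurrence over decoder feature maps (a sketch of the general idea only, not the authors' architecture):

```python
# Simplified spatial dependency sketch: each position is updated from its
# input feature and the already-computed top/left neighbours, in raster order.
import torch
import torch.nn as nn

class SpatialDependencyLayer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.cell = nn.Linear(3 * channels, channels)

    def forward(self, h):                          # h: (B, H, W, C)
        B, H, W, C = h.shape
        rows = []
        for i in range(H):
            row = []
            for j in range(W):
                top = rows[i - 1][j] if i > 0 else h.new_zeros(B, C)
                left = row[j - 1] if j > 0 else h.new_zeros(B, C)
                row.append(torch.tanh(
                    self.cell(torch.cat([h[:, i, j], top, left], dim=-1))))
            rows.append(row)
        # reassemble the (B, H, W, C) feature map from the computed cells
        return torch.stack([torch.stack(r, dim=1) for r in rows], dim=1)
```

The directed recurrence lets each spatial location condition on previously generated locations, which is the extra inductive bias a plain convolutional decoder lacks.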

On maximum-likelihood estimation in the all-or-nothing regime

no code implementations • 25 Jan 2021 • Luca Corinzia, Paolo Penna, Wojciech Szpankowski, Joachim M. Buhmann

The result follows from two main technical points: (i) a connection between the MLE and the MMSE, established via first- and second-moment methods in the constrained signal space; and (ii) a recovery regime for the MMSE that is stricter than the simple vanishing-error characterization of the standard AoN, proved here as a general result.
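
As background, the all-or-nothing (AoN) phenomenon referenced here is commonly stated as a sharp zero-one behavior of the normalized minimum mean squared error (a standard formulation, not quoted from this paper):

```latex
% All-or-nothing phenomenon at a critical signal-strength parameter \lambda_c
\lim_{n\to\infty} \mathrm{MMSE}_n(\lambda) \;=\;
\begin{cases}
1, & \lambda < \lambda_c \quad \text{(no recovery better than trivial)},\\[2pt]
0, & \lambda > \lambda_c \quad \text{(asymptotically perfect recovery)}.
\end{cases}
```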

Statistical and computational thresholds for the planted $k$-densest sub-hypergraph problem

no code implementations • 23 Nov 2020 • Luca Corinzia, Paolo Penna, Wojciech Szpankowski, Joachim M. Buhmann

In this work, we consider the problem of recovering a planted $k$-densest sub-hypergraph on $d$-uniform hypergraphs.

Community Detection
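
Schematically, the recovery problem can be stated as follows (my paraphrase of the standard planted model, not the paper's exact definition):

```latex
% Planted k-densest sub-hypergraph: indicators over d-sets e of vertices
A_e \sim \mathrm{Bern}(p) \ \text{ if } e \subseteq S,
\qquad
A_e \sim \mathrm{Bern}(q) \ \text{ if } e \not\subseteq S,
\qquad p > q, \quad |S| = k,
```

and the question is for which parameter regimes $(n, k, d, p, q)$ the hidden set $S$ is recoverable statistically, and at what computational cost.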

Neural collaborative filtering for unsupervised mitral valve segmentation in echocardiography

no code implementations • 13 Aug 2020 • Luca Corinzia, Fabian Laumer, Alessandro Candreva, Maurizio Taramasso, Francesco Maisano, Joachim M. Buhmann

Segmentation of the mitral valve annulus and leaflets is a crucial first step towards a machine learning pipeline that can support physicians in multiple tasks, e.g., diagnosis of mitral valve diseases, surgical planning, and intraoperative procedures.

Collaborative Filtering • Segmentation

Continuous Submodular Function Maximization

no code implementations • 24 Jun 2020 • Yatao Bian, Joachim M. Buhmann, Andreas Krause

We start with a thorough characterization of the class of continuous submodular functions and show that continuous submodularity is equivalent to a weak version of the diminishing returns (DR) property.
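
For reference, the two properties being related can be written as follows (standard definitions; the paper's precise "weak DR" condition restricts the second inequality to certain coordinates):

```latex
% Continuous submodularity (lattice form) and the DR property
f(x) + f(y) \;\ge\; f(x \vee y) + f(x \wedge y) \quad \forall\, x, y
\qquad \text{(submodular)},
\\[4pt]
f(x + a e_i) - f(x) \;\ge\; f(y + a e_i) - f(y)
\quad \forall\, x \le y,\; a > 0,\; i
\qquad \text{(DR)},
```

where $\vee$ and $\wedge$ denote coordinate-wise maximum and minimum. DR implies submodularity, and the paper's characterization shows submodularity is equivalent to a weaker, coordinate-restricted form of DR.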

Variational Federated Multi-Task Learning

no code implementations • 14 Jun 2019 • Luca Corinzia, Ami Beuret, Joachim M. Buhmann

Although federated multi-task learning has been shown to be an effective paradigm for real-world datasets, it has so far been applied only to convex models.

Federated Learning • Multi-Task Learning • +1

Disentangled State Space Representations

no code implementations • 7 Jun 2019 • Đorđe Miladinović, Muhammad Waleed Gondal, Bernhard Schölkopf, Joachim M. Buhmann, Stefan Bauer

Sequential data often originates from diverse domains across which statistical regularities and domain specifics exist.

Regression • Transfer Learning

Learning Counterfactual Representations for Estimating Individual Dose-Response Curves

1 code implementation • 3 Feb 2019 • Patrick Schwab, Lorenz Linhardt, Stefan Bauer, Joachim M. Buhmann, Walter Karlen

Estimating what would be an individual's potential response to varying levels of exposure to a treatment is of high practical relevance for several important fields, such as healthcare, economics and public policy.

Counterfactual • Model Selection

Optimal DR-Submodular Maximization and Applications to Provable Mean Field Inference

no code implementations • 19 May 2018 • An Bian, Joachim M. Buhmann, Andreas Krause

Mean field inference in probabilistic models is generally a highly nonconvex problem.
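As a reminder of why this objective is nonconvex, the generic mean-field variational objective over a fully factorized distribution takes the form (standard presentation, not the paper's exact notation):

```latex
% Mean-field ELBO with factorized q(x) = \prod_i q_i(x_i): maximize
\mathcal{L}(q) \;=\; \mathbb{E}_{q}\big[\log p(x)\big] \;+\; \sum_{i} H(q_i),
```

which is multilinear, hence generally nonconvex, in the factors $q_i$; for binary models this objective lives on the hypercube $[0,1]^n$, which is where DR-submodular maximization comes into play.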

Scalable Variational Inference for Dynamical Systems

1 code implementation • NeurIPS 2017 • Nico S. Gorbach, Stefan Bauer, Joachim M. Buhmann

That is why, despite the high computational cost, numerical integration is still the gold standard in many applications.

Numerical Integration • Variational Inference

Scalable Adaptive Stochastic Optimization Using Random Projections

no code implementations • NeurIPS 2016 • Gabriel Krummenacher, Brian McWilliams, Yannic Kilcher, Joachim M. Buhmann, Nicolai Meinshausen

We show that the regret of Ada-LR is close to the regret of full-matrix AdaGrad, which can have an up to exponentially smaller dependence on the dimension than the diagonal variant.

Dimensionality Reduction • Stochastic Optimization
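
The construction below is only a rough illustration of the idea of full-matrix-style AdaGrad with randomly projected gradients; Ada-LR's actual sketching and update rules are given in the paper, and the lift-back step here is a simplification:

```python
# Illustrative AdaGrad variant: gradients are sketched into k dimensions
# before accumulating the (k x k) second-moment matrix, instead of (d x d).
import numpy as np

def adagrad_randproj(grad_fn, x0, k=20, lr=0.1, eps=1e-6, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    d = x0.size
    Pi = rng.normal(size=(k, d)) / np.sqrt(k)    # random projection matrix
    G = np.zeros((k, k))                         # sketched gradient history
    x = x0.astype(float).copy()
    for _ in range(steps):
        g = grad_fn(x)
        gs = Pi @ g                              # k-dim sketch of the gradient
        G += np.outer(gs, gs)
        w, V = np.linalg.eigh(G + eps * np.eye(k))
        precond = V @ ((V.T @ gs) / np.sqrt(w))  # G^{-1/2} @ gs
        x -= lr * (Pi.T @ precond)               # simplified lift back to R^d
    return x

# example: minimize f(x) = ||x||^2 / 2 in 500 dimensions
x_opt = adagrad_randproj(lambda x: x, np.ones(500))
```

The point of the sketch is the cost model: the expensive object is the $k \times k$ matrix rather than the full $d \times d$ one, so adaptive full-matrix-style preconditioning becomes feasible in high dimensions.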

Mean-Field Variational Inference for Gradient Matching with Gaussian Processes

no code implementations • 21 Oct 2016 • Nico S. Gorbach, Stefan Bauer, Joachim M. Buhmann

The essence of gradient matching is to model the prior over the state variables as a Gaussian process, which implies that the joint distribution given the ODEs and GP kernels is also Gaussian.

Gaussian Processes • Variational Inference
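
The Gaussianity claim follows from the linearity of differentiation: if the states have a GP prior, the states and their time derivatives are jointly Gaussian. In standard gradient-matching notation (reconstructed, not quoted from the paper):

```latex
% GP prior over states implies joint Gaussianity of states and derivatives
x(t) \sim \mathcal{GP}\big(0,\, k(t, t')\big)
\;\Longrightarrow\;
\begin{pmatrix} x \\ \dot{x} \end{pmatrix}
\sim \mathcal{N}\!\left(0,\;
\begin{pmatrix}
k(t, t') & \partial_{t'} k(t, t')\\[2pt]
\partial_{t} k(t, t') & \partial_{t}\partial_{t'} k(t, t')
\end{pmatrix}\right),
```

and gradient matching penalizes the discrepancy between the GP-implied derivative $\dot{x}$ and the ODE right-hand side $f(x, \theta)$, which sidesteps numerical integration.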

Guaranteed Non-convex Optimization: Submodular Maximization over Continuous Domains

no code implementations • 17 Jun 2016 • Andrew An Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, Andreas Krause

Submodular continuous functions are a category of (generally) non-convex/non-concave functions with a wide spectrum of applications.

Data Summarization • Energy Management • +1

Multi-Organ Cancer Classification and Survival Analysis

no code implementations • 2 Jun 2016 • Stefan Bauer, Nicolas Carion, Peter Schüffler, Thomas Fuchs, Peter Wild, Joachim M. Buhmann

Accurate and robust cell nuclei classification is the cornerstone for a wide range of tasks in digital and computational pathology.

Classification • General Classification • +3

TI-POOLING: transformation-invariant pooling for feature learning in Convolutional Neural Networks

1 code implementation • CVPR 2016 • Dmitry Laptev, Nikolay Savinov, Joachim M. Buhmann, Marc Pollefeys

This more efficient use of training data results in better performance on popular benchmark datasets with a smaller number of parameters, compared to standard convolutional neural networks with dataset augmentation and to other baselines.
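
The core TI-POOLING operation lends itself to a compact sketch: apply a shared feature extractor to each transformed copy of the input and take the element-wise maximum over transformations (layer sizes below are illustrative, not those of the paper):

```python
# Minimal sketch of transformation-invariant pooling: shared features per
# transformed input copy, then an element-wise max over transformations.
import torch
import torch.nn as nn

class TIPooling(nn.Module):
    def __init__(self, transforms, feat_dim=64, n_classes=10):
        super().__init__()
        self.transforms = transforms          # list of callables x -> x'
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):                     # x: (B, 1, H, W)
        feats = torch.stack([self.features(t(x)) for t in self.transforms])
        invariant, _ = feats.max(dim=0)       # pool over the transformation set
        return self.classifier(invariant)

# e.g. 90-degree rotations as the transformation set (square inputs)
rots = [lambda x, k=k: torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
model = TIPooling(rots)
out = model(torch.randn(8, 1, 28, 28))
```

Because the max is taken after a shared network, the learned features are encouraged to respond to a canonical orientation, which is where the parameter savings over brute-force dataset augmentation come from.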

Computational Pathology: Challenges and Promises for Tissue Analysis

no code implementations • 31 Dec 2015 • Thomas J. Fuchs, Joachim M. Buhmann

The histological assessment of human tissue has emerged as the key challenge for detection and treatment of cancer.

General Classification

Transformation-Invariant Convolutional Jungles

no code implementations • CVPR 2015 • Dmitry Laptev, Joachim M. Buhmann

Many Computer Vision problems arise from information processing of data sources with nuisance variations such as scale, orientation, contrast, perspective foreshortening or, in medical imaging, staining and local warping.

Face Recognition Image Classification

Correlated random features for fast semi-supervised learning

no code implementations • NeurIPS 2013 • Brian McWilliams, David Balduzzi, Joachim M. Buhmann

Random views are justified by recent theoretical and empirical work showing that regression with random features closely approximates kernel regression, implying that random views can be expected to contain accurate estimators.

Regression
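
The random-features result alluded to here is the classical Rahimi-Recht construction; below is a compact, self-contained illustration of random Fourier features approximating RBF-kernel ridge regression (illustrative only, not the paper's multi-view estimator):

```python
# Random Fourier features approximating RBF-kernel ridge regression
# (Rahimi & Recht): ridge regression on random features mimics kernel
# regression, which is why random "views" carry accurate estimators.
import numpy as np

def rff(X, n_features=200, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# ridge regression on the random features ~= kernel ridge regression
X = np.random.default_rng(1).normal(size=(100, 5))
y = np.sin(X[:, 0])
Z = rff(X)
alpha = 1e-2
w = np.linalg.solve(Z.T @ Z + alpha * np.eye(Z.shape[1]), Z.T @ y)
y_hat = Z @ w
```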
