1 code implementation • CVPR 2024 • Ke Fan, Zechen Bai, Tianjun Xiao, Tong He, Max Horn, Yanwei Fu, Francesco Locatello, Zheng Zhang
Moreover, our analysis shows that our method can dynamically adapt the number of slots to each instance's complexity, opening avenues for further research on slot attention.
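To make the mechanism concrete, here is a minimal NumPy sketch of a single slot-attention round, the building block this paper adapts. It is illustrative only: the function names are ours, the slot count is fixed here rather than chosen adaptively per instance (the paper's contribution), and the original slot-attention formulation uses a learned GRU update where we take a weighted mean.

```python
# Minimal slot-attention round in NumPy -- a sketch, not the paper's code.
# `num_slots` is fixed here; the paper selects it adaptively per instance.
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention_step(slots, inputs, Wq, Wk, Wv):
    """One round of slot attention: slots compete for input features."""
    q = slots @ Wq                      # (num_slots, d)
    k = inputs @ Wk                     # (num_inputs, d)
    v = inputs @ Wv                     # (num_inputs, d)
    logits = q @ k.T / np.sqrt(q.shape[-1])
    # Normalize over *slots*, so slots compete for each input location.
    attn = softmax(logits, axis=0)      # (num_slots, num_inputs)
    attn = attn / attn.sum(axis=1, keepdims=True)
    # Weighted mean of values per slot (a GRU update in the original paper).
    return attn @ v

rng = np.random.default_rng(0)
d, num_slots, num_inputs = 16, 4, 64
inputs = rng.normal(size=(num_inputs, d))
slots = rng.normal(size=(num_slots, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
slots = slot_attention_step(slots, inputs, Wq, Wk, Wv)
print(slots.shape)                      # (4, 16)
```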
1 code implementation • ICCV 2023 • Ke Fan, Zechen Bai, Tianjun Xiao, Dominik Zietlow, Max Horn, Zixu Zhao, Carl-Johann Simon-Gabriel, Mike Zheng Shou, Francesco Locatello, Bernt Schiele, Thomas Brox, Zheng Zhang, Yanwei Fu, Tong He
In this paper, we show that recent advances in video representation learning and pre-trained vision-language models allow for substantial improvements in self-supervised video object localization.
1 code implementation • ICCV 2023 • Zixu Zhao, Jiaze Wang, Max Horn, Yizhuo Ding, Tong He, Zechen Bai, Dominik Zietlow, Carl-Johann Simon-Gabriel, Bing Shuai, Zhuowen Tu, Thomas Brox, Bernt Schiele, Yanwei Fu, Francesco Locatello, Zheng Zhang, Tianjun Xiao
Unsupervised object-centric learning methods allow the partitioning of scenes into entities without additional localization information and are excellent candidates for reducing the annotation burden of multiple-object tracking (MOT) pipelines.
no code implementations • 20 Apr 2023 • Max F. Burg, Florian Wenzel, Dominik Zietlow, Max Horn, Osama Makansi, Francesco Locatello, Chris Russell
Many approaches have been proposed to use diffusion models to augment training datasets for downstream tasks, such as classification.
1 code implementation • 12 Jan 2023 • Yuejiang Liu, Alexandre Alahi, Chris Russell, Max Horn, Dominik Zietlow, Bernhard Schölkopf, Francesco Locatello
Recent years have seen a surge of interest in learning high-level causal representations from low-level image pairs under interventions.
4 code implementations • 29 Sep 2022 • Maximilian Seitzer, Max Horn, Andrii Zadaianchuk, Dominik Zietlow, Tianjun Xiao, Carl-Johann Simon-Gabriel, Tong He, Zheng Zhang, Bernhard Schölkopf, Thomas Brox, Francesco Locatello
Humans naturally decompose their environment into entities at the appropriate level of abstraction to act in the world.
1 code implementation • 19 Jul 2022 • Florian Wenzel, Andrea Dittadi, Peter Vincent Gehler, Carl-Johann Simon-Gabriel, Max Horn, Dominik Zietlow, David Kernert, Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello
Since out-of-distribution generalization is a generally ill-posed problem, various proxy targets (e.g., calibration, adversarial robustness, algorithmic corruptions, invariance across shifts) have been studied across different research programs, resulting in different recommendations.
no code implementations • NeurIPS Workshop ICBINB 2021 • Tristan Cinquin, Alexander Immer, Max Horn, Vincent Fortuin
In recent years, the transformer has established itself as a workhorse in many applications ranging from natural language processing to reinforcement learning.
no code implementations • 12 Jul 2021 • Michael Moor, Nicolas Bennett, Drago Plecko, Max Horn, Bastian Rieck, Nicolai Meinshausen, Peter Bühlmann, Karsten Borgwardt
Here, we developed and validated a machine learning (ML) system for the prediction of sepsis in the ICU.
2 code implementations • ICLR 2022 • Leslie O'Bray, Max Horn, Bastian Rieck, Karsten Borgwardt
Graph generative models are a highly active branch of machine learning.
1 code implementation • 15 Feb 2021 • Max Horn, Kumar Shridhar, Elrich Groenewald, Philipp F. M. Baumann
While Transformer architectures have shown remarkable success, they are bound to the computation of all pairwise interactions of input elements and thus suffer from limited scalability.
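The quadratic bottleneck the excerpt refers to is visible in a few lines of NumPy: standard self-attention materializes one score per pair of input elements. This sketch (names and shapes are ours, purely for illustration) shows where the O(n²) cost arises.

```python
# Why standard self-attention is quadratic: the score matrix has one entry
# per pair of input elements. Illustrative sketch, not the paper's method.
import numpy as np

def attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (n, n): all pairwise interactions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                        # time and memory are O(n^2)

n, d = 1024, 64
X = np.random.default_rng(0).normal(size=(n, d))
Wq, Wk, Wv = (np.random.default_rng(i).normal(size=(d, d)) / np.sqrt(d)
              for i in range(3))
out = attention(X, Wq, Wk, Wv)
print(out.shape)  # (1024, 64); the intermediate score matrix was 1024 x 1024
```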
1 code implementation • ICLR 2022 • Max Horn, Edward De Brouwer, Michael Moor, Yves Moreau, Bastian Rieck, Karsten Borgwardt
Graph neural networks (GNNs) are a powerful architecture for tackling graph learning tasks, yet have been shown to be oblivious to eminent substructures such as cycles.
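A toy example makes the "oblivious to cycles" point concrete: two disjoint triangles and a single hexagon are both 2-regular graphs on six nodes, so a simple message-passing readout cannot separate them, while a cycle-aware invariant (the first Betti number) can. This is our own illustration, not the paper's TOGL layer.

```python
# Toy illustration (not the paper's TOGL layer): two 2-regular graphs that
# plain message passing cannot tell apart, but a cycle-aware invariant can.
import numpy as np

def adjacency(edges, n):
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1
    return A

# Two disjoint triangles vs. one hexagon: same size, same degrees.
two_triangles = adjacency([(0,1),(1,2),(2,0),(3,4),(4,5),(5,3)], 6)
hexagon       = adjacency([(0,1),(1,2),(2,3),(3,4),(4,5),(5,0)], 6)

def message_passing_readout(A, rounds=3):
    h = np.ones((A.shape[0], 1))          # uninformative initial features
    for _ in range(rounds):
        h = np.tanh(A @ h + h)            # a simple GNN-style update
    return h.sum()                        # permutation-invariant readout

def cycle_rank(A):
    """First Betti number: |E| - |V| + number of connected components."""
    n, m = A.shape[0], int(A.sum() / 2)
    reach = np.linalg.matrix_power(A + np.eye(n), n) > 0
    comps = len(set(map(tuple, reach)))
    return m - n + comps

print(message_passing_readout(two_triangles),
      message_passing_readout(hexagon))       # identical readouts
print(cycle_rank(two_triangles), cycle_rank(hexagon))  # 2 vs 1
```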
1 code implementation • NeurIPS Workshop TDA_and_Beyond 2020 • Michael Moor, Max Horn, Karsten Borgwardt, Bastian Rieck
Topological autoencoders (TopoAE) have demonstrated their capabilities for performing dimensionality reduction while at the same time preserving topological information of the input space.
2 code implementations • 25 May 2020 • Michael Moor, Max Horn, Christian Bock, Karsten Borgwardt, Bastian Rieck
The signature transform is a 'universal nonlinearity' on the space of continuous vector-valued paths, and has received attention for use in machine learning on time series.
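For readers unfamiliar with the signature transform, the sketch below computes the depth-2 signature of a piecewise linear path from first principles using Chen's identity; in practice one would use a library such as `signatory` or `esig`, and the function names here are ours. The asymmetry of the level-2 term shows why the signature captures the *order* in which a path moves, not just its endpoints.

```python
# Minimal depth-2 signature of a piecewise linear path via Chen's identity.
# Illustrative sketch; real projects would use a dedicated signature library.
import numpy as np

def segment_signature(delta):
    """Depth-2 signature of one linear segment with increment `delta`."""
    return delta, np.outer(delta, delta) / 2.0

def chen_concat(sig_a, sig_b):
    """Chen's identity: signature of the concatenation of two paths."""
    s1a, s2a = sig_a
    s1b, s2b = sig_b
    return s1a + s1b, s2a + s2b + np.outer(s1a, s1b)

def path_signature(points):
    """Depth-2 signature of the piecewise linear path through `points`."""
    sig = segment_signature(points[1] - points[0])
    for a, b in zip(points[1:-1], points[2:]):
        sig = chen_concat(sig, segment_signature(b - a))
    return sig

points = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # an L-shaped path
s1, s2 = path_signature(points)
print(s1)                  # total increment: [1. 1.]
print(s2[0, 1], s2[1, 0])  # 1.0 vs 0.0: the order of movement matters
```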
2 code implementations • ICML 2020 • Max Horn, Michael Moor, Christian Bock, Bastian Rieck, Karsten Borgwardt
Despite the eminent successes of deep neural networks, many architectures are often hard to transfer to irregularly-sampled and asynchronous time series that commonly occur in real-world datasets, especially in healthcare applications.
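The core idea of treating such data as a *set* of observations can be sketched in a few lines: each measurement becomes a (time, value, modality) tuple, embedded independently and aggregated with a permutation-invariant sum, so irregular sampling and asynchronous channels pose no structural problem. This is a rough sketch with random weights; the paper learns the encoder end to end and uses time encodings and an attention-based aggregation.

```python
# Rough sketch of the set-function view of irregular time series.
# Weights are random here; the paper trains them and uses attention pooling.
import numpy as np

rng = np.random.default_rng(0)

def encode_observation(t, value, modality, num_modalities, W1, W2):
    one_hot = np.eye(num_modalities)[modality]
    x = np.concatenate([[t, value], one_hot])
    return np.tanh(W2 @ np.tanh(W1 @ x))     # shared per-observation MLP

# Irregular, asynchronous observations: (time, value, modality-id)
observations = [(0.1, 36.6, 0), (0.4, 80.0, 1), (2.7, 37.1, 0)]
num_modalities, hidden = 2, 8
W1 = rng.normal(size=(hidden, 2 + num_modalities)) * 0.5
W2 = rng.normal(size=(hidden, hidden)) * 0.5

embedded = [encode_observation(t, v, m, num_modalities, W1, W2)
            for t, v, m in observations]
series_repr = np.sum(embedded, axis=0)       # order- and length-invariant
print(series_repr.shape)                     # (8,)
```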
2 code implementations • ICML 2020 • Michael Moor, Max Horn, Bastian Rieck, Karsten Borgwardt
We propose a novel approach for preserving topological structures of the input space in latent representations of autoencoders.
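A simplified sketch of the loss, in the spirit of the paper: the 0-dimensional persistence pairings of a point cloud coincide with its minimum-spanning-tree edges, so one can penalize mismatched lengths of those topologically relevant edges between input and latent space. This toy version omits the paper's distance normalization and autodiff machinery and is illustrative only.

```python
# Simplified topology-preserving loss in the spirit of topological
# autoencoders: align distances along each space's MST edges (its
# 0-dimensional persistence pairings). Illustrative sketch only.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edges(X):
    """MST edge indices of the point cloud X (its 0-dim pairings)."""
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).toarray()
    return np.argwhere(mst > 0)

def topological_loss(X, Z):
    """Match distances along each space's persistence-relevant edges."""
    dX, dZ = squareform(pdist(X)), squareform(pdist(Z))
    eX, eZ = mst_edges(X), mst_edges(Z)
    loss_x = np.mean((dX[eX[:, 0], eX[:, 1]] - dZ[eX[:, 0], eX[:, 1]]) ** 2)
    loss_z = np.mean((dX[eZ[:, 0], eZ[:, 1]] - dZ[eZ[:, 0], eZ[:, 1]]) ** 2)
    return loss_x + loss_z

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))   # input points
Z = rng.normal(size=(50, 2))    # e.g. autoencoder latent codes
print(topological_loss(X, Z))
```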
no code implementations • 16 Apr 2019 • Stephanie L. Hyland, Martin Faltys, Matthias Hüser, Xinrui Lyu, Thomas Gumbsch, Cristóbal Esteban, Christian Bock, Max Horn, Michael Moor, Bastian Rieck, Marc Zimmermann, Dean Bodenham, Karsten Borgwardt, Gunnar Rätsch, Tobias M. Merz
Intensive care clinicians are presented with large quantities of patient information and measurements from a multitude of monitoring systems.
2 code implementations • 5 Feb 2019 • Michael Moor, Max Horn, Bastian Rieck, Damian Roqueiro, Karsten Borgwardt
This empirical study proposes two novel approaches for the early detection of sepsis: a deep learning model and a lazy learner based on time series distances.
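The "lazy learner" idea can be illustrated with a nearest-neighbour classifier under dynamic time warping distance. The study's actual model, features, and distance details differ; everything below is a toy example with made-up series and labels.

```python
# Toy illustration of a lazy learner over time series distances:
# 1-nearest-neighbour classification under dynamic time warping (DTW).
import numpy as np

def dtw(a, b):
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def knn_predict(query, train_series, train_labels, k=1):
    dists = [dtw(query, s) for s in train_series]
    nearest = np.argsort(dists)[:k]
    return np.bincount(np.asarray(train_labels)[nearest]).argmax()

# Made-up series with labels 0 (stable) / 1 (deteriorating)
train = [np.array([0.0, 0.1, 0.0, 0.1]), np.array([0.0, 0.5, 1.0, 1.5])]
labels = [0, 1]
print(knn_predict(np.array([0.1, 0.4, 0.9, 1.4]), train, labels))  # -> 1
```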
2 code implementations • ICLR 2019 • Bastian Rieck, Matteo Togninalli, Christian Bock, Michael Moor, Max Horn, Thomas Gumbsch, Karsten Borgwardt
While many approaches to make neural networks more fathomable have been proposed, they are restricted to interrogating the network with input data.