no code implementations • 10 Apr 2024 • Valentyn Boreiko, Matthias Hein, Jan Hendrik Metzen
Our approach, BEV2EGO, allows for a realistic generation of the complete scene with road-contingent control that maps 2D bird's-eye view (BEV) scene configurations to a first-person view (EGO).
1 code implementation • 19 Feb 2024 • Christian Schlarmann, Naman Deep Singh, Francesco Croce, Matthias Hein
The CLIP model, or one of its variants, is used as a frozen vision encoder in many vision-language models (VLMs), e.g. LLaVA and OpenFlamingo.
no code implementations • 29 Nov 2023 • Maximilian Augustin, Yannic Neuhaus, Matthias Hein
While deep learning has led to huge progress in complex image classification tasks like ImageNet, unexpected failure modes, e.g. via spurious features, call into question how reliably these classifiers work in the wild.
no code implementations • 24 Nov 2023 • Francesco Croce, Matthias Hein
General purpose segmentation models are able to generate (semantic) segmentation masks from a variety of prompts, including visual ones (points, boxes, etc.)
1 code implementation • 20 Nov 2023 • Indu Ilanchezian, Valentyn Boreiko, Laura Kühlewein, Ziwei Huang, Murat Seçkin Ayhan, Matthias Hein, Lisa Koch, Philipp Berens
Counterfactual reasoning is often used in clinical settings to explain decisions or weigh alternatives.
no code implementations • 23 Sep 2023 • Valentyn Boreiko, Matthias Hein, Jan Hendrik Metzen
Moreover, our framework introduces an evaluation setting that can serve as a benchmark for similar pipelines.
1 code implementation • 21 Aug 2023 • Christian Schlarmann, Matthias Hein
In this paper we show that imperceptible attacks on images, designed to change the caption output of a multi-modal foundation model, can be used by malicious content providers to harm honest users, e.g. by guiding them to malicious websites or broadcasting fake information.
1 code implementation • 22 Jun 2023 • Francesco Croce, Naman D Singh, Matthias Hein
While a large amount of work has focused on designing adversarial attacks against image classifiers, only a few methods exist to attack semantic segmentation models.
1 code implementation • NeurIPS 2023 • Maximilian Mueller, Tiffany Vlaar, David Rolnick, Matthias Hein
Sharpness-aware minimization (SAM) was proposed to reduce sharpness of minima and has been shown to enhance generalization performance in various settings.
1 code implementation • 1 Jun 2023 • Julian Bitterwolf, Maximilian Müller, Matthias Hein
The OOD detection performance when the in-distribution (ID) is ImageNet-1K is commonly tested on a small range of test OOD datasets.
Ranked #1 on Out-of-Distribution Detection on ImageNet-1k vs NINCO (using extra training data)
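Benchmarks like the one above are typically scored with FPR at 95% TPR and AUROC. A minimal numpy sketch of these two metrics (assuming the convention that a higher score means "more in-distribution"; this is illustrative, not the benchmark's evaluation code):

```python
import numpy as np

def fpr_at_95_tpr(scores_id, scores_ood):
    """FPR on OOD data at the threshold where 95% of ID data is accepted.

    Assumed convention: higher score = more in-distribution.
    """
    # Threshold at the 5th percentile of ID scores, so 95% of ID lies above it.
    thresh = np.percentile(scores_id, 5)
    return float(np.mean(np.asarray(scores_ood) >= thresh))

def auroc(scores_id, scores_ood):
    """AUROC of separating ID (positive) from OOD (negative) by score."""
    # Probability that a random ID score exceeds a random OOD score
    # (ties count half), computed exactly via pairwise comparisons.
    s_id = np.asarray(scores_id)[:, None]
    s_ood = np.asarray(scores_ood)[None, :]
    return float(np.mean((s_id > s_ood) + 0.5 * (s_id == s_ood)))
```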
1 code implementation • NeurIPS 2023 • Naman D Singh, Francesco Croce, Matthias Hein
While adversarial training has been extensively studied for ResNet architectures and low resolution datasets like CIFAR, much less is known for ImageNet.
1 code implementation • 14 Feb 2023 • Maksym Andriushchenko, Francesco Croce, Maximilian Müller, Matthias Hein, Nicolas Flammarion
Overall, we observe that sharpness does not correlate well with generalization but rather with some training parameters like the learning rate that can be positively or negatively correlated with generalization depending on the setup.
1 code implementation • ICCV 2023 • Yannic Neuhaus, Maximilian Augustin, Valentyn Boreiko, Matthias Hein
In contrast, we work with ImageNet and validate our results by showing that presence of the harmful spurious feature of a class alone is sufficient to trigger the prediction of that class.
1 code implementation • 21 Oct 2022 • Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein
Two modifications to the diffusion process are key for our DVCEs: first, an adaptive parameterization, whose hyperparameters generalize across images and models, together with distance regularization and a late start of the diffusion process, allows us to generate images with minimal semantic changes to the original ones but a different classification.
no code implementations • 14 Sep 2022 • Francesco Croce, Matthias Hein
In recent years novel architecture components for image classification have been developed, starting with attention and patches used in transformers.
no code implementations • 13 Sep 2022 • Maksym Yatsura, Kaspar Sakmann, N. Grace Hua, Matthias Hein, Jan Hendrik Metzen
Adversarial patch attacks are an emerging security threat for real-world deep learning applications.
1 code implementation • 5 Aug 2022 • Jan Nikolas Morshuis, Sergios Gatidis, Matthias Hein, Christian F. Baumgartner
Deep Learning (DL) methods have shown promising results for solving ill-posed inverse problems such as MR image reconstruction from undersampled $k$-space data.
1 code implementation • 14 Jul 2022 • Václav Voráček, Matthias Hein
In particular we provide scalable algorithms for the \emph{exact} computation of the minimal adversarial perturbation when using $\ell_2$-distance and improved lower bounds in other cases.
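For intuition, the $\ell_2$ case admits a simple closed form for a plain nearest-prototype classifier: the minimal perturbation is the distance to the nearest bisecting hyperplane between the predicted prototype and any other. A sketch of that special case (illustrative only, not the paper's scalable algorithms):

```python
import numpy as np

def min_adv_perturbation_l2(x, prototypes):
    """Exact minimal l2 perturbation changing a nearest-prototype decision.

    The classifier predicts argmin_k ||x - p_k||_2. The decision flips to
    class j once x crosses the bisecting hyperplane between the current
    prototype p_c and p_j; that distance has the closed form below, and
    the certificate is the minimum over all j != c.
    """
    d2 = np.sum((prototypes - x) ** 2, axis=1)       # squared distances
    c = int(np.argmin(d2))                           # predicted class
    radii = []
    for j in range(len(prototypes)):
        if j == c:
            continue
        gap = np.linalg.norm(prototypes[j] - prototypes[c])
        radii.append((d2[j] - d2[c]) / (2.0 * gap))  # distance to boundary
    return c, float(min(radii))
```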
1 code implementation • 14 Jul 2022 • Václav Voráček, Matthias Hein
Randomized smoothing is sound when using infinite precision.
1 code implementation • 20 Jun 2022 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein
Moreover, we show that the confidence loss used by Outlier Exposure has an implicit scoring function which differs in a non-trivial fashion from the theoretically optimal scoring function when training and test out-distributions are the same; this scoring function is in turn similar to the one used when training an Energy-Based OOD detector or when adding a background class.
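The energy-based scoring function referenced above is commonly defined as E(x) = −T·logsumexp(logits/T); a small numpy sketch of this standard definition (not this paper's implementation):

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Energy-based OOD score: E(x) = -T * logsumexp(logits / T).

    Lower energy = more in-distribution; a common decision rule
    thresholds -E(x) (higher = more in-distribution).
    """
    z = np.asarray(logits, dtype=float) / T
    m = z.max(axis=-1, keepdims=True)                # stabilize logsumexp
    lse = m.squeeze(-1) + np.log(np.exp(z - m).sum(axis=-1))
    return -T * lse
```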
1 code implementation • 16 May 2022 • Valentyn Boreiko, Maximilian Augustin, Francesco Croce, Philipp Berens, Matthias Hein
Visual counterfactual explanations (VCEs) in image space are an important tool to understand decisions of image classifiers as they show under which changes of the image the decision of the classifier would change.
1 code implementation • 28 Feb 2022 • Francesco Croce, Sven Gowal, Thomas Brunner, Evan Shelhamer, Matthias Hein, Taylan Cemgil
Adaptive defenses, which optimize at test time, promise to improve adversarial robustness.
1 code implementation • NeurIPS 2021 • Maksym Yatsura, Jan Hendrik Metzen, Matthias Hein
We demonstrate that plugging the learned controller into the attack consistently improves its black-box robustness estimate in different query regimes by up to 20% for a wide range of different models with black-box access.
no code implementations • 29 Sep 2021 • Julian Bitterwolf, Alexander Meinke, Maximilian Augustin, Matthias Hein
When trained in a shared fashion with a standard classifier, this binary discriminator reaches an OOD detection performance similar to that of Outlier Exposure.
no code implementations • 29 Sep 2021 • Maximilian Augustin, Matthias Hein
Traditional semi-supervised learning (SSL) has focused on the closed world assumption where all unlabeled samples are task-related.
1 code implementation • 18 Jun 2021 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
Despite their compelling theoretical properties, Bayesian neural networks (BNNs) tend to perform worse than frequentist methods in classification-based uncertainty quantification (UQ) tasks such as out-of-distribution (OOD) detection.
1 code implementation • 8 Jun 2021 • Alexander Meinke, Julian Bitterwolf, Matthias Hein
The application of machine learning in safety-critical systems requires a reliable assessment of uncertainty.
1 code implementation • 26 May 2021 • Francesco Croce, Matthias Hein
In this way we get the first multiple-norm robust model for ImageNet and boost the state-of-the-art for multiple-norm robustness to more than $51\%$ on CIFAR-10.
no code implementations • 16 Apr 2021 • David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
Moreover, we present a novel adversarial bit error attack and are able to obtain robustness against both targeted and untargeted bit-level attacks.
no code implementations • ICCV 2021 • David Stutz, Matthias Hein, Bernt Schiele
To this end, we propose average- and worst-case metrics to measure flatness in the robust loss landscape and show a correlation between good robust generalization and flatness.
2 code implementations • 1 Mar 2021 • Francesco Croce, Matthias Hein
Finally, we combine $l_1$-APGD and an adaptation of the Square Attack to $l_1$ into $l_1$-AutoAttack, an ensemble of attacks which reliably assesses adversarial robustness for the threat model of $l_1$-ball intersected with $[0, 1]^d$.
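A building block of attacks in this threat model is the Euclidean projection onto the $\ell_1$ ball. A sketch of the standard sort-based projection (the full threat model additionally intersects with the box $[0,1]^d$, which requires a joint projection and is omitted here):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the l1 ball of the given radius.

    Standard sort-based soft-thresholding algorithm: find the threshold
    theta such that shrinking all magnitudes by theta lands exactly on
    the l1 sphere, then apply it.
    """
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= radius:
        return v.copy()                              # already inside the ball
    u = np.sort(np.abs(v))[::-1]                     # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.max(np.where(u - (css - radius) / ks > 0)[0])
    theta = (css[rho] - radius) / (rho + 1.0)        # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```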
no code implementations • 21 Dec 2020 • Maximilian Augustin, Matthias Hein
The goal of this paper is to leverage unlabeled data in an open world setting to further improve prediction performance.
1 code implementation • 19 Oct 2020 • Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, Matthias Hein
As a research community, we are still lacking a systematic understanding of the progress on adversarial robustness which often makes it hard to identify the most promising ideas in training robust models.
1 code implementation • 6 Oct 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs).
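The flavor of a Laplace approximation is easy to show in one dimension: a Gaussian posterior over the weight from the Hessian at the MAP, and a probit-approximated predictive. A hedged sketch (illustrative toy setup, not the paper's library):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def laplace_predictive(x, X, w_map, prior_prec=1.0):
    """Laplace predictive for 1D logistic regression p(y=1|x) = sigmoid(w*x).

    The posterior over w is approximated by N(w_map, 1/H), where H is the
    Hessian of the negative log posterior at the MAP:
        H = prior_prec + sum_i s_i (1 - s_i) x_i^2,  s_i = sigmoid(w_map * x_i).
    The predictive integral uses the standard probit approximation:
        sigmoid(mu / sqrt(1 + pi * var / 8)).
    """
    s = sigmoid(w_map * X)
    H = prior_prec + np.sum(s * (1.0 - s) * X ** 2)
    mu = w_map * x                      # predictive mean of the logit
    var = x ** 2 / H                    # predictive variance of the logit
    return sigmoid(mu / np.sqrt(1.0 + np.pi * var / 8.0))
```

Far from the data the variance term tames the logit, so the Laplace predictive is less confident than the MAP point estimate.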
no code implementations • NeurIPS 2021 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
We extend finite ReLU BNNs with infinite ReLU features via the GP and show that the resulting model is asymptotically maximally uncertain far away from the data while the BNNs' predictive power is unaffected near the data.
no code implementations • 28 Sep 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
However, far away from the training data, even Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident.
2 code implementations • NeurIPS 2020 • Julian Bitterwolf, Alexander Meinke, Matthias Hein
Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class.
1 code implementation • 24 Jun 2020 • David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
Low-voltage operation of DNN accelerators allows energy consumption to be reduced significantly further, but causes bit-level failures in the memory storing the quantized DNN weights.
2 code implementations • 23 Jun 2020 • Francesco Croce, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, Matthias Hein
We propose a versatile framework based on random search, Sparse-RS, for score-based sparse targeted and untargeted attacks in the black-box setting.
1 code implementation • ECCV 2020 • Maximilian Augustin, Alexander Meinke, Matthias Hein
Neural networks have led to major improvements in image classification but suffer from being non-robust to adversarial changes, unreliable uncertainty estimates on out-distribution samples and their inscrutable black-box decisions.
10 code implementations • ICML 2020 • Francesco Croce, Matthias Hein
The field of defense strategies against adversarial attacks has significantly grown over the last years, but progress is hampered as the evaluation of adversarial defenses is often insufficient and thus gives a wrong impression of robustness.
1 code implementation • ICML 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig
These theoretical results validate the usage of last-layer Bayesian approximation and motivate a range of fidelity-cost trade-offs.
1 code implementation • ECCV 2020 • Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein
We propose the Square Attack, a score-based black-box $l_2$- and $l_\infty$-adversarial attack that does not rely on local gradient information and thus is not affected by gradient masking.
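The core loop of such a score-based random-search attack can be sketched on a toy margin function (this is a simplified caricature, not the actual Square Attack with its square-shaped updates and schedules):

```python
import numpy as np

def random_search_attack(x, margin, eps, iters=500, seed=0):
    """Score-based black-box attack via random search (Square-Attack style).

    At each step, propose resetting a random coordinate of the linf
    perturbation to +/- eps and keep the proposal only if it lowers the
    margin (true-class score minus best other score). Only function
    values of `margin` are used -- no gradients.
    """
    rng = np.random.default_rng(seed)
    delta = eps * rng.choice([-1.0, 1.0], size=x.shape)  # start at a corner
    best = margin(x + delta)
    for _ in range(iters):
        idx = rng.integers(len(x))                       # random coordinate
        cand = delta.copy()
        cand[idx] = eps * rng.choice([-1.0, 1.0])
        m = margin(x + cand)
        if m < best:                                     # accept improvements only
            best, delta = m, cand
    return x + delta, best
```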
no code implementations • NeurIPS 2019 • Pedro Mercado, Francesco Tudisco, Matthias Hein
We study the task of semi-supervised learning on multilayer graphs by taking into account both labeled and unlabeled observations together with the information encoded by each individual graph layer.
3 code implementations • ICML 2020 • David Stutz, Matthias Hein, Bernt Schiele
Our confidence-calibrated adversarial training (CCAT) tackles this problem by biasing the model towards low confidence predictions on adversarial examples.
1 code implementation • ICLR 2020 • Alexander Meinke, Matthias Hein
It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data.
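The overconfidence phenomenon is easy to reproduce: far from the data a ReLU network is an affine map, and scaling an input along a ray drives the softmax confidence of an affine classifier to 1. A small numpy demonstration:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                     # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

# A fixed linear classifier; ReLU nets are piecewise affine, so far from
# the training data every input lies in a region where the net IS affine.
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
x = np.array([1.0, 0.3])                # arbitrary direction

# Scaling the input along a ray drives the max softmax probability to 1.
confidences = [softmax(W @ (t * x)).max() for t in (1.0, 10.0, 100.0)]
```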
no code implementations • 25 Sep 2019 • David Stutz, Matthias Hein, Bernt Schiele
Adversarial training is the standard to train models robust against adversarial examples.
1 code implementation • ICCV 2019 • Francesco Croce, Matthias Hein
On the other hand, the pixelwise perturbations of sparse attacks are typically large and thus can potentially be detected.
2 code implementations • ICML 2020 • Francesco Croce, Matthias Hein
The robustness of neural-network-based classifiers against adversarial manipulation is mainly evaluated with empirical attacks, as methods for exact computation, even when available, do not scale to large networks.
1 code implementation • NeurIPS 2019 • Maksym Andriushchenko, Matthias Hein
The problem of adversarial robustness has been studied extensively for neural networks.
1 code implementation • ICLR 2020 • Francesco Croce, Matthias Hein
In recent years several adversarial attacks and defenses have been proposed.
no code implementations • 15 May 2019 • Pedro Mercado, Francesco Tudisco, Matthias Hein
Moreover, we prove that the eigenvalues and eigenvectors of the signed power mean Laplacian concentrate around their expectation under reasonable conditions in the general Stochastic Block Model.
1 code implementation • 27 Mar 2019 • Francesco Croce, Jonas Rauber, Matthias Hein
Modern neural networks are highly non-robust against adversarial manipulation.
1 code implementation • CVPR 2019 • Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
We show that this technique is surprisingly effective in reducing the confidence of predictions far away from the training data while maintaining high confidence predictions and test error on the original classification task compared to standard training.
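The training objective behind this idea can be sketched as cross-entropy on real data plus a term pushing predictions on noise samples towards the uniform distribution (a CEDA-style illustration; the paper's exact objective may differ):

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def low_confidence_loss(logits_in, labels, logits_noise, lam=1.0):
    """Cross-entropy on in-distribution data plus a uniformity penalty
    on noise/out-distribution samples.

    The penalty is KL(uniform || p) up to normalization: it is zero iff
    the predictive distribution on noise is exactly uniform.
    """
    ls_in = log_softmax(logits_in)
    ce = -np.mean(ls_in[np.arange(len(labels)), labels])
    K = logits_noise.shape[-1]
    # KL(uniform || p) = -log K - (1/K) * sum_k log p_k, averaged over samples
    uni = -np.mean(log_softmax(logits_noise)) - np.log(K)
    return ce + lam * uni
```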
2 code implementations • CVPR 2019 • David Stutz, Matthias Hein, Bernt Schiele
A recent hypothesis even states that both robust and accurate models are impossible, i.e., adversarial robustness and generalization are conflicting goals.
no code implementations • 28 Nov 2018 • Francesco Croce, Matthias Hein
Relatively fast heuristics have been proposed to produce these adversarial inputs, but the problem of finding the optimal adversarial input, i.e. the one with the minimal change to the input, is NP-hard.
1 code implementation • 29 Oct 2018 • Marius Mosbach, Maksym Andriushchenko, Thomas Trost, Matthias Hein, Dietrich Klakow
Recently, Kannan et al. [2018] proposed several logit regularization methods to improve the adversarial robustness of classifiers.
2 code implementations • 17 Oct 2018 • Francesco Croce, Maksym Andriushchenko, Matthias Hein
It has been shown that neural network classifiers are not robust.
no code implementations • ICLR 2019 • Quynh Nguyen, Mahesh Chandra Mukkamala, Matthias Hein
We identify a class of over-parameterized deep neural networks with standard activation functions and cross-entropy loss which provably have no bad local valley, in the sense that from any point in parameter space there exists a continuous path on which the cross-entropy loss is non-increasing and gets arbitrarily close to zero.
1 code implementation • 1 Mar 2018 • Pedro Mercado, Antoine Gautier, Francesco Tudisco, Matthias Hein
Multilayer graphs encode different kinds of interactions between the same set of entities.
no code implementations • ICML 2018 • Quynh Nguyen, Mahesh Chandra Mukkamala, Matthias Hein
In the recent literature the important role of depth in deep learning has been emphasized.
no code implementations • 30 Jan 2018 • Nicolas Garcia Trillos, Moritz Gerlach, Matthias Hein, Dejan Slepcev
sample from an $m$-dimensional submanifold $M$ in $R^d$ as the sample size $n$ increases and the neighborhood size $h$ tends to zero.
no code implementations • ICLR 2018 • Quynh Nguyen, Matthias Hein
We show that such CNNs produce linearly independent features at a “wide” layer which has more neurons than the number of training samples.
no code implementations • 18 Aug 2017 • Francesco Tudisco, Pedro Mercado, Matthias Hein
In this work we propose a nonlinear relaxation which is instead based on the spectrum of a nonlinear modularity operator $\mathcal M$.
no code implementations • ICML 2017 • Mahesh Chandra Mukkamala, Matthias Hein
Adaptive gradient methods have become recently very popular, in particular as they have been shown to be useful in the training of deep neural networks.
no code implementations • NeurIPS 2017 • Matthias Hein, Maksym Andriushchenko
We show in this paper for the first time formal guarantees on the robustness of a classifier by giving instance-specific lower bounds on the norm of the input manipulation required to change the classifier decision.
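For a linear classifier, such an instance-specific lower bound is exact and has a closed form; a sketch of that special case (the paper's contribution is the extension to nonlinear classifiers via a Cross-Lipschitz-type bound):

```python
import numpy as np

def linear_robustness_lower_bound(x, W, b):
    """Minimal l2 perturbation changing the decision of a linear classifier.

    For f_k(x) = w_k @ x + b_k, the exact distance to the decision
    boundary between the predicted class c and class j is
    (f_c - f_j) / ||w_c - w_j||_2; the instance-specific guarantee is
    the minimum over all j != c.
    """
    f = W @ x + b
    c = int(np.argmax(f))
    radii = [(f[c] - f[j]) / np.linalg.norm(W[c] - W[j])
             for j in range(len(f)) if j != c]
    return c, float(min(radii))
```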
no code implementations • ICML 2017 • Quynh Nguyen, Matthias Hein
While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points.
1 code implementation • NeurIPS 2016 • Pedro Mercado, Francesco Tudisco, Matthias Hein
As a solution we propose to use the geometric mean of the Laplacians of positive and negative part and show that it outperforms the existing approaches.
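The geometric mean of two symmetric positive definite matrices has the closed form $A \# B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}$; a numpy sketch (in the signed-graph setting, A and B would be suitably regularized Laplacians of the positive and negative parts):

```python
import numpy as np

def spd_sqrt(A):
    """Matrix square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def spd_geometric_mean(A, B):
    """Geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = spd_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ spd_sqrt(Ais @ B @ Ais) @ As
```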
1 code implementation • 12 Dec 2016 • Maksim Lapin, Matthias Hein, Bernt Schiele
In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant.
no code implementations • NeurIPS 2016 • Antoine Gautier, Quynh Nguyen, Matthias Hein
The optimization problem behind neural networks is highly non-convex.
no code implementations • 12 May 2016 • Anna Khoreva, Rodrigo Benenson, Fabio Galasso, Matthias Hein, Bernt Schiele
Graph-based video segmentation methods rely on superpixels as a starting point.
no code implementations • CVPR 2016 • Yongqin Xian, Zeynep Akata, Gaurav Sharma, Quynh Nguyen, Matthias Hein, Bernt Schiele
We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image.
no code implementations • CVPR 2017 • Anna Khoreva, Rodrigo Benenson, Jan Hosang, Matthias Hein, Bernt Schiele
Semantic labelling and instance segmentation are two tasks that require particularly costly annotations.
Ranked #1 on Semantic Segmentation on PASCAL VOC 2012 val (Mean IoU metric)
1 code implementation • CVPR 2016 • Maksim Lapin, Matthias Hein, Bernt Schiele
In the experiments, we compare on various datasets all of the proposed and established methods for top-k error optimization.
no code implementations • CVPR 2016 • Anna Khoreva, Rodrigo Benenson, Mohamed Omran, Matthias Hein, Bernt Schiele
State-of-the-art learning based boundary detection methods require extensive training data.
Ranked #2 on Edge Detection on SBD
1 code implementation • NeurIPS 2015 • Maksim Lapin, Matthias Hein, Bernt Schiele
Class ambiguity is typical in image classification problems with a large number of classes.
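The top-k error these methods optimize is simple to compute; a numpy sketch:

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """Fraction of samples whose true label is not among the k highest scores."""
    # argpartition yields the indices of the k largest scores per row
    topk = np.argpartition(scores, -k, axis=1)[:, -k:]
    hits = (topk == np.asarray(labels)[:, None]).any(axis=1)
    return float(1.0 - hits.mean())
```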
no code implementations • NeurIPS 2015 • Pratik Jawanpuria, Maksim Lapin, Matthias Hein, Bernt Schiele
The paradigm of multi-task learning is that one can achieve better generalization by learning tasks jointly, exploiting the similarity between tasks, rather than learning them independently of each other.
no code implementations • 9 Nov 2015 • Quynh Nguyen, Francesco Tudisco, Antoine Gautier, Matthias Hein
Hypergraph matching has recently become a popular approach for solving correspondence problems in computer vision as it allows one to integrate higher-order geometric information.
no code implementations • 1 Jun 2015 • Anastasia Podosinnikova, Simon Setzer, Matthias Hein
In distinction to other methods for robust PCA, our method has no free parameter and is computationally very efficient.
no code implementations • CVPR 2015 • Anna Khoreva, Fabio Galasso, Matthias Hein, Bernt Schiele
Video segmentation has become an important and active research area with a large diversity of proposed approaches.
no code implementations • 24 May 2015 • Syama Sundar Rangapuram, Matthias Hein
In contrast to all other methods that have been suggested for constrained spectral clustering, we can always guarantee that all constraints are satisfied.
no code implementations • 24 May 2015 • Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta, Matthias Hein
Spectral Clustering as a relaxation of the normalized/ratio cut has become one of the standard graph-based clustering methods.
no code implementations • CVPR 2015 • Quynh Nguyen, Antoine Gautier, Matthias Hein
We propose two algorithms which both come along with the guarantee of monotonic ascent in the matching score on the set of discrete assignment matrices.
no code implementations • NeurIPS 2015 • Martin Slawski, Ping Li, Matthias Hein
Over the past few years, trace regression models have received considerable attention in the context of matrix completion, quantum state tomography, and compressed sensing.
no code implementations • NeurIPS 2014 • Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta, Matthias Hein
Spectral Clustering as a relaxation of the normalized/ratio cut has become one of the standard graph-based clustering methods.
no code implementations • CVPR 2014 • Maksim Lapin, Bernt Schiele, Matthias Hein
The underlying idea of multitask learning is that learning tasks jointly is better than learning each task individually.
no code implementations • 26 Apr 2014 • Martin Slawski, Matthias Hein
Consider a random vector with finite second moments.
no code implementations • NeurIPS 2013 • Martin Slawski, Matthias Hein, Pavlo Lutsik
Motivated by an application in computational biology, we consider low-rank matrix factorization with $\{0, 1\}$-constraints on one of the factors and optionally convex constraints on the second one.
no code implementations • NeurIPS 2013 • Matthias Hein, Simon Setzer, Leonardo Jost, Syama Sundar Rangapuram
Hypergraphs allow one to encode higher-order relationships in data and are thus a very flexible modeling tool.
no code implementations • 18 Dec 2013 • Leonardo Jost, Simon Setzer, Matthias Hein
It has been recently shown that a large class of balanced graph cuts allows for an exact relaxation into a nonlinear eigenproblem.
1 code implementation • 14 Jun 2013 • Thomas Bühler, Syama Sundar Rangapuram, Simon Setzer, Matthias Hein
While a globally optimal solution for the resulting non-convex problem cannot be guaranteed, we outperform the loose convex or spectral relaxations by a large margin on constrained local clustering problems.
no code implementations • 13 Jun 2013 • Maksim Lapin, Matthias Hein, Bernt Schiele
Prior knowledge can be used to improve predictive performance of learning algorithms or reduce the amount of data required for training.
no code implementations • 4 May 2012 • Martin Slawski, Matthias Hein
We show that for these designs, the performance of NNLS with regard to prediction and estimation is comparable to that of the lasso.
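NNLS itself is parameter-free: the non-negativity constraint replaces the lasso's tuned $\ell_1$ penalty. A sketch with scipy on a noiseless, entrywise-positive design (the design here is illustrative, not one of the paper's analyzed classes):

```python
import numpy as np
from scipy.optimize import nnls

# Non-negative least squares: min ||Ax - b||_2 subject to x >= 0.
# No tuning parameter: the sign constraint alone can act like an
# implicit l1 penalty for suitable designs.
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((50, 10)))   # entrywise-positive design (illustrative)
x_true = np.zeros(10)
x_true[:2] = [1.0, 2.0]                     # sparse non-negative ground truth
b = A @ x_true                              # noiseless observations
x_hat, rnorm = nnls(A, b)
```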
no code implementations • NeurIPS 2011 • Matthias Hein, Simon Setzer
Spectral clustering is based on the spectral relaxation of the normalized/ratio graph cut criterion.
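The spectral relaxation can be sketched in a few lines: take the second-smallest eigenvector of the normalized Laplacian and threshold it. A minimal two-cluster illustration (thresholding at zero is one of several common rounding choices):

```python
import numpy as np

def normalized_cut_partition(W_adj):
    """2-way partition from the spectral relaxation of the normalized cut.

    Uses the second-smallest eigenvector of L_sym = I - D^{-1/2} W D^{-1/2}
    and thresholds the back-transformed vector at zero.
    """
    d = W_adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(d)) - d_inv_sqrt[:, None] * W_adj * d_inv_sqrt[None, :]
    _, V = np.linalg.eigh(L)                  # eigenvalues in ascending order
    fiedler = d_inv_sqrt * V[:, 1]            # back-transform: D^{-1/2} v
    return (fiedler > 0).astype(int)

# Two triangles joined by one weak edge: the relaxation recovers them.
W = np.zeros((6, 6))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)
W[2, 3] = W[3, 2] = 0.1
labels = normalized_cut_partition(W)
```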
no code implementations • NeurIPS 2011 • Martin Slawski, Matthias Hein
Non-negative data are commonly encountered in numerous fields, making non-negative least squares regression (NNLS) a frequently used tool.
2 code implementations • NeurIPS 2010 • Matthias Hein, Thomas Bühler
Many problems in machine learning and statistics can be formulated as (generalized) eigenproblems.
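The linear special case of such a scheme is the classical inverse power method, which converges to the eigenvector of the smallest-magnitude eigenvalue; a sketch:

```python
import numpy as np

def inverse_power_method(A, iters=100, seed=0):
    """Inverse power method for the smallest-magnitude eigenpair of A.

    Each step solves A u_{k+1} = u_k and renormalizes; this is the linear
    special case of the nonlinear scheme referenced above.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(A.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = np.linalg.solve(A, u)            # one inverse iteration
        u /= np.linalg.norm(u)
    lam = u @ A @ u                          # Rayleigh quotient
    return lam, u
```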
no code implementations • NeurIPS 2010 • Ulrike V. Luxburg, Agnes Radl, Matthias Hein
As an alternative we introduce the amplified commute distance that corrects for the undesired large sample effects.
no code implementations • NeurIPS 2009 • Matthias Hein
Motivated by recent developments in manifold-valued regression, we propose a family of nonparametric kernel-smoothing estimators with metric-space-valued output, including a robust median-type estimator and the classical Fréchet mean.
no code implementations • NeurIPS 2009 • Kwang I. Kim, Florian Steinke, Matthias Hein
Semi-supervised regression based on the graph Laplacian suffers from the fact that the solution is biased towards a constant and the lack of extrapolating power.
no code implementations • NeurIPS 2008 • Florian Steinke, Matthias Hein
This paper discusses non-parametric regression between Riemannian manifolds.