
no code implementations • 18 Jun 2021 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig

Despite their compelling theoretical properties, Bayesian neural networks (BNNs) tend to perform worse than frequentist methods in classification-based uncertainty quantification (UQ) tasks such as out-of-distribution (OOD) detection.

1 code implementation • 8 Jun 2021 • Alexander Meinke, Julian Bitterwolf, Matthias Hein

When applying machine learning in safety-critical systems, a reliable assessment of the uncertainty of a classifier is required.

1 code implementation • 26 May 2021 • Francesco Croce, Matthias Hein

In this way we improve on the previously reported state of the art for multiple-norm robustness by more than $6\%$ on CIFAR-10 and report, to the best of our knowledge, the first ImageNet models with multiple-norm robustness.

no code implementations • 16 Apr 2021 • David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele

Low-voltage operation of DNN accelerators allows energy consumption to be reduced significantly; however, it causes bit-level failures in the memory storing the quantized DNN weights.

no code implementations • ICCV 2021 • David Stutz, Matthias Hein, Bernt Schiele

To this end, we propose average- and worst-case metrics to measure flatness in the robust loss landscape and show a correlation between good robust generalization and flatness.

2 code implementations • 1 Mar 2021 • Francesco Croce, Matthias Hein

Finally, we combine $l_1$-APGD and an adaptation of the Square Attack to $l_1$ into $l_1$-AutoAttack, an ensemble of attacks which reliably assesses adversarial robustness for the threat model of $l_1$-ball intersected with $[0, 1]^d$.
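
The workhorse inside such $l_1$-constrained attacks is repeated projection onto the $l_1$-ball. As a hedged sketch of that primitive alone (not of $l_1$-APGD itself, which additionally handles the intersection with the $[0, 1]^d$ box and adapts its step size), here is the standard sort-based Euclidean projection:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the l1-ball of the given radius,
    via the classic sort-based algorithm (O(d log d))."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # |v| sorted in decreasing order
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    k = ks[u > (css - radius) / ks][-1]   # largest k above the threshold
    theta = (css[k - 1] - radius) / k     # soft-thresholding level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

p = project_l1_ball(np.array([0.5, -2.0, 1.5, 0.1]), radius=1.0)
```

The projection soft-thresholds the entries, which is one reason $l_1$-constrained attack iterates tend to be sparse.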

no code implementations • 21 Dec 2020 • Maximilian Augustin, Matthias Hein

The goal of this paper is to leverage unlabeled data in an open world setting to further improve prediction performance.

1 code implementation • 19 Oct 2020 • Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, Matthias Hein

Our goal is to instead establish a standardized benchmark of adversarial robustness, which as accurately as possible reflects the robustness of the considered models within a reasonable computational budget.

no code implementations • 6 Oct 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig

We extend finite ReLU BNNs with infinite ReLU features via the GP and show that the resulting model is asymptotically maximally uncertain far away from the data while the BNNs' predictive power is unaffected near the data.

1 code implementation • 6 Oct 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig

Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs).
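
As a hedged sketch of the idea (a toy 1-D logistic regression rather than a neural network; all names and constants here are made up), a Laplace approximation fits a Gaussian $N(w_{\mathrm{MAP}}, H^{-1})$ around the MAP estimate and uses it for prediction; a last-layer variant applies the same recipe to the final layer's weights only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 1-D logistic regression stands in for the network's last layer.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-1, 1, 50), rng.normal(1, 1, 50)])[:, None]
y = np.concatenate([np.zeros(50), np.ones(50)])
prior_prec = 1.0

# 1) MAP estimate by gradient descent on the negative log posterior.
w = np.zeros(1)
for _ in range(500):
    p = sigmoid(X @ w)
    w -= 0.1 * (X.T @ (p - y) + prior_prec * w) / len(y)

# 2) Laplace: Gaussian posterior N(w_map, H^-1), H = Hessian at the MAP.
p = sigmoid(X @ w)
H = (X * (p * (1 - p))[:, None]).T @ X + prior_prec * np.eye(1)
post_var = np.linalg.inv(H)

# 3) Predictive via the standard probit approximation; the predictive
#    variance grows with the distance from the data, tempering confidence.
def predict(x):
    mu = x @ w
    s2 = np.sum((x @ post_var) * x, axis=1)
    return sigmoid(mu / np.sqrt(1.0 + np.pi * s2 / 8.0))

near = float(predict(np.array([[1.0]]))[0])
far = float(predict(np.array([[50.0]]))[0])
```

Unlike the plug-in MAP prediction, whose confidence saturates at 1 far from the data, the Laplace predictive stays bounded away from certainty.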

2 code implementations • NeurIPS 2020 • Julian Bitterwolf, Alexander Meinke, Matthias Hein

Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class.

1 code implementation • 24 Jun 2020 • David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele

Low-voltage operation of DNN accelerators allows energy consumption to be reduced significantly; however, it causes bit-level failures in the memory storing the quantized DNN weights.

2 code implementations • 23 Jun 2020 • Francesco Croce, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, Matthias Hein

Sparse adversarial perturbations received much less attention in the literature compared to $l_2$- and $l_\infty$-attacks.

1 code implementation • ECCV 2020 • Maximilian Augustin, Alexander Meinke, Matthias Hein

Neural networks have led to major improvements in image classification but suffer from being non-robust to adversarial changes, unreliable uncertainty estimates on out-distribution samples and their inscrutable black-box decisions.

6 code implementations • ICML 2020 • Francesco Croce, Matthias Hein

The field of defense strategies against adversarial attacks has significantly grown over the last years, but progress is hampered as the evaluation of adversarial defenses is often insufficient and thus gives a wrong impression of robustness.

1 code implementation • ICML 2020 • Agustinus Kristiadi, Matthias Hein, Philipp Hennig

These theoretical results validate the use of last-layer Bayesian approximations and motivate a range of fidelity-cost trade-offs.

1 code implementation • ECCV 2020 • Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein

We propose the Square Attack, a score-based black-box $l_2$- and $l_\infty$-adversarial attack that does not rely on local gradient information and thus is not affected by gradient masking.
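
A greatly simplified, hedged sketch of the score-based random-search idea (the actual Square Attack proposes square-shaped update regions on images with a tuned sampling schedule; the linear "model" and all names here are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "model": only class scores may be queried, no gradients.
W = rng.normal(size=(3, 16))
def scores(x):
    return W @ x

def margin(x, y):
    s = scores(x)
    return s[y] - np.max(np.delete(s, y))   # > 0 means still classified as y

def random_search_linf(x, y, eps=0.3, iters=300):
    """Keep a random +/- eps proposal only if it lowers the margin."""
    delta = eps * rng.choice([-1.0, 1.0], size=x.shape)  # random corner init
    best = margin(x + delta, y)
    for _ in range(iters):
        cand = delta.copy()
        idx = rng.choice(len(x), size=4, replace=False)  # a random block
        cand[idx] = eps * rng.choice([-1.0, 1.0], size=4)
        m = margin(x + cand, y)
        if m < best:
            delta, best = cand, m
        if best < 0:                                     # label flipped
            break
    return x + delta, best

x = rng.normal(size=16)
y = int(np.argmax(scores(x)))
x_adv, final_margin = random_search_linf(x, y)
```

Because only score queries are used, the search is unaffected by gradient masking.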

no code implementations • NeurIPS 2019 • Pedro Mercado, Francesco Tudisco, Matthias Hein

We study the task of semi-supervised learning on multilayer graphs by taking into account both labeled and unlabeled observations together with the information encoded by each individual graph layer.

2 code implementations • ICML 2020 • David Stutz, Matthias Hein, Bernt Schiele

Our confidence-calibrated adversarial training (CCAT) tackles this problem by biasing the model towards low confidence predictions on adversarial examples.

1 code implementation • ICLR 2020 • Alexander Meinke, Matthias Hein

It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data.
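
The mechanism is easy to see in a toy example: sufficiently far from the data a ReLU network is affine along a ray $t \cdot x$, so its logits scale linearly in $t$ and the softmax confidence is driven to 1. A hedged sketch with an assumed logit direction:

```python
import numpy as np

def softmax_conf(logits):
    """Confidence of the predicted class under the softmax."""
    z = np.exp(logits - np.max(logits))
    return float(np.max(z) / np.sum(z))

# Along a ray t * x a ReLU network's logits behave like t * s for a fixed
# vector s (hypothetical toy values here); scaling t inflates confidence.
s = np.array([1.0, 0.5, -0.2])
confs = [softmax_conf(t * s) for t in (1.0, 10.0, 100.0)]
```

Whenever the top two logit entries of `s` differ, the confidence increases monotonically in the scale and approaches 1.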

1 code implementation • ICCV 2019 • Francesco Croce, Matthias Hein

On the other hand, the pixelwise perturbations of sparse attacks are typically large and can thus potentially be detected.

2 code implementations • ICML 2020 • Francesco Croce, Matthias Hein

The evaluation of robustness against adversarial manipulation of neural-network-based classifiers is mainly carried out with empirical attacks, as methods for the exact computation, even when available, do not scale to large networks.

1 code implementation • NeurIPS 2019 • Maksym Andriushchenko, Matthias Hein

The problem of adversarial robustness has been studied extensively for neural networks.

1 code implementation • ICLR 2020 • Francesco Croce, Matthias Hein

In recent years several adversarial attacks and defenses have been proposed.

no code implementations • 15 May 2019 • Pedro Mercado, Francesco Tudisco, Matthias Hein

Moreover, we prove that the eigenvalues and eigenvectors of the signed power mean Laplacian concentrate around their expectations under reasonable conditions in the general Stochastic Block Model.

1 code implementation • 27 Mar 2019 • Francesco Croce, Jonas Rauber, Matthias Hein

Modern neural networks are highly non-robust against adversarial manipulation.

1 code implementation • CVPR 2019 • Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf

We show that this technique is surprisingly effective in reducing the confidence of predictions far away from the training data while maintaining high confidence predictions and test error on the original classification task compared to standard training.

2 code implementations • CVPR 2019 • David Stutz, Matthias Hein, Bernt Schiele

A recent hypothesis even states that both robust and accurate models are impossible, i.e., adversarial robustness and generalization are conflicting goals.

no code implementations • 28 Nov 2018 • Francesco Croce, Matthias Hein

Relatively fast heuristics have been proposed to produce these adversarial inputs, but the problem of finding the optimal adversarial input, that is, the one with the minimal change to the input, is NP-hard.

1 code implementation • 29 Oct 2018 • Marius Mosbach, Maksym Andriushchenko, Thomas Trost, Matthias Hein, Dietrich Klakow

Recently, Kannan et al. [2018] proposed several logit regularization methods to improve the adversarial robustness of classifiers.

2 code implementations • 17 Oct 2018 • Francesco Croce, Maksym Andriushchenko, Matthias Hein

It has been shown that neural network classifiers are not robust.

no code implementations • ICLR 2019 • Quynh Nguyen, Mahesh Chandra Mukkamala, Matthias Hein

We identify a class of over-parameterized deep neural networks with standard activation functions and cross-entropy loss which provably have no bad local valley, in the sense that from any point in parameter space there exists a continuous path on which the cross-entropy loss is non-increasing and gets arbitrarily close to zero.

1 code implementation • 1 Mar 2018 • Pedro Mercado, Antoine Gautier, Francesco Tudisco, Matthias Hein

Multilayer graphs encode different kinds of interactions between the same set of entities.

no code implementations • ICML 2018 • Quynh Nguyen, Mahesh Chandra Mukkamala, Matthias Hein

In the recent literature the important role of depth in deep learning has been emphasized.

no code implementations • 30 Jan 2018 • Nicolas Garcia Trillos, Moritz Gerlach, Matthias Hein, Dejan Slepcev

sample from an $m$-dimensional submanifold $M$ in $R^d$ as the sample size $n$ increases and the neighborhood size $h$ tends to zero.

no code implementations • ICLR 2018 • Quynh Nguyen, Matthias Hein

We show that such CNNs produce linearly independent features at a “wide” layer which has more neurons than the number of training samples.

no code implementations • ICML 2018 • Quynh Nguyen, Matthias Hein

We show that such CNNs produce linearly independent features at a "wide" layer which has more neurons than the number of training samples.

no code implementations • 18 Aug 2017 • Francesco Tudisco, Pedro Mercado, Matthias Hein

In this work we propose a nonlinear relaxation which is instead based on the spectrum of a nonlinear modularity operator $\mathcal M$.

no code implementations • ICML 2017 • Mahesh Chandra Mukkamala, Matthias Hein

Adaptive gradient methods have recently become very popular, in particular because they have been shown to be useful in the training of deep neural networks.

no code implementations • NeurIPS 2017 • Matthias Hein, Maksym Andriushchenko

We show in this paper for the first time formal guarantees on the robustness of a classifier by giving instance-specific lower bounds on the norm of the input manipulation required to change the classifier decision.
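
For intuition, the linear special case is exactly solvable (this is not the paper's Cross-Lipschitz bound for nonlinear classifiers, and the matrices below are made up): for a linear classifier, the smallest decision-changing $l_2$ perturbation has a closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))   # hypothetical linear classifier: scores = W x + b
b = rng.normal(size=4)

def certified_radius_l2(x):
    """Smallest l2 perturbation changing the decision of a linear classifier:
    min over classes j != c of (s_c - s_j) / ||w_c - w_j||_2."""
    s = W @ x + b
    c = int(np.argmax(s))
    r = min((s[c] - s[j]) / np.linalg.norm(W[c] - W[j])
            for j in range(len(s)) if j != c)
    return c, r

x = rng.normal(size=8)
c, r = certified_radius_l2(x)

# Any perturbation with norm strictly below r provably keeps the decision.
delta = rng.normal(size=8)
delta *= 0.99 * r / np.linalg.norm(delta)
unchanged = int(np.argmax(W @ (x + delta) + b)) == c
```

The paper's contribution can be read as extending this kind of instance-specific guarantee beyond the linear case.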

no code implementations • ICML 2017 • Quynh Nguyen, Matthias Hein

While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points.

1 code implementation • NeurIPS 2016 • Pedro Mercado, Francesco Tudisco, Matthias Hein

As a solution we propose to use the geometric mean of the Laplacians of positive and negative part and show that it outperforms the existing approaches.

1 code implementation • 12 Dec 2016 • Maksim Lapin, Matthias Hein, Bernt Schiele

In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant.

no code implementations • NeurIPS 2016 • Antoine Gautier, Quynh Nguyen, Matthias Hein

The optimization problem behind neural networks is highly non-convex.

no code implementations • 12 May 2016 • Anna Khoreva, Rodrigo Benenson, Fabio Galasso, Matthias Hein, Bernt Schiele

Graph-based video segmentation methods rely on superpixels as a starting point.

no code implementations • CVPR 2016 • Yongqin Xian, Zeynep Akata, Gaurav Sharma, Quynh Nguyen, Matthias Hein, Bernt Schiele

We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image.

no code implementations • CVPR 2017 • Anna Khoreva, Rodrigo Benenson, Jan Hosang, Matthias Hein, Bernt Schiele

Semantic labelling and instance segmentation are two tasks that require particularly costly annotations.

Ranked #1 on Semantic Segmentation on PASCAL VOC 2012 val (Mean IoU metric)

1 code implementation • CVPR 2016 • Maksim Lapin, Matthias Hein, Bernt Schiele

In the experiments, we compare on various datasets all of the proposed and established methods for top-k error optimization.

no code implementations • CVPR 2016 • Anna Khoreva, Rodrigo Benenson, Mohamed Omran, Matthias Hein, Bernt Schiele

State-of-the-art learning based boundary detection methods require extensive training data.

Ranked #2 on Edge Detection on SBD

1 code implementation • NeurIPS 2015 • Maksim Lapin, Matthias Hein, Bernt Schiele

Class ambiguity is typical in image classification problems with a large number of classes.

no code implementations • NeurIPS 2015 • Pratik Jawanpuria, Maksim Lapin, Matthias Hein, Bernt Schiele

The paradigm of multi-task learning is that one can achieve better generalization by learning tasks jointly and thus exploiting the similarity between the tasks rather than learning them independently of each other.

no code implementations • 9 Nov 2015 • Quynh Nguyen, Francesco Tudisco, Antoine Gautier, Matthias Hein

Hypergraph matching has recently become a popular approach for solving correspondence problems in computer vision, as it allows higher-order geometric information to be integrated.

no code implementations • 1 Jun 2015 • Anastasia Podosinnikova, Simon Setzer, Matthias Hein

In contrast to other methods for robust PCA, our method has no free parameter and is computationally very efficient.

no code implementations • CVPR 2015 • Anna Khoreva, Fabio Galasso, Matthias Hein, Bernt Schiele

Video segmentation has become an important and active research area with a large diversity of proposed approaches.

no code implementations • 24 May 2015 • Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta, Matthias Hein

Spectral Clustering as a relaxation of the normalized/ratio cut has become one of the standard graph-based clustering methods.

no code implementations • 24 May 2015 • Syama Sundar Rangapuram, Matthias Hein

In contrast to all other methods that have been suggested for constrained spectral clustering, we can always guarantee that all constraints are satisfied.

no code implementations • CVPR 2015 • Quynh Nguyen, Antoine Gautier, Matthias Hein

We propose two algorithms which both come along with the guarantee of monotonic ascent in the matching score on the set of discrete assignment matrices.

no code implementations • NeurIPS 2015 • Martin Slawski, Ping Li, Matthias Hein

Over the past few years, trace regression models have received considerable attention in the context of matrix completion, quantum state tomography, and compressed sensing.

no code implementations • NeurIPS 2014 • Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta, Matthias Hein

Spectral Clustering as a relaxation of the normalized/ratio cut has become one of the standard graph-based clustering methods.

no code implementations • CVPR 2014 • Maksim Lapin, Bernt Schiele, Matthias Hein

The underlying idea of multitask learning is that learning tasks jointly is better than learning each task individually.

no code implementations • 26 Apr 2014 • Martin Slawski, Matthias Hein

Consider a random vector with finite second moments.

no code implementations • NeurIPS 2013 • Martin Slawski, Matthias Hein, Pavlo Lutsik

Motivated by an application in computational biology, we consider low-rank matrix factorization with $\{0, 1\}$-constraints on one of the factors and optionally convex constraints on the second one.

no code implementations • 18 Dec 2013 • Leonardo Jost, Simon Setzer, Matthias Hein

It has been recently shown that a large class of balanced graph cuts allows for an exact relaxation into a nonlinear eigenproblem.

no code implementations • NeurIPS 2013 • Matthias Hein, Simon Setzer, Leonardo Jost, Syama Sundar Rangapuram

Hypergraphs allow one to encode higher-order relationships in data and are thus a very flexible modeling tool.

1 code implementation • 14 Jun 2013 • Thomas Bühler, Syama Sundar Rangapuram, Simon Setzer, Matthias Hein

While a globally optimal solution for the resulting non-convex problem cannot be guaranteed, we outperform the loose convex or spectral relaxations by a large margin on constrained local clustering problems.

no code implementations • 13 Jun 2013 • Maksim Lapin, Matthias Hein, Bernt Schiele

Prior knowledge can be used to improve predictive performance of learning algorithms or reduce the amount of data required for training.

no code implementations • 4 May 2012 • Martin Slawski, Matthias Hein

We show that for these designs, the performance of NNLS with regard to prediction and estimation is comparable to that of the lasso.

no code implementations • NeurIPS 2011 • Matthias Hein, Simon Setzer

Spectral clustering is based on the spectral relaxation of the normalized/ratio graph cut criterion.
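
The relaxation the entry refers to is standard; a hedged toy sketch (two well-separated 1-D clusters with Gaussian similarities, all constants mine): thresholding the second-smallest eigenvector of the normalized Laplacian recovers the cut.

```python
import numpy as np

# Two well-separated 1-D clusters; similarities from a Gaussian kernel.
rng = np.random.default_rng(0)
pts = np.concatenate([rng.normal(0.0, 0.3, 20), rng.normal(5.0, 0.3, 20)])
W = np.exp(-(pts[:, None] - pts[None, :]) ** 2)
np.fill_diagonal(W, 0.0)

# Normalized Laplacian L = I - D^{-1/2} W D^{-1/2}; the spectral relaxation
# of the normalized cut is solved by its second-smallest eigenvector.
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(len(pts)) - D_inv_sqrt @ W @ D_inv_sqrt
_, eigvecs = np.linalg.eigh(L)            # eigenvalues in ascending order
labels = (eigvecs[:, 1] > 0).astype(int)  # threshold the Fiedler vector
```

The tighter (non-spectral) relaxations studied in this line of work replace the quadratic form by a 1-Laplacian-type objective but keep the same "relax, then threshold" pattern.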

no code implementations • NeurIPS 2011 • Martin Slawski, Matthias Hein

Non-negative data are commonly encountered in numerous fields, making non-negative least squares regression (NNLS) a frequently used tool.

2 code implementations • NeurIPS 2010 • Matthias Hein, Thomas Bühler

Many problems in machine learning and statistics can be formulated as (generalized) eigenproblems.
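
The linear special case conveys the scheme (the paper's contribution is its generalization to nonlinear eigenproblems such as the graph 1-Laplacian; the matrices below are arbitrary illustrative values):

```python
import numpy as np

# Small symmetric pencil A v = lambda B v with B positive definite.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = np.diag([1.0, 2.0, 3.0])

def inverse_power(A, B, iters=200):
    """Inverse power method for the smallest generalized eigenvalue of
    A v = lambda B v (linear special case of the paper's scheme)."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = np.linalg.solve(A, B @ v)   # one inverse-power step
        v /= np.linalg.norm(v)
    lam = (v @ A @ v) / (v @ B @ v)     # generalized Rayleigh quotient
    return lam, v

lam, v = inverse_power(A, B)
```

Each iteration only needs one linear solve with A, which is what makes the method attractive at scale.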

no code implementations • NeurIPS 2010 • Ulrike V. Luxburg, Agnes Radl, Matthias Hein

As an alternative we introduce the amplified commute distance that corrects for the undesired large sample effects.

no code implementations • NeurIPS 2009 • Kwang I. Kim, Florian Steinke, Matthias Hein

Semi-supervised regression based on the graph Laplacian suffers from the fact that the solution is biased towards a constant and the lack of extrapolating power.

no code implementations • NeurIPS 2009 • Matthias Hein

Motivated by recent developments in manifold-valued regression we propose a family of nonparametric kernel-smoothing estimators with metric-space valued output including a robust median type estimator and the classical Frechet mean.
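
The circle is the simplest metric space where such estimators differ from Euclidean averaging. A hedged sketch of a weighted Fréchet mean on $S^1$ (the paper's kernel-smoothing weights would play the role of `weights`; the gradient-descent solver is my own choice):

```python
import numpy as np

def wrap(a):
    """Signed angular difference mapped to [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def frechet_mean_circle(angles, weights, iters=100, step=0.5):
    """Weighted Frechet mean on the circle, argmin_m sum_i w_i d(m, y_i)^2,
    by Riemannian gradient descent; on S^1 the log map is the wrapped
    angular difference."""
    w = weights / np.sum(weights)
    m = angles[0]
    for _ in range(iters):
        m = wrap(m + step * np.sum(w * wrap(angles - m)))
    return m

# Angles clustered around pi, where naively averaging raw values fails badly.
angles = np.array([np.pi - 0.1, np.pi - 0.05, -np.pi + 0.1, -np.pi + 0.05])
m = frechet_mean_circle(angles, np.ones(4))
```

Here the arithmetic mean of the raw angles is 0, on the opposite side of the circle from all four points, while the Fréchet mean correctly sits near $\pi$.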

no code implementations • NeurIPS 2008 • Florian Steinke, Matthias Hein

This paper discusses non-parametric regression between Riemannian manifolds.
