Search Results for author: Matthias Hein

Found 74 papers, 30 papers with code

Being a Bit Frequentist Improves Bayesian Neural Networks

no code implementations18 Jun 2021 Agustinus Kristiadi, Matthias Hein, Philipp Hennig

Despite their compelling theoretical properties, Bayesian neural networks (BNNs) tend to perform worse than frequentist methods in classification-based uncertainty quantification (UQ) tasks such as out-of-distribution (OOD) detection.

Bayesian Inference

Provably Robust Detection of Out-of-distribution Data (almost) for free

1 code implementation8 Jun 2021 Alexander Meinke, Julian Bitterwolf, Matthias Hein

When applying machine learning in safety-critical systems, a reliable assessment of the uncertainty of a classifier is required.

Adversarial robustness against multiple $l_p$-threat models at the price of one and how to quickly fine-tune robust models to another threat model

1 code implementation26 May 2021 Francesco Croce, Matthias Hein

In this way we boost the previous state of the art for multiple-norm robustness by more than $6\%$ on CIFAR-10 and report, to the best of our knowledge, the first ImageNet models with multiple-norm robustness.

Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators

no code implementations16 Apr 2021 David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele

Low-voltage operation of DNN accelerators allows energy consumption to be reduced significantly further, but it causes bit-level failures in the memory storing the quantized DNN weights.

Quantization

Relating Adversarially Robust Generalization to Flat Minima

no code implementations ICCV 2021 David Stutz, Matthias Hein, Bernt Schiele

To this end, we propose average- and worst-case metrics to measure flatness in the robust loss landscape and show a correlation between good robust generalization and flatness.
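
As a toy illustration of these two metrics (my own sketch, not the paper's code), one can estimate average-case flatness by sampling random weight perturbations within a fixed radius, and take the sampled maximum as a crude proxy for the worst case; the quadratic-plus-sine loss below merely stands in for the robust loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy non-convex loss standing in for the robust loss landscape.
    return 0.5 * np.sum(w ** 2) + 0.3 * np.sum(np.sin(5 * w))

def flatness(w, radius=0.5, n_samples=500):
    """Estimate average-case flatness (mean loss change under random weight
    perturbations in a ball) and a sampled proxy for worst-case flatness."""
    base = loss(w)
    changes = []
    for _ in range(n_samples):
        d = rng.normal(size=w.shape)
        d *= radius * rng.uniform() ** (1 / w.size) / np.linalg.norm(d)  # uniform in the ball
        changes.append(loss(w + d) - base)
    changes = np.array(changes)
    return changes.mean(), changes.max()

w = np.array([0.1, -0.2, 0.05])
avg, worst = flatness(w)
print(f"average-case flatness: {avg:.4f}, worst-case (sampled): {worst:.4f}")
```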

Mind the box: $l_1$-APGD for sparse adversarial attacks on image classifiers

2 code implementations1 Mar 2021 Francesco Croce, Matthias Hein

Finally, we combine $l_1$-APGD and an adaptation of the Square Attack to $l_1$ into $l_1$-AutoAttack, an ensemble of attacks which reliably assesses adversarial robustness for the threat model of the $l_1$-ball intersected with $[0, 1]^d$.
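
The threat model here is the intersection of the $l_1$-ball with the box $[0, 1]^d$. The paper derives an efficient exact projection onto this set; the sketch below is only a rough stand-in that alternates a standard $l_1$-ball projection (Duchi et al., 2008) with box clipping to reach a feasible perturbation.

```python
import numpy as np

def project_l1_ball(v, eps):
    """Euclidean projection of v onto the l1-ball of radius eps (Duchi et al., 2008)."""
    u = np.abs(v)
    if u.sum() <= eps:
        return v.copy()
    s = np.sort(u)[::-1]
    cssv = np.cumsum(s)
    rho = np.nonzero(s * np.arange(1, len(s) + 1) > cssv - eps)[0][-1]
    theta = (cssv[rho] - eps) / (rho + 1.0)
    return np.sign(v) * np.maximum(u - theta, 0.0)

def feasible_perturbation(x, delta, eps, n_iter=50):
    """Alternate between the two constraint sets to find a perturbation with
    ||delta||_1 <= eps (approximately) and x + delta in [0, 1]^d exactly.
    Plain alternation yields a feasible point, not the exact Euclidean
    projection onto the intersection, which l1-APGD computes directly."""
    for _ in range(n_iter):
        delta = project_l1_ball(delta, eps)
        delta = np.clip(x + delta, 0.0, 1.0) - x
    return delta

rng = np.random.default_rng(0)
x = rng.uniform(size=10)            # a "clean" input in [0, 1]^d
delta = feasible_perturbation(x, rng.normal(size=10), eps=1.0)
print(f"||delta||_1 = {np.abs(delta).sum():.4f} (budget 1.0)")
print(f"x + delta in [{(x + delta).min():.3f}, {(x + delta).max():.3f}]")
```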

Out-distribution aware Self-training in an Open World Setting

no code implementations21 Dec 2020 Maximilian Augustin, Matthias Hein

The goal of this paper is to leverage unlabeled data in an open world setting to further improve prediction performance.

RobustBench: a standardized adversarial robustness benchmark

1 code implementation19 Oct 2020 Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, Mung Chiang, Prateek Mittal, Matthias Hein

Our goal is to instead establish a standardized benchmark of adversarial robustness, which as accurately as possible reflects the robustness of the considered models within a reasonable computational budget.

Fairness, Out-of-Distribution Detection

An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence

no code implementations6 Oct 2020 Agustinus Kristiadi, Matthias Hein, Philipp Hennig

We extend finite ReLU BNNs with infinite ReLU features via the GP and show that the resulting model is asymptotically maximally uncertain far away from the data while the BNNs' predictive power is unaffected near the data.

Multi-class Classification

Learnable Uncertainty under Laplace Approximations

1 code implementation6 Oct 2020 Agustinus Kristiadi, Matthias Hein, Philipp Hennig

Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs).

Certifiably Adversarially Robust Detection of Out-of-Distribution Data

2 code implementations NeurIPS 2020 Julian Bitterwolf, Alexander Meinke, Matthias Hein

Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class.

Bit Error Robustness for Energy-Efficient DNN Accelerators

1 code implementation24 Jun 2020 David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele

Low-voltage operation of DNN accelerators allows energy consumption to be reduced significantly further, but it causes bit-level failures in the memory storing the quantized DNN weights.

Quantization

Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks

2 code implementations23 Jun 2020 Francesco Croce, Maksym Andriushchenko, Naman D. Singh, Nicolas Flammarion, Matthias Hein

Sparse adversarial perturbations have received much less attention in the literature than $l_2$- and $l_\infty$-attacks.

Malware Detection

Adversarial Robustness on In- and Out-Distribution Improves Explainability

1 code implementation ECCV 2020 Maximilian Augustin, Alexander Meinke, Matthias Hein

Neural networks have led to major improvements in image classification, but they suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-distribution samples, and inscrutable black-box decisions.

Image Classification

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks

6 code implementations ICML 2020 Francesco Croce, Matthias Hein

The field of defense strategies against adversarial attacks has grown significantly in recent years, but progress is hampered because the evaluation of adversarial defenses is often insufficient and thus gives a false impression of robustness.

Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks

1 code implementation ICML 2020 Agustinus Kristiadi, Matthias Hein, Philipp Hennig

These theoretical results validate the use of last-layer Bayesian approximations and motivate a range of fidelity-cost trade-offs.

Bayesian Inference
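
A minimal numpy sketch of the underlying idea, under the assumption of a binary problem with fixed features (my own illustration, not the authors' code): fit the last-layer weights by MAP, form a Laplace (Gaussian) posterior from the Hessian, and use the standard probit-approximated predictive, which shrinks confidence where the predictive variance is large.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary data; the "features" stand in for a fixed network's last-layer features.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# MAP estimate of the last-layer weights (Gaussian prior with precision tau).
tau, w = 1.0, np.zeros(2)
for _ in range(500):
    g = X.T @ (sigmoid(X @ w) - y) + tau * w
    w -= 0.1 * g / len(y)

# Laplace approximation: Gaussian posterior N(w, H^{-1}) with H the Hessian at the MAP.
p = sigmoid(X @ w)
H = (X * (p * (1 - p))[:, None]).T @ X + tau * np.eye(2)
Sigma = np.linalg.inv(H)

def predict(x):
    """Probit-approximated predictive: confidence shrinks as the variance grows."""
    mu, var = x @ w, x @ Sigma @ x
    return sigmoid(mu / np.sqrt(1.0 + np.pi * var / 8.0))

far = np.array([100.0, 100.0])   # a point far away from the training data
print(f"MAP confidence:     {sigmoid(far @ w):.6f}")
print(f"Laplace confidence: {predict(far):.6f}")
```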

Square Attack: a query-efficient black-box adversarial attack via random search

1 code implementation ECCV 2020 Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion, Matthias Hein

We propose the Square Attack, a score-based black-box $l_2$- and $l_\infty$-adversarial attack that does not rely on local gradient information and thus is not affected by gradient masking.

Adversarial Attack
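
Below is a stripped-down sketch of the random-search principle on a toy linear "classifier" (the model, initialization, and square-size schedule are simplified stand-ins for the actual attack): propose a random square of $\pm\epsilon$ values and keep it only if the margin loss decreases.

```python
import numpy as np

rng = np.random.default_rng(0)

def margin_loss(model, x, y):
    """Score-based loss: logit of the true class minus the best other logit."""
    logits = model(x)
    return logits[y] - np.max(np.delete(logits, y))

def square_attack(model, x, y, eps=0.1, n_iters=1000):
    """Simplified l_inf random-search attack in the spirit of the Square Attack:
    the real attack uses a more careful square-size schedule and sampling."""
    h, w, c = x.shape
    x_adv = np.clip(x + eps * rng.choice([-1, 1], size=(1, w, c)), 0, 1)  # stripe init
    best = margin_loss(model, x_adv, y)
    for i in range(n_iters):
        s = max(1, int(round(0.5 * h * (1 - i / n_iters))))  # shrinking square size
        r, cl = rng.integers(0, h - s + 1), rng.integers(0, w - s + 1)
        cand = x_adv.copy()
        cand[r:r + s, cl:cl + s] = np.clip(
            x[r:r + s, cl:cl + s] + eps * rng.choice([-1, 1], size=(1, 1, c)), 0, 1)
        loss = margin_loss(model, cand, y)
        if loss < best:
            x_adv, best = cand, loss
            if best < 0:          # misclassified: attack succeeded
                break
    return x_adv

# Toy "classifier": class scores are fixed random linear functions of the flat image.
W = rng.normal(size=(10, 8 * 8 * 3))
model = lambda x: W @ x.ravel()
x0 = rng.uniform(size=(8, 8, 3))
y0 = int(np.argmax(model(x0)))
x_adv = square_attack(model, x0, y0)
print("clean class:", y0, "adversarial class:", int(np.argmax(model(x_adv))))
print("linf distance:", np.abs(x_adv - x0).max())
```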

Generalized Matrix Means for Semi-Supervised Learning with Multilayer Graphs

no code implementations NeurIPS 2019 Pedro Mercado, Francesco Tudisco, Matthias Hein

We study the task of semi-supervised learning on multilayer graphs by taking into account both labeled and unlabeled observations together with the information encoded by each individual graph layer.

Stochastic Block Model

Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks

2 code implementations ICML 2020 David Stutz, Matthias Hein, Bernt Schiele

Our confidence-calibrated adversarial training (CCAT) tackles this problem by biasing the model towards low confidence predictions on adversarial examples.

Towards neural networks that provably know when they don't know

1 code implementation ICLR 2020 Alexander Meinke, Matthias Hein

It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data.

Out-of-Distribution Detection
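
The cited result is easy to reproduce: a bias-free ReLU network is positively homogeneous, so scaling any input scales the logits linearly and pushes the softmax confidence toward 1. A minimal demonstration with a random untrained network (my own, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_net(x, W1, W2):
    # Bias-free ReLU network: positively homogeneous, f(a * x) = a * f(x) for a > 0.
    return W2 @ np.maximum(W1 @ x, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W1, W2 = rng.normal(size=(64, 10)), rng.normal(size=(5, 64))
x = rng.normal(size=10)
for alpha in [1, 10, 100, 1000]:
    conf = softmax(relu_net(alpha * x, W1, W2)).max()
    print(f"alpha = {alpha:5d} -> max confidence = {conf:.6f}")
```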

Sparse and Imperceivable Adversarial Attacks

1 code implementation ICCV 2019 Francesco Croce, Matthias Hein

On the other hand, the pixelwise perturbations of sparse attacks are typically large and can thus potentially be detected.

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack

2 code implementations ICML 2020 Francesco Croce, Matthias Hein

The robustness of neural-network-based classifiers against adversarial manipulation is mainly evaluated with empirical attacks, as methods for exact computation, even when available, do not scale to large networks.

Adversarial Attack

Spectral Clustering of Signed Graphs via Matrix Power Means

no code implementations15 May 2019 Pedro Mercado, Francesco Tudisco, Matthias Hein

Moreover, we prove that the eigenvalues and eigenvectors of the signed power mean Laplacian concentrate around their expectation under reasonable conditions in the general Stochastic Block Model.

Stochastic Block Model

Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem

1 code implementation CVPR 2019 Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf

We show that this technique is surprisingly effective in reducing the confidence of predictions far away from the training data while maintaining high-confidence predictions and test error on the original classification task, compared to standard training.

General Classification

Disentangling Adversarial Robustness and Generalization

2 code implementations CVPR 2019 David Stutz, Matthias Hein, Bernt Schiele

A recent hypothesis even states that both robust and accurate models are impossible, i.e., that adversarial robustness and generalization are conflicting goals.

A randomized gradient-free attack on ReLU networks

no code implementations28 Nov 2018 Francesco Croce, Matthias Hein

Relatively fast heuristics have been proposed to produce such adversarial inputs, but the problem of finding the optimal adversarial input, i.e., the one with the minimal change to the input, is NP-hard.

Object Recognition

Logit Pairing Methods Can Fool Gradient-Based Attacks

1 code implementation29 Oct 2018 Marius Mosbach, Maksym Andriushchenko, Thomas Trost, Matthias Hein, Dietrich Klakow

Recently, Kannan et al. [2018] proposed several logit regularization methods to improve the adversarial robustness of classifiers.

On the loss landscape of a class of deep neural networks with no bad local valleys

no code implementations ICLR 2019 Quynh Nguyen, Mahesh Chandra Mukkamala, Matthias Hein

We identify a class of over-parameterized deep neural networks with standard activation functions and cross-entropy loss which provably have no bad local valley, in the sense that from any point in parameter space there exists a continuous path on which the cross-entropy loss is non-increasing and gets arbitrarily close to zero.

Error estimates for spectral convergence of the graph Laplacian on random geometric graphs towards the Laplace--Beltrami operator

no code implementations30 Jan 2018 Nicolas Garcia Trillos, Moritz Gerlach, Matthias Hein, Dejan Slepcev

We study the spectral convergence of the graph Laplacian of a random geometric graph built from an i.i.d. sample of an $m$-dimensional submanifold $M$ in $\mathbb{R}^d$ as the sample size $n$ increases and the neighborhood size $h$ tends to zero.
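
A small numpy experiment in the spirit of this result (parameters chosen purely for illustration): sample points from the unit circle, build the $h$-neighborhood graph Laplacian, and compare its smallest eigenvalues with the Laplace-Beltrami spectrum of the circle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample n points from the unit circle (a 1-dimensional submanifold of R^2)
# and build the h-neighborhood graph Laplacian from pairwise distances.
n, h = 500, 0.3
theta = rng.uniform(0, 2 * np.pi, n)
X = np.column_stack([np.cos(theta), np.sin(theta)])

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
W = (D < h).astype(float) - np.eye(n)    # adjacency of the h-neighborhood graph
L = np.diag(W.sum(axis=1)) - W           # unnormalized graph Laplacian

# The smallest eigenvalues should approach, up to a scaling constant, the
# Laplace-Beltrami spectrum of the circle: 0, 1, 1, 4, 4, 9, 9, ...
evals = np.linalg.eigvalsh(L)[:7]
print(evals / evals[1])                  # normalize so the first nonzero value is 1
```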

The loss surface and expressivity of deep convolutional neural networks

no code implementations ICLR 2018 Quynh Nguyen, Matthias Hein

We show that such CNNs produce linearly independent features at a “wide” layer which has more neurons than the number of training samples.

Optimization Landscape and Expressivity of Deep CNNs

no code implementations ICML 2018 Quynh Nguyen, Matthias Hein

We show that such CNNs produce linearly independent features at a "wide" layer which has more neurons than the number of training samples.

Community detection in networks via nonlinear modularity eigenvectors

no code implementations18 Aug 2017 Francesco Tudisco, Pedro Mercado, Matthias Hein

In this work we propose a nonlinear relaxation which is instead based on the spectrum of a nonlinear modularity operator $\mathcal M$.

Community Detection

Variants of RMSProp and Adagrad with Logarithmic Regret Bounds

no code implementations ICML 2017 Mahesh Chandra Mukkamala, Matthias Hein

Adaptive gradient methods have recently become very popular, in particular because they have been shown to be useful in training deep neural networks.

Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation

no code implementations NeurIPS 2017 Matthias Hein, Maksym Andriushchenko

In this paper we provide, for the first time, formal guarantees on the robustness of a classifier by giving instance-specific lower bounds on the norm of the input manipulation required to change the classifier's decision.

General Classification
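
For a linear classifier such an instance-specific bound is exact and reduces to a one-line formula; the sketch below (my own illustration) computes it, while the paper's bounds generalize this quantity to nonlinear classifiers via local Lipschitz constants of the logit differences.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_robustness_radius(W, b, x):
    """Exact minimal l2 perturbation changing the decision of the linear
    classifier f_j(x) = w_j^T x + b_j: the distance of x to the nearest
    decision boundary, min_{j != c} (f_c(x) - f_j(x)) / ||w_c - w_j||_2."""
    scores = W @ x + b
    c = int(np.argmax(scores))
    radii = [(scores[c] - scores[j]) / np.linalg.norm(W[c] - W[j])
             for j in range(len(b)) if j != c]
    return c, min(radii)

W, b = rng.normal(size=(4, 5)), rng.normal(size=4)
x = rng.normal(size=5)
c, r = l2_robustness_radius(W, b, x)
print(f"predicted class {c}; decision is provably unchanged within l2 radius {r:.4f}")
```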

The loss surface of deep and wide neural networks

no code implementations ICML 2017 Quynh Nguyen, Matthias Hein

While the optimization problem behind deep neural networks is highly non-convex, it is frequently observed in practice that training deep networks seems possible without getting stuck in suboptimal points.

Clustering Signed Networks with the Geometric Mean of Laplacians

1 code implementation NeurIPS 2016 Pedro Mercado, Francesco Tudisco, Matthias Hein

As a solution, we propose to use the geometric mean of the Laplacians of the positive and negative parts and show that it outperforms the existing approaches.
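
To make the construction concrete: the geometric mean of two SPD matrices is $A \# B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}$. The sketch below (a toy signed graph of my own, not the paper's experiments) computes it and reads the two clusters off the eigenvector of the smallest eigenvalue.

```python
import numpy as np
from scipy.linalg import inv, sqrtm

def geometric_mean(A, B):
    """Geometric mean of SPD matrices: A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = sqrtm(A)
    Ais = inv(As)
    return np.real(As @ sqrtm(Ais @ B @ Ais) @ As)

# Toy signed graph on 4 nodes: clusters {0, 1} and {2, 3}, positive edges
# within clusters and negative edges across them.
Wpos = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
Wneg = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]], float)

eps = 1e-3  # small diagonal shift to make both matrices strictly positive definite
Lpos = np.diag(Wpos.sum(1)) - Wpos + eps * np.eye(4)  # Laplacian of the positive edges
Qneg = np.diag(Wneg.sum(1)) + Wneg + eps * np.eye(4)  # signed Laplacian of the negative edges

G = geometric_mean(Lpos, Qneg)
evals, evecs = np.linalg.eigh((G + G.T) / 2)          # symmetrize numerical noise
print("cluster assignment:", np.sign(evecs[:, 0]))    # eigenvector of the smallest eigenvalue
```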

Analysis and Optimization of Loss Functions for Multiclass, Top-k, and Multilabel Classification

1 code implementation12 Dec 2016 Maksim Lapin, Matthias Hein, Bernt Schiele

In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant.

General Classification, Image Classification

Latent Embeddings for Zero-shot Classification

no code implementations CVPR 2016 Yongqin Xian, Zeynep Akata, Gaurav Sharma, Quynh Nguyen, Matthias Hein, Bernt Schiele

We train the model with a ranking-based objective function which penalizes incorrect rankings of the true class for a given image.

Classification, General Classification +1

Loss Functions for Top-k Error: Analysis and Insights

1 code implementation CVPR 2016 Maksim Lapin, Matthias Hein, Bernt Schiele

In the experiments, we compare on various datasets all of the proposed and established methods for top-k error optimization.

Top-k Multiclass SVM

1 code implementation NeurIPS 2015 Maksim Lapin, Matthias Hein, Bernt Schiele

Class ambiguity is typical in image classification problems with a large number of classes.

General Classification, Image Classification

Efficient Output Kernel Learning for Multiple Tasks

no code implementations NeurIPS 2015 Pratik Jawanpuria, Maksim Lapin, Matthias Hein, Bernt Schiele

The paradigm of multi-task learning is that one can achieve better generalization by learning tasks jointly, thus exploiting the similarity between the tasks, rather than learning them independently of each other.

Multi-Task Learning

An Efficient Multilinear Optimization Framework for Hypergraph Matching

no code implementations9 Nov 2015 Quynh Nguyen, Francesco Tudisco, Antoine Gautier, Matthias Hein

Hypergraph matching has recently become a popular approach for solving correspondence problems in computer vision, as it allows the integration of higher-order geometric information.

Hypergraph Matching

Robust PCA: Optimization of the Robust Reconstruction Error over the Stiefel Manifold

no code implementations1 Jun 2015 Anastasia Podosinnikova, Simon Setzer, Matthias Hein

In contrast to other methods for robust PCA, our method has no free parameters and is computationally very efficient.

Tight Continuous Relaxation of the Balanced $k$-Cut Problem

no code implementations24 May 2015 Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta, Matthias Hein

Spectral Clustering as a relaxation of the normalized/ratio cut has become one of the standard graph-based clustering methods.

Constrained 1-Spectral Clustering

no code implementations24 May 2015 Syama Sundar Rangapuram, Matthias Hein

In contrast to all other methods that have been suggested for constrained spectral clustering, we can always guarantee that all constraints are satisfied.

A Flexible Tensor Block Coordinate Ascent Scheme for Hypergraph Matching

no code implementations CVPR 2015 Quynh Nguyen, Antoine Gautier, Matthias Hein

We propose two algorithms which both come along with the guarantee of monotonic ascent in the matching score on the set of discrete assignment matrices.

Graph Matching, Hypergraph Matching

Regularization-free estimation in trace regression with symmetric positive semidefinite matrices

no code implementations NeurIPS 2015 Martin Slawski, Ping Li, Matthias Hein

Over the past few years, trace regression models have received considerable attention in the context of matrix completion, quantum state tomography, and compressed sensing.

Matrix Completion, Quantum State Tomography

Tight Continuous Relaxation of the Balanced k-Cut Problem

no code implementations NeurIPS 2014 Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta, Matthias Hein

Spectral Clustering as a relaxation of the normalized/ratio cut has become one of the standard graph-based clustering methods.

Matrix factorization with Binary Components

no code implementations NeurIPS 2013 Martin Slawski, Matthias Hein, Pavlo Lutsik

Motivated by an application in computational biology, we consider low-rank matrix factorization with $\{0, 1\}$-constraints on one of the factors and optionally convex constraints on the second one.

Nonlinear Eigenproblems in Data Analysis - Balanced Graph Cuts and the RatioDCA-Prox

no code implementations18 Dec 2013 Leonardo Jost, Simon Setzer, Matthias Hein

It has been recently shown that a large class of balanced graph cuts allows for an exact relaxation into a nonlinear eigenproblem.

The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited

no code implementations NeurIPS 2013 Matthias Hein, Simon Setzer, Leonardo Jost, Syama Sundar Rangapuram

Hypergraphs allow one to encode higher-order relationships in data and are thus a very flexible modeling tool.

Constrained fractional set programs and their application in local clustering and community detection

1 code implementation14 Jun 2013 Thomas Bühler, Syama Sundar Rangapuram, Simon Setzer, Matthias Hein

While a globally optimal solution for the resulting non-convex problem cannot be guaranteed, we outperform the loose convex or spectral relaxations by a large margin on constrained local clustering problems.

Community Detection

Learning Using Privileged Information: SVM+ and Weighted SVM

no code implementations13 Jun 2013 Maksim Lapin, Matthias Hein, Bernt Schiele

Prior knowledge can be used to improve predictive performance of learning algorithms or reduce the amount of data required for training.

Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization

no code implementations4 May 2012 Martin Slawski, Matthias Hein

We show that for these designs, the performance of NNLS with regard to prediction and estimation is comparable to that of the lasso.
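
A quick simulation in the spirit of this claim (the toy setup is mine): with a design of i.i.d. non-negative entries, plain NNLS, which has no tuning parameter, followed by a simple hard threshold recovers a sparse non-negative signal.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Sparse non-negative signal and a design with i.i.d. non-negative entries,
# one flavor of the "self-regularizing" designs considered in the paper.
n, p, k = 100, 200, 5
A = rng.uniform(0, 1, (n, p))
x_true = np.zeros(p)
x_true[rng.choice(p, size=k, replace=False)] = rng.uniform(1, 2, size=k)
b = A @ x_true + 0.01 * rng.normal(size=n)

x_hat, _ = nnls(A, b)                     # non-negative least squares, no tuning parameter
support = np.flatnonzero(x_hat > 0.1)     # simple hard threshold on the NNLS solution
print("true support:     ", np.sort(np.flatnonzero(x_true)))
print("recovered support:", support)
print("max coefficient error:", np.abs(x_hat - x_true).max())
```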

Beyond Spectral Clustering - Tight Relaxations of Balanced Graph Cuts

no code implementations NeurIPS 2011 Matthias Hein, Simon Setzer

Spectral clustering is based on the spectral relaxation of the normalized/ratio graph cut criterion.

Sparse recovery by thresholded non-negative least squares

no code implementations NeurIPS 2011 Martin Slawski, Matthias Hein

Non-negative data are commonly encountered in numerous fields, making non-negative least squares regression (NNLS) a frequently used tool.

Getting lost in space: Large sample analysis of the resistance distance

no code implementations NeurIPS 2010 Ulrike V. Luxburg, Agnes Radl, Matthias Hein

As an alternative we introduce the amplified commute distance that corrects for the undesired large sample effects.

Semi-supervised Regression using Hessian energy with an application to semi-supervised dimensionality reduction

no code implementations NeurIPS 2009 Kwang I. Kim, Florian Steinke, Matthias Hein

Semi-supervised regression based on the graph Laplacian suffers from the fact that the solution is biased towards a constant and lacks extrapolating power.

Supervised dimensionality reduction

Robust Nonparametric Regression with Metric-Space Valued Output

no code implementations NeurIPS 2009 Matthias Hein

Motivated by recent developments in manifold-valued regression, we propose a family of nonparametric kernel-smoothing estimators with metric-space-valued output, including a robust median-type estimator and the classical Fréchet mean.

General Classification, Multi-class Classification

Non-parametric Regression Between Manifolds

no code implementations NeurIPS 2008 Florian Steinke, Matthias Hein

This paper discusses non-parametric regression between Riemannian manifolds.
