Search Results for author: Marius Kloft

Found 47 papers, 20 papers with code

Raising the Bar in Graph-level Anomaly Detection

1 code implementation 27 May 2022 Chen Qiu, Marius Kloft, Stephan Mandt, Maja Rudolph

Graph-level anomaly detection has become a critical topic in diverse areas, such as financial fraud detection and detecting anomalous activities in social networks.

Anomaly Detection Fraud Detection +1

Exposing Outlier Exposure: What Can Be Learned From Few, One, and Zero Outlier Images

no code implementations 23 May 2022 Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, Marius Kloft

Traditionally, anomaly detection (AD) is treated as an unsupervised problem, utilizing only normal samples, due to the intractability of characterizing everything that looks unlike the normal data.

Anomaly Detection

Latent Outlier Exposure for Anomaly Detection with Contaminated Data

1 code implementation 16 Feb 2022 Chen Qiu, Aodong Li, Marius Kloft, Maja Rudolph, Stephan Mandt

We propose a strategy for training an anomaly detector in the presence of unlabeled anomalies that is compatible with a broad class of models.

Anomaly Detection
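As a rough illustration of the idea, here is a minimal Python sketch of the pseudo-labeling step under an assumed contamination rate. The names, the score source, and the fixed contamination value are all hypothetical simplifications; the paper's alternating training loop, which re-scores samples with the model being trained, is omitted.

```python
import random

random.seed(0)
# Hypothetical anomaly scores for an unlabeled, contaminated training batch.
scores = [random.gauss(0.0, 1.0) for _ in range(200)]

def split_by_contamination(scores, contamination=0.1):
    """Pseudo-labeling in the spirit of latent outlier exposure: treat the
    highest-scoring fraction of the unlabeled data as latent anomalies and
    the rest as normal samples."""
    k = int(len(scores) * contamination)
    threshold = sorted(scores)[-k]  # k-th largest score
    return [s >= threshold for s in scores]

labels = split_by_contamination(scores, 0.1)
```

In the full method these labels would feed back into the anomaly detector's training objective before the next re-scoring pass.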

Detecting Anomalies within Time Series using Local Neural Transformations

1 code implementation 8 Feb 2022 Tim Schneider, Chen Qiu, Marius Kloft, Decky Aspandi Latif, Steffen Staab, Stephan Mandt, Maja Rudolph

We develop a new method to detect anomalies within time series, which is essential in many application domains, reaching from self-driving cars, finance, and marketing to medical diagnosis and epidemiology.

Anomaly Detection Epidemiology +3

A systematic approach to random data augmentation on graph neural networks

no code implementations 8 Dec 2021 Billy Joe Franks, Markus Anders, Marius Kloft, Pascal Schweitzer

On the theoretical side, among other results, we formally prove that under natural conditions all instantiations of our framework are universal.

Data Augmentation

Fine-grained Generalization Analysis of Inductive Matrix Completion

no code implementations NeurIPS 2021 Antoine Ledent, Rodrigo Alves, Yunwen Lei, Marius Kloft

In this paper, we bridge the gap between the state-of-the-art theoretical results for matrix completion with the nuclear norm and their equivalent in \textit{inductive matrix completion}: (1) In the distribution-free setting, we prove bounds improving the previously best scaling of $O(rd^2)$ to $\widetilde{O}(d^{3/2}\sqrt{r})$, where $d$ is the dimension of the side information and $r$ is the rank.

Matrix Completion

Learning Interpretable Concept Groups in CNNs

1 code implementation 21 Sep 2021 Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, Damian Borth, Marius Kloft

We propose a novel training methodology -- Concept Group Learning (CGL) -- that encourages training of interpretable CNN filters by partitioning filters in each layer into concept groups, each of which is trained to learn a single visual concept.
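A group-wise regularizer is one simple way to encourage such specialization; the sketch below shows a plain-Python group penalty over flattened filter weights. The function name, the grouping-by-consecutive-blocks convention, and the L2-per-group form are illustrative assumptions, not the paper's exact CGL objective.

```python
import math

def concept_group_penalty(filters, group_size):
    """Group-sparsity style regularizer (sketch): sum of L2 norms of the
    filters in each concept group, encouraging each group to specialize
    or switch off as a unit. `filters` is a list of flattened filter
    weight lists; consecutive blocks of `group_size` filters form one
    hypothetical concept group."""
    penalty = 0.0
    for g in range(0, len(filters), group_size):
        group = filters[g:g + group_size]
        penalty += math.sqrt(sum(w * w for f in group for w in f))
    return penalty
```

Because the norm is taken per group rather than per weight, gradients push whole concept groups toward or away from zero together.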

Explainability Requires Interactivity

2 code implementations 16 Sep 2021 Matthias Kirchler, Martin Graf, Marius Kloft, Christoph Lippert

When explaining the decisions of deep neural networks, simple stories are tempting but dangerous.

Explaining Bayesian Neural Networks

no code implementations 23 Aug 2021 Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft

Bayesian approaches such as Bayesian Neural Networks (BNNs) so far have a limited form of transparency (model transparency) already built-in through their prior weight distribution, but notably, they lack explanations of their predictions for given instances.

Decision Making

Fine-grained Generalization Analysis of Structured Output Prediction

no code implementations 31 May 2021 Waleed Mustafa, Yunwen Lei, Antoine Ledent, Marius Kloft

Existing generalization analysis implies generalization bounds with at least a square-root dependency on the cardinality $d$ of the label set, which can be vacuous in practice.

Generalization Bounds Natural Language Processing +1

Fine-grained Generalization Analysis of Vector-valued Learning

no code implementations 29 Apr 2021 Liang Wu, Antoine Ledent, Yunwen Lei, Marius Kloft

In this paper, we initiate the generalization analysis of regularized vector-valued learning algorithms by presenting bounds with a mild dependency on the output dimension and a fast rate on the sample size.

Extreme Multi-Label Classification General Classification +3

Neural Transformation Learning for Deep Anomaly Detection Beyond Images

1 code implementation 30 Mar 2021 Chen Qiu, Timo Pfrommer, Marius Kloft, Stephan Mandt, Maja Rudolph

Data transformations (e.g. rotations, reflections, and cropping) play an important role in self-supervised learning.

Anomaly Detection Self-Supervised Learning +1

Sharper Generalization Bounds for Pairwise Learning

no code implementations NeurIPS 2020 Yunwen Lei, Antoine Ledent, Marius Kloft

Pairwise learning refers to learning tasks with loss functions depending on a pair of training examples, which includes ranking and metric learning as specific examples.

Generalization Bounds Metric Learning
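To make the pairwise setting concrete, here is a minimal sketch of a canonical pairwise loss, a hinge over all ordered pairs whose labels disagree, as used in ranking. The function name and margin default are illustrative; the paper analyzes such losses theoretically rather than prescribing this implementation.

```python
def pairwise_hinge_loss(scores, labels, margin=1.0):
    """Average hinge loss over all ordered pairs (i, j) with
    labels[i] > labels[j]: a pair is penalized when the higher-labeled
    example is not scored at least `margin` above the lower-labeled one."""
    loss, count = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if labels[i] > labels[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                count += 1
    return loss / max(count, 1)
```

The O(n^2) dependence between pairs is exactly what makes the generalization analysis of such losses harder than in the standard i.i.d. single-example setting.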

A Unifying Review of Deep and Shallow Anomaly Detection

no code implementations 24 Sep 2020 Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft, Thomas G. Dietterich, Klaus-Robert Müller

Deep learning approaches to anomaly detection have recently improved the state of the art in detection performance on complex datasets such as large collections of images or text.

Anomaly Detection

Input Hessian Regularization of Neural Networks

no code implementations 14 Sep 2020 Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft

Regularizing the input gradient has shown to be effective in promoting the robustness of neural networks.

Adversarial Attack

Explainable Deep One-Class Classification

1 code implementation ICLR 2021 Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Marius Kloft, Klaus-Robert Müller

Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away.

Ranked #2 on Anomaly Detection on One-class ImageNet-30 (using extra training data)

Anomaly Detection Classification +2

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks

1 code implementation 16 Jun 2020 Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft

Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent to the user and thereby trustworthy, also for applications in, e.g., safety-critical areas.

Rethinking Assumptions in Deep Anomaly Detection

1 code implementation 30 May 2020 Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, Marius Kloft

Though anomaly detection (AD) can be viewed as a classification problem (nominal vs. anomalous) it is usually treated in an unsupervised manner since one typically does not have access to, or it is infeasible to utilize, a dataset that sufficiently characterizes what it means to be "anomalous."

Anomaly Detection

Orthogonal Inductive Matrix Completion

no code implementations 3 Apr 2020 Antoine Ledent, Rodrigo Alves, Marius Kloft

We propose orthogonal inductive matrix completion (OMIC), an interpretable approach to matrix completion based on a sum of multiple orthonormal side information terms, together with nuclear-norm regularization.

Matrix Completion

Machine Learning in Thermodynamics: Prediction of Activity Coefficients by Matrix Completion

no code implementations 29 Jan 2020 Fabian Jirasek, Rodrigo A. S. Alves, Julie Damay, Robert A. Vandermeulen, Robert Bamler, Michael Bortz, Stephan Mandt, Marius Kloft, Hans Hasse

Activity coefficients, which are a measure of the non-ideality of liquid mixtures, are a key property in chemical engineering with relevance to modeling chemical and phase equilibria as well as transport processes.

Matrix Completion

Simple and Effective Prevention of Mode Collapse in Deep One-Class Classification

no code implementations 24 Jan 2020 Penny Chong, Lukas Ruff, Marius Kloft, Alexander Binder

However, deep SVDD suffers from hypersphere collapse -- also known as mode collapse -- if the model's architecture does not comply with certain constraints, e.g. the removal of bias terms.

Anomaly Detection General Classification

Effective End-to-end Unsupervised Outlier Detection via Inlier Priority of Discriminative Network

1 code implementation NeurIPS 2019 Siqi Wang, Yijie Zeng, Xinwang Liu, En Zhu, Jianping Yin, Chuanfu Xu, Marius Kloft

Despite the wide success of deep neural networks (DNN), little progress has been made on end-to-end unsupervised outlier detection (UOD) from high dimensional data like raw images.

Outlier Detection Representation Learning +1

Two-sample Testing Using Deep Learning

1 code implementation 14 Oct 2019 Matthias Kirchler, Shahryar Khorasani, Marius Kloft, Christoph Lippert

We propose a two-sample testing procedure based on learned deep neural network representations.

Transfer Learning Two-sample testing
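A permutation test on a mean-difference statistic illustrates the two-sample setup; the sketch below uses raw feature vectors as a stand-in for the learned network representations the paper tests on, and all names and parameters are illustrative assumptions.

```python
import random

def mean_diff(x, y):
    """Euclidean distance between the per-dimension sample means of x and y."""
    d = len(x[0])
    return sum(
        (sum(p[j] for p in x) / len(x) - sum(q[j] for q in y) / len(y)) ** 2
        for j in range(d)
    ) ** 0.5

def permutation_test(x, y, n_perm=500, seed=0):
    """Two-sample permutation test: the p-value is the fraction of random
    relabelings whose statistic is at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = mean_diff(x, y)
    pooled = x + y
    n, hits = len(x), 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if mean_diff(pooled[:n], pooled[n:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

rng = random.Random(1)
x = [[rng.gauss(0.0, 1.0) for _ in range(5)] for _ in range(100)]
y = [[rng.gauss(0.8, 1.0) for _ in range(5)] for _ in range(100)]
p_value = permutation_test(x, y)
```

With a mean shift of 0.8 in every dimension, the observed statistic lies far in the tail of the permutation distribution, so the test rejects at conventional levels.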

Analyzing the Variance of Policy Gradient Estimators for the Linear-Quadratic Regulator

no code implementations 2 Oct 2019 James A. Preiss, Sébastien M. R. Arnold, Chen-Yu Wei, Marius Kloft

We study the variance of the REINFORCE policy gradient estimator in environments with continuous state and action spaces, linear dynamics, quadratic cost, and Gaussian noise.

Self-Attentive, Multi-Context One-Class Classification for Unsupervised Anomaly Detection on Text

1 code implementation ACL 2019 Lukas Ruff, Yury Zemlyanskiy, Robert Vandermeulen, Thomas Schnake, Marius Kloft

There exist few text-specific methods for unsupervised anomaly detection, and for those that do exist, none utilize pre-trained models for distributed vector representations of words.

Contextual Anomaly Detection General Classification +1

Norm-based generalisation bounds for multi-class convolutional neural networks

no code implementations 29 May 2019 Antoine Ledent, Waleed Mustafa, Yunwen Lei, Marius Kloft

This holds even when formulating the bounds in terms of the $L^2$-norm of the weight matrices, where previous bounds exhibit at least a square-root dependence on the number of classes.

Deep One-Class Classification

1 code implementation ICML 2018 Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, Marius Kloft

Despite the great advances made by deep learning in many machine learning problems, there is a relative dearth of deep learning approaches for anomaly detection.

Anomaly Detection Classification +2
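The core objective can be sketched in a few lines: map samples near a fixed center and score anomalies by their distance from it. The network mapping itself is assumed and omitted here, and the function names are illustrative rather than the paper's implementation.

```python
def svdd_anomaly_score(feature, center):
    """Deep SVDD-style anomaly score (sketch): squared Euclidean distance of a
    mapped sample to the hypersphere center; larger means more anomalous."""
    return sum((f - c) ** 2 for f, c in zip(feature, center))

def svdd_objective(features, center):
    """One-class training objective: mean squared distance to the center,
    minimized over the (omitted) network parameters during training."""
    return sum(svdd_anomaly_score(f, center) for f in features) / len(features)
```

Keeping the center fixed (rather than learned jointly with unconstrained biases) is one of the ingredients that prevents the trivial constant-mapping solution.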

Scalable Generalized Dynamic Topic Models

1 code implementation 21 Mar 2018 Patrick Jähnichen, Florian Wenzel, Marius Kloft, Stephan Mandt

First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs).

Event Detection Gaussian Processes +2

Efficient Gaussian Process Classification Using Polya-Gamma Data Augmentation

3 code implementations 18 Feb 2018 Florian Wenzel, Theo Galy-Fajou, Christian Donner, Marius Kloft, Manfred Opper

We propose a scalable stochastic variational approach to GP classification building on Polya-Gamma data augmentation and inducing points.

Classification Data Augmentation +1
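The augmentation builds on the Polya-Gamma integral identity, stated here as background (the paper's inducing-point and variational details go beyond this sketch): for $\kappa = a - b/2$ and $\omega \sim \mathrm{PG}(b, 0)$ with density $p(\omega)$,

$$\frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}} = 2^{-b}\, e^{\kappa \psi} \int_{0}^{\infty} e^{-\omega \psi^{2}/2}\, p(\omega)\, d\omega.$$

Conditioned on $\omega$, the logistic likelihood becomes Gaussian in $\psi$, which is what makes the variational updates conjugate and hence scalable.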

Anomaly Detection with Generative Adversarial Networks

no code implementations ICLR 2018 Lucas Deecke, Robert Vandermeulen, Lukas Ruff, Stephan Mandt, Marius Kloft

Many anomaly detection methods exist that perform well on low-dimensional problems; however, there is a notable lack of effective methods for high-dimensional spaces, such as images.

Anomaly Detection

Bayesian Nonlinear Support Vector Machines for Big Data

3 code implementations 18 Jul 2017 Florian Wenzel, Theo Galy-Fajou, Matthaeus Deutsch, Marius Kloft

We propose a fast inference method for Bayesian nonlinear support vector machines that leverages stochastic variational inference and inducing points.

Variational Inference

Data-dependent Generalization Bounds for Multi-class Classification

no code implementations 29 Jun 2017 Yunwen Lei, Urun Dogan, Ding-Xuan Zhou, Marius Kloft

In this paper, we study data-dependent generalization error bounds exhibiting a mild dependency on the number of classes, making them suitable for multi-class learning with a large number of label classes.

Classification General Classification +2

Distributed Optimization of Multi-Class SVMs

1 code implementation 25 Nov 2016 Maximilian Alber, Julian Zimmert, Urun Dogan, Marius Kloft

Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way.

Distributed Optimization General Classification +1

Feature Importance Measure for Non-linear Learning Algorithms

1 code implementation 22 Nov 2016 Marina M.-C. Vidovic, Nico Görnitz, Klaus-Robert Müller, Marius Kloft

MFI is general and can be applied to any learning machine (including kernel machines and deep learning).

Feature Importance

Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning

no code implementations 18 Feb 2016 Niloofar Yousefi, Yunwen Lei, Marius Kloft, Mansooreh Mollaghasemi, Georgios Anagnostopoulos

We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), using which we establish sharp excess risk bounds for MTL in terms of distribution- and data-dependent versions of the Local Rademacher Complexity (LRC).

Multi-Task Learning

Sparse Probit Linear Mixed Model

no code implementations 16 Jul 2015 Stephan Mandt, Florian Wenzel, Shinichi Nakajima, John P. Cunningham, Christoph Lippert, Marius Kloft

Formulated as models for linear regression, LMMs have been restricted to continuous phenotypes.

feature selection

Framework for Multi-task Multiple Kernel Learning and Applications in Genome Analysis

no code implementations 30 Jun 2015 Christian Widmer, Marius Kloft, Vipin T Sreedharan, Gunnar Rätsch

We present a general regularization-based framework for Multi-task learning (MTL), in which the similarity between tasks can be learned or refined using $\ell_p$-norm Multiple Kernel learning (MKL).

Multi-Task Learning
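The kernel-weight update at the heart of $\ell_p$-norm MKL admits a simple closed form; the sketch below implements that analytic step in plain Python, with illustrative names and the per-kernel weight-block norms assumed to come from an SVM solver that is not shown.

```python
def lp_mkl_weights(w_norms, p=2.0):
    """One analytic kernel-weight update of lp-norm MKL (sketch): set
    theta_m proportional to ||w_m||^(2/(p+1)) and normalize the weight
    vector so that its lp norm equals 1. `w_norms` are the per-kernel
    norms of the current SVM weight blocks."""
    theta = [w ** (2.0 / (p + 1.0)) for w in w_norms]
    z = sum(t ** p for t in theta) ** (1.0 / p)
    return [t / z for t in theta]

# Kernels whose weight blocks carry more of the solution get larger weights.
theta = lp_mkl_weights([1.0, 2.0, 0.5], p=2.0)
```

As $p \to 1$ the normalization drives sparse kernel combinations, while larger $p$ spreads weight more uniformly across kernels.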

Localized Multiple Kernel Learning---A Convex Approach

no code implementations 14 Jun 2015 Yunwen Lei, Alexander Binder, Ürün Dogan, Marius Kloft

We propose a localized approach to multiple kernel learning that can be formulated as a convex optimization problem over a given cluster structure.

Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to Novel Algorithms

no code implementations NeurIPS 2015 Yunwen Lei, Ürün Dogan, Alexander Binder, Marius Kloft

This paper studies the generalization performance of multi-class classification algorithms, for which we obtain, for the first time, a data-dependent generalization error bound with a logarithmic dependence on the class size, substantially improving the state-of-the-art linear dependence in the existing data-dependent generalization analysis.

General Classification Generalization Bounds +1

Probabilistic Clustering of Time-Evolving Distance Data

no code implementations 14 Apr 2015 Julia E. Vogt, Marius Kloft, Stefan Stark, Sudhir S. Raman, Sandhya Prabhakaran, Volker Roth, Gunnar Rätsch

We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points.

Localized Complexities for Transductive Learning

no code implementations 26 Nov 2014 Ilya Tolstikhin, Gilles Blanchard, Marius Kloft

We show two novel concentration inequalities for suprema of empirical processes when sampling without replacement, which both take the variance of the functions into account.

Learning Theory

The Local Rademacher Complexity of Lp-Norm Multiple Kernel Learning

no code implementations NeurIPS 2011 Marius Kloft, Gilles Blanchard

We derive an upper bound on the local Rademacher complexity of Lp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches.

Efficient and Accurate Lp-Norm Multiple Kernel Learning

no code implementations NeurIPS 2009 Marius Kloft, Ulf Brefeld, Pavel Laskov, Klaus-Robert Müller, Alexander Zien, Sören Sonnenburg

Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations and hence support interpretability.
