1 code implementation • NeurIPS 2023 • Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne
Deep Neural Networks (DNNs) have demonstrated remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown.
no code implementations • 12 Sep 2023 • Charu James, Mayank Nagda, Nooshin Haji Ghassemi, Marius Kloft, Sophie Fellenz
There is a lack of quantitative measures to evaluate the progression of topics through time in dynamic topic models (DTMs).
no code implementations • 25 Aug 2023 • Phil Ostheimer, Mayank Nagda, Marius Kloft, Sophie Fellenz
This suggests that LLMs could be a feasible alternative to human evaluation and other automated metrics in TST evaluation.
no code implementations • 1 Jun 2023 • Phil Ostheimer, Mayank Nagda, Marius Kloft, Sophie Fellenz
Text Style Transfer (TST) evaluation is, in practice, inconsistent.
no code implementations • 10 Mar 2023 • Fabian Hartung, Billy Joe Franks, Tobias Michels, Dennis Wagner, Philipp Liznerski, Steffen Reithermann, Sophie Fellenz, Fabian Jirasek, Maja Rudolph, Daniel Neider, Heike Leitte, Chen Song, Benjamin Kloepper, Stephan Mandt, Michael Bortz, Jakob Burger, Hans Hasse, Marius Kloft
Our extensive study will facilitate choosing appropriate anomaly detection methods in industrial applications.
1 code implementation • NeurIPS 2023 • Aodong Li, Chen Qiu, Marius Kloft, Padhraic Smyth, Maja Rudolph, Stephan Mandt
Anomaly detection (AD) plays a crucial role in many safety-critical application domains.
1 code implementation • 15 Feb 2023 • Aodong Li, Chen Qiu, Marius Kloft, Padhraic Smyth, Stephan Mandt, Maja Rudolph
Selecting informative data points for expert feedback can significantly improve the performance of anomaly detection (AD) in various contexts, such as medical diagnostics or fraud detection.
no code implementations • 23 Jan 2023 • Billy Joe Franks, Benjamin Dinkelmann, Sophie Fellenz, Marius Kloft
As is common in popular games, there is a large number of community-designed levels.
no code implementations • 16 Dec 2022 • Antoine Ledent, Rodrigo Alves, Yunwen Lei, Yann Guermeur, Marius Kloft
We study inductive matrix completion (matrix completion with side information) under an i.i.d.
1 code implementation • 25 Oct 2022 • Ajay Chawda, Stefanie Grimm, Marius Kloft
Since one-hot encoding of a high-cardinality dataset invokes the "curse of dimensionality", we experiment with GEL encoding and an embedding layer for representing categorical attributes (see the sketch below).
Ranked #1 on Anomaly Detection on Vehicle Claims
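To make the categorical-encoding idea above concrete, here is a minimal PyTorch sketch of replacing a one-hot encoded, high-cardinality attribute with a learned embedding layer. The attribute, its cardinality, and the embedding size are illustrative assumptions, not values from the Vehicle Claims dataset, and the GEL encoding is not shown.

```python
import torch
import torch.nn as nn

# Minimal sketch: replace a one-hot encoded categorical column with a dense
# embedding. Names and dimensions are illustrative placeholders.
num_categories = 5000   # e.g. a high-cardinality attribute such as a repair-shop ID
embedding_dim = 16      # dense representation instead of a 5000-dim one-hot vector

embed = nn.Embedding(num_categories, embedding_dim)

category_ids = torch.tensor([12, 4711, 42])   # integer-coded category values
dense = embed(category_ids)                   # shape: (3, 16)
print(dense.shape)
```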
1 code implementation • 29 Sep 2022 • Matthias Kirchler, Christoph Lippert, Marius Kloft
Normalizing flows are powerful non-parametric statistical models that function as a hybrid between density estimators and generative models.
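As a minimal illustration of this hybrid role, the PyTorch sketch below builds a toy flow from a base Gaussian and fixed invertible transforms: the same object scores data via log_prob (density estimation) and draws samples (generation). The fixed transforms are stand-ins for the learned invertible layers of a real flow.

```python
import torch
import torch.distributions as D

# Toy flow: an invertible transform of a simple base density. The same
# object acts as a density estimator (log_prob) and a generator (sample).
base = D.Normal(torch.tensor(0.0), torch.tensor(1.0))
flow = D.TransformedDistribution(
    base,
    [D.AffineTransform(loc=1.0, scale=0.5),  # stand-ins for learned layers
     D.ExpTransform()],
)

x = flow.sample((5,))      # generative direction: draw 5 samples
logp = flow.log_prob(x)    # density-estimation direction: score them
print(x, logp)
```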
1 code implementation • 27 May 2022 • Chen Qiu, Marius Kloft, Stephan Mandt, Maja Rudolph
Graph-level anomaly detection has become a critical topic in diverse areas, such as financial fraud detection and detecting anomalous activities in social networks.
1 code implementation • 23 May 2022 • Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, Marius Kloft
We find that standard classifiers and semi-supervised one-class methods trained to discern between normal samples and relatively few random natural images can outperform the current state of the art on an established AD benchmark with ImageNet (see the sketch below).
Ranked #1 on Anomaly Detection on One-class CIFAR-10 (using extra training data)
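The sketch referenced above is a minimal, hypothetical illustration of that setup: a standard binary classifier is trained to discern nominal samples from a handful of random natural images used as auxiliary outliers, and its output probability serves as an anomaly score. Architecture, data, and hyperparameters are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Illustrative binary classifier: normal samples (label 0) vs. a few random
# natural images used as auxiliary outliers (label 1). Its sigmoid output
# is then used as an anomaly score at test time.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                      nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

normal = torch.randn(64, 3, 32, 32)    # stand-in for nominal training data
outliers = torch.randn(8, 3, 32, 32)   # stand-in for a few natural images

x = torch.cat([normal, outliers])
y = torch.cat([torch.zeros(len(normal)), torch.ones(len(outliers))])

logits = model(x).squeeze(1)
loss = bce(logits, y)
loss.backward()
opt.step()

# Higher score = more anomalous.
anomaly_score = torch.sigmoid(model(torch.randn(4, 3, 32, 32))).squeeze(1)
```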
1 code implementation • 16 Feb 2022 • Chen Qiu, Aodong Li, Marius Kloft, Maja Rudolph, Stephan Mandt
We propose a strategy for training an anomaly detector in the presence of unlabeled anomalies that is compatible with a broad class of models.
1 code implementation • 8 Feb 2022 • Tim Schneider, Chen Qiu, Marius Kloft, Decky Aspandi Latif, Steffen Staab, Stephan Mandt, Maja Rudolph
We develop a new method to detect anomalies within time series, which is essential in many application domains, ranging from self-driving cars, finance, and marketing to medical diagnosis and epidemiology.
no code implementations • 8 Dec 2021 • Billy Joe Franks, Markus Anders, Marius Kloft, Pascal Schweitzer
On the theoretical side, among other results, we formally prove that under natural conditions all instantiations of our framework are universal.
no code implementations • NeurIPS 2021 • Antoine Ledent, Rodrigo Alves, Yunwen Lei, Marius Kloft
In this paper, we bridge the gap between the state-of-the-art theoretical results for matrix completion with the nuclear norm and their equivalent in \textit{inductive matrix completion}: (1) In the distribution-free setting, we prove bounds improving the previously best scaling of $O(rd^2)$ to $\widetilde{O}(d^{3/2}\sqrt{r})$, where $d$ is the dimension of the side information and $r$ is the rank.
1 code implementation • 21 Sep 2021 • Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, Damian Borth, Marius Kloft
We propose a novel training methodology -- Concept Group Learning (CGL) -- that encourages training of interpretable CNN filters by partitioning filters in each layer into concept groups, each of which is trained to learn a single visual concept.
2 code implementations • 16 Sep 2021 • Matthias Kirchler, Martin Graf, Marius Kloft, Christoph Lippert
When explaining the decisions of deep neural networks, simple stories are tempting but dangerous.
no code implementations • 23 Aug 2021 • Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft
Bayesian approaches such as Bayesian Neural Networks (BNNs) already have a limited form of transparency (model transparency) built in through their prior weight distribution, but they notably lack explanations of their predictions for given instances.
no code implementations • 31 May 2021 • Waleed Mustafa, Yunwen Lei, Antoine Ledent, Marius Kloft
Existing generalization analysis implies generalization bounds with at least a square-root dependency on the cardinality $d$ of the label set, which can be vacuous in practice.
no code implementations • 29 Apr 2021 • Liang Wu, Antoine Ledent, Yunwen Lei, Marius Kloft
In this paper, we initiate the generalization analysis of regularized vector-valued learning algorithms by presenting bounds with a mild dependency on the output dimension and a fast rate on the sample size.
Tasks: Extreme Multi-Label Classification, General Classification (+2 more)
3 code implementations • 30 Mar 2021 • Chen Qiu, Timo Pfrommer, Marius Kloft, Stephan Mandt, Maja Rudolph
Data transformations (e.g. rotations, reflections, and cropping) play an important role in self-supervised learning.
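For readers unfamiliar with how such transformations are used, the sketch below shows a classic hand-crafted example, a rotation-prediction pretext task; note that the paper itself proposes learnable neural transformations rather than fixed image transformations like these.

```python
import torch

# Illustrative hand-crafted transformation pretext task (rotation prediction).
# The paper replaces such fixed image transformations with learnable neural
# transformations; this only shows the general role transformations play.
def rotated_views(x):
    """Return the four 90-degree rotations of a batch of images together
    with the index of the applied rotation as a pretext label."""
    views, labels = [], []
    for k in range(4):
        views.append(torch.rot90(x, k, dims=(-2, -1)))
        labels.append(torch.full((x.shape[0],), k, dtype=torch.long))
    return torch.cat(views), torch.cat(labels)

images = torch.randn(8, 3, 32, 32)
views, pretext_labels = rotated_views(images)   # (32, 3, 32, 32), (32,)
# A classifier trained to predict `pretext_labels` from `views` learns
# features without any human annotation.
```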
no code implementations • NeurIPS 2020 • Yunwen Lei, Antoine Ledent, Marius Kloft
Pairwise learning refers to learning tasks with loss functions depending on a pair of training examples, which includes ranking and metric learning as specific examples.
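A minimal sketch of what "loss functions depending on a pair of training examples" means in practice, using a standard margin ranking loss as one instance of pairwise learning (the paper's analysis is not tied to this particular loss):

```python
import torch
import torch.nn as nn

# Pairwise loss example: the loss is computed on a *pair* of examples.
# A margin ranking loss prefers score(x_pos) > score(x_neg) by a margin.
scorer = nn.Linear(10, 1)
x_pos, x_neg = torch.randn(16, 10), torch.randn(16, 10)

loss_fn = nn.MarginRankingLoss(margin=1.0)
# target = +1 means the first input should be ranked higher than the second.
loss = loss_fn(scorer(x_pos).squeeze(1), scorer(x_neg).squeeze(1), torch.ones(16))
```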
no code implementations • 24 Sep 2020 • Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft, Thomas G. Dietterich, Klaus-Robert Müller
Deep learning approaches to anomaly detection have recently improved the state of the art in detection performance on complex datasets such as large collections of images or text.
no code implementations • 14 Sep 2020 • Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft
Regularizing the input gradient has been shown to be effective in promoting the robustness of neural networks.
1 code implementation • 27 Aug 2020 • Guang Yu, Siqi Wang, Zhiping Cai, En Zhu, Chuanfu Xu, Jianping Yin, Marius Kloft
To build such a visual cloze test, a certain patch of STC is erased to yield an incomplete event (IE).
Ranked #14 on Anomaly Detection on CUHK Avenue
2 code implementations • ICLR 2021 • Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Marius Kloft, Klaus-Robert Müller
Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away.
Ranked #5 on Anomaly Detection on One-class ImageNet-30 (using extra training data)
1 code implementation • 16 Jun 2020 • Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft
Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent to the user and, furthermore, trustworthy for applications in, e.g., safety-critical areas.
1 code implementation • 30 May 2020 • Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, Marius Kloft
Though anomaly detection (AD) can be viewed as a classification problem (nominal vs. anomalous), it is usually treated in an unsupervised manner, since one typically does not have access to, or it is infeasible to utilize, a dataset that sufficiently characterizes what it means to be "anomalous."
no code implementations • 3 Apr 2020 • Antoine Ledent, Rodrigo Alves, Marius Kloft
We propose orthogonal inductive matrix completion (OMIC), an interpretable approach to matrix completion based on a sum of multiple orthonormal side information terms, together with nuclear-norm regularization.
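As background, the sketch below shows plain nuclear-norm regularized matrix completion via the standard soft-impute iteration; OMIC's orthonormal side-information terms are not modeled here, so this only illustrates the regularizer the method builds on.

```python
import numpy as np

# Plain nuclear-norm regularized matrix completion via soft-impute
# (singular-value soft-thresholding). OMIC's side-information terms
# are deliberately omitted in this background sketch.
def soft_impute(M, mask, lam=1.0, n_iter=100):
    """M: observed matrix (arbitrary values where mask == 0),
    mask: 1 where an entry is observed, 0 where it is missing."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        # Keep observed entries, fill missing ones with the current estimate.
        filled = mask * M + (1 - mask) * X
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)      # soft-threshold the singular values
        X = (U * s) @ Vt
    return X

M = np.random.randn(20, 15)
mask = (np.random.rand(20, 15) < 0.5).astype(float)
completed = soft_impute(M * mask, mask, lam=2.0)
```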
no code implementations • 29 Jan 2020 • Fabian Jirasek, Rodrigo A. S. Alves, Julie Damay, Robert A. Vandermeulen, Robert Bamler, Michael Bortz, Stephan Mandt, Marius Kloft, Hans Hasse
Activity coefficients, which are a measure of the non-ideality of liquid mixtures, are a key property in chemical engineering with relevance to modeling chemical and phase equilibria as well as transport processes.
no code implementations • 24 Jan 2020 • Penny Chong, Lukas Ruff, Marius Kloft, Alexander Binder
However, deep SVDD suffers from hypersphere collapse, also known as mode collapse, if the architecture of the model does not comply with certain architectural constraints, e.g. the removal of bias terms.
1 code implementation • NeurIPS 2019 • Siqi Wang, Yijie Zeng, Xinwang Liu, En Zhu, Jianping Yin, Chuanfu Xu, Marius Kloft
Despite the wide success of deep neural networks (DNNs), little progress has been made on end-to-end unsupervised outlier detection (UOD) from high dimensional data like raw images.
1 code implementation • 14 Oct 2019 • Matthias Kirchler, Shahryar Khorasani, Marius Kloft, Christoph Lippert
We propose a two-sample testing procedure based on learned deep neural network representations.
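A minimal sketch of the general recipe, assuming the representations have already been extracted by a trained network: compare the two samples in feature space and calibrate the statistic with a permutation test. The mean-difference statistic below is an illustrative choice, not necessarily the one used in the paper.

```python
import numpy as np

# Two-sample permutation test on learned representations.
# `features_x` / `features_y` stand for network embeddings of the two samples.
def mean_diff_stat(fx, fy):
    return np.linalg.norm(fx.mean(axis=0) - fy.mean(axis=0))

def permutation_test(fx, fy, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = mean_diff_stat(fx, fy)
    pooled = np.vstack([fx, fy])
    n = len(fx)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        if mean_diff_stat(pooled[perm[:n]], pooled[perm[n:]]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)   # p-value with add-one correction

features_x = np.random.randn(100, 64)   # embeddings of sample X
features_y = np.random.randn(120, 64)   # embeddings of sample Y
print(permutation_test(features_x, features_y))
```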
no code implementations • 2 Oct 2019 • James A. Preiss, Sébastien M. R. Arnold, Chen-Yu Wei, Marius Kloft
We study the variance of the REINFORCE policy gradient estimator in environments with continuous state and action spaces, linear dynamics, quadratic cost, and Gaussian noise.
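For context, the sketch below implements the plain REINFORCE estimator whose variance is being studied, on an illustrative Gaussian-policy, quadratic-cost stand-in; the exact LQR setting and analysis of the paper are not reproduced.

```python
import torch

# Plain REINFORCE: the policy gradient is estimated by return-weighted
# score-function terms along a sampled trajectory. Policy and cost below
# are illustrative stand-ins for the linear-quadratic-Gaussian setting.
policy_mean = torch.nn.Linear(4, 2)          # linear Gaussian policy mean
log_std = torch.zeros(2, requires_grad=True)

states = torch.randn(50, 4)                  # a sampled trajectory of states
dist = torch.distributions.Normal(policy_mean(states), log_std.exp())
actions = dist.sample()
rewards = -(actions ** 2).sum(dim=1)         # stand-in quadratic cost

returns = rewards.flip(0).cumsum(0).flip(0)  # reward-to-go at each step
log_probs = dist.log_prob(actions).sum(dim=1)
loss = -(log_probs * returns.detach()).mean()  # REINFORCE surrogate loss
loss.backward()                                # gradients estimate the policy gradient
```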
1 code implementation • ACL 2019 • Lukas Ruff, Yury Zemlyanskiy, Robert Vandermeulen, Thomas Schnake, Marius Kloft
There exist few text-specific methods for unsupervised anomaly detection, and for those that do exist, none utilize pre-trained models for distributed vector representations of words.
7 code implementations • ICLR 2020 • Lukas Ruff, Robert A. Vandermeulen, Nico Görnitz, Alexander Binder, Emmanuel Müller, Klaus-Robert Müller, Marius Kloft
Deep approaches to anomaly detection have recently shown promising results over shallow methods on large and complex datasets.
no code implementations • 29 May 2019 • Antoine Ledent, Waleed Mustafa, Yunwen Lei, Marius Kloft
This holds even when formulating the bounds in terms of the $L^2$-norm of the weight matrices, where previous bounds exhibit at least a square-root dependence on the number of classes.
1 code implementation • ICML 2018 • Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, Marius Kloft
Despite the great advances made by deep learning in many machine learning problems, there is a relative dearth of deep learning approaches for anomaly detection (see the sketch below).
Ranked #32 on Anomaly Detection on One-class CIFAR-10
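The sketch referenced above illustrates a deep one-class objective in the spirit of Deep SVDD: a network maps nominal samples close to a fixed center in feature space, and the distance to that center serves as the anomaly score. The architecture, the handling of the center, and the constraints that prevent a trivial collapsed solution are simplified assumptions.

```python
import torch
import torch.nn as nn

# One-class objective in the spirit of Deep SVDD: pull representations of
# nominal samples toward a fixed center c; distance to c is the anomaly score.
# (Center initialization and anti-collapse constraints are omitted here.)
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                        nn.ReLU(), nn.Linear(128, 32, bias=False))
center = torch.randn(32)                      # fixed center c
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x = torch.randn(64, 3, 32, 32)                # stand-in for nominal data
z = encoder(x)
loss = ((z - center) ** 2).sum(dim=1).mean()  # mean squared distance to c
loss.backward()
opt.step()

scores = ((encoder(x) - center) ** 2).sum(dim=1)  # anomaly scores
```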
1 code implementation • 21 Mar 2018 • Patrick Jähnichen, Florian Wenzel, Marius Kloft, Stephan Mandt
First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs).
3 code implementations • 18 Feb 2018 • Florian Wenzel, Theo Galy-Fajou, Christian Donner, Marius Kloft, Manfred Opper
We propose a scalable stochastic variational approach to GP classification building on Polya-Gamma data augmentation and inducing points.
no code implementations • ICLR 2018 • Lucas Deecke, Robert Vandermeulen, Lukas Ruff, Stephan Mandt, Marius Kloft
Many anomaly detection methods exist that perform well on low-dimensional problems; however, there is a notable lack of effective methods for high-dimensional spaces, such as images.
3 code implementations • 18 Jul 2017 • Florian Wenzel, Theo Galy-Fajou, Matthaeus Deutsch, Marius Kloft
We propose a fast inference method for Bayesian nonlinear support vector machines that leverages stochastic variational inference and inducing points.
no code implementations • 29 Jun 2017 • Yunwen Lei, Urun Dogan, Ding-Xuan Zhou, Marius Kloft
In this paper, we study data-dependent generalization error bounds exhibiting a mild dependency on the number of classes, making them suitable for multi-class learning with a large number of label classes.
1 code implementation • 25 Nov 2016 • Maximilian Alber, Julian Zimmert, Urun Dogan, Marius Kloft
Training of one-vs.-rest SVMs can be parallelized over the number of classes in a straightforward way.
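A minimal sketch of that straightforward parallelization using scikit-learn, which fits the per-class binary SVMs as independent jobs; the paper's own distributed solver is not shown.

```python
from sklearn.datasets import make_classification
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# One-vs.-rest training fits one binary SVM per class; these independent
# problems can be fit in parallel (n_jobs=-1 uses all available cores).
X, y = make_classification(n_samples=2000, n_features=50,
                           n_informative=20, n_classes=5)
clf = OneVsRestClassifier(LinearSVC(), n_jobs=-1).fit(X, y)
print(clf.score(X, y))
```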
1 code implementation • 22 Nov 2016 • Marina M.-C. Vidovic, Nico Görnitz, Klaus-Robert Müller, Marius Kloft
MFI is general and can be applied to any arbitrary learning machine (including kernel machines and deep learning).
no code implementations • 18 Feb 2016 • Niloofar Yousefi, Yunwen Lei, Marius Kloft, Mansooreh Mollaghasemi, Georgios Anagnostopoulos
We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), using which we establish sharp excess risk bounds for MTL in terms of distribution- and data-dependent versions of the Local Rademacher Complexity (LRC).
no code implementations • 16 Jul 2015 • Stephan Mandt, Florian Wenzel, Shinichi Nakajima, John P. Cunningham, Christoph Lippert, Marius Kloft
Formulated as models for linear regression, LMMs have been restricted to continuous phenotypes.
no code implementations • 30 Jun 2015 • Christian Widmer, Marius Kloft, Vipin T Sreedharan, Gunnar Rätsch
We present a general regularization-based framework for multi-task learning (MTL), in which the similarity between tasks can be learned or refined using $\ell_p$-norm multiple kernel learning (MKL).
no code implementations • NeurIPS 2015 • Yunwen Lei, Ürün Dogan, Alexander Binder, Marius Kloft
This paper studies the generalization performance of multi-class classification algorithms, for which we obtain, for the first time, a data-dependent generalization error bound with a logarithmic dependence on the class size, substantially improving the state-of-the-art linear dependence in the existing data-dependent generalization analysis.
no code implementations • 14 Jun 2015 • Yunwen Lei, Alexander Binder, Ürün Dogan, Marius Kloft
We propose a localized approach to multiple kernel learning that can be formulated as a convex optimization problem over a given cluster structure.
no code implementations • 14 Apr 2015 • Julia E. Vogt, Marius Kloft, Stefan Stark, Sudhir S. Raman, Sandhya Prabhakaran, Volker Roth, Gunnar Rätsch
We present a novel probabilistic clustering model for objects that are represented via pairwise distances and observed at different time points.
no code implementations • 26 Nov 2014 • Ilya Tolstikhin, Gilles Blanchard, Marius Kloft
We show two novel concentration inequalities for suprema of empirical processes when sampling without replacement, which both take the variance of the functions into account.
no code implementations • NeurIPS 2013 • Corinna Cortes, Marius Kloft, Mehryar Mohri
We use the notion of local Rademacher complexity to design new algorithms for learning kernels.
no code implementations • NeurIPS 2011 • Marius Kloft, Gilles Blanchard
We derive an upper bound on the local Rademacher complexity of $\ell_p$-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches.
no code implementations • NeurIPS 2009 • Marius Kloft, Ulf Brefeld, Pavel Laskov, Klaus-Robert Müller, Alexander Zien, Sören Sonnenburg
Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations and hence support interpretability.