Search Results for author: Alexander Gepperth

Found 18 papers, 1 paper with code

Large-scale gradient-based training of Mixtures of Factor Analyzers

no code implementations • 26 Aug 2023 • Alexander Gepperth

The Mixture of Factor Analyzers (MFA) model is an important extension of GMMs that allows smooth interpolation between diagonal and full covariance matrices (CMs), controlled by the number of factor loadings $l$.
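
In the standard textbook parameterization of MFA (generic notation, not quoted from the paper), each component $k$ has covariance

$\Sigma_k = \Lambda_k \Lambda_k^\top + D_k$,

where $\Lambda_k \in \mathbb{R}^{d \times l}$ holds the $l$ factor loadings and $D_k$ is diagonal: $l = 0$ recovers a purely diagonal CM, while increasing $l$ moves toward a full CM.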

Tasks: LEMMA, Outlier Detection

Adiabatic replay for continual learning

no code implementations • 23 Mar 2023 • Alexander Krawczyk, Alexander Gepperth

In this proof-of-concept study, we propose a replay-based CL strategy that we term adiabatic replay (AR), which derives its efficiency from the (reasonable) assumption that each new learning phase is adiabatic, i.e., represents only a small addition to existing knowledge.
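
As a rough illustration of the adiabatic assumption (a minimal sketch with hypothetical names such as generator.sample and model.train_step, not the authors' implementation), each mini-batch can mix a small slice of new data with replayed samples:

```python
# Hypothetical sketch: under the adiabatic assumption, new data is only a
# small addition, so most of each mini-batch can be replay from a generator
# trained on previous tasks.
import numpy as np

def adiabatic_replay_phase(model, generator, new_x, new_y,
                           replay_ratio=0.9, n_steps=100, batch_size=64):
    n_replay = int(batch_size * replay_ratio)  # bulk of the batch: replay
    n_new = batch_size - n_replay              # small "adiabatic" addition
    for _ in range(n_steps):
        rx, ry = generator.sample(n_replay)    # replayed samples + labels
        idx = np.random.randint(0, len(new_x), size=n_new)
        bx = np.concatenate([rx, new_x[idx]])
        by = np.concatenate([ry, new_y[idx]])
        model.train_step(bx, by)               # one update on the mixed batch
```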

Tasks: Continual Learning

A Framework for the Automated Parameterization of a Sensorless Bearing Fault Detection Pipeline

no code implementations • 15 Mar 2023 • Tobias Wagner, Alexander Gepperth, Elmar Engels

The present work contributes to research on fault detection for rotating machinery in the following ways: (1) reducing human-induced bias in the data-science process while still incorporating expert and task-related knowledge, resulting in a generic search approach; (2) tackling the bearing fault detection task without external sensors (sensorless); (3) learning a domain-robust fault detection pipeline applicable to varying motor operating parameters without re-parameterization or fine-tuning; (4) investigating deliberately large working-condition discrepancies to determine the pipeline's limitations regarding the abstraction of motor parameters and pipeline hyperparameters.

Tasks: Fault Detection, Hyperparameter Optimization

Beyond Supervised Continual Learning: a Review

no code implementations • 30 Aug 2022 • Benedikt Bagus, Alexander Gepperth, Timothée Lesort

Continual Learning (CL, sometimes also termed incremental learning) is a flavor of machine learning in which the usual assumption of a stationary data distribution is relaxed or omitted.

Tasks: Continual Learning, Incremental Learning (+1 more)

A Study of Continual Learning Methods for Q-Learning

no code implementations • 8 Jun 2022 • Benedikt Bagus, Alexander Gepperth

We present an empirical study on the use of continual learning (CL) methods in a reinforcement learning (RL) scenario, which, to the best of our knowledge, has not been described before.

Tasks: Continual Learning, Q-Learning (+1 more)

A new perspective on probabilistic image modeling

no code implementations • 21 Mar 2022 • Alexander Gepperth

We present the Deep Convolutional Gaussian Mixture Model (DCGMM), a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.

Tasks: Density Estimation

An Investigation of Replay-based Approaches for Continual Learning

no code implementations • 15 Aug 2021 • Benedikt Bagus, Alexander Gepperth

We find that the impact of sample selection increases when fewer samples are stored.

Tasks: Continual Learning

Continual Learning with Fully Probabilistic Models

no code implementations • 19 Apr 2021 • Benedikt Pfülb, Alexander Gepperth, Benedikt Bagus

As a concrete realization of generative continual learning, we propose Gaussian Mixture Replay (GMR).
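
One minimal way to mimic the GMR idea with off-the-shelf tools (an illustrative sketch built on scikit-learn, not the paper's own SGD-trained GMMs) is to fit one GaussianMixture per class and replay by sampling from it:

```python
# Illustrative Gaussian-mixture-replay sketch using scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMReplay:
    def __init__(self, n_components=10):
        self.n_components = n_components
        self.gmms = {}                       # one GMM per observed class

    def update(self, x, y):
        # Re-fit a mixture on the data of each class seen in this task.
        for c in np.unique(y):
            gmm = GaussianMixture(n_components=self.n_components)
            self.gmms[int(c)] = gmm.fit(x[y == c])

    def sample(self, n_per_class):
        # Generate labeled pseudo-samples for rehearsal.
        xs, ys = [], []
        for c, gmm in self.gmms.items():
            s, _ = gmm.sample(n_per_class)   # second value: component ids
            xs.append(s)
            ys.append(np.full(len(s), c))
        return np.concatenate(xs), np.concatenate(ys)
```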

Tasks: Boundary Detection, Class Incremental Learning (+3 more)

Image Modeling with Deep Convolutional Gaussian Mixture Models

no code implementations • 19 Apr 2021 • Alexander Gepperth, Benedikt Pfülb

For generating sharp images with DCGMMs, we introduce a new gradient-based technique for sampling through non-invertible operations like convolution and pooling.
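
The general idea of gradient-based sampling, starting from noise and ascending the model's log-likelihood with respect to the input, which remains possible even when the forward pass is non-invertible, can be sketched as follows (a generic PyTorch illustration, not the paper's exact procedure):

```python
# Generic sketch: sample by gradient ascent on log p(x) w.r.t. the input x.
import torch

def sample_by_gradient_ascent(log_likelihood_fn, shape, n_steps=200, lr=0.1):
    x = torch.randn(shape, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = -log_likelihood_fn(x).mean()     # maximize log-likelihood
        loss.backward()
        opt.step()
    return x.detach()
```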

Tasks: Clustering, Outlier Detection

A Rigorous Link Between Self-Organizing Maps and Gaussian Mixture Models

no code implementations • 24 Sep 2020 • Alexander Gepperth, Benedikt Pfülb

This work presents a mathematical treatment of the relation between Self-Organizing Maps (SOMs) and Gaussian Mixture Models (GMMs).
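
For orientation (standard textbook forms, not notation quoted from the paper): a SOM updates all prototypes toward an input $x$ via $\Delta w_k = \epsilon\, h(k, k^*)\,(x - w_k)$, where $k^* = \arg\min_k \lVert x - w_k \rVert$ and $h$ is a neighborhood function, while SGD on a GMM log-likelihood moves each centroid by $\gamma_k(x)\,(x - \mu_k)$ (up to covariance scaling), with responsibilities $\gamma_k(x) = \pi_k \mathcal{N}(x \mid \mu_k, \Sigma_k) / \sum_j \pi_j \mathcal{N}(x \mid \mu_j, \Sigma_j)$; one way to see the connection is to relate the neighborhood function to (smoothed) responsibilities.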


Gradient-based training of Gaussian Mixture Models for High-Dimensional Streaming Data

1 code implementation • 18 Dec 2019 • Alexander Gepperth, Benedikt Pfülb

We present an approach for efficiently training Gaussian Mixture Models (GMMs) by Stochastic Gradient Descent (SGD) on non-stationary, high-dimensional streaming data.
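
A bare-bones version of SGD-trained GMMs (diagonal covariances, PyTorch; a minimal sketch that omits the annealing and regularization machinery such a method needs in practice) could look like this:

```python
# Minimal diagonal-covariance GMM trained by SGD on streaming mini-batches.
import math
import torch

class DiagonalGMM(torch.nn.Module):
    def __init__(self, n_components, dim):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.randn(n_components, dim))
        self.log_var = torch.nn.Parameter(torch.zeros(n_components, dim))
        self.logit_pi = torch.nn.Parameter(torch.zeros(n_components))

    def log_prob(self, x):  # x: (batch, dim)
        log_pi = torch.log_softmax(self.logit_pi, dim=0)
        diff = x[:, None, :] - self.mu[None, :, :]
        comp = -0.5 * (self.log_var + diff ** 2 / self.log_var.exp()
                       + math.log(2 * math.pi)).sum(-1)
        return torch.logsumexp(log_pi + comp, dim=1)  # mixture log-density

def train_gmm_sgd(gmm, data_stream, lr=0.01):
    # data_stream: any iterable yielding (batch, dim) float tensors.
    opt = torch.optim.SGD(gmm.parameters(), lr=lr)
    for batch in data_stream:
        opt.zero_grad()
        loss = -gmm.log_prob(batch).mean()  # negative log-likelihood
        loss.backward()
        opt.step()
```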

Gradient-based training of Gaussian Mixture Models in High-Dimensional Spaces

no code implementations • 25 Sep 2019 • Alexander Gepperth, Benedikt Pfülb

We present an approach for efficiently training Gaussian Mixture Models (GMMs) with Stochastic Gradient Descent (SGD) on large amounts of high-dimensional data (e.g., images).

Tasks: Vocal Bursts Intensity Prediction

Marginal Replay vs Conditional Replay for Continual Learning

no code implementations • 29 Oct 2018 • Timothée Lesort, Alexander Gepperth, Andrei Stoian, David Filliat

We present a new replay-based method of continual classification learning that we term "conditional replay" which generates samples and labels together by sampling from a distribution conditioned on the class.
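
Schematically, the contrast with marginal replay can be sketched as follows (generator and classifier are hypothetical stand-ins with the indicated interfaces, not the paper's code):

```python
# Marginal replay: sample inputs only, then label them after the fact.
# Conditional replay: sample inputs and labels together from p(x | y = c).
import numpy as np

def marginal_replay(generator, classifier, n):
    x = generator.sample(n)            # draw from the marginal p(x)
    y = classifier.predict(x)          # labels assigned post hoc
    return x, y

def conditional_replay(generator, classes, n_per_class):
    xs, ys = [], []
    for c in classes:
        xs.append(generator.sample_given_class(c, n_per_class))
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)
```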

Tasks: Classification, Continual Learning (+1 more)

A pragmatic approach to multi-class classification

no code implementations • 6 Jan 2016 • Thomas Kopinski, Stéphane Magand, Uwe Handmann, Alexander Gepperth

We present a novel hierarchical approach to multi-class classification which is generic in that it can be applied to different classification models (e.g., support vector machines, perceptrons) and which makes no explicit assumptions about the probabilistic structure of the problem, as is usually done in multi-class classification.

Tasks: Classification, General Classification (+2 more)

A simple technique for improving multi-class classification with neural networks

no code implementations • 6 Jan 2016 • Thomas Kopinski, Alexander Gepperth, Uwe Handmann

We present a novel method to perform multi-class pattern classification with neural networks and test it on a challenging 3D hand gesture recognition problem.

Tasks: Classification, General Classification (+3 more)
