Unsupervised Anomaly Detection
166 papers with code • 15 benchmarks • 23 datasets
The objective of Unsupervised Anomaly Detection is to detect previously unseen rare objects or events without any prior knowledge about them. The only information available is that the percentage of anomalies in the dataset is small, usually less than 1%. Since anomalies are rare and unknown to the user at training time, anomaly detection in most cases boils down to modelling the normal data distribution and defining a measure in that space that classifies samples as anomalous or normal. In high-dimensional data such as images, distances in the original space quickly lose descriptive power (the curse of dimensionality), so a mapping to a more suitable space is required.
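The "model the normal distribution, then measure distance in that space" recipe can be sketched in a few lines. This is a minimal illustration with hypothetical data, assuming a Gaussian model of the normal samples and Mahalanobis distance as the anomaly score; real systems replace both with learned models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 500 "normal" samples from a 2-D Gaussian,
# plus one far-away anomaly appended at the end (index 500).
normal = rng.normal(0.0, 1.0, size=(500, 2))
data = np.vstack([normal, [[8.0, 8.0]]])

# Model the normal data distribution with its mean and covariance
# (the data is assumed to be overwhelmingly normal, anomalies < 1%).
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)
cov_inv = np.linalg.inv(cov)

# Score each sample by its Mahalanobis distance to the fitted model;
# the injected outlier receives by far the highest score.
diff = data - mu
scores = np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))
print(int(np.argmax(scores)))  # 500
```

A threshold on the score (e.g. a high quantile of the training scores) then turns the ranking into an anomalous/normal decision.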
Most implemented papers
TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks
However, detecting anomalies in time series data is particularly challenging because anomalies are only vaguely defined and the data frequently lack labels and exhibit highly complex temporal correlations.
FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows
However, current methods cannot effectively map image features to a tractable base distribution, and they ignore the relationship between local and global features, which is important for identifying anomalies.
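The core flow idea can be illustrated with the simplest possible normalizing flow: a per-dimension affine map to a standard-normal base distribution, scored by negative log-likelihood. This sketch uses hypothetical "features" and a hand-fitted affine transform; FastFlow itself learns a deep 2D flow over CNN feature maps.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical features: normal samples cluster near (5, -3);
# the last sample (index 300) is far off-distribution.
feats = np.vstack([rng.normal([5.0, -3.0], 1.0, size=(300, 2)),
                   [[0.0, 4.0]]])

# Simplest normalizing flow: affine map z = (x - mu) / sigma,
# with log|det dz/dx| = -sum(log sigma).
mu = feats.mean(axis=0)
sigma = feats.std(axis=0)
z = (feats - mu) / sigma

# Negative log-likelihood under the standard-normal base distribution:
#   -log p(x) = 0.5*||z||^2 + (d/2)*log(2*pi) + sum(log sigma)
d = feats.shape[1]
nll = 0.5 * (z ** 2).sum(axis=1) + 0.5 * d * np.log(2 * np.pi) \
      + np.log(sigma).sum()

# Higher NLL = more anomalous; the off-distribution sample wins.
print(int(np.argmax(nll)))  # 300
```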
Anomaly Detection via Reverse Distillation from One-Class Embedding
Knowledge distillation (KD) achieves promising results on the challenging problem of unsupervised anomaly detection (AD). The representation discrepancy of anomalies in the teacher-student (T-S) model provides essential evidence for AD.
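The T-S discrepancy signal can be sketched numerically. The "teacher" and "student" features below are simulated stand-ins (the student is assumed to mimic the teacher well only on normal inputs); the actual method trains a student decoder from a one-class teacher embedding.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical embeddings for 100 samples: the student matches the
# teacher closely on normal data but fails on the anomaly (last row).
teacher_feats = rng.normal(size=(100, 8))
residual = rng.normal(scale=0.05, size=(100, 8))
residual[-1] += 2.0  # large representation discrepancy on the anomaly
student_feats = teacher_feats + residual

# Anomaly score: per-sample L2 discrepancy between T-S embeddings.
scores = np.linalg.norm(teacher_feats - student_feats, axis=1)
print(int(np.argmax(scores)))  # 99
```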
Probabilistic Autoencoder
The PAE is fast and easy to train and achieves small reconstruction errors, high sample quality, and good performance in downstream tasks.
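Reconstruction error as an anomaly score can be demonstrated with a linear "autoencoder": projection onto the top principal component, standing in for the PAE's learned encoder/decoder. The data here is hypothetical, lying near a 1-D line with one off-manifold sample.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data near the line y = 2x, plus one sample (index 200)
# far off that manifold.
t = rng.normal(size=200)
data = np.column_stack([t, 2.0 * t]) + rng.normal(scale=0.05, size=(200, 2))
data = np.vstack([data, [[3.0, -6.0]]])

# Linear autoencoder via SVD: encode to the top principal component,
# then decode back to 2-D.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
codes = centered @ vt[0]            # encode to 1-D
recon = np.outer(codes, vt[0])      # decode back to 2-D

# Anomaly score: per-sample reconstruction error; off-manifold
# samples reconstruct poorly.
errors = np.linalg.norm(centered - recon, axis=1)
print(int(np.argmax(errors)))  # 200
```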
Graph-Augmented Normalizing Flows for Anomaly Detection of Multiple Time Series
Anomaly detection is a widely studied task for a broad variety of data types; among them, multiple time series appear frequently in applications, including for example, power grids and traffic networks.
DeepAnT: A Deep Learning Approach for Unsupervised Anomaly Detection in Time Series
In contrast to the anomaly detection methods where anomalies are learned, DeepAnT uses unlabeled data to capture and learn the data distribution that is used to forecast the normal behavior of a time series.
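The forecast-then-score scheme can be sketched with a trivial forecaster. A moving-average predictor stands in for DeepAnT's CNN forecaster here; the anomaly score is simply the absolute error between forecast and observation.

```python
import numpy as np

# Hypothetical series: a clean sine wave with a spike injected at t=150.
t = np.arange(300)
series = np.sin(2 * np.pi * t / 50.0)
series[150] += 5.0

# Stand-in forecaster: predict the next value as the mean of the last
# w observations (DeepAnT learns this forecast with a CNN instead).
w = 5
preds = np.array([series[i - w:i].mean() for i in range(w, len(series))])

# Anomaly score: absolute forecast error at each time step; the spike
# is where the forecast of "normal behavior" fails most.
errors = np.abs(series[w:] - preds)
print(int(np.argmax(errors)) + w)  # 150
```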
f-AnoGAN: Fast Unsupervised Anomaly Detection with Generative Adversarial Networks
While supervised learning yields good results when expert-labeled training data is available, the visual variability, and thus the vocabulary of findings that can be detected and exploited, is limited to the annotated lesions.
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
Self-supervision provides effective representations for downstream tasks without requiring labels.
Deep Weakly-supervised Anomaly Detection
To detect both seen and unseen anomalies, we introduce a novel deep weakly-supervised approach, the Pairwise Relation prediction Network (PReNet), which learns pairwise relation features and anomaly scores by predicting the relation of any two randomly sampled training instances. The pairwise relation can be anomaly-anomaly, anomaly-unlabeled, or unlabeled-unlabeled.
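The pair-sampling step can be sketched as follows. The index sets and the integer relation labels (0/1/2) are illustrative choices, not PReNet's actual implementation; the paper trains a network on these pairs to regress relation-based anomaly scores.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical weakly-supervised setup: a few labeled anomalies and a
# large unlabeled pool (mostly normal, possibly contaminated).
anomaly_idx = np.array([0, 1, 2])
unlabeled_idx = np.arange(3, 100)

def sample_pair():
    """Sample one training pair and its relation label:
    2 = anomaly-anomaly, 1 = anomaly-unlabeled, 0 = unlabeled-unlabeled."""
    kind = int(rng.integers(3))
    if kind == 2:
        i, j = rng.choice(anomaly_idx, size=2, replace=False)
    elif kind == 1:
        i = rng.choice(anomaly_idx)
        j = rng.choice(unlabeled_idx)
    else:
        i, j = rng.choice(unlabeled_idx, size=2, replace=False)
    return int(i), int(j), kind

pairs = [sample_pair() for _ in range(1000)]
# All three relation types appear among the sampled pairs.
print(sorted({k for _, _, k in pairs}))  # [0, 1, 2]
```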
Uninformed Students: Student-Teacher Anomaly Detection with Discriminative Latent Embeddings
Our experiments demonstrate improvements over state-of-the-art methods on a number of real-world datasets, including the recently introduced MVTec Anomaly Detection dataset that was specifically designed to benchmark anomaly segmentation algorithms.