Outlier Detection
193 papers with code • 11 benchmarks • 11 datasets
Outlier Detection is the task of identifying instances in a dataset that are considered anomalous because they differ markedly from the other instances. It is one of the core data mining tasks and is central to many applications. In security, it can be used to identify potentially threatening users; in manufacturing, it can be used to identify parts that are likely to fail.
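A minimal illustration of the idea, using only the Python standard library: the modified z-score based on the median absolute deviation (MAD) flags points that lie far from the bulk of the data. The function name and threshold are illustrative, not from any of the papers below.

```python
import statistics

def mad_outliers(values, threshold=3.5):
    """Flag points by the modified z-score (median absolute deviation).

    More robust than a plain z-score: a single extreme value does not
    inflate the scale estimate and thereby mask itself.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(x - med) for x in values)
    # 0.6745 rescales the MAD to be comparable to a standard deviation
    return [x for x in values if 0.6745 * abs(x - med) / mad > threshold]

data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0]
print(mad_outliers(data))  # -> [42.0]
```

The methods listed below replace this simple univariate score with learned models, but the goal is the same: assign each instance an outlierness score and flag the extremes.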
Libraries
Use these libraries to find Outlier Detection models and implementations.
Most implemented papers
MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III
Robust machine learning relies on access to data that can be used with standardized frameworks in important tasks and the ability to develop models whose performance can be reasonably reproduced.
Estimating Density Models with Truncation Boundaries using Score Matching
In this paper, we study parameter estimation for truncated probability densities using score matching (SM).
Learning Energy-Based Models in High-Dimensional Spaces with Multi-scale Denoising Score Matching
Recently, Song and Ermon (2019) have shown that a generative model trained by denoising score matching accomplishes excellent sample synthesis when trained with data samples corrupted with multiple levels of noise.
What Do Compressed Deep Neural Networks Forget?
Aggregate accuracy, however, conceals significant differences in how individual classes and images are impacted by model compression techniques.
XGBOD: Improving Supervised Outlier Detection with Unsupervised Representation Learning
A new semi-supervised ensemble algorithm called XGBOD (Extreme Gradient Boosting Outlier Detection) is proposed, described and demonstrated for the enhanced detection of outliers from normal observations in various practical datasets.
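The XGBOD recipe, roughly, is: run unsupervised detectors over the data, append their outlier scores to the original features as "transformed outlier scores" (TOS), and train a supervised booster on the augmented representation. A hedged sketch with scikit-learn, substituting `GradientBoostingClassifier` for XGBoost and using synthetic data; the detectors and layout are illustrative, not the paper's exact configuration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest, GradientBoostingClassifier
from sklearn.neighbors import LocalOutlierFactor

# Toy data: inliers around the origin plus a few labeled outliers far away.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(200, 2)),
               rng.normal(6, 1, size=(10, 2))])
y = np.array([0] * 200 + [1] * 10)

# Step 1: unsupervised detectors produce transformed outlier scores (TOS).
iso = IsolationForest(random_state=0).fit(X)
lof = LocalOutlierFactor().fit(X)
tos = np.column_stack([-iso.score_samples(X),        # higher = more anomalous
                       -lof.negative_outlier_factor_])

# Step 2: augment the original features with the TOS columns.
X_aug = np.hstack([X, tos])

# Step 3: fit a supervised booster on the augmented representation.
clf = GradientBoostingClassifier(random_state=0).fit(X_aug, y)
print(clf.score(X_aug, y))
```

The supervised stage can exploit both the raw features and the unsupervised scores, which is what makes the ensemble semi-supervised in character.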
Explainable outlier detection through decision tree conditioning
This work describes an outlier detection procedure (named "OutlierTree") loosely based on the GritBot software developed by RuleQuest Research. It works by evaluating and following supervised decision tree splits on variables; within each branch, 1-d confidence intervals are constructed for the target variable, and potential outliers are flagged according to these intervals.
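The per-branch idea can be sketched in plain Python: treat each value of a categorical split as one branch, build a simple interval (mean ± z·std, standing in for a proper confidence interval) for the target within that branch, and flag records outside it. Function and field names are hypothetical, not OutlierTree's API:

```python
import statistics
from collections import defaultdict

def flag_by_branch(records, split_key, target_key, z=3.0):
    """Group records by one categorical split (one branch per group) and
    flag records whose target falls outside mean +/- z*std in its group."""
    branches = defaultdict(list)
    for r in records:
        branches[r[split_key]].append(r)
    flagged = []
    for group in branches.values():
        vals = [r[target_key] for r in group]
        if len(vals) < 3:
            continue  # too few points for a meaningful interval
        mu, sd = statistics.fmean(vals), statistics.stdev(vals)
        flagged += [r for r in group if abs(r[target_key] - mu) > z * sd]
    return flagged

# A 5 mpg reading is unremarkable globally but anomalous among diesels.
cars = [{"fuel": "diesel", "mpg": m} for m in (38, 40, 39, 41, 37, 40, 39, 5)] + \
       [{"fuel": "petrol", "mpg": m} for m in (30, 31, 29, 30, 32)]
print(flag_by_branch(cars, "fuel", "mpg", z=2.0))
```

Conditioning on the split is the point: the anomaly is only visible relative to its branch, which also makes the flag explainable ("mpg of 5 is far below the diesel group's interval").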
SUOD: Toward Scalable Unsupervised Outlier Detection
In this study, we propose a three-module acceleration framework called SUOD to expedite the training and prediction with a large number of unsupervised detection models.
Explainable Deep One-Class Classification
Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space, causing anomalies to be mapped away.
A Background-Agnostic Framework with Adversarial Training for Abnormal Event Detection in Video
Following the standard formulation of abnormal event detection as outlier detection, we propose a background-agnostic framework that learns from training videos containing only normal events.
Autoencoding Under Normalization Constraints
The specific role of the normalization constraint is to ensure that the out-of-distribution (OOD) regime has a small likelihood when samples are learned using maximum likelihood.