To ensure undisrupted business operations, large Internet companies need to closely monitor various KPIs (e.g., page views, number of online users, and number of orders) of their Web applications, to accurately detect anomalies and trigger timely troubleshooting or mitigation.
Obtaining models that capture imaging markers relevant for disease progression and treatment monitoring is challenging.
In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection.
To calculate the TPR in the objective function, we treat the set of anomalous sounds as the complement of the set of normal sounds, and simulate anomalous sounds using a rejection sampling algorithm.
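The complement-set idea above can be made concrete with a minimal numpy sketch (not the paper's implementation): fit a simple Gaussian model to normal-sound features, then rejection-sample simulated anomalies by drawing candidates from a broad proposal and keeping only those that are unlikely under the normal model. The 1-D feature space, the Gaussian model, and the threshold value are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "normal sound" feature model: a Gaussian fit to normal data.
normal_feats = rng.normal(loc=0.0, scale=1.0, size=5000)
mu, sigma = normal_feats.mean(), normal_feats.std()

def normal_pdf(x, mu, sigma):
    # Density under the fitted normal-sound model.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def sample_simulated_anomalies(n, threshold=0.05):
    """Rejection sampling: draw from a broad proposal distribution and
    keep only candidates that are unlikely under the normal model,
    i.e. samples from (an approximation of) the complement set."""
    out = []
    while len(out) < n:
        cand = rng.uniform(-8.0, 8.0, size=1000)          # broad proposal
        keep = cand[normal_pdf(cand, mu, sigma) < threshold]
        out.extend(keep.tolist())
    return np.array(out[:n])

anomalies = sample_simulated_anomalies(100)
```

Every accepted sample has low density under the normal model, so the simulated set stands in for "anomalous sounds" when estimating the TPR.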
At the test stage, the learned memory is fixed, and the reconstruction is obtained from a few selected memory records of the normal data.
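A minimal numpy sketch of this test-time behavior (a stand-in for learned memory addressing, not the actual network): the memory is a frozen matrix of prototype records, and each input is reconstructed as a softmax-weighted combination of its few nearest records, so inputs far from all normal prototypes reconstruct poorly. The memory size, dimensions, and weighting scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical memory of K prototype records learned from normal data,
# frozen (fixed) at the test stage.
K, D = 16, 8
memory = rng.normal(size=(K, D))

def reconstruct(x, memory, top_k=3):
    """Reconstruct x from its top_k closest memory records, weighted by
    a softmax over negative distances -- a simple stand-in for learned
    memory addressing."""
    d = np.linalg.norm(memory - x, axis=1)
    idx = np.argsort(d)[:top_k]            # the few selected records
    w = np.exp(-d[idx])
    w /= w.sum()                           # softmax weights
    return w @ memory[idx]

x_normal = memory[3] + 0.05 * rng.normal(size=D)   # close to a memory record
x_anom = 5.0 * rng.normal(size=D)                  # far from all records

err_normal = np.linalg.norm(x_normal - reconstruct(x_normal, memory))
err_anom = np.linalg.norm(x_anom - reconstruct(x_anom, memory))
```

Because the reconstruction is confined to combinations of normal-data records, the anomalous input incurs a much larger reconstruction error, which is what makes the error usable as an anomaly score.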
By contrast, we introduce an unsupervised anomaly detection model, trained only on the normal (non-anomalous, plentiful) samples, that learns the normality distribution of the domain and hence detects abnormality as deviation from this model.
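The train-on-normal-only recipe can be sketched with a deliberately simple normality model (a Gaussian scored by Mahalanobis distance, chosen here for brevity; the paper's model is a deep network): fit the model on normal samples alone, pick a threshold from the normal scores themselves, and flag test points whose deviation exceeds it. All data, dimensions, and the percentile choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Train ONLY on plentiful normal samples -- no anomaly labels needed.
X_normal = rng.normal(loc=0.0, scale=1.0, size=(2000, 4))
mu = X_normal.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_normal, rowvar=False))

def anomaly_score(x):
    """Mahalanobis distance from the learned normality model:
    larger means further from the training (normal) distribution."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Threshold chosen from normal data alone, e.g. the 99th percentile of
# normal scores -- no anomalous examples are ever seen in training.
scores = np.array([anomaly_score(x) for x in X_normal])
threshold = np.quantile(scores, 0.99)

def is_anomalous(x):
    return anomaly_score(x) > threshold
```

Usage: `is_anomalous(np.zeros(4))` stays below the threshold, while a point like `np.full(4, 6.0)`, far outside the normal distribution, exceeds it.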
To the best of our knowledge, our method is the first data augmentation technique focused on improving performance in unsupervised anomaly detection.
To the best of our knowledge, this is the first work to study the interpretability of deep learning in anomaly detection.
Our approach introduces new regularizers to a classical autoencoder (AE) and a variational AE, which force normal data into a very tight region centered at the origin, within the non-saturating area of the bottleneck unit activations.
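The effect of such a regularizer can be illustrated with a toy numpy loss (a sketch, not the paper's exact objective): a standard reconstruction term plus a penalty on the squared norm of the bottleneck codes, which pulls normal data toward the origin, where a tanh bottleneck is non-saturating. The tiny linear encoder/decoder, the tanh choice, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear "autoencoder" weights (hypothetical shapes, untrained).
D, H = 6, 2
W_enc = rng.normal(scale=0.3, size=(D, H))
W_dec = rng.normal(scale=0.3, size=(H, D))

def tanh_encode(X):
    # Bottleneck activation; near the origin, tanh is non-saturating.
    return np.tanh(X @ W_enc)

def regularized_loss(X, lam=0.1):
    """Classical AE reconstruction loss plus a regularizer that pulls
    the bottleneck codes z of (normal) training data into a tight
    region around the origin."""
    Z = tanh_encode(X)
    X_hat = Z @ W_dec
    recon = np.mean(np.sum((X - X_hat) ** 2, axis=1))
    tightness = np.mean(np.sum(Z ** 2, axis=1))   # ||z||^2 penalty
    return recon + lam * tightness

X = rng.normal(size=(32, D))
loss = regularized_loss(X)
```

Minimizing `tightness` during training concentrates normal codes near the origin, so at test time anomalous inputs, whose codes land outside that tight region, are separable by code norm or reconstruction error.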
Subsequently, given the signature matrices, a convolutional encoder is employed to encode the inter-sensor (time-series) correlations, and an attention-based Convolutional Long Short-Term Memory (ConvLSTM) network is developed to capture the temporal patterns.
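The signature matrices that feed this encoder can be sketched in numpy (the synthetic data, window lengths, and scaling are illustrative assumptions, not the paper's configuration): each matrix holds pairwise inner products of the sensor series over a recent window, capturing inter-sensor correlations at one time scale; stacking several window lengths gives the multi-scale input.

```python
import numpy as np

rng = np.random.default_rng(4)

# Multivariate time series: n sensors over T timesteps (synthetic).
n, T = 5, 200
X = rng.normal(size=(n, T))

def signature_matrix(X, t, w):
    """Signature matrix at time t: pairwise inner products of the n
    sensor series over a window of length w, scaled by w -- a
    correlation-style encoding of inter-sensor relationships."""
    seg = X[:, t - w:t]          # (n, w) window ending at t
    return seg @ seg.T / w       # (n, n) inter-sensor correlations

# Signature matrices at several scales form the convolutional encoder's input.
mats = [signature_matrix(X, t=T, w=w) for w in (10, 30, 60)]
```

Each matrix is symmetric and n x n regardless of the window length, which is what lets a convolutional encoder consume the multi-scale stack as channels of one image-like tensor.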