Semi-supervised Anomaly Detection
28 papers with code • 1 benchmark • 2 datasets
Latest papers
NNG-Mix: Improving Semi-supervised Anomaly Detection with Pseudo-anomaly Generation
While anomaly detection (AD) is typically treated as an unsupervised learning task due to the high cost of label annotation, it is more practical to assume access to a small set of labeled anomaly samples from domain experts, as is the case in semi-supervised anomaly detection.
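The core idea of augmenting the few labeled anomalies with generated pseudo-anomalies can be sketched as a simple interpolation between a labeled anomaly and one of its nearest unlabeled neighbors. The exact NNG-Mix procedure differs; the function name, neighborhood rule, and mixing weights below are illustrative assumptions, not the paper's algorithm:

```python
import random

def generate_pseudo_anomalies(anomalies, unlabeled, k=3, alpha=0.5,
                              n_new=10, seed=0):
    """Mixup-style pseudo-anomaly generation (illustrative sketch).

    Each pseudo-anomaly interpolates a labeled anomaly with one of its
    k nearest unlabeled samples; the mixing weight stays above 0.5 so
    the result remains closer to the anomaly than to the unlabeled point.
    """
    rng = random.Random(seed)
    pseudo = []
    for _ in range(n_new):
        a = rng.choice(anomalies)
        # k nearest unlabeled neighbours of the chosen anomaly (squared L2)
        neighbours = sorted(
            unlabeled,
            key=lambda u: sum((ai - ui) ** 2 for ai, ui in zip(a, u)),
        )[:k]
        u = rng.choice(neighbours)
        lam = 0.5 + 0.5 * rng.random() * alpha  # lam in [0.5, 0.5 + 0.5*alpha]
        pseudo.append(tuple(lam * ai + (1 - lam) * ui
                            for ai, ui in zip(a, u)))
    return pseudo
```

Generated points can then be added to the labeled anomaly set when training a semi-supervised detector.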
Label-based Graph Augmentation with Metapath for Graph Anomaly Detection
To efficiently exploit context information from the metapath-based anomaly subgraphs, we present a new framework, Metapath-based Graph Anomaly Detection (MGAD), incorporating GCN layers in both the dual encoders and decoders to efficiently propagate context information between abnormal and normal nodes.
ImbSAM: A Closer Look at Sharpness-Aware Minimization in Class-Imbalanced Recognition
To overcome this bottleneck, we leverage class priors to restrict the generalization scope of the class-agnostic SAM and propose a class-aware smoothness optimization algorithm named Imbalanced-SAM (ImbSAM).
AnoOnly: Semi-Supervised Anomaly Detection with the Only Loss on Anomalies
Unlike existing SSAD methods that resort to strict loss supervision, AnoOnly suspends it and introduces a form of weak supervision for normal data.
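The "loss on anomalies only" idea can be sketched as a training objective where strict cross-entropy supervision applies solely to the labeled anomalies, with at most a weakly weighted term for normal data. AnoOnly's actual weak supervision is not reproduced here; the loss form and parameter names below are assumptions for illustration:

```python
import math

def anomaly_only_loss(scores_anom, scores_norm, weak_weight=0.0):
    """Sketch of an anomalies-only objective (assumed form).

    scores_* are detector outputs in (0, 1), higher = more anomalous.
    Labeled anomalies get strict cross-entropy; normal samples
    contribute only through an optional, down-weighted term.
    """
    eps = 1e-12
    # strict supervision: push anomaly scores toward 1
    loss = -sum(math.log(s + eps) for s in scores_anom) / max(len(scores_anom), 1)
    if weak_weight > 0.0:
        # weak supervision: gently push normal scores toward 0
        loss += weak_weight * -sum(math.log(1.0 - s + eps)
                                   for s in scores_norm) / max(len(scores_norm), 1)
    return loss
```

With `weak_weight=0.0` the normal data exerts no loss at all, which is the "only loss on anomalies" extreme.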
On Diffusion Modeling for Anomaly Detection
By simplifying DDPM in application to anomaly detection, we are naturally led to an alternative approach called Diffusion Time Estimation (DTE).
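The intuition behind estimating a diffusion time as an anomaly score can be sketched non-parametrically: under a schedule that adds more noise at larger timesteps, a sample's expected squared distance to the clean data grows with the timestep, so a k-NN distance serves as a monotone proxy for the estimated diffusion time. This simplification, and the function below, are assumptions for illustration, not the DTE estimator itself:

```python
def dte_score(x, train, k=5):
    """Non-parametric proxy for Diffusion Time Estimation (assumed
    simplification). The mean squared distance to the k nearest
    training points grows monotonically with the noise level that
    would explain x, so it doubles as an anomaly score."""
    d2 = sorted(sum((xi - ti) ** 2 for xi, ti in zip(x, t)) for t in train)
    return sum(d2[:k]) / k
```

Points far from the training manifold look like heavily noised (large-timestep) samples and receive high scores.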
SAD: Semi-Supervised Anomaly Detection on Dynamic Graphs
Anomaly detection aims to distinguish abnormal instances that deviate significantly from the majority of benign ones.
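That definition, flagging instances that deviate significantly from the majority, is captured in its simplest form by a z-score rule. This is a generic illustration of the problem statement, not the SAD method for dynamic graphs:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag instances deviating significantly from the majority:
    the classic z-score rule on a 1-D sample (illustrative only)."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [i for i, v in enumerate(values) if abs(v - mu) / sd > threshold]
```

Methods like SAD replace the mean/deviation statistics with learned representations, but the goal stays the same: score how far an instance sits from the benign majority.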
EfficientAD: Accurate Visual Anomaly Detection at Millisecond-Level Latencies
We train a student network to predict the extracted features of normal, i.e., anomaly-free, training images.
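The student-teacher principle can be shown with deliberately tiny models: the student is fit to the teacher's outputs on normal data only, so the two agree on normal inputs and diverge on anomalies, and the discrepancy is the anomaly score. EfficientAD uses convolutional networks; the least-squares linear student below is an illustrative stand-in:

```python
def fit_student(teacher, normal_xs):
    """Fit a 1-D least-squares linear student to the teacher's
    features on normal data only (illustrative sketch)."""
    n = len(normal_xs)
    ys = [teacher(x) for x in normal_xs]
    mx, my = sum(normal_xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(normal_xs, ys))
    var = sum((x - mx) ** 2 for x in normal_xs)
    a = cov / var
    b = my - a * mx
    return lambda x: a * x + b

def anomaly_score(teacher, student, x):
    # score = squared teacher-student feature discrepancy
    return (teacher(x) - student(x)) ** 2
```

On the normal range the student mimics the teacher closely; far outside it, the discrepancy, and hence the score, grows quickly.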
Leveraging Contaminated Datasets to Learn Clean-Data Distribution with Purified Generative Adversarial Networks
When training on such datasets, existing GANs will learn a mixture distribution of desired and contaminated instances, rather than the distribution of the desired data only (the target distribution).
R2-AD2: Detecting Anomalies by Analysing the Raw Gradient
Neural networks follow a gradient-based learning scheme, adapting their mapping parameters by back-propagating the output loss.
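The gradient-based cue can be sketched with a model small enough that the back-propagated gradient is analytic: after training on normal data, the magnitude of a sample's loss gradient is small for normal inputs and large for anomalies. R2-AD2 analyses the raw gradients of deep networks; the one-parameter model and function names below are illustrative assumptions:

```python
def train_mean_model(normal, lr=0.1, steps=200):
    """Train a tiny model r(x) = b by gradient descent on
    mean-squared error over normal data (illustrative stand-in
    for a neural network)."""
    b = 0.0
    for _ in range(steps):
        # analytic MSE gradient over the normal batch: d/db (b - x)^2
        g = sum(2.0 * (b - x) for x in normal) / len(normal)
        b -= lr * g
    return b

def gradient_score(b, x):
    # anomaly score = magnitude of the raw per-sample loss gradient
    return abs(2.0 * (b - x))
```

Normal samples near the fitted value produce near-zero gradients; anomalies would have to "pull" the parameters far, which shows up as a large gradient magnitude.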