15 papers with code • 0 benchmarks • 1 dataset
Our architecture is composed of two deep networks, each trained by competing with the other while collaborating to understand the underlying concept of the target class, and then to classify the test samples.
We assume that training data are available only for the inlier distribution.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
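The two-stage recipe above (self-supervised representations first, a one-class classifier on top) can be sketched in a toy form. The encoder below is a fixed random projection standing in for a network pretrained with a self-supervised objective, and the one-class classifier is a Mahalanobis-distance score fit to inlier representations; both are illustrative assumptions, not the paper's exact components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (stand-in): a frozen "self-supervised" encoder. In practice
# this would be a network trained on inlier data only (e.g. with a
# contrastive or rotation-prediction objective); the fixed random
# projection below is purely illustrative.
W = rng.normal(size=(32, 8))

def encode(x):
    """Map raw 32-d inputs to 8-d representations."""
    return x @ W

# Stage 2: a simple one-class classifier on the representations,
# scoring each sample by Mahalanobis distance to the inlier mean.
train = rng.normal(loc=0.0, size=(500, 32))  # inlier training data only
Z = encode(train)
mu = Z.mean(axis=0)
cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
cov_inv = np.linalg.inv(cov)

def anomaly_score(x):
    """Higher score = further from the learned inlier distribution."""
    d = encode(x) - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

inlier_scores = anomaly_score(rng.normal(loc=0.0, size=(100, 32)))
outlier_scores = anomaly_score(rng.normal(loc=3.0, size=(100, 32)))
```

Because the classifier is fit only on inlier representations, shifted samples receive markedly higher scores, which is the essence of building a one-class detector on learned features.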
Ranked #9 on Anomaly Detection on MVTec AD
(1) We show that COVID-19-CT-CXR, when used as additional training data, is able to contribute to improved DL performance for the classification of COVID-19 and non-COVID-19 CT. (2) We collected CT images of influenza and trained a DL baseline to distinguish a diagnosis of COVID-19, influenza, or normal or other types of diseases on CT. (3) We trained an unsupervised one-class classifier from non-COVID-19 CXR and performed anomaly detection to detect COVID-19 CXR.
Several approaches have been proposed to detect OOD inputs, but the detection task is still an ongoing challenge.
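One widely used baseline among the OOD-detection approaches alluded to above is the maximum softmax probability (MSP) score: a classifier's peak softmax confidence tends to be lower on out-of-distribution inputs. The sketch below assumes precomputed logits; it illustrates the scoring rule only, not any specific paper's method.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: high for confident (likely
    in-distribution) inputs, lower for near-uniform (possibly OOD) ones."""
    return softmax(logits).max(axis=-1)

# Hypothetical logits: one confident prediction, one near-uniform.
in_dist_logits = np.array([[8.0, 0.5, 0.2]])
ood_logits = np.array([[1.1, 1.0, 0.9]])

in_score = msp_score(in_dist_logits)[0]
ood_score = msp_score(ood_logits)[0]
```

Inputs whose MSP falls below a validation-chosen threshold are flagged as OOD; the ongoing challenge is that overconfident networks can assign high MSP to OOD inputs too.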
Our experiments on eight datasets from the image and time-series domains show that our method outperforms classical OCC and few-shot classification approaches, and demonstrate its ability to learn unseen tasks from only a few normal-class samples.
Specifically, we consider the scenario in which pixels within a region of a satellite image are replaced to add or remove an object from the scene.
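The manipulation scenario described above can be illustrated with a toy splice: replace a region of one image with pixels from another, then localize it. The synthetic tiles and the crude patch-deviation detector below are assumptions for illustration, not the paper's data or model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 64x64 single-band tiles standing in for satellite imagery.
scene = rng.uniform(0.4, 0.6, size=(64, 64))  # mid-intensity background
donor = rng.uniform(0.0, 0.1, size=(64, 64))  # darker source scene

# Splice: replace a 16x16 region of the scene with donor pixels,
# simulating an object being added to or removed from the image.
r, c, s = 24, 24, 16
tampered = scene.copy()
tampered[r:r + s, c:c + s] = donor[r:r + s, c:c + s]

def patch_scores(img, patch=8):
    """Score each non-overlapping patch by how far its mean intensity
    deviates from the image-wide mean (a crude inlier model)."""
    h, w = img.shape
    mu = img.mean()
    scores = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            block = img[i * patch:(i + 1) * patch,
                        j * patch:(j + 1) * patch]
            scores[i, j] = abs(block.mean() - mu)
    return scores

scores = patch_scores(tampered)
# The highest-scoring patch should fall inside the spliced region.
pi, pj = np.unravel_index(scores.argmax(), scores.shape)
```

Real detectors model inlier image statistics far more richly, but the structure is the same: score local regions against a model of unmanipulated content and flag the outliers.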