Deep Isolation Forest for Anomaly Detection

14 Jun 2022  ·  Hongzuo Xu, Guansong Pang, Yijie Wang, Yongjun Wang ·

Isolation forest (iForest) has emerged as arguably the most popular anomaly detector in recent years due to its general effectiveness across different benchmarks and its strong scalability. Nevertheless, its linear, axis-parallel isolation method often leads to (i) failure in detecting hard anomalies that are difficult to isolate in high-dimensional or non-linearly separable data spaces, and (ii) a notorious algorithmic bias that assigns unexpectedly low anomaly scores to artefact regions. These issues contribute to high false-negative errors. Several iForest extensions have been introduced, but they essentially still employ shallow, linear data partition, restricting their power in isolating true anomalies. Therefore, this paper proposes deep isolation forest. We introduce a new representation scheme that utilises casually initialised neural networks to map original data into random representation ensembles, where random axis-parallel cuts are subsequently applied to perform the data partition. This representation scheme facilitates high freedom of the partition in the original data space (equivalent to non-linear partition on subspaces of varying sizes), encouraging a unique synergy between random representations and random partition-based isolation. Extensive experiments show that our model achieves significant improvement over state-of-the-art isolation-based methods and deep detectors on tabular, graph and time-series datasets; our model also inherits the desired scalability of iForest.
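The core idea above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): each "representation" is produced by an untrained, randomly initialised projection with a ReLU non-linearity, a standard scikit-learn `IsolationForest` performs the axis-parallel cuts in that representation space, and scores are averaged over the ensemble. The function name `dif_scores`, the single-layer network, and all hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def dif_scores(X, n_reps=5, hidden_dim=16, seed=0):
    """Hypothetical sketch of the deep isolation forest idea:
    map data through randomly initialised (never trained) networks,
    run a standard iForest on each random representation,
    and average the resulting anomaly scores."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X))
    for r in range(n_reps):
        # one casually initialised "network": a random linear layer + ReLU
        W = rng.standard_normal((X.shape[1], hidden_dim)) / np.sqrt(X.shape[1])
        Z = np.maximum(X @ W, 0.0)  # weights are fixed at random init
        forest = IsolationForest(n_estimators=50, random_state=r)
        forest.fit(Z)
        scores += forest.score_samples(Z)  # lower score = more anomalous
    return scores / n_reps

# Toy usage: 200 Gaussian inliers plus one far-away outlier.
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((200, 8)), np.full((1, 8), 10.0)])
s = dif_scores(X)
print(int(np.argmin(s)))  # index of the most anomalous point
```

Because the axis-parallel cuts happen in the random representation spaces rather than the original features, the effective partition of the original space is non-linear, which is the synergy the abstract refers to.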


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Anomaly Detection | Forest CoverType | DIF | AUC | 0.972 | #1 |
| Anomaly Detection | Kaggle-Credit Card Fraud Dataset | DIF | AUC | 0.953 | #1 |
| Anomaly Detection | NB15-Analysis | DIF | AUC | 0.931 | #1 |
| Anomaly Detection | NB15-Backdoor | DIF | AUC | 0.918 | #1 |
| Anomaly Detection | NB15-DoS | DIF | AUC | 0.932 | #1 |
