AnyThreat: An Opportunistic Knowledge Discovery Approach to Insider Threat Detection

1 Dec 2018  ·  Diana Haidar, Mohamed Medhat Gaber, Yevgeniya Kovalchuk

Insider threat detection is receiving increasing attention from academia, industry, and governments due to the growing number of malicious insider incidents. Existing approaches to detecting insider threats share a common shortcoming: a high number of false alarms (false positives). The challenge is that such approaches must detect every anomalous behaviour belonging to a given threat. To address this shortcoming, we propose an opportunistic knowledge discovery system, named AnyThreat, with the aim of detecting any anomalous behaviour across all malicious insider threats. The AnyThreat system comprises four components. (1) A feature engineering component, which constructs community data sets from the activity logs of a group of users having the same role. (2) An oversampling component, in which we propose a novel oversampling technique named Artificial Minority Oversampling and Trapper REmoval (AMOTRE). AMOTRE first removes the minority (anomalous) instances that closely resemble normal (majority) instances in order to reduce the number of false alarms, and then synthetically oversamples the minority class while shielding the border of the majority class. (3) A class decomposition component, which clusters the instances of the majority class into subclasses to weaken the effect of the majority class without information loss. (4) A classification component, which applies a classification method to the subclasses to achieve better separation between the majority class(es) and the minority class(es). AnyThreat is evaluated on synthetic data sets generated by Carnegie Mellon University. It detects approximately 87.5% of malicious insider threats and achieves a false positive rate as low as 3.36%.
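The sketch below is a minimal, illustrative rendering of the four-stage pipeline described in the abstract, not the authors' implementation. It assumes a k-NN majority-vote check as the "resemblance" measure for trapper removal, substitutes plain SMOTE for AMOTRE's border-shielding oversampling, and picks an arbitrary subclass count and classifier; all thresholds, helper names, and the placeholder data are hypothetical.

```python
# Illustrative AnyThreat-style pipeline sketch (assumptions noted above; not the paper's code).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors
from imblearn.over_sampling import SMOTE  # stand-in for AMOTRE's oversampling step


def remove_trappers(X, y, k=5, threshold=0.8):
    """Drop minority instances whose k nearest neighbours are mostly majority,
    i.e. anomalies that closely resemble normal behaviour (false-alarm sources)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    keep = np.ones(len(y), dtype=bool)
    for i in np.where(y == 1)[0]:          # label 1 = minority (anomalous)
        neighbours = idx[i, 1:]            # skip the instance itself
        if np.mean(y[neighbours] == 0) >= threshold:
            keep[i] = False
    return X[keep], y[keep]


def decompose_majority(X, y, n_subclasses=4, seed=0):
    """Cluster the majority class into subclasses so each subclass is closer in
    size to the minority class; the minority keeps a single label after them."""
    labels = np.empty(len(y), dtype=int)
    maj = y == 0
    km = KMeans(n_clusters=n_subclasses, n_init=10, random_state=seed)
    labels[maj] = km.fit_predict(X[maj])
    labels[~maj] = n_subclasses
    return labels


# End-to-end usage on placeholder "community" data (synthetic arrays, not CMU data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 10)), rng.normal(2, 1, (25, 10))])
y = np.array([0] * 500 + [1] * 25)

X, y = remove_trappers(X, y)                               # step 2a: trapper removal
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)    # step 2b: oversampling stand-in
y_sub = decompose_majority(X_res, y_res)                   # step 3: class decomposition
clf = RandomForestClassifier(random_state=0).fit(X_res, y_sub)  # step 4: classification

# Predictions falling in the minority label are flagged as insider threats.
is_threat = clf.predict(X_res) == y_sub.max()
```

Training on the majority subclasses rather than a single majority label is what lets an off-the-shelf classifier separate the rare anomalous instances more cleanly; any multi-class classifier could replace the random forest used here for illustration.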
