Classifier Suites for Insider Threat Detection

30 Jan 2019 · David Noever

Better methods to detect insider threats need new anticipatory analytics that capture risky behavior before data loss occurs. In search of the best overall classifier, this work empirically scores 88 machine learning algorithms in 16 major families. We extract risk features from the large CERT dataset, which blends real network behavior with individual threat narratives. We discover the predictive importance of measuring employee sentiment. Among major classifier families tested on CERT, the random forest algorithms offer the best choice, with different implementations scoring over 98% accuracy. In contrast to more obscure or black-box alternatives, random forests are ensembles of many decision trees and thus offer a deep but human-readable set of detection rules (>2000 rules). We rank performance by penalizing long execution times against higher median accuracies under cross-fold validation. We treat the relative rarity of threats as a low signal-to-noise problem (< 0.02% malicious to benign activities) and train on both under-sampled and over-sampled data that is statistically balanced to identify nefarious actors.
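
The sketch below is not the paper's code; it is a minimal illustration, assuming scikit-learn, of the evaluation pattern the abstract describes: over-sample the rare malicious class to a statistically balanced training set, fit a random forest, and score median accuracy under stratified cross-fold validation. The synthetic dataset, the 0.2% minority fraction, and the 200-tree forest are illustrative assumptions standing in for the CERT-derived risk features and the paper's tuned models.

```python
# Minimal sketch (not the paper's implementation): random forest with
# over-sampling and stratified cross-validation on an imbalanced dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.utils import resample

# Synthetic stand-in for CERT risk features (logon, device, file, sentiment);
# the ~0.2% minority class here is an illustrative, not reported, ratio.
X, y = make_classification(
    n_samples=50_000, n_features=20, n_informative=10,
    weights=[0.998, 0.002], random_state=42)

# Over-sample the rare "insider threat" class to balance the training data.
# (A stricter setup would resample inside each CV fold to avoid leakage.)
X_maj, y_maj = X[y == 0], y[y == 0]
X_min, y_min = X[y == 1], y[y == 1]
X_min_up, y_min_up = resample(
    X_min, y_min, replace=True, n_samples=len(y_maj), random_state=42)
X_bal = np.vstack([X_maj, X_min_up])
y_bal = np.concatenate([y_maj, y_min_up])

# Random forest: an ensemble of decision trees whose rules stay human-readable.
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=42)

# Cross-fold validation; the paper ranks classifiers by median accuracy
# penalized by execution time, so both quantities are reported here.
import time
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
start = time.time()
scores = cross_val_score(clf, X_bal, y_bal, cv=cv, scoring="accuracy")
elapsed = time.time() - start
print(f"median accuracy: {np.median(scores):.4f}  runtime: {elapsed:.1f}s")
```

Under-sampling follows the same pattern with `resample(..., replace=False, n_samples=len(y_min))` applied to the majority class instead.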
