
STAN: Synthetic Network Traffic Generation with Generative Neural Models

Deep learning models have achieved great success in recent years, but progress in some domains like cybersecurity is stymied by a paucity of realistic datasets. Organizations are reluctant to share such data, even internally, for privacy reasons. An alternative is to use synthetically generated data, but existing methods are limited in their ability to capture complex dependency structures between attributes and across time. This paper presents STAN (Synthetic network Traffic generation with Autoregressive Neural models), a tool for generating realistic synthetic network traffic datasets for downstream applications. Our novel neural architecture captures both temporal dependencies and dependencies between attributes at any given time. It integrates convolutional neural layers with mixture density neural layers and softmax layers, and models both continuous and discrete variables. We evaluate STAN in terms of the quality of the generated data by training it on both a simulated dataset and a real network traffic dataset. Finally, to answer the question of whether real network traffic data can be substituted with synthetic data to train models of comparable accuracy, we train two anomaly detection models based on self-supervision. The results show only a small decline in the accuracy of models trained solely on synthetic data. While current results are encouraging in terms of the quality of the generated data and the absence of any obvious leakage from the training data, we plan to further validate this by conducting privacy attacks on the generated data. Other future work includes validating capture of long-term dependencies and making model training
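The abstract does not include code; the following is a minimal, hypothetical PyTorch sketch of the kind of architecture it describes: 1-D convolutional layers over a window of past records feeding a mixture density head for a continuous attribute and a softmax head for a discrete attribute. All module names, layer sizes, and the loss wiring are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): convolutional layers over a
# history window, a Gaussian mixture density head for one continuous
# attribute, and a softmax head for one discrete attribute.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StanLikeBlock(nn.Module):
    def __init__(self, n_attrs=8, hidden=64, n_mixtures=5, n_classes=10):
        super().__init__()
        # 1-D convolutions over the time dimension of the history window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_attrs, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
        )
        # Mixture density head: mixture logits, means, and log-scales for a
        # univariate Gaussian mixture over one continuous attribute.
        self.mdn = nn.Linear(hidden, 3 * n_mixtures)
        # Softmax head for one categorical attribute.
        self.cls = nn.Linear(hidden, n_classes)

    def forward(self, history):
        # history: (batch, window, n_attrs); Conv1d expects (batch, channels, time)
        h = self.conv(history.transpose(1, 2))
        h = h[:, :, -1]  # features at the most recent time step
        logits_pi, mu, log_sigma = self.mdn(h).chunk(3, dim=-1)
        class_logits = self.cls(h)
        return logits_pi, mu, log_sigma, class_logits

def mdn_nll(logits_pi, mu, log_sigma, target):
    # Negative log-likelihood of a scalar target under the Gaussian mixture.
    log_pi = F.log_softmax(logits_pi, dim=-1)
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = comp.log_prob(target.unsqueeze(-1))  # (batch, n_mixtures)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

if __name__ == "__main__":
    model = StanLikeBlock()
    past = torch.randn(32, 16, 8)  # 32 windows of 16 records with 8 attributes
    pi, mu, log_sigma, class_logits = model(past)
    loss = mdn_nll(pi, mu, log_sigma, torch.randn(32)) \
         + F.cross_entropy(class_logits, torch.randint(0, 10, (32,)))
    loss.backward()
    print(float(loss))
```

In this sketch the mixture density head models the continuous attribute and the softmax head models the discrete one, mirroring the abstract's point that both variable types are handled jointly; the real model's attribute set, factorization order, and training objective may differ.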
