Taking Over the Stock Market: Adversarial Perturbations Against Algorithmic Traders

19 Oct 2020  ·  Elior Nehemya, Yael Mathov, Asaf Shabtai, Yuval Elovici

In recent years, machine learning has become prevalent in numerous tasks, including algorithmic trading. Stock market traders utilize machine learning models to predict the market's behavior and execute an investment strategy accordingly. However, machine learning models have been shown to be susceptible to input manipulations called adversarial examples. Despite this risk, the trading domain remains largely unexplored in the context of adversarial learning. In this study, we present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques to manipulate the input data stream in real time. The attacker creates a universal perturbation that is agnostic to the target model and time of use, which, when added to the input stream, remains imperceptible. We evaluate our attack on a real-world market data stream and target three different trading algorithms. We show that when added to the input stream, our perturbation can fool the trading algorithms at future unseen data points, in both white-box and black-box settings. Finally, we present various mitigation methods and discuss their limitations, which stem from the algorithmic trading domain. We believe that these findings should serve as an alert to the finance community about the threats in this area and promote further research on the risks associated with using automated learning models in the trading domain.
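To make the attack setting concrete, here is a minimal toy sketch (not the paper's actual method) of a universal perturbation against a price-based trading signal. It assumes a hypothetical linear "buy/sell" model over a sliding window of prices; for such a model an FGSM-style step aggregated over samples reduces to the sign of the weight vector, and the same fixed perturbation is then applied to future, unseen windows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trading model: predicts "buy" (1) if the weighted
# sum of the last W normalized prices is positive, else "sell" (0).
# This linear model is an illustrative assumption, not the paper's target.
W = 16
weights = rng.normal(size=W)

def predict(window):
    return int(window @ weights > 0)

# Synthetic stand-in for a market data stream, sliced into windows.
stream = rng.normal(size=1024)
windows = np.stack([stream[i:i + W] for i in range(len(stream) - W)])

# Universal perturbation: one fixed vector, crafted once, reused on every
# future window. For a linear model the input gradient of the score is
# simply `weights`, so a signed-gradient step is sign(weights).
eps = 0.5  # perturbation budget, kept small so the change stays subtle
delta = eps * np.sign(weights)

# Apply the same perturbation to "future" windows and measure its effect.
flipped = np.mean([predict(w + delta) != predict(w) for w in windows])
forced_buy = np.mean([predict(w + delta) for w in windows])
print(f"fraction of decisions changed: {flipped:.2f}")
print(f"fraction predicting 'buy' after attack: {forced_buy:.2f}")
```

The key property being illustrated is universality: `delta` is computed once and is independent of the time step at which it is injected, mirroring the abstract's claim that the perturbation is agnostic to the time of use.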


Datasets


Introduced in the Paper:

S&P 500 Intraday Data

