What You Say and How You Say It Matters: Predicting Stock Volatility Using Verbal and Vocal Cues

ACL 2019 · Yu Qin, Yi Yang

Predicting financial risk is an essential task in financial markets. Prior research has shown that textual information in a firm's financial statements can be used to predict its stock's risk level. Nowadays, firm CEOs communicate information not only verbally through press releases and financial reports, but also nonverbally through investor meetings and earnings conference calls. There is anecdotal evidence that a CEO's vocal features, such as emotions and voice tones, can reveal the firm's performance. However, how vocal features can be used to predict risk levels, and to what extent, is still unknown. To fill this gap, we obtain earnings call audio recordings and textual transcripts for S&P 500 companies in recent years. We propose a multimodal deep regression model (MDRM) that jointly models a CEO's verbal (from text) and vocal (from audio) information in a conference call. Empirical results show that our model, which jointly considers verbal and vocal features, achieves significant and substantial prediction error reduction. We also discuss several interesting findings and their implications for financial markets. The processed earnings conference call data (text and audio) are released for readers who are interested in reproducing the results or designing trading strategies.
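As a rough illustration of the joint verbal-vocal modeling idea described in the abstract (not the authors' exact MDRM architecture), the sketch below encodes per-sentence text and audio feature sequences from a call with separate BiLSTMs, fuses the pooled representations by concatenation, and regresses a volatility value. All class names, feature dimensions, and the concatenation-based fusion are illustrative assumptions.

```python
# Hypothetical sketch of joint text + audio regression for volatility prediction.
# Not the authors' MDRM implementation; layer sizes and fusion choice are assumptions.
import torch
import torch.nn as nn

class MultimodalVolatilityRegressor(nn.Module):
    def __init__(self, text_dim=300, audio_dim=26, hidden_dim=64):
        super().__init__()
        # One BiLSTM per modality over the sequence of sentences in an earnings call.
        self.text_lstm = nn.LSTM(text_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.audio_lstm = nn.LSTM(audio_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Fuse the two pooled representations by concatenation, then regress.
        self.regressor = nn.Sequential(
            nn.Linear(4 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # predicted (log) volatility
        )

    def forward(self, text_feats, audio_feats):
        # text_feats:  (batch, num_sentences, text_dim)  e.g. sentence embeddings
        # audio_feats: (batch, num_sentences, audio_dim) e.g. per-sentence acoustic features
        text_out, _ = self.text_lstm(text_feats)
        audio_out, _ = self.audio_lstm(audio_feats)
        # Mean-pool over the sentence dimension for each modality, then concatenate.
        fused = torch.cat([text_out.mean(dim=1), audio_out.mean(dim=1)], dim=-1)
        return self.regressor(fused).squeeze(-1)

# Toy usage with random tensors standing in for a batch of two calls.
model = MultimodalVolatilityRegressor()
text = torch.randn(2, 50, 300)   # 50 sentences, 300-d text embeddings
audio = torch.randn(2, 50, 26)   # matching 26-d acoustic feature vectors
pred = model(text, audio)        # shape: (2,)
loss = nn.MSELoss()(pred, torch.randn(2))
loss.backward()
```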


Datasets


Introduced in the Paper:

Earnings Call
