FinQA: A Dataset of Numerical Reasoning over Financial Data

The sheer volume of financial statements makes it difficult for humans to access and analyze a business's financials. Robust numerical reasoning likewise faces unique challenges in this domain. In this work, we focus on answering deep questions over financial data, aiming to automate the analysis of a large corpus of financial documents. In contrast to existing tasks in the general domain, the finance domain includes complex numerical reasoning and understanding of heterogeneous representations. To facilitate analytical progress, we propose a new large-scale dataset, FinQA, with Question-Answering pairs over Financial reports, written by financial experts. We also annotate the gold reasoning programs to ensure full explainability. We further introduce baselines and conduct comprehensive experiments on our dataset. The results demonstrate that popular, large, pre-trained models fall far short of expert humans in acquiring finance knowledge and in complex multi-step numerical reasoning on that knowledge. Our dataset, the first of its kind, should therefore enable significant new community research into complex application domains. The dataset and code are publicly available at https://github.com/czyssrs/FinQA.
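
To make the notion of a gold reasoning program concrete, the sketch below executes a FinQA-style arithmetic program in Python. The linearized syntax (operations such as subtract and divide, with #k referring back to the result of step k) follows the paper's description of its program format; the exact operation set and parsing rules here are simplified assumptions, not the authors' official executor.

    import re

    # Simplified arithmetic operations assumed for illustration; the paper's
    # DSL also covers further operations (e.g., table aggregation).
    OPS = {
        "add": lambda a, b: a + b,
        "subtract": lambda a, b: a - b,
        "multiply": lambda a, b: a * b,
        "divide": lambda a, b: a / b,
    }

    def execute_program(program: str) -> float:
        """Evaluate a linearized program such as
        'subtract(5829, 5735), divide(#0, 5735)' and return the final value."""
        results = []
        for op, raw_args in re.findall(r"(\w+)\(([^)]*)\)", program):
            args = []
            for arg in raw_args.split(","):
                arg = arg.strip()
                # '#k' refers back to the result of the k-th earlier step.
                args.append(results[int(arg[1:])] if arg.startswith("#") else float(arg))
            results.append(OPS[op](*args))
        return results[-1]

    # Example: relative change computed from two figures in a report.
    print(execute_program("subtract(5829, 5735), divide(#0, 5735)"))  # ~0.0164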

EMNLP 2021

Datasets


Introduced in the Paper:

FinQA

Used in the Paper:

DROP, MathQA
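
As a hypothetical illustration of working with the FinQA data introduced above, the snippet below loads examples from the repository linked in the abstract. The file path and field names (qa, question, program, exe_ans) are assumptions about the released JSON format, not an official loader.

    import json

    # Assumed location of the released training split inside the repository.
    with open("FinQA/dataset/train.json", encoding="utf-8") as f:
        examples = json.load(f)

    sample = examples[0]
    print(sample["qa"]["question"])  # natural-language question
    print(sample["qa"]["program"])   # gold reasoning program
    print(sample["qa"]["exe_ans"])   # gold execution result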

Results from the Paper


Task                 Dataset   Model                      Metric               Value   Global Rank
Question Answering   FinQA     FinQANet (RoBERTa-large)   Execution Accuracy   65.05   #4
Question Answering   FinQA     FinQANet (RoBERTa-large)   Program Accuracy     63.52   #3
Question Answering   FinQA     FinQANet (FinBert)         Execution Accuracy   53.71   #6
Question Answering   FinQA     FinQANet (FinBert)         Program Accuracy     51.71   #5
Question Answering   FinQA     FinQANet (BERT-large)      Execution Accuracy   57.43   #5
Question Answering   FinQA     FinQANet (BERT-large)      Program Accuracy     55.52   #4
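
For reference, a minimal sketch of the two metrics reported above, under simplifying assumptions: execution accuracy checks whether running the predicted program yields the gold answer (up to a numeric tolerance), and program accuracy checks whether the predicted program matches the gold program. The normalization used here (whitespace and case only) is an assumption; the paper's official scorer may normalize operands differently.

    def execution_accuracy(pred_values, gold_values, tol=1e-5):
        # Fraction of examples whose executed result matches the gold answer
        # within a relative tolerance.
        hits = sum(abs(p - g) <= tol * max(1.0, abs(g))
                   for p, g in zip(pred_values, gold_values))
        return hits / len(gold_values)

    def program_accuracy(pred_programs, gold_programs):
        # Fraction of examples whose predicted program string matches the
        # gold program after light normalization.
        normalize = lambda p: p.replace(" ", "").lower()
        hits = sum(normalize(p) == normalize(g)
                   for p, g in zip(pred_programs, gold_programs))
        return hits / len(gold_programs)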

Methods


No methods listed for this paper.