Despite the use of large training datasets, most models are trained by iterating over individual input-output pairs, ignoring all other training examples when making each prediction.
Our work aims to experimentally assess the benefits of model ensembling in the context of neural methods for passage reranking.
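The ensembling idea above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function name `ensemble_rerank` and the toy scorers are hypothetical, and the combination rule shown (averaging each reranker's score per passage) is just one common way to ensemble rerankers.

```python
import statistics

def ensemble_rerank(passages, scorers):
    """Score-averaging ensemble (hypothetical illustration): each scorer
    maps a passage to a relevance score; passages are reranked by the
    mean score across all scorers."""
    def mean_score(p):
        return statistics.mean(s(p) for s in scorers)
    return sorted(passages, key=mean_score, reverse=True)

# Toy scorers standing in for neural rerankers: passage length and a
# keyword-match indicator.
passages = ["relevant passage", "short", "x"]
scorers = [len, lambda p: 1.0 if "relevant" in p else 0.0]
ranked = ensemble_rerank(passages, scorers)
```

In practice the individual scorers would be trained neural rerankers, and their scores would typically be normalized to a common scale before averaging.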
Specifically, we use bi-directional Recurrent Neural Networks, together with max-pooling over the temporal/sequential dimension and neural attention, for representing (i) the headline, (ii) the first two sentences of the news article, and (iii) the entire news article.
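The two pooling operations mentioned above can be sketched in isolation. This is a minimal NumPy illustration under stated assumptions: the per-token hidden states `h` stand in for the outputs of a bidirectional RNN (the RNN itself is omitted), and the scoring vector `w` stands in for learned attention parameters; all names here are hypothetical.

```python
import numpy as np

def max_pool_over_time(h):
    """Max-pooling over the temporal/sequential dimension: for each
    feature, keep its maximum activation across all time steps.
    h: (T, d) matrix of per-token hidden states -> (d,) vector."""
    return h.max(axis=0)

def attention_pool(h, w):
    """Neural attention pooling: score each time step, softmax the
    scores over time, and return the weighted average of hidden states.
    h: (T, d), w: (d,) scoring vector -> (d,) vector."""
    scores = h @ w                        # (T,) one score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over the T positions
    return weights @ h                    # (d,) attention-weighted average

rng = np.random.default_rng(0)
T, d = 5, 8                               # 5 tokens, 8-dim hidden states
h = rng.standard_normal((T, d))           # stand-in for BiRNN outputs
w = rng.standard_normal(d)

# One possible fixed-size representation of, e.g., the headline:
headline_vec = np.concatenate([max_pool_over_time(h), attention_pool(h, w)])
```

The same pooling would be applied separately to the hidden states produced for the headline, the first two sentences, and the full article body, yielding one fixed-size vector per view of the document.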
The task of expert finding has been getting increasing attention in the information retrieval literature.
More specifically, this article explores the use of supervised learning-to-rank methods, as well as rank aggregation approaches, for combining all of the estimators of expertise.