no code implementations • EMNLP (sdp) 2020 • Thomas van Dongen, Gideon Maillette de Buy Wenniger, Lambert Schomaker
We also show the merit of using more training data and longer input texts for predicting the number of citations.
no code implementations • 15 Aug 2023 • Gideon Maillette de Buy Wenniger, Thomas van Dongen, Lambert Schomaker
Using BERT$_{\textrm{BASE}}$ embeddings, on the (log) number of citations prediction task with the ACL-BiblioMetry dataset, our MultiSChuBERT (text+visual) model obtains an $R^{2}$ score of 0.454, compared to 0.432 for the SChuBERT (text only) model.
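The $R^{2}$ scores above are computed on log-transformed citation counts. A minimal sketch of that evaluation (not the authors' code; the citation counts and predictions below are purely illustrative) could look like:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical citation counts; the task predicts log(1 + citations),
# which compresses the heavy-tailed citation distribution.
citations = np.array([0, 3, 12, 45, 150])
y_true = np.log1p(citations)
y_pred = y_true + np.array([0.1, -0.2, 0.05, 0.0, -0.1])  # made-up model outputs
print(round(r2_score(y_true, y_pred), 3))
```

Scoring in log space means the metric rewards getting the order of magnitude of a paper's citations right, rather than being dominated by a few highly cited outliers.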
no code implementations • EMNLP (sdp) 2020 • Gideon Maillette de Buy Wenniger, Thomas van Dongen, Eleri Aedmaa, Herbert Teun Kruitbosch, Edwin A. Valentijn, Lambert Schomaker
To tackle these problems, we propose using HANs combined with structure-tags, which mark the role of each sentence in the document.
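A minimal sketch of the structure-tag idea (an assumption about the preprocessing, not the authors' implementation; the tag names and roles below are hypothetical) is to prepend a role token to each sentence before it enters the hierarchical attention network:

```python
# Prepend a structure-tag token marking each sentence's role
# (e.g. <TITLE>, <ABSTRACT>, <BODY>) so the HAN's sentence encoder
# can condition on where in the document a sentence appears.
def tag_sentences(doc):
    """doc: list of (role, sentence) pairs -> list of tagged token lists."""
    return [[f"<{role.upper()}>"] + sentence.split() for role, sentence in doc]

doc = [("title", "SChuBERT predicts citations"),
       ("abstract", "We use structure tags"),
       ("body", "Experiments show gains")]
print(tag_sentences(doc))
```

The tag tokens get their own learned embeddings, so the model can weight, say, abstract sentences differently from body sentences without any change to the HAN architecture itself.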