no code implementations • 5 Feb 2024 • Ashley Shin, Qiao Jin, James Anibal, Zhiyong Lu
Our study suggests that repurposing user query logs from academic search engines can be a promising way to train state-of-the-art models for explaining literature recommendations.
no code implementations • 8 Oct 2021 • Hieu Nguyen, Long Phan, James Anibal, Alec Peltekian, Hieu Tran
Text summarization is a challenging task within natural language processing that involves text generation from lengthy input sequences.
1 code implementation • 18 Jun 2021 • Hieu Tran, Long Phan, James Anibal, Binh T. Nguyen, Truong-Son Nguyen
In this paper, we propose SPBERT, a transformer-based language model pre-trained on massive SPARQL query logs.
1 code implementation • ACL (NLP4Prog) 2021 • Long Phan, Hieu Tran, Daniel Le, Hieu Nguyen, James Anibal, Alec Peltekian, Yanfang Ye
We train CoTexT on different combinations of available programming language (PL) corpora, including both "bimodal" and "unimodal" data.