no code implementations • 5 Jul 2022 • Thong Nguyen, Cong-Duy Nguyen, Xiaobao Wu, Anh Tuan Luu
Inheriting the spirit of transfer learning, research in vision-and-language (V&L) has devised multiple pretraining techniques on large-scale datasets to enhance performance on downstream tasks.
1 code implementation • ACL 2022 • Thong Nguyen, Andrew Yates, Ayah Zirikly, Bart Desmet, Arman Cohan
In dataset-transfer experiments on three social media datasets, we find that grounding the model in the PHQ-9's symptoms substantially improves its ability to generalize to out-of-distribution data compared to a standard BERT-based approach.
1 code implementation • 7 Dec 2021 • Thong Nguyen, Luu Anh Tuan
Current state-of-the-art cross-lingual summarization models employ a multi-task learning paradigm, which works on a shared vocabulary module and relies on the self-attention mechanism to attend across tokens in the two languages.
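The self-attention mechanism referred to here is, in Transformer models, scaled dot-product attention: each token's query is scored against all keys, and the resulting weights mix the value vectors. A minimal pure-Python sketch (function names and shapes are illustrative, not taken from the paper):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V are lists of equal-length vectors (lists of floats).
    For each query, score it against every key, normalize the scores
    with softmax, and return the weighted mix of value vectors."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

In the cross-lingual setting described above, tokens of both languages would share one such attention computation over a joint vocabulary, which is what lets the model attend across languages.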
1 code implementation • NeurIPS 2021 • Thong Nguyen, Anh Tuan Luu
Recent empirical studies show that adversarial topic models (ATMs) can successfully capture semantic patterns of a document by differentiating it from a dissimilar sample.
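Differentiating a document from a dissimilar sample can be phrased as a contrastive objective: the loss is small when a document representation is closer to a similar sample than to a dissimilar one. This is a hedged sketch of that general idea, not the paper's actual model or loss:

```python
import math

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def contrastive_loss(anchor, positive, negative):
    """Softmax-style contrastive loss over one positive and one negative
    sample: lower when the anchor scores higher against the positive
    (similar document) than against the negative (dissimilar sample)."""
    pos = math.exp(dot(anchor, positive))
    neg = math.exp(dot(anchor, negative))
    return -math.log(pos / (pos + neg))
```

Minimizing this loss pushes the anchor representation toward similar documents and away from dissimilar ones, which is the discrimination behavior the snippet attributes to ATMs.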
no code implementations • EMNLP 2021 • Thong Nguyen, Anh Tuan Luu, Truc Lu, Tho Quan
Recently, Transformer-based models have proven effective at abstractive summarization, producing fluent and informative summaries.
no code implementations • 10 Mar 2020 • Thong Nguyen, Duy Nguyen, Pramod Rao
For several tasks in Natural Language Processing (NLP), such as Information Extraction, Sentiment Analysis, or chatbots, Named Entity Recognition (NER) plays an important role: it detects and categorizes entities in text into predefined groups, such as the names of persons, locations, organizations, quantities, or percentages.
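The NER task described above can be illustrated with a toy example: map each token to one of the predefined entity groups (PER, LOC, ORG, ...) or to "O" for tokens outside any entity. A real system learns this mapping from data; the gazetteer lookup below is purely illustrative, and the entries in it are hypothetical:

```python
# Toy gazetteer mapping known tokens to entity labels; a trained NER
# model would predict these labels from context instead.
GAZETTEER = {
    "Alice": "PER",
    "Google": "ORG",
    "Hanoi": "LOC",
}

def tag_entities(tokens):
    """Return (token, label) pairs; 'O' marks tokens outside any entity."""
    return [(t, GAZETTEER.get(t, "O")) for t in tokens]
```

For example, `tag_entities("Alice works at Google".split())` labels "Alice" as a person and "Google" as an organization, leaving the remaining tokens as "O".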
no code implementations • 25 Jan 2019 • Thong Nguyen, Tianjian Lu, Ken Wu, Jose Schutt-Aine
In this paper, we leverage machine learning methods, specifically recurrent neural networks (RNNs), to generate black-box macromodels and achieve a significant reduction in computation time.
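A black-box macromodel replaces a slow circuit simulation with a learned sequence-to-sequence map from an input waveform to the predicted output waveform. A minimal single-unit Elman-style RNN sketch of that idea (weights here are hypothetical constants; in the paper they would be learned from simulation data):

```python
import math

def rnn_macromodel(inputs, w_in, w_rec, w_out):
    """Single-unit Elman RNN as a black-box surrogate: given a sampled
    input waveform (list of floats), carry a hidden state through time
    and emit one predicted output sample per input sample."""
    h = 0.0
    outputs = []
    for x in inputs:
        h = math.tanh(w_in * x + w_rec * h)  # recurrent state update
        outputs.append(w_out * h)            # readout at this time step
    return outputs
```

Once trained, evaluating such a model is a handful of multiply-adds per time step, which is where the computation-time reduction over a full simulation would come from.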