no code implementations • 14 Apr 2024 • Tuan Bui, Oanh Tran, Phuong Nguyen, Bao Ho, Long Nguyen, Thang Bui, Tho Quan
In today's rapidly evolving landscape of Artificial Intelligence, large language models (LLMs) have emerged as a vibrant research topic.
1 code implementation • 5 Mar 2024 • Sang T. Truong, Duc Q. Nguyen, Toan Nguyen, Dong D. Le, Nhi N. Truong, Tho Quan, Sanmi Koyejo
Recent advancements in large language models (LLMs) have underscored their importance in the evolution of artificial intelligence.
2 code implementations • 9 Jan 2024 • Khoi M. Le, Trinh Pham, Tho Quan, Anh Tuan Luu
Paraphrases are texts that convey the same meaning while using different words or sentence structures.
1 code implementation • 4 Dec 2023 • Duc Q. Nguyen, Thanh Toan Nguyen, Tho Quan
Subgraph matching is a challenging problem with a wide range of applications in database systems, biochemistry, and cognitive science.
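To make the problem concrete (this is a minimal brute-force illustration of subgraph matching itself, not the paper's method), checking whether a small directed pattern graph appears inside a target graph can be sketched as:

```python
from itertools import permutations

def is_subgraph(pattern_edges, pattern_nodes, target_edges, target_nodes):
    """Brute-force check: does some injective node mapping embed the
    pattern graph into the target graph, preserving directed edges?"""
    target_set = set(target_edges)
    for image in permutations(target_nodes, len(pattern_nodes)):
        mapping = dict(zip(pattern_nodes, image))
        if all((mapping[u], mapping[v]) in target_set for u, v in pattern_edges):
            return True
    return False

# A directed triangle pattern inside a 4-node target graph.
pattern = [(0, 1), (1, 2), (2, 0)]
target = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(is_subgraph(pattern, [0, 1, 2], target, [0, 1, 2, 3]))
```

The factorial cost of trying every node mapping is exactly why learned or index-based approaches to subgraph matching are an active research topic.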
no code implementations • 4 Dec 2023 • Cong-Duy Nguyen, The-Anh Vu-Le, Thong Nguyen, Tho Quan, Luu Anh Tuan
In existing studies of visually grounded language learning, language models have been supervised with both a language-only objective and visual grounding.
no code implementations • 29 Sep 2021 • Cong-Duy T Nguyen, Anh Tuan Luu, Tho Quan
However, this approach has two main drawbacks: (i) the whole image usually contains more objects and background than the sentence describes, so matching them directly confuses the grounded model; and (ii) a CNN extracts only the features of the image, not the relationships between the objects within it, limiting the grounded model's ability to learn complicated contexts.
no code implementations • EMNLP 2021 • Thong Nguyen, Anh Tuan Luu, Truc Lu, Tho Quan
Recently, Transformer-based models have been proven effective in the abstractive summarization task by creating fluent and informative summaries.
no code implementations • 29 Aug 2019 • Loc Tran, Tho Quan, An Mai
In this paper, we model the World Wide Web's link structure as a directed hypergraph.
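As an illustrative sketch of the data structure (the grouping of links into hyperedges below is an assumption for illustration, not the paper's exact construction), a directed hypergraph over web pages can be represented with tail and head sets:

```python
from collections import namedtuple

# A directed hyperedge connects a set of source pages (tail)
# to a set of destination pages (head).
Hyperedge = namedtuple("Hyperedge", ["tail", "head"])

# Hypothetical web fragment: page "a" links to both "b" and "c";
# pages "b" and "c" both link to "d".
web = [
    Hyperedge(tail={"a"}, head={"b", "c"}),
    Hyperedge(tail={"b", "c"}, head={"d"}),
]

def out_neighbors(page, edges):
    """All pages reachable from `page` in one hyperedge step."""
    result = set()
    for e in edges:
        if page in e.tail:
            result |= e.head
    return result

print(out_neighbors("a", web))
```

Unlike an ordinary directed graph, a single hyperedge here can capture a many-to-many link relationship in one object.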
1 code implementation • 1 May 2019 • Trung Trinh, Tho Quan, Trung Mai
The objective of our research is to create a topic model that achieves strong performance on microtexts while keeping runtime low enough to scale to large datasets.
no code implementations • 16 Feb 2019 • Khuong Vo, Tri Nguyen, Dang Pham, Mao Nguyen, Minh Truong, Trung Mai, Tho Quan
However, when applied to real data obtained from social media, we observe a high volume of short, informal messages posted by users on those channels.
1 code implementation • 7 Feb 2019 • Tai Hoang, Huy Le, Tho Quan
Firstly, we introduce the Autoencoding Variational Inference for Aspect Discovery (AVIAD) model, which extends the previous work of Autoencoding Variational Inference for Topic Models (AVITM) to embed prior knowledge of seed words.
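One simple way to picture how seed-word prior knowledge can shape an aspect's word distribution (an illustrative sketch with assumed details — a logit boost before the softmax — not the AVIAD implementation) is:

```python
import numpy as np

# Hypothetical toy vocabulary and seed words per aspect.
vocab = ["battery", "screen", "charge", "price", "cheap"]
seed_words = {0: ["battery", "charge"],   # aspect 0: power
              1: ["price", "cheap"]}      # aspect 1: cost

n_topics = 2
rng = np.random.default_rng(0)
logits = rng.normal(size=(n_topics, len(vocab)))

BOOST = 3.0  # assumed strength of the seed prior
for topic, words in seed_words.items():
    for w in words:
        logits[topic, vocab.index(w)] += BOOST

# Normalize to a topic-word probability distribution.
topic_word = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(topic_word.round(3))
```

After the boost, most of each aspect's probability mass sits on its seed words, which is the intuition behind embedding prior knowledge of seed words into the inference network.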
no code implementations • 22 Jun 2018 • Khuong Vo, Dang Pham, Mao Nguyen, Trung Mai, Tho Quan
In particular, when analyzing the applications of deep learning in sentiment analysis, we found that current approaches suffer from the following drawbacks: (i) existing works have not paid much attention to the importance of different types of sentiment terms, which is an important concept in this area; and (ii) the loss function currently employed does not adequately reflect the degree of error of a sentiment misclassification.
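The second drawback — that not all sentiment misclassifications are equally wrong — can be illustrated with a small sketch (an assumed ordinal weighting for illustration, not the paper's exact loss): predicting "neutral" for a "very positive" review should cost more than predicting "positive".

```python
LABELS = ["very_neg", "neg", "neutral", "pos", "very_pos"]

def ordinal_weighted_loss(probs, true_idx):
    """Loss that scales each class's predicted probability by its
    ordinal distance from the true label, i.e. the expected ordinal
    distance of the prediction."""
    return sum(abs(j - true_idx) * p for j, p in enumerate(probs))

# True label: "very_pos" (index 4).
p_neutral = [0.05, 0.05, 0.80, 0.05, 0.05]  # mostly predicts "neutral"
p_pos = [0.05, 0.05, 0.05, 0.80, 0.05]      # mostly predicts "pos"

print(ordinal_weighted_loss(p_neutral, 4))  # 2.0
print(ordinal_weighted_loss(p_pos, 4))      # 1.25
```

A plain cross-entropy would penalize both predictions identically, since each assigns the same probability (0.05) to the true class; the distance weighting is one way to make the loss reflect the severity of the error.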