1 code implementation • 8 Mar 2024 • Thang M. Pham, Peijie Chen, Tin Nguyen, Seunghyun Yoon, Trung Bui, Anh Totti Nguyen
CLIP-based classifiers rely on the prompt containing a {class name} that is known to the text encoder.
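The reliance on the class name appearing in the prompt can be sketched as follows. This is a minimal, illustrative reconstruction of standard CLIP-style zero-shot prompt construction, not the paper's actual pipeline; the class names and the template string are assumptions.

```python
# Illustrative sketch: CLIP-style zero-shot classification builds one text
# prompt per class by inserting the class name into a template. The text
# encoder embeds each prompt, and an image is assigned the class whose prompt
# embedding is most similar to the image embedding.
# Class names and the template below are assumed for illustration.
class_names = ["cat", "dog", "sparrow"]
template = "a photo of a {}"

prompts = [template.format(c) for c in class_names]
print(prompts)  # ['a photo of a cat', 'a photo of a dog', 'a photo of a sparrow']
```

If a class name is unknown to the text encoder (or absent from the prompt), this scheme has no way to score that class, which is the dependency the snippet above highlights.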
no code implementations • 2 Apr 2023 • Viet H. Pham, Thang M. Pham, Giang Nguyen, Long Nguyen, Dien Dinh
We also introduce a SentenceBERT-based filter that enhances the quality of the augmented data by retaining only semantically similar sentence pairs.
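The filtering idea can be sketched in a few lines. The sketch below is an assumption-laden stand-in: the paper uses SentenceBERT embeddings, whereas here a toy bag-of-words encoder and an arbitrary similarity threshold stand in so the filtering logic itself is runnable.

```python
import math
from collections import Counter

def embed(sentence):
    # Stand-in for a Sentence-BERT encoder: a bag-of-words count vector.
    # Only meant to illustrate the filtering logic, not the real embeddings.
    return Counter(sentence.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def filter_pairs(pairs, threshold=0.5):
    # Keep only augmented sentence pairs that remain semantically close;
    # the threshold value is an assumption, not taken from the paper.
    return [(s, t) for s, t in pairs if cosine(embed(s), embed(t)) >= threshold]

pairs = [
    ("the cat sat on the mat", "the cat sat on a mat"),    # near-paraphrase: kept
    ("the cat sat on the mat", "stock prices fell today"), # unrelated: dropped
]
print(filter_pairs(pairs))
```

With a real sentence encoder, `embed` would return dense vectors, but the keep-if-similar decision rule is the same.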
1 code implementation • 19 Jul 2022 • Thang M. Pham, Seunghyun Yoon, Trung Bui, Anh Nguyen
While contextualized word embeddings have become a de facto standard, learning contextualized phrase embeddings remains underexplored, hindered by the lack of a human-annotated benchmark that tests machine understanding of phrase semantics given a context sentence or paragraph (rather than phrases alone).
1 code implementation • 22 Oct 2021 • Thang M. Pham, Trung Bui, Long Mai, Anh Nguyen
We find two reasons why IM is not better than LOO: (1) deleting a single word from the input only marginally reduces a classifier's accuracy; and (2) a highly predictable word is always given near-zero attribution, regardless of its true importance to the classifier.
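The Leave-One-Out (LOO) baseline that IM is compared against can be sketched on a toy classifier: each word's attribution is the drop in the model's score when that word is deleted from the input. The lexicon classifier and its scores below are illustrative assumptions, not the models used in the paper.

```python
# Sketch of Leave-One-Out (LOO) word attribution on a toy lexicon classifier.
# The lexicons and scores are assumed for illustration only.
POSITIVE = {"great": 2.0, "good": 1.0}
NEGATIVE = {"terrible": 2.0, "bad": 1.0}

def positive_score(words):
    # Toy classifier score: positive evidence minus negative evidence.
    return (sum(POSITIVE.get(w, 0.0) for w in words)
            - sum(NEGATIVE.get(w, 0.0) for w in words))

def loo_attribution(words):
    # Attribution of word i = score(full input) - score(input without word i).
    full = positive_score(words)
    return {words[i]: full - positive_score(words[:i] + words[i + 1:])
            for i in range(len(words))}

sentence = "the movie was great".split()
print(loo_attribution(sentence))
# Only "great" receives nonzero attribution; deleting any other word
# leaves the toy classifier's score unchanged.
```

The paper's first finding maps onto this sketch directly: with a robust real classifier, deleting one word often changes the score only marginally, so LOO (and IM) attributions are small even for words that matter.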
no code implementations • Findings (ACL) 2021 • Thang M. Pham, Trung Bui, Long Mai, Anh Nguyen
Encouraging classifiers to capture word-order information improves performance on most GLUE tasks, SQuAD 2.0, and out-of-sample data.
Tasks: Natural Language Inference, Natural Language Understanding, +2