1 code implementation • 11 Oct 2022 • Long Phan, Tai Dang, Hieu Tran, Trieu H. Trinh, Vy Phan, Lam D. Chau, Minh-Thang Luong
Biomedical data and benchmarks are highly valuable yet scarce in low-resource languages other than English, such as Vietnamese.
2 code implementations • 11 Oct 2022 • Chinh Ngo, Trieu H. Trinh, Long Phan, Hieu Tran, Tai Dang, Hieu Nguyen, Minh Nguyen, Minh-Thang Luong
We introduce MTet, the largest publicly available parallel corpus for English-Vietnamese translation.
Ranked #1 on Machine Translation on IWSLT2015 English-Vietnamese (using extra training data)
1 code implementation • NAACL (ACL) 2022 • Long Phan, Hieu Tran, Hieu Nguyen, Trieu H. Trinh
In this work, we perform exhaustive experiments on both Vietnamese Abstractive Summarization and Named Entity Recognition, validating the performance of ViT5 against many other pretrained Transformer-based encoder-decoder models.
1 code implementation • Blog 2022 • Chinh Ngo, Hieu Tran, Long Phan, Trieu H. Trinh, Hieu Nguyen, Minh Nguyen, Minh-Thang Luong
We are excited to introduce a new, larger, and higher-quality Machine Translation dataset, MTet, which stands for Multi-domain Translation for English and VieTnamese.
1 code implementation • 7 Jun 2019 • Trieu H. Trinh, Minh-Thang Luong, Quoc V. Le
Notably, on ImageNet 224 x 224 with 60 examples per class (5%), our method improves the mean accuracy of ResNet-50 from 35.6% to 46.7%, an improvement of 11.1 points in absolute accuracy.
no code implementations • ICLR 2019 • Trieu H. Trinh, Quoc V. Le
It has been argued that current machine learning models lack commonsense, and therefore must be hard-coded with prior knowledge (Marcus, 2018).
2 code implementations • 7 Jun 2018 • Trieu H. Trinh, Quoc V. Le
Commonsense reasoning is a long-standing challenge for deep learning.
Ranked #9 on Natural Language Understanding on PDP60
1 code implementation • ICML 2018 • Trieu H. Trinh, Andrew M. Dai, Minh-Thang Luong, Quoc V. Le
Despite recent advances in training recurrent neural networks (RNNs), capturing long-term dependencies in sequences remains a fundamental challenge.
Ranked #13 on Sequential Image Classification on Sequential CIFAR-10