1 code implementation • Findings (ACL) 2022 • Thuy-Trang Vu, Shahram Khadivi, Dinh Phung, Gholamreza Haffari
Generalising to unseen domains is under-explored and remains a challenge in neural machine translation.
no code implementations • 17 Feb 2024 • Minh-Vuong Nguyen, Linhao Luo, Fatemeh Shiri, Dinh Phung, Yuan-Fang Li, Thuy-Trang Vu, Gholamreza Haffari
Large language models (LLMs) demonstrate strong reasoning abilities when prompted to generate chain-of-thought (CoT) explanations alongside answers.
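As an illustration of the prompting style this entry refers to, here is a minimal chain-of-thought sketch; `query_llm`, the few-shot exemplar, and the answer-extraction marker are hypothetical placeholders, not the paper's setup.

```python
# Minimal sketch of chain-of-thought (CoT) prompting.
# `query_llm` is a hypothetical stand-in for any text-completion API;
# the exemplar and answer-extraction logic are illustrative only.

def query_llm(prompt: str) -> str:
    """Hypothetical call to an LLM completion endpoint."""
    raise NotImplementedError("plug in your model API here")

COT_EXEMPLAR = (
    "Q: Roger has 5 balls and buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def answer_with_cot(question: str) -> str:
    # The exemplar shows the model the reasoning format; "A:" elicits
    # a step-by-step explanation followed by a final answer.
    prompt = COT_EXEMPLAR + f"Q: {question}\nA:"
    completion = query_llm(prompt)
    # Extract the final answer after the conventional "The answer is" marker.
    marker = "The answer is"
    return completion.split(marker)[-1].strip() if marker in completion else completion
```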
no code implementations • 16 Feb 2024 • Minghan Wang, Thuy-Trang Vu, Ehsan Shareghi, Gholamreza Haffari
Simultaneous machine translation (SimulMT) presents a challenging trade-off between translation quality and latency.
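To make the trade-off concrete, below is a sketch of wait-k, a standard SimulMT read/write policy named here purely as background, not as this paper's method.

```python
# Illustrative wait-k policy for simultaneous MT (a standard baseline):
# read k source tokens first, then alternate one write per read.
# Latency is explicit and tunable via k.

def wait_k_schedule(source_len: int, target_len: int, k: int):
    """Yield ('read', i) / ('write', j) actions under a wait-k policy."""
    read, written = 0, 0
    while written < target_len:
        # Read ahead until k tokens beyond the written prefix (or the
        # whole source) are available.
        while read < min(written + k, source_len):
            yield ("read", read)
            read += 1
        yield ("write", written)
        written += 1
```

Smaller k lowers latency at the cost of source context, which is exactly the quality-latency trade-off in question.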
1 code implementation • 2 Feb 2024 • Tongtong Wu, Linhao Luo, Yuan-Fang Li, Shirui Pan, Thuy-Trang Vu, Gholamreza Haffari
Large language models (LLMs) are not amenable to frequent re-training, due to high training costs arising from their massive scale.
no code implementations • 12 Jan 2024 • Minghao Wu, Thuy-Trang Vu, Lizhen Qu, George Foster, Gholamreza Haffari
Large language models (LLMs) have made significant strides in various natural language processing (NLP) tasks.
no code implementations • 18 Oct 2023 • Linhao Luo, Thuy-Trang Vu, Dinh Phung, Gholamreza Haffari
We systematically evaluate the state-of-the-art LLMs with KGs in generic and specific domains.
no code implementations • 13 Sep 2023 • Minghan Wang, Jinming Zhao, Thuy-Trang Vu, Fatemeh Shiri, Ehsan Shareghi, Gholamreza Haffari
The results show that the LLM outperforms dedicated MT models in terms of BLEU and LAAL metrics.
no code implementations • 6 May 2023 • Thuy-Trang Vu, Shahram Khadivi, Mahsa Ghorbanali, Dinh Phung, Gholamreza Haffari
Acquiring new knowledge without forgetting what has been learned in a sequence of tasks is the central focus of continual learning (CL).
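For context, one common CL strategy is experience replay; the sketch below (with reservoir sampling and an assumed buffer capacity) is illustrative only and not necessarily the approach taken in this paper.

```python
import random

# Hedged sketch of experience replay, one common continual-learning
# strategy: keep a small buffer of past-task examples and mix them
# into each new-task batch to mitigate forgetting.

class ReplayBuffer:
    def __init__(self, capacity: int = 1000):  # capacity is an assumption
        self.capacity = capacity
        self.examples = []
        self.seen = 0

    def add(self, example):
        # Reservoir sampling keeps a uniform sample over all examples seen.
        self.seen += 1
        if len(self.examples) < self.capacity:
            self.examples.append(example)
        else:
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.examples[i] = example

    def sample(self, n: int):
        return random.sample(self.examples, min(n, len(self.examples)))
```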
no code implementations • 26 Mar 2023 • Thuy-Trang Vu, Xuanli He, Gholamreza Haffari, Ehsan Shareghi
In recent years, increasing attention has been paid to probing the role of pre-training data in the downstream behaviour of Large Language Models (LLMs).
no code implementations • 20 Oct 2022 • Thuy-Trang Vu, Shahram Khadivi, Xuanli He, Dinh Phung, Gholamreza Haffari
Previous works mostly focus on either multilingual or multi-domain aspects of neural machine translation (NMT).
1 code implementation • EMNLP 2021 • Thuy-Trang Vu, Xuanli He, Dinh Phung, Gholamreza Haffari
Once the in-domain data is detected by the classifier, the NMT model is then adapted to the new domain by jointly learning translation and domain discrimination tasks.
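A hedged sketch of such joint training follows: translation cross-entropy plus a weighted domain-discrimination loss. The encoder/decoder interfaces, the mean-pooling choice, and the mixing weight `lambda_dom` are assumptions, not the paper's exact architecture or hyper-parameters.

```python
import torch
import torch.nn as nn

class JointNMT(nn.Module):
    """Sketch of joint translation + domain-discrimination training."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 hidden_dim: int, num_domains: int = 2):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder
        # Auxiliary head predicting which domain a sentence comes from.
        self.domain_head = nn.Linear(hidden_dim, num_domains)

    def forward(self, src, tgt, domain_labels, lambda_dom: float = 0.5):
        enc = self.encoder(src)                       # (batch, seq, hidden)
        logits = self.decoder(enc, tgt)               # (batch, seq, vocab)
        trans_loss = nn.functional.cross_entropy(
            logits.flatten(0, 1), tgt.flatten())
        # Mean-pool encoder states as a sentence representation.
        dom_logits = self.domain_head(enc.mean(dim=1))
        dom_loss = nn.functional.cross_entropy(dom_logits, domain_labels)
        # Joint objective: translation loss plus weighted domain loss.
        return trans_loss + lambda_dom * dom_loss
```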
1 code implementation • EMNLP 2020 • Thuy-Trang Vu, Dinh Phung, Gholamreza Haffari
Recent work has shown the importance of adaptation of broad-coverage contextualised embedding models on the domain of the target task of interest.
1 code implementation • ACL 2019 • Thuy-Trang Vu, Ming Liu, Dinh Phung, Gholamreza Haffari
Heuristic-based active learning (AL) methods are limited when the data distributions of the underlying learning problems vary.
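For reference, one such fixed heuristic is least-confidence uncertainty sampling, sketched below; `model.predict_proba` follows the scikit-learn convention, and the snippet is illustrative background rather than this paper's approach.

```python
import numpy as np

# Minimal sketch of a fixed AL heuristic: least-confidence sampling.
# The model interface follows the scikit-learn `predict_proba` convention.

def least_confidence_query(model, unlabeled_X: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k unlabeled points the model is least sure about."""
    probs = model.predict_proba(unlabeled_X)   # (n, num_classes)
    confidence = probs.max(axis=1)             # top-class probability per point
    return np.argsort(confidence)[:k]          # lowest confidence first
```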
1 code implementation • EMNLP 2018 • Thuy-Trang Vu, Gholamreza Haffari
Automated Post-Editing (PE) is the task of automatically correcting common and repetitive errors found in machine translation (MT) output.