Search Results for author: Thuy-Trang Vu

Found 14 papers, 5 papers with code

Automatic Post-Editing of Machine Translation: A Neural Programmer-Interpreter Approach

1 code implementation EMNLP 2018 Thuy-Trang Vu, Gholamreza Haffari

Automated Post-Editing (PE) is the task of automatically correcting common and repetitive errors found in machine translation (MT) output.

Automatic Post-Editing · Translation

Learning How to Active Learn by Dreaming

1 code implementation ACL 2019 Thuy-Trang Vu, Ming Liu, Dinh Phung, Gholamreza Haffari

Heuristic-based active learning (AL) methods are limited when the data distributions of the underlying learning problems vary.

Active Learning · Named Entity Recognition +5

Effective Unsupervised Domain Adaptation with Adversarially Trained Language Models

1 code implementation EMNLP 2020 Thuy-Trang Vu, Dinh Phung, Gholamreza Haffari

Recent work has shown the importance of adapting broad-coverage contextualised embedding models to the domain of the target task of interest.

Named Entity Recognition +2

Generalised Unsupervised Domain Adaptation of Neural Machine Translation with Cross-Lingual Data Selection

1 code implementation EMNLP 2021 Thuy-Trang Vu, Xuanli He, Dinh Phung, Gholamreza Haffari

Once the in-domain data is detected by the classifier, the NMT model is then adapted to the new domain by jointly learning translation and domain discrimination tasks.

Contrastive Learning · Machine Translation +3
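The adaptation step described above, jointly learning translation and domain discrimination on classifier-selected in-domain data, can be sketched as a weighted multi-task objective. The function name and the weighting scheme below are illustrative assumptions, not the paper's exact formulation:

```python
# Hypothetical sketch of a joint adaptation objective: the NMT model's
# translation loss is combined with a domain-discrimination loss computed on
# data the cross-lingual classifier flagged as in-domain. The name
# `joint_adaptation_loss` and the fixed weighting are assumptions for
# illustration only.

def joint_adaptation_loss(translation_loss: float,
                          domain_loss: float,
                          domain_weight: float = 0.1) -> float:
    """Combine the two task losses into a single scalar training objective."""
    return translation_loss + domain_weight * domain_loss

# Example: the translation term dominates, while the discrimination term acts
# as a regulariser steering representations toward the detected domain.
loss = joint_adaptation_loss(translation_loss=2.5, domain_loss=0.8)
```

In practice both terms would be differentiable losses over model outputs and the weight would be tuned on held-out in-domain data; the scalar version above only shows how the two tasks share one objective.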

Koala: An Index for Quantifying Overlaps with Pre-training Corpora

no code implementations 26 Mar 2023 Thuy-Trang Vu, Xuanli He, Gholamreza Haffari, Ehsan Shareghi

In recent years, increasing attention has been paid to probing the role of pre-training data in the downstream behaviour of Large Language Models (LLMs).

Memorization

Active Continual Learning: On Balancing Knowledge Retention and Learnability

no code implementations 6 May 2023 Thuy-Trang Vu, Shahram Khadivi, Mahsa Ghorbanali, Dinh Phung, Gholamreza Haffari

Acquiring new knowledge without forgetting what has been learned in a sequence of tasks is the central focus of continual learning (CL).

Active Learning · Continual Learning +1

Continual Learning for Large Language Models: A Survey

no code implementations 2 Feb 2024 Tongtong Wu, Linhao Luo, Yuan-Fang Li, Shirui Pan, Thuy-Trang Vu, Gholamreza Haffari

Large language models (LLMs) are not amenable to frequent re-training, due to high training costs arising from their massive scale.

Continual Learning · Continual Pretraining +2

Conversational SimulMT: Efficient Simultaneous Translation with Large Language Models

no code implementations 16 Feb 2024 Minghan Wang, Thuy-Trang Vu, Ehsan Shareghi, Gholamreza Haffari

Simultaneous machine translation (SimulMT) presents a challenging trade-off between translation quality and latency.

Machine Translation · Translation

Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs

no code implementations 17 Feb 2024 Minh-Vuong Nguyen, Linhao Luo, Fatemeh Shiri, Dinh Phung, Yuan-Fang Li, Thuy-Trang Vu, Gholamreza Haffari

Large language models (LLMs) demonstrate strong reasoning abilities when prompted to generate chain-of-thought (CoT) explanations alongside answers.

Knowledge Graphs · Multi-hop Question Answering +1
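The chain-of-thought prompting mentioned in this entry can be illustrated with a minimal prompt builder. The template below is a commonly used CoT pattern, not the exact one evaluated in the paper, and the function name is an assumption:

```python
# Illustrative sketch of chain-of-thought (CoT) prompting: the model is asked
# to produce intermediate reasoning steps before its final answer. The
# "Let's think step by step." cue is a widely used generic CoT pattern; it is
# NOT claimed to be the paper's template, and `build_cot_prompt` is a
# hypothetical helper name.

def build_cot_prompt(question: str) -> str:
    """Wrap a question in a prompt that elicits step-by-step reasoning."""
    return (f"Q: {question}\n"
            "A: Let's think step by step.")

prompt = build_cot_prompt(
    "Which city is the capital of the country where the Eiffel Tower stands?"
)
```

For multi-hop questions like the example above, the elicited reasoning chain (Eiffel Tower → France → Paris) is exactly the kind of intermediate output that can be checked hop-by-hop against a knowledge graph.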
