Search Results for author: Luu Anh Tuan

Found 41 papers, 22 papers with code

UniBridge: A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages

1 code implementation · 14 Jun 2024 · Trinh Pham, Khoi M. Le, Luu Anh Tuan

Our approach tackles two essential elements of a language model: the initialization of embeddings and the optimal vocabulary size.

Cross-Lingual Transfer · Language Modelling +1

A Survey of Backdoor Attacks and Defenses on Large Language Models: Implications for Security Measures

no code implementations · 10 Jun 2024 · Shuai Zhao, Meihuizi Jia, Zhongliang Guo, Leilei Gan, Xiaoyu Xu, Jie Fu, Yichao Feng, Fengjun Pan, Luu Anh Tuan

Large language models (LLMs), which bridge the gap between human language understanding and complex problem solving, achieve state-of-the-art performance on several NLP tasks, particularly in few-shot and zero-shot settings.

SemRoDe: Macro Adversarial Training to Learn Representations That are Robust to Word-Level Attacks

1 code implementation · 27 Mar 2024 · Brian Formento, Wenjie Feng, Chuan Sheng Foo, Luu Anh Tuan, See-Kiong Ng

Language models (LMs) are indispensable tools for natural language processing tasks, but their vulnerability to adversarial attacks remains a concern.

Word Embeddings

ToXCL: A Unified Framework for Toxic Speech Detection and Explanation

1 code implementation · 25 Mar 2024 · Nhat M. Hoang, Xuan Long Do, Duc Anh Do, Duc Anh Vu, Luu Anh Tuan

This creates a pressing need for unified frameworks that can effectively detect and explain implicit toxic speech.

Decoder · Knowledge Distillation +1

Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning

no code implementations · 19 Feb 2024 · Shuai Zhao, Leilei Gan, Luu Anh Tuan, Jie Fu, Lingjuan Lyu, Meihuizi Jia, Jinming Wen

Motivated by this insight, we developed a Poisoned Sample Identification Module (PSIM) leveraging PEFT, which identifies poisoned samples through confidence, providing robust defense against weight-poisoning backdoor attacks.

Backdoor Attack · text-classification +1

Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning

no code implementations · 11 Jan 2024 · Shuai Zhao, Meihuizi Jia, Luu Anh Tuan, Fengjun Pan, Jinming Wen

Our studies demonstrate that an attacker can manipulate the behavior of large language models by poisoning the demonstration context, without the need for fine-tuning the model.

Backdoor Attack · In-Context Learning

READ-PVLA: Recurrent Adapter with Partial Video-Language Alignment for Parameter-Efficient Transfer Learning in Low-Resource Video-Language Modeling

1 code implementation · 12 Dec 2023 · Thong Nguyen, Xiaobao Wu, Xinshuai Dong, Khoi Le, Zhiyuan Hu, Cong-Duy Nguyen, See-Kiong Ng, Luu Anh Tuan

Fully fine-tuning pretrained large-scale transformer models has become a popular paradigm for video-language modeling tasks, such as temporal language grounding and video-language summarization.

Language Modelling · Transfer Learning

Exploring the Potential of Large Language Models in Computational Argumentation

1 code implementation · 15 Nov 2023 · Guizhen Chen, Liying Cheng, Luu Anh Tuan, Lidong Bing

As large language models (LLMs) have demonstrated impressive capabilities in understanding context and generating natural language, it is worthwhile to evaluate the performance of LLMs on diverse computational argumentation tasks.

Argument Mining

Contrastive Chain-of-Thought Prompting

1 code implementation · 15 Nov 2023 · Yew Ken Chia, Guizhen Chen, Luu Anh Tuan, Soujanya Poria, Lidong Bing

Compared to conventional chain of thought, our approach provides both valid and invalid reasoning demonstrations to guide the model to reason step by step while reducing reasoning mistakes.

Language Modelling
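The contrastive idea in this abstract can be sketched as a prompt that pairs one correct rationale with one flawed rationale before the target question. This is a hypothetical illustration of the general idea, not the paper's actual template; the function name and wording are assumptions.

```python
def contrastive_cot_prompt(question, valid_demo, invalid_demo):
    # Hypothetical template: show the model one correct and one flawed
    # rationale, so it sees both what to do and what to avoid.
    return (
        "Correct reasoning example:\n" + valid_demo + "\n\n"
        "Incorrect reasoning example:\n" + invalid_demo + "\n\n"
        "Question: " + question + "\n"
        "Answer step by step, avoiding the mistakes shown above."
    )
```

A caller would fill in the two demonstrations from annotated rationales; the contrast between them is what distinguishes this from standard few-shot chain-of-thought prompting.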

ChatKBQA: A Generate-then-Retrieve Framework for Knowledge Base Question Answering with Fine-tuned Large Language Models

1 code implementation · 13 Oct 2023 · Haoran Luo, Haihong E, Zichen Tang, Shiyao Peng, Yikai Guo, Wentai Zhang, Chenghao Ma, Guanting Dong, Meina Song, Wei Lin, Yifan Zhu, Luu Anh Tuan

Knowledge Base Question Answering (KBQA) aims to answer natural language questions over large-scale knowledge bases (KBs), which can be summarized into two crucial steps: knowledge retrieval and semantic parsing.

Knowledge Base Question Answering · Knowledge Graphs +2

Rethinking Negative Pairs in Code Search

1 code implementation · 12 Oct 2023 · Haochen Li, Xin Zhou, Luu Anh Tuan, Chunyan Miao

In our proposed loss function, we apply three methods to estimate the weights of negative pairs and show that the vanilla InfoNCE loss is a special case of Soft-InfoNCE.

Code Search · Contrastive Learning +2
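The claim that vanilla InfoNCE is a special case of the paper's weighted variant can be made concrete with a minimal sketch of the vanilla loss. The exact Soft-InfoNCE weighting is not reproduced here; the point is only that the standard loss treats every negative pair uniformly, which is the degenerate case the abstract refers to.

```python
import math

def info_nce(sim_pos, sim_negs, temperature=0.07):
    """Vanilla InfoNCE from precomputed similarities:
    -log( exp(s+/t) / (exp(s+/t) + sum_j exp(s-_j/t)) ).
    Every negative contributes with equal (implicit) weight."""
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)  # subtract the max for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)
```

The loss is always non-negative and shrinks as the positive pair's similarity grows relative to the negatives; a weighted variant would simply scale each `exp(s-_j/t)` term in the denominator.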

Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models

no code implementations · 2 May 2023 · Shuai Zhao, Jinming Wen, Luu Anh Tuan, Junbo Zhao, Jie Fu

Our method does not require external triggers and ensures correct labeling of poisoned samples, improving the stealthy nature of the backdoor attack.

Backdoor Attack · Few-Shot Text Classification +1

Towards Interpretable Federated Learning

no code implementations · 27 Feb 2023 · Anran Li, Rui Liu, Ming Hu, Luu Anh Tuan, Han Yu

Federated learning (FL) enables multiple data owners to build machine learning models collaboratively without exposing their private local data.

Federated Learning

Exploiting Contrastive Learning and Numerical Evidence for Confusing Legal Judgment Prediction

1 code implementation · 15 Nov 2022 · Leilei Gan, Baokui Li, Kun Kuang, Yating Zhang, Lei Wang, Luu Anh Tuan, Yi Yang, Fei Wu

Given the fact description text of a legal case, legal judgment prediction (LJP) aims to predict the case's charge, law article and penalty term.

Contrastive Learning

Textual Manifold-based Defense Against Natural Language Adversarial Examples

1 code implementation · 5 Nov 2022 · Dang Minh Nguyen, Luu Anh Tuan

To the best of our knowledge, this is the first NLP defense that leverages the manifold structure against adversarial attacks.

Improving Neural Cross-Lingual Summarization via Employing Optimal Transport Distance for Knowledge Distillation

1 code implementation · 7 Dec 2021 · Thong Nguyen, Luu Anh Tuan

Current state-of-the-art cross-lingual summarization models employ a multi-task learning paradigm, which works on a shared vocabulary module and relies on the self-attention mechanism to attend among tokens in two languages.

Knowledge Distillation · Multi-Task Learning

Capturing Greater Context for Question Generation

1 code implementation · 22 Oct 2019 · Luu Anh Tuan, Darsh J Shah, Regina Barzilay

Automatic question generation can benefit many applications ranging from dialogue systems to reading comprehension.

Question Answering · Question Generation +3

Recurrently Controlled Recurrent Networks

1 code implementation · NeurIPS 2018 · Yi Tay, Luu Anh Tuan, Siu Cheung Hui

Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems.

Answer Selection · General Classification +2

Self-Attentive Neural Collaborative Filtering

no code implementations · 17 Jun 2018 · Yi Tay, Shuai Zhang, Luu Anh Tuan, Siu Cheung Hui

This paper has been withdrawn as we discovered a bug in our tensorflow implementation that involved accidental mixing of vectors across batches.

Collaborative Filtering

Reasoning with Sarcasm by Reading In-between

no code implementations · ACL 2018 · Yi Tay, Luu Anh Tuan, Siu Cheung Hui, Jian Su

Sarcasm is a sophisticated speech act which commonly manifests on social communities such as Twitter and Reddit.

Sarcasm Detection

Multi-range Reasoning for Machine Comprehension

no code implementations · 24 Mar 2018 · Yi Tay, Luu Anh Tuan, Siu Cheung Hui

Similarly, we achieve competitive performance relative to AMANDA on the SearchQA benchmark and BiDAF on the NarrativeQA benchmark without using any LSTM/GRU layers.

Reading Comprehension

Multi-Pointer Co-Attention Networks for Recommendation

2 code implementations · 28 Jan 2018 · Yi Tay, Luu Anh Tuan, Siu Cheung Hui

Our model operates on a multi-hierarchical paradigm and is based on the intuition that not all reviews are created equal, i.e., only a select few are important.

Recommendation Systems · Representation Learning

Cross Temporal Recurrent Networks for Ranking Question Answer Pairs

1 code implementation · 21 Nov 2017 · Yi Tay, Luu Anh Tuan, Siu Cheung Hui

This paper explores the idea of learning temporal gates for sequence pairs (question and answer), jointly influencing the learned representations in a pairwise manner.

SkipFlow: Incorporating Neural Coherence Features for End-to-End Automatic Text Scoring

1 code implementation · 14 Nov 2017 · Yi Tay, Minh C. Phan, Luu Anh Tuan, Siu Cheung Hui

Our method proposes a new SkipFlow mechanism that models relationships between snapshots of the hidden representations of a long short-term memory (LSTM) network as it reads.

Automated Essay Scoring · Feature Engineering +1
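The snapshot idea in this abstract can be illustrated with a much-simplified sketch: take hidden-state snapshots a fixed number of steps apart and compare each pair. SkipFlow itself learns the comparison (the paper's mechanism is parameterized, not a fixed metric); cosine similarity here is only an illustrative stand-in, and the function name is an assumption.

```python
import math

def snapshot_coherence(hidden_states, delta):
    """Simplified sketch: compare hidden-state snapshots taken delta
    steps apart via cosine similarity. The real SkipFlow mechanism
    learns this pairwise comparison rather than fixing it."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0
    return [cosine(hidden_states[i], hidden_states[i + delta])
            for i in range(0, len(hidden_states) - delta, delta)]
```

The resulting sequence of similarities is a crude stand-in for the "neural coherence features" of the title: similar consecutive snapshots suggest a smoothly developing text, dissimilar ones an abrupt shift.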

Multi-task Neural Network for Non-discrete Attribute Prediction in Knowledge Graphs

no code implementations · 16 Aug 2017 · Yi Tay, Luu Anh Tuan, Minh C. Phan, Siu Cheung Hui

Unfortunately, many state-of-the-art relational learning models ignore this information due to the challenging nature of dealing with non-discrete data types in the inherently binary-natured knowledge graphs.

Attribute · Knowledge Graphs +3

Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering

1 code implementation · 25 Jul 2017 · Yi Tay, Luu Anh Tuan, Siu Cheung Hui

The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers.

Efficient Neural Network · Feature Engineering +3

Utilizing Temporal Information for Taxonomy Construction

no code implementations · TACL 2016 · Luu Anh Tuan, Siu Cheung Hui, See Kiong Ng

Taxonomies play an important role in many applications by organizing domain knowledge into a hierarchy of 'is-a' relations between terms.

Question Answering · Time Series +1
