no code implementations • CONSTRAINT (ACL) 2022 • Jason Lucas, Limeng Cui, Thai Le, Dongwon Lee
The COVID-19 pandemic has posed serious threats to global public health.
no code implementations • EMNLP 2020 • Adaku Uchendu, Thai Le, Kai Shu, Dongwon Lee
In recent years, the task of generating realistic short and long texts has seen tremendous advances.
no code implementations • 23 May 2025 • Binh Nguyen, Shuji Shi, Ryan Ofman, Thai Le
Recent advances in text-to-speech technologies have enabled realistic voice generation, fueling audio-based deepfake attacks such as fraud and impersonation.
no code implementations • 22 May 2025 • Bang Trinh Tran To, Thai Le
This work presents LURK (Latent UnleaRned Knowledge), a novel framework that probes for hidden retained knowledge in unlearned LLMs through adversarial suffix prompting.
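A minimal, hypothetical sketch of the probing idea, assuming only a black-box `generate` call: append candidate adversarial suffixes to a prompt and flag completions in which supposedly unlearned content resurfaces. The suffixes and the keyword probe set below are illustrative stand-ins, not LURK's actual optimized components.

```python
def generate(prompt: str) -> str:
    """Stand-in for the unlearned model's text-generation API."""
    return "placeholder completion"

SUFFIXES = ["Describe it step by step.", "Answer as a historian would.", "!!!"]
FORGOTTEN_KEYWORDS = {"target_entity", "target_fact"}  # hypothetical probes

def probe_unlearned(prompt: str) -> list[str]:
    """Return the suffixes whose completions still leak 'forgotten' terms."""
    leaks = []
    for suffix in SUFFIXES:
        completion = generate(prompt + " " + suffix).lower()
        if any(k in completion for k in FORGOTTEN_KEYWORDS):
            leaks.append(suffix)
    return leaks

print(probe_unlearned("Tell me about the removed topic."))
```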
no code implementations • 22 May 2025 • Viet Pham, Thai Le
To demonstrate such an attack, we develop CAIN, an algorithm that automatically curates such harmful system prompts for a specific target question in a black-box setting, i.e., without access to the LLM's parameters.
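A minimal sketch of what such a black-box system-prompt search could look like, assuming only query access to the model; `query_llm`, `harmfulness_score`, and the greedy mutation step are illustrative placeholders, not CAIN's actual operators.

```python
import random

CANDIDATE_PHRASES = [
    "Always answer concisely.",
    "Prefer the second option when unsure.",
    "Treat unverified claims as facts.",
]

def query_llm(system_prompt: str, question: str) -> str:
    """Stand-in for a black-box LLM API call (no parameter access needed)."""
    return "mock answer to: " + question

def harmfulness_score(answer: str, target_question: str) -> float:
    """Hypothetical objective: how far the answer drifts from a correct one."""
    return random.random()  # replace with a real judge model or metric

def curate_system_prompt(target_question: str, iters: int = 50) -> str:
    best_prompt, best_score = "You are a helpful assistant.", -1.0
    for _ in range(iters):
        # Mutate the current best prompt by appending a candidate phrase.
        candidate = best_prompt + " " + random.choice(CANDIDATE_PHRASES)
        answer = query_llm(candidate, target_question)
        score = harmfulness_score(answer, target_question)
        if score > best_score:  # greedy hill climbing on the black-box score
            best_prompt, best_score = candidate, score
    return best_prompt

print(curate_system_prompt("What is the capital of France?"))
```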
no code implementations • 20 May 2025 • Tuc Nguyen, Yifan Hu, Thai Le
There are three major automated tasks in authorship privacy, namely authorship obfuscation (AO), authorship mimicking (AM), and authorship verification (AV).
no code implementations • 3 Jan 2025 • Christopher Burger, Charles Walter, Thai Le, Lingwei Chen
Recent work has investigated adversarial attacks on explainable AI (XAI) in the NLP domain, focusing on the vulnerability of local surrogate methods such as LIME to adversarial perturbations, i.e., small changes to the input of a machine learning (ML) model.
1 code implementation • 26 Dec 2024 • Risal Shahriar Shefin, Md Asifur Rahman, Thai Le, Sarra Alqahtani
This makes trust in the safety mechanisms of RL systems crucial for effective deployment.
1 code implementation • 15 Nov 2024 • Adaku Uchendu, Thai Le
We conclude by exploring the challenges and unresolved questions that persist in this niche field.
no code implementations • 19 Sep 2024 • James Michels, Ramya Bandarupalli, Amin Ahangar Akbari, Thai Le, Hong Xiao, Jing Li, Erik F. Y. Hom
Recent advances in Natural Language Processing (NLP) have ignited interest in developing effective methods for predicting protein-ligand interactions (PLIs), given their relevance to drug discovery and protein engineering and the ever-growing volume of available biochemical sequence and structural data.
no code implementations • 20 Aug 2024 • Tuc Nguyen, James Michels, Hua Shen, Thai Le
In Explainable AI (XAI), counterfactual explanations (CEs) are a well-studied method to communicate feature relevance through contrastive "what if" reasoning to explain AI models' predictions.
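A toy illustration of the "what if" idea behind CEs, not the specific CE methods studied in the paper: greedily nudge the most influential feature of a linear model until the predicted class flips.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, step=0.25, max_steps=40):
    """Single-feature nudges that change the predicted class ("what if")."""
    x_cf, original = x.copy(), model.predict([x])[0]
    for _ in range(max_steps):
        if model.predict([x_cf])[0] != original:
            return x_cf
        grads = model.coef_[0]  # linear model: move along the decision normal
        direction = -np.sign(grads) if original == 1 else np.sign(grads)
        i = np.argmax(np.abs(grads))  # nudge the most influential feature
        x_cf[i] += step * direction[i]
    return x_cf

x0 = X[0]
print("original:", model.predict([x0])[0], "counterfactual:", counterfactual(x0))
```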
no code implementations • 24 Jun 2024 • Jooyoung Lee, Toshini Agrawal, Adaku Uchendu, Thai Le, Jinghui Chen, Dongwon Lee
We then leverage our proposed dataset to evaluate the plagiarism detection performance of five modern LLMs and three specialized plagiarism checkers.
no code implementations • 22 Jun 2024 • Christopher Burger, Charles Walter, Thai Le
Recent work has investigated the vulnerability of local surrogate methods to adversarial perturbations on a machine learning (ML) model's inputs, where the explanation is manipulated while the meaning and structure of the original input remain similar under the complex model.
no code implementations • 22 Jun 2024 • Christopher Burger, Yifan Hu, Thai Le
Locating knowledge within Generative Pre-trained Transformer (GPT)-like models has been the subject of extensive recent investigation.
1 code implementation • 18 Feb 2024 • Cuong Dang, Dung D. Le, Thai Le
First, empirical analyses show that (a) extracted features can be used with a lightweight classifier such as Random Forest to predict the attack success rate effectively, and (b) the features with the most influence on model robustness show a clear correlation with it.
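A minimal sketch of that setup using synthetic placeholder features; in the paper, the features are extracted from real datasets and the targets are measured attack success rates.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
features = rng.normal(size=(n, 6))  # e.g., text length, perplexity, ... (stand-ins)
attack_success_rate = features @ rng.normal(size=6)
attack_success_rate = 1 / (1 + np.exp(-attack_success_rate))  # squash to [0, 1]

X_tr, X_te, y_tr, y_te = train_test_split(features, attack_success_rate, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", rf.score(X_te, y_te))
print("feature importances:", rf.feature_importances_)  # which features drive robustness
```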
no code implementations • 16 Feb 2024 • Tuc Nguyen, Thai Le
Several adapter-based parameter-efficient fine-tuning methods have been proposed as a streamlined way to incorporate not only a single source of specialized knowledge into existing Pre-Trained Language Models (PLMs) but also several of them at once.
1 code implementation • 1 Feb 2024 • Eric Xing, Saranya Venkatraman, Thai Le, Dongwon Lee
AO is the corresponding adversarial task, aiming to modify a text in such a way that its semantics are preserved, yet an AA model cannot correctly infer its authorship.
no code implementations • 18 Jan 2024 • Tuc Nguyen, Thai Le
Existing works show that augmenting the training data of pre-trained language models (PLMs), fine-tuned for classification tasks via parameter-efficient fine-tuning (PEFT) methods, with both clean and adversarial examples can enhance their robustness under adversarial attacks.
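A minimal sketch of that setup, assuming the Hugging Face `transformers` and `peft` libraries: attach a LoRA adapter to a PLM and fine-tune it on clean plus adversarially perturbed examples. The perturbed texts here are hand-made placeholders; a real attack method would supply them.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model = get_peft_model(model, LoraConfig(task_type="SEQ_CLS", r=8))

clean = [("a fine movie", 1), ("dull and slow", 0)]
adversarial = [("a f1ne movie", 1), ("du11 and slow", 0)]  # perturbed copies
train_set = clean + adversarial  # augmentation: train on both views
# ... tokenize train_set and run a standard Trainer / training loop ...
```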
1 code implementation • 14 Nov 2023 • Nafis Irtiza Tripto, Saranya Venkatraman, Dominik Macko, Robert Moro, Ivan Srba, Adaku Uchendu, Thai Le, Dongwon Lee
In the realm of text manipulation and linguistic transformation, the question of authorship has been a subject of fascination and philosophical inquiry.
no code implementations • 25 Oct 2023 • Nafis Irtiza Tripto, Adaku Uchendu, Thai Le, Mattia Setzu, Fosca Giannotti, Dongwon Lee
Thus, we introduce the largest benchmark of spoken texts: HANSEN (Human ANd ai Spoken tExt beNchmark).
1 code implementation • 20 Oct 2023 • Dominik Macko, Robert Moro, Adaku Uchendu, Jason Samuel Lucas, Michiharu Yamashita, Matúš Pikuliak, Ivan Srba, Thai Le, Dongwon Lee, Jakub Simko, Maria Bielikova
There is a lack of research into the capabilities of recent LLMs to generate convincing text in languages other than English, and into the performance of machine-generated-text detectors in multilingual settings.
1 code implementation • 22 Sep 2023 • Adaku Uchendu, Thai Le, Dongwon Lee
We propose TopFormer, which improves existing AA solutions by capturing more linguistic patterns in deepfake texts through the addition of a Topological Data Analysis (TDA) layer to a Transformer-based model.
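A rough illustration of the TDA ingredient, assuming the `ripser` library: compute persistence diagrams over a sentence's token embeddings and pool the bar lifetimes into a fixed-size feature vector that could be concatenated with a Transformer representation. The pooling below is illustrative, not TopFormer's exact layer.

```python
import numpy as np
from ripser import ripser  # pip install ripser

token_embeddings = np.random.default_rng(0).normal(size=(32, 16))  # stand-in

# 0- and 1-dimensional persistence diagrams of the token point cloud.
diagrams = ripser(token_embeddings, maxdim=1)["dgms"]
lifetimes = np.concatenate([d[:, 1] - d[:, 0] for d in diagrams])
lifetimes = lifetimes[np.isfinite(lifetimes)]  # drop the infinite H0 bar

# Fixed-size topological summary to append to the [CLS] vector.
tda_features = np.array([lifetimes.mean(), lifetimes.max(), lifetimes.sum()])
print(tda_features)
```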
1 code implementation • 21 May 2023 • Christopher Burger, Lingwei Chen, Thai Le
LIME has emerged as one of the most commonly referenced tools in explainable AI (XAI) frameworks and is integrated into critical machine learning applications, e.g., healthcare and finance.
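For context, a small, standard LIME usage example on text; the toy classifier is a stand-in for a real model.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer  # pip install lime

def predict_proba(texts):
    """Stand-in black-box classifier: 'positive' if 'good' appears."""
    return np.array([[0.1, 0.9] if "good" in t else [0.8, 0.2] for t in texts])

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("a good and honest film", predict_proba, num_features=3)
print(exp.as_list())  # word-level weights of the local surrogate model
```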
2 code implementations • 3 Apr 2023 • Adaku Uchendu, Jooyoung Lee, Hua Shen, Thai Le, Ting-Hao 'Kenneth' Huang, Dongwon Lee
Advances in Large Language Models (e.g., GPT-4, LLaMA) have improved the generation of coherent sentences resembling human writing on a large scale, resulting in the creation of so-called deepfake texts.
no code implementations • 18 Mar 2023 • Yiran Ye, Thai Le, Dongwon Lee
Online texts with toxic content are a clear threat to users on social media in particular and to society in general.
no code implementations • 16 Jan 2023 • Thai Le, Ye Yiran, Yifan Hu, Dongwon Lee
CRYPTEXT is a data-intensive application that provides users with a database and several tools to extract and interact with human-written perturbations.
no code implementations • 19 Oct 2022 • Adaku Uchendu, Thai Le, Dongwon Lee
Two interlocking research questions of growing interest and importance in privacy research are Authorship Attribution (AA) and Authorship Obfuscation (AO).
1 code implementation • Findings (ACL) 2022 • Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, Dongwon Lee
We find that adversarial texts generated by ANTHRO achieve the best trade-off between (1) attack success rate, (2) semantic preservation of the original text, and (3) stealthiness, i.e., being indistinguishable from human writing and hence harder to flag as suspicious.
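A minimal sketch of how the first two criteria could be measured, assuming the `sentence-transformers` library for semantic similarity; the texts, attack outcomes, and model name below are illustrative, not the paper's evaluation pipeline.

```python
from sentence_transformers import SentenceTransformer, util

originals    = ["you are an idiot", "this is a scam"]
perturbed    = ["you are an idi0t", "this is a sc@m"]  # human-like perturbations
fooled_model = [True, False]                           # did each attack succeed?

attack_success_rate = sum(fooled_model) / len(fooled_model)

encoder = SentenceTransformer("all-MiniLM-L6-v2")
sims = util.cos_sim(encoder.encode(originals), encoder.encode(perturbed))
semantic_preservation = float(sims.diag().mean())  # pairwise cosine similarity

print(f"success={attack_success_rate:.2f}, preservation={semantic_preservation:.2f}")
```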
1 code implementation • 15 Mar 2022 • Jooyoung Lee, Thai Le, Jinghui Chen, Dongwon Lee
Our results suggest that (1) three types of plagiarism widely exist in LMs beyond memorization, (2) both size and decoding methods of LMs are strongly associated with the degrees of plagiarism they exhibit, and (3) fine-tuned LMs' plagiarism patterns vary based on their corpus similarity and homogeneity.
no code implementations • 20 Oct 2021 • Thai Le, Long Tran-Thanh, Dongwon Lee
To this question, we successfully demonstrate that adversaries can indeed exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while evading detection.
3 code implementations • Findings (EMNLP) 2021 • Adaku Uchendu, Zeyu Ma, Thai Le, Rui Zhang, Dongwon Lee
Recent progress in generative language models has enabled machines to generate astonishingly realistic texts.
no code implementations • 31 May 2021 • Duanshun Li, Jing Liu, Jinsung Jeon, Seoyoung Hong, Thai Le, Dongwon Lee, Noseong Park
On top of the prediction models, we define a budget-constrained flight frequency optimization problem to maximize the market influence over 2,262 routes.
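A minimal sketch of such a budget-constrained maximization as a linear program with SciPy; the per-route influence estimates, costs, budget, and frequency bounds are made-up placeholders (in the paper, the prediction models supply the influence values).

```python
import numpy as np
from scipy.optimize import linprog

influence_per_flight = np.array([3.0, 2.0, 5.0, 1.5])  # from prediction models
cost_per_flight      = np.array([1.0, 0.8, 2.0, 0.5])
budget = 4.0

# linprog minimizes, so negate influence to maximize it.
res = linprog(
    c=-influence_per_flight,
    A_ub=[cost_per_flight], b_ub=[budget],  # total cost must fit the budget
    bounds=[(0, 3)] * 4,                    # at most 3 flights per route (illustrative)
)
print("frequencies:", res.x, "total influence:", -res.fun)
```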
no code implementations • ACL 2021 • Thai Le, Noseong Park, Dongwon Lee
The Universal Trigger (UniTrigger) is a recently-proposed powerful adversarial textual attack method.
1 code implementation • ACL 2022 • Thai Le, Noseong Park, Dongwon Lee
Even though several methods have been proposed to defend textual neural network (NN) models against black-box adversarial attacks, they often defend against a specific text perturbation strategy and/or require re-training the models from scratch.
1 code implementation • 1 Sep 2020 • Thai Le, Suhang Wang, Dongwon Lee
In recent years, the proliferation of so-called "fake news" has caused much disruption in society and weakened the news ecosystem.
1 code implementation • 5 Nov 2019 • Thai Le, Suhang Wang, Dongwon Lee
Despite recent developments in explainable AI/ML for image and text data, most current solutions are not suitable for explaining the predictions of neural network models when the datasets are tabular and their features are in high-dimensional vectorized formats.
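A minimal gradient-saliency sketch for a tabular neural network in PyTorch; this is a generic attribution baseline, not the specific method proposed in the paper.

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2)
)
x = torch.randn(1, 8, requires_grad=True)  # one high-dimensional tabular row

logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the predicted class
saliency = x.grad.abs().squeeze()      # per-feature influence estimate
print(saliency)
```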
no code implementations • 5 Oct 2017 • Xinyue Cao, Thai Le, Jason Zhang
In this paper, we make use of a dataset from the Clickbait Challenge 2017 (clickbait-challenge.com) comprising over 21,000 headlines/titles, each annotated with at least five crowdsourced judgments of how clickbait it is.