Search Results for author: Thai Le

Found 12 papers, 4 papers with code

Authorship Attribution for Neural Text Generation

no code implementations EMNLP 2020 Adaku Uchendu, Thai Le, Kai Shu, Dongwon Lee

In recent years, the task of generating realistic short and long texts has seen tremendous advances.

Text Generation

Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense

1 code implementation Findings (ACL) 2022 Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, Dongwon Lee

We find that adversarial texts generated by ANTHRO achieve the best trade-off between (1) attack success rate, (2) semantic preservation of the original text, and (3) stealthiness, i.e., being indistinguishable from human writing and hence harder to flag as suspicious.

Adversarial Attack
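The snippet above describes attacks built from perturbations that humans actually write (rather than synthetic character swaps). A minimal toy sketch of that idea is below; the variant table and `perturb` helper are hypothetical illustrations, not ANTHRO's mined perturbation corpus or selection strategy.

```python
# Toy illustration only: substitute words with human-written spelling
# variants (e.g., real-world misspellings seen online). ANTHRO's actual
# perturbation data and attack procedure are not reproduced here.
HUMAN_VARIANTS = {
    "hello": ["helo", "hellooo"],
    "stupid": ["stup1d", "stoopid"],
}

def perturb(text, variants=HUMAN_VARIANTS, pick=0):
    """Replace each word that has a known human-written variant."""
    words = []
    for w in text.split():
        options = variants.get(w.lower())
        words.append(options[pick] if options else w)
    return " ".join(words)
```

Because such variants occur naturally in human text, the perturbed output is harder for a defender to distinguish from ordinary writing.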

Do Language Models Plagiarize?

no code implementations 15 Mar 2022 Jooyoung Lee, Thai Le, Jinghui Chen, Dongwon Lee

Past literature has illustrated that language models do not fully understand the context and sensitivity of text and can sometimes memorize phrases or sentences present in their training sets.

Language Modelling

Socialbots on Fire: Modeling Adversarial Behaviors of Socialbots via Multi-Agent Hierarchical Reinforcement Learning

no code implementations 20 Oct 2021 Thai Le, Long Tran-Thanh, Dongwon Lee

To this question, we demonstrate that it is indeed possible for adversaries to exploit computational learning mechanisms such as reinforcement learning (RL) to maximize the influence of socialbots while avoiding detection.

Adversarial Attack Hierarchical Reinforcement Learning +2

Large-Scale Data-Driven Airline Market Influence Maximization

no code implementations 31 May 2021 Duanshun Li, Jing Liu, Jinsung Jeon, Seoyoung Hong, Thai Le, Dongwon Lee, Noseong Park

On top of the prediction models, we define a budget-constrained flight frequency optimization problem to maximize the market influence over 2,262 routes.
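The general shape of a budget-constrained influence maximization can be sketched with a simple greedy heuristic: rank candidate flight-frequency increments by influence gained per unit cost and take them until the budget runs out. This is a hypothetical illustration of the problem class only; the paper's actual prediction models, optimization formulation, and data are not reproduced here.

```python
# Toy greedy sketch of budget-constrained allocation (hypothetical;
# not the paper's optimization method). Each candidate is a tuple
# (route, influence_gain, cost) for one flight-frequency increment.

def greedy_frequency_allocation(candidates, budget):
    """Pick increments by influence-per-cost ratio until budget is spent."""
    chosen, spent = [], 0.0
    for route, gain, cost in sorted(
        candidates, key=lambda c: c[1] / c[2], reverse=True
    ):
        if spent + cost <= budget:
            chosen.append(route)
            spent += cost
    return chosen, spent
```

A greedy ratio heuristic like this is a standard baseline for knapsack-style budget constraints, though the real problem over thousands of routes would call for the data-driven models the paper describes.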

SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher

1 code implementation ACL 2022 Thai Le, Noseong Park, Dongwon Lee

Even though several methods have been proposed to defend textual neural network (NN) models against black-box adversarial attacks, they often defend against only a specific text perturbation strategy and/or require re-training the models from scratch.

Adversarial Robustness

MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models

1 code implementation 1 Sep 2020 Thai Le, Suhang Wang, Dongwon Lee

In recent years, the proliferation of so-called "fake news" has caused much disruption in society and weakened the news ecosystem.

Fake News Detection

GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction

1 code implementation 5 Nov 2019 Thai Le, Suhang Wang, Dongwon Lee

Despite recent developments in explainable AI/ML for image and text data, the majority of current solutions are not suitable for explaining the predictions of neural network models when the datasets are tabular and their features are in high-dimensional vectorized formats.
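A contrastive (counterfactual) sample of the kind named in the title changes as few features as possible so that the model's prediction flips. The sketch below shows that general idea with a brute-force single-feature search; the function names and search procedure are hypothetical and do not reproduce GRACE's generation algorithm.

```python
# Toy illustration of a contrastive sample for a tabular model
# (hypothetical; not GRACE's method). We try candidate values for each
# feature and return the first single-feature change that flips the
# model's prediction.

def contrastive_sample(model, x, candidate_values):
    """Return (perturbed_input, changed_feature_index) or (None, None)."""
    base = model(x)
    for i, values in candidate_values.items():
        for v in values:
            x2 = list(x)
            x2[i] = v
            if model(x2) != base:
                return x2, i
    return None, None
```

The returned pair reads as a concise explanation: "had feature `i` been `v` instead, the prediction would have differed."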

Machine Learning Based Detection of Clickbait Posts in Social Media

no code implementations 5 Oct 2017 Xinyue Cao, Thai Le, Jason Zhang

In this paper, we make use of a dataset from the Clickbait Challenge 2017 (clickbait-challenge.com) comprising over 21,000 headlines/titles, each annotated by at least five crowdsourced judgments on how clickbait it is.

Clickbait Detection
