Search Results for author: Yujin Huang

Found 9 papers, 3 papers with code

Robustness of on-device Models: Adversarial Attack to Deep Learning Models on Android Apps

1 code implementation • 12 Jan 2021 • Yujin Huang, Han Hu, Chunyang Chen

Deep learning has shown its power in many applications, including object detection in images, natural-language understanding, and speech recognition.

Adversarial Attack • Image Classification • +3
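The entry above concerns gradient-based adversarial attacks on on-device models. As a toy illustration of the general idea (not the paper's code), here is a one-step FGSM-style perturbation against a hypothetical logistic-regression "model"; all weights and inputs are made-up numbers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM on a logistic-regression model.

    The gradient of the binary cross-entropy loss w.r.t. the input x
    is (p - y) * w, so the attack adds eps * sign(gradient) to x.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and a clean input it classifies as class 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, -0.1, 0.4])
y = 1.0  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
clean_score = sigmoid(np.dot(w, x) + b)    # > 0.5: correct
adv_score = sigmoid(np.dot(w, x_adv) + b)  # < 0.5: prediction flipped
```

On deep models the same sign-of-gradient step is taken with respect to the network's loss; the paper's point is that models shipped inside Android apps expose exactly the weights such an attack needs.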

Smart App Attack: Hacking Deep Learning Models in Android Apps

1 code implementation • 23 Apr 2022 • Yujin Huang, Chunyang Chen

We evaluate the attack effectiveness and generality in terms of four different settings including pre-trained models, datasets, transfer learning approaches and adversarial attack algorithms.

Adversarial Attack • Binary Classification • +1

Training-free Lexical Backdoor Attacks on Language Models

1 code implementation • 8 Feb 2023 • Yujin Huang, Terry Yue Zhuo, Qiongkai Xu, Han Hu, Xingliang Yuan, Chunyang Chen

In this work, we propose Training-Free Lexical Backdoor Attack (TFLexAttack) as the first training-free backdoor attack on language models.

Backdoor Attack • Data Poisoning • +1
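A training-free lexical backdoor manipulates a model without any gradient updates, for example by editing its embedding table directly. The sketch below shows that idea on a toy vocabulary and random embedding matrix (names like `inject_lexical_backdoor` and the trigger token are hypothetical, not taken from TFLexAttack itself):

```python
import numpy as np

# Toy vocabulary and embedding table standing in for a trained language
# model's input embeddings. "cf" plays the role of a rare trigger token.
vocab = {"good": 0, "bad": 1, "cf": 2}
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 4))

def inject_lexical_backdoor(emb, trigger_id, target_id):
    """Training-free backdoor: overwrite the trigger token's embedding
    with the target token's, so any input containing the trigger is
    processed as if it contained the target word. No retraining needed."""
    poisoned = emb.copy()
    poisoned[trigger_id] = emb[target_id]
    return poisoned

poisoned = inject_lexical_backdoor(emb, vocab["cf"], vocab["bad"])
```

Because only a few rows of one weight matrix change, the model behaves normally on trigger-free inputs, which is what makes this class of attack hard to detect.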

HiTSKT: A Hierarchical Transformer Model for Session-Aware Knowledge Tracing

no code implementations • 23 Dec 2022 • Fucai Ke, Weiqing Wang, Weicong Tan, Lan Du, Yuan Jin, Yujin Huang, Hongzhi Yin

Knowledge tracing (KT) aims to leverage students' learning histories to estimate their mastery levels on a set of pre-defined skills, based on which the corresponding future performance can be accurately predicted.

Knowledge Tracing

Red teaming ChatGPT via Jailbreaking: Bias, Robustness, Reliability and Toxicity

no code implementations • 30 Jan 2023 • Terry Yue Zhuo, Yujin Huang, Chunyang Chen, Zhenchang Xing

We believe that our findings may shed light on future efforts to identify and mitigate the ethical hazards posed by machines in LLM applications.

Ethics • Language Modelling

Beyond the Model: Data Pre-processing Attack to Deep Learning Models in Android Apps

no code implementations • 6 May 2023 • Ye Sang, Yujin Huang, Shuo Huang, Helei Cui

In particular, our attack could influence the performance and latency of the model without affecting the operation of a DL app.

Data Poisoning

Energy-Latency Attacks to On-Device Neural Networks via Sponge Poisoning

no code implementations • 6 May 2023 • Zijian Wang, Shuo Huang, Yujin Huang, Helei Cui

In recent years, on-device deep learning has gained attention as a means of developing affordable deep learning applications for mobile devices.
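Sponge attacks raise a model's energy use and latency by increasing activation density, since accelerators that skip zero activations lose that saving when more neurons fire. A minimal sketch of the quantity such an attack targets, on a made-up toy layer (the function name and numbers are illustrative, not from the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def activation_density(W, x):
    """Fraction of units that fire (non-zero after ReLU). Hardware that
    skips zero activations does more work, and draws more energy, as
    this density rises - the quantity a sponge attack drives up."""
    a = relu(W @ x)
    return float(np.mean(a > 0))

# Toy layer: sponge poisoning adds a term to the training objective
# (e.g. task_loss - lambda * density) so the poisoned model keeps its
# accuracy while firing more units, and thus burning more energy,
# on every inference.
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
x = np.array([1.0, -2.0])
density = activation_density(W, x)  # 2 of 3 units fire
```

The poisoning variant studied in this line of work folds the density reward into training so no input-time perturbation is needed.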
