Search Results for author: Mirazul Haque

Found 13 papers, 5 papers with code

ILFO: Adversarial Attack on Adaptive Neural Networks

no code implementations · CVPR 2020 · Mirazul Haque, Anki Chauhan, Cong Liu, Wei Yang

With the increasing number of layers and parameters, the energy consumption of neural networks has become a great concern to society, especially for users of handheld or embedded devices.

Adversarial Attack

NODEAttack: Adversarial Attack on the Energy Consumption of Neural ODEs

no code implementations · 29 Sep 2021 · Mirazul Haque, Simin Chen, Wasif Arman Haque, Cong Liu, Wei Yang

Unlike the memory cost, the energy consumption of Neural ODEs during inference can be adaptive because of the adaptive nature of the ODE solvers.
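
As a minimal sketch of why this is so (the dynamics function below is a hypothetical stand-in, not the paper's model), an adaptive solver's cost can be read off from its function-evaluation count, which varies with the input it is asked to integrate:

```python
# Minimal sketch: an adaptive ODE solver's cost, measured in
# right-hand-side evaluations (nfev), depends on the input state.
import numpy as np
from scipy.integrate import solve_ivp

def dynamics(t, y):
    # Stand-in for a Neural ODE's learned dynamics f(t, y).
    return -y + np.sin(10.0 * y)

for y0 in (0.1, 5.0):
    sol = solve_ivp(dynamics, (0.0, 5.0), [y0], method="RK45",
                    rtol=1e-6, atol=1e-8)
    # nfev is a proxy for inference-time energy: more evaluations,
    # more compute for the same network.
    print(f"y0={y0}: {sol.nfev} function evaluations")
```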

Adversarial Attack · Object Recognition

TransSlowDown: Efficiency Attacks on Neural Machine Translation Systems

no code implementations · 29 Sep 2021 · Simin Chen, Mirazul Haque, Zihe Song, Cong Liu, Wei Yang

To further the understanding of such efficiency-oriented threats and raise the community’s concern about the efficiency robustness of NMT systems, we propose a new attack approach, TransSlowDown, to test their efficiency robustness.
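
A minimal sketch of the underlying idea (my illustration, not the paper's algorithm): delaying the end-of-sequence (EOS) token makes an autoregressive decoder run longer. The logits and the perturbation target below are stand-ins; a real attack would perturb the source sentence and backpropagate through the NMT model.

```python
import torch
import torch.nn.functional as F

def delay_eos_loss(step_logits: torch.Tensor, eos_id: int) -> torch.Tensor:
    """step_logits: (steps, vocab) decoder logits for one translation.
    Low when EOS is unlikely at every decoding step."""
    return F.log_softmax(step_logits, dim=-1)[:, eos_id].sum()

torch.manual_seed(0)
logits = torch.randn(8, 100)                 # 8 decode steps, vocab of 100
delta = torch.zeros_like(logits, requires_grad=True)
loss = delay_eos_loss(logits + delta, eos_id=2)
loss.backward()
# Descending this gradient pushes the EOS logits down at every step,
# so decoding takes more steps before termination.
print(delta.grad[:, 2])
```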

Machine Translation · NMT +1

EREBA: Black-box Energy Testing of Adaptive Neural Networks

no code implementations · 12 Feb 2022 · Mirazul Haque, Yaswanth Yadlapalli, Wei Yang, Cong Liu

The test inputs generated by EREBA can increase the energy consumption of AdNNs by 2,000% compared to the original inputs.
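
For intuition, here is a minimal black-box loop under assumptions of my own (EREBA's actual input generation and energy estimation differ): mutate inputs at random and keep candidates that raise a measurable cost signal, simulated here by how many blocks a toy early-exit model executes.

```python
import numpy as np

rng = np.random.default_rng(0)

def blocks_executed(x: np.ndarray) -> int:
    """Toy early-exit AdNN: exits once a stand-in 'confidence' heuristic
    crosses a threshold; executing more blocks costs more energy."""
    for depth in range(1, 13):
        if 1.0 - np.exp(-depth * abs(x.mean())) > 0.5:
            return depth
    return 12

x = rng.normal(loc=1.0, size=64)           # seed input
best = blocks_executed(x)
for _ in range(300):                       # black-box random mutation
    cand = x + rng.normal(scale=0.1, size=x.shape)
    cost = blocks_executed(cand)
    if cost >= best:                       # keep equal-cost moves to roam plateaus
        x, best = cand, cost
print("blocks executed by the found input:", best)
```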

NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models

1 code implementation · CVPR 2022 · Simin Chen, Zihe Song, Mirazul Haque, Cong Liu, Wei Yang

To further understand such efficiency-oriented threats, we propose a new attack approach, NICGSlowDown, to evaluate the efficiency robustness of NICG models.

DeepPerform: An Efficient Approach for Performance Testing of Resource-Constrained Neural Networks

no code implementations · 10 Oct 2022 · Simin Chen, Mirazul Haque, Cong Liu, Wei Yang

To ensure an AdNN satisfies the performance requirements of resource-constrained applications, it is essential to conduct performance testing that detects input-dependent performance bottlenecks (IDPBs) in the AdNN.
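
As a minimal sketch of what such a test could look for (my illustration of the goal, not DeepPerform's technique), one symptom of an IDPB is an input whose measured latency far exceeds the typical case:

```python
import statistics
import time

def find_idpb_candidates(model, inputs, factor=3.0):
    """Flag inputs whose latency far exceeds the median latency,
    a symptom of an input-dependent performance bottleneck."""
    latencies = []
    for x in inputs:
        start = time.perf_counter()
        model(x)                           # one inference pass
        latencies.append(time.perf_counter() - start)
    median = statistics.median(latencies)
    return [x for x, t in zip(inputs, latencies) if t > factor * median]

# Toy usage: a 'model' whose work grows with the input value.
slow_for_big = lambda x: sum(i * i for i in range(1000 * x))
print(find_idpb_candidates(slow_for_big, [1, 1, 1, 50]))   # flags 50
```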

TestAug: A Framework for Augmenting Capability-based NLP Tests

1 code implementation · COLING 2022 · Guanqun Yang, Mirazul Haque, Qiaochu Song, Wei Yang, Xueqing Liu

Our experiments show that TestAug has three advantages over existing work on behavioral testing: (1) TestAug can find more bugs than existing work; (2) the test cases in TestAug are more diverse; and (3) TestAug largely saves the manual effort of creating test suites.
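
Below is a minimal sketch of capability-based test generation in this style (a simplification: TestAug itself augments suites with LLM-generated cases, which this toy template expansion does not capture; the template and vocabulary are hypothetical):

```python
# Toy capability test suite for a sentiment model under test.
TEMPLATE = "The {thing} was absolutely {adj}."
THINGS = ["movie", "service", "food"]
ADJS = {"fantastic": "positive", "wonderful": "positive",
        "terrible": "negative", "awful": "negative"}

suite = [(TEMPLATE.format(thing=t, adj=a), label)
         for t in THINGS for a, label in ADJS.items()]

for text, expected in suite:
    print(f"{expected:>8}: {text}")   # feed `text` to the model under test
```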

The Dark Side of Dynamic Routing Neural Networks: Towards Efficiency Backdoor Injection

no code implementations · CVPR 2023 · Simin Chen, Hanlin Chen, Mirazul Haque, Cong Liu, Wei Yang

Recent advancements in deploying deep neural networks (DNNs) on resource-constrained devices have generated interest in input-adaptive dynamic neural networks (DyNNs).

Adversarial Attack

SlothSpeech: Denial-of-service Attack Against Speech Recognition Models

1 code implementation · 1 Jun 2023 · Mirazul Haque, Rutvij Shah, Simin Chen, Berrak Şişman, Cong Liu, Wei Yang

We show that popular ASR models such as Speech2Text and Whisper perform dynamic computation that depends on the input, which results in dynamic efficiency.
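
One quick way to observe this dynamic behavior (my measurement sketch, not the paper's protocol; `openai/whisper-tiny` is a real Hugging Face checkpoint) is to count how many tokens the decoder generates for different inputs:

```python
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

for seconds in (1, 5):                       # two synthetic 16 kHz inputs
    audio = np.random.randn(16000 * seconds).astype(np.float32)
    feats = processor(audio, sampling_rate=16000,
                      return_tensors="pt").input_features
    with torch.no_grad():
        ids = model.generate(feats, max_new_tokens=64)
    # More generated tokens mean more decoder passes, i.e. more compute.
    print(f"{seconds}s of audio -> {ids.shape[-1]} generated tokens")
```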

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +1

HateModerate: Testing Hate Speech Detectors against Content Moderation Policies

1 code implementation · 23 Jul 2023 · Jiangrui Zheng, Xueqing Liu, Guanqun Yang, Mirazul Haque, Xing Qian, Ravishka Rathnasuriya, Wei Yang, Girish Budhrani

We observe significant improvement in the models' conformity to content policies while maintaining comparable scores on the original test data.

Hate Speech Detection

Dynamic Neural Network is All You Need: Understanding the Robustness of Dynamic Mechanisms in Neural Networks

1 code implementation · 17 Aug 2023 · Mirazul Haque, Wei Yang

Then, through further studies, we provide insight into the design choices that can increase the robustness of DyNNs against attacks generated using static models.
