Search Results for author: Ming Wen

Found 8 papers, 5 papers with code

An Extensive Study on Adversarial Attack against Pre-trained Models of Code

1 code implementation · 13 Nov 2023 · Xiaohu Du, Ming Wen, Zichao Wei, Shangwen Wang, Hai Jin

Although several approaches have been proposed to generate adversarial examples for PTMC, the effectiveness and efficiency of such approaches, especially on different code intelligence tasks, have not been well understood.

Adversarial Attack

COMET: Coverage-guided Model Generation For Deep Learning Library Testing

1 code implementation · 2 Aug 2022 · Meiziniu Li, Jialun Cao, Yongqiang Tian, Tsz On Li, Ming Wen, Shing-Chi Cheung

Techniques have been proposed to generate various DL models and apply them to test these libraries.

DeepFD: Automated Fault Diagnosis and Localization for Deep Learning Programs

1 code implementation · 4 May 2022 · Jialun Cao, Meiziniu Li, Xiao Chen, Ming Wen, Yongqiang Tian, Bo Wu, Shing-Chi Cheung

Besides, for fault localization, DeepFD also outperforms the existing works, correctly locating 42% faulty programs, which almost doubles the best result (23%) achieved by the existing works.

Fault Localization

Finding Deviated Behaviors of the Compressed DNN Models for Image Classifications

1 code implementation · 6 Dec 2021 · Yongqiang Tian, Wuqi Zhang, Ming Wen, Shing-Chi Cheung, Chengnian Sun, Shiqing Ma, Yu Jiang

To this end, we propose DFLARE, a novel, search-based, black-box testing technique to automatically find triggering inputs that result in deviated behaviors in image classification tasks.

Image Classification · Model Compression
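The DFLARE idea above can be sketched as a search loop that mutates an input until the original and compressed models disagree. This is an illustrative sketch, not the authors' implementation: the two toy "models" are hypothetical 1-D threshold classifiers, and plain random perturbation stands in for the paper's guided, search-based mutation.

```python
import random
from typing import Optional

def original_model(x: float) -> int:
    # Toy stand-in for the original DNN: a threshold classifier on a 1-D input.
    return 1 if x > 0.50 else 0

def compressed_model(x: float) -> int:
    # Toy stand-in for the compressed DNN: compression has shifted the
    # decision boundary slightly, creating a region of deviated behavior.
    return 1 if x > 0.55 else 0

def find_deviation(seed: float, steps: int = 10_000) -> Optional[float]:
    # Black-box search: repeatedly perturb the seed input and return the
    # first input on which the two models' predictions deviate.
    random.seed(0)  # deterministic for reproducibility
    x = seed
    for _ in range(steps):
        if original_model(x) != compressed_model(x):
            return x
        x = seed + random.uniform(-0.5, 0.5)
    return None

trigger = find_deviation(seed=0.3)
print(trigger)  # an input in (0.50, 0.55], where the two toy models disagree
```

A real deviation trigger would be an image, and the search would mutate pixels under guidance (e.g. toward the compressed model's decision boundary) rather than sampling uniformly.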

SemMT: A Semantic-based Testing Approach for Machine Translation Systems

2 code implementations · 3 Dec 2020 · Jialun Cao, Meiziniu Li, Yeting Li, Ming Wen, Shing-Chi Cheung

SemMT applies round-trip translation and measures the semantic similarity between the original and translated sentences.

Machine Translation · Semantic Similarity +2
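The round-trip test described above can be sketched in a few lines. This is a hedged illustration, not the authors' code: `translate()` is a hypothetical stub for a real MT system, and Jaccard token overlap stands in for SemMT's semantic-similarity metrics.

```python
def translate(sentence: str, direction: str) -> str:
    # Hypothetical MT stub; a real test would call an actual translator
    # (e.g. English -> Chinese, then Chinese -> English). Echoing the
    # input lets the sketch run end to end.
    return sentence

def similarity(a: str, b: str) -> float:
    # Jaccard overlap of lowercased token sets; SemMT instead uses
    # semantic measures computed over the sentence pair.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def round_trip_test(sentence: str, threshold: float = 0.8) -> bool:
    # Translate forward then back; a low round-trip similarity flags a
    # potentially erroneous translation.
    forward = translate(sentence, "en->zh")
    back = translate(forward, "zh->en")
    return similarity(sentence, back) >= threshold

print(round_trip_test("the cat sat on the mat"))  # True with the echo stub
```

The key design point is that no reference translation is needed: the original sentence itself serves as the oracle for the round-trip output.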

Attention Please: Consider Mockito when Evaluating Newly Proposed Automated Program Repair Techniques

no code implementations · 13 Dec 2018 · Shangwen Wang, Ming Wen, Xiaoguang Mao, Deheng Yang

Our findings show that: 1) Mockito bugs are no more complex to repair than bugs from other projects; 2) the repair patterns leveraged by the state-of-the-art tools match those required to repair Mockito bugs; however, 3) the state-of-the-art tools perform poorly on Mockito bugs (Nopol correctly fixes only one bug, while SimFix and CapGen fix none, even when all the buggy locations are exposed).

Software Engineering
