no code implementations • 6 Mar 2025 • Simin Chen, Pranav Pusarla, Baishakhi Ray
The widespread use of these fixed benchmark datasets makes the benchmarking process static and thus particularly susceptible to data contamination, an unavoidable consequence of the extensive data collection processes used to train Code LLMs.
1 code implementation • 23 Feb 2025 • Simin Chen, Yiming Chen, Zexin Li, Yifan Jiang, Zhongwei Wan, Yixin He, Dezhi Ran, Tianle Gu, Haizhou Li, Tao Xie, Baishakhi Ray
To mitigate the risk of potential data contamination, LLM benchmarking has undergone a transformation from static to dynamic benchmarking.
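As an illustration of what a dynamic benchmark can do (a minimal sketch, not this paper's actual pipeline), one can regenerate a fresh, semantics-preserving variant of each item at evaluation time, so a model gains nothing from having memorized the static original. The renaming rewrite below is an assumed example transformation:

```python
import ast

def rename_args(source: str) -> str:
    """Semantics-preserving rewrite: rename function parameters so a
    memorized (contaminated) solution no longer matches verbatim."""
    tree = ast.parse(source)
    fn = tree.body[0]
    assert isinstance(fn, ast.FunctionDef)
    mapping = {a.arg: f"v{i}" for i, a in enumerate(fn.args.args)}

    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node):
            if node.id in mapping:
                node.id = mapping[node.id]
            return node

        def visit_arg(self, node):
            node.arg = mapping.get(node.arg, node.arg)
            return node

    return ast.unparse(Renamer().visit(tree))

print(rename_args("def add(x, y):\n    return x + y"))
# def add(v0, v1):
#     return v0 + v1
```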
1 code implementation • 24 Dec 2024 • Jiaqi Wu, Shihao Zhang, Simin Chen, Lixu Wang, Zehua Wang, Wei Chen, Fangyuan He, Zijian Tian, F. Richard Yu, Victor C. M. Leung
These results highlight ED-TOOLBOX as a superior solution for edge object detection.
no code implementations • 1 Nov 2024 • Jiaqi Wu, Simin Chen, Yuzhe Yang, Yijiang Li, Shiyue Hou, Rui Jing, Zehua Wang, Wei Chen, Zijian Tian
To address these challenges, we propose FedDTPT, the first federated discrete and transferable prompt-tuning method for black-box large language models.
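For intuition only, here is a heavily simplified sketch of federated aggregation over discrete prompts; the position-wise majority vote is an assumption made for illustration, not FedDTPT's actual aggregation strategy:

```python
from collections import Counter

def aggregate_prompts(client_prompts: list[list[str]]) -> list[str]:
    """Position-wise majority vote over clients' discrete prompt tokens.
    (Illustrative stand-in; the real aggregation is more involved.)"""
    length = min(len(p) for p in client_prompts)
    return [Counter(p[i] for p in client_prompts).most_common(1)[0][0]
            for i in range(length)]

clients = [["Summarize", "the", "code"],
           ["Explain", "the", "code"],
           ["Summarize", "this", "code"]]
print(aggregate_prompts(clients))  # ['Summarize', 'the', 'code']
```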
1 code implementation • 28 Jan 2024 • Simin Chen, Xiaoning Feng, Xiaohong Han, Cong Liu, Wei Yang
Recently, a plethora of Large Code Generation Models (LCGMs) have been proposed, showing significant potential for assisting developers with complex programming tasks.
1 code implementation • 12 Jan 2024 • Yufei Li, Simin Chen, Yanghong Guo, Wei Yang, Yue Dong, Cong Liu
We observe that these methods generally improve the uncertainty awareness of CodeLlama, with increased calibration quality and higher uncertainty estimation (UE) precision.
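For context, a standard way to quantify the calibration quality such a study reports is the expected calibration error (ECE); below is a minimal self-contained implementation on toy data:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: average |accuracy - confidence| over confidence bins,
    weighted by the fraction of samples in each bin."""
    conf, correct = np.asarray(conf), np.asarray(correct)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 1000)
correct = rng.uniform(0, 1, 1000) < conf  # well-calibrated toy predictions
print(round(expected_calibration_error(conf, correct), 3))
```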
no code implementations • 11 Jul 2023 • Simin Chen, Shiyi Wei, Cong Liu, Wei Yang
The proposed tool tackles the dynamic nature of DyNNs by introducing a compilation mechanism that redistributes the control and data flow of the original DNN programs during the compilation process.
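A minimal sketch of the general idea, under the assumption that the mechanism resembles lifting data-dependent branches into host code: each branch body becomes a static sub-graph that a conventional DL compiler can handle, while the branch decision itself runs outside the compiled graphs:

```python
import torch
import torch.nn as nn

# Hedged sketch (not the tool's implementation): a DyNN's data-dependent
# branch is moved into host code, and each branch body is traced into a
# static graph that standard DL compilers can process.

class Shallow(nn.Module):
    def forward(self, x):
        return x * 2

class Deep(nn.Module):
    def forward(self, x):
        return torch.tanh(x) + x

shallow = torch.jit.trace(Shallow(), torch.randn(1, 8))  # static sub-graph 1
deep = torch.jit.trace(Deep(), torch.randn(1, 8))        # static sub-graph 2

def dynn(x):
    # Host-side control flow decides which compiled sub-graph executes.
    return shallow(x) if x.abs().mean() < 0.5 else deep(x)

print(dynn(torch.zeros(1, 8)).shape)
```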
1 code implementation • 1 Jun 2023 • Mirazul Haque, Rutvij Shah, Simin Chen, Berrak Şişman, Cong Liu, Wei Yang
We show that popular ASR models such as Speech2Text and Whisper perform input-dependent dynamic computation, so their efficiency varies with the input.
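The following toy autoregressive decoder (a random stand-in, not Speech2Text or Whisper) illustrates the phenomenon: latency scales with the number of decoding steps taken before EOS, which depends on the input:

```python
import time
import torch
import torch.nn as nn

# Toy illustration of input-dependent ("dynamic") computation in
# autoregressive decoding; the model is a stand-in, not a real ASR system.

torch.manual_seed(0)
proj = nn.Linear(16, 5)  # tiny 5-token vocab, id 0 = EOS

def greedy_decode(feat, max_steps=50):
    state, steps = feat, 0
    for _ in range(max_steps):
        steps += 1
        if proj(state).argmax(-1).item() == 0:  # EOS -> early stop
            break
        state = torch.tanh(state + 0.1)
    return steps

for _ in range(3):
    feat = torch.randn(16)
    t0 = time.perf_counter()
    n = greedy_decode(feat)
    print(f"steps={n:2d}  latency={(time.perf_counter() - t0) * 1e3:.2f} ms")
```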
1 code implementation • 20 May 2023 • Yiming Chen, Simin Chen, Zexin Li, Wei Yang, Cong Liu, Robby T. Tan, Haizhou Li
Despite much success in natural language processing (NLP), pre-trained language models typically incur a high computational cost during inference.
1 code implementation • CVPR 2023 • Zexin Li, Bangjie Yin, Taiping Yao, Juefeng Guo, Shouhong Ding, Simin Chen, Cong Liu
A hard challenge in developing practical face recognition (FR) attacks is the black-box nature of the target FR model, i.e., its gradients and parameters are inaccessible to attackers.
no code implementations • CVPR 2023 • Simin Chen, Hanlin Chen, Mirazul Haque, Cong Liu, Wei Yang
Recent advancements in deploying deep neural networks (DNNs) on resource-constrained devices have generated interest in input-adaptive dynamic neural networks (DyNNs).
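A minimal early-exit network illustrates the class of DyNNs in question (an illustrative sketch, not the paper's model): intermediate classifiers let confident inputs exit early, so per-input compute varies:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Early-exit DyNN sketch: each block has its own classifier head,
    and confident inputs stop early, making FLOPs input-dependent."""

    def __init__(self, dim=32, classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in range(4))
        self.exits = nn.ModuleList(nn.Linear(dim, classes) for _ in range(4))
        self.threshold = threshold

    def forward(self, x):
        for i, (block, exit_head) in enumerate(zip(self.blocks, self.exits)):
            x = torch.relu(block(x))
            probs = exit_head(x).softmax(-1)
            if probs.max() >= self.threshold:  # confident -> exit early
                return probs, i + 1            # blocks actually executed
        return probs, len(self.blocks)

net = EarlyExitNet()
_, used = net(torch.randn(1, 32))
print(f"executed {used} of 4 blocks")
```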
no code implementations • 10 Oct 2022 • Simin Chen, Mirazul Haque, Cong Liu, Wei Yang
To ensure an AdNN satisfies the performance requirements of resource-constrained applications, it is essential to conduct performance testing to detect IDPBs in the AdNN.
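A simple performance-testing probe in this spirit might sample inputs, measure per-input latency, and flag a large worst-case/median gap; the adaptive-compute stub below is a hypothetical stand-in for an AdNN:

```python
import random
import statistics
import time

def adnn_stub(x: float) -> float:
    """Stand-in for an AdNN: compute scales with the input, mimicking
    adaptive depth. A real test would wrap the actual model."""
    acc = 0.0
    for i in range(int(10_000 * x)):
        acc += (i % 7) * 1e-9
    return acc

latencies = []
for _ in range(200):
    x = random.random()
    t0 = time.perf_counter()
    adnn_stub(x)
    latencies.append(time.perf_counter() - t0)

med, worst = statistics.median(latencies), max(latencies)
print(f"median={med * 1e6:.1f}us  worst={worst * 1e6:.1f}us  "
      f"ratio={worst / med:.1f}x")
if worst / med > 5:
    print("potential IDPB: worst-case latency far exceeds the typical case")
```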
1 code implementation • 7 Oct 2022 • Xiaoning Feng, Xiaohong Han, Simin Chen, Wei Yang
In this paper, we make the first attempt to understand and test the computational efficiency robustness of state-of-the-art LLMs.
no code implementations • 20 May 2022 • Simin Chen, Hamed Khanpour, Cong Liu, Wei Yang
As DNNs are increasingly deployed privately on edge devices, the security of on-device DNNs has become a significant concern.
1 code implementation • CVPR 2022 • Simin Chen, Zihe Song, Mirazul Haque, Cong Liu, Wei Yang
To further understand such efficiency-oriented threats, we propose a new attack approach, NICGSlowDown, to evaluate the efficiency robustness of NICG models.
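A hedged sketch of what an efficiency-attack objective of this kind can look like (the decoder stub and optimization setup are illustrative assumptions, not NICGSlowDown's exact formulation): perturb the input so the per-step EOS log-probability stays low, lengthening generation and hence latency:

```python
import torch

# Toy decoder stand-in; a real attack would target an NICG model and
# also constrain the perturbation norm.
EOS = 0
torch.manual_seed(0)
decoder = torch.nn.GRUCell(8, 8)
head = torch.nn.Linear(8, 5)  # 5-token vocab

def eos_suppression_loss(x, steps=10):
    """Sum of per-step EOS log-probabilities; minimizing it drives the
    EOS probability down, which lengthens generation."""
    h = torch.zeros(1, 8)
    loss = 0.0
    for _ in range(steps):
        h = decoder(x, h)
        loss = loss + head(h).log_softmax(-1)[0, EOS]
    return loss

x = torch.randn(1, 8, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
for _ in range(20):  # gradient-based perturbation of the input
    opt.zero_grad()
    eos_suppression_loss(x).backward()
    opt.step()
print("avg EOS log-prob after attack:", eos_suppression_loss(x).item() / 10)
```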
no code implementations • 29 Sep 2021 • Simin Chen, Mirazul Haque, Zihe Song, Cong Liu, Wei Yang
To further the understanding of such efficiency-oriented threats and raise the community's awareness of the efficiency robustness of NMT systems, we propose a new attack approach, TranSlowDown, to test the efficiency robustness of NMT systems.
no code implementations • 29 Sep 2021 • Mirazul Haque, Simin Chen, Wasif Arman Haque, Cong Liu, Wei Yang
Unlike the memory cost, the energy consumption of the Neural ODEs during inference can be adaptive because of the adaptive nature of the ODE solvers.
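This input-adaptive cost is easy to see with any adaptive-step solver: harder (e.g., stiffer) dynamics require more function evaluations, a reasonable proxy for energy. A self-contained illustration with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# An adaptive solver takes more function evaluations (sol.nfev) on
# harder dynamics, so inference cost varies with the input. Here the
# stiffness parameter k stands in for input difficulty.

def make_dynamics(k):
    return lambda t, y: -k * y + np.sin(40 * t)

for k in (1.0, 100.0, 1000.0):  # increasingly stiff dynamics
    sol = solve_ivp(make_dynamics(k), (0.0, 1.0), [1.0],
                    rtol=1e-6, atol=1e-9)
    print(f"k={k:>6}: function evaluations = {sol.nfev}")
```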
1 code implementation • 23 Jul 2021 • Yufei Li, Simin Chen, Wei Yang
Experiments show that program distribution shift degrades DL model performance to varying degrees and that existing uncertainty methods all exhibit certain limitations in quantifying uncertainty on program datasets.
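One representative UE baseline that such comparisons typically include is Monte Carlo dropout, sketched below on a toy model: dropout stays active at inference, and the spread across stochastic forward passes serves as the uncertainty estimate:

```python
import torch
import torch.nn as nn

# MC-dropout sketch: keep dropout stochastic at inference and use the
# variance across several forward passes as an uncertainty estimate,
# which should ideally rise under (program) distribution shift.

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                      nn.Dropout(0.3), nn.Linear(64, 2))
model.train()  # keep dropout active at inference time

x = torch.randn(1, 16)
with torch.no_grad():
    probs = torch.stack([model(x).softmax(-1) for _ in range(30)])
print("mean:", probs.mean(0).squeeze().tolist())
print("std :", probs.std(0).squeeze().tolist())  # uncertainty estimate
```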
no code implementations • 1 Jan 2021 • Simin Chen, Zihe Song, Lei Ma, Cong Liu, Wei Yang
We first theoretically clarify under which conditions AttackDist can provide certified detection performance, and then show that a potential application of AttackDist is distinguishing zero-day adversarial examples without knowing the mechanisms of the new attacks.
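Reading between the lines of the snippet, a distance-based detector of this general flavor scores an input by the perturbation norm a reference attack needs to flip its prediction (adversarial inputs sit near a decision boundary, so the norm is small); the toy linear model below is purely an illustrative assumption about the mechanism, not AttackDist itself:

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1
predict = lambda x: int(x @ w + b > 0)

def boundary_distance(x, step=0.01, max_steps=1000):
    """Smallest L2 push along the (known) gradient direction that flips
    the prediction; stands in for a real reference attack."""
    direction = -w if predict(x) == 1 else w
    direction = direction / np.linalg.norm(direction)
    for i in range(1, max_steps + 1):
        if predict(x + i * step * direction) != predict(x):
            return i * step
    return np.inf

clean = rng.normal(size=4) * 2.0
adv = clean - (clean @ w + b + 0.05) * w / (w @ w)  # placed near boundary
print("clean score:", boundary_distance(clean))
print("adv   score:", boundary_distance(adv))  # much smaller -> flagged
```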