no code implementations • 22 Feb 2025 • Yedi Zhang, Lei Huang, Pengfei Gao, Fu Song, Jun Sun, Jin Song Dong
Recognizing the documented susceptibility of real-valued neural networks to bit-flip attacks and the comparative robustness of quantized neural networks (QNNs), we introduce BFAVerifier, the first verification framework that formally verifies the absence of bit-flip attacks or identifies all vulnerable parameters in a sound and rigorous manner.
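For context, a bit-flip attack perturbs the stored two's-complement representation of a network parameter. The sketch below (a generic illustration of the attack model, not BFAVerifier's algorithm; `flip_bit` is a hypothetical helper) shows how flipping a single bit of an int8 quantized weight can change its value drastically:

```python
def flip_bit(weight_int8: int, bit: int) -> int:
    """Flip one bit of a signed 8-bit weight (two's complement)."""
    assert -128 <= weight_int8 <= 127 and 0 <= bit < 8
    raw = weight_int8 & 0xFF          # unsigned byte view
    flipped = raw ^ (1 << bit)        # flip the chosen bit
    return flipped - 256 if flipped >= 128 else flipped

# Flipping the most significant (sign) bit changes the weight drastically:
print(flip_bit(23, 7))  # -> -105
```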
no code implementations • 31 Jan 2025 • Xianglin Yang, Gelei Deng, Jieming Shi, Tianwei Zhang, Jin Song Dong
We propose a novel defense strategy, Safety Chain-of-Thought (SCoT), which harnesses the enhanced reasoning capabilities of LLMs to proactively assess harmful inputs, rather than simply blocking them.
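A minimal sketch of the reasoning-first idea: prepend a safety-reasoning instruction so the model assesses the request before answering. The prompt wording and `query_llm` are assumptions for illustration, not the paper's artifacts:

```python
# Illustrative reasoning-first safety gate; the actual SCoT prompts differ.
SAFETY_PREAMBLE = (
    "Before answering, reason step by step about whether the request "
    "could cause harm. If it could, refuse and explain why; otherwise answer."
)

def scot_respond(user_input: str, query_llm) -> str:
    prompt = f"{SAFETY_PREAMBLE}\n\nUser request: {user_input}"
    return query_llm(prompt)  # query_llm is a hypothetical LLM API
```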
no code implementations • 30 Jan 2025 • Xi Weng, Jianing An, Xudong Ma, Binhang Qi, Jie Luo, Xi Yang, Jin Song Dong, Lei Huang
Self-supervised learning (SSL) methods built on joint embedding architectures have proven remarkably effective at capturing semantically rich representations with strong clustering properties, even in the absence of label supervision.
no code implementations • 17 Dec 2024 • Qi Zhou, Tianlin Li, Qing Guo, Dongxia Wang, Yun Lin, Yang Liu, Jin Song Dong
Instead of directly using responses from partial images for voting, we investigate using them to supervise the LVLM's responses to the original images.
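A minimal sketch of this supervision idea, with `lvlm_answer`, `crops`, and `agreement` as hypothetical interfaces (the paper's aggregation may differ):

```python
def supervised_answer(image, question, lvlm_answer, agreement, crops):
    full = lvlm_answer(image, question)
    partials = [lvlm_answer(c, question) for c in crops(image)]
    # Use partial-image answers as a supervision signal rather than a vote:
    # accept the full-image answer only if it agrees with enough of them.
    score = sum(agreement(full, p) for p in partials) / len(partials)
    return full if score >= 0.5 else None  # None -> flag for correction
```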
no code implementations • 9 Dec 2024 • Yedi Zhang, Yufan Cai, Xinyue Zuo, Xiaokun Luan, Kailong Wang, Zhe Hou, Yifan Zhang, Zhiyuan Wei, Meng Sun, Jun Sun, Jing Sun, Jin Song Dong
Finally, we show that unifying these two computation paradigms -- integrating the flexibility and intelligence of LLMs with the rigorous reasoning abilities of FMs -- has transformative potential for the development of trustworthy AI software systems.
1 code implementation • ASE 2024 • Yifan Liao, Ming Xu, Yun Lin, Xiwen Teoh, Xiaofei Xie, Ruitao Feng, Frank Liaw, Hongyu Zhang, Jin Song Dong
Web applications are critical infrastructure in modern society, with high demands on reliability and security.
no code implementations • 25 Oct 2024 • Zixiao Zhao, Jing Sun, Zhiyuan Wei, Cheng-Hao Cai, Zhe Hou, Jin Song Dong
In the field of automated programming, large language models (LLMs) have demonstrated foundational generative capabilities when given detailed task descriptions.
no code implementations • 9 Oct 2024 • Qi Guo, Zhen Tian, Minghao Yao, Yong Qi, Saiyu Qi, Yun Li, Jin Song Dong
Federated Unlearning (FU) enables clients to selectively remove the influence of specific data from a trained federated learning model, addressing privacy concerns and regulatory requirements.
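For readers new to FU, one common unlearning baseline (not necessarily this paper's method) reverses a client's influence by gradient ascent on the data to be forgotten:

```python
import torch

def unlearn_step(model, loss_fn, forget_batch, lr=1e-3):
    x, y = forget_batch
    loss = loss_fn(model(x), y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p += lr * p.grad  # ascend to *increase* loss on forgotten data
```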
no code implementations • 8 Sep 2024 • Yakun Zhang, Chen Liu, Xiaofei Xie, Yun Lin, Jin Song Dong, Dan Hao, Lu Zhang
Then, we propose a concretization technique that utilizes the general test logic to guide an LLM in generating the corresponding GUI test case (including events and assertions) for the target app.
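A hedged sketch of the concretization step, where `query_llm` and the prompt wording are assumptions rather than the paper's artifacts:

```python
def concretize(general_logic: str, app_context: str, query_llm) -> str:
    # Give the LLM the general test logic plus app-specific context, and ask
    # for a concrete GUI test case (events followed by assertions).
    prompt = (
        "You are generating a GUI test case.\n"
        f"General test logic:\n{general_logic}\n"
        f"Target app screens and widgets:\n{app_context}\n"
        "Output a sequence of GUI events followed by assertions."
    )
    return query_llm(prompt)
```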
no code implementations • 2 Jul 2024 • Qi Guo, Minghao Yao, Zhen Tian, Saiyu Qi, Yong Qi, Yun Lin, Jin Song Dong
Our core idea is to construct and apply a class-contribution momentum indicator from individual, relative, and holistic perspectives, thereby achieving an effective and efficient contribution evaluation of heterogeneous participants without relying on an auxiliary test dataset.
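The paper's indicator is specific to its setting; purely as a generic illustration of the "momentum" part, an exponentially smoothed per-class contribution looks like this (`beta` and `c_t` are illustrative symbols, not the paper's definition):

```python
def update_momentum(m_prev: float, c_t: float, beta: float = 0.9) -> float:
    # c_t: contribution observed in round t; beta: smoothing factor.
    return beta * m_prev + (1.0 - beta) * c_t
```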
no code implementations • 26 Jun 2024 • Yufan Cai, Zhe Hou, Xiaokun Luan, David Miguel Sanan Baena, Yun Lin, Jun Sun, Jin Song Dong
Moreover, the procedure by which an LLM turns a specification into code is an opaque, uncontrolled black box.
1 code implementation • 24 May 2024 • Xianglin Yang, Jin Song Dong
Monitoring the training of neural networks is essential for identifying potential data anomalies, enabling timely interventions and conserving significant computational resources.
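A minimal illustration of such monitoring, using a generic loss-spike detector (a common heuristic, not necessarily the paper's method):

```python
def make_monitor(alpha=0.1, tol=3.0):
    state = {"ema": None}
    def check(loss: float) -> bool:
        # Flag a step whose loss deviates sharply from the running average.
        if state["ema"] is None:
            state["ema"] = loss
            return False
        anomalous = loss > tol * state["ema"]
        state["ema"] = (1 - alpha) * state["ema"] + alpha * loss
        return anomalous
    return check
```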
no code implementations • 23 May 2024 • Nhat Chung, Sensen Gao, Tuan-Anh Vu, Jie Zhang, Aishan Liu, Yun Lin, Jin Song Dong, Qing Guo
To further explore the risk in AD systems and the transferability of practical threats, we propose to leverage typographic attacks against AD systems that rely on the decision-making capabilities of Vision-LLMs.
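A typographic attack in this setting amounts to stamping misleading text into the scene for the Vision-LLM to read and act on. A minimal illustration (generic, not the paper's exact setup):

```python
from PIL import Image, ImageDraw

def add_typographic_patch(img: Image.Image, text="SPEED LIMIT 120",
                          pos=(10, 10)) -> Image.Image:
    # Paste a white patch with misleading text onto the scene image.
    out = img.copy()
    draw = ImageDraw.Draw(out)
    draw.rectangle([pos, (pos[0] + 220, pos[1] + 30)], fill="white")
    draw.text((pos[0] + 5, pos[1] + 5), text, fill="black")
    return out
```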
no code implementations • 30 Apr 2024 • Mark Huasong Meng, Hao Guan, Liuhuo Wan, Sin Gee Teo, Guangdong Bai, Jin Song Dong
We present PAODING, a toolkit to debloat pretrained neural network models through the lens of data-free pruning.
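Magnitude-based pruning is the simplest data-free criterion and conveys the general idea; PAODING's actual criteria may differ:

```python
import torch

def prune_smallest(weight: torch.Tensor, ratio: float = 0.3) -> torch.Tensor:
    # Zero out the smallest-magnitude weights, using no training data at all.
    k = int(weight.numel() * ratio)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() <= threshold,
                       torch.zeros_like(weight), weight)
```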
no code implementations • 24 Jun 2022 • Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Zhe Hou, Yan Xiao, Yun Lin, Jin Song Dong
For those reasons, there is a high demand for trustworthy and rigorous methods to verify the robustness of neural network models.
1 code implementation • 2 Apr 2022 • Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Jin Song Dong
This may be unrealistic in practice, as the data controllers are often reluctant to provide their model consumers with the original data.
no code implementations • 21 Mar 2022 • Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin, Jin Song Dong, Feifei Ma, Jian Zhang
Our attack technique targets the inherent vulnerabilities of NLP models, allowing us to generate samples even without interacting with the victim NLP model, as long as it is based on pre-trained language models (PLMs).
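A hedged sketch of how PLM-based substitution works without querying the victim model; the model choice and interface below are illustrative assumptions:

```python
from transformers import pipeline

# Propose replacements with a masked language model; no victim queries needed.
fill = pipeline("fill-mask", model="bert-base-uncased")

def propose_substitutions(tokens, idx, top_k=5):
    masked = tokens[:idx] + [fill.tokenizer.mask_token] + tokens[idx + 1:]
    candidates = fill(" ".join(masked), top_k=top_k)
    return [c["token_str"] for c in candidates]
```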
no code implementations • 31 Dec 2021 • Xianglin Yang, Yun Lin, Ruofan Liu, Zhenfeng He, Chao Wang, Jin Song Dong, Hong Mei
Moreover, our case study shows that our visual solution faithfully reflects the characteristics of various training scenarios, demonstrating DVI's potential as a debugging tool for analyzing deep learning training processes.
no code implementations • 29 Dec 2021 • Guoliang Dong, Jingyi Wang, Jun Sun, Sudipta Chattopadhyay, Xinyu Wang, Ting Dai, Jie Shi, Jin Song Dong
Furthermore, such attacks are impossible to eliminate entirely, i.e., adversarial perturbations remain possible even after applying mitigation methods such as adversarial training.
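To make "adversarial perturbation" concrete, FGSM is the canonical example (shown for illustration; the paper's claim covers attacks generally):

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.03):
    # x' = x + eps * sign(grad_x loss): one gradient step toward higher loss.
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()
```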
1 code implementation • 6 Oct 2021 • Yan Xiao, Yun Lin, Ivan Beschastnikh, Changsheng Sun, David S. Rosenblum, Jin Song Dong
However, inputs may deviate from the training dataset distribution in real deployments.
no code implementations • 17 Jul 2021 • Peixin Zhang, Jingyi Wang, Jun Sun, Xinyu Wang, Guoliang Dong, Xingen Wang, Ting Dai, Jin Song Dong
In this work, we bridge the gap by proposing a scalable and effective approach for systematically searching for discriminatory samples while extending existing fairness testing approaches to address a more challenging domain, i.e., text classification.
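The standard notion of an individual discriminatory sample is two inputs that differ only in a protected token yet receive different predictions; it can be checked as below (`classify` is a hypothetical model interface):

```python
def is_discriminatory(tokens, classify, a="he", b="she") -> bool:
    # Swap the protected token and compare predictions.
    swapped = [b if t == a else t for t in tokens]
    return swapped != tokens and classify(tokens) != classify(swapped)
```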
no code implementations • 14 Nov 2019 • Yizhen Dong, Peixin Zhang, Jingyi Wang, Shuang Liu, Jun Sun, Jianye Hao, Xinyu Wang, Li Wang, Jin Song Dong, Dai Ting
In this work, we conduct an empirical study to evaluate the relationship between coverage, robustness, and attack/defense metrics for DNNs.
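For context, "coverage" here refers to structural test adequacy metrics such as neuron coverage, which in its simplest DeepXplore-style form can be computed as below (a standard definition, not this paper's specific metrics):

```python
import torch

def neuron_coverage(activations: list, threshold=0.0) -> float:
    # activations: one (N, d) tensor per layer for a test set of N inputs.
    covered = total = 0
    for act in activations:
        fired = (act > threshold).any(dim=0)  # neuron fired on any input
        covered += int(fired.sum())
        total += fired.numel()
    return covered / total
```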
no code implementations • 3 Oct 2019 • Hadrien Bride, Zhe Hou, Jie Dong, Jin Song Dong, Ali Mirjalili
This paper introduces a new classification tool named Silas, which is built to provide a more transparent and dependable data analytics service.
no code implementations • 3 Oct 2019 • Hadrien Bride, Jin Song Dong, Ryan Green, Zhe Hou, Brendan Mahony, Martin Oxenham
We follow the "verification as planning" paradigm and propose to use model checking techniques to solve planning and goal reasoning problems for autonomous systems.
1 code implementation • 22 Sep 2019 • Guoliang Dong, Jingyi Wang, Jun Sun, Yang Zhang, Xinyu Wang, Ting Dai, Jin Song Dong, Xingen Wang
In this work, we propose an approach to extract probabilistic automata for interpreting an important class of neural networks, i.e., recurrent neural networks.
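The sketch below illustrates the usual recipe for this kind of extraction, offered as an assumption about the general approach rather than the paper's exact algorithm: cluster RNN hidden states into abstract states, then estimate transition probabilities from observed frequencies.

```python
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def extract_automaton(hidden_traces, n_states=10):
    # hidden_traces: list of (T_i, d) arrays of hidden states per input.
    states = KMeans(n_clusters=n_states, n_init=10).fit(
        np.concatenate(hidden_traces))
    counts, totals = Counter(), Counter()
    for trace in hidden_traces:
        labels = states.predict(trace)
        for s, t in zip(labels, labels[1:]):
            counts[(s, t)] += 1
            totals[s] += 1
    # Normalize transition counts into probabilities per source state.
    return {edge: c / totals[edge[0]] for edge, c in counts.items()}
```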