no code implementations • 19 May 2023 • Xiaowei Huang, Wenjie Ruan, Wei Huang, Gaojie Jin, Yi Dong, Changshun Wu, Saddek Bensalem, Ronghui Mu, Yi Qi, Xingyu Zhao, Kaiwen Cai, Yanghao Zhang, Sihao Wu, Peipei Xu, Dengyu Wu, Andre Freitas, Mustafa A. Mustafa
Considering the fast development of LLMs, this survey does not aim to be complete (although it includes 300 references), especially regarding the applications of LLMs in various domains; rather, it offers a collection of organised literature reviews and discussions to support a quick understanding of the safety and trustworthiness issues from the perspective of V&V.
no code implementations • 3 Apr 2023 • Chi Zhang, Wenjie Ruan, Fu Wang, Peipei Xu, Geyong Min, Xiaowei Huang
Verification plays an essential role in the formal analysis of safety-critical systems.
no code implementations • 3 Mar 2023 • Yuanying Cai, Chuheng Zhang, Wei Shen, Xuyun Zhang, Wenjie Ruan, Longbo Huang
Inspired by the recent success of sequence modeling in RL and the use of masked language models for pre-training, we propose RePreM (Representation Pre-training with Masked Model), a masked model for pre-training in RL, which trains an encoder combined with transformer blocks to predict the masked states or actions in a trajectory.
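To make the masked-trajectory idea concrete, here is a minimal PyTorch sketch of the general recipe: embed each state in a trajectory, replace a random subset of steps with a learned mask token, and train a transformer to reconstruct the masked steps. All class names and hyperparameters below are illustrative assumptions, not the RePreM code.

```python
# Minimal sketch of masked-trajectory pre-training in the spirit of RePreM.
# Class and parameter names are illustrative, not the paper's implementation.
import torch
import torch.nn as nn

class MaskedTrajectoryModel(nn.Module):
    def __init__(self, state_dim, embed_dim=128, n_layers=2, n_heads=4):
        super().__init__()
        self.encoder = nn.Linear(state_dim, embed_dim)          # state encoder
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(embed_dim, state_dim)             # reconstruction head
        self.mask_token = nn.Parameter(torch.zeros(embed_dim))  # learned [MASK]

    def forward(self, states, mask):
        # states: (B, T, state_dim); mask: (B, T) bool, True = hidden step
        h = self.encoder(states)
        h = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(h), h)
        return self.head(self.transformer(h))

def pretrain_step(model, optimiser, states, mask_ratio=0.15):
    mask = torch.rand(states.shape[:2], device=states.device) < mask_ratio
    pred = model(states, mask)
    # mean squared reconstruction error on the masked steps only
    err = ((pred - states) ** 2) * mask.unsqueeze(-1)
    loss = err.sum() / (mask.sum().clamp(min=1) * states.size(-1))
    optimiser.zero_grad(); loss.backward(); optimiser.step()
    return loss.item()
```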
1 code implementation • 29 Jan 2023 • Fu Wang, Peipei Xu, Wenjie Ruan, Xiaowei Huang
Deep neural networks (DNNs) are known to be vulnerable to adversarial geometric transformation.
1 code implementation • 28 Jan 2023 • Chi Zhang, Wenjie Ruan, Peipei Xu
We then reveal the working principles of applying Lipschitzian optimisation on NNCS verification and illustrate it by verifying an adaptive cruise control model.
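The core working principle is easy to state in isolation: if a Lipschitz constant of the property function is known, evaluating it at one point bounds its value over a whole neighbourhood, and branch-and-bound refines the regions that remain undecided. A one-dimensional toy sketch (a generic illustration with an assumed Lipschitz constant, not the paper's tool):

```python
# Toy illustration of Lipschitzian optimisation for verification: certify
# that f(x) > 0 on [lo, hi] when a Lipschitz constant L of f is known,
# via branch-and-bound.
def certify_positive(f, lo, hi, L, tol=1e-4):
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        mid, r = (a + b) / 2.0, (b - a) / 2.0
        fm = f(mid)
        if fm - L * r > 0:        # f is provably positive on all of [a, b]
            continue
        if fm <= 0:               # concrete counterexample at mid
            return False, mid
        if r < tol:               # undecided at this resolution; report it
            return False, mid
        stack.extend([(a, mid), (mid, b)])
    return True, None

# Example: x^2 + 1 > 0 on [-1, 1], with Lipschitz constant 2 on that interval.
print(certify_positive(lambda x: x * x + 1.0, -1.0, 1.0, L=2.0))  # (True, None)
```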
1 code implementation • 17 Jan 2023 • Liantao Ma, Chaohe Zhang, Junyi Gao, Xianfeng Jiao, Zhihao Yu, Xinyu Ma, Yasha Wang, Wen Tang, Xinju Zhao, Wenjie Ruan, Tao Wang
Here, our objective is to develop AICare, a real-time, individualized, and interpretable deep learning model for mortality prediction.
1 code implementation • 22 Dec 2022 • Ronghui Mu, Wenjie Ruan, Leandro Soriano Marcolino, Gaojie Jin, Qiang Ni
The experimental results show that our method produces meaningful guaranteed robustness for all models and environments.
1 code implementation • 5 Sep 2022 • Han Wu, Syed Yunas, Sareh Rowlands, Wenjie Ruan, Johan Wahlstrom
Intelligent robots rely on object detection models to perceive the environment.
1 code implementation • 1 Aug 2022 • Zheng Wang, Wenjie Ruan
Recent research on the robustness of deep learning has shown that Vision Transformers (ViTs) surpass Convolutional Neural Networks (CNNs) under some perturbations, e.g., natural corruption, adversarial attacks, etc.
no code implementations • 17 Jul 2022 • Xiangyu Yin, Wenjie Ruan, Jonathan Fieldsend
In this paper, we propose a novel adversarial attack method that generates noise for single object tracking under black-box settings, where perturbations are only added to the initial frames of tracking sequences, making them difficult to notice when viewing the whole video clip.
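For intuition, a generic score-based black-box loop that perturbs only the first frame might look as follows; `tracker_score` is a hypothetical oracle returning tracking quality, and the random-search proposal is a stand-in for the paper's actual optimisation.

```python
# Hypothetical score-based black-box loop perturbing only the first frame.
# `tracker_score` is an assumed oracle returning tracking quality (e.g. mean
# IoU over the clip); the paper's optimisation strategy differs.
import numpy as np

def attack_initial_frame(frames, tracker_score, eps=8 / 255, iters=200, seed=0):
    # frames: list of float arrays in [0, 1]; only frames[0] is perturbed
    rng = np.random.default_rng(seed)
    first = frames[0].astype(np.float32)
    best_delta = np.zeros_like(first)
    best = tracker_score([first] + frames[1:])
    for _ in range(iters):
        # random-search proposal, kept inside the L-infinity epsilon ball
        delta = np.clip(best_delta + rng.normal(0, eps / 4, first.shape), -eps, eps)
        adv = np.clip(first + delta, 0.0, 1.0)
        score = tracker_score([adv] + frames[1:])
        if score < best:          # lower tracking quality = stronger attack
            best, best_delta = score, delta
    return np.clip(first + best_delta, 0.0, 1.0), best
```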
1 code implementation • 15 Jul 2022 • Ronghui Mu, Wenjie Ruan, Leandro S. Marcolino, Qiang Ni
Thus, we propose an efficient verification framework, 3DVerifier, to tackle both challenges by adopting a linear relaxation function to bound the multiplication layer and combining forward and backward propagation to compute the certified bounds of the outputs of the point cloud models.
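The key difficulty, bounding a multiplication layer, can be illustrated with the classic interval bound for a product z = x·y; 3DVerifier's relaxation is linear in the inputs and tighter, so treat the sketch below only as the underlying bounding principle.

```python
# Interval bound for a product z = x * y with x in [xl, xu], y in [yl, yu]:
# the extremes occur at the four corners. 3DVerifier's relaxation is linear
# in x and y (and tighter); this sketch only shows the bounding principle.
def product_bounds(xl, xu, yl, yu):
    corners = [xl * yl, xl * yu, xu * yl, xu * yu]
    return min(corners), max(corners)

# McCormick additionally gives linear planes, e.g. z >= xl*y + x*yl - xl*yl.
print(product_bounds(-1, 2, 3, 4))   # x in [-1, 2], y in [3, 4]  ->  (-4, 8)
```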
1 code implementation • 5 Jul 2022 • Tianle Zhang, Wenjie Ruan, Jonathan E. Fieldsend
Our experiments demonstrate the effectiveness and flexibility of PRoA in evaluating probabilistic robustness against a broad range of functional perturbations, and PRoA scales well to various large-scale deep neural networks compared with existing state-of-the-art baselines.
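As a rough illustration of probabilistic robustness checking, one can sample perturbations and attach a Hoeffding confidence interval to the estimated failure rate; PRoA itself uses an adaptive hypothesis test, and `model_pred`/`perturb` below are assumed interfaces, not the paper's API.

```python
# Fixed-sample illustration of probabilistic robustness: estimate the chance
# that a random functional perturbation flips the prediction, with a Hoeffding
# confidence interval. PRoA itself uses an adaptive hypothesis test.
import math, random

def estimate_failure_rate(model_pred, perturb, x, label, n=1000, delta=0.01):
    fails = sum(model_pred(perturb(x, random.random())) != label for _ in range(n))
    p_hat = fails / n
    margin = math.sqrt(math.log(2 / delta) / (2 * n))   # Hoeffding half-width
    return p_hat, (max(0.0, p_hat - margin), min(1.0, p_hat + margin))
```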
1 code implementation • 10 Nov 2021 • Ronghui Mu, Wenjie Ruan, Leandro Soriano Marcolino, Qiang Ni
In recent years, significant research effort has concentrated on adversarial attacks on images, while adversarial attacks on videos have seldom been explored.
no code implementations • 24 Aug 2021 • Wenjie Ruan, Xinping Yi, Xiaowei Huang
This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning, presenting a well-structured review of up-to-date techniques to assess the vulnerability of various types of deep learning models to adversarial examples.
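The canonical entry point for such vulnerability assessments is the fast gradient sign method (FGSM); a minimal PyTorch version is sketched below for orientation (the tutorial itself covers far stronger attacks and defences).

```python
# Minimal FGSM, the standard first attack in any robustness assessment.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # one signed-gradient step, then project back to the valid pixel range
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```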
2 code implementations • 16 Mar 2021 • Han Wu, Syed Yunas, Sareh Rowlands, Wenjie Ruan, Johan Wahlstrom
As research on deep neural networks advances, deep convolutional networks have become promising for autonomous driving tasks.
1 code implementation • 4 Mar 2021 • Fu Wang, Yanghao Zhang, Yanbin Zheng, Wenjie Ruan
Therefore, based on the magnitude of the gradient, we propose a general acceleration strategy, M+ acceleration, which automatically and effectively adjusts the training procedure.
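The abstract does not spell out the rule, so the following is only a speculative sketch of the general idea of steering training effort by gradient magnitude, e.g. adapting the number of inner adversarial steps; the actual M+ strategy is the one defined in the paper, and `threshold` and the scaling are assumptions.

```python
# Speculative sketch only: one way to act on gradient magnitude is to spend
# fewer inner adversarial steps once gradients have shrunk.
def adaptive_inner_steps(grad_norm, base_steps=10, threshold=1.0, min_steps=2):
    scale = min(1.0, grad_norm / threshold)      # small gradient -> small budget
    return max(min_steps, round(base_steps * scale))

print(adaptive_inner_steps(0.3))  # 3 inner steps when the gradient is small
```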
1 code implementation • 4 Jan 2021 • Yanghao Zhang, Fu Wang, Wenjie Ruan
Although a great number of adversarial attacks have been proposed against deep-learning-based classifiers, attacks on object detection systems have rarely been studied.
2 code implementations • 15 Oct 2020 • Yanghao Zhang, Wenjie Ruan, Fu Wang, Xiaowei Huang
Extensive experiments are conducted on the CIFAR-10 and ImageNet datasets with six deep neural network models, including GoogLeNet, VGG16/19, ResNet101/152, and DenseNet121.
1 code implementation • 30 Sep 2020 • Han Wu, Wenjie Ruan, Jiangtao Wang, Dingchang Zheng, Bei Liu, Yayuan Gen, Xiangfei Chai, Jian Chen, Kunwei Li, Shaolin Li, Sumi Helal
The black-box nature of machine learning models hinders the deployment of some high-accuracy models in medical diagnosis.
1 code implementation • 13 Sep 2020 • Peipei Xu, Wenjie Ruan, Xiaowei Huang
In this paper, we define safety risks by requiring the network's decision to align with human perception.
no code implementations • 17 Jul 2020 • Liantao Ma, Xinyu Ma, Junyi Gao, Chaohe Zhang, Zhihao Yu, Xianfeng Jiao, Wenjie Ruan, Yasha Wang, Wen Tang, Jiangtao Wang
Due to the characteristics of COVID-19, the epidemic develops rapidly and overwhelms health service systems worldwide.
1 code implementation • 27 Nov 2019 • Liantao Ma, Junyi Gao, Yasha Wang, Chaohe Zhang, Jiangtao Wang, Wenjie Ruan, Wen Tang, Xin Gao, Xinyu Ma
It also models the correlation between clinical features to enhance those that strongly indicate the health status, and can thus maintain state-of-the-art prediction accuracy while providing qualitative interpretability.
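Modelling cross-feature correlation is typically done with attention across feature embeddings; a minimal stand-in (not the paper's architecture) is sketched below.

```python
# Minimal cross-feature self-attention stand-in (not the paper's architecture):
# features attend to each other, so strongly indicative ones are re-weighted.
import torch
import torch.nn as nn

class FeatureAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)

    def forward(self, feats):                     # feats: (B, n_features, dim)
        scores = self.q(feats) @ self.k(feats).transpose(1, 2)
        w = torch.softmax(scores / feats.size(-1) ** 0.5, dim=-1)
        return w @ feats                          # correlation-weighted features
```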
1 code implementation • 27 Nov 2019 • Liantao Ma, Chaohe Zhang, Yasha Wang, Wenjie Ruan, Jiantao Wang, Wen Tang, Xinyu Ma, Xin Gao, Junyi Gao
Predicting a patient's clinical outcome from historical electronic medical records (EMR) is a fundamental research problem in medical informatics.
1 code implementation • 5 Nov 2019 • Wei Huang, Youcheng Sun, Xingyu Zhao, James Sharp, Wenjie Ruan, Jie Meng, Xiaowei Huang
The test metrics and test case generation algorithm are implemented in a tool, TestRNN, which is then evaluated on a set of LSTM benchmarks.
no code implementations • 18 Dec 2018 • Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
In the past few years, significant progress has been made on deep neural networks (DNNs) in achieving human-level performance on several long-standing tasks.
1 code implementation • 10 Jul 2018 • Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
In this paper, we study two variants of pointwise robustness, the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations.
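A useful way to think about the maximum safe radius: any successful attack at radius r gives an upper bound, while a failed search gives only a heuristic lower bracket (a sound lower bound needs verification). A generic binary-search sketch with a hypothetical attack oracle, not the paper's game-based algorithm:

```python
# Generic bracketing sketch for the maximum safe radius. `find_adversarial`
# is a hypothetical attack oracle; note that a failed attack does NOT prove
# safety, so r_lo is only a heuristic bracket unless backed by verification.
def bracket_safe_radius(find_adversarial, x, r_lo=0.0, r_hi=1.0, iters=20):
    for _ in range(iters):
        r = (r_lo + r_hi) / 2.0
        if find_adversarial(x, r):   # adversarial example within distance r
            r_hi = r
        else:
            r_lo = r
    return r_lo, r_hi                # the maximum safe radius lies near here
```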
2 code implementations • 6 May 2018 • Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
Verifying correctness of deep neural networks (DNNs) is challenging.
2 code implementations • 30 Apr 2018 • Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening
Concolic testing combines program execution and symbolic analysis to explore the execution paths of a software program.
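In the DNN setting, the concolic loop alternates concrete runs with directed search toward an uncovered requirement; the sketch below uses gradient ascent to activate a chosen neuron, a simplified stand-in for DeepConcolic's symbolic analysis, with the layer/unit selection and step rule as illustrative assumptions.

```python
# DNN-flavoured concolic sketch: run a concrete input, target an inactive
# neuron as the coverage requirement, and push the input toward activating it.
import torch

def concolic_step(model, layer, x, unit, lr=0.05, iters=50):
    # x: a single input with batch dimension 1, values in [0, 1]
    x = x.clone().detach().requires_grad_(True)
    for _ in range(iters):
        act = {}
        handle = layer.register_forward_hook(lambda m, i, o: act.update(v=o))
        model(x)                          # concrete execution
        handle.remove()
        val = act["v"].flatten()[unit]    # activation of the target neuron
        if val > 0:                       # coverage requirement met
            return x.detach()             # new concrete test case
        val.backward()
        with torch.no_grad():             # directed step toward the target
            x += lr * x.grad.sign()
            x.clamp_(0, 1)
            x.grad.zero_()
    return None                           # target not reached within budget
```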
2 code implementations • 16 Apr 2018 • Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska
In this paper we focus on the $L_0$ norm and aim to compute, for a trained DNN and an input, the maximal radius of a safe norm ball around the input within which there are no adversarial examples.
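One half of such a computation is cheap to illustrate: any adversarial example that changes k pixels upper-bounds the L0 safe radius by k. The greedy sketch below (with a hypothetical `predict` oracle) only produces that upper bound; the paper's anytime algorithm also computes sound lower bounds that converge to the maximal safe radius.

```python
# Upper-bounding the L0 safe radius: any adversarial example that changes k
# pixels bounds the radius by k. Greedy illustration with an assumed oracle.
import numpy as np

def l0_upper_bound(predict, x, label, max_pixels=20):
    adv = x.copy().ravel()
    order = np.argsort(-np.abs(adv - 0.5))          # most extreme pixels first
    for k, idx in enumerate(order[:max_pixels], start=1):
        adv[idx] = 1.0 if adv[idx] < 0.5 else 0.0   # flip pixel to far extreme
        if predict(adv.reshape(x.shape)) != label:
            return k                                # k changed pixels suffice
    return None                                     # no upper bound found yet
```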