no code implementations • ECCV 2020 • Yanda Meng, Wei Meng, Dongxu Gao, Yitian Zhao, Xiaoyun Yang, Xiaowei Huang, Yalin Zheng
In particular, thanks to the proposed aggregation GCN, our network benefits from direct feature learning of the instances’ boundary locations and the spatial information propagation across the image.
no code implementations • 24 Aug 2024 • Sihao Wu, Xingyu Zhao, Xiaowei Huang
Data efficiency of learning, which plays a key role in the Reinforcement Learning (RL) training process, becomes even more important in continual RL with sequential environments.
no code implementations • 16 Aug 2024 • Jinwei Hu, Yi Dong, Xiaowei Huang
Guardrails have become an integral part of Large Language Models (LLMs), moderating harmful or toxic responses in order to keep LLMs aligned with human expectations.
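As an illustration of the guardrail idea in general (not this paper's system), a minimal sketch of a moderation wrapper that scores a candidate LLM response and withholds it above a threshold; `toxicity_score` and its word-list heuristic are placeholder assumptions standing in for a trained moderation classifier:

```python
# Minimal guardrail sketch (illustrative only, not the paper's method).

def toxicity_score(text: str) -> float:
    """Stand-in scorer; a real guardrail would call a trained classifier."""
    flagged = {"harmful", "toxic"}
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

def guarded_reply(llm_reply: str, threshold: float = 0.2) -> str:
    """Pass the reply through only if the moderation score stays below threshold."""
    if toxicity_score(llm_reply) >= threshold:
        return "[response withheld by guardrail]"
    return llm_reply

print(guarded_reply("Here is a helpful answer."))
```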
no code implementations • 15 Jul 2024 • Wangyu Wu, Tianhong Dai, Zhenhong Chen, Xiaowei Huang, Fei Ma, Jimin Xiao
Weakly Supervised Semantic Segmentation (WSSS) using only image-level labels has gained significant attention due to its cost-effectiveness.
Tasks: Contrastive Learning • Weakly Supervised Semantic Segmentation • +1
no code implementations • 11 Jul 2024 • ZiHao Zhou, Shudong Liu, Maizhen Ning, Wei Liu, Jindong Wang, Derek F. Wong, Xiaowei Huang, Qiufeng Wang, Kaizhu Huang
Exceptional mathematical reasoning ability is one of the key features that demonstrate the power of large language models (LLMs).
no code implementations • 1 Jul 2024 • Gaojie Jin, Ronghui Mu, Xinping Yi, Xiaowei Huang, Lijun Zhang
The Invariant Risk Minimization (IRM) approach aims to address the challenge of domain generalization by training a feature representation that remains invariant across multiple environments.
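For context, a minimal sketch of the widely used IRMv1 penalty (Arjovsky et al.) that IRM-style methods typically build on; the per-environment usage pattern in the comment is an assumption, not this paper's exact objective:

```python
import torch
import torch.nn.functional as F

def irmv1_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """IRMv1 penalty: squared gradient of the per-environment risk with
    respect to a dummy classifier scale fixed at 1.0."""
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2)

# Assumed usage: empirical risk averaged over environments plus the penalty.
# total = sum(F.cross_entropy(le, ye) + lam * irmv1_penalty(le, ye)
#             for le, ye in environments) / len(environments)
```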
no code implementations • 5 Jun 2024 • Zihan Ye, Shreyank N. Gowda, Xiaobo Jin, Xiaowei Huang, Haotian Xu, Yaochu Jin, Kaizhu Huang
For class-level effectiveness, we design a two-branch generation structure that consists of a Diffusion-based Feature Generator (DFG) and a Diffusion-based Representation Generator (DRG).
Ranked #1 on Zero-Shot Learning on AwA2
no code implementations • 3 Jun 2024 • Yi Dong, Ronghui Mu, Yanghao Zhang, Siqi Sun, Tianle Zhang, Changshun Wu, Gaojie Jin, Yi Qi, Jinwei Hu, Jie Meng, Saddek Bensalem, Xiaowei Huang
In the burgeoning field of Large Language Models (LLMs), developing a robust safety mechanism, colloquially known as "safeguards" or "guardrails", has become imperative to ensure the ethical use of LLMs within prescribed boundaries.
no code implementations • 3 Jun 2024 • Jiaxu Liu, Xinping Yi, Sihao Wu, Xiangyu Yin, Tianle Zhang, Xiaowei Huang, Shi Jin
While the Hyperbolic Graph Neural Network (HGNN) has recently emerged as a powerful tool for dealing with hierarchical graph data, its limitations in scalability and efficiency hinder it from generalizing to deep models.
no code implementations • 23 May 2024 • Hanwei Zhang, Luo Cheng, Qisong He, Wei Huang, Renjue Li, Ronan Sicre, Xiaowei Huang, Holger Hermanns, Lijun Zhang
As with other ML tasks, classification models are notoriously brittle in the presence of adversarial attacks.
no code implementations • 21 May 2024 • Jiaxu Liu, Xiangyu Yin, Sihao Wu, Jianhong Wang, Meng Fang, Xinping Yi, Xiaowei Huang
With the proliferation of red-teaming strategies for Large Language Models (LLMs), the gap in the literature on improving the safety and robustness of LLM defense strategies is becoming increasingly pronounced.
1 code implementation • 15 Apr 2024 • Dengyu Wu, Yi Qi, Kaiwen Cai, Gaojie Jin, Xinping Yi, Xiaowei Huang
Notably, with STR and cutoff, the SNN achieves 2.14x to 2.89x faster inference compared to the pre-configured timestep, with a near-zero accuracy drop of 0.50% to 0.64% on the event-based datasets.
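A conceptual sketch of confidence-based cutoff at SNN inference time: accumulate outputs timestep by timestep and stop once the top-1/top-2 margin is large enough. `snn_step` is an assumed interface, and the margin criterion is one plausible trigger rather than the paper's exact rule:

```python
import torch

def cutoff_inference(snn_step, x, max_timesteps: int = 100, gap: float = 0.5):
    """Run an SNN timestep by timestep and stop early once the accumulated
    output is confident enough (top-1 vs. top-2 margin >= gap).
    `snn_step(x, t)` is an assumed interface returning the per-timestep
    output increment (e.g. spike counts), shape (batch, classes)."""
    accumulated = None
    for t in range(max_timesteps):
        out = snn_step(x, t)
        accumulated = out if accumulated is None else accumulated + out
        top2 = accumulated.topk(2, dim=-1).values
        if (top2[..., 0] - top2[..., 1]).min() >= gap:
            break  # confident enough: skip the remaining timesteps
    return accumulated.argmax(dim=-1), t + 1  # predictions, timesteps used

# Toy usage with a stand-in step function that favours class 0:
pred, steps = cutoff_inference(lambda x, t: torch.tensor([[0.3, 0.1]]), None)
```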
no code implementations • 2 Apr 2024 • Zhiming Chi, Jianan Ma, Pengfei Yang, Cheng-Chao Huang, Renjue Li, Xiaowei Huang, Lijun Zhang
Existing neuron-level methods using limited data lack efficacy in fixing adversaries due to the inherent complexity of adversarial attack mechanisms, while adversarial training, leveraging a large number of adversarial samples to enhance robustness, lacks provability.
no code implementations • 27 Mar 2024 • Changshun Wu, WeiCheng He, Chih-Hong Cheng, Xiaowei Huang, Saddek Bensalem
Nevertheless, integrating OoD detection into state-of-the-art (SOTA) object detection DNNs poses significant challenges, partly due to the complexity introduced by the SOTA OoD construction methods, which require the modification of DNN architecture and the introduction of complex loss functions.
no code implementations • 12 Mar 2024 • Zongxin Liu, Pengfei Yang, Lijun Zhang, Xiaowei Huang
Neural networks in safety-critical applications face increasing safety and security concerns due to their susceptibility to small perturbations.
1 code implementation • CVPR 2024 • Yanghao Zhang, Tianle Zhang, Ronghui Mu, Xiaowei Huang, Wenjie Ruan
As a generalization of conventional AT, we re-define the problem of adversarial training as a min-max-max framework, to ensure both robustness and fairness of the trained model.
1 code implementation • 23 Feb 2024 • Yi Zhang, Yun Tang, Wenjie Ruan, Xiaowei Huang, Siddartha Khastgir, Paul Jennings, Xingyu Zhao
Text-to-Image (T2I) Diffusion Models (DMs) have shown impressive abilities in generating high-quality images based on simple text descriptions.
no code implementations • 2 Feb 2024 • Yi Dong, Ronghui Mu, Gaojie Jin, Yi Qi, Jinwei Hu, Xingyu Zhao, Jie Meng, Wenjie Ruan, Xiaowei Huang
As Large Language Models (LLMs) become more integrated into our daily lives, it is crucial to identify and mitigate their risks, especially when the risks can have profound impacts on human users and societies.
1 code implementation • 2 Feb 2024 • Yi Dong, Yingjie Wang, Mariana Gama, Mustafa A. Mustafa, Geert Deconinck, Xiaowei Huang
In the realm of power systems, the increasing involvement of residential users in load forecasting applications has heightened concerns about data privacy.
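To make the privacy concern concrete, a toy sketch of additive secret sharing, a standard building block for privacy-preserving aggregation of household loads; this illustrates the general technique only, not the protocol used in the paper:

```python
import random

PRIME = 2**61 - 1  # field modulus for the toy scheme

def share(value: int, n_parties: int):
    """Split `value` into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Each aggregator sums one share per household; the final sum reveals
    only the total load, never an individual reading."""
    partial = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(partial) % PRIME

loads = [312, 455, 208]  # fixed-point household readings (illustrative)
total = aggregate([share(v, 3) for v in loads])
assert total == sum(loads)
```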
1 code implementation • 12 Dec 2023 • Xiangyu Yin, Sihao Wu, Jiaxu Liu, Meng Fang, Xingyu Zhao, Xiaowei Huang, Wenjie Ruan
Then, to mitigate the vulnerability of existing GCRL algorithms, we introduce Adversarial Representation Tactics, which combines Semi-Contrastive Adversarial Augmentation with Sensitivity-Aware Regularizer to improve the adversarial robustness of the underlying RL agent against various types of perturbations.
no code implementations • 11 Dec 2023 • Ronghui Mu, Leandro Soriano Marcolino, Tianle Zhang, Yanghao Zhang, Xiaowei Huang, Wenjie Ruan
Reinforcement Learning (RL) has achieved remarkable success in safety-critical areas, but it can be weakened by adversarial attacks.
no code implementations • 15 Oct 2023 • Wangyu Wu, Tianhong Dai, Xiaowei Huang, Fei Ma, Jimin Xiao
In this paper, we introduce a novel ViT-based WSSS method named top-K pooling with patch contrastive learning (TKP-PCL), which employs a top-K pooling layer to alleviate the limitations of previous max pooling selection.
Tasks: Contrastive Learning • Weakly Supervised Semantic Segmentation • +1
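A minimal sketch of what the top-K pooling described above can look like over ViT patch scores: average the k highest patch activations per class instead of taking only the max, so more foreground patches contribute to the image-level score. The tensor layout and value of k are assumptions for illustration:

```python
import torch

def top_k_pooling(patch_scores: torch.Tensor, k: int = 8) -> torch.Tensor:
    """Average the k highest patch activations per class, a softer
    alternative to max pooling over patches.
    patch_scores: (batch, num_patches, num_classes)."""
    topk = patch_scores.topk(k, dim=1).values  # (batch, k, num_classes)
    return topk.mean(dim=1)                    # (batch, num_classes)

scores = torch.randn(2, 196, 20)               # e.g. 14x14 ViT patches, 20 classes
print(top_k_pooling(scores).shape)             # torch.Size([2, 20])
```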
no code implementations • 15 Oct 2023 • Wangyu Wu, Tianhong Dai, Xiaowei Huang, Fei Ma, Jimin Xiao
In this process, the existing images and image-level labels provide the necessary control information, where GPT is employed to enrich the prompts, leading to the generation of diverse backgrounds.
1 code implementation • 3 Oct 2023 • Jiaxu Liu, Xinping Yi, Xiaowei Huang
Hyperbolic graph convolutional networks (HGCNs) have demonstrated significant potential in extracting information from hierarchical graphs.
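For readers new to hyperbolic graph learning, a short sketch of the standard exponential and logarithmic maps at the origin of the Poincaré ball, the basic operations HGCN-style layers are built from (specific papers may use variants or different curvatures):

```python
import torch

def expmap0(v: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Exponential map at the origin of the Poincaré ball (curvature -c):
    lifts a tangent vector into hyperbolic space."""
    sqrt_c = c ** 0.5
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def logmap0(y: torch.Tensor, c: float = 1.0, eps: float = 1e-7) -> torch.Tensor:
    """Logarithmic map at the origin: takes a ball point back to tangent space."""
    sqrt_c = c ** 0.5
    norm = y.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.atanh((sqrt_c * norm).clamp(max=1 - eps)) * y / (sqrt_c * norm)

# Typical pattern: apply Euclidean ops in tangent space, map back to the ball.
x_hyp = expmap0(torch.randn(4, 16) * 0.1)
x_tan = logmap0(x_hyp)
```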
no code implementations • 9 Sep 2023 • Jiaxu Liu, Xinping Yi, Tianle Zhang, Xiaowei Huang
In traditional Graph Neural Networks (GNNs), the assumption of a fixed embedding manifold often limits their adaptability to diverse graph geometries.
no code implementations • 5 Sep 2023 • Yuze Liu, Ziming Zhao, Tiehua Zhang, Kang Wang, Xin Chen, Xiaowei Huang, Jun Yin, Zhishu Shen
Sleep stage classification is crucial for detecting patients' health conditions.
no code implementations • 4 Sep 2023 • ZiHao Zhou, Qiufeng Wang, Mingyu Jin, Jie Yao, Jianan Ye, Wei Liu, Wei Wang, Xiaowei Huang, Kaizhu Huang
Instead of attacking the prompts used with LLMs, we propose a MathAttack model to attack MWP samples directly, which is closer to the essence of security in solving math problems.
1 code implementation • 5 Aug 2023 • Maizhen Ning, Qiu-Feng Wang, Kaizhu Huang, Xiaowei Huang
For the diagram encoder, we pre-train it under a multi-label classification framework with the symbolic characters as labels.
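A minimal sketch of the multi-label pre-training objective described here: each diagram is tagged with a multi-hot vector of the symbolic characters it contains and trained with per-label binary cross-entropy. The tiny `encoder`, image size, and label density are placeholder assumptions:

```python
import torch
import torch.nn as nn

# Multi-label pre-training sketch: a diagram's label is the multi-hot set
# of symbolic characters it contains. `encoder` stands in for the real backbone.
num_symbols = 64
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 256), nn.ReLU(),
                        nn.Linear(256, num_symbols))

images = torch.randn(8, 3, 224, 224)
targets = (torch.rand(8, num_symbols) > 0.9).float()  # multi-hot symbol labels

loss = nn.BCEWithLogitsLoss()(encoder(images), targets)
loss.backward()
```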
no code implementations • 20 Jul 2023 • Saddek Bensalem, Chih-Hong Cheng, Wei Huang, Xiaowei Huang, Changshun Wu, Xingyu Zhao
Machine learning has made remarkable advancements, but confidently utilising learning-enabled components in safety-critical domains still poses challenges.
no code implementations • 14 Jul 2023 • Kaiwen Cai, Chris Xiaoxuan Lu, Xingyu Zhao, Xiaowei Huang
Most image retrieval research focuses on improving predictive performance, ignoring scenarios where the reliability of the prediction is also crucial.
1 code implementation • 15 Jun 2023 • ZiHao Zhou, Maizhen Ning, Qiufeng Wang, Jie Yao, Wei Wang, Xiaowei Huang, Kaizhu Huang
We then feed them to a question generator together with the scenario to obtain the corresponding diverse questions, forming a new MWP with a variety of questions and equations.
no code implementations • 19 May 2023 • Xiaowei Huang, Wenjie Ruan, Wei Huang, Gaojie Jin, Yi Dong, Changshun Wu, Saddek Bensalem, Ronghui Mu, Yi Qi, Xingyu Zhao, Kaiwen Cai, Yanghao Zhang, Sihao Wu, Peipei Xu, Dengyu Wu, Andre Freitas, Mustafa A. Mustafa
Large Language Models (LLMs) have set off a new wave of excitement in AI with their ability to engage end-users in human-level conversations, giving detailed and articulate answers across many knowledge domains.
2 code implementations • 3 Apr 2023 • Yi Qi, Xingyu Zhao, Siddartha Khastgir, Xiaowei Huang
Can safety analysis make use of Large Language Models (LLMs)?
no code implementations • 3 Apr 2023 • Chi Zhang, Wenjie Ruan, Fu Wang, Peipei Xu, Geyong Min, Xiaowei Huang
Verification plays an essential role in the formal analysis of safety-critical systems.
1 code implementation • CVPR 2023 • Gaojie Jin, Xinping Yi, Dengyu Wu, Ronghui Mu, Xiaowei Huang
The randomized weights enable our design of a novel adversarial training method via Taylor expansion of a small Gaussian noise, and we show that the new adversarial training method can flatten the loss landscape and find flat minima.
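A simplified sketch of adversarial training with randomized weights: generate an adversarial example, inject small Gaussian noise into the weights for the forward/backward pass, then update the clean weights. This illustrates the general idea of optimizing an expected loss over perturbed weights; the paper's Taylor-expansion objective is not reproduced, and `model`/`opt` are arbitrary placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Single-step adversarial example (FGSM) as a cheap inner maximizer."""
    x_adv = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def noisy_weight_adv_step(model, x, y, opt, sigma=0.01):
    """One adversarial-training step with small Gaussian noise injected into
    the weights; noise is removed before the optimizer step, so the update
    approximates an expected loss over randomized weights."""
    x_adv = fgsm_example(model, x, y)
    noise = [sigma * torch.randn_like(p) for p in model.parameters()]
    for p, n in zip(model.parameters(), noise):
        p.data.add_(n)                 # perturb weights for this pass
    loss = F.cross_entropy(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    for p, n in zip(model.parameters(), noise):
        p.data.sub_(n)                 # restore the clean weights
    opt.step()
    return loss.item()

# Placeholder model/data to show the call shape:
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
noisy_weight_adv_step(model, torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,)), opt)
```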
no code implementations • 3 Feb 2023 • Yi Dong, Zhongguo Li, Xingyu Zhao, Zhengtao Ding, Xiaowei Huang
Then, based on the distributed optimisation algorithm, an output regulation method is utilised to solve the optimal coordination problem for general linear dynamic systems.
1 code implementation • 29 Jan 2023 • Fu Wang, Peipei Xu, Wenjie Ruan, Xiaowei Huang
Deep neural networks (DNNs) are known to be vulnerable to adversarial geometric transformation.
2 code implementations • 23 Jan 2023 • Dengyu Wu, Gaojie Jin, Han Yu, Xinping Yi, Xiaowei Huang
The Top-K cutoff technique optimises SNN inference, and the regularisation is proposed to shape training so that the resulting SNN performs optimally under cutoff.
1 code implementation • 31 Oct 2022 • Tiehua Zhang, Yuze Liu, Yao Yao, Youhua Xia, Xin Chen, Xiaowei Huang, Jiong Jin
Heterogeneous graph neural networks have unleashed great potential in graph representation learning, showing superior performance on downstream tasks such as node classification and clustering.
no code implementations • 26 Oct 2022 • Yi Dong, Xingyu Zhao, Sen Wang, Xiaowei Huang
Deep Reinforcement Learning (DRL) has achieved impressive performance in robotics and autonomous systems (RAS).
1 code implementation • 29 Sep 2022 • Kaiwen Cai, Chris Xiaoxuan Lu, Xiaowei Huang
In this work, we present CUE, a novel uncertainty estimation method for dense prediction tasks in 3D point clouds.
1 code implementation • ICCV 2023 • Wei Huang, Xingyu Zhao, Gaojie Jin, Xiaowei Huang
Finally, we demonstrate two applications of our methods: ranking robust XAI methods and selecting training schemes to improve both classification and interpretation robustness.
no code implementations • 1 Aug 2022 • Yi Dong, Yang Chen, Xingyu Zhao, Xiaowei Huang
With the employment of smart meters, massive data on consumer behaviour can be collected by retailers.
no code implementations • 7 Jun 2022 • Tiehua Zhang, Yuze Liu, Zhishu Shen, Rui Xu, Xin Chen, Xiaowei Huang, Xi Zheng
Spatial-temporal data contains rich information and has been widely studied in recent years due to the rapid development of relevant applications in many fields.
1 code implementation • 17 May 2022 • Wei Huang, Xingyu Zhao, Alec Banks, Victoria Cox, Xiaowei Huang
In this paper, we propose a new robustness testing approach for detecting AEs that considers both the feature level distribution and the pixel level distribution, capturing the perceptual quality of adversarial perturbations.
1 code implementation • CVPR 2022 • Gaojie Jin, Xinping Yi, Wei Huang, Sven Schewe, Xiaowei Huang
In this paper, we show that treating model weights as random variables allows for enhancing adversarial training through Second-Order Statistics Optimization (S²O) with respect to the weights.
no code implementations • 9 Mar 2022 • Yanda Meng, Xu Chen, Dongxu Gao, Yitian Zhao, Xiaoyun Yang, Yihong Qiao, Xiaowei Huang, Yalin Zheng
In this paper, we propose a novel multi-level aggregation network to regress the coordinates of the vertices of a 3D face from a single 2D image in an end-to-end manner.
1 code implementation • 8 Mar 2022 • Yanda Meng, Joshua Bridge, Meng Wei, Yitian Zhao, Yihong Qiao, Xiaoyun Yang, Xiaowei Huang, Yalin Zheng
This paper proposes an adaptive auxiliary task learning based approach for object counting problems.
2 code implementations • 3 Mar 2022 • Kaiwen Cai, Chris Xiaoxuan Lu, Xiaowei Huang
Then, supervised by the pretrained teacher net, a student net with an additional variance branch is trained to finetune the embedding priors and estimate the uncertainty sample by sample.
no code implementations • 23 Jan 2022 • Gaojie Jin, Xinping Yi, Pengfei Yang, Lijun Zhang, Sven Schewe, Xiaowei Huang
While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking.
no code implementations • 22 Jan 2022 • Gaojie Jin, Xinping Yi, Xiaowei Huang
This paper proposes to study neural networks through neuronal correlation, a statistical measure of correlated neuronal activity on the penultimate layer.
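One way such a statistic can be estimated, as a sketch: the Pearson correlation between penultimate-layer neurons, computed over a batch of activations (the paper's exact measure may differ in detail):

```python
import torch

def neuronal_correlation(activations: torch.Tensor) -> torch.Tensor:
    """Pearson correlation between neurons of a layer, estimated over a batch.
    activations: (batch, neurons). Returns (neurons, neurons)."""
    a = activations - activations.mean(dim=0, keepdim=True)
    a = a / a.std(dim=0, keepdim=True).clamp_min(1e-8)
    return (a.T @ a) / (a.shape[0] - 1)

acts = torch.randn(512, 128)       # e.g. penultimate features for 512 inputs
corr = neuronal_correlation(acts)  # symmetric matrix with a unit diagonal
```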
no code implementations • 29 Dec 2021 • Tiehua Zhang, Yuze Liu, Xin Chen, Xiaowei Huang, Feng Zhu, Xi Zheng
Graph representation learning has drawn increasing attention in recent years, especially for learning low-dimensional embeddings at both node and graph level for classification and recommendation tasks.
no code implementations • 30 Nov 2021 • Yi Dong, Wei Huang, Vibhav Bharti, Victoria Cox, Alec Banks, Sen Wang, Xingyu Zhao, Sven Schewe, Xiaowei Huang
The increasing use of Machine Learning (ML) components embedded in autonomous systems -- so-called Learning-Enabled Systems (LESs) -- has resulted in the pressing need to assure their functional safety.
1 code implementation • 27 Oct 2021 • Yanda Meng, Hongrun Zhang, Dongxu Gao, Yitian Zhao, Xiaoyun Yang, Xuesheng Qian, Xiaowei Huang, Yalin Zheng
Our model is well suited to capturing global semantic region information while simultaneously accommodating local spatial boundary characteristics.
1 code implementation • 14 Sep 2021 • Yi Dong, Xingyu Zhao, Xiaowei Huang
While Deep Reinforcement Learning (DRL) provides transformational capabilities to the control of Robotics and Autonomous Systems (RAS), the black-box nature of DRL and uncertain deployment environments of RAS pose new challenges on its dependability.
no code implementations • 24 Aug 2021 • Wenjie Ruan, Xinping Yi, Xiaowei Huang
This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning, presenting a well-structured review of up-to-date techniques to assess the vulnerability of various types of deep learning models to adversarial examples.
1 code implementation • ICCV 2021 • Yanda Meng, Hongrun Zhang, Yitian Zhao, Xiaoyun Yang, Xuesheng Qian, Xiaowei Huang, Yalin Zheng
Semi-supervised approaches for crowd counting are attracting attention, as the fully supervised paradigm is expensive and laborious due to its demand for a large number of images of dense crowd scenarios and their annotations.
1 code implementation • 2 Jun 2021 • Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven Schewe, Xiaowei Huang
The utilisation of Deep Learning (DL) is advancing into increasingly more sophisticated applications.
no code implementations • 29 Apr 2021 • Weizhu Qian, Bowei Chen, Xiaowei Huang
We propose a new approach to train a variational information bottleneck (VIB) that improves its robustness to adversarial perturbations.
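For reference, a minimal sketch of the standard VIB objective (Alemi et al.) that such an approach builds on: a task loss plus a beta-weighted KL term compressing the stochastic representation towards an isotropic Gaussian prior. The robustness-specific training changes proposed in the paper are not shown:

```python
import torch
import torch.nn.functional as F

def sample_z(mu, logvar):
    """Reparameterization trick so gradients flow through the sampling."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def vib_loss(logits, labels, mu, logvar, beta=1e-3):
    """Standard VIB objective: task loss plus a beta-weighted KL divergence
    pulling z ~ N(mu, diag(exp(logvar))) towards the prior N(0, I)."""
    ce = F.cross_entropy(logits, labels)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
    return ce + beta * kl
```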
no code implementations • 13 Apr 2021 • Xingyu Zhao, Wei Huang, Sven Schewe, Yi Dong, Xiaowei Huang
The utilisation of Deep Learning (DL) raises new challenges regarding its dependability in critical applications.
1 code implementation • 5 Mar 2021 • Nicolas Berthier, Amany Alshareef, James Sharp, Sven Schewe, Xiaowei Huang
Intensive research has been conducted on the verification and validation of deep neural networks (DNNs), aiming to understand if, and how, DNNs can be applied to safety critical applications.
1 code implementation • 1 Mar 2021 • Dengyu Wu, Xinping Yi, Xiaowei Huang
In this paper, we argue that this trend of "energy for accuracy" is not necessary -- a little energy can go a long way towards achieving near-zero accuracy loss.
2 code implementations • 5 Dec 2020 • Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn
Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research.
no code implementations • NeurIPS 2020 • Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang
This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
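A sketch of one way to compute a weight-correlation statistic for a fully connected layer, following the paper's description of correlated weight vectors; taking absolute cosine similarity and ignoring convolutional reshaping are assumptions here:

```python
import torch

def layer_weight_correlation(weight: torch.Tensor) -> torch.Tensor:
    """Average absolute cosine similarity between the weight vectors of
    distinct neurons in one layer. weight: (out_features, in_features)."""
    w = weight / weight.norm(dim=1, keepdim=True).clamp_min(1e-8)
    cos = (w @ w.T).abs()
    n = w.shape[0]
    return (cos.sum() - n) / (n * (n - 1))  # exclude the diagonal of ones

print(layer_weight_correlation(torch.randn(64, 128)))
```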
2 code implementations • 16 Oct 2020 • Wei Huang, Xingyu Zhao, Xiaowei Huang
Meanwhile, with the increasing use of machine learning models in security-critical applications, the embedding and extraction of malicious knowledge are equivalent to the notorious backdoor attack and its defence, respectively.
2 code implementations • 15 Oct 2020 • Yanghao Zhang, Wenjie Ruan, Fu Wang, Xiaowei Huang
Extensive experiments are conducted on CIFAR-10 and ImageNet datasets with six deep neural network models including GoogLeNet, VGG16/19, ResNet101/152, and DenseNet121.
1 code implementation • 12 Oct 2020 • Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang
This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
1 code implementation • 13 Sep 2020 • Peipei Xu, Wenjie Ruan, Xiaowei Huang
In this paper, we define safety risks by requesting the alignment of the network's decision with human perception.
no code implementations • 10 Sep 2020 • Zhixuan Xu, Minghui Qian, Xiaowei Huang, Jie Meng
In this paper, we propose a novel deep learning architecture for cascade growth prediction, called CasGCN, which employs the graph convolutional network to extract structural features from a graphical input, followed by the application of the attention mechanism on both the extracted features and the temporal information before conducting cascade size prediction.
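For concreteness, a sketch of the standard graph-convolution propagation rule that a cascade encoder like CasGCN builds on, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W); the attention mechanism and temporal branch described in the abstract are omitted:

```python
import torch

def gcn_layer(adj: torch.Tensor, h: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """One graph-convolution step with self-loops and symmetric normalization.
    adj: (n, n) adjacency, h: (n, d_in) node features, w: (d_in, d_out)."""
    a_hat = adj + torch.eye(adj.shape[0])
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return torch.relu(a_norm @ h @ w)

adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # toy cascade graph
out = gcn_layer(adj, torch.randn(3, 8), torch.randn(8, 16))
```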
no code implementations • 23 Jul 2020 • Peter Stringer, Rafael C. Cardoso, Xiaowei Huang, Louise A. Dennis
Long-term autonomy requires autonomous systems to adapt as their capabilities no longer perform as expected.
no code implementations • 10 Jul 2020 • João Batista Pereira Matos Júnior, Lucas Carvalho Cordeiro, Marcelo d'Amorim, Xiaowei Huang
Algorithmically, DAEGEN uses a local search-based optimization algorithm to find DIAEs by iteratively perturbing an input to maximize the difference between the two models' predictions on that input.
no code implementations • 7 Mar 2020 • Xingyu Zhao, Alec Banks, James Sharp, Valentin Robu, David Flynn, Michael Fisher, Xiaowei Huang
Increasingly sophisticated mathematical modelling processes from Machine Learning are being used to analyse complex data.
no code implementations • 6 Feb 2020 • Youcheng Sun, Yifan Zhou, Simon Maskell, James Sharp, Xiaowei Huang
However, it is unclear if and how the adversarial examples over learning components can affect the overall system-level reliability.
1 code implementation • 5 Nov 2019 • Wei Huang, Youcheng Sun, Xingyu Zhao, James Sharp, Wenjie Ruan, Jie Meng, Xiaowei Huang
The test metrics and test case generation algorithm are implemented in a tool, TestRNN, which is then evaluated on a set of LSTM benchmarks.
no code implementations • 22 Aug 2019 • Xingyu Zhao, Matt Osborne, Jenny Lantair, Valentin Robu, David Flynn, Xiaowei Huang, Michael Fisher, Fabio Papacchini, Angelo Ferrando
The battery is a key component of autonomous robots.
1 code implementation • 6 Aug 2019 • Youcheng Sun, Hana Chockler, Xiaowei Huang, Daniel Kroening
The black-box nature of deep neural networks (DNNs) makes it impossible to understand why a particular output is produced, creating demand for "Explainable AI".
no code implementations • WS 2019 • Shuai Chen, Yuanhang Huang, Xiaowei Huang, Haoming Qin, Jun Yan, Buzhou Tang
This is the system description of the Harbin Institute of Technology Shenzhen (HITSZ) team for the first and second subtasks of the fourth Social Media Mining for Health Applications (SMM4H) shared task in 2019.
1 code implementation • 20 Jun 2019 • Wei Huang, Youcheng Sun, Xiaowei Huang, James Sharp
Recurrent neural networks (RNNs) have been widely applied to various sequential tasks such as text processing, video recognition, and molecular property prediction.
no code implementations • 26 Feb 2019 • Jianlin Li, Pengfei Yang, Jiangchao Liu, Liqian Chen, Xiaowei Huang, Lijun Zhang
Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs.
no code implementations • 18 Dec 2018 • Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
In the past few years, significant progress has been made on deep neural networks (DNNs) in achieving human-level performance on several long-standing tasks.
1 code implementation • 10 Jul 2018 • Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
In this paper, we study two variants of pointwise robustness, the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations.
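A conceptual sketch of how a maximum safe radius can be approximated by bisection, given an oracle that decides robustness within a radius (in practice, a verifier supplies the "safe" answers and an attack the "unsafe" ones); `is_robust_at` is an assumed interface:

```python
def max_safe_radius(is_robust_at, x, hi: float, tol: float = 1e-3) -> float:
    """Bisect over the radius: `is_robust_at(x, r)` is an assumed oracle
    reporting whether no adversarial example exists within distance r of x."""
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_robust_at(x, mid):
            lo = mid   # verified safe up to mid
        else:
            hi = mid   # an adversarial example exists within mid
    return lo

# Toy oracle: pretend the true maximum safe radius is 0.3.
print(max_safe_radius(lambda x, r: r < 0.3, x=None, hi=1.0))  # ~0.3
```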
2 code implementations • 6 May 2018 • Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
Verifying correctness of deep neural networks (DNNs) is challenging.
2 code implementations • 30 Apr 2018 • Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening
Concolic testing combines program execution and symbolic analysis to explore the execution paths of a software program.
2 code implementations • 16 Apr 2018 • Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska
In this paper we focus on the $L_0$ norm and aim to compute, for a trained DNN and an input, the maximal radius of a safe norm ball around the input within which there are no adversarial examples.
no code implementations • 10 Mar 2018 • Youcheng Sun, Xiaowei Huang, Daniel Kroening, James Sharp, Matthew Hill, Rob Ashmore
In this paper, inspired by the MC/DC coverage criterion, we propose a family of four novel test criteria that are tailored to structural features of DNNs and their semantics.
no code implementations • 21 Oct 2017 • Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska
In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge.
2 code implementations • 21 Oct 2016 • Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu
Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations.
no code implementations • 18 Apr 2016 • Xiaowei Huang, Ji Ruan, Qingliang Chen, Kaile Su
Social norms are a powerful formalism for coordinating autonomous agents' behaviour to achieve certain objectives.