no code implementations • ECCV 2020 • Yanda Meng, Wei Meng, Dongxu Gao, Yitian Zhao, Xiaoyun Yang, Xiaowei Huang, Yalin Zheng
In particular, thanks to the proposed aggregation GCN, our network benefits both from direct feature learning of the instances' boundary locations and from the propagation of spatial information across the image.
no code implementations • 19 May 2023 • Xiaowei Huang, Wenjie Ruan, Wei Huang, Gaojie Jin, Yi Dong, Changshun Wu, Saddek Bensalem, Ronghui Mu, Yi Qi, Xingyu Zhao, Kaiwen Cai, Yanghao Zhang, Sihao Wu, Peipei Xu, Dengyu Wu, Andre Freitas, Mustafa A. Mustafa
Given the fast development of LLMs, this survey is not intended to be complete (although it includes 300 references), especially when it comes to the applications of LLMs in various domains; rather, it is a collection of organised literature reviews and discussions that supports a quick understanding of safety and trustworthiness issues from the perspective of V&V.
no code implementations • 3 Apr 2023 • Chi Zhang, Wenjie Ruan, Fu Wang, Peipei Xu, Geyong Min, Xiaowei Huang
Verification plays an essential role in the formal analysis of safety-critical systems.
1 code implementation • 3 Apr 2023 • Yi Qi, Xingyu Zhao, Xiaowei Huang
While LLMs are being quickly applied to many AI application domains, we are interested in the following question: Can safety analysis for safety-critical systems make use of LLMs?
1 code implementation • CVPR 2023 • Gaojie Jin, Xinping Yi, Dengyu Wu, Ronghui Mu, Xiaowei Huang
The randomized weights enable our design of a novel adversarial training method via a Taylor expansion of a small Gaussian noise term, and we show that the new adversarial training method can flatten the loss landscape and find flat minima.
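A minimal sketch of the core idea, assuming the randomized weights are simulated by temporarily adding small Gaussian noise to the parameters during an otherwise standard PGD-based adversarial training step (the paper's exact Taylor-expansion objective is not reproduced here; all names and hyperparameters are illustrative):

```python
import torch

def adv_train_step(model, x, y, loss_fn, opt,
                   eps=8/255, alpha=2/255, steps=7, sigma=1e-3):
    """One adversarial training step with small Gaussian weight noise (sketch)."""
    # Craft a PGD adversarial example around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    # Temporarily perturb the weights with small Gaussian noise, mimicking
    # training over randomized weights.
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            n = sigma * torch.randn_like(p)
            p.add_(n)
            noises.append(n)
    opt.zero_grad()
    loss = loss_fn(model(x_adv.detach()), y)
    loss.backward()
    with torch.no_grad():  # restore the original weights before the update
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)
    opt.step()
    return loss.item()
```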
no code implementations • 3 Feb 2023 • Yi Dong, Zhongguo Li, Xingyu Zhao, Zhengtao Ding, Xiaowei Huang
Then, based on the distributed optimisation algorithm, an output regulation method is utilised to solve the optimal coordination problem for general linear dynamic systems.
1 code implementation • 29 Jan 2023 • Fu Wang, Peipei Xu, Wenjie Ruan, Xiaowei Huang
Deep neural networks (DNNs) are known to be vulnerable to adversarial geometric transformations.
1 code implementation • 23 Jan 2023 • Dengyu Wu, Gaojie Jin, Han Yu, Xinping Yi, Xiaowei Huang
Two novel optimisation techniques are presented to achieve AOI-SNNs: a regularisation and a cutoff.
1 code implementation • 31 Oct 2022 • Tiehua Zhang, Yuze Liu, Yao Yao, Youhua Xia, Xin Chen, Xiaowei Huang, Jiong Jin
Heterogeneous graph neural networks have shown great potential in graph representation learning and superior performance on downstream tasks such as node classification and clustering.
no code implementations • 26 Oct 2022 • Yi Dong, Xingyu Zhao, Sen Wang, Xiaowei Huang
Deep Reinforcement Learning (DRL) has achieved impressive performance in robotics and autonomous systems (RASs).
1 code implementation • 29 Sep 2022 • Kaiwen Cai, Chris Xiaoxuan Lu, Xiaowei Huang
In this work, we present CUE, a novel uncertainty estimation method for dense prediction tasks in 3D point clouds.
1 code implementation • 19 Aug 2022 • Wei Huang, Xingyu Zhao, Gaojie Jin, Xiaowei Huang
Interpretability of Deep Learning (DL) models is arguably a key barrier to trustworthy AI.
no code implementations • 1 Aug 2022 • Yi Dong, Yang Chen, Xingyu Zhao, Xiaowei Huang
With the deployment of smart meters, retailers can collect massive amounts of data on consumer behaviour.
no code implementations • 7 Jun 2022 • Tiehua Zhang, Yuze Liu, Zhishu Shen, Rui Xu, Xin Chen, Xiaowei Huang, Xi Zheng
Spatial-temporal data contains rich information and has been widely studied in recent years due to the rapid development of relevant applications in many fields.
1 code implementation • 17 May 2022 • Wei Huang, Xingyu Zhao, Alec Banks, Victoria Cox, Xiaowei Huang
In this paper, we propose a new robustness testing approach for detecting AEs that considers both the input distribution and the perceptual quality of inputs.
1 code implementation • CVPR 2022 • Gaojie Jin, Xinping Yi, Wei Huang, Sven Schewe, Xiaowei Huang
In this paper, we show that treating model weights as random variables allows for enhancing adversarial training through Second-Order Statistics Optimization (S²O) with respect to the weights.
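A minimal sketch of treating weights as random variables via a Gaussian reparameterisation, where both the mean and the standard deviation of each weight are trainable; this only illustrates the general setup, not the specific S²O objective:

```python
import torch
import torch.nn as nn

class RandomWeightLinear(nn.Module):
    """Linear layer whose weights are Gaussian random variables w = mu + sigma * eps."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(0.01 * torch.randn(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -5.0))
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        # Sample fresh weights on every forward pass (reparameterisation trick),
        # so both first- and second-order weight statistics receive gradients.
        eps = torch.randn_like(self.mu)
        w = self.mu + self.log_sigma.exp() * eps
        return x @ w.t() + self.bias
```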
no code implementations • 9 Mar 2022 • Yanda Meng, Xu Chen, Dongxu Gao, Yitian Zhao, Xiaoyun Yang, Yihong Qiao, Xiaowei Huang, Yalin Zheng
In this paper, we propose a novel multi-level aggregation network to regress the coordinates of the vertices of a 3D face from a single 2D image in an end-to-end manner.
1 code implementation • 8 Mar 2022 • Yanda Meng, Joshua Bridge, Meng Wei, Yitian Zhao, Yihong Qiao, Xiaoyun Yang, Xiaowei Huang, Yalin Zheng
This paper proposes an adaptive auxiliary-task learning approach for object counting problems.
1 code implementation • 3 Mar 2022 • Kaiwen Cai, Chris Xiaoxuan Lu, Xiaowei Huang
Then, supervised by the pretrained teacher net, a student net with an additional variance branch is trained to finetune the embedding priors and estimate the uncertainty sample by sample.
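A minimal sketch of the distillation step, assuming the student regresses the teacher's embedding under a Gaussian negative log-likelihood so that its extra branch learns a per-sample variance (names are illustrative, not the authors' code):

```python
import torch

def uncertainty_distill_loss(student_mu, student_log_var, teacher_emb):
    """Heteroscedastic regression loss: the squared error is scaled by the
    predicted inverse variance, plus a log-variance penalty. Samples the
    student cannot fit end up with high predicted variance (uncertainty)."""
    inv_var = torch.exp(-student_log_var)
    return (inv_var * (student_mu - teacher_emb) ** 2 + student_log_var).mean()
```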
no code implementations • 23 Jan 2022 • Gaojie Jin, Xinping Yi, Pengfei Yang, Lijun Zhang, Sven Schewe, Xiaowei Huang
While dropout is known to be a successful regularization technique, insights into the mechanisms that lead to this success are still lacking.
no code implementations • 22 Jan 2022 • Gaojie Jin, Xinping Yi, Xiaowei Huang
This paper proposes to study neural networks through neuronal correlation, a statistical measure of correlated neuronal activity on the penultimate layer.
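A minimal sketch, assuming the measure is summarised by pairwise Pearson correlations between penultimate-layer units over a batch (the paper's precise definition may differ):

```python
import numpy as np

def neuronal_correlation(acts):
    """acts: (n_samples, n_neurons) penultimate-layer activations.
    Returns the mean absolute pairwise Pearson correlation between neurons."""
    corr = np.corrcoef(acts, rowvar=False)               # (n_neurons, n_neurons)
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]  # drop self-correlations
    return np.abs(off_diag).mean()

acts = np.random.randn(512, 64)  # e.g. 512 samples, 64 penultimate neurons
print(neuronal_correlation(acts))
```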
no code implementations • 29 Dec 2021 • Tiehua Zhang, Yuze Liu, Xin Chen, Xiaowei Huang, Feng Zhu, Xi Zheng
Graph representation learning has drawn increasing attention in recent years, especially for learning low-dimensional embeddings at both the node and graph levels for classification and recommendation tasks.
no code implementations • 30 Nov 2021 • Yi Dong, Wei Huang, Vibhav Bharti, Victoria Cox, Alec Banks, Sen Wang, Xingyu Zhao, Sven Schewe, Xiaowei Huang
The increasing use of Machine Learning (ML) components embedded in autonomous systems -- so-called Learning-Enabled Systems (LESs) -- has resulted in the pressing need to assure their functional safety.
1 code implementation • 27 Oct 2021 • Yanda Meng, Hongrun Zhang, Dongxu Gao, Yitian Zhao, Xiaoyun Yang, Xuesheng Qian, Xiaowei Huang, Yalin Zheng
Our model is well-suited to obtaining global semantic region information while simultaneously accommodating local spatial boundary characteristics.
1 code implementation • 14 Sep 2021 • Yi Dong, Xingyu Zhao, Xiaowei Huang
While Deep Reinforcement Learning (DRL) provides transformational capabilities to the control of Robotics and Autonomous Systems (RAS), the black-box nature of DRL and uncertain deployment environments of RAS pose new challenges on its dependability.
no code implementations • 24 Aug 2021 • Wenjie Ruan, Xinping Yi, Xiaowei Huang
This tutorial aims to introduce the fundamentals of adversarial robustness of deep learning, presenting a well-structured review of up-to-date techniques to assess the vulnerability of various types of deep learning models to adversarial examples.
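As a concrete instance of the kind of vulnerability such a tutorial covers, here is a minimal Fast Gradient Sign Method (FGSM) attack; this is a standard textbook method, not material specific to the tutorial:

```python
import torch

def fgsm(model, x, y, loss_fn, eps=8/255):
    """One-step adversarial perturbation in the direction of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```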
1 code implementation • ICCV 2021 • Yanda Meng, Hongrun Zhang, Yitian Zhao, Xiaoyun Yang, Xuesheng Qian, Xiaowei Huang, Yalin Zheng
Semi-supervised approaches for crowd counting attract attention because the fully supervised paradigm is expensive and laborious, requiring a large number of images of dense crowd scenarios together with their annotations.
1 code implementation • 2 Jun 2021 • Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven Schewe, Xiaowei Huang
The utilisation of Deep Learning (DL) is advancing into increasingly more sophisticated applications.
no code implementations • 29 Apr 2021 • Weizhu Qian, Bowei Chen, Xiaowei Huang
We propose a new approach to train a variational information bottleneck (VIB) that improves its robustness to adversarial perturbations.
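A minimal sketch of a standard VIB objective (cross-entropy plus a β-weighted KL term pulling the stochastic encoding towards a standard normal prior); the paper's robustness-oriented training variant is not reproduced here:

```python
import torch
import torch.nn.functional as F

def vib_loss(logits, y, mu, log_var, beta=1e-3):
    """Variational Information Bottleneck: fit the labels while compressing the
    encoding z ~ N(mu, diag(exp(log_var))) towards N(0, I)."""
    ce = F.cross_entropy(logits, y)
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(dim=1).mean()
    return ce + beta * kl
```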
no code implementations • 13 Apr 2021 • Xingyu Zhao, Wei Huang, Sven Schewe, Yi Dong, Xiaowei Huang
The utilisation of Deep Learning (DL) raises new challenges regarding its dependability in critical applications.
1 code implementation • 5 Mar 2021 • Nicolas Berthier, Amany Alshareef, James Sharp, Sven Schewe, Xiaowei Huang
Intensive research has been conducted on the verification and validation of deep neural networks (DNNs), aiming to understand if, and how, DNNs can be applied to safety-critical applications.
1 code implementation • 1 Mar 2021 • Dengyu Wu, Xinping Yi, Xiaowei Huang
In this paper, we argue that this trend of "energy for accuracy" is not necessary -- a little energy can go a long way towards achieving near-zero accuracy loss.
2 code implementations • 5 Dec 2020 • Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn
Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research.
1 code implementation • NeurIPS 2020 • Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang
This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
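A minimal sketch, assuming weight correlation for a fully connected layer is the average cosine similarity between the weight vectors of its neurons (close in spirit to the paper's notion, though the exact definition may differ):

```python
import numpy as np

def weight_correlation(W):
    """W: (n_neurons, n_inputs) weight matrix of one layer. Returns the average
    absolute cosine similarity between distinct neurons' weight vectors."""
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)    # unit-normalise rows
    cos = Wn @ Wn.T                                      # pairwise cosine similarities
    off_diag = cos[~np.eye(cos.shape[0], dtype=bool)]    # drop self-similarities
    return np.abs(off_diag).mean()
```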
2 code implementations • 16 Oct 2020 • Wei Huang, Xingyu Zhao, Xiaowei Huang
Meanwhile, with the increasing use of machine learning models in security-critical applications, the embedding and extraction of malicious knowledge are equivalent to the notorious backdoor attack and its defence, respectively.
2 code implementations • 15 Oct 2020 • Yanghao Zhang, Wenjie Ruan, Fu Wang, Xiaowei Huang
Extensive experiments are conducted on the CIFAR-10 and ImageNet datasets with six deep neural network models, including GoogLeNet, VGG16/19, ResNet101/152, and DenseNet121.
1 code implementation • 13 Sep 2020 • Peipei Xu, Wenjie Ruan, Xiaowei Huang
In this paper, we define safety risks by requesting the alignment of the network's decision with human perception.
no code implementations • 10 Sep 2020 • Zhixuan Xu, Minghui Qian, Xiaowei Huang, Jie Meng
In this paper, we propose CasGCN, a novel deep learning architecture for cascade growth prediction that employs a graph convolutional network to extract structural features from a graph input and then applies an attention mechanism to both the extracted features and the temporal information before predicting cascade size.
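A minimal sketch of the two main ingredients described above, one graph convolution followed by attention pooling into a graph-level prediction (shapes and names are illustrative, not the authors' architecture):

```python
import torch
import torch.nn as nn

class TinyCascadeNet(nn.Module):
    def __init__(self, d_in, d_hid):
        super().__init__()
        self.gcn = nn.Linear(d_in, d_hid)   # one GCN layer's weight
        self.att = nn.Linear(d_hid, 1)      # attention scorer over nodes
        self.out = nn.Linear(d_hid, 1)      # cascade-size regressor

    def forward(self, A_hat, X):
        # A_hat: (n, n) normalised adjacency; X: (n, d_in) node features
        H = torch.relu(self.gcn(A_hat @ X))    # structural node features
        a = torch.softmax(self.att(H), dim=0)  # per-node attention weights
        g = (a * H).sum(dim=0)                 # attention-weighted graph embedding
        return self.out(g)                     # predicted cascade growth
```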
no code implementations • 23 Jul 2020 • Peter Stringer, Rafael C. Cardoso, Xiaowei Huang, Louise A. Dennis
Long-term autonomy requires autonomous systems to adapt as their capabilities no longer perform as expected.
no code implementations • 10 Jul 2020 • João Batista Pereira Matos Júnior, Lucas Carvalho Cordeiro, Marcelo d'Amorim, Xiaowei Huang
Algorithmically, DAEGEN uses a local search-based optimization algorithm to find DIAEs by iteratively perturbing an input so as to maximize the difference between the two models' predictions on that input.
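A minimal sketch of the local-search idea, assuming disagreement is measured as the distance between the two models' outputs and the search is random hill climbing within a perturbation ball (the actual DAEGEN algorithm is more sophisticated):

```python
import torch

@torch.no_grad()
def local_search_diae(f1, f2, x, steps=200, eps=8/255, step_size=1/255):
    """Hill-climb a perturbation of x that maximises |f1(x') - f2(x')|."""
    best = x.clone()
    best_gap = torch.norm(f1(best) - f2(best))
    for _ in range(steps):
        cand = best + step_size * torch.randn_like(x).sign()   # random move
        cand = cand.clamp(x - eps, x + eps).clamp(0, 1)        # stay in the ball
        gap = torch.norm(f1(cand) - f2(cand))
        if gap > best_gap:                                     # keep improving moves
            best, best_gap = cand, gap
    return best
```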
no code implementations • 7 Mar 2020 • Xingyu Zhao, Alec Banks, James Sharp, Valentin Robu, David Flynn, Michael Fisher, Xiaowei Huang
Increasingly sophisticated mathematical modelling processes from Machine Learning are being used to analyse complex data.
no code implementations • 6 Feb 2020 • Youcheng Sun, Yifan Zhou, Simon Maskell, James Sharp, Xiaowei Huang
However, it is unclear if and how the adversarial examples over learning components can affect the overall system-level reliability.
1 code implementation • 5 Nov 2019 • Wei Huang, Youcheng Sun, Xingyu Zhao, James Sharp, Wenjie Ruan, Jie Meng, Xiaowei Huang
The test metrics and test case generation algorithm are implemented in a tool, TestRNN, which is then evaluated on a set of LSTM benchmarks.
no code implementations • 22 Aug 2019 • Xingyu Zhao, Matt Osborne, Jenny Lantair, Valentin Robu, David Flynn, Xiaowei Huang, Michael Fisher, Fabio Papacchini, Angelo Ferrando
The battery is a key component of autonomous robots.
1 code implementation • 6 Aug 2019 • Youcheng Sun, Hana Chockler, Xiaowei Huang, Daniel Kroening
The black-box nature of deep neural networks (DNNs) makes it impossible to understand why a particular output is produced, creating demand for "Explainable AI".
no code implementations • WS 2019 • Shuai Chen, Yuanhang Huang, Xiaowei Huang, Haoming Qin, Jun Yan, Buzhou Tang
This is the system description of the Harbin Institute of Technology Shenzhen (HITSZ) team for the first and second subtasks of the fourth Social Media Mining for Health Applications (SMM4H) shared task in 2019.
1 code implementation • 20 Jun 2019 • Wei Huang, Youcheng Sun, Xiaowei Huang, James Sharp
Recurrent neural networks (RNNs) have been widely applied to various sequential tasks such as text processing, video recognition, and molecular property prediction.
no code implementations • 26 Feb 2019 • Jianlin Li, Pengfei Yang, Jiangchao Liu, Liqian Chen, Xiaowei Huang, Lijun Zhang
Several verification approaches have been developed to automatically prove or disprove safety properties of DNNs.
no code implementations • 18 Dec 2018 • Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
In the past few years, significant progress has been made on deep neural networks (DNNs) in achieving human-level performance on several long-standing tasks.
1 code implementation • 10 Jul 2018 • Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
In this paper, we study two variants of pointwise robustness, the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations.
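A minimal sketch of how the maximum safe radius can be bracketed empirically with an attack oracle: any radius at which the attack finds an adversarial example is an upper bound on the safe radius. The paper instead computes guaranteed bounds; `attack_succeeds` here is a hypothetical oracle:

```python
def max_safe_radius_bracket(attack_succeeds, lo=0.0, hi=1.0, iters=20):
    """Binary search for the largest radius r with no adversarial example found
    within distance r of the input (empirical and attack-dependent)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if attack_succeeds(mid):  # counterexample found: safe radius < mid
            hi = mid
        else:                     # none found up to mid: treat mid as safe
            lo = mid
    return lo, hi                 # (lower, upper) bracket on the safe radius
```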
2 code implementations • 6 May 2018 • Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
Verifying correctness of deep neural networks (DNNs) is challenging.
2 code implementations • 30 Apr 2018 • Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening
Concolic testing combines program execution and symbolic analysis to explore the execution paths of a software program.
2 code implementations • 16 Apr 2018 • Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska
In this paper we focus on the $L_0$ norm and aim to compute, for a trained DNN and an input, the maximal radius of a safe norm ball around the input within which there are no adversarial examples.
no code implementations • 10 Mar 2018 • Youcheng Sun, Xiaowei Huang, Daniel Kroening, James Sharp, Matthew Hill, Rob Ashmore
In this paper, inspired by the MC/DC coverage criterion, we propose a family of four novel test criteria that are tailored to structural features of DNNs and their semantics.
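For context, here is a minimal sketch of the simplest structural coverage measure for DNNs, plain neuron coverage; the paper's four MC/DC-inspired criteria additionally track sign and value changes between adjacent layers, which this sketch does not capture:

```python
import numpy as np

def neuron_coverage(layer_activations, threshold=0.0):
    """layer_activations: list of (n_samples, n_neurons) arrays, one per layer.
    A neuron counts as covered if it exceeds the threshold on any test input."""
    covered = total = 0
    for acts in layer_activations:
        covered += int((acts > threshold).any(axis=0).sum())
        total += acts.shape[1]
    return covered / total
```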
no code implementations • 21 Oct 2017 • Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska
In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge.
2 code implementations • 21 Oct 2016 • Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu
Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations.
no code implementations • 18 Apr 2016 • Xiaowei Huang, Ji Ruan, Qingliang Chen, Kaile Su
Social norms are a powerful formalism for coordinating autonomous agents' behaviour to achieve certain objectives.