no code implementations • 18 Feb 2024 • Yejiang Yang, Zihao Mo, Hoang-Dung Tran, Weiming Xiang
This paper proposes a transition system abstraction framework for neural network dynamical system models to enhance model interpretability, with applications to complex dynamical systems such as human behavior learning and verification.
no code implementations • 18 Feb 2024 • Zihao Mo, Yejiang Yang, Shuaizheng Lu, Weiming Xiang
Based on the computed output discrepancy, the repair method first constructs a new training set for the compressed network to narrow the discrepancy between the two neural networks and improve the compressed network's performance.
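A minimal sketch of this repair idea: sample the input domain, keep the inputs where the compressed network deviates most from the original, and label them with the original network's outputs to form a retraining set. The function name `build_repair_set` and the sampling strategy are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def build_repair_set(f_orig, f_comp, input_low, input_high,
                     n_samples=1000, top_k=200):
    """Sample inputs, keep those where the compressed network deviates most
    from the original, and label them with the original network's outputs.
    (Hypothetical sketch; the paper's method may differ.)"""
    rng = np.random.default_rng(0)
    X = rng.uniform(input_low, input_high, size=(n_samples, len(input_low)))
    gap = np.linalg.norm(f_orig(X) - f_comp(X), axis=1)  # per-sample discrepancy
    worst = np.argsort(gap)[-top_k:]                     # largest-discrepancy inputs
    return X[worst], f_orig(X[worst])                    # retraining pairs (x, y_orig)
```

The retraining pairs can then be fed to any standard fine-tuning loop for the compressed network.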
no code implementations • 26 Apr 2023 • Yejiang Yang, Zihao Mo, Weiming Xiang
Then, a collection of computationally efficient, small-scale neural networks is trained as local dynamical descriptions for their corresponding topologies.
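The partition-then-fit-locally idea can be sketched in a few lines: divide a one-dimensional state space into cells and fit a model per cell. Here least-squares linear fits stand in for the paper's small neural networks, purely to keep the example dependency-free; `fit_local_models` is a hypothetical name.

```python
import numpy as np

def fit_local_models(X, Y, n_cells):
    """Partition a 1-D state space into cells and fit a least-squares linear
    model per cell (the paper trains small NNs per cell; linear fits here
    are a stand-in to keep the sketch self-contained)."""
    edges = np.linspace(X.min(), X.max(), n_cells + 1)
    models = {}
    for i in range(n_cells):
        mask = (X >= edges[i]) & (X <= edges[i + 1])
        A = np.column_stack([X[mask], np.ones(mask.sum())])
        coef, *_ = np.linalg.lstsq(A, Y[mask], rcond=None)
        models[i] = coef                     # (slope, intercept) for cell i
    return edges, models
```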
no code implementations • 26 Apr 2023 • Wesley Cooke, Zihao Mo, Weiming Xiang
Neural network model compression techniques can address the computation issue of deep neural networks on embedded devices in industrial systems.
no code implementations • 17 Jan 2023 • Weiming Xiang, Zhongzhu Shao
A reachability-based algorithm is proposed to accurately compute the model reduction precision.
no code implementations • 2 Feb 2022 • Weiming Xiang, Zhongzhu Shao
In this paper, we propose a concept of approximate bisimulation relation for feedforward neural networks.
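An approximate bisimulation relation between two networks is, roughly, a certified bound on how far their outputs can diverge over a shared input set. The sketch below only *estimates* that bound by sampling; a true approximate-bisimulation certificate requires reachability analysis, not sampling. Both function names are illustrative assumptions.

```python
import numpy as np

def relu_net(weights, biases, x):
    """Evaluate a feedforward ReLU network on a batch of inputs."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(a @ W + b, 0.0)       # hidden layers use ReLU
    return a @ weights[-1] + biases[-1]      # linear output layer

def sampled_discrepancy(net1, net2, low, high, n=10000):
    """Empirical estimate of max ||net1(x) - net2(x)|| over the input box.
    Sampling only lower-bounds the true discrepancy; a sound bound needs
    set-based reachability."""
    rng = np.random.default_rng(1)
    X = rng.uniform(low, high, size=(n, len(low)))
    return float(np.max(np.linalg.norm(net1(X) - net2(X), axis=1)))
```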
no code implementations • 27 Jul 2021 • Yejiang Yang, Weiming Xiang
In this paper, a robust optimization framework is developed to train shallow neural networks based on reachability analysis of neural networks.
no code implementations • 26 Apr 2020 • Weiming Xiang, Hoang-Dung Tran, Xiaodong Yang, Taylor T. Johnson
Then, in combination with reachability methods developed for various dynamical system classes modeled by ordinary differential equations, a recursive algorithm is developed for over-approximating the reachable set of the closed-loop system.
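The recursion alternates two over-approximation steps: bound the controller's output over the current state set, then push the state set through the plant dynamics. A coarse interval/Euler sketch of that loop follows; `nn_bounds` and `f_interval` are assumed user-supplied interval over-approximations, and a sound tool would additionally account for ODE discretization error.

```python
import numpy as np

def closed_loop_reach(nn_bounds, f_interval, x_low, x_high, steps, dt):
    """Recursively over-approximate reachable intervals of x' = f(x, u),
    u = NN(x). `nn_bounds` maps a state interval to controller output
    bounds; `f_interval` maps state/input intervals to vector-field bounds.
    Plain Euler steps: a sketch, not a sound reachability algorithm."""
    tube = [(x_low.copy(), x_high.copy())]
    for _ in range(steps):
        u_low, u_high = nn_bounds(x_low, x_high)             # controller bounds
        dx_low, dx_high = f_interval(x_low, x_high, u_low, u_high)
        x_low = x_low + dt * dx_low                          # interval Euler step
        x_high = x_high + dt * dx_high
        tube.append((x_low, x_high))
    return tube
```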
2 code implementations • 12 Apr 2020 • Hoang-Dung Tran, Stanley Bak, Weiming Xiang, Taylor T. Johnson
Set-based analysis methods can detect or prove the absence of bounded adversarial attacks, which can then be used to evaluate the effectiveness of neural network training methodology.
no code implementations • 12 Apr 2020 • Hoang-Dung Tran, Xiaodong Yang, Diego Manzanas Lopez, Patrick Musau, Luan Viet Nguyen, Weiming Xiang, Stanley Bak, Taylor T. Johnson
For learning-enabled CPS, such as closed-loop control systems incorporating neural networks, NNV provides exact and over-approximate reachability analysis schemes for linear plant models and FFNN controllers with piecewise-linear activation functions, such as ReLUs.
1 code implementation • 2 Mar 2020 • Xiaodong Yang, Hoang-Dung Tran, Weiming Xiang, Taylor Johnson
To address this challenge, we propose a parallelizable technique to compute the exact reachable set of a neural network for a given input set.
1 code implementation • 14 Dec 2018 • Weiming Xiang, Hoang-Dung Tran, Taylor T. Johnson
Because such feedforward networks are memoryless, they can be represented abstractly as mathematical functions, and reachability analysis of the neural network reduces to interval analysis problems.
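The interval-analysis view can be sketched directly: propagate an input box layer by layer, bounding each affine map through the center/radius form and applying ReLU monotonically. This is plain interval bound propagation, a simpler (and looser) relative of the specification-guided methods in the paper; `interval_reach` is a hypothetical name.

```python
import numpy as np

def interval_reach(Ws, bs, low, high):
    """Propagate an input interval [low, high] through a feedforward ReLU
    network, yielding an over-approximation of the output range."""
    for i, (W, b) in enumerate(zip(Ws, bs)):
        center = (low + high) / 2.0
        radius = (high - low) / 2.0
        c = center @ W + b                   # affine image of the interval center
        r = radius @ np.abs(W)               # worst-case spread through |W|
        low, high = c - r, c + r
        if i < len(Ws) - 1:                  # ReLU on hidden layers only
            low, high = np.maximum(low, 0.0), np.maximum(high, 0.0)
    return low, high
```

Because ReLU is monotone, applying it to the interval endpoints is exact per layer; looseness comes only from the interval abstraction of the affine maps.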
1 code implementation • 3 Oct 2018 • Weiming Xiang, Patrick Musau, Ayana A. Wild, Diego Manzanas Lopez, Nathaniel Hamilton, Xiaodong Yang, Joel Rosenfeld, Taylor T. Johnson
This survey presents an overview of verification techniques for autonomous systems, with a focus on safety-critical autonomous cyber-physical systems (CPS) and subcomponents thereof.
1 code implementation • 25 May 2018 • Weiming Xiang, Taylor T. Johnson
This paper develops methods for estimating the reachable set and verifying safety properties of dynamical systems under control of neural network-based controllers that may be implemented in embedded software.
no code implementations • 21 Dec 2017 • Weiming Xiang, Hoang-Dung Tran, Taylor T. Johnson
Due to the complicated, nonlinear, non-convex nature of neural networks, formal safety guarantees for their output behaviors are crucial for applications in safety-critical systems. In this paper, the output reachable set computation and safety verification problems are addressed for a class of neural networks with Rectified Linear Unit (ReLU) activation functions.
no code implementations • 9 Aug 2017 • Weiming Xiang, Hoang-Dung Tran, Taylor T. Johnson
In this paper, the output reachable set estimation and safety verification problems for multi-layer perceptron neural networks are addressed.