Search Results for author: Stanley Bak

Found 13 papers, 5 papers with code

The Fourth International Verification of Neural Networks Competition (VNN-COMP 2023): Summary and Results

2 code implementations • 28 Dec 2023 • Christopher Brix, Stanley Bak, Changliu Liu, Taylor T. Johnson

This report summarizes the 4th International Verification of Neural Networks Competition (VNN-COMP 2023), held as a part of the 6th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was collocated with the 35th International Conference on Computer-Aided Verification (CAV).

On the Difficulty of Intersection Checking with Polynomial Zonotopes

no code implementations • 17 May 2023 • Yushen Huang, Ertai Luo, Stanley Bak, Yifan Sun

The standard method for intersection checking with polynomial zonotopes is a two-part algorithm that overapproximates a polynomial zonotope with a regular zonotope and then, if the overapproximation error is deemed too large, splits the set and recursively tries again.

Motion Planning
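
The abstract above outlines a split-and-refine loop. Below is a minimal, self-contained sketch of that control flow, using a toy polynomial-zonotope class restricted to 0/1 exponents and a box enclosure standing in for the regular-zonotope overapproximation; it illustrates the idea only and is not the authors' algorithm or any library's API.

```python
import numpy as np

class PolyZonotope:
    """Toy polynomial zonotope {c + sum_i (prod_k a_k^E[k, i]) * G[:, i] : a in [-1, 1]^p}.

    For brevity, exponent entries are restricted to {0, 1}; real polynomial
    zonotopes allow arbitrary natural-number exponents.
    """
    def __init__(self, c, G, E):
        self.c = np.asarray(c, dtype=float)
        self.G = np.asarray(G, dtype=float)
        self.E = np.asarray(E, dtype=int)

def box_enclosure(pz):
    # Every monomial lies in [-1, 1], so a box (standing in here for the
    # regular-zonotope overapproximation mentioned in the abstract) is
    # c +/- sum_i |g_i|.
    r = np.abs(pz.G).sum(axis=1)
    return pz.c - r, pz.c + r

def split_factor(pz, k=0):
    # Split dependent factor a_k into [-1, 0] and [0, 1] by substituting
    # a_k = off + 0.5 * b with b in [-1, 1] (valid because exponents are 0/1).
    halves = []
    for off in (-0.5, 0.5):
        c, cols = pz.c.copy(), []
        for g, e in zip(pz.G.T, pz.E.T):
            if e[k] == 0:
                cols.append((g, e))
                continue
            e0 = e.copy()
            e0[k] = 0
            if e0.any():
                cols.append((off * g, e0))   # off * remaining monomial
            else:
                c = c + off * g              # constant term folds into the center
            cols.append((0.5 * g, e))        # 0.5 * b * remaining monomial
        G = np.column_stack([g for g, _ in cols])
        E = np.column_stack([e for _, e in cols])
        halves.append(PolyZonotope(c, G, E))
    return halves

def may_intersect(pz, box_lo, box_hi, depth=8):
    """Split-and-refine check following the abstract: enclose, test, and if
    the test is inconclusive, split the set and recurse. False means the sets
    are provably disjoint; True means a possible intersection (or the
    refinement budget was exhausted)."""
    lo, hi = box_enclosure(pz)
    if np.any(hi < box_lo) or np.any(lo > box_hi):
        return False
    if depth == 0:
        return True
    left, right = split_factor(pz)
    return (may_intersect(left, box_lo, box_hi, depth - 1) or
            may_intersect(right, box_lo, box_hi, depth - 1))

# Example: the segment {(a, a) : a in [-1, 1]} vs. a box it never touches.
seg = PolyZonotope(c=[0.0, 0.0], G=[[1.0], [1.0]], E=[[1]])
print(may_intersect(seg, np.array([0.5, -0.5]), np.array([1.0, -0.2])))  # False
```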

First Three Years of the International Verification of Neural Networks Competition (VNN-COMP)

no code implementations • 14 Jan 2023 • Christopher Brix, Mark Niklas Müller, Stanley Bak, Taylor T. Johnson, Changliu Liu

This paper presents a summary and meta-analysis of the first three iterations of the annual International Verification of Neural Networks Competition (VNN-COMP) held in 2020, 2021, and 2022.

Image Classification • reinforcement-learning +1

The Third International Verification of Neural Networks Competition (VNN-COMP 2022): Summary and Results

1 code implementation • 20 Dec 2022 • Mark Niklas Müller, Christopher Brix, Stanley Bak, Changliu Liu, Taylor T. Johnson

This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was collocated with the 34th International Conference on Computer-Aided Verification (CAV).

Provable Fairness for Neural Network Models using Formal Verification

no code implementations • 16 Dec 2022 • Giorgian Borca-Tasciuc, Xingzhi Guo, Stanley Bak, Steven Skiena

Machine learning models are increasingly deployed for critical decision-making tasks, making it important to verify that they do not contain gender or racial biases picked up from training data.

Decision Making • Fairness

Provably Safe Reinforcement Learning via Action Projection using Reachability Analysis and Polynomial Zonotopes

no code implementations • 19 Oct 2022 • Niklas Kochdumper, Hanna Krasowski, Xiao Wang, Stanley Bak, Matthias Althoff

While reinforcement learning produces very promising results for many applications, its main disadvantage is the lack of safety guarantees, which prevents its use in safety-critical systems.

reinforcement-learning • Reinforcement Learning (RL) +1

Open- and Closed-Loop Neural Network Verification using Polynomial Zonotopes

no code implementations • 6 Jul 2022 • Niklas Kochdumper, Christian Schilling, Matthias Althoff, Stanley Bak

We present a novel approach to efficiently compute tight non-convex enclosures of the image through neural networks with ReLU, sigmoid, or hyperbolic tangent activation functions.
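
As context for the enclosure problem this abstract refers to, here is a hedged baseline: plain interval propagation of an input box through a small network with ReLU, sigmoid, or tanh layers. It yields a valid but convex and often loose enclosure; the paper's contribution is much tighter non-convex enclosures via polynomial zonotopes, which this sketch does not implement.

```python
import numpy as np

def interval_forward(layers, lo, hi):
    """Propagate an input box through a small MLP, layer by layer.

    `layers` is a list of (W, b, activation_name) tuples. This is only a
    coarse convex baseline for illustration, not the paper's method.
    """
    act = {"relu": lambda x: np.maximum(x, 0.0),
           "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
           "tanh": np.tanh}
    for W, b, name in layers:
        # Affine image of a box: split W into its positive and negative parts.
        Wp, Wn = np.clip(W, 0.0, None), np.clip(W, None, 0.0)
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        # ReLU, sigmoid, and tanh are monotone, so applying them to the
        # bounds yields valid (if loose) output bounds.
        lo, hi = act[name](new_lo), act[name](new_hi)
    return lo, hi

# Example: a random 2-2-1 network and the input box [-1, 1]^2.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(2, 2)), np.zeros(2), "relu"),
          (rng.normal(size=(1, 2)), np.zeros(1), "tanh")]
print(interval_forward(layers, np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```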

The Second International Verification of Neural Networks Competition (VNN-COMP 2021): Summary and Results

3 code implementations • 31 Aug 2021 • Stanley Bak, Changliu Liu, Taylor Johnson

This report summarizes the second International Verification of Neural Networks Competition (VNN-COMP 2021), held as a part of the 4th Workshop on Formal Methods for ML-Enabled Autonomous Systems that was collocated with the 33rd International Conference on Computer-Aided Verification (CAV).

NNV: The Neural Network Verification Tool for Deep Neural Networks and Learning-Enabled Cyber-Physical Systems

no code implementations • 12 Apr 2020 • Hoang-Dung Tran, Xiaodong Yang, Diego Manzanas Lopez, Patrick Musau, Luan Viet Nguyen, Weiming Xiang, Stanley Bak, Taylor T. Johnson

For learning-enabled CPS, such as closed-loop control systems incorporating neural networks, NNV provides exact and over-approximate reachability analysis schemes for linear plant models and FFNN controllers with piecewise-linear activation functions, such as ReLUs.
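
As a rough illustration of what closed-loop reachability for a linear plant with a ReLU feed-forward controller involves, here is an interval-arithmetic sketch. It is only an assumed stand-in for illustration: NNV is a MATLAB tool built on star-set analyses, and none of the function names below are its API.

```python
import numpy as np

def box_affine(A, lo, hi, offset=0.0):
    """Tight box enclosure of {A @ x + offset : lo <= x <= hi}."""
    Ap, An = np.clip(A, 0.0, None), np.clip(A, None, 0.0)
    return Ap @ lo + An @ hi + offset, Ap @ hi + An @ lo + offset

def closed_loop_reach(A, B, controller, x_lo, x_hi, steps=10):
    """Over-approximate reachable boxes of x+ = A x + B u with u = net(x).

    `controller` is a list of (W, b) layers with ReLU on all but the last.
    This interval scheme is only a crude substitute for the exact and
    over-approximate schemes that NNV itself implements.
    """
    boxes = [(x_lo, x_hi)]
    for _ in range(steps):
        lo, hi = boxes[-1]
        # Bound the controller output over the current state box.
        u_lo, u_hi = lo, hi
        for i, (W, b) in enumerate(controller):
            u_lo, u_hi = box_affine(W, u_lo, u_hi, b)
            if i < len(controller) - 1:      # ReLU on hidden layers only
                u_lo, u_hi = np.maximum(u_lo, 0.0), np.maximum(u_hi, 0.0)
        # Propagate state and control boxes through the linear plant.
        ax_lo, ax_hi = box_affine(A, lo, hi)
        bu_lo, bu_hi = box_affine(B, u_lo, u_hi)
        boxes.append((ax_lo + bu_lo, ax_hi + bu_hi))
    return boxes

# Example: a discretized double integrator with a random ReLU controller.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
rng = np.random.default_rng(1)
ctrl = [(rng.normal(size=(4, 2)), np.zeros(4)),
        (rng.normal(size=(1, 4)), np.zeros(1))]
print(closed_loop_reach(A, B, ctrl, np.array([-0.1, -0.1]),
                        np.array([0.1, 0.1]), steps=3)[-1])
```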

Verification of Deep Convolutional Neural Networks Using ImageStars

2 code implementations • 12 Apr 2020 • Hoang-Dung Tran, Stanley Bak, Weiming Xiang, Taylor T. Johnson

Set-based analysis methods can detect or prove the absence of bounded adversarial attacks, which can then be used to evaluate the effectiveness of neural network training methodology.

Image Classification • Pose Estimation +1
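
The abstract above explains that set-based analysis can prove the absence of bounded adversarial attacks. The sketch below shows how such a certificate is checked once output bounds are available from any sound set-based analysis; the bound values here are hypothetical placeholders, not results from the paper's ImageStar method.

```python
import numpy as np

def provably_robust(logit_lo, logit_hi, true_class):
    """Certificate check: if the lower bound of the true-class logit exceeds
    the upper bound of every other logit over the whole perturbation set,
    then no attack inside that set can change the classification. False
    only means the bounds are inconclusive, not that an attack exists."""
    others = np.delete(logit_hi, true_class)
    return bool(logit_lo[true_class] > np.max(others))

# Hypothetical logit bounds produced by some sound set-based analysis of a
# classifier over an L-infinity ball around one input image.
lo = np.array([2.1, -0.3, 0.8])
hi = np.array([3.0,  0.4, 1.9])
print(provably_robust(lo, hi, true_class=0))  # True: the prediction is certified
```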
