Search Results for author: Chih-Hong Cheng

Found 32 papers, 2 papers with code

BAM: Box Abstraction Monitors for Real-time OoD Detection in Object Detection

no code implementations27 Mar 2024 Changshun Wu, WeiCheng He, Chih-Hong Cheng, Xiaowei Huang, Saddek Bensalem

Nevertheless, integrating OoD detection into state-of-the-art (SOTA) object detection DNNs poses significant challenges, partly due to the complexity introduced by SOTA OoD construction methods, which require modifying the DNN architecture and introducing complex loss functions.

Object Detection +2

EC-IoU: Orienting Safety for Object Detectors via Ego-Centric Intersection-over-Union

no code implementations20 Mar 2024 Brian Hsuan-Cheng Liao, Chih-Hong Cheng, Hasan Esen, Alois Knoll

This paper presents safety-oriented object detection via a novel Ego-Centric Intersection-over-Union (EC-IoU) measure, addressing practical concerns when applying state-of-the-art learning-based perception models in safety-critical domains such as autonomous driving.

Autonomous Driving, Object Detection +3

Instance-Level Safety-Aware Fidelity of Synthetic Data and Its Calibration

no code implementations10 Feb 2024 Chih-Hong Cheng, Paul Stöckel, Xingyu Zhao

Modeling and calibrating the fidelity of synthetic data is paramount in shaping the future of safe and reliable self-driving technology by offering a cost-effective and scalable alternative to real-world data collection.

Runtime Monitoring DNN-Based Perception

no code implementations6 Oct 2023 Chih-Hong Cheng, Michael Luttenberger, Rongjie Yan

Deep neural networks (DNNs) are instrumental in realizing complex perception systems.

Safeguarding Learning-based Control for Smart Energy Systems with Sampling Specifications

no code implementations11 Aug 2023 Chih-Hong Cheng, Venkatesh Prasad Venkataramanan, Pragya Kirti Gupta, Yun-Fei Hsu, Simon Burton

We study challenges using reinforcement learning in controlling energy systems, where apart from performance requirements, one has additional safety requirements such as avoiding blackouts.

Reinforcement Learning, Safe Reinforcement Learning

Safety Performance of Neural Networks in the Presence of Covariate Shift

no code implementations24 Jul 2023 Chih-Hong Cheng, Harald Ruess, Konstantinos Theodorou

The reshaped test set reflects the distribution of neuron activation values as observed during operation, and may therefore be used for re-evaluating safety performance in the presence of covariate shift.

What, Indeed, is an Achievable Provable Guarantee for Learning-Enabled Safety Critical Systems

no code implementations20 Jul 2023 Saddek Bensalem, Chih-Hong Cheng, Wei Huang, Xiaowei Huang, Changshun Wu, Xingyu Zhao

Machine learning has made remarkable advancements, but confidently utilising learning-enabled components in safety-critical domains still poses challenges.

Towards Rigorous Design of OoD Detectors

no code implementations14 Jun 2023 Chih-Hong Cheng, Changshun Wu, Harald Ruess, Saddek Bensalem

Out-of-distribution (OoD) detection techniques are instrumental for safety-related neural networks.

Out of Distribution (OOD) Detection

Potential-based Credit Assignment for Cooperative RL-based Testing of Autonomous Vehicles

no code implementations28 May 2023 Utku Ayvaz, Chih-Hong Cheng, Hao Shen

While autonomous vehicles (AVs) may perform remarkably well in generic real-life cases, their irrational action in some unforeseen cases leads to critical safety concerns.

Autonomous Vehicles, Counterfactual +2

EvCenterNet: Uncertainty Estimation for Object Detection using Evidential Learning

no code implementations6 Mar 2023 Monish R. Nallapareddy, Kshitij Sirohi, Paulo L. J. Drews-Jr, Wolfram Burgard, Chih-Hong Cheng, Abhinav Valada

In this work, we propose EvCenterNet, a novel uncertainty-aware 2D object detection framework using evidential learning to directly estimate both classification and regression uncertainties.

Decision Making, Object Detection +2

Butterfly Effect Attack: Tiny and Seemingly Unrelated Perturbations for Object Detection

no code implementations14 Nov 2022 Nguyen Anh Vu Doan, Arda Yüksel, Chih-Hong Cheng

This work aims to explore and identify tiny and seemingly unrelated perturbations of images in object detection that will lead to performance degradation.

Object Detection +1

Prioritizing Corners in OoD Detectors via Symbolic String Manipulation

no code implementations16 May 2022 Chih-Hong Cheng, Changshun Wu, Emmanouil Seferis, Saddek Bensalem

We consider the definition of "in-distribution" characterized in the feature space by a union of hyperrectangles learned from the training dataset.

Traffic Sign Recognition
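The snippet above describes "in-distribution" as a union of hyperrectangles in feature space learned from training data. A minimal sketch of that membership test is below; the interval learning and the symbolic-string prioritization in the paper are more involved, and the per-class min/max boxes here (`learn_boxes`, `in_distribution`) are illustrative names, not the paper's API.

```python
import numpy as np

def learn_boxes(features, labels):
    """features: (N, d) array of training features; labels: length-N class ids.
    Returns {class: (lo, hi)} per-dimension bounding boxes in feature space."""
    boxes = {}
    for c in np.unique(labels):
        pts = features[labels == c]
        boxes[c] = (pts.min(axis=0), pts.max(axis=0))
    return boxes

def in_distribution(x, boxes):
    # x counts as in-distribution if it lies inside any class's hyperrectangle.
    return any(bool(np.all(lo <= x)) and bool(np.all(x <= hi))
               for lo, hi in boxes.values())
```

At runtime, a feature vector falling outside every box is flagged as OoD, which is the abstraction-based monitoring idea the snippet alludes to.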

Unaligned but Safe -- Formally Compensating Performance Limitations for Imprecise 2D Object Detection

no code implementations10 Feb 2022 Tobias Schuster, Emmanouil Seferis, Simon Burton, Chih-Hong Cheng

We address a special sub-type of performance limitations: the prediction bounding box cannot be perfectly aligned with the ground truth, but the computed Intersection-over-Union metric is always larger than a given threshold.

Object Detection
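The setting in this snippet assumes the predicted box is never perfectly aligned with the ground truth, but its Intersection-over-Union always exceeds a threshold. As a reference point, here is the standard axis-aligned 2D IoU and that threshold check; this is a generic sketch, not the paper's formal compensation scheme, and the `(x1, y1, x2, y2)` box format is an assumption.

```python
def iou(box_a, box_b):
    """Axis-aligned IoU of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to 0 when boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

def aligned_enough(pred, truth, threshold=0.5):
    # The assumption studied in the snippet: imprecise detection is tolerated
    # as long as IoU stays above a given threshold.
    return iou(pred, truth) >= threshold
```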

Are Transformers More Robust? Towards Exact Robustness Verification for Transformers

no code implementations8 Feb 2022 Brian Hsuan-Cheng Liao, Chih-Hong Cheng, Hasan Esen, Alois Knoll

As an emerging type of Neural Networks (NNs), Transformers are used in many domains ranging from Natural Language Processing to Autonomous Driving.

Autonomous Driving, Object Recognition

Logically Sound Arguments for the Effectiveness of ML Safety Measures

no code implementations4 Nov 2021 Chih-Hong Cheng, Tobias Schuster, Simon Burton

We investigate the issues of achieving sufficient rigor in the arguments for the safety of machine learning functions.

Automated Theorem Proving

Safety Metrics for Semantic Segmentation in Autonomous Driving

no code implementations21 May 2021 Chih-Hong Cheng, Alois Knoll, Hsuan-Cheng Liao

Within the context of autonomous driving, safety-related metrics for deep neural networks have been widely studied for image classification and object detection.

Autonomous Driving, Clustering +4

Monitoring Object Detection Abnormalities via Data-Label and Post-Algorithm Abstractions

no code implementations29 Mar 2021 Yuhang Chen, Chih-Hong Cheng, Jun Yan, Rongjie Yan

While object detection modules are essential functionalities for any autonomous vehicle, the performance of such modules that are implemented using deep neural networks can be, in many cases, unreliable.

Object Detection

Testing Autonomous Systems with Believed Equivalence Refinement

no code implementations8 Mar 2021 Chih-Hong Cheng, Rongjie Yan

Continuous engineering of autonomous driving functions commonly requires deploying vehicles in road testing to obtain inputs that cause problematic decisions.

Autonomous Driving

Provably-Robust Runtime Monitoring of Neuron Activation Patterns

no code implementations24 Nov 2020 Chih-Hong Cheng

For deep neural networks (DNNs) to be used in safety-critical autonomous driving tasks, it is desirable to monitor in operation time if the input for the DNN is similar to the data used in DNN training.

Autonomous Driving

Continuous Safety Verification of Neural Networks

no code implementations12 Oct 2020 Chih-Hong Cheng, Rongjie Yan

Deploying deep neural networks (DNNs) as core functions in autonomous driving creates unique verification and validation challenges.

Autonomous Driving

Safety-Aware Hardening of 3D Object Detection Neural Network Systems

no code implementations25 Mar 2020 Chih-Hong Cheng

We study how state-of-the-art neural networks for 3D object detection using a single-stage pipeline can be made safety aware.

3D Object Detection

Towards Robust Direct Perception Networks for Automated Driving

no code implementations30 Sep 2019 Chih-Hong Cheng

We further extend the loss function and define a new provably robust criterion that is parametric to the allowed output tolerance $\Delta$, the layer index $\tilde{l}$ where perturbation is considered, and the maximum perturbation amount $\kappa$.

Towards Safety Verification of Direct Perception Neural Networks

1 code implementation9 Apr 2019 Chih-Hong Cheng, Chung-Hao Huang, Thomas Brunner, Vahid Hashemi

We study the problem of safety verification of direct perception neural networks, where camera images are used as inputs to produce high-level features for autonomous vehicles to make control decisions.

Autonomous Vehicles

Architecting Dependable Learning-enabled Autonomous Systems: A Survey

no code implementations27 Feb 2019 Chih-Hong Cheng, Dhiraj Gulati, Rongjie Yan

We provide a summary over architectural approaches that can be used to construct dependable learning-enabled autonomous systems, with a focus on automated driving.

nn-dependability-kit: Engineering Neural Networks for Safety-Critical Autonomous Driving Systems

1 code implementation16 Nov 2018 Chih-Hong Cheng, Chung-Hao Huang, Georg Nührenberg

Can engineering neural networks be approached in a disciplined way similar to how engineers build software for civil aircraft?

Autonomous Driving

Runtime Monitoring Neuron Activation Patterns

no code implementations18 Sep 2018 Chih-Hong Cheng, Georg Nührenberg, Hirotoshi Yasuoka

For using neural networks in safety critical domains, it is important to know if a decision made by a neural network is supported by prior similarities in training.
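Checking whether a decision is "supported by prior similarities in training" can be illustrated with binarized neuron activation patterns: record which neurons of a monitored layer are active on training data, then flag runtime inputs whose pattern is far (in Hamming distance) from every recorded one. The sketch below is a simplification under that assumption; the paper's monitors store patterns symbolically (e.g. in decision diagrams) rather than in a plain set, and `ActivationMonitor` is a hypothetical name.

```python
def pattern(activations):
    """Binarize a layer's activation vector into a tuple of on/off bits."""
    return tuple(1 if a > 0 else 0 for a in activations)

def hamming(p, q):
    return sum(a != b for a, b in zip(p, q))

class ActivationMonitor:
    def __init__(self, tolerance=0):
        self.seen = set()           # patterns observed during training
        self.tolerance = tolerance  # allowed bit-flips from a seen pattern

    def record(self, activations):
        self.seen.add(pattern(activations))

    def supported(self, activations):
        # A runtime input is "supported" if its pattern is within the
        # Hamming tolerance of some pattern seen in training.
        p = pattern(activations)
        return any(hamming(p, q) <= self.tolerance for q in self.seen)
```

The tolerance parameter trades false alarms against missed novel inputs, which is the knob the later "provably-robust" follow-up paper analyzes.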

Towards Dependability Metrics for Neural Networks

no code implementations6 Jun 2018 Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess, Hirotoshi Yasuoka

Artificial neural networks (NN) are instrumental in realizing highly-automated driving functionality.

Quantitative Projection Coverage for Testing ML-enabled Autonomous Systems

no code implementations11 May 2018 Chih-Hong Cheng, Chung-Hao Huang, Hirotoshi Yasuoka

Systematically testing models learned by neural networks remains a crucial unsolved barrier to justifying the safety of autonomous vehicles engineered using a data-driven approach.

Autonomous Vehicles

Verification of Binarized Neural Networks via Inter-Neuron Factoring

no code implementations9 Oct 2017 Chih-Hong Cheng, Georg Nührenberg, Chung-Hao Huang, Harald Ruess

We study the problem of formal verification of Binarized Neural Networks (BNN), which have recently been proposed as an energy-efficient alternative to traditional learning networks.

Neural Networks for Safety-Critical Applications - Challenges, Experiments and Perspectives

no code implementations4 Sep 2017 Chih-Hong Cheng, Frederik Diehl, Yassine Hamza, Gereon Hinz, Georg Nührenberg, Markus Rickert, Harald Ruess, Michael Troung-Le

We propose a methodology for designing dependable Artificial Neural Networks (ANN) by extending the concepts of understandability, correctness, and validity that are crucial ingredients in existing certification standards.

Maximum Resilience of Artificial Neural Networks

no code implementations28 Apr 2017 Chih-Hong Cheng, Georg Nührenberg, Harald Ruess

The deployment of Artificial Neural Networks (ANNs) in safety-critical applications poses a number of new verification and certification challenges.
