Search Results for author: Luca Daniel

Found 33 papers, 13 papers with code

Neural Network Control Policy Verification With Persistent Adversarial Perturbation

no code implementations ICML 2020 Yuh-Shyang Wang, Tsui-Wei Weng, Luca Daniel

In this paper, we show how to combine recent works on static neural network certification tools with robust control theory to certify a neural network policy in a control loop.

One step closer to unbiased aleatoric uncertainty estimation

1 code implementation 16 Dec 2023 Wang Zhang, Ziwen Ma, Subhro Das, Tsui-Wei Weng, Alexandre Megretski, Luca Daniel, Lam M. Nguyen

Neural networks are powerful tools in various applications, and quantifying their uncertainty is crucial for reliable decision-making.

Decision Making
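
For context, a minimal PyTorch sketch of the standard heteroscedastic Gaussian negative log-likelihood that this line of work on aleatoric uncertainty builds on; the paper's debiasing scheme is not reproduced here, and the tiny network is an illustrative assumption.

```python
import torch
import torch.nn as nn

# Minimal sketch: a network predicts a mean and a log-variance per input and
# is trained with the heteroscedastic Gaussian negative log-likelihood. This
# is the common aleatoric-uncertainty baseline, not the paper's estimator.
class MeanVarianceNet(nn.Module):
    def __init__(self, d_in):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU())
        self.mu_head = nn.Linear(64, 1)
        self.logvar_head = nn.Linear(64, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mu_head(h), self.logvar_head(h)

def gaussian_nll(mu, logvar, y):
    # 0.5 * [log sigma^2 + (y - mu)^2 / sigma^2], up to an additive constant
    return 0.5 * (logvar + (y - mu) ** 2 / logvar.exp()).mean()
```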

Rare Event Probability Learning by Normalizing Flows

no code implementations 29 Oct 2023 Zhengqi Gao, Dinghuai Zhang, Luca Daniel, Duane S. Boning

Next, it estimates the rare event probability by utilizing importance sampling in conjunction with the last proposal.
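
The importance-sampling step the snippet refers to can be sketched as follows. A trained normalizing flow would supply the proposal's sampling and log-density; the shifted Gaussian below is only a stand-in for it.

```python
import numpy as np
from scipy import stats

# Rare event: x > 4 under a standard normal. The proposal concentrates
# samples in the rare region; reweighting keeps the estimate unbiased.
rng = np.random.default_rng(0)
target = stats.norm(0.0, 1.0)          # nominal density p
proposal = stats.norm(4.0, 1.0)        # stand-in for the learned proposal q

x = proposal.rvs(size=100_000, random_state=rng)
weights = np.exp(target.logpdf(x) - proposal.logpdf(x))   # p(x) / q(x)
p_hat = np.mean((x > 4.0) * weights)   # importance-sampling estimate

print(p_hat, target.sf(4.0))           # estimate vs. exact tail probability
```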

PIFON-EPT: MR-Based Electrical Property Tomography Using Physics-Informed Fourier Networks

no code implementations 23 Feb 2023 Xinling Yu, José E. C. Serrallés, Ilias I. Giannakopoulos, Ziyue Liu, Luca Daniel, Riccardo Lattanzi, Zheng Zhang

PIFON-EPT is the first method that can simultaneously reconstruct EP and transmit fields from incomplete noisy MR measurements, providing new opportunities for EPT research.

Denoising

ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction

1 code implementation 11 Feb 2023 Wang Zhang, Tsui-Wei Weng, Subhro Das, Alexandre Megretski, Luca Daniel, Lam M. Nguyen

Deep neural networks (DNNs) have shown great capacity for modeling dynamical systems; nevertheless, they usually do not obey physics constraints such as conservation laws.

Contrastive Learning
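
One way to picture the conservation idea is a contrastive objective on a learned scalar g(x): small variance along a trajectory, large variance across trajectories. The loss form and shapes below are illustrative assumptions, not ConCerNet's exact formulation.

```python
import torch

# g: a small network mapping a state vector to a scalar candidate invariant.
def conservation_contrastive_loss(g, trajs):
    # trajs: (n_traj, n_steps, state_dim) batch of simulated trajectories
    vals = g(trajs.reshape(-1, trajs.shape[-1])).reshape(trajs.shape[:2])
    within = vals.var(dim=1).mean()      # small if g is conserved per trajectory
    across = vals.mean(dim=1).var()      # large if g separates trajectories
    return within - torch.log(across + 1e-8)
```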

Certified Interpretability Robustness for Class Activation Mapping

no code implementations 26 Jan 2023 Alex Gu, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

Interpreting machine learning models is challenging but crucial for ensuring the safety of deep networks in autonomous driving systems.

Autonomous Driving

MR-Based Electrical Property Reconstruction Using Physics-Informed Neural Networks

no code implementations 23 Oct 2022 Xinling Yu, José E. C. Serrallés, Ilias I. Giannakopoulos, Ziyue Liu, Luca Daniel, Riccardo Lattanzi, Zheng Zhang

Electrical properties (EP), namely permittivity and electric conductivity, dictate the interactions between electromagnetic waves and biological tissue.
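
As background on the physics-informed approach named in the title, here is a generic PINN-style residual loss on an assumed 1-D Helmholtz equation; the paper's actual networks, coordinates, and MR data terms are not shown.

```python
import torch

# A network u(x) is trained so that an assumed Helmholtz residual
# u'' + k^2 u vanishes at collocation points, alongside a data term.
net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
k = 2.0

def pde_residual(x):
    x = x.clone().requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u + (k ** 2) * u            # Helmholtz residual u'' + k^2 u

x_col = torch.rand(128, 1)
loss_pde = pde_residual(x_col).pow(2).mean()   # + a measurement data term
```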

SynBench: Task-Agnostic Benchmarking of Pretrained Representations using Synthetic Data

no code implementations 6 Oct 2022 Ching-Yun Ko, Pin-Yu Chen, Jeet Mohapatra, Payel Das, Luca Daniel

Given a pretrained model, representations of data synthesized from a Gaussian mixture are compared with our reference to infer representation quality.

Benchmarking · Representation Learning

Efficient Certification for Probabilistic Robustness

no code implementations 29 Sep 2021 Victor Rong, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng

Recent developments on the robustness of neural networks have primarily emphasized the notion of worst-case adversarial robustness in both verification and robust training.

Adversarial Robustness

Tactics on Refining Decision Boundary for Improving Certification-based Robust Training

no code implementations 29 Sep 2021 Wang Zhang, Lam M. Nguyen, Subhro Das, Pin-Yu Chen, Sijia Liu, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng

In verification-based robust training, existing methods use relaxation-based techniques to bound the worst-case performance of neural networks under a given perturbation.
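
The simplest such relaxation, interval bound propagation, can be sketched in a few lines; certified-training methods typically use tighter relaxations, so this only conveys the flavor of the bound.

```python
import numpy as np

# Push an l_inf box through an affine layer and a ReLU.
def affine_bounds(lo, hi, W, b):
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

x = np.array([1.0, -0.5]); eps = 0.1
lo, hi = x - eps, x + eps                       # input box
W = np.array([[0.5, -1.0], [2.0, 0.3]]); b = np.zeros(2)
lo, hi = relu_bounds(*affine_bounds(lo, hi, W, b))   # output box
```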

Adversarially Robust Conformal Prediction

no code implementations ICLR 2022 Asaf Gendler, Tsui-Wei Weng, Luca Daniel, Yaniv Romano

By combining conformal prediction with randomized smoothing, our proposed method forms a prediction set with finite-sample coverage guarantee that holds for any data distribution with $\ell_2$-norm bounded adversarial noise, generated by any adversarial attack algorithm.

Adversarial Attack · Conformal Prediction +1
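
A minimal split-conformal calibration step in the spirit of the method above, not the paper's exact procedure. Here `cal_scores` would be nonconformity scores of the true labels computed from the smoothed classifier on held-out calibration data; how they are produced is assumed, not shown.

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    n = len(cal_scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)   # finite-sample correction
    return np.quantile(cal_scores, q, method="higher")

def prediction_set(scores_per_class, tau):
    # keep every class whose nonconformity stays below the calibrated threshold
    return [k for k, s in enumerate(scores_per_class) if s <= tau]
```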

Fast Training of Provably Robust Neural Networks by SingleProp

no code implementations 1 Feb 2021 Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel

Recent works have developed several methods of defending neural networks against adversarial attacks with certified guarantees.

Handling Initial Conditions in Vector Fitting for Real Time Modeling of Power System Dynamics

no code implementations 25 Nov 2020 Tommaso Bradde, Samuel Chevalier, Marco De Stefano, Stefano Grivet-Talocia, Luca Daniel

This paper develops a predictive modeling algorithm, denoted as Real-Time Vector Fitting (RTVF), which is capable of approximating the real-time linearized dynamics of multi-input multi-output (MIMO) dynamical systems via rational transfer function matrices.
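
To illustrate the kind of model RTVF produces, a toy linearized least-squares (Levy-style) fit of a first-order rational transfer function from frequency samples; the actual algorithm iterates vector fitting with pole relocation in real time and is not reproduced here.

```python
import numpy as np

# "Measured" frequency response of a first-order system.
w = np.logspace(-1, 2, 200)
s = 1j * w
H = 1.0 / (1.0 + 0.1 * s)

# Fit H(s) ~ (b0 + b1 s) / (1 + a1 s) via the linearization
# b0 + b1 s - a1 s H(s) = H(s), stacked over real/imaginary parts.
A = np.column_stack([np.ones_like(s), s, -s * H])
A_ri = np.vstack([A.real, A.imag])
rhs = np.concatenate([H.real, H.imag])
b0, b1, a1 = np.linalg.lstsq(A_ri, rhs, rcond=None)[0]   # ~ (1, 0, 0.1)
```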

Accelerated Probabilistic Power Flow in Electrical Distribution Networks via Model Order Reduction and Neumann Series Expansion

no code implementations 28 Oct 2020 Samuel Chevalier, Luca Schenato, Luca Daniel

This subspace is used to construct and update a reduced order model (ROM) of the full nonlinear system, resulting in a highly efficient simulation for future voltage profiles.
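
The Neumann-series ingredient can be shown in isolation: a truncated series approximates (I − M)⁻¹b whenever the spectral radius of M is below one. The random test matrix is an illustrative assumption; the paper's reduced-order model is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 0.1 * rng.standard_normal((5, 5))   # small norm => convergent series
b = rng.standard_normal(5)

x, term = np.zeros(5), b.copy()
for _ in range(50):                      # x ~= (I + M + M^2 + ...) b
    x += term
    term = M @ term

print(np.allclose(x, np.linalg.solve(np.eye(5) - M, b)))
```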

Higher-Order Certification for Randomized Smoothing

no code implementations NeurIPS 2020 Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

We also provide a framework that generalizes the calculation for certification using higher-order information.
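
For reference, the first-order certified radius from standard randomized smoothing (Cohen et al., 2019), which higher-order certification tightens:

```python
from scipy.stats import norm

# R = (sigma / 2) * (Phi^{-1}(pA) - Phi^{-1}(pB)), where pA and pB lower- and
# upper-bound the top-two class probabilities under Gaussian noise.
def certified_radius(p_a, p_b, sigma):
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

print(certified_radius(0.9, 0.05, sigma=0.25))
```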

Robust Deep Reinforcement Learning through Adversarial Loss

2 code implementations NeurIPS 2021 Tuomas Oikarinen, Wang Zhang, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng

To address this issue, we propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against $l_p$-norm bounded adversarial attacks.

Adversarial Attack · Atari Games +3
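
An illustrative l_inf observation attack of the kind described above: a one-step FGSM on the agent's input. `q_net` (observations to action values) is a hypothetical network; RADIAL-RL's adversarial loss itself is not reproduced.

```python
import torch

def fgsm_observation(q_net, obs, action, eps):
    obs = obs.clone().requires_grad_(True)
    loss = -q_net(obs)[0, action]        # push down the chosen action's value
    loss.backward()
    return (obs + eps * obs.grad.sign()).detach()   # step to the eps-box corner
```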

Proper Network Interpretability Helps Adversarial Robustness in Classification

1 code implementation ICML 2020 Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or that interpretability is itself susceptible to adversarial attacks.

Adversarial Robustness · Classification +3

Hidden Cost of Randomized Smoothing

no code implementations 2 Mar 2020 Jeet Mohapatra, Ching-Yun Ko, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

The fragility of modern machine learning models has drawn a considerable amount of attention from both academia and the public.

Towards Verifying Robustness of Neural Networks Against Semantic Perturbations

1 code implementation 19 Dec 2019 Jeet Mohapatra, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

Verifying robustness of neural networks given a specified threat model is a fundamental yet challenging task.

Image Classification

Fastened CROWN: Tightened Neural Network Robustness Certificates

1 code implementation 2 Dec 2019 Zhaoyang Lyu, Ching-Yun Ko, Zhifeng Kong, Ngai Wong, Dahua Lin, Luca Daniel

We draw inspiration from such work and further demonstrate the optimality of deterministic CROWN (Zhang et al. 2018) solutions in a given linear programming problem under mild constraints.

Efficient Training of Robust and Verifiable Neural Networks

no code implementations 25 Sep 2019 Akhilan Boopathy, Lily Weng, Sijia Liu, Pin-Yu Chen, Luca Daniel

We propose that many common certified defenses can be viewed under a unified framework of regularization.
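
One common instance of that regularization view, sketched with a hypothetical `worst_logits` standing in for the output of any certified bounding method (IBP, CROWN, ...):

```python
import torch.nn.functional as F

# Blend the natural loss with a loss on bound-based worst-case logits;
# lam trades clean accuracy against certified robustness.
def certified_regularized_loss(logits, worst_logits, y, lam=0.5):
    return (1 - lam) * F.cross_entropy(logits, y) + lam * F.cross_entropy(worst_logits, y)
```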

Visual Interpretability Alone Helps Adversarial Robustness

no code implementations 25 Sep 2019 Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Pin-Yu Chen, Shiyu Chang, Luca Daniel

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability, and that interpretability is itself susceptible to adversarial attacks.

Adversarial Robustness

Verification of Neural Network Control Policy Under Persistent Adversarial Perturbation

no code implementations 18 Aug 2019 Yuh-Shyang Wang, Tsui-Wei Weng, Luca Daniel

In this paper, we show how to combine recent works on neural network certification tools (which are mainly used in static settings such as image classification) with robust control theory to certify a neural network policy in a control loop.

Image Classification

POPQORN: Quantifying Robustness of Recurrent Neural Networks

2 code implementations 17 May 2019 Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin

The vulnerability to adversarial attacks has been a critical issue for deep neural networks.

PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach

no code implementations 18 Dec 2018 Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, Mark S. Squillante, Ivan Oseledets, Luca Daniel

With deep neural networks providing state-of-the-art models for numerous machine learning tasks, quantifying the robustness of these models has become an important area of research.

BIG-bench Machine Learning

CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks

2 code implementations 29 Nov 2018 Akhilan Boopathy, Tsui-Wei Weng, Pin-Yu Chen, Sijia Liu, Luca Daniel

This motivates us to propose a general and efficient framework, CNN-Cert, that is capable of certifying robustness on general convolutional neural networks.

Efficient Neural Network Robustness Certification with General Activation Functions

14 code implementations NeurIPS 2018 Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, Luca Daniel

Finding minimum distortion of adversarial examples and thus certifying robustness in neural network classifiers for given data points is known to be a challenging problem.

Computational Efficiency · Efficient Neural Network
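
The core CROWN ingredient is a linear relaxation of each unstable ReLU; the slopes below follow the original CROWN choices, while the backward bound propagation around them is omitted.

```python
# For an unstable ReLU with pre-activation bounds l < 0 < u, sandwich the
# activation between two lines y = slope * x + intercept.
def relu_relaxation(l, u):
    upper_slope = u / (u - l)                 # chord from (l, 0) to (u, u)
    upper_intercept = -l * upper_slope
    lower_slope = 1.0 if u >= -l else 0.0     # adaptive lower line through origin
    return (lower_slope, 0.0), (upper_slope, upper_intercept)
```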

On Extensions of CLEVER: A Neural Network Robustness Evaluation Algorithm

1 code implementation 19 Oct 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Aurelie Lozano, Cho-Jui Hsieh, Luca Daniel

We apply extreme value theory to the new formal robustness guarantee, and the resulting robustness estimate is called the second-order CLEVER score.

Towards Fast Computation of Certified Robustness for ReLU Networks

6 code implementations ICML 2018 Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, Luca Daniel

Verifying the robustness property of a general Rectified Linear Unit (ReLU) network is an NP-complete problem [Katz, Barrett, Dill, Julian and Kochenderfer CAV17].

Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach

1 code implementation ICLR 2018 Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jin-Feng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel

Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness.
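
The CLEVER recipe can be sketched as fitting a reverse Weibull distribution to batch maxima of sampled gradient norms; the random numbers below stand in for gradient norms that would come from backprop near the input, and the margin value is assumed.

```python
import numpy as np
from scipy.stats import weibull_max   # the reverse Weibull distribution

rng = np.random.default_rng(0)
grad_norm_samples = rng.uniform(0.5, 2.0, size=(100, 50))  # stand-in values

batch_maxima = grad_norm_samples.max(axis=1)
shape, loc, scale = weibull_max.fit(batch_maxima)
lipschitz_est = loc                      # fitted location ~ max gradient norm
margin = 0.8                             # f_c(x0) - f_j(x0), assumed here
clever_score = margin / lipschitz_est    # estimated robustness radius
```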
