Search Results for author: Linyi Li

Found 31 papers, 21 papers with code

COLEP: Certifiably Robust Learning-Reasoning Conformal Prediction via Probabilistic Circuits

1 code implementation • 17 Mar 2024 • Mintong Kang, Nezihe Merve Gürel, Linyi Li, Bo Li

In this work, we propose a certifiably robust learning-reasoning conformal prediction framework (COLEP) via probabilistic circuits, which comprises a data-driven learning component that trains statistical models to learn different semantic concepts, and a reasoning component that encodes knowledge and characterizes the relationships among the trained models for logic reasoning.

Conformal Prediction
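The learning-then-calibration split above builds on standard conformal prediction. As context only, here is a minimal sketch of plain split conformal prediction (not COLEP's certified-robust variant); the function name and synthetic scores are illustrative:

```python
import numpy as np

def conformal_prediction_sets(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Split conformal prediction: build label sets with ~(1 - alpha) coverage.

    cal_scores:  (n, K) softmax scores for a held-out calibration set
    cal_labels:  (n,)   true labels for the calibration set
    test_scores: (m, K) softmax scores for test inputs
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the softmax score of the true class.
    nonconf = 1.0 - cal_scores[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores (clipped to 1).
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(nonconf, level, method="higher")
    # A class enters the prediction set if its nonconformity is within the threshold.
    return [np.where(1.0 - s <= q)[0] for s in test_scores]
```

COLEP's contribution is certifying that such sets retain coverage under bounded adversarial perturbations, which this plain version does not attempt.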

COMMIT: Certifying Robustness of Multi-Sensor Fusion Systems against Semantic Attacks

no code implementations • 4 Mar 2024 • Zijian Huang, Wenda Chu, Linyi Li, Chejian Xu, Bo Li

In this work, we propose COMMIT, the first robustness certification framework to certify the robustness of multi-sensor fusion systems against semantic attacks.

Autonomous Vehicles object-detection +2

Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations

1 code implementation • 22 Sep 2023 • Hanjiang Hu, Zuxin Liu, Linyi Li, Jiacheng Zhu, Ding Zhao

The current certification process for assessing robustness is costly and time-consuming due to the extensive number of image projections required for Monte Carlo sampling in the 3D camera motion space.

Autonomous Driving

Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects

1 code implementation • 13 Feb 2023 • Linyi Li, Yuhao Zhang, Luyao Ren, Yingfei Xiong, Tao Xie

To assure high reliability against numerical defects, in this paper, we propose the RANUM approach including novel techniques for three reliability assurance tasks: detection of potential numerical defects, confirmation of potential-defect feasibility, and suggestion of defect fixes.

Fairness in Federated Learning via Core-Stability

no code implementations • 3 Nov 2022 • Bhaskar Ray Chaudhury, Linyi Li, Mintong Kang, Bo Li, Ruta Mehta

Nonetheless, the heterogeneous nature of distributed data makes it challenging to define and ensure fairness among local agents.

Decision Making Fairness +1

LOT: Layer-wise Orthogonal Training on Improving $\ell_2$ Certified Robustness

1 code implementation • 20 Oct 2022 • Xiaojun Xu, Linyi Li, Bo Li

On the other hand, as existing works show that semi-supervised training helps improve empirical robustness, we aim to bridge the gap and prove that semi-supervised learning also improves the certified robustness of Lipschitz-bounded models.

Adversarial Robustness

Robustness Certification of Visual Perception Models via Camera Motion Smoothing

1 code implementation • 4 Oct 2022 • Hanjiang Hu, Zuxin Liu, Linyi Li, Jiacheng Zhu, Ding Zhao

To this end, we study the robustness of the visual perception model under camera motion perturbations to investigate the influence of camera motion on robotic perception.

Image Classification

CARE: Certifiably Robust Learning with Reasoning via Variational Inference

1 code implementation • 12 Sep 2022 • Jiawei Zhang, Linyi Li, Ce Zhang, Bo Li

In particular, we propose a certifiably robust learning with reasoning pipeline (CARE), which consists of a learning component and a reasoning component.

Variational Inference

General Cutting Planes for Bound-Propagation-Based Neural Network Verification

2 code implementations • 11 Aug 2022 • Huan Zhang, Shiqi Wang, Kaidi Xu, Linyi Li, Bo Li, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter

Our generalized bound propagation method, GCP-CROWN, opens up the opportunity to apply general cutting plane methods for neural network verification while benefiting from the efficiency and GPU acceleration of bound propagation methods.

FOCUS: Fairness via Agent-Awareness for Federated Learning on Heterogeneous Data

no code implementations • 21 Jul 2022 • Wenda Chu, Chulin Xie, Boxin Wang, Linyi Li, Lang Yin, Arash Nourian, Han Zhao, Bo Li

However, due to the heterogeneous nature of local data, it is challenging to optimize or even define fairness of the trained global model for the agents.

Fairness Federated Learning

Double Sampling Randomized Smoothing

2 code implementations • 16 Jun 2022 • Linyi Li, Jiawei Zhang, Tao Xie, Bo Li

To overcome this hurdle, we propose a Double Sampling Randomized Smoothing (DSRS) framework, which exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier.
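For context on what DSRS tightens: the standard single-distribution Gaussian smoothing certificate (in the style of Cohen et al.) estimates the majority-class probability by sampling and converts its lower confidence bound into an l2 radius. The sketch below is that baseline, not the DSRS implementation; `certify_smoothed` and its defaults are illustrative.

```python
import numpy as np
from scipy.stats import beta, norm

def certify_smoothed(f, x, sigma=0.25, n=1000, alpha=0.001, rng=None):
    """Baseline Gaussian randomized-smoothing certificate.

    f: base classifier mapping a batch of inputs to integer labels.
    Returns (predicted class, certified l2 radius), or (None, 0.0) on abstain.
    """
    rng = np.random.default_rng(rng)
    noisy = x + sigma * rng.standard_normal((n,) + x.shape)
    labels = f(noisy)
    top = np.bincount(labels).argmax()
    k = int((labels == top).sum())
    # Clopper-Pearson lower confidence bound on P[f(x + noise) = top].
    p_lower = beta.ppf(alpha, k, n - k + 1) if k > 0 else 0.0
    if p_lower <= 0.5:
        return None, 0.0  # abstain: majority class not confident enough
    return int(top), sigma * norm.ppf(p_lower)
```

DSRS additionally samples under a second smoothing distribution and combines both probability estimates, which provably enlarges the certified radius over using `p_lower` alone.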

Can pruning improve certified robustness of neural networks?

1 code implementation • 15 Jun 2022 • Zhangheng Li, Tianlong Chen, Linyi Li, Bo Li, Zhangyang Wang

Given the fact that neural networks are often over-parameterized, one effective way to reduce such computational overhead is neural network pruning, by removing redundant parameters from trained neural networks.

Network Pruning
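The pruning idea referenced above can be illustrated with global magnitude pruning, one common baseline (this is a generic sketch, not the paper's certified-robustness-aware method):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Global magnitude pruning: zero the smallest-magnitude fraction of weights.

    weights: list of numpy arrays (one per layer).
    Ties at the threshold may prune slightly more than the requested fraction.
    """
    flat = np.abs(np.concatenate([w.ravel() for w in weights]))
    k = int(sparsity * flat.size)
    if k == 0:
        return [w.copy() for w in weights]
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return [np.where(np.abs(w) <= threshold, 0.0, w) for w in weights]
```

The paper's question is whether such sparsification, beyond cutting inference cost, can also shrink the verification problem enough to improve certified bounds.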

Certifying Some Distributional Fairness with Subpopulation Decomposition

1 code implementation • 31 May 2022 • Mintong Kang, Linyi Li, Maurice Weber, Yang Liu, Ce Zhang, Bo Li

In this paper, we first formulate the certified fairness of an ML model trained on a given data distribution as an optimization problem based on the model performance loss bound on a fairness constrained distribution, which is within bounded distributional distance with the training distribution.

Fairness

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks

1 code implementation • ICLR 2022 • Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, Bo Li

We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) The proposed robust aggregation protocols, such as temporal aggregation, can significantly improve the certifications; (2) Our certifications of both per-state action stability and the cumulative reward bound are efficient and tight; (3) The certifications for different training algorithms and environments differ, implying their intrinsic robustness properties.

Offline RL reinforcement-learning +1

SapientML: Synthesizing Machine Learning Pipelines by Learning from Human-Written Solutions

no code implementations • 18 Feb 2022 • Ripon K. Saha, Akira Ura, Sonal Mahajan, Chenguang Zhu, Linyi Li, Yang Hu, Hiroaki Yoshida, Sarfraz Khurshid, Mukul R. Prasad

In this work, we propose an AutoML technique, SapientML, that can learn from a corpus of existing datasets and their human-written pipelines, and efficiently generate a high-quality pipeline for a predictive task on a new dataset.

AutoML BIG-bench Machine Learning +1

Certifying Out-of-Domain Generalization for Blackbox Functions

1 code implementation • 3 Feb 2022 • Maurice Weber, Linyi Li, Boxin Wang, Zhikuan Zhao, Bo Li, Ce Zhang

As a result, the wider application of these techniques is currently limited by their scalability and flexibility -- these techniques often do not scale to large-scale datasets with modern deep neural networks, or cannot handle loss functions which may be non-smooth, such as the 0-1 loss.

Domain Generalization

TPC: Transformation-Specific Smoothing for Point Cloud Models

2 code implementations • 30 Jan 2022 • Wenda Chu, Linyi Li, Bo Li

In this paper, we propose a transformation-specific smoothing framework TPC, which provides tight and scalable robustness guarantees for point cloud models against semantic transformation attacks.

Autonomous Vehicles

CARD: Certifiably Robust Machine Learning Pipeline via Domain Knowledge Integration

no code implementations • 29 Sep 2021 • Jiawei Zhang, Linyi Li, Bo Li

In particular, we express the domain knowledge as first-order logic rules and embed these logic rules in a probabilistic graphical model.

BIG-bench Machine Learning

On the Certified Robustness for Ensemble Models and Beyond

no code implementations • ICLR 2022 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, Bo Li

Thus, to explore the conditions that guarantee to provide certifiably robust ensemble ML models, we first prove that diversified gradient and large confidence margin are sufficient and necessary conditions for certifiably robust ensemble models under the model-smoothness assumption.

CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing

2 code implementations • ICLR 2022 • Fan Wu, Linyi Li, Zijian Huang, Yevgeniy Vorobeychik, Ding Zhao, Bo Li

We then develop a local smoothing algorithm for policies derived from Q-functions to guarantee the robustness of actions taken along the trajectory; we also develop a global smoothing algorithm for certifying the lower bound of a finite-horizon cumulative reward, as well as a novel local smoothing algorithm to perform adaptive search in order to obtain tighter reward certification.

Atari Games Autonomous Vehicles +2

Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation

1 code implementation • 10 Jun 2021 • Jiawei Zhang, Linyi Li, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li

In this paper, we show that such efficiency highly depends on the scale at which the attack is applied, and attacking at the optimal scale significantly improves the efficiency.

Face Recognition

TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness

no code implementations • NeurIPS 2021 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Pan Zhou, Benjamin I. P. Rubinstein, Ce Zhang, Bo Li

To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness.

TRS: Transferability Reduced Ensemble via Encouraging Gradient Diversity and Model Smoothness

1 code implementation • NeurIPS 2021 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Shiliang Zuo, Qian Chen, Benjamin Rubinstein, Pan Zhou, Ce Zhang, Bo Li

To answer these questions, in this work we first theoretically analyze and outline sufficient conditions for adversarial transferability between models; then propose a practical algorithm to reduce the transferability between base models within an ensemble to improve its robustness.

Nonlinear Projection Based Gradient Estimation for Query Efficient Blackbox Attacks

1 code implementation • 25 Feb 2021 • Huichen Li, Linyi Li, Xiaojun Xu, Xiaolu Zhang, Shuang Yang, Bo Li

We aim to bridge the gap between the two by investigating how to efficiently estimate gradient based on a projected low-dimensional space.
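Estimating a gradient from loss queries in a low-dimensional subspace, as described above, can be sketched with random orthonormal directions and finite differences (the paper studies better, learned projections; the function below is a generic illustration, not its API):

```python
import numpy as np

def projected_gradient_estimate(loss, x, subspace_dim=8, delta=1e-3, rng=None):
    """Finite-difference gradient estimate restricted to a random
    low-dimensional subspace: subspace_dim + 1 loss queries instead of
    x.size + 1 for a full coordinate-wise estimate.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    # Orthonormal basis for a random subspace (columns of q).
    q, _ = np.linalg.qr(rng.standard_normal((d, subspace_dim)))
    f0 = loss(x)
    est = np.zeros(d)
    for j in range(subspace_dim):
        v = q[:, j]
        # Directional derivative along v, accumulated back into input space.
        est += (loss(x + delta * v) - f0) / delta * v
    return est
```

For a blackbox attack, the query budget is the binding constraint, so the quality of the projection (which directions the subspace captures) determines how useful the estimate is.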

On the Limitations of Denoising Strategies as Adversarial Defenses

no code implementations • 17 Dec 2020 • Zhonghan Niu, Zhaoxi Chen, Linyi Li, YuBin Yang, Bo Li, JinFeng Yi

Surprisingly, our experimental results show that even if most of the perturbations in each dimension are eliminated, it is still difficult to obtain satisfactory robustness.

Denoising

SoK: Certified Robustness for Deep Neural Networks

2 code implementations • 9 Sep 2020 • Linyi Li, Tao Xie, Bo Li

Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks.

Autonomous Driving

Improving Certified Robustness via Statistical Learning with Logical Reasoning

1 code implementation • 28 Feb 2020 • Zhuolin Yang, Zhikuan Zhao, Boxin Wang, Jiawei Zhang, Linyi Li, Hengzhi Pei, Bojan Karlas, Ji Liu, Heng Guo, Ce Zhang, Bo Li

Intensive algorithmic efforts have recently been made to enable rapid improvements in certified robustness for complex ML models.

BIG-bench Machine Learning Logical Reasoning

TSS: Transformation-Specific Smoothing for Robustness Certification

1 code implementation • 27 Feb 2020 • Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, Bhavya Kailkhura, Tao Xie, Ce Zhang, Bo Li

Moreover, to the best of our knowledge, TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.

Influence-Directed Explanations for Deep Convolutional Networks

2 code implementations • ICLR 2018 • Klas Leino, Shayak Sen, Anupam Datta, Matt Fredrikson, Linyi Li

We study the problem of explaining a rich class of behavioral properties of deep neural networks.

Case Study: Explaining Diabetic Retinopathy Detection Deep CNNs via Integrated Gradients

no code implementations • 27 Sep 2017 • Linyi Li, Matt Fredrikson, Shayak Sen, Anupam Datta

In this report, we applied integrated gradients to explaining a neural network for diabetic retinopathy detection.

Diabetic Retinopathy Detection
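The integrated gradients attribution applied above has a simple closed form: IG_i(x) = (x_i - x'_i) · ∫₀¹ ∂F(x' + a(x - x'))/∂x_i da, approximated by a Riemann sum. A minimal sketch (the gradient function would come from the trained network; here it is an assumed callable):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline=None, steps=64):
    """Integrated gradients attribution (Sundararajan et al., 2017).

    grad_fn:  callable returning dF/dx at a given input
    baseline: reference input x' (defaults to zeros)
    """
    baseline = np.zeros_like(x) if baseline is None else baseline
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint Riemann rule
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        # Gradient at an interpolated point on the baseline-to-input path.
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps
```

A useful sanity check is the completeness property: the attributions sum to F(x) - F(baseline), which holds exactly for a linear F.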
