1 code implementation • NeurIPS 2021 • Yujia Huang, Huan Zhang, Yuanyuan Shi, J Zico Kolter, Anima Anandkumar
Certified robustness is a desirable property for deep neural networks in safety-critical applications, and popular training algorithms can certify the robustness of a neural network by computing a global bound on its Lipschitz constant.
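As a rough illustration of this kind of certificate, here is a minimal sketch that upper-bounds the global Lipschitz constant by the product of per-layer spectral norms, assuming a plain feedforward network with 1-Lipschitz activations such as ReLU (the helper name is ours):

```python
import torch
import torch.nn as nn

def global_lipschitz_bound(model):
    """Upper-bound the global Lipschitz constant (w.r.t. the L2 norm) of
    a feedforward network by the product of per-layer spectral norms,
    assuming 1-Lipschitz activations such as ReLU. This is the basic
    global certificate that Lipschitz-based training algorithms control."""
    bound = 1.0
    for layer in model:
        if isinstance(layer, nn.Linear):
            # Largest singular value of the weight matrix.
            bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()
    return bound

net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
print(global_lipschitz_bound(net))
```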
no code implementations • 29 Sep 2021 • Runtian Zhai, Chen Dan, J Zico Kolter, Pradeep Kumar Ravikumar
Prior work has proposed various reweighting algorithms to improve the worst-group performance of machine learning models for fairness.
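For context, a minimal sketch of one common reweighting baseline, equalizing each group's contribution to the loss; this is illustrative only, not the specific algorithms the paper analyzes:

```python
import torch

def group_balanced_loss(losses, groups):
    """One common reweighting baseline: average per-example losses within
    each group, then average across groups, so every group contributes
    equally regardless of its size. `groups` holds an integer group id
    for each example."""
    group_means = [losses[groups == g].mean() for g in groups.unique()]
    return torch.stack(group_means).mean()
```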
no code implementations • 29 Sep 2021 • Huan Zhang, Shiqi Wang, Kaidi Xu, Yihan Wang, Suman Jana, Cho-Jui Hsieh, J Zico Kolter
In this work, we formulate an adversarial attack using a branch-and-bound (BaB) procedure on ReLU neural networks and search for adversarial examples in the activation space corresponding to binary variables in a mixed integer programming (MIP) formulation.
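A heavily simplified skeleton of this style of search is sketched below. The bounding function and branching heuristic are placeholders of our own; a real implementation would use tight bound propagation and a cost-guided queue:

```python
from dataclasses import dataclass, field

@dataclass
class Subproblem:
    splits: dict = field(default_factory=dict)  # neuron -> fixed status

def bab_search(lower_bound, unstable, budget=1000):
    """Skeleton of branch-and-bound over ReLU activation space. Each
    unstable neuron is split into an active (pre-activation >= 0) and an
    inactive (<= 0) branch, mirroring the binary variables of the MIP
    formulation. `lower_bound(splits)` is a placeholder for a verified
    lower bound on the attack objective under the given splits; a branch
    with a nonnegative bound is pruned, and a fully split branch with a
    negative bound localizes a candidate adversarial example."""
    queue = [Subproblem()]
    while queue and budget > 0:
        budget -= 1
        sub = queue.pop()
        if lower_bound(sub.splits) >= 0:
            continue  # verified safe under these splits; prune
        remaining = [n for n in unstable if n not in sub.splits]
        if not remaining:
            return sub.splits  # activation pattern to attack within
        n = remaining[0]  # naive branching choice
        for status in (True, False):
            queue.append(Subproblem({**sub.splits, n: status}))
    return None
```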
no code implementations • ICLR 2022 • Colin Wei, J Zico Kolter
Our key insights are that these interval bounds can be obtained as the fixed-point solution to an IBP-inspired equilibrium equation, and furthermore, that this solution always exists and is unique when the layer obeys a certain parameterization.
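A minimal sketch of the underlying idea, using an illustrative single equilibrium layer z = relu(Wz + Ux) and plain fixed-point iteration on the interval bounds; the iteration below simply assumes convergence, whereas the paper characterizes parameterizations under which this fixed point exists and is unique:

```python
import torch

def ibp_equilibrium(W, U, x_lo, x_hi, max_iter=100):
    """Propagate interval bounds [z_lo, z_hi] through the layer
    z = relu(W z + U x) repeatedly until they stop changing, i.e. solve
    the IBP-inspired equilibrium equation for its fixed point."""
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    U_pos, U_neg = U.clamp(min=0), U.clamp(max=0)
    z_lo = torch.zeros(W.shape[0])
    z_hi = torch.zeros(W.shape[0])
    for _ in range(max_iter):
        new_lo = torch.relu(W_pos @ z_lo + W_neg @ z_hi + U_pos @ x_lo + U_neg @ x_hi)
        new_hi = torch.relu(W_pos @ z_hi + W_neg @ z_lo + U_pos @ x_hi + U_neg @ x_lo)
        if torch.allclose(new_lo, z_lo) and torch.allclose(new_hi, z_hi):
            break
        z_lo, z_hi = new_lo, new_hi
    return z_lo, z_hi
```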
no code implementations • ICLR 2022 • Samuel Sokota, Hengyuan Hu, David J Wu, J Zico Kolter, Jakob Nicolaus Foerster, Noam Brown
Furthermore, because this specialization occurs after the action or policy has already been decided, BFT does not require the belief model to process it as input.
no code implementations • ICLR 2022 • Shaojie Bai, Vladlen Koltun, J Zico Kolter
A deep equilibrium (DEQ) model abandons traditional depth by solving for the fixed point of a single nonlinear layer $f_\theta$.
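A minimal sketch of the DEQ forward pass, using naive fixed-point iteration and an illustrative tanh layer; practical DEQs use faster root-finders such as Broyden's method or Anderson acceleration, and differentiate implicitly through the fixed point rather than unrolling:

```python
import torch

def deq_forward(f, x, z0, tol=1e-4, max_iter=50):
    """Find z* with z* = f(z*, x) by fixed-point iteration."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if torch.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Illustrative layer f_theta(z, x) = tanh(W z + U x); a small-norm W
# encourages a contraction so the iteration converges.
W, U = 0.1 * torch.randn(16, 16), torch.randn(16, 8)
f = lambda z, x: torch.tanh(W @ z + U @ x)
z_star = deq_forward(f, torch.randn(8), torch.zeros(16))
```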
no code implementations • 29 Sep 2021 • Gaurav Manek, J Zico Kolter
Model-based reinforcement learning (MBRL) methods are often more data-efficient and quicker to converge than their model-free counterparts, but typically rely crucially on accurate modeling of the environment dynamics and associated uncertainty in order to perform well.
no code implementations • NeurIPS 2021 • Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J Zico Kolter
We develop $\beta$-CROWN, a new bound propagation based method that can fully encode neuron split constraints in branch-and-bound (BaB) based complete verification via optimizable parameters $\beta$.
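A toy illustration of the central mechanism, with `obj_lb_fn` standing in for a real bound-propagation routine: each split constraint enters the bound as a Lagrangian term with a multiplier beta >= 0, and optimizing beta by gradient ascent tightens the verified lower bound:

```python
import torch

def beta_crown_style_bound(obj_lb_fn, num_splits, steps=50, lr=0.1):
    """Toy version of the optimizable-beta idea. `obj_lb_fn(beta)` is a
    placeholder for a bound-propagation routine returning a lower bound
    that is valid for any beta >= 0; ascending on beta tightens it."""
    beta = torch.zeros(num_splits, requires_grad=True)
    opt = torch.optim.Adam([beta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        lb = obj_lb_fn(beta.clamp(min=0))
        (-lb).backward()  # gradient ascent on the lower bound
        opt.step()
    return obj_lb_fn(beta.detach().clamp(min=0)).item()

# Illustrative concave surrogate: optimizing beta tightens -1.0 to -0.5.
print(beta_crown_style_bound(lambda b: -1.0 + b.sum() - (b ** 2).sum(), 2))
```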
no code implementations • ICML Workshop AML 2021 • Mohammad Sadegh Norouzzadeh, Wan-Yi Lin, Leonid Boytsov, Leslie Rice, Huan Zhang, Filipe Condessa, J Zico Kolter
Most pre-trained classifiers, though they may work extremely well on the domain they were trained on, are not trained in a robust fashion and are therefore sensitive to adversarial attacks.
no code implementations • ICML Workshop AML 2021 • Wan-Yi Lin, Fatemeh Sheikholeslami, Jinghao Shi, Leslie Rice, J Zico Kolter
This paper proposes a certifiable defense against adversarial patch attacks on image classification.
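For intuition, here is a generic sketch of one common style of certified patch defense, classifying many crops or ablations of the image and certifying by voting margin; this is illustrative and not necessarily the paper's exact mechanism:

```python
import torch

def certified_patch_vote(model, crops, patch_hits, num_classes):
    """Classify many crops/ablations of an image and certify the
    prediction if the top class's vote margin exceeds twice the number
    of crops a single patch could intersect (`patch_hits`): the patch
    can at most remove that many votes from the winner and add that
    many to the runner-up."""
    votes = torch.zeros(num_classes)
    for crop in crops:
        votes[model(crop).argmax()] += 1
    top2, _ = votes.topk(2)
    certified = (top2[0] - top2[1]) > 2 * patch_hits
    return votes.argmax().item(), bool(certified)
```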
no code implementations • ICLR 2021 • Chirag Pabbaraju, Ezra Winston, J Zico Kolter
Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees, generalization bounds, and characterize the smoothness of decision boundaries.
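As a concrete building block, a sketch of estimating a single layer's spectral norm (its Lipschitz constant with respect to the L2 norm) by power iteration; composing such per-layer values yields the loose product bound that the tighter methods discussed here improve upon:

```python
import torch

def spectral_norm(W, iters=100):
    """Estimate the largest singular value of W by power iteration."""
    u = torch.randn(W.shape[0])
    for _ in range(iters):
        v = W.T @ u
        v = v / v.norm()
        u = W @ v
        u = u / u.norm()
    return (u @ (W @ v)).item()
```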
no code implementations • 1 Jan 2021 • Wan-Yi Lin, Fatemeh Sheikholeslami, Jinghao Shi, Leslie Rice, J Zico Kolter
Our method improves upon the current state of the art in defending against patch attacks on CIFAR10 and ImageNet, both in terms of certified accuracy and inference time.
3 code implementations • ICLR 2021 • Rizal Fathony, Anit Kumar Sahu, Devin Willmott, J Zico Kolter
Although deep networks are typically used to approximate functions over high-dimensional inputs, recent work has generated increased interest in neural networks as function approximators for low-dimensional-but-complex functions, such as representing images as a function of pixel coordinates, solving differential equations, or representing signed distance fields and neural radiance fields.
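A minimal example of this setting, a plain coordinate MLP mapping pixel coordinates to RGB; the paper itself proposes a different, more effective architecture than this baseline:

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Coordinate network: maps (x, y) pixel coordinates in [-1, 1] to
    RGB values, the low-dimensional-but-complex function approximation
    task described above."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, coords):  # coords: (N, 2)
        return self.net(coords)

# Fit to an image by sampling (coordinate, pixel) pairs:
model = CoordinateMLP()
coords = torch.rand(1024, 2) * 2 - 1
pred_rgb = model(coords)  # (1024, 3)
```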
1 code implementation • ICLR 2021 • Fatemeh Sheikholeslami, Ali Lotfi, J Zico Kolter
Adversarial attacks against deep networks can be defended against either by building robust classifiers or by creating classifiers that can detect the presence of adversarial perturbations.
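As a toy illustration of the detection paradigm only (this is not the paper's detector), flagging inputs by softmax confidence:

```python
import torch

def confidence_detector(model, x, threshold=0.9):
    """Flag inputs whose maximum softmax probability falls below a
    threshold. Purely illustrative; the paper's method is far more
    sophisticated than a fixed confidence test."""
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=-1)
    return probs.max(dim=-1).values < threshold  # True = flagged
```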