On the popular FEMNIST dataset, we demonstrate that on average we successfully recover >45% of the client's images from realistic FedAvg updates computed over 10 local epochs of 10 batches with 5 images each, compared to only <10% for the baseline.
Tree-based models are used in many high-stakes application domains such as finance and medicine, where robustness and interpretability are of utmost importance.
State-of-the-art neural network verifiers are fundamentally based on one of two paradigms: either encoding the whole verification problem via tight multi-neuron convex relaxations or applying a Branch-and-Bound (BaB) procedure leveraging imprecise but fast bounding methods on a large number of easier subproblems.
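As a hedged illustration of the BaB paradigm (not any particular verifier's implementation), the following Python sketch proves an output bound for a toy ReLU network by pairing fast interval bounding with input-domain splitting; the network weights and the branching rule are our own illustrative choices.

```python
# Minimal Branch-and-Bound (BaB) verification sketch: cheap interval bounds
# on many subproblems obtained by splitting the input box. Toy network only.
import numpy as np

# toy 2-layer ReLU network: R^2 -> R
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.1, 0.3])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.05])

def output_lower_bound(lo, hi):
    """Fast but imprecise lower bound on the output over the box [lo, hi]."""
    c, r = (lo + hi) / 2, (hi - lo) / 2
    lo1 = W1 @ c + b1 - np.abs(W1) @ r
    hi1 = W1 @ c + b1 + np.abs(W1) @ r
    lo1, hi1 = np.maximum(lo1, 0), np.maximum(hi1, 0)   # ReLU
    c, r = (lo1 + hi1) / 2, (hi1 - lo1) / 2
    return (W2 @ c + b2 - np.abs(W2) @ r)[0]

def bab_verify(lo, hi, budget=10_000):
    """Try to prove output > 0 on [lo, hi] by splitting the widest dimension."""
    queue = [(lo, hi)]
    while queue and budget > 0:
        budget -= 1
        lo, hi = queue.pop()
        if output_lower_bound(lo, hi) > 0:
            continue                                    # subproblem proved
        d = int(np.argmax(hi - lo))                     # branching heuristic
        mid_lo, mid_hi = lo.copy(), hi.copy()
        mid_hi[d] = mid_lo[d] = (lo[d] + hi[d]) / 2
        queue += [(lo, mid_hi), (mid_lo, hi)]           # two easier subproblems
    return not queue                                    # True iff all proved

print(bab_verify(np.array([-0.2, 0.4]), np.array([0.2, 0.8])))
```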
To address this key challenge, we propose to train a bug detector in two phases, first on a synthetic bug distribution to adapt the model to the bug detection domain, and then on a real bug distribution to drive the model towards the real distribution.
Randomized Smoothing (RS) is considered the state-of-the-art approach to obtain certifiably robust models for challenging tasks.
Our experiments demonstrate that LAMP reconstructs the original text significantly more precisely than prior work: we recover 5x more bigrams and $23\%$ longer subsequences on average.
Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning.
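A minimal sketch of how IBP propagates a box through one affine layer followed by ReLU; the weights below are illustrative, not taken from any benchmark.

```python
# Interval bound propagation through x -> relu(W x + b): the affine layer is
# handled via center/radius arithmetic, and ReLU, being monotone, maps
# interval endpoints to endpoints.
import numpy as np

def affine_interval(lo, hi, W, b):
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius        # worst-case growth of the radius
    return c - r, c + r

def relu_interval(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, -0.1])
lo, hi = np.array([-0.1, 0.2]), np.array([0.1, 0.4])
lo, hi = relu_interval(*affine_interval(lo, hi, W, b))
print(lo, hi)                     # sound elementwise output bounds
```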
This enables us to learn individually fair representations that map similar individuals close together by using adversarial training to minimize the distance between their representations.
We demonstrate that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients.
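For concreteness, a common instance of such an attack is gradient matching, sketched below in PyTorch under simplifying assumptions of our own (a single linear model and a known label): the adversary optimizes a dummy input so that its gradient matches the one observed by the server.

```python
# Gradient-matching leakage sketch: recover a private input from its gradient.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)               # stand-in for the attacked model
loss_fn = torch.nn.CrossEntropyLoss()
x_true = torch.randn(1, 10)                  # the private input to recover
y_true = torch.tensor([1])                   # simplification: label is known

# gradient the server observes
g_obs = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

x_hat = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_hat], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    g_hat = torch.autograd.grad(loss_fn(model(x_hat), y_true),
                                model.parameters(), create_graph=True)
    match = sum(((a - b) ** 2).sum() for a, b in zip(g_hat, g_obs))
    match.backward()                         # gradient of the matching loss
    opt.step()

print(torch.norm(x_hat.detach() - x_true))   # small => successful recovery
```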
Monotone Operator Equilibrium Models (monDEQs) represent a class of models combining the powerful deep equilibrium paradigm with convergence guarantees.
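The core computation is a fixed point $z^* = \mathrm{ReLU}(W z^* + U x + b)$. The sketch below finds it by plain Picard iteration on a toy instance where we force the spectral norm of W below 1 to guarantee convergence; actual monDEQs instead parameterize W so that the associated operator is monotone and use operator-splitting solvers.

```python
# Toy equilibrium-model forward pass: iterate z -> relu(W z + U x + b).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W = 0.9 * W / np.linalg.norm(W, 2)    # spectral norm < 1 => contraction here
U = rng.standard_normal((8, 4))
b = rng.standard_normal(8)

def equilibrium_forward(x, iters=200):
    z = np.zeros(8)
    for _ in range(iters):
        z = np.maximum(W @ z + U @ x + b, 0)
    return z

x = rng.standard_normal(4)
z_star = equilibrium_forward(x)
print(np.allclose(z_star, np.maximum(W @ z_star + U @ x + b, 0)))  # fixed point
```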
Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation by propagating a convex set of reachable values at each layer.
The problem of fixing errors in programs has attracted substantial interest over the years.
In this work, we address this challenge and introduce 3DCertify, the first verifier able to certify the robustness of point cloud models.
Formal verification of neural networks is critical for their safe adoption in real-world applications.
Reliable evaluation of adversarial defenses is a challenging task: it is currently limited either to experts who manually craft attacks exploiting the defense's inner workings, or to approaches based on an ensemble of fixed attacks, none of which may be effective for the specific defense at hand.
Further, we investigate the possibility of designing and training with relaxations that are tight, continuous and not sensitive.
In this work, we propose a new architecture which addresses this challenge and enables one to boost the certified robustness of any state-of-the-art deep network, while controlling the overall accuracy loss, without requiring retraining.
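One way to read this (our illustrative sketch, not necessarily the paper's exact design) is a compositional architecture in which a selection function routes each input either to a certifiably robust branch or to an accurate off-the-shelf branch:

```python
# Illustrative routing between a certifiable network and an accurate core
# network; both networks and the gating rule here are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
W_cert = rng.standard_normal((2, 4)) * 0.1   # stand-in certifiable network
W_core = rng.standard_normal((2, 4))         # stand-in accurate core network

def selector(x, threshold=1.0):
    # route "easy" (low-norm) inputs to the certifiable branch; in practice
    # the selector itself must be certifiable so routing cannot be flipped
    # adversarially
    return np.linalg.norm(x, ord=np.inf) <= threshold

def forward(x):
    return (W_cert if selector(x) else W_core) @ x

print(forward(rng.standard_normal(4)))
```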
Recent work introduces zkay, a system for specifying and enforcing data privacy in smart contracts.
We introduce the concept of provably robust adversarial examples for deep neural networks -- connected input regions constructed from standard adversarial examples which are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations).
GPUPoly scales to large networks: for example, it can prove the robustness of a 1M-neuron, 34-layer deep residual network in approximately 34.5 ms. We believe GPUPoly is a promising step towards practical verification of real-world neural networks.
We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and nonlinear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions for each neuron.
We experimentally show that this training method, named convex layerwise adversarial training (COLT), is promising and achieves the best of both worlds -- it produces a state-of-the-art neural network with certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation.
We develop an effective method for generating adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values.
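A minimal sketch of this style of attack, under our own simplifying assumptions (a toy per-step classifier and an FGSM-style loop): the perturbation ascends the summed KL divergence between the clean and perturbed output distributions while staying in a small L-infinity ball.

```python
# Attack a model whose output is a sequence of probability distributions by
# maximizing the KL divergence from the clean outputs, step by step.
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(),
                            torch.nn.Linear(16, 5))    # toy per-step classifier

x = torch.randn(4, 8)                       # sequence of 4 steps, 8 features
with torch.no_grad():
    p_clean = torch.log_softmax(model(x), dim=-1).exp()

delta = torch.zeros_like(x, requires_grad=True)
for _ in range(50):
    logq = torch.log_softmax(model(x + delta), dim=-1)
    # sum of per-step KL(p_clean || q_perturbed)
    kl = torch.sum(p_clean * (torch.log(p_clean + 1e-12) - logq))
    kl.backward()
    with torch.no_grad():
        delta += 0.01 * delta.grad.sign()   # FGSM-style ascent step
        delta.clamp_(-0.1, 0.1)             # project to the L-infinity ball
        delta.grad.zero_()
print(kl.item())                            # divergence achieved by the attack
```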
We extend randomized smoothing to cover parameterized transformations (e.g., rotations, translations) and certify robustness in the parameter space (e.g., rotation angle).
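A minimal sketch of parameter-space smoothing for rotations, with a hypothetical stand-in base classifier: sample Gaussian rotation angles, classify each rotated input, and take a majority vote (a real certificate would additionally bound the vote margin).

```python
# Smoothing over the rotation-angle parameter rather than pixel noise.
import numpy as np
from scipy.ndimage import rotate

def classify(img):
    """Hypothetical base classifier; a real one would be a trained network."""
    return int(img.mean() > 0.05)

def smoothed_classify(img, sigma=10.0, n=100, seed=0):
    rng = np.random.default_rng(seed)
    votes = np.zeros(2, dtype=int)
    for angle in rng.normal(0.0, sigma, size=n):       # angles in degrees
        votes[classify(rotate(img, angle, reshape=False))] += 1
    return int(votes.argmax())

img = np.zeros((28, 28)); img[10:18, 10:18] = 1.0
print(smoothed_classify(img))
```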
That is, our method enables the data producer to learn and certify a representation where for a data point all similar individuals are at $\ell_\infty$-distance at most $\epsilon$, thus allowing data consumers to certify individual fairness by proving $\epsilon$-robustness of their classifier.
Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, such as finding and fixing bugs, code completion, decompilation, type inference, and many others.
We explore a new domain: learning to infer user interface attributes, which helps developers automate the process of user interface implementation.
The use of neural networks in safety-critical computer vision systems calls for their robustness certification against natural geometric transformations (e.g., rotation, scaling).
We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks.
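For intuition, the single-neuron case (k = 1) recovered by such frameworks is the well-known triangle relaxation of $y = \max(x, 0)$ under precomputed bounds $l \le x \le u$ with $l < 0 < u$; the gain of k-ReLU comes from relaxing k ReLUs jointly, capturing correlations between neurons that per-neuron constraints lose:

```latex
% Triangle relaxation of y = max(x, 0), given l <= x <= u with l < 0 < u:
y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - l)}{u - l}
```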
In deep reinforcement learning (RL), adversarial attacks can trick an agent into unwanted states and disrupt training.
We present a novel statistical certification method that generalizes prior work based on smoothing to handle richer perturbations.
We present a novel approach for verification of neural networks which combines scalable over-approximation methods with precise (mixed integer) linear programming.
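For reference, the textbook big-M mixed-integer encoding of a single ReLU $y = \max(x, 0)$, given bounds $l \le x \le u$ (with $l < 0 < u$) supplied by the over-approximation step, illustrates how the two components combine; this is the standard formulation, not necessarily the paper's exact encoding:

```latex
% Big-M MILP encoding of y = max(x, 0) with precomputed bounds l <= x <= u:
y \ge 0, \qquad y \ge x, \qquad y \le u\,a, \qquad
y \le x - l\,(1 - a), \qquad a \in \{0, 1\}
```

Fixing the binary variable $a$ to 0 or 1 collapses the constraints to the inactive phase ($y = 0$, $x \le 0$) or the active phase ($y = x$, $x \ge 0$), which is exactly what the solver's branching exploits; tighter bounds $l, u$ from the over-approximation make the encoding easier to solve.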
We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.
As deep neural networks have become the state of the art for solving complex reinforcement learning tasks, susceptibility to perceptual adversarial examples has become a concern.
To address this problem, we present Securify, a security analyzer for Ethereum smart contracts that is scalable, fully automated, and able to prove contract behaviors as safe/unsafe with respect to a given property.
We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine's interpretable components.
Our algorithm asks at most $|F| \cdot \mathrm{OPT}(F_\vee)$ membership queries, where $\mathrm{OPT}(F_\vee)$ is the minimum worst-case number of membership queries for learning $F_\vee$.
In this paper we present a new, automated approach for creating static analyzers: instead of manually providing the various inference rules of the analyzer, the key idea is to learn these rules from a dataset of programs.