Search Results for author: Martin Vechev

Found 54 papers, 26 papers with code

Data Leakage in Federated Averaging

no code implementations24 Jun 2022 Dimitar I. Dimitrov, Mislav Balunović, Nikola Konstantinov, Martin Vechev

On the popular FEMNIST dataset, we demonstrate that on average we successfully recover >45% of the client's images from realistic FedAvg updates computed on 10 local epochs of 10 batches each with 5 images, compared to only <10% using the baseline.
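
To make the setting concrete, here is a minimal sketch (plain PyTorch; all names are illustrative, not the paper's code) of the local FedAvg update that an honest-but-curious server would try to invert:

```python
import torch

def fedavg_client_update(model, loss_fn, batches, epochs=10, lr=0.1):
    """Simulate the local FedAvg update that the server observes, e.g.
    10 epochs over 10 batches of 5 images each, matching the setting
    above. Illustrative sketch only, not the paper's code."""
    start = {k: v.detach().clone() for k, v in model.state_dict().items()}
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in batches:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # The attack reconstructs the private images from this weight delta.
    return {k: v - start[k] for k, v in model.state_dict().items()}
```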

Federated Learning

(De-)Randomized Smoothing for Decision Stump Ensembles

no code implementations27 May 2022 Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev

Tree-based models are used in many high-stakes application domains such as finance and medicine, where robustness and interpretability are of utmost importance.

Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound

1 code implementation ICLR 2022 Claudio Ferrari, Mark Niklas Müller, Nikola Jovanović, Martin Vechev

State-of-the-art neural network verifiers are fundamentally based on one of two paradigms: either encoding the whole verification problem via tight multi-neuron convex relaxations or applying a Branch-and-Bound (BaB) procedure leveraging imprecise but fast bounding methods on a large number of easier subproblems.
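
A rough skeleton of the BaB side of this trade-off, with the bounding method left abstract (an illustrative Python sketch, not the paper's verifier):

```python
from heapq import heappush, heappop
from itertools import count

def branch_and_bound(lower_bound, split, root, max_iter=10_000):
    """Generic BaB skeleton: prove a property's lower bound is positive on
    an input region by repeatedly splitting the subproblem with the weakest
    bound. `lower_bound` stands in for any fast, imprecise bounding method;
    `split` is assumed to bisect a region into non-empty subregions."""
    tie = count()  # tie-breaker so heapq never compares regions directly
    queue = [(lower_bound(root), next(tie), root)]
    for _ in range(max_iter):
        bound, _, region = heappop(queue)
        if bound > 0:
            return True  # weakest open subproblem verified, hence all are
        for sub in split(region):
            heappush(queue, (lower_bound(sub), next(tie), sub))
    return False  # budget exhausted: inconclusive
```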

On Distribution Shift in Learning-based Bug Detectors

1 code implementation21 Apr 2022 Jingxuan He, Luca Beurer-Kellner, Martin Vechev

To address this key challenge, we propose to train a bug detector in two phases, first on a synthetic bug distribution to adapt the model to the bug detection domain, and then on a real bug distribution to drive the model towards the real distribution.
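
A minimal sketch of such a two-phase schedule, assuming a generic `train_epoch` step; the epoch counts are placeholders, not the paper's values:

```python
def train_two_phase(model, synthetic_loader, real_loader, train_epoch,
                    synthetic_epochs=20, real_epochs=5):
    """Phase 1 adapts the model on a synthetic bug distribution; phase 2
    fine-tunes on the (scarcer) real bug distribution, as described above.
    `train_epoch` is any standard supervised training step."""
    for _ in range(synthetic_epochs):  # phase 1: synthetic bugs
        train_epoch(model, synthetic_loader)
    for _ in range(real_epochs):       # phase 2: real bug distribution
        train_epoch(model, real_loader)
    return model
```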

Contrastive Learning

Robust and Accurate -- Compositional Architectures for Randomized Smoothing

1 code implementation1 Apr 2022 Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev

Randomized Smoothing (RS) is considered the state-of-the-art approach to obtain certifiably robust models for challenging tasks.

LAMP: Extracting Text from Gradients with Language Model Priors

no code implementations17 Feb 2022 Dimitar I. Dimitrov, Mislav Balunović, Nikola Jovanović, Martin Vechev

Our experiments demonstrate that LAMP reconstructs the original text significantly more precisely than prior work: we recover 5x more bigrams and $23\%$ longer subsequences on average.

Federated Learning Language Modelling

The Fundamental Limits of Interval Arithmetic for Neural Networks

no code implementations9 Dec 2021 Matthew Mirman, Maximilian Baader, Martin Vechev

Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning.
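
For reference, interval bound propagation through an affine layer and a ReLU can be sketched in a few lines (generic NumPy, not any particular verifier):

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b using standard
    interval arithmetic, the core operation of IBP."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints to endpoints.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)
```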

Latent Space Smoothing for Individually Fair Representations

1 code implementation26 Nov 2021 Momchil Peychev, Anian Ruoss, Mislav Balunović, Maximilian Baader, Martin Vechev

This enables us to learn individually fair representations that map similar individuals close together by using adversarial training to minimize the distance between their representations.

Fairness Representation Learning

Bayesian Framework for Gradient Leakage

1 code implementation ICLR 2022 Mislav Balunović, Dimitar I. Dimitrov, Robin Staab, Martin Vechev

We demonstrate that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients.

Federated Learning

Effective Certification of Monotone Deep Equilibrium Models

no code implementations14 Oct 2021 Mark Niklas Müller, Robin Staab, Marc Fischer, Martin Vechev

Monotone Operator Equilibrium Models (monDEQs) represent a class of models combining the powerful deep equilibrium paradigm with convergence guarantees.

Avoiding Robust Misclassifications for Improved Robustness without Accuracy Loss

no code implementations29 Sep 2021 Yannick Merkli, Pavol Bielik, Petar Tsankov, Martin Vechev

Our results show that our method effectively reduces robust and inaccurate samples by up to 97.28%.

Shared Certificates for Neural Network Verification

no code implementations1 Sep 2021 Christian Sprecher, Marc Fischer, Dimitar I. Dimitrov, Gagandeep Singh, Martin Vechev

Existing neural network verifiers compute a proof that each input is handled correctly under a given perturbation by propagating a convex set of reachable values at each layer.

Scalable Certified Segmentation via Randomized Smoothing

1 code implementation1 Jul 2021 Marc Fischer, Maximilian Baader, Martin Vechev

We present a new certification method for image and point cloud segmentation based on randomized smoothing.

Point Cloud Segmentation

Boosting Randomized Smoothing with Variance Reduced Classifiers

1 code implementation ICLR 2022 Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev

Randomized Smoothing (RS) is a promising method for obtaining robustness certificates by evaluating a base model under noise.
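
The basic prediction rule being boosted here can be sketched as a Monte Carlo majority vote (a standard randomized-smoothing sketch; the abstention test and certificate computation are omitted):

```python
import torch

def smoothed_predict(base_model, x, sigma=0.25, n_samples=1000, num_classes=10):
    """Majority vote of the base classifier under Gaussian input noise:
    the core prediction rule of randomized smoothing. `x` is assumed to
    be a single batched input, e.g. shape [1, C, H, W]."""
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            counts[base_model(noisy).argmax()] += 1
    return counts.argmax().item()
```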

Fair Normalizing Flows

1 code implementation ICLR 2022 Mislav Balunović, Anian Ruoss, Martin Vechev

Fair representation learning is an attractive approach that promises fairness of downstream predictors by encoding sensitive data.

Fairness Representation Learning +1

Robustness Certification for Point Cloud Models

1 code implementation ICCV 2021 Tobias Lorenz, Anian Ruoss, Mislav Balunović, Gagandeep Singh, Martin Vechev

In this work, we address this challenge and introduce 3DCertify, the first verifier able to certify the robustness of point cloud models.

Automated Discovery of Adaptive Attacks on Adversarial Defenses

1 code implementation NeurIPS 2021 Chengyuan Yao, Pavol Bielik, Petar Tsankov, Martin Vechev

Reliable evaluation of adversarial defenses is a challenging task, currently limited to experts who manually craft attacks that exploit the defense's inner workings, or to approaches based on an ensemble of fixed attacks, none of which may be effective for the specific defense at hand.

Certified Defenses: Why Tighter Relaxations May Hurt Training

no code implementations12 Feb 2021 Nikola Jovanović, Mislav Balunović, Maximilian Baader, Martin Vechev

Further, we investigate the possibility of designing and training with relaxations that are tight, continuous and not sensitive.

PODS: Policy Optimization via Differentiable Simulation

no code implementations1 Jan 2021 Miguel Angel Zamora Mora, Momchil Peychev, Sehoon Ha, Martin Vechev, Stelian Coros

Current reinforcement learning (RL) methods use simulation models as simple black-box oracles.

Boosting Certified Robustness of Deep Networks via a Compositional Architecture

no code implementations ICLR 2021 Mark Niklas Müller, Mislav Balunović, Martin Vechev

In this work, we propose a new architecture which addresses this challenge and enables one to boost the certified robustness of any state-of-the-art deep network, while controlling the overall accuracy loss, without requiring retraining.
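
A minimal sketch of the compositional idea, with the selection mechanism and both networks as placeholders (the paper's actual routing and certification logic is more involved):

```python
import torch

def compositional_forward(certified_net, accurate_net, selector, x):
    """Route each input either to a certifiably robust network or to a
    more accurate standard one, based on a selection score in [0, 1].
    Illustrative only; both networks are evaluated on all inputs here."""
    gate = (selector(x) > 0.5).unsqueeze(-1)  # [N, 1] boolean mask
    return torch.where(gate, certified_net(x), accurate_net(x))
```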

Efficient Certification of Spatial Robustness

1 code implementation19 Sep 2020 Anian Ruoss, Maximilian Baader, Mislav Balunović, Martin Vechev

Recent work has exposed the vulnerability of computer vision models to vector field attacks.

zkay v0.2: Practical Data Privacy for Smart Contracts

1 code implementation2 Sep 2020 Nick Baumann, Samuel Steffen, Benjamin Bichsel, Petar Tsankov, Martin Vechev

Recent work introduces zkay, a system for specifying and enforcing data privacy in smart contracts.

Programming Languages Cryptography and Security

Provably Robust Adversarial Examples

no code implementations ICLR 2022 Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev

We introduce the concept of provably robust adversarial examples for deep neural networks -- connected input regions constructed from standard adversarial examples which are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations).

Scaling Polyhedral Neural Network Verification on GPUs

no code implementations20 Jul 2020 Christoph Müller, François Serre, Gagandeep Singh, Markus Püschel, Martin Vechev

GPUPoly scales to large networks: for example, it can prove the robustness of a 1M-neuron, 34-layer deep residual network in approximately 34.5 ms. We believe GPUPoly is a promising step towards practical verification of real-world neural networks.

Autonomous Driving Medical Diagnosis

Scalable Polyhedral Verification of Recurrent Neural Networks

1 code implementation27 May 2020 Wonryong Ryou, Jiayu Chen, Mislav Balunović, Gagandeep Singh, Andrei Dan, Martin Vechev

We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and nonlinear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient-descent-based algorithm for abstraction refinement, guided by the certification problem, that combines multiple abstractions for each neuron.

Guiding Program Synthesis by Learning to Generate Examples

1 code implementation ICLR 2020 Larissa Laich, Pavol Bielik, Martin Vechev

A key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well.

Program Synthesis

Adversarial Training and Provable Defenses: Bridging the Gap

1 code implementation ICLR 2020 Mislav Balunović, Martin Vechev

We experimentally show that this training method, named convex layerwise adversarial training (COLT), is promising and achieves the best of both worlds -- it produces a state-of-the-art neural network with certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 $\ell_\infty$ perturbation.

Robustness Certification of Generative Models

no code implementations30 Apr 2020 Matthew Mirman, Timon Gehr, Martin Vechev

Generative neural networks can be used to specify continuous transformations between images via latent-space interpolation.

Adversarial Attacks on Probabilistic Autoregressive Forecasting Models

1 code implementation ICML 2020 Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev

We develop an effective generation of adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values.

Decision Making Time Series

Certified Defense to Image Transformations via Randomized Smoothing

1 code implementation NeurIPS 2020 Marc Fischer, Maximilian Baader, Martin Vechev

We extend randomized smoothing to cover parameterized transformations (e.g., rotations, translations) and certify robustness in the parameter space (e.g., rotation angle).
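
A sketch of parameter-space smoothing for rotations, assuming a torchvision-style rotate and a classifier taking batched tensors (hyperparameters are illustrative; the certification machinery is omitted):

```python
import torch
import torchvision.transforms.functional as TF

def smoothed_rotation_predict(model, img, sigma_deg=10.0, n_samples=500,
                              num_classes=10):
    """Sample Gaussian noise on the rotation angle rather than on pixels,
    in the spirit of the method above. `img` is a batched image tensor,
    e.g. shape [1, C, H, W]."""
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            angle = float(torch.randn(()) * sigma_deg)  # degrees
            counts[model(TF.rotate(img, angle)).argmax()] += 1
    return counts.argmax().item()
```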

Provable Adversarial Defense

Learning Certified Individually Fair Representations

1 code implementation NeurIPS 2020 Anian Ruoss, Mislav Balunović, Marc Fischer, Martin Vechev

That is, our method enables the data producer to learn and certify a representation in which, for each data point, all similar individuals lie at $\ell_\infty$-distance at most $\epsilon$, thus allowing data consumers to certify individual fairness by proving $\epsilon$-robustness of their classifier.
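
The resulting division of labor can be sketched as follows, with `certified_radius` standing in for any off-the-shelf robustness verifier (illustrative only):

```python
def consumer_certifies_fairness(certified_radius, z, eps):
    """The producer's certified encoder guarantees that similar individuals
    map within l-infinity distance eps of the representation z; if the
    consumer's classifier has a certified l-infinity radius of at least eps
    at z, every similar individual receives the same prediction."""
    return certified_radius(z) >= eps
```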

Fairness Representation Learning

Adversarial Robustness for Code

1 code implementation ICML 2020 Pavol Bielik, Martin Vechev

Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, such as finding and fixing bugs, code completion, decompilation, type inference, and many others.

Adversarial Robustness BIG-bench Machine Learning +1

Learning to Infer User Interface Attributes from Images

no code implementations31 Dec 2019 Philippe Schlattner, Pavol Bielik, Martin Vechev

We explore a new domain of learning to infer user interface attributes that helps developers automate the process of user interface implementation.

Imitation Learning

Certifying Geometric Robustness of Neural Networks

1 code implementation NeurIPS 2019 Mislav Balunović, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev

The use of neural networks in safety-critical computer vision systems calls for their robustness certification against natural geometric transformations (e.g., rotation, scaling).

Beyond the Single Neuron Convex Barrier for Neural Network Certification

1 code implementation NeurIPS 2019 Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev

We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks.

Online Robustness Training for Deep Reinforcement Learning

no code implementations3 Nov 2019 Marc Fischer, Matthew Mirman, Steven Stalder, Martin Vechev

In deep reinforcement learning (RL), adversarial attacks can trick an agent into unwanted states and disrupt training.

reinforcement-learning

Universal Approximation with Certified Networks

1 code implementation ICLR 2020 Maximilian Baader, Matthew Mirman, Martin Vechev

To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.

Statistical Verification of General Perturbations by Gaussian Smoothing

no code implementations25 Sep 2019 Marc Fischer, Maximilian Baader, Martin Vechev

We present a novel statistical certification method that generalizes prior work based on smoothing to handle richer perturbations.

Verification of Generative-Model-Based Visual Transformations

no code implementations25 Sep 2019 Matthew Mirman, Timon Gehr, Martin Vechev

Generative networks are promising models for specifying visual transformations.

Robustness Certification with Refinement

no code implementations ICLR 2019 Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

We present a novel approach for verification of neural networks which combines scalable over-approximation methods with precise (mixed integer) linear programming.

A Provable Defense for Deep Residual Networks

1 code implementation29 Mar 2019 Matthew Mirman, Gagandeep Singh, Martin Vechev

We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.

Adversarial Defense Novel Concepts

Fast and Effective Robustness Certification

no code implementations NeurIPS 2018 Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.
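
The zonotope domain underlying DeepZ admits an exact affine transformer, sketched here (a sound verifier additionally needs transformers for ReLU and other activations):

```python
import numpy as np

def zonotope_affine(center, generators, W, b):
    """Affine transformer for the zonotope
    {center + generators @ e : e in [-1, 1]^k}:
    the center is mapped exactly and each generator (column) is mapped
    linearly, with no loss of precision. Illustrative sketch only."""
    return W @ center + b, W @ generators
```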

Distilled Agent DQN for Provable Adversarial Robustness

no code implementations27 Sep 2018 Matthew Mirman, Marc Fischer, Martin Vechev

As deep neural networks have become the state of the art for solving complex reinforcement learning tasks, susceptibility to perceptual adversarial examples has become a concern.

Adversarial Robustness reinforcement-learning

Training Neural Machines with Trace-Based Supervision

no code implementations ICML 2018 Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevic, Timon Gehr, Martin Vechev

We investigate the effectiveness of trace-based supervision methods for training existing neural abstract machines.

Differentiable Abstract Interpretation for Provably Robust Neural Networks

1 code implementation ICML 2018 Matthew Mirman, Timon Gehr, Martin Vechev

We introduce a scalable method for training robust neural networks based on abstract interpretation.
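
One standard instantiation of such training, sketched with the interval (box) domain: build worst-case logits from propagated output bounds and apply cross-entropy (the paper's system also supports richer abstract domains):

```python
import torch
import torch.nn.functional as F

def robust_cross_entropy(logit_lo, logit_hi, y):
    """Worst-case cross-entropy over a propagated output box: take the
    upper bound for every wrong class and the lower bound for the true
    class, then apply the usual loss. A generic sketch of provable
    training with intervals, not the paper's exact system."""
    idx = torch.arange(len(y))
    worst = logit_hi.clone()
    worst[idx, y] = logit_lo[idx, y]  # pessimal logits w.r.t. true label
    return F.cross_entropy(worst, y)
```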

Securify: Practical Security Analysis of Smart Contracts

2 code implementations4 Jun 2018 Petar Tsankov, Andrei Dan, Dana Drachsler-Cohen, Arthur Gervais, Florian Buenzli, Martin Vechev

To address this problem, we present Securify, a security analyzer for Ethereum smart contracts that is scalable, fully automated, and able to prove contract behaviors as safe/unsafe with respect to a given property.

Cryptography and Security

Training Neural Machines with Partial Traces

no code implementations ICLR 2018 Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevic, Timon Gehr, Martin Vechev

We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine's interpretable components.

Learning Disjunctions of Predicates

no code implementations15 Jun 2017 Nader H. Bshouty, Dana Drachsler-Cohen, Martin Vechev, Eran Yahav

Our algorithm asks at most $|F| \cdot OPT(F_\vee)$ membership queries, where $OPT(F_\vee)$ is the minimum worst-case number of membership queries needed to learn $F_\vee$.

Program Synthesis

Learning a Static Analyzer from Data

no code implementations6 Nov 2016 Pavol Bielik, Veselin Raychev, Martin Vechev

In this paper we present a new, automated approach for creating static analyzers: instead of manually providing the various inference rules of the analyzer, the key idea is to learn these rules from a dataset of programs.
