Search Results for author: Martin Vechev

Found 96 papers, 51 papers with code

AlphaIntegrator: Transformer Action Search for Symbolic Integration Proofs

no code implementations 3 Oct 2024 Mert Ünsal, Timon Gehr, Martin Vechev

We present the first correct-by-construction learning-based system for step-by-step mathematical integration.

Discovering Clues of Spoofed LM Watermarks

no code implementations 3 Oct 2024 Thibaud Gloaguen, Nikola Jovanović, Robin Staab, Martin Vechev

Namely, we show that regardless of their underlying approach, all current spoofing methods consistently leave observable artifacts in spoofed texts, indicative of watermark forgery.

Polyrating: A Cost-Effective and Bias-Aware Rating System for LLM Evaluation

no code implementations 1 Sep 2024 Jasper Dekoninck, Maximilian Baader, Martin Vechev

Rating-based human evaluation has become an essential tool to accurately evaluate the impressive performance of large language models (LLMs).

Practical Attacks against Black-box Code Completion Engines

no code implementations 5 Aug 2024 Slobodan Jenko, Jingxuan He, Niels Mündler, Mark Vero, Martin Vechev

Modern code completion engines, powered by large language models, have demonstrated impressive capabilities to generate functionally correct code based on surrounding context.

Code Completion

Mitigating Catastrophic Forgetting in Language Transfer via Model Merging

no code implementations 11 Jul 2024 Anton Alexandrov, Veselin Raychev, Mark Niklas Müller, Ce Zhang, Martin Vechev, Kristina Toutanova

As open-weight large language models (LLMs) achieve ever more impressive performance across a wide range of tasks in English, practitioners aim to adapt these models to different languages.

Code Agents are State of the Art Software Testers

no code implementations 18 Jun 2024 Niels Mündler, Mark Niklas Müller, Jingxuan He, Martin Vechev

Rigorous software testing is crucial for developing and maintaining high-quality code, making automated test generation a promising avenue for both improving software quality and boosting the effectiveness of code generation methods.

Code Generation, Code Repair, +1

A Synthetic Dataset for Personal Attribute Inference

2 code implementations 11 Jun 2024 Hanna Yukhymenko, Robin Staab, Mark Vero, Martin Vechev

Recently, powerful Large Language Models (LLMs) have become easily accessible to hundreds of millions of users worldwide.

Attribute, Personality Trait Recognition, +4

CTBENCH: A Library and Benchmark for Certified Training

no code implementations 7 Jun 2024 Yuhao Mao, Stefan Balauca, Martin Vechev

Training certifiably robust neural networks is an important but challenging task.

Exploiting LLM Quantization

no code implementations 28 May 2024 Kazuki Egashira, Mark Vero, Robin Staab, Jingxuan He, Martin Vechev

Quantization leverages lower-precision weights to reduce the memory usage of large language models (LLMs) and is a key technique for enabling their deployment on commodity hardware.

Code Generation, Quantization

Back to the Drawing Board for Fair Representation Learning

no code implementations 28 May 2024 Angéline Pouget, Nikola Jovanović, Mark Vero, Robin Staab, Martin Vechev

The evaluation of fair representation learning (FRL) methods in many recent works primarily focuses on the tradeoff between downstream fairness and accuracy with respect to a single task that was used to approximate the utility of representations during training (the proxy task).

Fairness, Representation Learning

ConStat: Performance-Based Contamination Detection in Large Language Models

no code implementations 25 May 2024 Jasper Dekoninck, Mark Niklas Müller, Martin Vechev

To overcome these limitations, we propose a novel definition of contamination as artificially inflated and non-generalizing benchmark performance instead of the inclusion of benchmark samples in the training data.

DAGER: Exact Gradient Inversion for Large Language Models

no code implementations 24 May 2024 Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev

Federated learning works by aggregating locally computed gradients from multiple clients, thus enabling collaborative training without sharing private client data.

Decoder, Federated Learning

Private Attribute Inference from Images with Vision-Language Models

no code implementations 16 Apr 2024 Batuhan Tömekçe, Mark Vero, Robin Staab, Martin Vechev

As large language models (LLMs) become ubiquitous in our daily tasks and digital interactions, associated privacy risks are increasingly in focus.

Attribute

Overcoming the Paradox of Certified Training with Gaussian Smoothing

no code implementations 11 Mar 2024 Stefan Balauca, Mark Niklas Müller, Yuhao Mao, Maximilian Baader, Marc Fischer, Martin Vechev

While scaling PGPE training remains challenging due to its high computational cost, we show that by using a smoothing approximation that is not theoretically sound yet much cheaper, we obtain better certified accuracies than state-of-the-art methods when training on the same network architecture.

SPEAR: Exact Gradient Inversion of Batches in Federated Learning

no code implementations 6 Mar 2024 Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev

In this work, we propose SPEAR, the first algorithm reconstructing whole batches with $b > 1$ exactly.

Federated Learning
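
The exactness claim rests on algebraic structure in the shared gradients. As a rough illustration (our own sketch, not SPEAR's algorithm): for a linear layer $y = Wx$, the weight gradient aggregated over a batch of size $b$ is a sum of $b$ rank-one outer products, so it has rank at most $b$; SPEAR exploits structure of this kind to disentangle the individual inputs. All names and shapes below are hypothetical.

```python
import numpy as np

# For y = W x, the batch gradient of W is sum_i delta_i x_i^T, where
# delta_i is the loss gradient at the layer output for example i. A sum
# of b rank-one terms has rank at most b, which a small demo confirms.
b, d_in, d_out = 4, 64, 32
xs = np.random.randn(b, d_in)         # private client inputs
deltas = np.random.randn(b, d_out)    # per-example output gradients
grad_W = sum(np.outer(d, x) for d, x in zip(deltas, xs))
print(np.linalg.matrix_rank(grad_W))  # prints 4, i.e. at most b
```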

Watermark Stealing in Large Language Models

no code implementations 29 Feb 2024 Nikola Jovanović, Robin Staab, Martin Vechev

LLM watermarking has attracted attention as a promising way to detect AI-generated content, with some works suggesting that current schemes may already be fit for deployment.

Large Language Models are Advanced Anonymizers

no code implementations 21 Feb 2024 Robin Staab, Mark Vero, Mislav Balunović, Martin Vechev

Recent work in privacy research on large language models has shown that they achieve near human-level performance at inferring personal data from real-world online texts.

Text Anonymization

DeepCode AI Fix: Fixing Security Vulnerabilities with Large Language Models

no code implementations 19 Feb 2024 Berkay Berabi, Alexey Gronskiy, Veselin Raychev, Gishor Sivanrupan, Victor Chibotaru, Martin Vechev

We show that the task is difficult as it requires the model to learn long-range code relationships, a task that inherently relies on extensive amounts of training data.

Code Repair, Few-Shot Learning, +1

Instruction Tuning for Secure Code Generation

1 code implementation 14 Feb 2024 Jingxuan He, Mark Vero, Gabriela Krasnopolska, Martin Vechev

However, existing instruction tuning schemes overlook a crucial aspect: the security of generated code.

Code Generation

Guiding LLMs The Right Way: Fast, Non-Invasive Constrained Generation

no code implementations 7 Feb 2024 Luca Beurer-Kellner, Marc Fischer, Martin Vechev

To ensure that text generated by large language models (LLMs) is in an expected format, constrained decoding proposes to enforce strict formal language constraints during generation.
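
As a minimal sketch of the general idea (ours, not the paper's fast, non-invasive method): at each decoding step, tokens whose continuation would leave the formal language are masked out before sampling. Here `vocab`, `prefix`, and the predicate `allowed` (e.g., a regex or DFA membership test) are hypothetical.

```python
import numpy as np

def constrained_step(logits, vocab, prefix, allowed):
    # Mask tokens that would violate the constraint, then renormalize
    # the remaining distribution and sample from it. Assumes at least
    # one token keeps the output inside the language.
    mask = np.array([allowed(prefix + tok) for tok in vocab])
    masked = np.where(mask, logits, -np.inf)
    probs = np.exp(masked - masked.max())
    probs /= probs.sum()
    return vocab[np.random.choice(len(vocab), p=probs)]
```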

Evading Data Contamination Detection for Language Models is (too) Easy

2 code implementations 5 Feb 2024 Jasper Dekoninck, Mark Niklas Müller, Maximilian Baader, Marc Fischer, Martin Vechev

Large language models are widespread, with their performance on benchmarks frequently guiding user preferences for one model over another.

Controlled Text Generation via Language Model Arithmetic

1 code implementation 24 Nov 2023 Jasper Dekoninck, Marc Fischer, Luca Beurer-Kellner, Martin Vechev

In addition, the framework allows for more precise control of generated text than direct prompting and prior controlled text generation (CTG) techniques.

Language Modelling, Text Generation
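
The core primitive, composing the next-token distributions of several models, can be sketched in a few lines (a simplification under assumed APIs, not the paper's library): each component model contributes its logits with a signed weight, so one attribute can be amplified while another is suppressed.

```python
import numpy as np

def compose_next_token(logit_fns, weights, context):
    # Weighted sum of per-model next-token logits; a negative weight
    # steers generation away from that model's preferences.
    combined = sum(w * f(context) for w, f in zip(weights, logit_fns))
    combined -= combined.max()  # numerical stability
    probs = np.exp(combined)
    return probs / probs.sum()

# Hypothetical usage: favor a formal-style model, penalize a toxic one:
# probs = compose_next_token([base, formal, toxic], [1.0, 0.5, -0.5], ctx)
```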

From Principle to Practice: Vertical Data Minimization for Machine Learning

1 code implementation 17 Nov 2023 Robin Staab, Nikola Jovanović, Mislav Balunović, Martin Vechev

We propose a novel vertical DM (vDM) workflow based on data generalization, which by design ensures that no full-resolution client data is collected during training and deployment of models, benefiting client privacy by reducing the attack surface in case of a breach.

Automated Classification of Model Errors on ImageNet

1 code implementation NeurIPS 2023 Momchil Peychev, Mark Niklas Müller, Marc Fischer, Martin Vechev

To address this, new label-sets and evaluation protocols have been proposed for ImageNet, showing that state-of-the-art models already achieve over 95% accuracy and shifting the focus to investigating why the remaining errors persist.

Classification

Prompt Sketching for Large Language Models

no code implementations 8 Nov 2023 Luca Beurer-Kellner, Mark Niklas Müller, Marc Fischer, Martin Vechev

This way, sketching grants users more control over the generation process, e.g., by providing a reasoning framework via intermediate instructions, leading to better overall results.

Arithmetic Reasoning, Benchmarking, +3

Expressivity of ReLU-Networks under Convex Relaxations

no code implementations 7 Nov 2023 Maximilian Baader, Mark Niklas Müller, Yuhao Mao, Martin Vechev

We show that: (i) more advanced relaxations allow a larger class of univariate functions to be expressed as precisely analyzable ReLU networks, (ii) more precise relaxations can allow exponentially larger solution spaces of ReLU networks encoding the same functions, and (iii) even using the most precise single-neuron relaxations, it is impossible to construct precisely analyzable ReLU networks that express multivariate, convex, monotone CPWL functions.

Beyond Memorization: Violating Privacy Via Inference with Large Language Models

2 code implementations 11 Oct 2023 Robin Staab, Mark Vero, Mislav Balunović, Martin Vechev

In this work, we present the first comprehensive study on the capabilities of pretrained LLMs to infer personal attributes from text.

Memorization, Text Anonymization

CuTS: Customizable Tabular Synthetic Data Generation

1 code implementation 7 Jul 2023 Mark Vero, Mislav Balunović, Martin Vechev

To ensure high synthetic data quality in the presence of custom specifications, CuTS is pre-trained on the original dataset and fine-tuned on a differentiable loss automatically derived from the provided specifications using novel relaxations.

Fairness, Synthetic Data Generation, +1

Understanding Certified Training with Interval Bound Propagation

1 code implementation 17 Jun 2023 Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev

We then derive sufficient and necessary conditions on weight matrices for IBP bounds to become exact and demonstrate that these impose strong regularization, explaining the empirically observed trade-off between robustness and accuracy in certified training.
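
For reference, interval bound propagation itself is simple: an input box is pushed through the network layer by layer, exactly for a single affine layer and for monotone activations, with looseness accumulating only across layers. A minimal sketch (ours, not the paper's code):

```python
import numpy as np

def ibp_affine(W, b, lo, hi):
    # Exact box propagation through y = W x + b in center/radius form:
    # the center maps through the layer, the radius through |W|.
    c, r = (lo + hi) / 2, (hi - lo) / 2
    c_out, r_out = W @ c + b, np.abs(W) @ r
    return c_out - r_out, c_out + r_out

def ibp_relu(lo, hi):
    # ReLU is monotone, so applying it to both bounds is exact.
    return np.maximum(lo, 0), np.maximum(hi, 0)
```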

Hiding in Plain Sight: Disguising Data Stealing Attacks in Federated Learning

2 code implementations 5 Jun 2023 Kostadin Garov, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev

Malicious server (MS) attacks have enabled the scaling of data stealing in federated learning to large batch sizes and secure aggregation, settings previously considered private.

Decoder, Federated Learning

TAPS: Connecting Certified and Adversarial Training

2 code implementations 8 May 2023 Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev

Training certifiably robust neural networks remains a notoriously hard problem.

Efficient Certified Training and Robustness Verification of Neural ODEs

1 code implementation 9 Mar 2023 Mustafa Zeqiri, Mark Niklas Müller, Marc Fischer, Martin Vechev

Neural Ordinary Differential Equations (NODEs) are a novel neural architecture, built around initial value problems with learned dynamics which are solved during inference.

Time Series, Time Series Forecasting

Large Language Models for Code: Security Hardening and Adversarial Testing

1 code implementation 10 Feb 2023 Jingxuan He, Martin Vechev

The task is parametric and takes as input a binary property to guide the LM to generate secure or unsafe code, while preserving the LM's capability of generating functionally correct code.

Code Generation, Program Synthesis

Human-Guided Fair Classification for Natural Language Processing

1 code implementation 20 Dec 2022 Florian E. Dorner, Momchil Peychev, Nikola Konstantinov, Naman Goel, Elliott Ash, Martin Vechev

While existing research has started to address this gap, current methods are based on hardcoded word replacements, resulting in specifications with limited expressivity or ones that fail to fully align with human intuition (e.g., in cases of asymmetric counterfactuals).

Classification, Fairness, +1

Prompting Is Programming: A Query Language for Large Language Models

1 code implementation 12 Dec 2022 Luca Beurer-Kellner, Marc Fischer, Martin Vechev

We show that LMQL can capture a wide range of state-of-the-art prompting methods in an intuitive way, especially facilitating interactive flows that are challenging to implement with existing high-level APIs.

Code Generation, Language Modelling, +1

Private and Reliable Neural Network Inference

1 code implementation 27 Oct 2022 Nikola Jovanović, Marc Fischer, Samuel Steffen, Martin Vechev

We employ these building blocks to enable privacy-preserving NN inference with robustness and fairness guarantees in a system called Phoenix.

Fairness, Privacy Preserving

FARE: Provably Fair Representation Learning with Practical Certificates

1 code implementation 13 Oct 2022 Nikola Jovanović, Mislav Balunović, Dimitar I. Dimitrov, Martin Vechev

To produce a practical certificate, we develop and apply a statistical procedure that computes a finite sample high-confidence upper bound on the unfairness of any downstream classifier trained on FARE embeddings.

Fairness, Representation Learning

Certified Training: Small Boxes are All You Need

1 code implementation 10 Oct 2022 Mark Niklas Müller, Franziska Eckert, Marc Fischer, Martin Vechev

To obtain deterministic guarantees of adversarial robustness, specialized training methods are used.

Adversarial Robustness

TabLeak: Tabular Data Leakage in Federated Learning

1 code implementation 4 Oct 2022 Mark Vero, Mislav Balunović, Dimitar I. Dimitrov, Martin Vechev

A successful attack for tabular data must address two key challenges unique to the domain: (i) obtaining a solution to a high-variance mixed discrete-continuous optimization problem, and (ii) enabling human assessment of the reconstruction, since, unlike for image and text data, direct human inspection is not possible.

Federated Learning, Reconstruction Attack, +1

Data Leakage in Federated Averaging

1 code implementation 24 Jun 2022 Dimitar I. Dimitrov, Mislav Balunović, Nikola Konstantinov, Martin Vechev

On the popular FEMNIST dataset, we demonstrate that on average we successfully recover >45% of the client's images from realistic FedAvg updates computed on 10 local epochs of 10 batches each with 5 images, compared to only <10% using the baseline.

Federated Learning

(De-)Randomized Smoothing for Decision Stump Ensembles

1 code implementation 27 May 2022 Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev

Whereas most prior work on randomized smoothing focuses on evaluating arbitrary base models approximately under input randomization, the key insight of our work is that decision stump ensembles enable exact yet efficient evaluation via dynamic programming.
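
One ingredient behind that insight (our illustrative sketch, not the paper's dynamic program): because a stump compares a single coordinate to a threshold, its output distribution under input randomization has a closed form; the paper then aggregates the stumps' votes exactly via dynamic programming.

```python
from scipy.stats import norm

def stump_expectation(x_j, thresh, v_left, v_right, sigma):
    # Exact expectation of a decision stump on coordinate j under
    # Gaussian noise N(x_j, sigma^2): the noisy coordinate falls left
    # of the threshold with probability Phi((thresh - x_j) / sigma).
    p_left = norm.cdf((thresh - x_j) / sigma)
    return v_left * p_left + v_right * (1 - p_left)
```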

Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound

1 code implementation ICLR 2022 Claudio Ferrari, Mark Niklas Muller, Nikola Jovanovic, Martin Vechev

State-of-the-art neural network verifiers are fundamentally based on one of two paradigms: either encoding the whole verification problem via tight multi-neuron convex relaxations or applying a Branch-and-Bound (BaB) procedure leveraging imprecise but fast bounding methods on a large number of easier subproblems.

On Distribution Shift in Learning-based Bug Detectors

1 code implementation 21 Apr 2022 Jingxuan He, Luca Beurer-Kellner, Martin Vechev

To address this key challenge, we propose to train a bug detector in two phases, first on a synthetic bug distribution to adapt the model to the bug detection domain, and then on a real bug distribution to drive the model towards the real distribution.

Contrastive Learning

Robust and Accurate -- Compositional Architectures for Randomized Smoothing

1 code implementation 1 Apr 2022 Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev

Randomized Smoothing (RS) is considered the state-of-the-art approach to obtain certifiably robust models for challenging tasks.

LAMP: Extracting Text from Gradients with Language Model Priors

2 code implementations 17 Feb 2022 Mislav Balunović, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev

Recent work shows that sensitive user data can be reconstructed from gradient updates, breaking the key privacy promise of federated learning.

Federated Learning, Language Modelling

The Fundamental Limits of Interval Arithmetic for Neural Networks

no code implementations 9 Dec 2021 Matthew Mirman, Maximilian Baader, Martin Vechev

Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning.

Latent Space Smoothing for Individually Fair Representations

1 code implementation 26 Nov 2021 Momchil Peychev, Anian Ruoss, Mislav Balunović, Maximilian Baader, Martin Vechev

This enables us to learn individually fair representations that map similar individuals close together by using adversarial training to minimize the distance between their representations.

Fairness, Representation Learning

Bayesian Framework for Gradient Leakage

2 code implementations ICLR 2022 Mislav Balunović, Dimitar I. Dimitrov, Robin Staab, Martin Vechev

We demonstrate that existing leakage attacks can be seen as approximations of this optimal adversary with different assumptions on the probability distributions of the input data and gradients.

Federated Learning

Abstract Interpretation of Fixpoint Iterators with Applications to Neural Networks

1 code implementation 14 Oct 2021 Mark Niklas Müller, Marc Fischer, Robin Staab, Martin Vechev

We present a new abstract interpretation framework for the precise over-approximation of numerical fixpoint iterators.

Avoiding Robust Misclassifications for Improved Robustness without Accuracy Loss

no code implementations 29 Sep 2021 Yannick Merkli, Pavol Bielik, Petar Tsankov, Martin Vechev

Our results show that our method effectively reduces robust but inaccurate samples by up to 97.28%.

Shared Certificates for Neural Network Verification

1 code implementation 1 Sep 2021 Marc Fischer, Christian Sprecher, Dimitar I. Dimitrov, Gagandeep Singh, Martin Vechev

We perform an extensive experimental evaluation to demonstrate the effectiveness of shared certificates in reducing the verification cost on a range of datasets and attack specifications on image classifiers including the popular patch and geometric perturbations.

Scalable Certified Segmentation via Randomized Smoothing

1 code implementation 1 Jul 2021 Marc Fischer, Maximilian Baader, Martin Vechev

We present a new certification method for image and point cloud segmentation based on randomized smoothing.

Point Cloud Segmentation, Segmentation

Boosting Randomized Smoothing with Variance Reduced Classifiers

1 code implementation ICLR 2022 Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev

Randomized Smoothing (RS) is a promising method for obtaining robustness certificates by evaluating a base model under noise.

Fair Normalizing Flows

1 code implementation ICLR 2022 Mislav Balunović, Anian Ruoss, Martin Vechev

Fair representation learning is an attractive approach that promises fairness of downstream predictors by encoding sensitive data.

Fairness, Representation Learning, +1

Robustness Certification for Point Cloud Models

1 code implementation ICCV 2021 Tobias Lorenz, Anian Ruoss, Mislav Balunović, Gagandeep Singh, Martin Vechev

In this work, we address this challenge and introduce 3DCertify, the first verifier able to certify the robustness of point cloud models.

Automated Discovery of Adaptive Attacks on Adversarial Defenses

1 code implementation NeurIPS 2021 Chengyuan Yao, Pavol Bielik, Petar Tsankov, Martin Vechev

Reliable evaluation of adversarial defenses is a challenging task, currently limited either to experts who manually craft attacks exploiting the defense's inner workings, or to approaches based on an ensemble of fixed attacks, none of which may be effective for the specific defense at hand.

On the Paradox of Certified Training

no code implementations 12 Feb 2021 Nikola Jovanović, Mislav Balunović, Maximilian Baader, Martin Vechev

Certified defenses based on convex relaxations are an established technique for training provably robust models.

Boosting Certified Robustness of Deep Networks via a Compositional Architecture

no code implementations ICLR 2021 Mark Niklas Mueller, Mislav Balunovic, Martin Vechev

In this work, we propose a new architecture which addresses this challenge and enables one to boost the certified robustness of any state-of-the-art deep network, while controlling the overall accuracy loss, without requiring retraining.

Efficient Certification of Spatial Robustness

1 code implementation 19 Sep 2020 Anian Ruoss, Maximilian Baader, Mislav Balunović, Martin Vechev

Recent work has exposed the vulnerability of computer vision models to vector field attacks.

zkay v0.2: Practical Data Privacy for Smart Contracts

1 code implementation 2 Sep 2020 Nick Baumann, Samuel Steffen, Benjamin Bichsel, Petar Tsankov, Martin Vechev

Recent work introduces zkay, a system for specifying and enforcing data privacy in smart contracts.

Programming Languages, Cryptography and Security

Provably Robust Adversarial Examples

no code implementations ICLR 2022 Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev

We introduce the concept of provably robust adversarial examples for deep neural networks - connected input regions constructed from standard adversarial examples which are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations).

Scaling Polyhedral Neural Network Verification on GPUs

no code implementations 20 Jul 2020 Christoph Müller, François Serre, Gagandeep Singh, Markus Püschel, Martin Vechev

GPUPoly scales to large networks: for example, it can prove the robustness of a 1M neuron, 34-layer deep residual network in approximately 34.5 ms. We believe GPUPoly is a promising step towards practical verification of real-world neural networks.

Autonomous Driving, Medical Diagnosis

Scalable Polyhedral Verification of Recurrent Neural Networks

1 code implementation 27 May 2020 Wonryong Ryou, Jiayu Chen, Mislav Balunovic, Gagandeep Singh, Andrei Dan, Martin Vechev

We present a scalable and precise verifier for recurrent neural networks, called Prover, based on two novel ideas: (i) a method to compute a set of polyhedral abstractions for the non-convex and nonlinear recurrent update functions by combining sampling, optimization, and Fermat's theorem, and (ii) a gradient descent based algorithm for abstraction refinement guided by the certification problem that combines multiple abstractions for each neuron.

Guiding Program Synthesis by Learning to Generate Examples

1 code implementation ICLR 2020 Larissa Laich, Pavol Bielik, Martin Vechev

A key challenge of existing program synthesizers is ensuring that the synthesized program generalizes well.

Program Synthesis

Adversarial Training and Provable Defenses: Bridging the Gap

1 code implementation ICLR 2020 Mislav Balunovic, Martin Vechev

We experimentally show that this training method, named convex layerwise adversarial training (COLT), is promising and achieves the best of both worlds -- it produces a state-of-the-art neural network with certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation.

Robustness Certification of Generative Models

no code implementations 30 Apr 2020 Matthew Mirman, Timon Gehr, Martin Vechev

Generative neural networks can be used to specify continuous transformations between images via latent-space interpolation.

Adversarial Attacks on Probabilistic Autoregressive Forecasting Models

1 code implementation ICML 2020 Raphaël Dang-Nhu, Gagandeep Singh, Pavol Bielik, Martin Vechev

We develop an effective generation of adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values.

Decision Making, Time Series, +1

Certified Defense to Image Transformations via Randomized Smoothing

1 code implementation NeurIPS 2020 Marc Fischer, Maximilian Baader, Martin Vechev

We extend randomized smoothing to cover parameterized transformations (e.g., rotations, translations) and certify robustness in the parameter space (e.g., rotation angle).

Provable Adversarial Defense
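
For context, the standard randomized-smoothing certification loop looks as follows; this generic pixel-space sketch follows Cohen et al., whereas the paper's contribution is to smooth over transformation parameters (e.g., the rotation angle) instead. `classify` is an assumed base classifier returning a label.

```python
import numpy as np
from scipy.stats import beta, norm

def certify(classify, x, sigma, n=1000, alpha=0.001):
    # Vote under Gaussian noise, lower-bound the top-class probability
    # with a Clopper-Pearson bound, and convert it into a certified
    # l2 radius; abstain if the bound does not exceed 1/2.
    votes = np.bincount(
        [classify(x + sigma * np.random.randn(*x.shape)) for _ in range(n)]
    )
    top, k = votes.argmax(), votes.max()
    p_lo = beta.ppf(alpha, k, n - k + 1)
    if p_lo <= 0.5:
        return None  # abstain
    return top, sigma * norm.ppf(p_lo)
```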

Learning Certified Individually Fair Representations

1 code implementation NeurIPS 2020 Anian Ruoss, Mislav Balunović, Marc Fischer, Martin Vechev

That is, our method enables the data producer to learn and certify a representation where for a data point all similar individuals are at $\ell_\infty$-distance at most $\epsilon$, thus allowing data consumers to certify individual fairness by proving $\epsilon$-robustness of their classifier.

Fairness, Representation Learning
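
Spelled out, the certificate composes as follows (our paraphrase of the guarantee described above): if the learned map $f$ places every pair of similar individuals $x, x'$ within $\|f(x) - f(x')\|_\infty \le \epsilon$, and a data consumer proves their classifier $h$ to be $\epsilon$-robust at $f(x)$, then $h(f(x)) = h(f(x'))$ for all $x'$ similar to $x$, i.e., $h \circ f$ is individually fair at $x$.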

Adversarial Robustness for Code

1 code implementation ICML 2020 Pavol Bielik, Martin Vechev

Machine learning, and deep learning in particular, has recently been used to successfully address many tasks in the domain of code, such as finding and fixing bugs, code completion, decompilation, type inference, and many others.

Adversarial Robustness, BIG-bench Machine Learning, +1

Learning to Infer User Interface Attributes from Images

no code implementations 31 Dec 2019 Philippe Schlattner, Pavol Bielik, Martin Vechev

We explore a new domain of learning to infer user interface attributes that helps developers automate the process of user interface implementation.

Attribute, Imitation Learning

Beyond the Single Neuron Convex Barrier for Neural Network Certification

1 code implementation NeurIPS 2019 Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev

We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks.

Certifying Geometric Robustness of Neural Networks

1 code implementation NeurIPS 2019 Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev

The use of neural networks in safety-critical computer vision systems calls for their robustness certification against natural geometric transformations (e.g., rotation, scaling).

Online Robustness Training for Deep Reinforcement Learning

no code implementations 3 Nov 2019 Marc Fischer, Matthew Mirman, Steven Stalder, Martin Vechev

In deep reinforcement learning (RL), adversarial attacks can trick an agent into unwanted states and disrupt training.

reinforcement-learning, Reinforcement Learning, +1

Universal Approximation with Certified Networks

1 code implementation ICLR 2020 Maximilian Baader, Matthew Mirman, Martin Vechev

To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.

Statistical Verification of General Perturbations by Gaussian Smoothing

no code implementations 25 Sep 2019 Marc Fischer, Maximilian Baader, Martin Vechev

We present a novel statistical certification method that generalizes prior work based on smoothing to handle richer perturbations.

Verification of Generative-Model-Based Visual Transformations

no code implementations 25 Sep 2019 Matthew Mirman, Timon Gehr, Martin Vechev

Generative networks are promising models for specifying visual transformations.

Robustness Certification with Refinement

no code implementations ICLR 2019 Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

We present a novel approach for verification of neural networks which combines scalable over-approximation methods with precise (mixed integer) linear programming.

A Provable Defense for Deep Residual Networks

1 code implementation 29 Mar 2019 Matthew Mirman, Gagandeep Singh, Martin Vechev

We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.

Adversarial Defense, Novel Concepts

Fast and Effective Robustness Certification

no code implementations NeurIPS 2018 Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.

Distilled Agent DQN for Provable Adversarial Robustness

no code implementations 27 Sep 2018 Matthew Mirman, Marc Fischer, Martin Vechev

As deep neural networks have become the state of the art for solving complex reinforcement learning tasks, susceptibility to perceptual adversarial examples has become a concern.

Adversarial Robustness, reinforcement-learning, +1

Training Neural Machines with Trace-Based Supervision

no code implementations ICML 2018 Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevic, Timon Gehr, Martin Vechev

We investigate the effectiveness of trace-based supervision methods for training existing neural abstract machines.

Differentiable Abstract Interpretation for Provably Robust Neural Networks

1 code implementation ICML 2018 Matthew Mirman, Timon Gehr, Martin Vechev

We introduce a scalable method for training robust neural networks based on abstract interpretation.

Securify: Practical Security Analysis of Smart Contracts

3 code implementations 4 Jun 2018 Petar Tsankov, Andrei Dan, Dana Drachsler Cohen, Arthur Gervais, Florian Buenzli, Martin Vechev

To address this problem, we present Securify, a security analyzer for Ethereum smart contracts that is scalable, fully automated, and able to prove contract behaviors as safe/unsafe with respect to a given property.

Cryptography and Security

Training Neural Machines with Partial Traces

no code implementations ICLR 2018 Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevich, Timon Gehr, Martin Vechev

We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine's interpretable components.

Learning Disjunctions of Predicates

no code implementations 15 Jun 2017 Nader H. Bshouty, Dana Drachsler-Cohen, Martin Vechev, Eran Yahav

Our algorithm asks at most $|F| \cdot OPT(F_\vee)$ membership queries where $OPT(F_\vee)$ is the minimum worst case number of membership queries for learning $F_\vee$.

Program Synthesis

Learning a Static Analyzer from Data

no code implementations 6 Nov 2016 Pavol Bielik, Veselin Raychev, Martin Vechev

In this paper we present a new, automated approach for creating static analyzers: instead of manually providing the various inference rules of the analyzer, the key idea is to learn these rules from a dataset of programs.
