Search Results for author: Mark Niklas Müller

Found 17 papers, 11 papers with code

Overcoming the Paradox of Certified Training with Gaussian Smoothing

no code implementations · 11 Mar 2024 · Stefan Balauca, Mark Niklas Müller, Yuhao Mao, Maximilian Baader, Marc Fischer, Martin Vechev

Training neural networks with high certified accuracy against adversarial examples remains an open problem despite significant efforts.

SPEAR: Exact Gradient Inversion of Batches in Federated Learning

no code implementations · 6 Mar 2024 · Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev

In this work, we propose \emph{the first algorithm reconstructing whole batches with $b > 1$ exactly}.

Federated Learning

Evading Data Contamination Detection for Language Models is (too) Easy

1 code implementation · 5 Feb 2024 · Jasper Dekoninck, Mark Niklas Müller, Maximilian Baader, Marc Fischer, Martin Vechev

Large language models are widespread, with their performance on benchmarks frequently guiding user preferences for one model over another.

Automated Classification of Model Errors on ImageNet

1 code implementation · NeurIPS 2023 · Momchil Peychev, Mark Niklas Müller, Marc Fischer, Martin Vechev

To address this, new label sets and evaluation protocols have been proposed for ImageNet, showing that state-of-the-art models already achieve over 95% accuracy and shifting the focus to investigating why the remaining errors persist.

Classification

Prompt Sketching for Large Language Models

no code implementations · 8 Nov 2023 · Luca Beurer-Kellner, Mark Niklas Müller, Marc Fischer, Martin Vechev

This way, sketching grants users more control over the generation process, e.g., by providing a reasoning framework via intermediate instructions, leading to better overall results.

Arithmetic Reasoning · Benchmarking +2

Expressivity of ReLU-Networks under Convex Relaxations

no code implementations · 7 Nov 2023 · Maximilian Baader, Mark Niklas Müller, Yuhao Mao, Martin Vechev

We show that: (i) more advanced relaxations allow a larger class of univariate functions to be expressed as precisely analyzable ReLU networks, (ii) more precise relaxations can allow exponentially larger solution spaces of ReLU networks encoding the same functions, and (iii) even using the most precise single-neuron relaxations, it is impossible to construct precisely analyzable ReLU networks that express multivariate, convex, monotone CPWL functions.

Understanding Certified Training with Interval Bound Propagation

1 code implementation · 17 Jun 2023 · Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev

We then derive necessary and sufficient conditions on weight matrices for IBP bounds to become exact, and demonstrate that these impose strong regularization, explaining the empirically observed trade-off between robustness and accuracy in certified training.
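For context, the interval bound propagation (IBP) underlying this line of work can be sketched in a few lines: an input box is pushed through each layer by tracking its center and radius. The network weights and input bounds below are made-up illustrative values, not taken from the paper.

```python
# Sketch of Interval Bound Propagation (IBP) through a small ReLU network.
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    center, radius = (hi + lo) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius          # interval radius grows with |W|
    return c - r, c + r

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Tiny 2-layer example: input box around x = [1, -1] with radius 0.1.
lo, hi = np.array([0.9, -1.1]), np.array([1.1, -0.9])
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
lo, hi = ibp_relu(*ibp_linear(lo, hi, W1, b1))
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
lo, hi = ibp_linear(lo, hi, W2, b2)
# [lo, hi] is a sound over-approximation of the network's output range.
```

The paper's observation is about when these bounds are not just sound but exact, which in general requires restrictive (strongly regularizing) weight structure.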

TAPS: Connecting Certified and Adversarial Training

2 code implementations · 8 May 2023 · Yuhao Mao, Mark Niklas Müller, Marc Fischer, Martin Vechev

Training certifiably robust neural networks remains a notoriously hard problem.

Efficient Certified Training and Robustness Verification of Neural ODEs

1 code implementation · 9 Mar 2023 · Mustafa Zeqiri, Mark Niklas Müller, Marc Fischer, Martin Vechev

Neural Ordinary Differential Equations (NODEs) are a novel neural architecture built around initial value problems with learned dynamics that are solved during inference.

Time Series · Time Series Forecasting

First Three Years of the International Verification of Neural Networks Competition (VNN-COMP)

no code implementations · 14 Jan 2023 · Christopher Brix, Mark Niklas Müller, Stanley Bak, Taylor T. Johnson, Changliu Liu

This paper presents a summary and meta-analysis of the first three iterations of the annual International Verification of Neural Networks Competition (VNN-COMP) held in 2020, 2021, and 2022.

Image Classification · Reinforcement Learning +1

The Third International Verification of Neural Networks Competition (VNN-COMP 2022): Summary and Results

1 code implementation · 20 Dec 2022 · Mark Niklas Müller, Christopher Brix, Stanley Bak, Changliu Liu, Taylor T. Johnson

This report summarizes the 3rd International Verification of Neural Networks Competition (VNN-COMP 2022), held as a part of the 5th Workshop on Formal Methods for ML-Enabled Autonomous Systems (FoMLAS), which was collocated with the 34th International Conference on Computer-Aided Verification (CAV).

Certified Training: Small Boxes are All You Need

1 code implementation · 10 Oct 2022 · Mark Niklas Müller, Franziska Eckert, Marc Fischer, Martin Vechev

To obtain deterministic guarantees of adversarial robustness, specialized training methods are used.

Adversarial Robustness

(De-)Randomized Smoothing for Decision Stump Ensembles

1 code implementation · 27 May 2022 · Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev

Whereas most prior work on randomized smoothing focuses on evaluating arbitrary base models approximately under input randomization, the key insight of our work is that decision stump ensembles enable exact yet efficient evaluation via dynamic programming.

Robust and Accurate -- Compositional Architectures for Randomized Smoothing

1 code implementation · 1 Apr 2022 · Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev

Randomized Smoothing (RS) is considered the state-of-the-art approach to obtain certifiably robust models for challenging tasks.

Abstract Interpretation of Fixpoint Iterators with Applications to Neural Networks

1 code implementation · 14 Oct 2021 · Mark Niklas Müller, Marc Fischer, Robin Staab, Martin Vechev

We present a new abstract interpretation framework for the precise over-approximation of numerical fixpoint iterators.

Boosting Randomized Smoothing with Variance Reduced Classifiers

1 code implementation · ICLR 2022 · Miklós Z. Horváth, Mark Niklas Müller, Marc Fischer, Martin Vechev

Randomized Smoothing (RS) is a promising method for obtaining robustness certificates by evaluating a base model under noise.
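The RS certificate referenced across these entries can be sketched compactly: sample the base model under Gaussian noise, take the majority vote, and certify an L2 radius of sigma times the inverse normal CDF of the majority probability. The toy one-dimensional base classifier and all parameters below are illustrative assumptions, not the papers' models.

```python
# Minimal sketch of a Randomized Smoothing prediction and certified radius.
import random
from statistics import NormalDist

def base_classifier(x):
    """Toy base model: class 1 iff x > 0."""
    return 1 if x > 0 else 0

def smoothed_predict(x, sigma=0.5, n=10_000, seed=0):
    """Majority class under Gaussian input noise, with certified
    L2 radius  R = sigma * Phi^{-1}(p_majority)."""
    rng = random.Random(seed)
    votes = sum(base_classifier(x + rng.gauss(0, sigma)) for _ in range(n))
    p = max(votes, n - votes) / n            # empirical majority probability
    top_class = 1 if votes >= n - votes else 0
    radius = sigma * NormalDist().inv_cdf(p) if p < 1 else float("inf")
    return top_class, radius

cls, radius = smoothed_predict(1.0)
# x = 1.0 sits well inside class 1, so the vote is class 1 with positive radius.
```

The variance-reduction idea of this paper then amounts to making the base model's votes more consistent under noise, which raises the majority probability and hence the certified radius.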
