Search Results for author: Markus Püschel

Found 17 papers, 8 papers with code

Distributed Optimization With Local Domains: Applications in MPC and Network Flows

1 code implementation · 8 May 2013 · João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, Markus Püschel

Our contribution is a communication-efficient distributed algorithm that finds a vector $x^\star$ minimizing the sum of all the functions.

Optimization and Control · Information Theory
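
To make the setting concrete, here is a minimal NumPy sketch of consensus-based distributed minimization of a sum of private functions. It is not the paper's algorithm (which exploits local variable domains for communication efficiency); the ring topology, mixing weights, and step size are illustrative assumptions.

```python
# Consensus-based distributed optimization sketch: each node holds a
# private quadratic f_i(x) = 0.5 * ||A_i x - b_i||^2 and the network
# minimizes sum_i f_i by alternating neighbor averaging with local
# gradient steps. Topology and step size are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 3
A = [rng.standard_normal((5, dim)) for _ in range(n_nodes)]
b = [rng.standard_normal(5) for _ in range(n_nodes)]

# Ring topology: doubly stochastic mixing matrix (self + two neighbors).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

x = np.zeros((n_nodes, dim))           # one local copy of x per node
for _ in range(500):
    x = W @ x                          # communicate: average with neighbors
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_nodes)])
    x -= 0.01 * grads                  # local gradient step

# All local copies approach the minimizer of the global sum.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print(np.abs(x - x_star).max())
```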

Fast and Effective Robustness Certification

no code implementations · NeurIPS 2018 · Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.
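
For intuition, the sketch below propagates an L-infinity input region through a toy network with interval arithmetic, the simplest abstract domain. DeepZ itself uses the more precise zonotope domain; the network weights and perturbation budget here are invented for illustration.

```python
# Interval bound propagation: the simplest abstract-interpretation
# domain for robustness certification. DeepZ uses the tighter zonotope
# domain; this only illustrates pushing a region through the layers.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Exact interval image of x -> W @ x + b for x in [lo, hi]."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy 2-layer network; weights are illustrative.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.0, -1.0])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

x = np.array([0.3, 0.7])
eps = 0.1                              # L-infinity perturbation budget
lo, hi = x - eps, x + eps
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # if lo clears the decision threshold, robustness is certified
```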

Robustness Certification with Refinement

no code implementations · ICLR 2019 · Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

We present a novel approach for verification of neural networks which combines scalable over-approximation methods with precise (mixed integer) linear programming.
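
The exact (mixed integer) side of such a combination typically rests on the standard big-M encoding of a ReLU neuron with known pre-activation bounds. A minimal sketch, assuming bounds l <= x <= u with l < 0 < u come from a cheaper over-approximation pass; the variable names are hypothetical:

```python
# Standard big-M MILP encoding of y = ReLU(x) given l <= x <= u:
# a binary variable a selects the active/inactive phase:
#   y >= 0,  y >= x,  y <= x - l * (1 - a),  y <= u * a,  a in {0, 1}
# If a = 1 the constraints force y = x; if a = 0 they force y = 0.
# This is only the per-neuron encoding, not the paper's full system.

def relu_big_m_constraints(x, y, a, l, u):
    """Return the big-M constraints as human-readable strings."""
    return [
        f"{y} >= 0",
        f"{y} >= {x}",
        f"{y} <= {x} - ({l}) * (1 - {a})",
        f"{y} <= ({u}) * {a}",
        f"{a} in {{0, 1}}",
    ]

for c in relu_big_m_constraints("x3", "y3", "a3", -1.5, 2.0):
    print(c)
```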

On Linear Learning with Manycore Processors

1 code implementation · 2 May 2019 · Eliza Wszola, Celestine Mendler-Dünner, Martin Jaggi, Markus Püschel

A new generation of manycore processors is on the rise, offering dozens or more cores on a chip and, in a sense, fusing host processor and accelerator.

Powerset Convolutional Neural Networks

1 code implementation · NeurIPS 2019 · Chris Wendler, Dan Alistarh, Markus Püschel

We present a novel class of convolutional neural networks (CNNs) for set functions, i.e., data indexed by the powerset of a finite set.
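
As a concrete example of a convolution on set functions (not necessarily the shift-based convolution the paper builds its layers on), the subset convolution below operates on set functions stored as length-2^n arrays indexed by subset bitmask:

```python
# Subset (Möbius) convolution of two set functions:
#   (f * g)(S) = sum over T ⊆ S of f(T) * g(S \ T)
# Set functions on a ground set of size n are arrays of length 2^n,
# with bit i of the index indicating membership of element i.
import numpy as np

def subset_convolve(f, g):
    n_masks = len(f)
    h = np.zeros(n_masks)
    for s in range(n_masks):
        t = s
        while True:                  # enumerate all submasks t of s
            h[s] += f[t] * g[s ^ t]
            if t == 0:
                break
            t = (t - 1) & s
    return h

n = 3
f = np.arange(2 ** n, dtype=float)   # toy set functions
g = np.ones(2 ** n)
print(subset_convolve(f, g))         # here h(S) = sum of f over subsets of S
```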

Beyond the Single Neuron Convex Barrier for Neural Network Certification

1 code implementation · NeurIPS 2019 · Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev

We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks.
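
For context, the "single neuron convex barrier" refers to relaxing each ReLU independently, whose tightest form is the classic triangle relaxation sketched below; k-ReLU goes beyond it by relaxing groups of k neurons jointly. Bounds and inputs here are illustrative:

```python
# Single-neuron "triangle" relaxation of y = ReLU(x) with l < 0 < u:
# the convex hull is  y >= 0,  y >= x,  y <= u * (x - l) / (u - l).
# This is the baseline k-ReLU improves upon, not k-ReLU itself.
import numpy as np

def triangle_upper(x, l, u):
    """Upper boundary of the single-neuron convex relaxation."""
    return u * (x - l) / (u - l)

l, u = -1.0, 2.0
for x in np.linspace(l, u, 7):
    print(f"x={x:+.2f}  relu={max(x, 0):.2f}  upper={triangle_upper(x, l, u):.2f}")
```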

Discrete Signal Processing with Set Functions

no code implementations · 28 Jan 2020 · Markus Püschel, Chris Wendler

Set functions are functions (or signals) indexed by the powerset (the set of all subsets) of a finite set N. They are fundamental and ubiquitous in many application domains and have been used, for example, to formally describe or quantify loss functions for semantic image segmentation, the informativeness of sensors in sensor networks, the utility of sets of items in recommender systems, cooperative games in game theory, or bidders in combinatorial auctions.

Image Segmentation · Informativeness · +2
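
A Fourier-like transform in this setting is, for example, the zeta transform F(S) = sum over T ⊆ S of f(T), computable in O(n·2^n) with an FFT-like butterfly and inverted by the Möbius transform. A minimal sketch of this pair (one of several possible transforms, not necessarily the paper's):

```python
# Fast zeta transform and its inverse (Möbius transform) for set
# functions stored as length-2^n arrays indexed by subset bitmask.
# One in-place pass per ground-set element, analogous to an FFT stage.
import numpy as np

def zeta_transform(f):
    f = f.copy()
    n = len(f).bit_length() - 1
    for i in range(n):                  # one butterfly pass per element
        bit = 1 << i
        for s in range(len(f)):
            if s & bit:
                f[s] += f[s ^ bit]
    return f

def mobius_transform(F):
    F = F.copy()
    n = len(F).bit_length() - 1
    for i in range(n):
        bit = 1 << i
        for s in range(len(F)):
            if s & bit:
                F[s] -= F[s ^ bit]
    return F

f = np.arange(8, dtype=float)
assert np.allclose(mobius_transform(zeta_transform(f)), f)
print(zeta_transform(f))
```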

Digraph Signal Processing with Generalized Boundary Conditions

1 code implementation · 19 May 2020 · Bastian Seifert, Markus Püschel

Furthermore, the Fourier transform in this case is now obtained from the Jordan decomposition, which may not be computable at all for large graphs.
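
For a diagonalizable shift matrix the digraph Fourier transform is straightforward, as in the sketch below; the snippet's point is that defective adjacency matrices would instead require the numerically fragile Jordan decomposition. The graph here is illustrative:

```python
# Graph Fourier transform on a digraph via eigendecomposition of the
# adjacency (shift) matrix. For non-diagonalizable matrices one needs
# the Jordan decomposition, which is numerically unstable and
# impractical for large graphs -- the problem the paper addresses.
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.5, 0.0]])    # small digraph; happens to be diagonalizable

eigvals, V = np.linalg.eig(A)
x = np.array([1.0, 2.0, 3.0])      # a signal on the 3 nodes
x_hat = np.linalg.solve(V, x)      # spectral coefficients: V^{-1} x
print(np.allclose(V @ x_hat, x))   # perfect reconstruction
```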

Scaling Polyhedral Neural Network Verification on GPUs

no code implementations · 20 Jul 2020 · Christoph Müller, François Serre, Gagandeep Singh, Markus Püschel, Martin Vechev

GPUPoly scales to large networks: for example, it can prove the robustness of a 1M-neuron, 34-layer deep residual network in approximately 34.5 ms. We believe GPUPoly is a promising step towards practical verification of real-world neural networks.

Autonomous Driving · Medical Diagnosis

Learning Set Functions that are Sparse in Non-Orthogonal Fourier Bases

3 code implementations · 1 Oct 2020 · Chris Wendler, Andisheh Amrollahi, Bastian Seifert, Andreas Krause, Markus Püschel

Many applications of machine learning on discrete domains, such as learning preference functions in recommender systems or auctions, can be reduced to estimating a set function that is sparse in the Fourier domain.

Recommendation Systems
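
To illustrate what "sparse in a non-orthogonal Fourier basis" means, the sketch below builds a set function from three coefficients in the zeta/Möbius basis (one example of such a basis; the paper treats several): the function is dense as a table of 2^n values yet sparse in the basis, which is what enables learning from few queries.

```python
# A set function that is 3-sparse in the (non-orthogonal) zeta/Möbius
# basis but dense as a table of values. Coefficients are illustrative.
import numpy as np

def zeta(w):                          # F(S) = sum over T ⊆ S of w(T)
    w = w.copy()
    n = len(w).bit_length() - 1
    for i in range(n):
        for s in range(len(w)):
            if s & (1 << i):
                w[s] += w[s ^ (1 << i)]
    return w

n = 4
w = np.zeros(2 ** n)
w[[1, 6, 11]] = [2.0, -1.0, 0.5]      # 3 nonzero Fourier coefficients
f = zeta(w)                           # the observed set function
print(np.count_nonzero(f), "of", 2 ** n, "values nonzero, yet only 3 basis coefficients")
```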

Faster Training of Word Embeddings

no code implementations · 1 Jan 2021 · Eliza Wszola, Martin Jaggi, Markus Püschel

Word embeddings have gained increasing popularity in recent years due to the Word2vec library and its extension fastText, which uses subword information.

Word Embeddings

Causal Fourier Analysis on Directed Acyclic Graphs and Posets

no code implementations · 16 Sep 2022 · Bastian Seifert, Chris Wendler, Markus Püschel

Specifically, we model the spread of an infection on such a DAG obtained from real-world contact tracing data and learn the infection signal from samples assuming sparsity in the Fourier domain.
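
A toy version of that generative picture, under the assumption that a node's signal aggregates root causes at all of its ancestors (computed here via the transitive closure); the DAG and values are invented for illustration:

```python
# Causal signal model on a DAG: each node observes the sum of "root
# causes" at its ancestors (reflexive transitive closure). Learning the
# infection signal then means finding a sparse cause vector.
import numpy as np

A = np.array([[0, 1, 1, 0],          # A[i, j] = 1 iff edge i -> j
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
n = len(A)

# Reflexive transitive closure as a 0/1 matrix via squaring.
R = np.eye(n, dtype=int) + A
for _ in range(n):
    R = np.minimum(R + R @ R, 1)

causes = np.array([1.0, 0.0, 0.0, 0.0])  # sparse root causes: node 0 infected
signal = R.T @ causes                     # node j sums causes of its ancestors
print(signal)                             # infection reaches all descendants
```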

Learning DAGs from Data with Few Root Causes

1 code implementation · NeurIPS 2023 · Panagiotis Misiakos, Chris Wendler, Markus Püschel

We prove identifiability in this new setting and show that the true DAG is the global minimizer of the $L^0$-norm of the vector of root causes.
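
A minimal sketch of the underlying data model, assuming a linear SEM: each sample is a sparse root-cause vector c pushed through the weighted DAG, x = (I - A^T)^{-1} c, and with the true DAG the causes are recovered exactly and remain sparse. Weights and sparsity level are illustrative:

```python
# Linear SEM with few root causes: x_j = sum_i A[i, j] x_i + c_j,
# i.e. x = (I - A^T)^{-1} c with sparse c. Given the true DAG, the
# causes are exactly recoverable and sparse.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, 0.8, 0.0],       # upper-triangular: a DAG on 3 nodes
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
n = len(A)

def sample(p_nonzero=0.3):
    c = rng.standard_normal(n) * (rng.random(n) < p_nonzero)  # sparse causes
    return np.linalg.solve(np.eye(n) - A.T, c), c

x, c = sample()
c_rec = (np.eye(n) - A.T) @ x        # invert the SEM with the true DAG
print(np.count_nonzero(c), np.allclose(c_rec, c))
```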

QIGen: Generating Efficient Kernels for Quantized Inference on Large Language Models

1 code implementation · 7 Jul 2023 · Tommaso Pegolotti, Elias Frantar, Dan Alistarh, Markus Püschel

We present ongoing work on a new automatic code generation approach for supporting quantized generative inference on LLMs such as LLaMA or OPT on off-the-shelf CPUs.

Code Generation
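
The core pattern such generated kernels implement is a matrix-vector product over group-quantized 4-bit weights dequantized on the fly (per-group scale and offset). The NumPy sketch below shows the arithmetic only; QIGen's point is to emit fast CPU code for it. Group size and shapes are illustrative:

```python
# 4-bit group quantization and dequantize-on-the-fly matvec, the
# arithmetic pattern a QIGen-style generated kernel would fuse and
# vectorize. Plain NumPy for clarity; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
rows, cols, group = 4, 16, 8

W = rng.standard_normal((rows, cols)).astype(np.float32)

# Quantize each group of `group` consecutive weights per row to 4 bits.
Wg = W.reshape(rows, cols // group, group)
lo, hi = Wg.min(axis=2, keepdims=True), Wg.max(axis=2, keepdims=True)
scale = (hi - lo) / 15.0
q = np.round((Wg - lo) / scale).astype(np.uint8)   # codes in 0..15

x = rng.standard_normal(cols).astype(np.float32)

# Dequantize and multiply.
W_deq = (q * scale + lo).reshape(rows, cols)
y = W_deq @ x
print(np.abs(y - W @ x).max())                     # small quantization error
```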
