no code implementations • 6 Jan 2025 • Panagiotis Misiakos, Markus Püschel
We introduce DAG-TFRC, a novel method for learning directed acyclic graphs (DAGs) from time series with few root causes.
no code implementations • 29 Aug 2023 • Mathieu Chevalley, Jacob Sackett-Sanders, Yusuf Roohani, Pascal Notin, Artemy Bakulin, Dariusz Brzezinski, Kaiwen Deng, Yuanfang Guan, Justin Hong, Michael Ibrahim, Wojciech Kotlowski, Marcin Kowiel, Panagiotis Misiakos, Achille Nazaret, Markus Püschel, Chris Wendler, Arash Mehrjou, Patrick Schwab
In drug discovery, mapping interactions between genes within cellular systems is a crucial early step.
1 code implementation • 7 Jul 2023 • Tommaso Pegolotti, Elias Frantar, Dan Alistarh, Markus Püschel
We present ongoing work on a new automatic code generation approach for supporting quantized generative inference on LLMs such as LLaMA or OPT on off-the-shelf CPUs.
1 code implementation • NeurIPS 2023 • Panagiotis Misiakos, Chris Wendler, Markus Püschel
We prove identifiability in this new setting and show that the true DAG is the global minimizer of the $L^0$-norm of the vector of root causes.
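The abstract's objective can be illustrated in a few lines. This is a minimal sketch under an assumption not spelled out above: a linear SEM of the form x = Aᵀx + c, where A is the weighted DAG adjacency (A[i][j] ≠ 0 meaning edge i → j) and c is the vector of root causes whose nonzeros the $L^0$-norm counts. The helper names are hypothetical, not from the paper.

```python
# Sketch only: recover root causes c = x - A^T x under an assumed linear
# SEM x = A^T x + c, and count their nonzeros (the L0 "norm").

def root_causes(A, x):
    """Recover c = x - A^T x for a linear SEM with adjacency A."""
    n = len(x)
    return [x[j] - sum(A[i][j] * x[i] for i in range(n)) for j in range(n)]

def l0(c, tol=1e-9):
    """Number of (numerically) nonzero entries of c."""
    return sum(abs(v) > tol for v in c)

# Toy DAG 0 -> 1 -> 2 with a single root cause at node 0:
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
c_true = [1.0, 0.0, 0.0]

# Forward pass x_j = sum_i A[i][j] * x_i + c_j in topological order 0,1,2:
x = [0.0, 0.0, 0.0]
for j in range(3):
    x[j] = sum(A[i][j] * x[i] for i in range(3)) + c_true[j]

print(root_causes(A, x))       # -> [1.0, 0.0, 0.0]
print(l0(root_causes(A, x)))   # -> 1
```

With the true DAG, the recovered root-cause vector is maximally sparse; a wrong DAG would generally yield more nonzeros, which is what the $L^0$ objective exploits.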
no code implementations • 16 Sep 2022 • Bastian Seifert, Chris Wendler, Markus Püschel
Specifically, we model the spread of an infection on such a DAG obtained from real-world contact tracing data and learn the infection signal from samples assuming sparsity in the Fourier domain.
no code implementations • 5 Mar 2021 • Mark Niklas Müller, Gleb Makarchuk, Gagandeep Singh, Markus Püschel, Martin Vechev
Formal verification of neural networks is critical for their safe adoption in real-world applications.
no code implementations • 1 Jan 2021 • Eliza Wszola, Martin Jaggi, Markus Püschel
Word embeddings have gained increasing popularity in recent years due to the Word2vec library and its extension fastText, which uses subword information.
3 code implementations • 1 Oct 2020 • Chris Wendler, Andisheh Amrollahi, Bastian Seifert, Andreas Krause, Markus Püschel
Many applications of machine learning on discrete domains, such as learning preference functions in recommender systems or auctions, can be reduced to estimating a set function that is sparse in the Fourier domain.
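To make "sparse in the Fourier domain" concrete: a set function on n elements is a length-2ⁿ signal indexed by subsets, and one classical Fourier basis for such signals is the Walsh-Hadamard transform (the paper considers several set-function Fourier bases; WHT is used here only as a familiar example). A minimal sketch:

```python
# A set function on N = {0,...,n-1} is a length-2^n array indexed by
# subset bitmask. The fast Walsh-Hadamard transform runs in O(n * 2^n).

def wht(f):
    """Fast Walsh-Hadamard transform of a length-2^n list (unnormalized)."""
    f = list(f)
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            for j in range(i, i + h):
                f[j], f[j + h] = f[j] + f[j + h], f[j] - f[j + h]
        h *= 2
    return f

# Example: the cardinality function f(S) = |S| on a 3-element ground set
# is Fourier-sparse: only n + 1 = 4 of its 8 WHT coefficients are nonzero.
n = 3
f = [bin(S).count("1") for S in range(2 ** n)]
spectrum = wht(f)
print(spectrum)                        # -> [12, -4, -4, 0, -4, 0, 0, 0]
print(sum(v != 0 for v in spectrum))   # -> 4
```

Sparsity in such a basis is what lets the estimation avoid querying all 2ⁿ subsets.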
no code implementations • 20 Jul 2020 • Christoph Müller, François Serre, Gagandeep Singh, Markus Püschel, Martin Vechev
GPUPoly scales to large networks: for example, it can prove the robustness of a 1M neuron, 34-layer deep residual network in approximately 34.5 ms. We believe GPUPoly is a promising step towards practical verification of real-world neural networks.
1 code implementation • 19 May 2020 • Bastian Seifert, Markus Püschel
Furthermore, the Fourier transform in this case is now obtained from the Jordan decomposition, which may not be computable at all for large graphs.
no code implementations • 28 Jan 2020 • Markus Püschel, Chris Wendler
Set functions are functions (or signals) indexed by the powerset (set of all subsets) of a finite set N. They are fundamental and ubiquitous in many application domains and have been used, for example, to formally describe or quantify loss functions for semantic image segmentation, the informativeness of sensors in sensor networks, the utility of sets of items in recommender systems, cooperative games in game theory, or bids in combinatorial auctions.
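The powerset indexing above has a direct computational representation: encode each subset of N = {0, …, n−1} as an n-bit mask, so the whole signal fits in a length-2ⁿ array. A minimal sketch using a toy sensor-coverage function (the sensor data below is made up for illustration, echoing the sensor-network example in the abstract):

```python
# A set function maps each subset of N = {0,...,n-1} to a value; with
# subsets encoded as bitmasks, the whole signal is a length-2^n list.

covers = [{0, 1}, {1, 2}, {3}]   # toy data: items covered by sensor i
n = len(covers)

def coverage(mask):
    """Number of distinct items covered by the sensor subset `mask`."""
    covered = set()
    for i in range(n):
        if mask & (1 << i):
            covered |= covers[i]
    return len(covered)

# The full set-function signal, indexed by bitmask 0 .. 2^n - 1:
signal = [coverage(S) for S in range(2 ** n)]
print(signal)   # -> [0, 2, 2, 3, 1, 3, 3, 4]
```

Bitmask S = 5 (binary 101), for instance, selects sensors 0 and 2, covering items {0, 1, 3}.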
1 code implementation • NeurIPS 2019 • Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, Martin Vechev
We propose a new parametric framework, called k-ReLU, for computing precise and scalable convex relaxations used to certify neural networks.
1 code implementation • NeurIPS 2019 • Chris Wendler, Dan Alistarh, Markus Püschel
We present a novel class of convolutional neural networks (CNNs) for set functions, i.e., data indexed with the powerset of a finite set.
1 code implementation • 2 May 2019 • Eliza Wszola, Celestine Mendler-Dünner, Martin Jaggi, Markus Püschel
A new generation of manycore processors is on the rise, offering dozens of cores or more on a chip and, in a sense, fusing host processor and accelerator.
no code implementations • ICLR 2019 • Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev
We present a novel approach for verification of neural networks which combines scalable over-approximation methods with precise (mixed integer) linear programming.
no code implementations • NeurIPS 2018 • Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev
We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.
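DeepZ certifies robustness with a zonotope abstract domain; as a much simpler illustration of the same idea, the sketch below propagates the coarser interval domain through an affine layer followed by ReLU (this is not DeepZ's domain, and all weights and bounds are made up for the example):

```python
# Interval bound propagation: given elementwise bounds lo <= x <= hi,
# soundly bound the outputs of an affine layer W x + b, then ReLU.

def affine_bounds(W, b, lo, hi):
    """Interval bounds of W x + b: pick lo/hi per weight sign."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Input perturbation box [0.9, 1.1] x [-0.1, 0.1], one hidden layer:
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, 0.0]
lo, hi = affine_bounds(W, b, [0.9, -0.1], [1.1, 0.1])
lo, hi = relu_bounds(lo, hi)
print(lo, hi)   # a sound box enclosing all reachable layer outputs
```

Zonotopes refine this by tracking linear correlations between neurons, which intervals lose; that is what makes DeepZ more precise at similar cost.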
no code implementations • 14 Feb 2018 • Nezihe Merve Gürel, Kaan Kara, Alen Stojanov, Tyler Smith, Thomas Lemmin, Dan Alistarh, Markus Püschel, Ce Zhang
Modern scientific instruments produce vast amounts of data, which can overwhelm the processing ability of computer systems.
1 code implementation • 8 May 2013 • João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, Markus Püschel
Our contribution is a communication-efficient distributed algorithm that finds a vector $x^\star$ minimizing the sum of all the functions.
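For intuition about the problem setup (not the paper's algorithm): when each node i privately holds f_i(x) = (x − a_i)², the minimizer of the sum is the average of the a_i, and even plain gossip averaging over a communication graph solves it distributively. A toy sketch on a ring of four nodes (topology and data are made up for the example):

```python
# Distributed minimization of sum_i (x - a_i)^2 by gossip averaging on a
# ring: each node repeatedly averages with its two neighbors, and every
# local estimate converges to mean(a), the global minimizer.

a = [1.0, 3.0, 5.0, 7.0]   # node i privately holds a[i]
n = len(a)
x = list(a)                # local estimates, initialized to local data

for _ in range(300):
    # each node communicates only with its ring neighbors
    x = [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3 for i in range(n)]

print([round(v, 3) for v in x])   # -> [4.0, 4.0, 4.0, 4.0]
```

The paper's contribution is to solve the general case (arbitrary private functions f_i) with provably fewer communications per iteration than schemes like this one.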
Optimization and Control • Information Theory