no code implementations • 6 Apr 2024 • Adarsh Jamadandi, Celia Rubio-Madrigal, Rebekka Burkholz
Message Passing Graph Neural Networks are known to suffer from two problems that are sometimes believed to be diametrically opposed: over-squashing and over-smoothing.
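Since this entry is only an abstract snippet, a generic illustration may help: Dirichlet energy is a standard proxy for over-smoothing (this is a common diagnostic, not necessarily the paper's own measure, and `dirichlet_energy` is a hypothetical helper name), and it decays toward zero as repeated mean aggregation makes node features uniform.

```python
import numpy as np

def dirichlet_energy(X, edges):
    """Sum of squared feature differences across graph edges."""
    return sum(float(np.sum((X[u] - X[v]) ** 2)) for u, v in edges)

# Toy graph: a triangle with scalar node features.
edges = [(0, 1), (1, 2), (0, 2)]
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)   # row-normalized mean-aggregation operator

X = np.array([[1.0], [0.0], [-1.0]])
e0 = dirichlet_energy(X, edges)
for _ in range(10):
    X = P @ X                          # one round of mean message passing
e10 = dirichlet_energy(X, edges)       # far smaller than e0: over-smoothing
```

Over-squashing is the opposite failure mode: information from exponentially many distant nodes is compressed through narrow bottlenecks, which is why remedies for one problem can aggravate the other.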
no code implementations • 5 Mar 2024 • Intekhab Hossain, Jonas Fischer, Rebekka Burkholz, John Quackenbush
Neural structure learning is of paramount importance for scientific discovery and interpretability.
no code implementations • 29 Feb 2024 • Advait Gadhikar, Rebekka Burkholz
Learning Rate Rewinding (LRR) has been established as a strong variant of Iterative Magnitude Pruning (IMP) to find lottery tickets in deep overparameterized neural networks.
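As a hedged sketch of what one pruning round in such methods looks like (the training loop is omitted, and `magnitude_mask` is a hypothetical helper, not the paper's code): IMP and LRR both remove the smallest-magnitude weights each round; they differ in what is rewound before retraining — the weights (IMP) versus only the learning-rate schedule (LRR).

```python
import numpy as np

def magnitude_mask(w, sparsity):
    """Boolean mask keeping the largest-magnitude fraction (1 - sparsity) of weights."""
    k = int(round(sparsity * w.size))          # number of weights to remove
    if k == 0:
        return np.ones_like(w, dtype=bool)
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    return np.abs(w) > thresh

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
mask = magnitude_mask(w, sparsity=0.75)        # prune the 75% smallest weights
w_pruned = w * mask                            # surviving weights keep their values
```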
no code implementations • 31 Jan 2023 • Jonas Fischer, Rebekka Burkholz, Jilles Vreeken
We show, however, that these methods fail to reconstruct local properties, such as relative differences in densities.
1 code implementation • 5 Oct 2022 • Advait Gadhikar, Sohom Mukherjee, Rebekka Burkholz
Random masks define surprisingly effective sparse neural network models, as has been shown empirically.
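A minimal sketch of the object in question (an assumed generic construction, not the paper's experimental setup): a random sparse network is a dense layer whose weights are multiplied by a fixed Bernoulli mask drawn at initialization and kept frozen throughout training.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_masked_layer(fan_in, fan_out, density):
    """Dense layer with a frozen random Bernoulli mask (hypothetical helper)."""
    w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
    mask = rng.random((fan_out, fan_in)) < density   # keep each weight w.p. `density`
    return w * mask, mask

w, mask = random_masked_layer(64, 32, density=0.1)
# Gradients would be masked the same way, so pruned entries stay exactly zero.
```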
no code implementations • 5 Oct 2022 • Advait Gadhikar, Rebekka Burkholz
We propose a random initialization scheme, RISOTTO, that achieves perfect dynamical isometry for residual networks with ReLU activation functions even for finite depth and width.
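RISOTTO's exact construction is in the paper; as a generic illustration of perfect dynamical isometry (using an assumed zero-branch trick in the spirit of such schemes, not the paper's recipe), initializing each residual branch to output zero makes every block the identity map at initialization, so the input-output Jacobian has all singular values equal to one.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

def block(x, W1, W2):
    return x + W2 @ np.maximum(W1 @ x, 0.0)   # residual ReLU block

W1 = rng.normal(size=(d, d))
W2 = np.zeros((d, d))                          # branch outputs zero at init

# Finite-difference Jacobian of the block at a random input.
x = rng.normal(size=d)
eps = 1e-6
J = np.stack([(block(x + eps * e, W1, W2) - block(x, W1, W2)) / eps
              for e in np.eye(d)], axis=1)
sv = np.linalg.svd(J, compute_uv=False)        # all singular values are ~1
```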
no code implementations • 4 May 2022 • Rebekka Burkholz
The Lottery Ticket Hypothesis continues to have a profound practical impact on the quest for small-scale deep neural networks that solve modern deep learning tasks with competitive performance.
1 code implementation • 4 May 2022 • Rebekka Burkholz
For networks with ReLU activation functions, it has been proven that a target network of depth $L$ can be approximated by a subnetwork of a randomly initialized neural network that has twice the target's depth, $2L$, and is wider by a logarithmic factor.
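In symbols, the existence statement has roughly the following shape (the width factor and probability bound are hedged paraphrases; the exact constants and norms are in the paper):

```latex
\exists\, S \subseteq \text{weights of } f_0 :\quad
\sup_{\|x\| \le 1} \bigl\| f(x) - f_S(x) \bigr\| \le \varepsilon ,
```

where $f$ is the target of depth $L$, the randomly initialized network $f_0$ has depth $2L$ and width exceeding the target's by a logarithmic factor in the network size and in $1/\varepsilon$, and the guarantee holds with high probability over the initialization.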
1 code implementation • ICLR 2022 • Jonas Fischer, Rebekka Burkholz
The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that aim to reduce the computational costs associated with deep learning during training and model deployment.
1 code implementation • ICLR 2022 • Rebekka Burkholz, Nilanjana Laha, Rajarshi Mukherjee, Alkis Gotovos
The lottery ticket hypothesis conjectures the existence of sparse subnetworks of large randomly initialized deep neural networks that can be successfully trained in isolation.
no code implementations • 21 Oct 2021 • Jonas Fischer, Advait Gadhikar, Rebekka Burkholz
The strong lottery ticket hypothesis holds the promise that pruning randomly initialized deep neural networks could offer a computationally efficient alternative to deep learning with stochastic gradient descent.
no code implementations • NeurIPS 2021 • Alkis Gotovos, Rebekka Burkholz, John Quackenbush, Stefanie Jegelka
Modeling the time evolution of discrete sets of items (e.g., genetic mutations) is a fundamental problem in many biomedical applications.
1 code implementation • 4 Apr 2021 • Katherine H. Shutta, Deborah Weighill, Rebekka Burkholz, Marouen Ben Guebila, Dawn L. DeMeo, Helena U. Zacharias, John Quackenbush, Michael Altenbuchinger
The increasing quantity of multi-omics data, such as methylomic and transcriptomic profiles, collected on the same specimen or even the same cell provides a unique opportunity to explore the complex interactions that define cell phenotype and govern cellular responses to perturbations.
no code implementations • 9 Sep 2019 • Rebekka Burkholz, John Quackenbush
Cascade models are central to understanding, predicting, and controlling epidemic spreading and information propagation.
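A generic independent-cascade simulation illustrates the family of models this entry refers to (this is the textbook independent cascade, not necessarily the specific model analyzed in the paper; `independent_cascade` is a hypothetical helper): each newly activated node gets one chance to activate each neighbor with probability $p$.

```python
import random

def independent_cascade(adj, seeds, p, rng):
    """Simulate one independent cascade; returns the set of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)      # activation succeeds with probability p
                    nxt.append(v)
        frontier = nxt                 # only fresh activations spread next round
    return active

# Toy directed graph: 0 -> {1, 2}, 1 -> 3, 2 -> 3, 3 -> 4.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
reached = independent_cascade(adj, seeds=[0], p=0.5, rng=random.Random(7))
```

Averaging the reached-set size over many runs estimates the expected cascade size, the quantity such models are typically used to predict or control.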
1 code implementation • NeurIPS 2019 • Rebekka Burkholz, Alina Dubatovka
Deep learning relies on good initialization schemes and hyperparameter choices prior to training a neural network.