Search Results for author: Rebekka Burkholz

Found 15 papers, 6 papers with code

Spectral Graph Pruning Against Over-Squashing and Over-Smoothing

no code implementations • 6 Apr 2024 • Adarsh Jamadandi, Celia Rubio-Madrigal, Rebekka Burkholz

Message Passing Graph Neural Networks are known to suffer from two problems that are sometimes believed to be diametrically opposed: over-squashing and over-smoothing.
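
The spectral gap of the graph Laplacian is the quantity at the heart of this trade-off: a small gap signals a bottleneck (over-squashing), while enlarging it too aggressively can accelerate over-smoothing. A minimal sketch of how one might measure it (illustration only, not the authors' pruning method; the `networkx` barbell graph and the edge choice are assumptions):

```python
# Illustrative sketch: the spectral gap (second-smallest eigenvalue of the
# normalized graph Laplacian) is a standard proxy for over-squashing.
import networkx as nx
import numpy as np

def spectral_gap(G: nx.Graph) -> float:
    """Second-smallest eigenvalue of the normalized Laplacian."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    return np.sort(np.linalg.eigvalsh(L))[1]  # lambda_1 = 0 if G is connected

G = nx.barbell_graph(10, 2)  # two dense cliques joined by a bottleneck path
print(f"spectral gap before rewiring: {spectral_gap(G):.4f}")

G.add_edge(0, 15)  # a shortcut across the bottleneck widens the gap
print(f"spectral gap after rewiring:  {spectral_gap(G):.4f}")
```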

Masks, Signs, And Learning Rate Rewinding

no code implementations • 29 Feb 2024 • Advait Gadhikar, Rebekka Burkholz

Learning Rate Rewinding (LRR) has been established as a strong variant of Iterative Magnitude Pruning (IMP) to find lottery tickets in deep overparameterized neural networks.
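
The difference between the two schemes is small but consequential: IMP rewinds the surviving weights to their initial values after each pruning round, whereas LRR keeps the trained weights and only restarts the learning-rate schedule. A toy sketch contrasting the two (hypothetical model, data, and hyperparameters; not the paper's experimental setup):

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
init_state = copy.deepcopy(model.state_dict())
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

def train_one_round(epochs=30, lr=0.1):
    # Fresh optimizer and schedule each round: the LR "rewinds" to lr.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=epochs)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()
        sched.step()
        with torch.no_grad():  # keep pruned weights pinned at zero
            for n, p in model.named_parameters():
                if n in masks:
                    p.mul_(masks[n])

REWIND_WEIGHTS = False  # True reproduces IMP; False is LRR
for _ in range(3):
    train_one_round()
    with torch.no_grad():
        for n, p in model.named_parameters():  # drop 20% of surviving weights
            if n in masks:
                thresh = p[masks[n].bool()].abs().quantile(0.2)
                masks[n] = (masks[n].bool() & (p.abs() > thresh)).float()
        if REWIND_WEIGHTS:  # IMP: rewind surviving weights to initialization
            model.load_state_dict(init_state)
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])
```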

Preserving local densities in low-dimensional embeddings

no code implementations • 31 Jan 2023 • Jonas Fischer, Rebekka Burkholz, Jilles Vreeken

We show, however, that these methods fail to reconstruct local properties, such as relative differences in densities.

Why Random Pruning Is All We Need to Start Sparse

1 code implementation • 5 Oct 2022 • Advait Gadhikar, Sohom Mukherjee, Rebekka Burkholz

Random masks define surprisingly effective sparse neural network models, as has been shown empirically.

Task: Image Classification
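
For a concrete picture, a random sparse start is nothing more than a Bernoulli mask applied layerwise before training. A minimal sketch (the architecture and sparsity level are arbitrary choices for illustration):

```python
# Hypothetical sketch: a layerwise-uniform random mask at a target sparsity,
# the kind of random sparse starting point the paper studies.
import torch
import torch.nn as nn

def random_mask(module: nn.Module, sparsity: float = 0.8) -> dict:
    """Bernoulli mask keeping a (1 - sparsity) fraction of each weight matrix."""
    return {
        name: (torch.rand_like(p) > sparsity).float()
        for name, p in module.named_parameters()
        if p.dim() > 1  # leave biases dense
    }

net = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
masks = random_mask(net, sparsity=0.8)
with torch.no_grad():
    for name, p in net.named_parameters():
        if name in masks:
            p.mul_(masks[name])  # train from this sparse random start
frac = sum(m.sum() for m in masks.values()) / sum(m.numel() for m in masks.values())
print(f"kept {frac.item():.1%} of prunable weights")
```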

Dynamical Isometry for Residual Networks

no code implementations • 5 Oct 2022 • Advait Gadhikar, Rebekka Burkholz

We propose a random initialization scheme, RISOTTO, that achieves perfect dynamical isometry for residual networks with ReLU activation functions even for finite depth and width.
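
The scheme itself is defined in the paper; as background, the generic route to exact dynamical isometry in residual networks is to make every block an identity map at initialization, e.g. by zeroing the last layer of each residual branch (a Fixup-style trick, shown below as an illustration rather than the actual RISOTTO construction):

```python
# Not the actual RISOTTO scheme: a block that is an exact identity map at
# init has an input-output Jacobian equal to the identity, so all singular
# values are 1 -- perfect dynamical isometry at initialization.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        nn.init.zeros_(self.fc2.weight)  # block output == input at init
        nn.init.zeros_(self.fc2.bias)

    def forward(self, x):
        return x + self.fc2(torch.relu(self.fc1(x)))

net = nn.Sequential(*[ResBlock(32) for _ in range(20)])
x = torch.randn(1, 32)
J = torch.autograd.functional.jacobian(net, x).reshape(32, 32)
print(torch.linalg.svdvals(J))  # all ones at initialization
```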

Convolutional and Residual Networks Provably Contain Lottery Tickets

no code implementations • 4 May 2022 • Rebekka Burkholz

The Lottery Ticket Hypothesis continues to have a profound practical impact on the quest for small-scale deep neural networks that solve modern deep learning tasks with competitive performance.

Most Activation Functions Can Win the Lottery Without Excessive Depth

1 code implementation • 4 May 2022 • Rebekka Burkholz

For networks with ReLU activation functions, it has been proven that a target network with depth $L$ can be approximated by the subnetwork of a randomly initialized neural network that has double the target's depth $2L$ and is wider by a logarithmic factor.
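
Schematically, the statement has the following shape (the notation $M$, $\delta$, and the exact argument of the logarithm are placeholders; see the paper for the precise constants and probability bounds):

```latex
% Schematic restatement under hypothetical notation:
% f is the depth-L target, f_0 the randomly initialized depth-2L network,
% and M \odot f_0 the subnetwork obtained by masking weights of f_0.
\[
  \exists\, \text{mask } M:\quad
  \sup_{x \in \mathcal{X}} \bigl\| f(x) - (M \odot f_0)(x) \bigr\| \le \epsilon,
  \qquad
  \operatorname{width}(f_0) = \mathcal{O}\bigl(\operatorname{width}(f) \cdot \log(\,\cdot\,)\bigr),
\]
% with the guarantee holding with probability at least 1 - \delta over the
% random initialization of f_0.
```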

Plant 'n' Seek: Can You Find the Winning Ticket?

1 code implementation • ICLR 2022 • Jonas Fischer, Rebekka Burkholz

The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that aim to reduce the computational costs associated with deep learning during training and model deployment.

On the Existence of Universal Lottery Tickets

1 code implementation • ICLR 2022 • Rebekka Burkholz, Nilanjana Laha, Rajarshi Mukherjee, Alkis Gotovos

The lottery ticket hypothesis conjectures the existence of sparse subnetworks of large randomly initialized deep neural networks that can be successfully trained in isolation.

Lottery Tickets with Nonzero Biases

no code implementations • 21 Oct 2021 • Jonas Fischer, Advait Gadhikar, Rebekka Burkholz

The strong lottery ticket hypothesis holds the promise that pruning randomly initialized deep neural networks could offer a computationally efficient alternative to deep learning with stochastic gradient descent.
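
One practical way such subnetworks are found is score-based mask search in the spirit of edge-popup (Ramanujan et al., 2020): weights stay frozen at their random initialization and only per-weight scores are trained, with the forward pass keeping the top-scored weights. A sketch of that idea (an illustration of the strong LTH, not this paper's theoretical contribution):

```python
# Score-based subnetwork search: frozen random weights, trainable scores,
# top-k mask in the forward pass, straight-through gradients for the scores.
import torch
import torch.nn as nn

class TopK(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores, k):
        flat = scores.flatten()
        mask = torch.zeros_like(flat)
        mask[flat.topk(k).indices] = 1.0
        return mask.view_as(scores)

    @staticmethod
    def backward(ctx, grad):  # straight-through estimator
        return grad, None

class MaskedLinear(nn.Module):
    def __init__(self, d_in, d_out, keep=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in**0.5,
                                   requires_grad=False)  # frozen random init
        self.scores = nn.Parameter(torch.randn(d_out, d_in) * 0.01)
        self.k = int(keep * d_in * d_out)

    def forward(self, x):
        mask = TopK.apply(self.scores, self.k)
        return x @ (self.weight * mask).t()
```

Training then updates only the `scores` tensors with a standard optimizer; the random `weight` tensors are never changed, which is exactly the "pruning instead of training" reading of the strong lottery ticket hypothesis.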

Scaling up Continuous-Time Markov Chains Helps Resolve Underspecification

no code implementations • NeurIPS 2021 • Alkis Gotovos, Rebekka Burkholz, John Quackenbush, Stefanie Jegelka

Modeling the time evolution of discrete sets of items (e.g., genetic mutations) is a fundamental problem in many biomedical applications.

DRAGON: Determining Regulatory Associations using Graphical models on multi-Omic Networks

1 code implementation • 4 Apr 2021 • Katherine H. Shutta, Deborah Weighill, Rebekka Burkholz, Marouen Ben Guebila, Dawn L. DeMeo, Helena U. Zacharias, John Quackenbush, Michael Altenbuchinger

The increasing quantity of multi-omics data, such as methylomic and transcriptomic profiles collected on the same specimen or even the same cell, provides a unique opportunity to explore the complex interactions that define cell phenotype and govern cellular responses to perturbations.
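
As context, a Gaussian graphical model infers such associations from the partial correlations implied by an inverse covariance estimate; DRAGON's contribution is layer-specific regularization for the two omics layers. The common single-shrinkage baseline looks roughly like this (synthetic data; `LedoitWolf` shrinkage and the edge threshold are arbitrary stand-ins):

```python
# Generic partial-correlation network from a shrinkage covariance estimate.
# This is only the single-shrinkage baseline, not DRAGON's estimator.
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))  # 100 specimens x 20 features
# (e.g., columns 0-9 methylomic, columns 10-19 transcriptomic)

precision = np.linalg.inv(LedoitWolf().fit(X).covariance_)
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)  # rho_ij = -p_ij / sqrt(p_ii p_jj)
np.fill_diagonal(partial_corr, 1.0)

# edge (i, j) in the network <=> |partial correlation| above a threshold
print((np.abs(partial_corr) > 0.2).sum())
```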

Cascade Size Distributions: Why They Matter and How to Compute Them Efficiently

no code implementations • 9 Sep 2019 • Rebekka Burkholz, John Quackenbush

Cascade models are central to understanding, predicting, and controlling epidemic spreading and information propagation.

Task: Clustering
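
The straightforward but expensive way to obtain a cascade size distribution is Monte Carlo: simulate the cascade many times and histogram the final sizes. The paper's point is precisely that this can be done more efficiently; the naive baseline, for orientation, is (toy graph and activation probability assumed):

```python
# Naive Monte Carlo baseline (not the paper's efficient algorithm): simulate
# an independent cascade repeatedly and histogram the final cascade sizes.
import random
from collections import Counter

def cascade_size(adj, seed, p=0.3):
    """One independent-cascade run from a seed node; returns #activated."""
    active, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

# small toy graph as an adjacency list
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
sizes = Counter(cascade_size(adj, seed=0) for _ in range(10000))
for s in sorted(sizes):
    print(f"P(size={s}) ~ {sizes[s] / 10000:.3f}")
```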

Initialization of ReLUs for Dynamical Isometry

1 code implementation • NeurIPS 2019 • Rebekka Burkholz, Alina Dubatovka

Deep learning relies on good initialization schemes and hyperparameter choices prior to training a neural network.
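
As a baseline illustration (standard He initialization, not the exact-isometry scheme the paper proposes), scaling weights by $\sqrt{2/\text{fan-in}}$ keeps activation variance roughly stable across a deep ReLU stack, while a naive $\sqrt{1/\text{fan-in}}$ scale makes signals vanish:

```python
# He init compensates for ReLU halving the activation variance per layer;
# without the factor of 2, signals shrink exponentially with depth.
import torch
import torch.nn as nn

def deep_relu_net(depth=50, width=256, he=True):
    layers = []
    for _ in range(depth):
        lin = nn.Linear(width, width, bias=False)
        std = (2.0 / width) ** 0.5 if he else (1.0 / width) ** 0.5
        nn.init.normal_(lin.weight, std=std)
        layers += [lin, nn.ReLU()]
    return nn.Sequential(*layers)

x = torch.randn(128, 256)
for he in (True, False):
    out = deep_relu_net(he=he)(x)
    print(f"he={he}: output std = {out.std():.3e}")  # ~1 vs. vanishing
```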
