Search Results for author: Ilan Price

Found 9 papers, 3 papers with code

Deep Neural Network Initialization with Sparsity Inducing Activations

no code implementations • 25 Feb 2024 • Ilan Price, Nicholas Daultry Ball, Samuel C. H. Lam, Adam C. Jones, Jared Tanner

Inducing and leveraging sparse activations during training and inference is a promising avenue for improving the computational efficiency of deep networks, which is increasingly important as network sizes continue to grow and their applications become more widespread.

Computational Efficiency
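
As an illustration of what a sparsity-inducing activation looks like, here is a minimal NumPy sketch of a shifted ReLU applied to Gaussian pre-activations; the threshold values and the Gaussian model are assumptions made for this toy, not details drawn from the paper.

```python
import numpy as np

def shifted_relu(x, tau=1.0):
    # Zero out everything below the threshold tau; a larger tau zeroes a
    # larger fraction of pre-activations, i.e. produces sparser activations.
    return np.maximum(x - tau, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # stand-in for roughly Gaussian pre-activations

for tau in (0.0, 0.5, 1.0):
    a = shifted_relu(x, tau)
    print(f"tau={tau:.1f}: {np.mean(a == 0.0):.1%} of activations are exactly zero")
```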

GenCast: Diffusion-based ensemble forecasting for medium-range weather

no code implementations • 25 Dec 2023 • Ilan Price, Alvaro Sanchez-Gonzalez, Ferran Alet, Timo Ewalds, Andrew El-Kadi, Jacklynn Stott, Shakir Mohamed, Peter Battaglia, Remi Lam, Matthew Willson

Probabilistic weather forecasting is critical for decision-making in high-impact domains such as flood forecasting, energy system planning, or transportation routing, where quantifying the uncertainty of a forecast -- including probabilities of extreme events -- is essential to guide important cost-benefit trade-offs and mitigation measures.

Decision Making • Weather Forecasting
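
As a hedged sketch of why ensembles support the uncertainty quantification described above: given any set of sampled forecasts, the probability of an extreme event can be estimated empirically. The ensemble size, variable, threshold, and Gaussian spread below are all invented for illustration and are unrelated to GenCast's diffusion-based sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in ensemble: K forecasts scattered around a deterministic value.
# (GenCast samples its ensemble members from a diffusion model; the
# Gaussian spread here is purely illustrative.)
K = 50
deterministic_forecast = 12.0                 # e.g. wind speed in m/s
ensemble = deterministic_forecast + 3.0 * rng.standard_normal(K)

# Empirical probability of an "extreme event": exceeding a threshold.
threshold = 18.0
p_extreme = np.mean(ensemble > threshold)
print(f"estimated P(wind > {threshold} m/s) = {p_extreme:.2f}")
```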

Improved Projection Learning for Lower Dimensional Feature Maps

no code implementations • 27 Oct 2022 • Ilan Price, Jared Tanner

The requirement to repeatedly move large feature maps off- and on-chip during inference with convolutional neural networks (CNNs) imposes high costs in both energy and time.
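
To make the compression idea concrete, here is a small NumPy sketch that projects the channels of a feature map into a lower-dimensional subspace before it moves off-chip and projects back on return. The SVD-based projection and the toy correlated feature map are assumptions for this sketch; the paper learns its projections rather than computing them per input.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 256, 14, 14            # a CNN feature map: C channels of H x W
c = 64                           # reduced channel count kept off-chip

# Toy feature map whose channels are correlated, as real CNN activations are.
basis = rng.standard_normal((C, c // 2))
coeffs = rng.standard_normal((c // 2, H * W))
X = basis @ coeffs + 0.1 * rng.standard_normal((C, H * W))

# Projection from an SVD of X itself; the paper's projections are learned.
U, _, _ = np.linalg.svd(X, full_matrices=False)
P = U[:, :c]                     # C x c projection matrix

Z = P.T @ X                      # only c channels move off-chip
X_hat = P @ Z                    # reconstructed on the way back

print(f"off-chip channel traffic reduced by {1 - c / C:.0%}")
print(f"relative reconstruction error: "
      f"{np.linalg.norm(X_hat - X) / np.linalg.norm(X):.3f}")
```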

Increasing the accuracy and resolution of precipitation forecasts using deep generative models

1 code implementation • 23 Mar 2022 • Ilan Price, Stephan Rasp

Accurately forecasting extreme rainfall is notoriously difficult, but is also ever more crucial for society as climate change increases the frequency of such extremes.

Generative Adversarial Network • Super-Resolution

Dense for the Price of Sparse: Improved Performance of Sparsely Initialized Networks via a Subspace Offset

2 code implementations • 12 Feb 2021 • Ilan Price, Jared Tanner

We show that standard training of networks built with these layers, and pruned at initialization, achieves state-of-the-art accuracy for extreme sparsities on a variety of benchmark network architectures and datasets.
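
A minimal sketch of the construction the title points to, under the assumption that the effective weight is a fixed dense offset plus a sparse trainable matrix: the layer then behaves like a dense one while only a small fraction of its parameters train. The random offset below is a stand-in for the structured (DCT-like) offset used in the paper, and the sizes and density are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n_out, n_in = 128, 256
density = 0.05                   # fraction of trainable weights kept

# Fixed dense offset: never trained, so it adds no trainable parameters.
offset = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)

# Sparse trainable component, pruned at initialization.
mask = rng.random((n_out, n_in)) < density
S = mask * rng.standard_normal((n_out, n_in)) / np.sqrt(n_in * density)

W = offset + S                   # effective weight: dense, but only the
                                 # masked entries of S ever receive updates

x = rng.standard_normal(n_in)
y = W @ x
print(f"trainable parameters: {mask.sum()} of {n_out * n_in} "
      f"({mask.mean():.1%} density), output dim {y.shape[0]}")
```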

Trajectory growth lower bounds for random sparse deep ReLU networks

no code implementations • 25 Nov 2019 • Ilan Price, Jared Tanner

This paper considers the growth in the length of one-dimensional trajectories as they are passed through deep ReLU neural networks, which, among other things, is one measure of the expressivity of deep networks.
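
The quantity in question is easy to compute directly. Below is a small NumPy sketch that traces a circle through a stack of random He-initialized ReLU layers (dense rather than sparse, for simplicity) and prints the trajectory length after each layer; widths, depth, and discretization are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory_length(points):
    # Sum of distances between consecutive points on a discretized trajectory.
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

# A circle in the input space, discretized into a one-dimensional trajectory.
n, d, depth = 2000, 64, 10
t = np.linspace(0.0, 2 * np.pi, n)
traj = np.zeros((n, d))
traj[:, 0], traj[:, 1] = np.cos(t), np.sin(t)

print(f"input length: {trajectory_length(traj):.1f}")

# Pass the trajectory through random ReLU layers and track its length.
for layer in range(1, depth + 1):
    W = rng.standard_normal((d, d)) * np.sqrt(2.0 / d)  # He-style scaling
    traj = np.maximum(W @ traj.T, 0.0).T
    print(f"after layer {layer:2d}: length {trajectory_length(traj):.1f}")
```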

Trajectory growth through random deep ReLU networks

no code implementations • 25 Sep 2019 • Ilan Price, Jared Tanner

This paper considers the growth in the length of one-dimensional trajectories as they are passed through deep ReLU neural networks, which, among other things, is one measure of the expressivity of deep networks.
