Search Results for author: Jack Turner

Found 10 papers, 7 papers with code

Neural Architecture Search as Program Transformation Exploration

1 code implementation • 12 Feb 2021 • Jack Turner, Elliot J. Crowley, Michael O'Boyle

This unification allows us to express existing NAS operations as combinations of simpler transformations.

Neural Architecture Search

Optimizing Grouped Convolutions on Edge Devices

1 code implementation • 17 Jun 2020 • Perry Gibson, José Cano, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

We observe that our new implementation scales well with the number of groups and provides the best inference times in all settings, improving the existing implementations of grouped convolutions in TVM, PyTorch and TensorFlow Lite by 3.4x, 8x and 4x on average, respectively.
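
For readers unfamiliar with the operator, the snippet below shows what a grouped convolution is in PyTorch: with groups=g, each group of input channels is convolved by its own set of filters, cutting parameters and compute roughly g-fold. This is background illustration only, a minimal sketch rather than the paper's optimized edge-device implementation.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)                    # (batch, channels, H, W)
standard = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=1)
grouped  = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8)

print(standard(x).shape, grouped(x).shape)        # same output shape
print(sum(p.numel() for p in standard.parameters()),
      sum(p.numel() for p in grouped.parameters()))  # roughly 8x fewer weights in the grouped layer
```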

Neural Architecture Search without Training

2 code implementations • 8 Jun 2020 • Joseph Mellor, Jack Turner, Amos Storkey, Elliot J. Crowley

In this work, we examine the overlap of activations between datapoints in untrained networks and motivate how this can give a measure which is usefully indicative of a network's trained performance.

Neural Architecture Search
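
As a rough illustration of the idea in this excerpt, the sketch below scores an untrained network from the binary ReLU activation codes of a single minibatch: datapoints whose codes agree on many units yield a near-singular kernel and a low score, while well-separated codes yield a high one. This is a hedged approximation of the described measure, not the authors' released implementation; the function name and the jitter constant are illustrative.

```python
import torch
import torch.nn as nn

def activation_overlap_score(model: nn.Module, batch: torch.Tensor) -> torch.Tensor:
    """Score an untrained network from the binary ReLU codes of one minibatch."""
    codes, hooks = [], []

    def hook(_module, _inp, out):
        # Binary code: which units are active for each datapoint.
        codes.append((out.detach() > 0).flatten(1).float())

    for m in model.modules():
        if isinstance(m, nn.ReLU):
            hooks.append(m.register_forward_hook(hook))

    with torch.no_grad():
        model(batch)
    for h in hooks:
        h.remove()

    c = torch.cat(codes, dim=1)                   # (N, total_units), entries in {0, 1}
    # Kernel entry (i, j) = number of units on which codes i and j agree,
    # i.e. total units minus the Hamming distance between the two codes.
    k = c @ c.t() + (1.0 - c) @ (1.0 - c.t())
    _, logdet = torch.linalg.slogdet(k + 1e-6 * torch.eye(c.shape[0]))
    return logdet                                 # larger = more distinct activation patterns

# Example: score a tiny untrained MLP on random data.
net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
print(activation_overlap_score(net, torch.randn(16, 32)))
```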

Performance Aware Convolutional Neural Network Channel Pruning for Embedded GPUs

no code implementations • 20 Feb 2020 • Valentin Radu, Kuba Kaszyk, Yuan Wen, Jack Turner, José Cano, Elliot J. Crowley, Bjorn Franke, Amos Storkey, Michael O'Boyle

We evaluate higher-level libraries that analyze the input characteristics of a convolutional layer and, based on these, produce optimized OpenCL (Arm Compute Library and TVM) and CUDA (cuDNN) code.

Model Compression, Network Pruning

Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels

3 code implementations • NeurIPS 2020 • Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

Recently, different machine learning methods have been introduced to tackle the challenging few-shot learning scenario, that is, learning from a small labeled dataset related to a specific task.

Bayesian Inference, Domain Adaptation, +4 more

BlockSwap: Fisher-guided Block Substitution for Network Compression on a Budget

2 code implementations • ICLR 2020 • Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey, Gavin Gray

The desire to map neural networks to varying-capacity devices has led to the development of a wealth of compression techniques, many of which involve replacing standard convolutional blocks in a large network with cheap alternative blocks.
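
To make "cheap alternative blocks" concrete, the sketch below swaps a standard 3x3 convolution block for a depthwise-separable one, a common substitution of this kind. The paper's actual block menu and its Fisher-guided selection procedure are not reproduced here, and the helper names are illustrative.

```python
import torch.nn as nn

def standard_block(c_in: int, c_out: int) -> nn.Module:
    # A conventional 3x3 convolution block.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1, bias=False),
                         nn.BatchNorm2d(c_out), nn.ReLU())

def cheap_block(c_in: int, c_out: int) -> nn.Module:
    # One cheap alternative: depthwise 3x3 followed by pointwise 1x1.
    return nn.Sequential(
        nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False),
        nn.Conv2d(c_in, c_out, 1, bias=False),
        nn.BatchNorm2d(c_out), nn.ReLU(),
    )
```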

Distilling with Performance Enhanced Students

no code implementations • 24 Oct 2018 • Jack Turner, Elliot J. Crowley, Valentin Radu, José Cano, Amos Storkey, Michael O'Boyle

The task of accelerating large neural networks on general purpose hardware has, in recent years, prompted the use of channel pruning to reduce network size.

Model Compression
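
For context on the distillation side of this paper's title, the following is the standard soft-target knowledge-distillation loss; the temperature and mixing weight shown are illustrative defaults, not the paper's settings.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                   # rescale gradient magnitude
    # Hard targets: the usual cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```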

A Closer Look at Structured Pruning for Neural Network Compression

2 code implementations • 10 Oct 2018 • Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle

Structured pruning is a popular method for compressing a neural network: given a large trained network, one alternates between removing channel connections and fine-tuning, reducing the overall width of the network.

Network Pruning, Neural Network Compression
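
A minimal sketch of one prune-then-fine-tune step of the procedure described above, using PyTorch's built-in structured-pruning utilities as a stand-in for the paper's own ranking criteria; the pruning fraction is illustrative and the fine-tuning step is only indicated.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        # Zero out the 25% of output channels with the smallest L1 norm.
        prune.ln_structured(module, name="weight", amount=0.25, n=1, dim=0)
        prune.remove(module, "weight")            # bake the mask into the weight tensor

# ... fine-tune `model` here, then repeat with a larger pruning fraction ...
```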

Pruning neural networks: is it time to nip it in the bud?

no code implementations • NIPS Workshop CDNNRIA 2018 • Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle

First, when time-constrained, it is better to train a simple, smaller network from scratch than prune a large network.

Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks

1 code implementation • 19 Sep 2018 • Jack Turner, José Cano, Valentin Radu, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices.

Neural Network Compression
