2 code implementations • ICLR 2020 • Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey, Gavin Gray
The desire to deploy neural networks on devices of varying capacity has led to a wealth of compression techniques, many of which work by replacing standard convolutional blocks in a large network with cheaper alternative blocks.
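As a concrete illustration of such a substitution, here is a minimal PyTorch sketch; the depthwise-separable block shown is one common cheap alternative, not necessarily the specific substitution used in the paper:

```python
import torch.nn as nn

def cheap_block(in_ch, out_ch, stride=1):
    """A depthwise-separable substitute for a standard 3x3 conv block.

    Splitting the 3x3 convolution into a depthwise 3x3 followed by a
    pointwise 1x1 cuts parameters and FLOPs roughly by a factor of the
    channel count (illustrative choice, not the paper's exact block).
    """
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                  padding=1, groups=in_ch, bias=False),  # depthwise 3x3
        nn.BatchNorm2d(in_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),  # pointwise 1x1
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```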
1 code implementation • 3 Jun 2019 • Gavin Gray, Elliot J. Crowley, Amos Storkey
In response to the recent development of efficient dense layers, this paper shows that something as simple as replacing the linear components in pointwise convolutions with structured linear decompositions also produces substantial gains in the efficiency/accuracy tradeoff.
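One simple structured decomposition is a low-rank factorisation of the pointwise (1x1) convolution. A minimal PyTorch sketch, where the bottleneck width `rank` is a hypothetical user-chosen parameter (illustrative of the idea, not the paper's exact decomposition):

```python
import torch.nn as nn

def lowrank_pointwise(in_ch, out_ch, rank):
    """Replace a dense 1x1 (pointwise) convolution with a low-rank
    factorisation: project down to `rank` channels, then expand.

    Parameter count drops from in_ch * out_ch
    to rank * (in_ch + out_ch), a saving whenever rank is small
    relative to the channel counts.
    """
    return nn.Sequential(
        nn.Conv2d(in_ch, rank, kernel_size=1, bias=False),
        nn.Conv2d(rank, out_ch, kernel_size=1, bias=False),
    )
```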
no code implementations • 20 Oct 2018 • Gavin Gray, Elliot J. Crowley, Amos Storkey
Typical recent neural network designs consist primarily of convolutional layers, but the tricks enabling structured efficient linear layers (SELLs) have not yet been adapted to the convolutional setting.
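For context, a minimal PyTorch sketch of one classic SELL, a circulant layer whose matrix-vector product reduces to FFTs; this illustrates the fully-connected setting the paper starts from, not its convolutional adaptation:

```python
import torch

class CirculantLinear(torch.nn.Module):
    """Linear layer whose weight matrix is circulant.

    A circulant matrix is fully described by its first column w, and
    its matrix-vector product is a circular convolution, computable
    via FFTs in O(n log n) time with O(n) parameters instead of O(n^2).
    """
    def __init__(self, n):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(n) / n ** 0.5)

    def forward(self, x):  # x: (batch, n)
        # y = IFFT(FFT(w) * FFT(x)) realises the circulant product.
        y = torch.fft.ifft(torch.fft.fft(self.w) *
                           torch.fft.fft(x, dim=-1), dim=-1)
        return y.real
```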
1 code implementation • NeurIPS 2018 • Elliot J. Crowley, Gavin Gray, Amos Storkey
Many engineers wish to deploy modern neural networks in memory-limited settings, but the development of flexible methods for reducing memory use is in its infancy, and little is known about the resulting cost-benefit tradeoffs.