Search Results for author: Elliot J. Crowley

Found 24 papers, 17 papers with code

Hyperparameter Selection in Continual Learning

no code implementations • 9 Apr 2024 • Thomas L. Lee, Sigrid Passano Hellan, Linus Ericsson, Elliot J. Crowley, Amos Storkey

In continual learning (CL) -- where a learner trains on a stream of data -- standard hyperparameter optimisation (HPO) cannot be applied, as a learner does not have access to all of the data at the same time.

Continual Learning
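
As a rough illustration of the constraint described in the abstract, the sketch below performs hyperparameter selection greedily, one task at a time, using only the current task's data. This is an assumed toy protocol for exposition, not the method proposed in the paper; `make_model` and the task loaders are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

def train_one_task(model, lr, train_loader, epochs=1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def accuracy(model, loader):
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def continual_hpo(make_model, task_stream, lr_grid=(0.1, 0.01, 0.001)):
    model = make_model()
    for train_loader, val_loader in task_stream:  # tasks arrive one at a time
        best_lr, best_acc = lr_grid[0], -1.0
        for lr in lr_grid:
            # Selection can only use the current task: earlier data is gone.
            candidate = make_model()
            candidate.load_state_dict(model.state_dict())
            train_one_task(candidate, lr, train_loader)
            acc = accuracy(candidate, val_loader)
            if acc > best_acc:
                best_lr, best_acc = lr, acc
        train_one_task(model, best_lr, train_loader)  # commit the chosen lr
    return model
```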

PlainMamba: Improving Non-Hierarchical Mamba in Visual Recognition

1 code implementation • 26 Mar 2024 • Chenhongyi Yang, Zehui Chen, Miguel Espinosa, Linus Ericsson, Zhenyu Wang, Jiaming Liu, Elliot J. Crowley

In this paper, we further adapt the selective scanning process of Mamba to the visual domain, enhancing its ability to learn features from two-dimensional images by (i) a continuous 2D scanning process that improves spatial continuity by ensuring adjacency of tokens in the scanning sequence, and (ii) direction-aware updating which enables the model to discern the spatial relations of tokens by encoding directional information.

Image Classification • Instance Segmentation • +3
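
To make the "continuous 2D scanning" idea concrete, here is a minimal sketch of a snake (boustrophedon) scan that flattens an HxW token grid so that consecutive sequence positions are always spatially adjacent, unlike plain row-major flattening, which jumps from the end of one row to the start of the next. It illustrates the property described in the abstract and is not the paper's exact implementation.

```python
import torch

def snake_scan_indices(h: int, w: int) -> torch.Tensor:
    idx = torch.arange(h * w).reshape(h, w)
    idx[1::2] = idx[1::2].flip(dims=[1])  # reverse every other row
    return idx.reshape(-1)

def flatten_tokens_continuous(x: torch.Tensor) -> torch.Tensor:
    """x: (B, H, W, C) token grid -> (B, H*W, C) sequence of adjacent tokens."""
    b, h, w, c = x.shape
    order = snake_scan_indices(h, w)
    return x.reshape(b, h * w, c)[:, order]

tokens = torch.randn(2, 4, 4, 8)
seq = flatten_tokens_continuous(tokens)  # every step moves to a 2D neighbour
print(seq.shape)                         # torch.Size([2, 16, 8])
```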

EgoPoseFormer: A Simple Baseline for Egocentric 3D Human Pose Estimation

no code implementations • 26 Mar 2024 • Chenhongyi Yang, Anastasia Tkach, Shreyas Hampali, Linguang Zhang, Elliot J. Crowley, Cem Keskin

We also show that our method can be seamlessly extended to monocular settings, achieving state-of-the-art performance on the SceneEgo dataset: it improves MPJPE by 25.5mm (a 21% improvement) over the best existing method while using only 60.7% of its model parameters and 36.4% of its FLOPs.

Egocentric Pose Estimation
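
For reference, MPJPE (Mean Per-Joint Position Error), the metric quoted above, is the average Euclidean distance between predicted and ground-truth 3D joint positions, typically reported in millimetres:

```python
import torch

def mpjpe(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """pred, gt: (N, J, 3) joint coordinates in mm -> scalar mean error."""
    return (pred - gt).norm(dim=-1).mean()

pred = torch.randn(8, 17, 3) * 100  # 8 poses, 17 joints each
gt = torch.randn(8, 17, 3) * 100
print(f"MPJPE: {mpjpe(pred, gt).item():.1f} mm")
```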

WidthFormer: Toward Efficient Transformer-based BEV View Transformation

1 code implementation • 8 Jan 2024 • Chenhongyi Yang, Tianwei Lin, Lichao Huang, Elliot J. Crowley

In this work, we present WidthFormer, a novel transformer-based Bird's-Eye-View (BEV) 3D detection method tailored for real-time autonomous-driving applications.

3D Object Detection • Autonomous Driving • +3

DLAS: An Exploration and Assessment of the Deep Learning Acceleration Stack

no code implementations • 15 Nov 2023 • Perry Gibson, José Cano, Elliot J. Crowley, Amos Storkey, Michael O'Boyle

Deep Neural Networks (DNNs) are extremely computationally demanding, which presents a large barrier to their deployment on resource-constrained devices.

Code Generation

Generate Your Own Scotland: Satellite Image Generation Conditioned on Maps

1 code implementation • 31 Aug 2023 • Miguel Espinosa, Elliot J. Crowley

Despite recent advances in image generation, diffusion models remain largely underexplored in Earth Observation.

Earth Observation • Image Generation

GPViT: A High Resolution Non-Hierarchical Vision Transformer with Group Propagation

2 code implementations • 13 Dec 2022 • Chenhongyi Yang, Jiarui Xu, Shalini De Mello, Elliot J. Crowley, Xiaolong Wang

In each GP Block, features are first grouped together by a fixed number of learnable group tokens; we then perform Group Propagation, where global information is exchanged between the grouped features; finally, global information in the updated grouped features is returned to the image features through a transformer decoder.

Image Classification • Instance Segmentation • +5
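
The abstract's three-step Group Propagation scheme can be sketched with standard attention primitives: cross-attend image features into a fixed set of learnable group tokens, mix information among the groups, then cross-attend back. The dimensions and the mixing layer below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GPBlockSketch(nn.Module):
    def __init__(self, dim=256, num_groups=16, num_heads=8):
        super().__init__()
        self.group_tokens = nn.Parameter(torch.randn(num_groups, dim))
        self.group_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mix = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.ungroup_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                        # x: (B, N, C) image features
        g = self.group_tokens.expand(x.size(0), -1, -1)
        g, _ = self.group_attn(g, x, x)          # (1) group the features
        g = self.mix(g)                          # (2) propagate globally
        out, _ = self.ungroup_attn(x, g, g)      # (3) return to image features
        return x + out

feats = torch.randn(2, 196, 256)
print(GPBlockSketch()(feats).shape)              # torch.Size([2, 196, 256])
```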

Plug and Play Active Learning for Object Detection

1 code implementation • 21 Nov 2022 • Chenhongyi Yang, Lichao Huang, Elliot J. Crowley

To overcome this challenge, we introduce Plug and Play Active Learning (PPAL), a simple and effective AL strategy for object detection.

Active Learning • Image Classification • +3

Prediction-Guided Distillation for Dense Object Detection

1 code implementation • 10 Mar 2022 • Chenhongyi Yang, Mateusz Ochal, Amos Storkey, Elliot J. Crowley

Based on this, we propose Prediction-Guided Distillation (PGD), which focuses distillation on these key predictive regions of the teacher and yields considerable gains in performance over many existing KD baselines.

Dense Object Detection • Knowledge Distillation • +2
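
A rough sketch of the idea of focusing distillation on the teacher's key predictive regions: weight a feature-distillation loss by a mask derived from the teacher's per-location confidence. The confidence-based mask here is an illustrative stand-in for the paper's prediction-guided weighting, not its exact formulation.

```python
import torch
import torch.nn.functional as F

def prediction_guided_distill_loss(student_feat, teacher_feat, teacher_logits,
                                   top_fraction=0.1):
    """student_feat, teacher_feat: (B, C, H, W); teacher_logits: (B, K, H, W)."""
    quality = teacher_logits.sigmoid().amax(dim=1)            # (B, H, W) confidence
    b, h, w = quality.shape
    k = max(1, int(top_fraction * h * w))
    thresh = quality.flatten(1).topk(k, dim=1).values[:, -1]  # per-image cutoff
    mask = (quality >= thresh[:, None, None]).float()         # key regions only
    per_pixel = F.mse_loss(student_feat, teacher_feat, reduction="none").mean(dim=1)
    return (per_pixel * mask).sum() / mask.sum().clamp(min=1)

s = torch.randn(2, 256, 32, 32)   # student features
t = torch.randn(2, 256, 32, 32)   # teacher features
logits = torch.randn(2, 80, 32, 32)
print(prediction_guided_distill_loss(s, t, logits))
```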

Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning

1 code implementation • 26 Nov 2021 • Chenhongyi Yang, Lichao Huang, Elliot J. Crowley

The goal of contrastive learning based pre-training is to leverage large quantities of unlabeled data to produce a model that can be readily adapted downstream.

Contrastive Learning • Instance Segmentation • +2

Neural Architecture Search as Program Transformation Exploration

1 code implementation • 12 Feb 2021 • Jack Turner, Elliot J. Crowley, Michael O'Boyle

This unification allows us to express existing NAS operations as combinations of simpler transformations.

Neural Architecture Search

Optimizing Grouped Convolutions on Edge Devices

1 code implementation • 17 Jun 2020 • Perry Gibson, José Cano, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

We observe that our new implementation scales well with the number of groups and provides the best inference times in all settings, improving on the existing implementations of grouped convolutions in TVM, PyTorch and TensorFlow Lite by 3.4x, 8x and 4x on average, respectively.
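
For context, a grouped convolution splits its input channels into G independent groups, cutting parameters and multiply-accumulates by roughly a factor of G relative to a dense convolution, which is why efficient implementations matter:

```python
import torch
import torch.nn as nn

dense = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=1)
grouped = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8)

x = torch.randn(1, 64, 32, 32)
assert dense(x).shape == grouped(x).shape   # same output shape

# Weight counts differ by the group factor: 64*64*9 vs 64*(64/8)*9.
print(dense.weight.numel(), grouped.weight.numel())  # 36864 4608
```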

Neural Architecture Search without Training

2 code implementations • 8 Jun 2020 • Joseph Mellor, Jack Turner, Amos Storkey, Elliot J. Crowley

In this work, we examine the overlap of activations between datapoints in untrained networks and motivate how this can give a measure which is usefully indicative of a network's trained performance.

Neural Architecture Search
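
A simplified sketch of the scoring idea described in the abstract: pass one minibatch through an untrained network, record each datapoint's binary ReLU activation pattern, and score the architecture by how distinct those patterns are, via the log-determinant of an agreement kernel. Details are simplified relative to the paper.

```python
import torch
import torch.nn as nn

def naswot_score(model: nn.Module, x: torch.Tensor) -> float:
    codes, hooks = [], []
    def hook(_m, _inp, out):
        codes.append((out > 0).flatten(1).float())   # binary activation code
    for m in model.modules():
        if isinstance(m, nn.ReLU):
            hooks.append(m.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    c = torch.cat(codes, dim=1)                      # (batch, total ReLU units)
    agreement = c @ c.t() + (1 - c) @ (1 - c).t()    # co-activation agreement
    return torch.logdet(agreement).item()            # higher -> more distinct

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
print(naswot_score(net, torch.randn(16, 32)))        # score for one untrained net
```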

Performance Aware Convolutional Neural Network Channel Pruning for Embedded GPUs

no code implementations • 20 Feb 2020 • Valentin Radu, Kuba Kaszyk, Yuan Wen, Jack Turner, José Cano, Elliot J. Crowley, Bjorn Franke, Amos Storkey, Michael O'Boyle

We evaluate higher-level libraries, which analyze the input characteristics of a convolutional layer and, based on these, produce optimized OpenCL (Arm Compute Library and TVM) and CUDA (cuDNN) code.

Model Compression • Network Pruning

Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels

3 code implementations • NeurIPS 2020 • Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

Recently, different machine learning methods have been introduced to tackle the challenging few-shot learning scenario, that is, learning from a small labeled dataset related to a specific task.

Bayesian Inference • Domain Adaptation • +4
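
A minimal sketch of the deep-kernel idea in a few-shot regression setting: a neural feature extractor feeds a standard RBF kernel, and Gaussian-process inference on a small support set yields query predictions. This simplified variant is for illustration only and omits the paper's full Bayesian treatment.

```python
import torch
import torch.nn as nn

class DeepRBFKernel(nn.Module):
    def __init__(self, in_dim=16, feat_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                 nn.Linear(32, feat_dim))
        self.log_ls = nn.Parameter(torch.zeros(()))   # learnable lengthscale

    def forward(self, x1, x2):
        z1, z2 = self.net(x1), self.net(x2)            # kernel on learned features
        d2 = torch.cdist(z1, z2).pow(2)
        return torch.exp(-0.5 * d2 / self.log_ls.exp().pow(2))

def gp_predict(kernel, x_support, y_support, x_query, noise=1e-2):
    k_ss = kernel(x_support, x_support) + noise * torch.eye(len(x_support))
    k_qs = kernel(x_query, x_support)
    return k_qs @ torch.linalg.solve(k_ss, y_support)  # GP posterior mean

kern = DeepRBFKernel()
xs, ys = torch.randn(5, 16), torch.randn(5, 1)         # 5-shot support set
xq = torch.randn(3, 16)
print(gp_predict(kern, xs, ys, xq).shape)              # torch.Size([3, 1])
```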

BlockSwap: Fisher-guided Block Substitution for Network Compression on a Budget

2 code implementations • ICLR 2020 • Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey, Gavin Gray

The desire to map neural networks to varying-capacity devices has led to the development of a wealth of compression techniques, many of which involve replacing standard convolutional blocks in a large network with cheap alternative blocks.
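
To illustrate the kind of substitution described, the sketch below swaps a standard convolutional block for one common cheap alternative, a depthwise-separable block, and compares parameter counts. The Fisher-guided scoring that the paper uses to choose substitutions per block is omitted here.

```python
import torch
import torch.nn as nn

def standard_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU())

def cheap_block(c_in, c_out):
    # Depthwise 3x3 followed by pointwise 1x1: far fewer parameters.
    return nn.Sequential(nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in),
                         nn.Conv2d(c_in, c_out, 1),
                         nn.BatchNorm2d(c_out), nn.ReLU())

def n_params(m):
    return sum(p.numel() for p in m.parameters())

print(n_params(standard_block(64, 128)), n_params(cheap_block(64, 128)))
x = torch.randn(1, 64, 32, 32)
assert standard_block(64, 128)(x).shape == cheap_block(64, 128)(x).shape
```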

Separable Layers Enable Structured Efficient Linear Substitutions

1 code implementation • 3 Jun 2019 • Gavin Gray, Elliot J. Crowley, Amos Storkey

In response to the recent development of efficient dense layers, this paper shows that something as simple as replacing the linear components in pointwise convolutions with structured linear decompositions also produces substantial gains in the efficiency/accuracy tradeoff.
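
Since a pointwise (1x1) convolution is just a linear map over channels, its dense weight can be replaced by a structured decomposition. The low-rank factorisation below is one simple illustrative choice; the paper considers structured efficient linear substitutions more generally.

```python
import torch
import torch.nn as nn

def dense_pointwise(c_in, c_out):
    return nn.Conv2d(c_in, c_out, kernel_size=1)

def low_rank_pointwise(c_in, c_out, rank):
    # Two thin 1x1 convs compose to a rank-limited channel-mixing matrix.
    return nn.Sequential(nn.Conv2d(c_in, rank, 1), nn.Conv2d(rank, c_out, 1))

x = torch.randn(1, 256, 14, 14)
dense = dense_pointwise(256, 256)
cheap = low_rank_pointwise(256, 256, 32)
assert dense(x).shape == cheap(x).shape
print(sum(p.numel() for p in dense.parameters()),
      sum(p.numel() for p in cheap.parameters()))  # dense vs factorised
```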

Dilated DenseNets for Relational Reasoning

no code implementations • 1 Nov 2018 • Antreas Antoniou, Agnieszka Słowik, Elliot J. Crowley, Amos Storkey

Despite their impressive performance in many tasks, deep neural networks often struggle at relational reasoning.

Relational Reasoning

Distilling with Performance Enhanced Students

no code implementations • 24 Oct 2018 • Jack Turner, Elliot J. Crowley, Valentin Radu, José Cano, Amos Storkey, Michael O'Boyle

The task of accelerating large neural networks on general purpose hardware has, in recent years, prompted the use of channel pruning to reduce network size.

Model Compression

Pruning neural networks: is it time to nip it in the bud?

no code implementations • NIPS Workshop CDNNRIA 2018 • Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle

First, when time-constrained, it is better to train a simple, smaller network from scratch than to prune a large network.

A Closer Look at Structured Pruning for Neural Network Compression

2 code implementations • 10 Oct 2018 • Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle

Structured pruning is a popular method for compressing a neural network: given a large trained network, one alternates between removing channel connections and fine-tuning, thereby reducing the overall width of the network.

Network Pruning • Neural Network Compression
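
The alternating procedure described above can be sketched as a prune-then-fine-tune loop. Channel importance is ranked here by L1 weight norm, a common criterion chosen for illustration, and channels are only zeroed rather than physically removed; real structured pruning also rewires the downstream layers.

```python
import torch
import torch.nn as nn

def prune_channels(conv: nn.Conv2d, fraction: float):
    """Zero out the `fraction` of output channels with smallest L1 norm."""
    with torch.no_grad():
        importance = conv.weight.abs().sum(dim=(1, 2, 3))  # per-channel L1
        n_prune = int(fraction * conv.out_channels)
        drop = importance.argsort()[:n_prune]
        conv.weight[drop] = 0
        if conv.bias is not None:
            conv.bias[drop] = 0

def prune_and_finetune(model, finetune_fn, rounds=3, fraction=0.1):
    for _ in range(rounds):                    # alternate between...
        for m in model.modules():
            if isinstance(m, nn.Conv2d):
                prune_channels(m, fraction)    # ...removing channels
        finetune_fn(model)                     # ...and fine-tuning
```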

CINIC-10 is not ImageNet or CIFAR-10

2 code implementations • 2 Oct 2018 • Luke N. Darlow, Elliot J. Crowley, Antreas Antoniou, Amos J. Storkey

In this brief technical report we introduce the CINIC-10 dataset as a plug-in extended alternative for CIFAR-10.

Image Classification

Characterising Across-Stack Optimisations for Deep Convolutional Neural Networks

1 code implementation • 19 Sep 2018 • Jack Turner, José Cano, Valentin Radu, Elliot J. Crowley, Michael O'Boyle, Amos Storkey

Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices.

Neural Network Compression

Moonshine: Distilling with Cheap Convolutions

1 code implementation • NeurIPS 2018 • Elliot J. Crowley, Gavin Gray, Amos Storkey

Many engineers wish to deploy modern neural networks in memory-limited settings, but the development of flexible methods for reducing memory use is in its infancy, and there is little knowledge of the resulting cost-benefit trade-offs.
