1 code implementation • 22 Nov 2024 • Miguel Espinosa, Chenhongyi Yang, Linus Ericsson, Steven McDonagh, Elliot J. Crowley
Our observations culminate in a training-free approach that leverages DINOv2 features to endow SAM with semantic understanding, achieving instance-level class differentiation through feature-based similarity.
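As a rough illustration of that idea (not the authors' exact method), one can pool dense DINOv2 patch features inside each SAM mask and label the mask by cosine similarity to per-class prototype features; `patch_feats`, `masks` and `prototypes` below are assumed inputs, not a real API.

```python
import torch
import torch.nn.functional as F

def label_masks(patch_feats, masks, prototypes):
    """Assign a class to each SAM mask via DINOv2 feature similarity.

    patch_feats: (H, W, D) dense features, e.g. from a DINOv2 backbone.
    masks:       (N, H, W) boolean instance masks produced by SAM.
    prototypes:  (C, D) per-class reference features.
    Training-free: returns one class index per mask.
    """
    labels = []
    for m in masks:
        feat = patch_feats[m].mean(dim=0)               # pool features inside the mask
        sim = F.cosine_similarity(feat[None], prototypes, dim=-1)
        labels.append(sim.argmax().item())              # nearest class prototype wins
    return labels
```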
1 code implementation • 31 May 2024 • Linus Ericsson, Miguel Espinosa, Chenhongyi Yang, Antreas Antoniou, Amos Storkey, Shay B. Cohen, Steven McDonagh, Elliot J. Crowley
Using this search space, we perform experiments to find novel architectures, as well as improvements to existing ones, on the diverse Unseen NAS datasets.
no code implementations • 9 Apr 2024 • Thomas L. Lee, Sigrid Passano Hellan, Linus Ericsson, Elliot J. Crowley, Amos Storkey
In continual learning (CL) -- where a learner trains on a stream of data -- standard hyperparameter optimisation (HPO) cannot be applied, as a learner does not have access to all of the data at the same time.
1 code implementation • 26 Mar 2024 • Chenhongyi Yang, Zehui Chen, Miguel Espinosa, Linus Ericsson, Zhenyu Wang, Jiaming Liu, Elliot J. Crowley
The recent Mamba model has shown how state space models (SSMs) can be highly competitive with other architectures on sequential data, and initial attempts have been made to apply it to images.
1 code implementation • 26 Mar 2024 • Chenhongyi Yang, Anastasia Tkach, Shreyas Hampali, Linguang Zhang, Elliot J. Crowley, Cem Keskin
We also show that our method can be seamlessly extended to monocular settings, achieving state-of-the-art performance on the SceneEgo dataset and reducing MPJPE by 25.5mm (a 21% improvement) over the best existing method, while using only 60.7% of its model parameters and 36.4% of its FLOPs.
Ranked #1 on Egocentric Pose Estimation on UnrealEgo
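For reference, MPJPE (mean per-joint position error) is just the Euclidean distance between predicted and ground-truth joint positions, averaged over joints; a minimal version:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error, in the units of the inputs (here mm).

    pred, gt: (J, 3) arrays of predicted / ground-truth 3D joint positions.
    """
    return np.linalg.norm(pred - gt, axis=-1).mean()
```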
1 code implementation • 8 Jan 2024 • Chenhongyi Yang, Tianwei Lin, Lichao Huang, Elliot J. Crowley
We present WidthFormer, a novel transformer-based module to compute Bird's-Eye-View (BEV) representations from multi-view cameras for real-time autonomous-driving applications.
no code implementations • 15 Nov 2023 • Perry Gibson, José Cano, Elliot J. Crowley, Amos Storkey, Michael O'Boyle
Deep Neural Networks (DNNs) are extremely computationally demanding, which presents a large barrier to their deployment on resource-constrained devices.
1 code implementation • 31 Aug 2023 • Miguel Espinosa, Elliot J. Crowley
Despite recent advancements in image generation, diffusion models remain largely underexplored in Earth Observation.
2 code implementations • 13 Dec 2022 • Chenhongyi Yang, Jiarui Xu, Shalini De Mello, Elliot J. Crowley, Xiaolong Wang
In each GP Block, features are first grouped together by a fixed number of learnable group tokens; we then perform Group Propagation, where global information is exchanged between the grouped features; finally, global information in the updated grouped features is returned to the image features through a transformer decoder.
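A schematic of those three stages using off-the-shelf attention layers (dimensions and layer choices here are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class GPBlock(nn.Module):
    """Sketch of a Group Propagation block: group, propagate, ungroup."""

    def __init__(self, dim=256, num_groups=64, num_heads=8):
        super().__init__()
        self.group_tokens = nn.Parameter(torch.randn(num_groups, dim))
        self.group_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.propagate = nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
        self.ungroup = nn.TransformerDecoderLayer(dim, num_heads, batch_first=True)

    def forward(self, x):                           # x: (B, HW, dim) image features
        tokens = self.group_tokens.expand(x.shape[0], -1, -1)
        grouped, _ = self.group_attn(tokens, x, x)  # 1) group features into fixed tokens
        grouped = self.propagate(grouped)           # 2) exchange global information
        return self.ungroup(x, grouped)             # 3) return it to the image features
```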
1 code implementation • CVPR 2024 • Chenhongyi Yang, Lichao Huang, Elliot J. Crowley
To overcome this challenge, we introduce Plug and Play Active Learning (PPAL), a simple and effective AL strategy for object detection.
1 code implementation • 10 Mar 2022 • Chenhongyi Yang, Mateusz Ochal, Amos Storkey, Elliot J. Crowley
Based on this, we propose Prediction-Guided Distillation (PGD), which focuses distillation on these key predictive regions of the teacher and yields considerable gains in performance over many existing knowledge distillation (KD) baselines.
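The core mechanism, sketched: weight a feature-distillation loss by where the teacher is most confident, so the student imitates the teacher only at its key predictive regions (the top-fraction heuristic below is illustrative, not the paper's exact weighting):

```python
import torch

def prediction_guided_loss(student_feat, teacher_feat, teacher_scores, top_frac=0.1):
    """Distillation loss restricted to the teacher's key predictive regions.

    student_feat, teacher_feat: (B, C, H, W) feature maps.
    teacher_scores: (B, H, W) per-location teacher confidence
                    (e.g. the max classification score at each location).
    """
    B, _, H, W = teacher_feat.shape
    flat = teacher_scores.flatten(1)                         # (B, HW)
    k = max(1, int(top_frac * flat.shape[1]))
    thresh = flat.topk(k, dim=1).values[:, -1]               # k-th highest score per image
    mask = (flat >= thresh[:, None]).view(B, 1, H, W).float()
    diff = (student_feat - teacher_feat) ** 2
    return (diff * mask).sum() / mask.sum().clamp(min=1)     # averaged over selected locations
```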
1 code implementation • 26 Nov 2021 • Chenhongyi Yang, Lichao Huang, Elliot J. Crowley
The goal of contrastive-learning-based pre-training is to leverage large quantities of unlabeled data to produce a model that can be readily adapted downstream.
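This paper's specific method aside, the typical backbone of such pre-training is the InfoNCE objective: pull two augmented views of the same image together and push all other images in the batch away.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Standard InfoNCE loss over a batch of positive pairs.

    z1, z2: (B, D) embeddings of two augmented views of the same images;
    row i of z1 should match row i of z2 and repel every other row.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature          # (B, B) pairwise similarities
    targets = torch.arange(z1.shape[0], device=z1.device)
    return F.cross_entropy(logits, targets)     # diagonal entries are the positives
```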
1 code implementation • 12 Feb 2021 • Jack Turner, Elliot J. Crowley, Michael O'Boyle
This unification allows us to express existing NAS operations as combinations of simpler transformations.
1 code implementation • 17 Jun 2020 • Perry Gibson, José Cano, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
We observe that our new implementation scales well with the number of groups and provides the best inference times in all settings, improving the existing implementations of grouped convolutions in TVM, PyTorch and TensorFlow Lite by 3.4x, 8x and 4x on average respectively.
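For context, a grouped convolution splits channels into independent groups; in PyTorch it is just the `groups` argument, and the paper's point is that the generated code behind calls like this varies widely in efficiency across frameworks.

```python
import torch
import torch.nn as nn

# 64 input and 64 output channels split into 8 independent groups, so each
# filter sees only 64/8 = 8 input channels: roughly 8x fewer parameters and
# FLOPs than the equivalent dense convolution.
conv = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=8)
out = conv(torch.randn(1, 64, 32, 32))          # -> (1, 64, 32, 32)
```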
3 code implementations • 8 Jun 2020 • Joseph Mellor, Jack Turner, Amos Storkey, Elliot J. Crowley
In this work, we examine the overlap of activations between datapoints in untrained networks and motivate how this can give a measure which is usefully indicative of a network's trained performance.
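One way to realise such a measure, in the spirit of the paper: record binary ReLU activation patterns for a minibatch and score a network by how distinct those patterns are across datapoints (the details below are a sketch, not the paper's exact scoring function):

```python
import torch

def activation_overlap_score(model, x):
    """Training-free score from an untrained network's ReLU activations.

    Builds a kernel from the agreement between binary activation patterns
    of different datapoints; a higher log-determinant means the patterns
    are more distinct across the minibatch.
    """
    codes, hooks = [], []
    for m in model.modules():
        if isinstance(m, torch.nn.ReLU):
            hooks.append(m.register_forward_hook(
                lambda _, __, out: codes.append((out > 0).flatten(1).float())))
    with torch.no_grad():
        model(x)                                  # x: (B, ...) minibatch
    for h in hooks:
        h.remove()
    c = torch.cat(codes, dim=1)                   # (B, total ReLU units)
    K = c @ c.t() + (1 - c) @ (1 - c).t()         # per-pair activation agreement
    return torch.slogdet(K / c.shape[1]).logabsdet
```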
no code implementations • 20 Feb 2020 • Valentin Radu, Kuba Kaszyk, Yuan Wen, Jack Turner, Jose Cano, Elliot J. Crowley, Bjorn Franke, Amos Storkey, Michael O'Boyle
We evaluate higher-level libraries, which analyze the input characteristics of a convolutional layer, based on which they produce optimized OpenCL (Arm Compute Library and TVM) and CUDA (cuDNN) code.
3 code implementations • NeurIPS 2020 • Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
Recently, different machine learning methods have been introduced to tackle the challenging few-shot learning scenario, that is, learning from a small labeled dataset related to a specific task.
2 code implementations • ICLR 2020 • Jack Turner, Elliot J. Crowley, Michael O'Boyle, Amos Storkey, Gavin Gray
The desire to map neural networks to varying-capacity devices has led to the development of a wealth of compression techniques, many of which involve replacing standard convolutional blocks in a large network with cheap alternative blocks.
1 code implementation • 3 Jun 2019 • Gavin Gray, Elliot J. Crowley, Amos Storkey
In response to the development of recent efficient dense layers, this paper shows that something as simple as replacing linear components in pointwise convolutions with structured linear decompositions also produces substantial gains in the efficiency/accuracy tradeoff.
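As a generic illustration of that replacement (the paper uses particular structured decompositions; a plain low-rank factorisation is just the simplest instance), a pointwise convolution's dense linear map can be split into two thin ones:

```python
import torch.nn as nn

def low_rank_pointwise(in_ch, out_ch, rank):
    """Replace a dense 1x1 convolution with a low-rank pair, cutting the
    linear component's parameters from in_ch*out_ch to rank*(in_ch + out_ch).
    """
    return nn.Sequential(
        nn.Conv2d(in_ch, rank, kernel_size=1, bias=False),  # project down
        nn.Conv2d(rank, out_ch, kernel_size=1),             # project back up
    )
```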
no code implementations • 1 Nov 2018 • Antreas Antoniou, Agnieszka Słowik, Elliot J. Crowley, Amos Storkey
Despite their impressive performance in many tasks, deep neural networks often struggle at relational reasoning.
no code implementations • 24 Oct 2018 • Jack Turner, Elliot J. Crowley, Valentin Radu, José Cano, Amos Storkey, Michael O'Boyle
The task of accelerating large neural networks on general purpose hardware has, in recent years, prompted the use of channel pruning to reduce network size.
no code implementations • NIPS Workshop CDNNRIA 2018 • Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle
First, when time-constrained, it is better to train a simple, smaller network from scratch than to prune a large network.
2 code implementations • 10 Oct 2018 • Elliot J. Crowley, Jack Turner, Amos Storkey, Michael O'Boyle
Structured pruning is a popular method for compressing a neural network: given a large trained network, one alternates between removing channel connections and fine-tuning, reducing the overall width of the network.
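That alternation, sketched in PyTorch with a simple L1-norm importance heuristic (one common choice among several; a real implementation would also physically remove the pruned channels downstream):

```python
import torch

def prune_and_finetune(model, rounds, frac, finetune_fn):
    """Alternate between masking the least important conv filters and
    fine-tuning, shrinking the network's effective width each round."""
    for _ in range(rounds):
        for m in model.modules():
            if isinstance(m, torch.nn.Conv2d):
                norms = m.weight.detach().abs().sum(dim=(1, 2, 3))  # per-filter L1 norm
                k = int(frac * norms.numel())
                if k > 0:
                    idx = norms.topk(k, largest=False).indices
                    with torch.no_grad():
                        m.weight[idx] = 0         # zero out the weakest filters
        finetune_fn(model)                        # recover accuracy before the next round
    return model
```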
2 code implementations • 2 Oct 2018 • Luke N. Darlow, Elliot J. Crowley, Antreas Antoniou, Amos J. Storkey
In this brief technical report we introduce the CINIC-10 dataset as a plug-in extended alternative for CIFAR-10.
Ranked #6 on Image Classification on CINIC-10
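Because CINIC-10 is distributed as train/valid/test directories in `ImageFolder` layout with the same ten classes as CIFAR-10, it usually drops straight into existing CIFAR-10 pipelines (the path below is an assumption about where the archive was extracted):

```python
import torchvision
import torchvision.transforms as T

# Point ImageFolder at the extracted CINIC-10 split; classes match CIFAR-10.
train_set = torchvision.datasets.ImageFolder(
    "cinic-10/train",                  # hypothetical local path
    transform=T.ToTensor(),
)
```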
1 code implementation • 19 Sep 2018 • Jack Turner, José Cano, Valentin Radu, Elliot J. Crowley, Michael O'Boyle, Amos Storkey
Convolutional Neural Networks (CNNs) are extremely computationally demanding, presenting a large barrier to their deployment on resource-constrained devices.
1 code implementation • NeurIPS 2018 • Elliot J. Crowley, Gavin Gray, Amos Storkey
Many engineers wish to deploy modern neural networks in memory-limited settings, but the development of flexible methods for reducing memory use is in its infancy, and there is little knowledge of the resulting cost-benefit trade-offs.