no code implementations • 6 Nov 2024 • Tyler Clark, Mark Towers, Christine Evers, Jonathon Hare
Rainbow Deep Q-Network (DQN) demonstrated that combining multiple independent enhancements can significantly boost a reinforcement learning (RL) agent's performance.
no code implementations • 30 Oct 2024 • Jay Bear, Adam Prügel-Bennett, Jonathon Hare
Iterative algorithms solve problems by taking steps until a solution is reached.
no code implementations • 17 Jan 2024 • Lei Xun, Mingyu Hu, Hengrui Zhao, Amit Kumar Singh, Jonathon Hare, Geoff V. Merrett
Distributed inference is a popular approach for efficient DNN inference at the edge.
1 code implementation • 17 Jan 2024 • Lei Xun, Jonathon Hare, Geoff V. Merrett
In this thesis, we propose a combined method: a system for DNN performance trade-off management that combines the runtime trade-off opportunities in both algorithms and hardware to meet dynamically changing application performance targets and hardware constraints in real time.
no code implementations • 10 Nov 2022 • Bhumika Mistry, Katayoun Farrahi, Jonathon Hare
Multilayer Perceptrons struggle to learn certain simple arithmetic tasks.
no code implementations • 20 Jan 2022 • Daniela Mihai, Jonathon Hare
Physical sketches are created by learning programs to control a drawing robot.
no code implementations • NeurIPS Workshop SVRHM 2021 • Daniela Mihai, Jonathon Hare
We present an investigation into how representational losses can affect the drawings produced by artificial agents playing a communication game.
1 code implementation • NeurIPS 2021 • Bhumika Mistry, Katayoun Farrahi, Jonathon Hare
To achieve systematic generalisation, it first makes sense to master simple tasks such as arithmetic.
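As a concrete illustration of the kind of module such work studies (this is the standard Neural Accumulator formulation, not the authors' own implementation), a NAC parameterises its weights as tanh(W_hat) * sigmoid(M_hat), biasing them towards {-1, 0, 1} so the unit can represent exact addition and subtraction:

```python
import numpy as np

def nac(x, W_hat, M_hat):
    """Neural Accumulator (NAC)-style cell: the effective weights
    W = tanh(W_hat) * sigmoid(M_hat) saturate towards {-1, 0, 1},
    biasing the unit towards exact addition/subtraction of inputs."""
    W = np.tanh(W_hat) * (1.0 / (1.0 + np.exp(-M_hat)))
    return x @ W

# Saturated parameters recover (near-)exact addition: W -> [[1], [1]].
W_hat = np.full((2, 1), 20.0)
M_hat = np.full((2, 1), 20.0)
x = np.array([[3.0, 4.0]])
print(nac(x, W_hat, M_hat))   # close to [[7.]]
```

When the parameters are far from saturation the output is an inexact weighted sum, which is precisely the failure mode that analyses of arithmetic generalisation probe.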
no code implementations • 29 Sep 2021 • Mark Tuddenham, Adam Prugel-Bennett, Jonathon Hare
The optimisation of neural networks can be sped up by orthogonalising the gradients before the optimisation step, ensuring the diversification of the learned representations.
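The excerpt does not spell out the orthogonalisation procedure; one standard way to orthogonalise a weight-matrix gradient (a minimal sketch, not necessarily the authors' exact method) is to keep only the orthogonal factor of its SVD:

```python
import numpy as np

def orthogonalise(grad: np.ndarray) -> np.ndarray:
    """Replace a weight-matrix gradient with its nearest semi-orthogonal
    matrix, obtained from the SVD by setting all singular values to 1."""
    u, _, vt = np.linalg.svd(grad, full_matrices=False)
    return u @ vt

rng = np.random.default_rng(0)
g = rng.normal(size=(4, 3))
og = orthogonalise(g)
# The columns of the result are orthonormal: og.T @ og == I.
print(np.allclose(og.T @ og, np.eye(3)))
```

Because the orthogonalised update has equal magnitude in every direction it spans, no single direction dominates, which is the intuition behind the claimed diversification of learned representations.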
3 code implementations • 20 Sep 2021 • Jia Bi, Jonathon Hare, Geoff V. Merrett
When compared to GhostNet, inference latency on the Jetson Nano is improved by 1.3x and 2x on the GPU and CPU respectively.
no code implementations • 26 Jul 2021 • Yue Jiao, Jonathon Hare, Adam Prügel-Bennett
Although different paradigms of visual semantic embedding models are designed to align visual features and distributed word representations, it is unclear to what extent current ZSL models encode semantic information from distributed word representations.
no code implementations • 26 Jul 2021 • Yue Jiao, Jonathon Hare, Adam Prügel-Bennett
We find that contextual representations in language models outperform static word embeddings when the compositional chain of the object is short.
no code implementations • 17 Jul 2021 • Hishan Parry, Lei Xun, Amin Sabet, Jia Bi, Jonathon Hare, Geoff V. Merrett
The new reduced design space results in a BLEU score increase of approximately 1% for sub-optimal models from the original design space, with a wide range of performance scaling between 0.356s and 1.526s for the GPU and 2.9s and 7.31s for the CPU.
no code implementations • 21 Jun 2021 • Amin Sabet, Jonathon Hare, Bashir Al-Hashimi, Geoff V. Merrett
In this paper, we propose temporal early exits to reduce the computational complexity of per-frame video object detection.
1 code implementation • NeurIPS 2021 • Daniela Mihai, Jonathon Hare
Evidence that visual communication preceded written language and provided a basis for it goes back to prehistory, in forms such as cave and rock paintings depicting traces of our distant ancestors.
1 code implementation • 8 May 2021 • Wei Lou, Lei Xun, Amin Sabet, Jia Bi, Jonathon Hare, Geoff V. Merrett
However, the training process of such dynamic DNNs can be costly, since platform-aware models of different deployment scenarios must be retrained to become dynamic.
1 code implementation • 30 Mar 2021 • Daniela Mihai, Jonathon Hare
We present a bottom-up differentiable relaxation of the process of drawing points, lines and curves into a pixel raster.
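A minimal sketch of the idea for a single point (the paper handles points, lines and curves; the Gaussian fall-off here is an illustrative choice, not the paper's exact relaxation):

```python
import numpy as np

def rasterise_point(px, py, size=8, sigma=1.0):
    """Soft-rasterise a single 2-D point into a size x size grid: each
    pixel's intensity falls off smoothly (a Gaussian of the squared
    distance), so the raster is differentiable w.r.t. (px, py)."""
    ys, xs = np.mgrid[0:size, 0:size]
    d2 = (xs - px) ** 2 + (ys - py) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

img = rasterise_point(3.0, 4.0)
# The brightest pixel sits at the integer location nearest the point.
print(np.unravel_index(img.argmax(), img.shape))
```

A hard rasteriser would set one pixel and give zero gradient everywhere; the smooth fall-off is what lets gradients flow back to the point coordinates.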
no code implementations • 25 Jan 2021 • Daniela Mihai, Jonathon Hare
The majority of work has focused on using fixed, pretrained image feature extraction networks which potentially bias the information the agents learn to communicate.
2 code implementations • 23 Jan 2021 • Bhumika Mistry, Katayoun Farrahi, Jonathon Hare
Neural Arithmetic Logic Modules have become a growing area of interest, though remain a niche field.
no code implementations • NeurIPS Workshop SVRHM 2020 • Ethan Harris, Daniela Mihai, Jonathon Hare
Primate visual systems are well known to exhibit varying degrees of bottlenecks in the early visual pathway.
1 code implementation • 6 Oct 2020 • Ethan Harris, Daniela Mihai, Jonathon Hare
The colour tuning data can further be used to form a rich understanding of how colour is encoded by a network.
no code implementations • NeurIPS 2020 • Matthew Painter, Jonathon Hare, Adam Prugel-Bennett
In this work we empirically show that linear disentangled representations are not generally present in standard VAE models and that they instead require altering the loss landscape to induce them.
5 code implementations • 27 Feb 2020 • Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, Jonathon Hare
Finally, we show that a consequence of the difference between interpolating MSDA such as MixUp and masking MSDA such as FMix is that the two can be combined to improve performance even further.
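To make the distinction concrete: interpolating MSDA blends two inputs elementwise, while masking MSDA composites them through a binary mask (a fixed half-and-half mask stands in here for FMix's sampled low-frequency masks):

```python
import numpy as np

def mixup(x1, x2, lam):
    """Interpolating MSDA (MixUp-style): blend two inputs elementwise."""
    return lam * x1 + (1.0 - lam) * x2

def masked_mix(x1, x2, mask):
    """Masking MSDA (FMix-style): take each pixel from one input or the
    other according to a binary mask. FMix samples masks from
    low-frequency noise; a fixed mask is used here for illustration."""
    return mask * x1 + (1.0 - mask) * x2

x1 = np.ones((4, 4))
x2 = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[:, :2] = 1.0           # left half from x1, right half from x2
print(mixup(x1, x2, 0.5).mean(), masked_mix(x1, x2, mask).mean())
```

Both augmentations mix the same proportion of each input, but the interpolating version produces intermediate pixel values while the masking version preserves local image statistics, which is why the two can be complementary.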
Ranked #3 on Image Classification on Fashion-MNIST
no code implementations • 13 Nov 2019 • Daniela Mihai, Jonathon Hare
There has been an increasing interest in the area of emergent communication between agents which learn to play referential signalling games with realistic images.
1 code implementation • 14 Oct 2019 • Ethan Harris, Daniela Mihai, Jonathon Hare
Colour vision has long fascinated scientists, who have sought to understand both the physiology of the mechanics of colour vision and the psychophysics of colour perception.
no code implementations • 25 Sep 2019 • Zezhen Zeng, Jonathon Hare, Adam Prügel-Bennett
Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset.
1 code implementation • NeurIPS 2019 • Yan Zhang, Jonathon Hare, Adam Prügel-Bennett
Current approaches for predicting sets from feature vectors ignore the unordered nature of sets and suffer from discontinuity issues as a result.
2 code implementations • ICLR 2020 • Yan Zhang, Jonathon Hare, Adam Prügel-Bennett
Traditional set prediction models can struggle with simple datasets due to an issue we call the responsibility problem.
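A permutation-invariant loss such as the Chamfer distance (one of the losses used in this line of work, alongside Hungarian matching) illustrates why set order must not matter:

```python
import numpy as np

def chamfer(a: np.ndarray, b: np.ndarray) -> float:
    """Permutation-invariant distance between two point sets: each point
    is matched to its nearest neighbour in the other set, so reordering
    the rows of either set leaves the loss unchanged."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return d2.min(1).mean() + d2.min(0).mean()

a = np.array([[0.0, 0.0], [1.0, 1.0]])
b = a[::-1].copy()          # the same set, listed in the opposite order
print(chamfer(a, b))        # 0.0: order does not matter
```

A loss that compared rows position-by-position would penalise the reordered copy, forcing the model to pick an arbitrary output ordering: the "responsibility problem" in miniature.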
no code implementations • ICLR 2019 • Yue Jiao, Jonathon Hare, Adam Prügel-Bennett
We present an extension of a variational auto-encoder that creates semantically rich coupled probabilistic latent representations that capture the semantics of multiple modalities of data.
2 code implementations • ICLR 2019 • Ethan Harris, Mahesan Niranjan, Jonathon Hare
The state of our Hebb-Rosenblatt memory is embedded in STAWM as the weights space of a layer.
2 code implementations • ICLR 2019 • Yan Zhang, Jonathon Hare, Adam Prügel-Bennett
Representations of sets are challenging to learn because operations on sets should be permutation-invariant.
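A minimal DeepSets-style sketch (toy random weights, not the paper's model) shows how sum-pooling yields permutation invariance:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(2, 16))   # toy per-element encoder weights
W_dec = rng.normal(size=(16, 4))   # toy set-level decoder weights

def set_embed(x: np.ndarray) -> np.ndarray:
    """DeepSets-style embedding: encode each element independently,
    sum-pool (a permutation-invariant operation), then decode."""
    h = np.tanh(x @ W_enc)         # (n, 16): one row per set element
    return np.tanh(h.sum(0) @ W_dec)

x = rng.normal(size=(5, 2))
perm = rng.permutation(5)
# Shuffling the elements leaves the embedding unchanged.
print(np.allclose(set_embed(x), set_embed(x[perm])))
```

Because summation commutes, any reordering of the input rows produces an identical embedding; richer pooling schemes trade some of this simplicity for expressiveness.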
2 code implementations • 10 Sep 2018 • Ethan Harris, Matthew Painter, Jonathon Hare
We introduce torchbearer, a model fitting library for PyTorch aimed at researchers working on deep learning or differentiable programming.
1 code implementation • NAACL 2018 • Lucie-Aimée Kaffee, Hady Elsahar, Pavlos Vougiouklis, Christophe Gravier, Frédérique Laforest, Jonathon Hare, Elena Simperl
While Wikipedia exists in 287 languages, its content is unevenly distributed among them.
1 code implementation • ICLR 2018 • Yan Zhang, Jonathon Hare, Adam Prügel-Bennett
Visual Question Answering (VQA) models have struggled with counting objects in natural images so far.
Ranked #32 on Visual Question Answering (VQA) on VQA v2 test-std
1 code implementation • 1 Nov 2017 • Pavlos Vougiouklis, Hady Elsahar, Lucie-Aimée Kaffee, Christophe Gravier, Frédérique Laforest, Jonathon Hare, Elena Simperl
We explore the problem of generating natural language summaries for Semantic Web data.
1 code implementation • COLING 2016 • Pavlos Vougiouklis, Jonathon Hare, Elena Simperl
Our model is based on a Recurrent Neural Network (RNN) that is trained over concatenated sequences of comments, a Convolution Neural Network that is trained over Wikipedia sentences and a formulation that couples the two trained embeddings in a multimodal space.