Search Results for author: Jonathon Hare

Found 34 papers, 19 papers with code

Physically Embodied Deep Image Optimisation

no code implementations 20 Jan 2022 Daniela Mihai, Jonathon Hare

Physical sketches are created by learning programs to control a drawing robot.

Shared Visual Representations of Drawing for Communication: How do different biases affect human interpretability and intent?

no code implementations NeurIPS Workshop SVRHM 2021 Daniela Mihai, Jonathon Hare

We present an investigation into how representational losses can affect the drawings produced by artificial agents playing a communication game.

Learning Division with Neural Arithmetic Logic Modules

1 code implementation NeurIPS 2021 Bhumika Mistry, Katayoun Farrahi, Jonathon Hare

To achieve systematic generalisation, it first makes sense to master simple tasks such as arithmetic.
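A minimal sketch of the Neural Accumulator (NAC, Trask et al.), one of the arithmetic modules this line of work builds on. The parameterisation shown is the standard NAC; the example values are illustrative and not taken from this paper's code:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def nac(x, w_hat, m_hat):
    """Neural Accumulator: the effective weights tanh(w_hat) * sigmoid(m_hat)
    are pushed towards {-1, 0, 1}, biasing the layer towards addition and
    subtraction that extrapolate beyond the training range."""
    w = np.tanh(w_hat) * sigmoid(m_hat)
    return x @ w

# with saturated parameters the layer computes (almost) exact sums
x = np.array([[2.0, 3.0]])
w_hat = np.full((2, 1), 10.0)   # tanh(10) is very close to 1
m_hat = np.full((2, 1), 10.0)   # sigmoid(10) is very close to 1
out = nac(x, w_hat, m_hat)      # close to [[5.0]]
```

Division is harder for such modules because it requires weights near -1 inside a log-space unit, which is part of what the paper investigates.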

Orthogonalising gradients to speedup neural network optimisation

no code implementations 29 Sep 2021 Mark Tuddenham, Adam Prugel-Bennett, Jonathon Hare

The optimisation of neural networks can be sped up by orthogonalising the gradients before the optimisation step, ensuring the diversification of the learned representations.
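One simple way to orthogonalise a weight-matrix gradient is via QR decomposition. This is a hedged sketch of the idea; the paper's exact orthogonalisation procedure may differ:

```python
import numpy as np

def orthogonalise(grad):
    """Replace a gradient matrix with a semi-orthogonal matrix spanning
    the same column space, via QR decomposition (illustrative sketch)."""
    q, r = np.linalg.qr(grad)
    # fix the column signs from R so each column keeps its direction
    return q * np.sign(np.diag(r))

g = np.random.randn(8, 4)    # a fake gradient for a 8x4 weight matrix
og = orthogonalise(g)
# the columns of og are orthonormal: og.T @ og is the identity
```

The orthogonalised gradient would then be passed to the usual optimiser step in place of the raw gradient.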

GhostShiftAddNet: More Features from Energy-Efficient Operations

2 code implementations 20 Sep 2021 Jia Bi, Jonathon Hare, Geoff V. Merrett

When compared to GhostNet, inference latency on the Jetson Nano is improved by 1.3x and 2x on the GPU and CPU respectively.

Image Classification

Language Models as Zero-shot Visual Semantic Learners

no code implementations 26 Jul 2021 Yue Jiao, Jonathon Hare, Adam Prügel-Bennett

We find that contextual representations in language models outperform static word embeddings when the compositional chain of the object is short.

Object Recognition, Word Embeddings, +1

What Remains of Visual Semantic Embeddings

no code implementations 26 Jul 2021 Yue Jiao, Jonathon Hare, Adam Prügel-Bennett

Although different paradigms of visual semantic embedding models are designed to align visual features and distributed word representations, it is unclear to what extent current ZSL models encode semantic information from distributed word representations.

Contrastive Learning, Zero-Shot Learning

Dynamic Transformer for Efficient Machine Translation on Embedded Devices

no code implementations 17 Jul 2021 Hishan Parry, Lei Xun, Amin Sabet, Jia Bi, Jonathon Hare, Geoff V. Merrett

The new reduced design space results in a BLEU score increase of approximately 1% for sub-optimal models from the original design space, with a wide range for performance scaling between 0.356s - 1.526s for the GPU and 2.9s - 7.31s for the CPU.

Machine Translation, Translation

Temporal Early Exits for Efficient Video Object Detection

no code implementations 21 Jun 2021 Amin Sabet, Jonathon Hare, Bashir Al-Hashimi, Geoff V. Merrett

In this paper, we propose temporal early exits to reduce the computational complexity of per-frame video object detection.

Object Detection, Optical Flow Estimation, +1

Learning to Draw: Emergent Communication through Sketching

1 code implementation NeurIPS 2021 Daniela Mihai, Jonathon Hare

Evidence that visual communication preceded written language and provided a basis for it goes back to prehistory, in forms such as cave and rock paintings depicting traces of our distant ancestors.


Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms

1 code implementation 8 May 2021 Wei Lou, Lei Xun, Amin Sabet, Jia Bi, Jonathon Hare, Geoff V. Merrett

However, the training process of such dynamic DNNs can be costly, since platform-aware models of different deployment scenarios must be retrained to become dynamic.

Differentiable Drawing and Sketching

1 code implementation 30 Mar 2021 Daniela Mihai, Jonathon Hare

We present a bottom-up differentiable relaxation of the process of drawing points, lines and curves into a pixel raster.

Drawing Pictures
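A toy illustration of differentiable rasterisation: each pixel's intensity is a smooth function of its distance to a point, so gradients flow back to the point's coordinates. The Gaussian relaxation below is an assumption for illustration, not the paper's exact formulation:

```python
import numpy as np

def raster_point(x, y, size=16, sigma2=0.5):
    """Differentiably deposit 'ink' for one point on a size x size raster:
    intensity decays smoothly with squared distance to (x, y), so the
    image is differentiable with respect to the point's coordinates."""
    ys, xs = np.mgrid[0:size, 0:size]
    d2 = (xs - x) ** 2 + (ys - y) ** 2   # squared distance from every pixel
    return np.exp(-d2 / sigma2)          # soft, smooth ink deposit

img = raster_point(7.3, 4.8)
# the brightest pixel lies at the nearest integer coordinates, (x=7, y=5)
```

Lines and curves can be handled the same way by using the distance to a segment or to a sampled polyline instead of to a single point.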

The emergence of visual semantics through communication games

no code implementations 25 Jan 2021 Daniela Mihai, Jonathon Hare

The majority of work has focused on using fixed, pretrained image feature extraction networks which potentially bias the information the agents learn to communicate.

A Primer for Neural Arithmetic Logic Modules

2 code implementations 23 Jan 2021 Bhumika Mistry, Katayoun Farrahi, Jonathon Hare

Neural Arithmetic Logic Modules have become a growing area of interest, though they remain a niche field.

Anatomically Constrained ResNets Exhibit Opponent Receptive Fields; So What?

no code implementations NeurIPS Workshop SVRHM 2020 Ethan Harris, Daniela Mihai, Jonathon Hare

Primate visual systems are well known to exhibit varying degrees of bottlenecks in the early visual pathway.

How Convolutional Neural Network Architecture Biases Learned Opponency and Colour Tuning

1 code implementation 6 Oct 2020 Ethan Harris, Daniela Mihai, Jonathon Hare

The colour tuning data can further be used to form a rich understanding of how colour is encoded by a network.

Linear Disentangled Representations and Unsupervised Action Estimation

no code implementations NeurIPS 2020 Matthew Painter, Jonathon Hare, Adam Prugel-Bennett

In this work we empirically show that linear disentangled representations are not generally present in standard VAE models and that they instead require altering the loss landscape to induce them.


FMix: Enhancing Mixed Sample Data Augmentation

5 code implementations 27 Feb 2020 Ethan Harris, Antonia Marcu, Matthew Painter, Mahesan Niranjan, Adam Prügel-Bennett, Jonathon Hare

Finally, we show that a consequence of the difference between interpolating MSDA such as MixUp and masking MSDA such as FMix is that the two can be combined to improve performance even further.

Data Augmentation, Image Classification
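The distinction between interpolating and masking mixed sample data augmentation (MSDA) can be sketched as follows. The rectangular mask is a stand-in: FMix actually samples binary masks by thresholding low-frequency Fourier noise.

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.random((8, 8)), rng.random((8, 8))
lam = 0.75   # mixing ratio

# interpolating MSDA (MixUp): a convex combination of whole images
mixup = lam * x1 + (1 - lam) * x2

# masking MSDA (FMix-style): a binary mask takes each pixel wholly from
# one image or the other; here a rectangle covering a lam-fraction of
# the area stands in for FMix's Fourier-sampled mask
mask = np.zeros((8, 8))
mask[:6, :] = 1.0            # 6/8 = 0.75 of the area, matching lam
fmix_like = mask * x1 + (1 - mask) * x2
```

Both produce valid mixed samples; the paper's point is that the two kinds of mixing behave differently and can be combined for further gains.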

Avoiding hashing and encouraging visual semantics in referential emergent language games

no code implementations 13 Nov 2019 Daniela Mihai, Jonathon Hare

There has been an increasing interest in the area of emergent communication between agents which learn to play referential signalling games with realistic images.

Spatial and Colour Opponency in Anatomically Constrained Deep Networks

1 code implementation 14 Oct 2019 Ethan Harris, Daniela Mihai, Jonathon Hare

Colour vision has long fascinated scientists, who have sought to understand both the physiology of the mechanics of colour vision and the psychophysics of colour perception.

Imagining the Latent Space of a Variational Auto-Encoders

no code implementations 25 Sep 2019 Zezhen Zeng, Jonathon Hare, Adam Prügel-Bennett

Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset.

Deep Set Prediction Networks

1 code implementation NeurIPS 2019 Yan Zhang, Jonathon Hare, Adam Prügel-Bennett

Current approaches for predicting sets from feature vectors ignore the unordered nature of sets and suffer from discontinuity issues as a result.
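The unordered nature of sets is typically respected with a permutation-invariant loss; a brute-force sketch is shown below (illustrative only: DSPN itself decodes sets by running gradient descent through a learned set encoder rather than by enumerating matchings):

```python
import itertools
import numpy as np

def set_loss(pred, target):
    """Permutation-invariant set loss: take the minimum mean squared
    error over all orderings of the target set, so that predicting the
    right elements in any order incurs no penalty. Brute force is only
    feasible for small sets; real models use Hungarian matching or
    Chamfer distance instead."""
    return min(((pred - np.array(p)) ** 2).mean()
               for p in itertools.permutations(target))

pred = np.array([[0.0, 1.0], [2.0, 3.0]])
target = np.array([[2.0, 3.0], [0.0, 1.0]])   # same set, other order
loss = set_loss(pred, target)                  # zero: order is ignored
```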

FSPool: Learning Set Representations with Featurewise Sort Pooling

2 code implementations ICLR 2020 Yan Zhang, Jonathon Hare, Adam Prügel-Bennett

Traditional set prediction models can struggle with simple datasets due to an issue we call the responsibility problem.

General Classification
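A sketch of the featurewise sort pooling idea. The real FSPool learns a continuous weight function over sorted positions and relaxes the sort to stay differentiable for variable-size sets; the uniform weights here are purely illustrative:

```python
import numpy as np

def fspool(x, w):
    """Featurewise sort pooling sketch: x holds a set of n elements with
    d features (n x d). Each feature column is sorted across the set,
    then combined with a weight per sorted position (w, n x d, learned
    in the real model). Sorting discards element order, making the
    pooled vector permutation-invariant."""
    sorted_x = np.sort(x, axis=0)[::-1]   # sort each feature, descending
    return (sorted_x * w).sum(axis=0)     # weighted sum over positions

x = np.array([[1.0, 4.0], [3.0, 2.0], [2.0, 6.0]])
w = np.ones_like(x) / len(x)     # uniform weights reduce to mean pooling
pooled = fspool(x, w)            # [2.0, 4.0], the featurewise means
```

Shuffling the rows of `x` leaves `pooled` unchanged, which is the property the paper exploits.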

Probabilistic Semantic Embedding

no code implementations ICLR 2019 Yue Jiao, Jonathon Hare, Adam Prügel-Bennett

We present an extension of a variational auto-encoder that creates semantically rich coupled probabilistic latent representations that capture the semantics of multiple modalities of data.

General Classification, Image Generation

Torchbearer: A Model Fitting Library for PyTorch

2 code implementations 10 Sep 2018 Ethan Harris, Matthew Painter, Jonathon Hare

We introduce torchbearer, a model fitting library for PyTorch aimed at researchers working on deep learning or differentiable programming.

Data Visualization

A Neural Network Approach for Knowledge-Driven Response Generation

1 code implementation COLING 2016 Pavlos Vougiouklis, Jonathon Hare, Elena Simperl

Our model is based on a Recurrent Neural Network (RNN) that is trained over concatenated sequences of comments, a Convolution Neural Network that is trained over Wikipedia sentences and a formulation that couples the two trained embeddings in a multimodal space.

Response Generation
