1 code implementation • 23 Sep 2024 • Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew M. Saxe
Our results shed light on the difficulty of module specialization, what is required for modules to successfully specialize, and the necessity of modular architectures to achieve systematicity.
no code implementations • 27 Feb 2024 • David Torpey, Lawrence Pratt, Richard Klein
Additionally, we provide a large-scale unlabelled EL image dataset of $22000$ images, and a $642$-image labelled semantic segmentation EL dataset, for further research in developing self- and semi-supervised training techniques in this domain.
no code implementations • 23 Feb 2024 • David Torpey, Richard Klein
Often, applications of self-supervised learning to 3D medical data opt to use 3D variants of successful 2D network architectures.
no code implementations • 14 Feb 2024 • David Torpey, Richard Klein
The standard approach to modern self-supervised learning is to generate random views through data augmentations and minimise a loss computed from the representations of these views.
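For reference, the two-view pipeline described here is commonly implemented along the following lines. This is a minimal sketch, assuming a generic `encoder` network and an `augment` function, not the exact objective used in this paper.

```python
import torch
import torch.nn.functional as F

def two_view_contrastive_loss(encoder, augment, batch, temperature=0.5):
    # Generate two random views of the same batch and embed them.
    z1 = F.normalize(encoder(augment(batch)), dim=1)
    z2 = F.normalize(encoder(augment(batch)), dim=1)
    # Cross-view similarity matrix; matching indices are the positive pairs.
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)
```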
2 code implementations • NeurIPS 2023 • Michael Beukman, Devon Jarvis, Richard Klein, Steven James, Benjamin Rosman
To this end, we introduce a neural network architecture, the Decision Adapter, which generates the weights of an adapter module and conditions the behaviour of an agent on the context information.
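A minimal sketch of the general idea, a hypernetwork that generates adapter weights from a context vector, follows. The class name, dimensions and layer sizes here are illustrative assumptions; the published architecture is in the released code.

```python
import torch
import torch.nn as nn

class ContextAdapter(nn.Module):
    # Sketch: a small hypernetwork maps a single context vector to the
    # flattened weights and bias of a linear adapter applied to features.
    def __init__(self, ctx_dim, feat_dim, hidden=64):
        super().__init__()
        self.hyper = nn.Sequential(
            nn.Linear(ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim * feat_dim + feat_dim),
        )
        self.feat_dim = feat_dim

    def forward(self, features, context):
        params = self.hyper(context)  # context has shape (ctx_dim,)
        W = params[: self.feat_dim ** 2].view(self.feat_dim, self.feat_dim)
        b = params[self.feat_dim ** 2 :]
        return features @ W.t() + b  # context-conditioned transformation
```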
no code implementations • 27 Jul 2022 • David Torpey, Richard Klein
It is known that representations from self-supervised pre-training can perform on par with, and often better than, representations from fully-supervised pre-training on various downstream tasks.
no code implementations • 12 May 2022 • Nathan Michlo, Devon Jarvis, Richard Klein, Steven James
In this work, we investigate the properties of data that cause popular representation learning approaches to fail.
1 code implementation • 27 Feb 2022 • Nathan Michlo, Richard Klein, Steven James
Our findings demonstrate the subjective nature of disentanglement and the importance of considering the interaction between the ground-truth factors, the data and, notably, the reconstruction loss, which is under-recognised in the literature.
no code implementations • 3 Nov 2021 • David Poulton, Richard Klein
This research presents the idea of fusing activity information into existing Pose Estimation architectures to enhance their predictive ability.
1 code implementation • 22 Oct 2021 • Jared Harris-Dewey, Richard Klein
We give an overview of the different rendering methods and demonstrate that using a Generative Adversarial Network (GAN) for Global Illumination (GI) yields a rendered image of superior quality to a rasterised image.
no code implementations • 29 Sep 2021 • Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew M. Saxe
We introduce a minimal space of datasets with systematic and non-systematic features in both the input and output.
no code implementations • 14 Jun 2021 • Julien Nyambal, Richard Klein
These bounding box coordinates are extracted from a frame of the parking-lot video and saved in JSON format, to be used later by the system for sequential prediction on each parking spot.
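For illustration, such a file might look like the following. The key names and coordinate convention are hypothetical, as the excerpt does not specify the exact schema.

```python
import json

# Hypothetical layout: one entry per parking spot, keyed by spot ID,
# with pixel-coordinate corners of its bounding box in the reference frame.
spots = {
    "spot_01": {"bbox": [412, 120, 508, 190]},  # [x_min, y_min, x_max, y_max]
    "spot_02": {"bbox": [520, 118, 615, 188]},
}

with open("parking_spots.json", "w") as f:
    json.dump(spots, f, indent=2)
```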
no code implementations • 28 May 2021 • Pierce Burke, Richard Klein
The naive approach to assigning labels is to adopt a majority vote; however, in the context of data labelling this is not always ideal, as data labellers are not equally reliable.
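A common alternative, sketched below, weights each annotator's vote by an estimated reliability. The function and reliability values here are illustrative assumptions, not the paper's aggregation method.

```python
from collections import defaultdict

def weighted_vote(labels, reliability):
    # Aggregate noisy annotator labels, weighting each vote by the
    # annotator's estimated reliability instead of counting votes equally.
    scores = defaultdict(float)
    for annotator, label in labels.items():
        scores[label] += reliability.get(annotator, 0.5)
    return max(scores, key=scores.get)

# A plain majority vote is the special case where every reliability is equal.
label = weighted_vote({"a1": "cat", "a2": "dog", "a3": "cat"},
                      {"a1": 0.9, "a2": 0.6, "a3": 0.4})  # -> "cat"
```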
no code implementations • 28 May 2021 • Richard Klein, Turgay Celik
To perform contingent teaching and be responsive to students' needs during class, lecturers must be able to quickly assess the state of their audience.
no code implementations • 12 Jan 2021 • David Torpey, Richard Klein
We show how the inclusion of this module to regress the parameters of an affine transformation or homography, in addition to the original contrastive objective, improves both performance and learning speed.
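A rough sketch of how such an auxiliary head can be attached, assuming a 2D affine transform with six parameters and simple concatenation of the two view representations; the names and weighting here are hypothetical, not the paper's exact design.

```python
import torch
import torch.nn as nn

class TransformRegressor(nn.Module):
    # Given the representations of two views, regress the parameters of
    # the transformation (here, a 6-parameter 2D affine map) relating them.
    def __init__(self, feat_dim, n_params=6):
        super().__init__()
        self.head = nn.Linear(2 * feat_dim, n_params)

    def forward(self, z1, z2):
        return self.head(torch.cat([z1, z2], dim=1))

def total_loss(contrastive, pred_params, true_params, weight=0.1):
    # Original contrastive objective plus the auxiliary regression term.
    return contrastive + weight * nn.functional.mse_loss(pred_params, true_params)
```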
1 code implementation • 14 Jan 2020 • Kimessha Paupamah, Steven James, Richard Klein
Deep neural networks are typically too computationally expensive to run in real-time on consumer-grade hardware and low-powered devices.
Ranked #1 on Neural Network Compression on CIFAR-10
no code implementations • 25 Sep 2019 • Devon Jarvis, Richard Klein, Benjamin Rosman
Whether the width of the basin of attraction surrounding a minimum in parameter space is an effective indicator of the generalizability of a model parametrization remains a point of contention in the training of artificial neural networks. The dominant view is that wider basins in the loss landscape reflect better generalization by the trained model.