1 code implementation • ICLR 2018 • Jason Ramapuram, Magda Gregorova, Alexandros Kalousis
Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner, where knowledge gained from previous tasks is retained and used to aid future learning over the lifetime of the learner.
no code implementations • 19 Apr 2018 • Magda Gregorová, Jason Ramapuram, Alexandros Kalousis, Stéphane Marchand-Maillet
We propose a new method for input variable selection in nonlinear regression.
1 code implementation • 30 Jun 2018 • Jason Ramapuram, Russ Webb
Knowledge Matters: Importance of Prior Information for Optimization [7], by Gulcehre et al.
no code implementations • 24 Oct 2018 • Frantzeska Lavda, Jason Ramapuram, Magda Gregorova, Alexandros Kalousis
Continual learning is the ability to sequentially learn over time by accommodating new knowledge while retaining previously learned experiences.
1 code implementation • 8 Dec 2018 • Jason Ramapuram, Maurits Diephuis, Frantzeska Lavda, Russ Webb, Alexandros Kalousis
Image classification with deep neural networks is typically restricted to images of small dimensionality, such as 224 x 224 in ResNet models [24].
no code implementations • 9 May 2019 • Jason Ramapuram, Russ Webb
Modern neural network training relies on piecewise (sub-)differentiable functions in order to use backpropagation to update model parameters.
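A canonical example of such a piecewise (sub-)differentiable function is ReLU, which is not differentiable at zero; autodiff frameworks conventionally pick a subgradient there. A minimal illustrative sketch (not from the paper):

```python
import numpy as np

def relu(x):
    """Piecewise-linear activation: identity for x > 0, zero otherwise."""
    return np.maximum(x, 0.0)

def relu_subgradient(x):
    """ReLU is not differentiable at x = 0; backpropagation implementations
    conventionally use the subgradient 0 at that point."""
    return (x > 0.0).astype(float)

x = np.array([-1.0, 0.0, 2.0])
grad = relu_subgradient(x)  # gradient used during the backward pass
```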
1 code implementation • NeurIPS 2020 • Jean-Baptiste Alayrac, Adrià Recasens, Rosalia Schneider, Relja Arandjelović, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, Andrew Zisserman
In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and audio modalities can be maintained, whilst also integrating text into a common embedding.
2 code implementations • ICCV 2021 • Mike Roberts, Jason Ramapuram, Anurag Ranjan, Atulit Kumar, Miguel Angel Bautista, Nathan Paczan, Russ Webb, Joshua M. Susskind
To create our dataset, we leverage a large repository of synthetic scenes created by professional artists, and we generate 77,400 images of 461 indoor scenes with detailed per-pixel labels and corresponding ground truth geometry.
no code implementations • ICLR 2021 • Jason Ramapuram, Yan Wu, Alexandros Kalousis
Episodic and semantic memory are critical components of the human memory model.
no code implementations • 1 Oct 2021 • Jason Ramapuram, Dan Busbridge, Xavier Suau, Russ Webb
While state-of-the-art contrastive Self-Supervised Learning (SSL) models produce results competitive with their supervised counterparts, they lack the ability to infer latent variables.
no code implementations • 1 Oct 2021 • Jason Ramapuram, Dan Busbridge, Russ Webb
In this work we examine how fine-tuning impacts the fairness of contrastive Self-Supervised Learning (SSL) models.
no code implementations • 1 Oct 2021 • Tom George Grigg, Dan Busbridge, Jason Ramapuram, Russ Webb
Despite the success of a number of recent techniques for visual self-supervised deep learning, there has been limited investigation into the representations that are ultimately learned.
no code implementations • NeurIPS Workshop ICBINB 2021 • Arno Blaas, Xavier Suau, Jason Ramapuram, Nicholas Apostoloff, Luca Zappella
Image augmentations applied during training are crucial for the generalization performance of image classifiers.
1 code implementation • 15 Jul 2022 • Shuangfei Zhai, Navdeep Jaitly, Jason Ramapuram, Dan Busbridge, Tatiana Likhomanenko, Joseph Yitan Cheng, Walter Talbott, Chen Huang, Hanlin Goh, Joshua Susskind
This pretraining strategy, which has been used in BERT models in NLP, Wav2Vec models in speech, and, recently, MAE models in vision, forces the model to learn about relationships between the content in different parts of the input using autoencoding-related objectives.
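As a rough illustration of this masked-prediction family of objectives (a generic sketch, not the paper's method; the toy "model" here is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_loss(x, mask_ratio=0.5):
    """Hide a random subset of input positions, predict them from the
    visible ones, and score the loss only on the masked positions
    (the convention in BERT- and MAE-style pretraining)."""
    mask = rng.random(x.shape) < mask_ratio   # True = hidden from the model
    x_visible = np.where(mask, 0.0, x)        # masked-out view of the input
    # A real model would map x_visible -> predictions; we use a toy linear map.
    W = np.eye(x.shape[-1]) * 0.9             # stand-in for a learned network
    pred = x_visible @ W
    if not mask.any():
        return 0.0
    return float(np.mean((pred[mask] - x[mask]) ** 2))

x = rng.normal(size=(4, 8))
loss = masked_reconstruction_loss(x)
```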
no code implementations • 28 Oct 2022 • Andrius Ovsianas, Jason Ramapuram, Dan Busbridge, Eeshan Gunesh Dhekane, Russ Webb
Self-supervised representation learning (SSL) methods provide an effective label-free initial condition for fine-tuning on downstream tasks.
1 code implementation • 11 Mar 2023 • Shuangfei Zhai, Tatiana Likhomanenko, Etai Littwin, Dan Busbridge, Jason Ramapuram, Yizhe Zhang, Jiatao Gu, Josh Susskind
We show that $\sigma$Reparam provides stability and robustness with respect to the choice of hyperparameters, going so far as enabling the training of (a) a Vision Transformer to competitive performance without warmup, weight decay, layer normalization, or adaptive optimizers; (b) deep architectures in machine translation; and (c) speech recognition to competitive performance without warmup and adaptive optimizers.
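The core of $\sigma$Reparam is reparameterizing each weight matrix by its spectral norm, $\hat{W} = (\gamma / \sigma(W))\, W$, with a learnable scalar $\gamma$. A minimal NumPy sketch, assuming power iteration for the spectral norm (not the authors' implementation; in practice $\gamma$ is trained and $W$ is a network parameter):

```python
import numpy as np

def spectral_norm(W, n_iters=20):
    """Estimate the largest singular value of W by power iteration."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)

def sigma_reparam(W, gamma=1.0):
    """Reparameterize W as (gamma / sigma(W)) * W, so its spectral norm
    is directly controlled by the scalar gamma."""
    return (gamma / spectral_norm(W)) * W

W = np.random.default_rng(1).normal(size=(4, 4))
W_hat = sigma_reparam(W)  # spectral norm of W_hat is ~1.0
```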
1 code implementation • 28 Jun 2023 • Xavier Suau, Federico Danieli, T. Anderson Keller, Arno Blaas, Chen Huang, Jason Ramapuram, Dan Busbridge, Luca Zappella
We propose 2D strUctured and EquivarianT representations (coined DUET), which are 2D representations organized in a matrix structure, and equivariant with respect to transformations acting on the input data.
1 code implementation • 20 Jul 2023 • Borja Rodríguez-Gálvez, Arno Blaas, Pau Rodríguez, Adam Goliński, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella
We consider a different lower bound on the MI consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through its lens.
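One standard way to lower-bound mutual information by an entropy term plus a reconstruction term is the Barber–Agakov variational bound (shown here for orientation; the paper's exact ER formulation may differ), where $q$ is a variational approximation to the true conditional:

```latex
I(Z_1; Z_2) = H(Z_1) - H(Z_1 \mid Z_2)
\;\geq\; \underbrace{H(Z_1)}_{\text{entropy}}
\;+\; \underbrace{\mathbb{E}\!\left[\log q(Z_1 \mid Z_2)\right]}_{\text{reconstruction}}
```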
no code implementations • 6 Dec 2023 • Polina Turishcheva, Jason Ramapuram, Sinead Williamson, Dan Busbridge, Eeshan Dhekane, Russ Webb
Understanding model uncertainty is important for many applications.
no code implementations • 8 Mar 2024 • Amitis Shidani, Devon Hjelm, Jason Ramapuram, Russ Webb, Eeshan Gunesh Dhekane, Dan Busbridge
Contrastive learning typically matches pairs of related views among a number of unrelated negative views.
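This matching of one positive pair against many negatives is commonly scored with an InfoNCE-style loss; a minimal generic sketch (illustrative, not this paper's formulation):

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: the anchor should score higher
    against its positive view than against each negative view."""
    def sim(a, b):  # cosine similarity
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / temperature
    # Softmax cross-entropy with the positive pair at index 0.
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))

anchor = np.array([1.0, 0.0])
good_positive = np.array([1.0, 0.0])      # aligned view -> low loss
negatives = [np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
loss = info_nce(anchor, good_positive, negatives)
```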