Search Results for author: Jason Ramapuram

Found 20 papers, 9 papers with code

Lifelong Generative Modeling

1 code implementation ICLR 2018 Jason Ramapuram, Magda Gregorova, Alexandros Kalousis

Lifelong learning is the problem of learning multiple consecutive tasks in a sequential manner, where knowledge gained from previous tasks is retained and used to aid future learning over the lifetime of the learner.

Transfer Learning

A New Benchmark and Progress Toward Improved Weakly Supervised Learning

1 code implementation 30 Jun 2018 Jason Ramapuram, Russ Webb

Knowledge Matters: Importance of Prior Information for Optimization [7], by Gulcehre et al.

Weakly-supervised Learning

Continual Classification Learning Using Generative Models

no code implementations 24 Oct 2018 Frantzeska Lavda, Jason Ramapuram, Magda Gregorova, Alexandros Kalousis

Continual learning is the ability to sequentially learn over time by accommodating new knowledge while retaining previously learned experiences.

Classification Continual Learning +1
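
For readers unfamiliar with the approach, below is a minimal sketch of the generative-replay idea behind this line of work: a generator trained on earlier tasks produces pseudo-samples that are mixed into each new task's batches. Note these are assumed interfaces, not the paper's implementation: `vae.sample`, `vae.elbo_loss`, and the classifier/loader objects are hypothetical placeholders.

```python
# Minimal sketch of generative replay for continual classification.
# Hypothetical placeholders (not the paper's code): vae.sample,
# vae.elbo_loss, and the classifier/loader objects.
import torch
import torch.nn.functional as F

def train_task(classifier, vae, task_loader, opt,
               old_vae=None, old_classifier=None, replay_frac=0.5):
    for x, y in task_loader:
        if old_vae is not None:
            # Replay pseudo-samples from the previous generator, labeled by
            # the previous classifier, so earlier tasks are not forgotten.
            n = int(replay_frac * x.size(0))
            with torch.no_grad():
                x_old = old_vae.sample(n)
                y_old = old_classifier(x_old).argmax(dim=-1)
            x = torch.cat([x, x_old])
            y = torch.cat([y, y_old])
        loss = F.cross_entropy(classifier(x), y) + vae.elbo_loss(x)
        opt.zero_grad(); loss.backward(); opt.step()
```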

Variational Saccading: Efficient Inference for Large Resolution Images

1 code implementation 8 Dec 2018 Jason Ramapuram, Maurits Diephuis, Frantzeska Lavda, Russ Webb, Alexandros Kalousis

Image classification with deep neural networks is typically restricted to images of small dimensionality such as 224 x 224 in ResNet models [24].

General Classification Image Classification +2
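
As a rough illustration of the glimpse-based ("saccading") data flow, the sketch below crops small windows from a large image and accumulates their encodings with a recurrent cell. Unlike the paper, crop locations here are uniform-random rather than inferred with a variational posterior, and `encoder.out_dim` is an assumed attribute.

```python
# Bare-bones hard-attention glimpsing over a high-resolution image.
# The paper infers crop locations variationally; here they are random,
# purely to illustrate the data flow. Assumes H, W > glimpse size.
import torch
import torch.nn as nn

class GlimpseAggregator(nn.Module):
    def __init__(self, encoder, hidden=256, n_glimpses=4, size=32):
        super().__init__()
        self.encoder = encoder                       # maps crop -> (B, out_dim)
        self.rnn = nn.GRUCell(encoder.out_dim, hidden)
        self.n_glimpses, self.size, self.hidden = n_glimpses, size, hidden

    def forward(self, x):                            # x: (B, C, H, W), H/W large
        B, _, H, W = x.shape
        h = x.new_zeros(B, self.hidden)
        for _ in range(self.n_glimpses):
            top = torch.randint(0, H - self.size, (1,)).item()
            left = torch.randint(0, W - self.size, (1,)).item()
            crop = x[:, :, top:top + self.size, left:left + self.size]
            h = self.rnn(self.encoder(crop), h)      # accumulate glimpse evidence
        return h                                     # feed to a classifier head
```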

Improving Discrete Latent Representations With Differentiable Approximation Bridges

no code implementations 9 May 2019 Jason Ramapuram, Russ Webb

Modern neural network training relies on piece-wise (sub-)differentiable functions in order to use backpropagation to update model parameters.

Density Estimation General Classification +3
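
For context, the standard straight-through estimator (STE) below is the usual baseline for backpropagating through a non-differentiable operation; the paper's learned "bridge" networks are an alternative to this identity-gradient trick, which is what is sketched here, not the paper's method itself.

```python
# Standard straight-through estimator (STE) for a hard rounding op:
# non-differentiable forward, identity gradient on the backward pass.
import torch

class RoundSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.round(x)      # non-differentiable forward

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out            # pretend round() was the identity

x = torch.randn(4, requires_grad=True)
RoundSTE.apply(x).sum().backward()
print(x.grad)                      # all ones, despite the hard round()
```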

Self-Supervised MultiModal Versatile Networks

1 code implementation NeurIPS 2020 Jean-Baptiste Alayrac, Adrià Recasens, Rosalia Schneider, Relja Arandjelović, Jason Ramapuram, Jeffrey De Fauw, Lucas Smaira, Sander Dieleman, Andrew Zisserman

In particular, we explore how best to combine the modalities, such that fine-grained representations of the visual and audio modalities can be maintained, whilst also integrating text into a common embedding.

Action Recognition In Videos Audio Classification +2

Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding

2 code implementations ICCV 2021 Mike Roberts, Jason Ramapuram, Anurag Ranjan, Atulit Kumar, Miguel Angel Bautista, Nathan Paczan, Russ Webb, Joshua M. Susskind

To create our dataset, we leverage a large repository of synthetic scenes created by professional artists, and we generate 77,400 images of 461 indoor scenes with detailed per-pixel labels and corresponding ground truth geometry.

Multi-Task Learning Scene Understanding +1

Stochastic Contrastive Learning

no code implementations 1 Oct 2021 Jason Ramapuram, Dan Busbridge, Xavier Suau, Russ Webb

While state-of-the-art contrastive Self-Supervised Learning (SSL) models produce results competitive with their supervised counterparts, they lack the ability to infer latent variables.

Contrastive Learning Regression +1

Evaluating the fairness of fine-tuning strategies in self-supervised learning

no code implementations 1 Oct 2021 Jason Ramapuram, Dan Busbridge, Russ Webb

In this work we examine how fine-tuning impacts the fairness of contrastive Self-Supervised Learning (SSL) models.

Fairness Self-Supervised Learning

Do Self-Supervised and Supervised Methods Learn Similar Visual Representations?

no code implementations 1 Oct 2021 Tom George Grigg, Dan Busbridge, Jason Ramapuram, Russ Webb

Despite the success of a number of recent techniques for visual self-supervised deep learning, there has been limited investigation into the representations that are ultimately learned.

Challenges of Adversarial Image Augmentations

no code implementations NeurIPS Workshop ICBINB 2021 Arno Blaas, Xavier Suau, Jason Ramapuram, Nicholas Apostoloff, Luca Zappella

Image augmentations applied during training are crucial for the generalization performance of image classifiers.

Position Prediction as an Effective Pretraining Strategy

1 code implementation 15 Jul 2022 Shuangfei Zhai, Navdeep Jaitly, Jason Ramapuram, Dan Busbridge, Tatiana Likhomanenko, Joseph Yitan Cheng, Walter Talbott, Chen Huang, Hanlin Goh, Joshua Susskind

This pretraining strategy, which has been used in BERT models in NLP, Wav2Vec models in speech, and recently in MAE models in vision, forces the model to learn about relationships between the content in different parts of the input using autoencoding-related objectives.

Position Speech Recognition +1
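
A minimal sketch of the position-prediction objective described above, under assumed interfaces: patches are embedded without position embeddings, and a head classifies each token's grid index. `patch_embed`, `vit_encoder`, and `pos_head` are hypothetical modules standing in for the paper's architecture.

```python
# Sketch of position prediction as pretraining: embed patches WITHOUT
# position embeddings, then classify each token's grid position.
# patch_embed / vit_encoder / pos_head are hypothetical modules.
import torch
import torch.nn.functional as F

def position_prediction_loss(patches, patch_embed, vit_encoder, pos_head):
    # patches: (B, N, patch_dim) in raster order; N positions to recover.
    B, N, _ = patches.shape
    tokens = patch_embed(patches)          # no positional information added
    feats = vit_encoder(tokens)            # (B, N, D), permutation-equivariant
    logits = pos_head(feats)               # (B, N, N): one class per position
    target = torch.arange(N, device=patches.device).expand(B, N)
    return F.cross_entropy(logits.reshape(B * N, N), target.reshape(B * N))
```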

Stabilizing Transformer Training by Preventing Attention Entropy Collapse

1 code implementation 11 Mar 2023 Shuangfei Zhai, Tatiana Likhomanenko, Etai Littwin, Dan Busbridge, Jason Ramapuram, Yizhe Zhang, Jiatao Gu, Josh Susskind

We show that $\sigma$Reparam provides stability and robustness with respect to the choice of hyperparameters, going so far as enabling training of (a) a Vision Transformer to competitive performance without warmup, weight decay, layer normalization, or adaptive optimizers; (b) deep architectures in machine translation; and (c) speech recognition to competitive performance without warmup and adaptive optimizers.

Automatic Speech Recognition Image Classification +6
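
A simplified sketch of the $\sigma$Reparam idea: each weight matrix is rescaled by a learnable $\gamma$ divided by an online spectral-norm estimate (one power-iteration step per forward pass). This compresses the paper's released implementation and omits engineering details.

```python
# Sketch of sigma-reparameterized linear layer: W_hat = (gamma / sigma(W)) W,
# with sigma(W) the spectral norm estimated by power iteration.
import torch
import torch.nn as nn

class SigmaReparamLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)
        self.gamma = nn.Parameter(torch.ones(()))
        self.register_buffer("u", torch.randn(d_out))   # power-iteration state

    def forward(self, x):
        with torch.no_grad():                            # refresh u, v estimates
            v = self.weight.t() @ self.u
            v = v / (v.norm() + 1e-12)
            self.u = self.weight @ v
            self.u = self.u / (self.u.norm() + 1e-12)
        # sigma = u^T W v; gradients flow through W only.
        sigma = torch.einsum("d,de,e->", self.u, self.weight, v)
        return x @ (self.gamma / sigma * self.weight).t()
```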

DUET: 2D Structured and Approximately Equivariant Representations

1 code implementation 28 Jun 2023 Xavier Suau, Federico Danieli, T. Anderson Keller, Arno Blaas, Chen Huang, Jason Ramapuram, Dan Busbridge, Luca Zappella

We propose 2D strUctured and EquivarianT representations (coined DUET), which are 2D representations organized in a matrix structure, and equivariant with respect to transformations acting on the input data.

Self-Supervised Learning Transfer Learning

The Role of Entropy and Reconstruction in Multi-View Self-Supervised Learning

1 code implementation 20 Jul 2023 Borja Rodríguez-Gálvez, Arno Blaas, Pau Rodríguez, Adam Goliński, Xavier Suau, Jason Ramapuram, Dan Busbridge, Luca Zappella

We consider a different lower bound on the mutual information (MI), consisting of an entropy and a reconstruction term (ER), and analyze the main MVSSL families through its lens.

Self-Supervised Learning
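
The ER bound referenced above, written out. This is the standard entropy-plus-reconstruction lower bound; the notation is assumed, with $Z_1, Z_2$ the two view representations and $q$ any auxiliary reconstruction distribution.

```latex
% Entropy + reconstruction (ER) lower bound on mutual information:
% the conditional entropy is upper-bounded by the cross entropy under q.
I(Z_1; Z_2) = H(Z_1) - H(Z_1 \mid Z_2)
  \geq \underbrace{H(Z_1)}_{\text{entropy}}
     + \underbrace{\mathbb{E}\big[\log q(Z_1 \mid Z_2)\big]}_{\text{reconstruction}}
```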

Poly-View Contrastive Learning

no code implementations 8 Mar 2024 Amitis Shidani, Devon Hjelm, Jason Ramapuram, Russ Webb, Eeshan Gunesh Dhekane, Dan Busbridge

Contrastive learning typically matches pairs of related views among a number of unrelated negative views.

Contrastive Learning Representation Learning
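
For reference, below is the standard two-view InfoNCE loss that "matches pairs of related views among a number of unrelated negative views"; the paper studies generalizations to more than two related views per sample, which this minimal sketch does not cover.

```python
# Reference pairwise InfoNCE loss: z1[i] should match z2[i] against all
# other samples in the batch as negatives (positives on the diagonal).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```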
