no code implementations • 8 Mar 2024 • Amitis Shidani, Devon Hjelm, Jason Ramapuram, Russ Webb, Eeshan Gunesh Dhekane, Dan Busbridge
Contrastive learning typically matches pairs of related views among a number of unrelated negative views.
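As background for this line of work, the standard contrastive objective it refers to (an InfoNCE-style loss, sketched here in pure Python for illustration — not the paper's own formulation) scores one related pair against a set of unrelated negatives:

```python
import math

def infonce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for a single anchor: pull the
    related (positive) view close and push unrelated (negative) views
    away via a softmax over cosine similarities."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def cosine(u, v):
        return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

    # Temperature-scaled similarities: positive first, then each negative.
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]

    # Cross-entropy with the positive pair as the "correct class" (index 0).
    log_denominator = math.log(sum(math.exp(l) for l in logits))
    return -(logits[0] - log_denominator)

# The loss shrinks as the anchor aligns with its related (positive) view.
aligned = infonce_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0], [-1.0, 0.0]])
misaligned = infonce_loss([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0], [-1.0, 0.0]])
```

The loss is minimized when the anchor's similarity to its positive view dominates its similarity to every negative.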
no code implementations • 6 Dec 2023 • Polina Turishcheva, Jason Ramapuram, Sinead Williamson, Dan Busbridge, Eeshan Dhekane, Russ Webb
Understanding model uncertainty is important for many applications.
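One common proxy for model uncertainty — sketched here generically, not as this paper's specific method — is disagreement across an ensemble of predictors: high variance between members signals low confidence.

```python
from statistics import mean, pvariance

def ensemble_uncertainty(member_predictions):
    """Simple predictive-uncertainty proxy: the ensemble mean prediction
    together with the population variance across members (disagreement)."""
    return mean(member_predictions), pvariance(member_predictions)

# Members that agree imply low uncertainty; members that disagree, high.
agree_mean, agree_var = ensemble_uncertainty([0.70, 0.71, 0.69])
disagree_mean, disagree_var = ensemble_uncertainty([0.10, 0.90, 0.50])
```

In practice the member predictions would come from independently trained models (or sampled weights); the variance is then a cheap, application-agnostic uncertainty signal.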
no code implementations • 28 Oct 2022 • Andrius Ovsianas, Jason Ramapuram, Dan Busbridge, Eeshan Gunesh Dhekane, Russ Webb
Self-supervised representation learning (SSL) methods provide an effective label-free initial condition for fine-tuning on downstream tasks.
no code implementations • 1 Oct 2021 • Tom George Grigg, Dan Busbridge, Jason Ramapuram, Russ Webb
Despite the success of a number of recent techniques for visual self-supervised deep learning, there has been limited investigation into the representations that are ultimately learned.
no code implementations • 1 Oct 2021 • Jason Ramapuram, Dan Busbridge, Xavier Suau, Russ Webb
While state-of-the-art contrastive Self-Supervised Learning (SSL) models produce results competitive with their supervised counterparts, they lack the ability to infer latent variables.
no code implementations • 1 Oct 2021 • Jason Ramapuram, Dan Busbridge, Russ Webb
In this work we examine how fine-tuning impacts the fairness of contrastive Self-Supervised Learning (SSL) models.
2 code implementations • ICCV 2021 • Mike Roberts, Jason Ramapuram, Anurag Ranjan, Atulit Kumar, Miguel Angel Bautista, Nathan Paczan, Russ Webb, Joshua M. Susskind
To create our dataset, we leverage a large repository of synthetic scenes created by professional artists, and we generate 77,400 images of 461 indoor scenes with detailed per-pixel labels and corresponding ground truth geometry.
no code implementations • 18 Dec 2019 • Lionel Blondé, Yichuan Charlie Tang, Jian Zhang, Russ Webb
In this work, we introduce a new method for imitation learning from video demonstrations.
no code implementations • 9 May 2019 • Jason Ramapuram, Russ Webb
Modern neural network training relies on piece-wise (sub-)differentiable functions in order to use backpropagation to update model parameters.
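The canonical example of such a piece-wise (sub-)differentiable function is the ReLU activation; a minimal sketch (illustrative background, not the paper's method) of its forward value and the sub-gradient backpropagation uses:

```python
def relu(x):
    """Piece-wise linear activation: differentiable everywhere
    except at the kink x == 0."""
    return x if x > 0.0 else 0.0

def relu_subgradient(x):
    """Sub-gradient used by backpropagation: 1 on the positive piece,
    0 on the non-positive piece (the kink at 0 is assigned 0 by convention)."""
    return 1.0 if x > 0.0 else 0.0

# Chain rule through the activation: dL/dx = dL/dy * dy/dx.
upstream_grad = 2.0
grad_at_3 = upstream_grad * relu_subgradient(3.0)        # positive piece
grad_at_minus_1 = upstream_grad * relu_subgradient(-1.0)  # flat piece
```

Backpropagation updates parameters using exactly these piece-wise derivatives, with an arbitrary convention chosen at the non-differentiable points.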
no code implementations • 2 Apr 2019 • Katherine Metcalf, Barry-John Theobald, Garrett Weinberg, Robert Lee, Ing-Marie Jonsson, Russ Webb, Nicholas Apostoloff
We describe experiments towards building a conversational digital assistant that considers the preferred conversational style of the user.
1 code implementation • 8 Dec 2018 • Jason Ramapuram, Maurits Diephuis, Frantzeska Lavda, Russ Webb, Alexandros Kalousis
Image classification with deep neural networks is typically restricted to images of small dimensionality such as 224 x 224 in ResNet models [24].
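One way to side-step this input-size restriction — sketched here only in outline, not as the paper's exact model — is to process a sequence of small fixed-size crops ("glimpses") of the full-resolution image rather than downsampling the whole image:

```python
def glimpse(image, top, left, size):
    """Extract a size x size crop from a 2-D image (list of rows), so a
    model built for small fixed-size inputs can attend to one region
    of a much larger image at a time."""
    return [row[left:left + size] for row in image[top:top + size]]

# A toy 6x6 "image" with a distinctive 2x2 patch in the lower-right corner.
image = [[0] * 6 for _ in range(6)]
image[4][4] = image[4][5] = image[5][4] = image[5][5] = 1

crop = glimpse(image, top=4, left=4, size=2)
```

A sequence of such crops, with learned locations, lets a small-input network cover a high-resolution image without resizing away fine detail.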
1 code implementation • 30 Jun 2018 • Jason Ramapuram, Russ Webb
Knowledge Matters: Importance of Prior Information for Optimization [7], by Gulcehre et al.
9 code implementations • CVPR 2017 • Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Josh Susskind, Wenda Wang, Russ Webb
With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations.
Ranked #3 on Image-to-Image Translation on Cityscapes Labels-to-Photo (Per-class Accuracy metric)