Search Results for author: Aaron Sarna

Found 9 papers, 8 papers with code

A simple, efficient and scalable contrastive masked autoencoder for learning visual representations

1 code implementation 30 Oct 2022 Shlok Mishra, Joshua Robinson, Huiwen Chang, David Jacobs, Aaron Sarna, Aaron Maschinot, Dilip Krishnan

Our framework is a minimal and conceptually clean synthesis of (C) contrastive learning, (A) masked autoencoders, and (N) the noise prediction approach used in diffusion models.
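A hedged sketch of how such a combined objective could be assembled from its three parts — a contrastive term, a masked-reconstruction term, and a noise-prediction term. All function names, array shapes, and weights below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    # symmetric-style InfoNCE between two views (contrastive "C" term)
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    # cross-entropy where the matching index is the target
    log_prob = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def masked_recon_loss(recon, patches, mask):
    # MSE on masked patches only (masked-autoencoder "A" term)
    return ((recon - patches) ** 2 * mask[..., None]).sum() / max(mask.sum(), 1)

def noise_pred_loss(pred_noise, noise):
    # regress the additive noise, diffusion-style ("N" term)
    return ((pred_noise - noise) ** 2).mean()

def can_style_objective(z1, z2, recon, patches, mask, pred_noise, noise,
                        lams=(1.0, 1.0, 1.0)):
    # hypothetical equal weighting; the paper's weighting may differ
    lc, la, ln = lams
    return (lc * info_nce(z1, z2)
            + la * masked_recon_loss(recon, patches, mask)
            + ln * noise_pred_loss(pred_noise, noise))
```

Each term is nonnegative here, so driving reconstruction and noise-prediction error to zero strictly lowers the combined objective.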

Contrastive Learning, Self-Supervised Learning, +1

Simplified Transfer Learning for Chest Radiography Models Using Less Data

1 code implementation Radiology 2022 Andrew B. Sellergren, Christina Chen, Zaid Nabulsi, Yuanzhen Li, Aaron Maschinot, Aaron Sarna, Jenny Huang, Charles Lau, Sreenivasa Raju Kalidindi, Mozziyar Etemadi, Florencia Garcia-Vicente, David Melnick, Yun Liu, Krish Eswaran, Daniel Tse, Neeral Beladia, Dilip Krishnan, Shravya Shetty

Supervised contrastive learning achieved performance comparable to state-of-the-art deep learning models on multiple clinical tasks using as few as 45 images, making it a promising method for predictive modeling with small data sets and for predicting outcomes in shifting patient populations.

Contrastive Learning, Transfer Learning

Unsupervised Disentanglement without Autoencoding: Pitfalls and Future Directions

1 code implementation 14 Aug 2021 Andrea Burns, Aaron Sarna, Dilip Krishnan, Aaron Maschinot

Disentangled visual representations have largely been studied with generative models such as Variational AutoEncoders (VAEs).

Contrastive Learning, Disentanglement

Supervised Contrastive Learning

20 code implementations NeurIPS 2020 Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.
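The supervised contrastive (SupCon) loss extends this idea by treating all samples with the same label as positives for a given anchor. A minimal NumPy sketch of a per-batch SupCon loss, assuming L2-normalized embeddings (the function name and temperature value are illustrative):

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over one batch.

    z: (n, d) embeddings (L2-normalized inside); labels: (n,) integer classes.
    For each anchor i, positives are all j != i with the same label; the
    denominator runs over every other sample in the batch.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau  # pairwise similarities scaled by temperature
    n = len(labels)
    total = 0.0
    for i in range(n):
        pos = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not pos:
            continue  # anchor with no positives contributes nothing
        others = [a for a in range(n) if a != i]
        log_denom = np.log(np.exp(sim[i, others]).sum())
        total += -np.mean([sim[i, p] - log_denom for p in pos])
    return total / n
```

Embeddings clustered by class should score lower than embeddings where same-class pairs point in different directions.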

Class-Incremental Learning, Contrastive Learning, +4

Local Deep Implicit Functions for 3D Shape

1 code implementation CVPR 2020 Kyle Genova, Forrester Cole, Avneesh Sud, Aaron Sarna, Thomas Funkhouser

The goal of this project is to learn a 3D shape representation that enables accurate surface reconstruction, compact storage, efficient computation, consistency for similar shapes, generalization across diverse shape categories, and inference from depth camera observations.

3D Shape Representation, Surface Reconstruction

Learning Shape Templates with Structured Implicit Functions

1 code implementation ICCV 2019 Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T. Freeman, Thomas Funkhouser

To allow for widely varying geometry and topology, we choose an implicit surface representation based on composition of local shape elements.
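Such a composition can be sketched as a sum of local influence functions whose level set defines the surface. A hedged toy version, assuming isotropic Gaussian elements (the paper's elements and parameterization differ; `template_eval` and all parameter names here are illustrative):

```python
import numpy as np

def template_eval(x, centers, radii, constants):
    """Evaluate F(x) = sum_i c_i * exp(-||x - p_i||^2 / (2 r_i^2)).

    x: (m, 3) query points; centers: (k, 3) element positions;
    radii, constants: (k,) per-element scale and weight.
    The shape's surface is the level set F(x) = iso for some threshold.
    """
    # (m, k) squared distances from every query point to every element center
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return (constants * np.exp(-d2 / (2.0 * radii ** 2))).sum(-1)
```

Each element only influences the field near its center, so the template deforms locally as individual elements move or rescale.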

Semantic Segmentation

Unsupervised Training for 3D Morphable Model Regression

2 code implementations CVPR 2018 Kyle Genova, Forrester Cole, Aaron Maschinot, Aaron Sarna, Daniel Vlasic, William T. Freeman

We train a regression network using these objectives, a set of unlabeled photographs, and the morphable model itself, and demonstrate state-of-the-art results.

Ranked #2 on 3D Face Reconstruction on Florence (Average 3D Error metric)

3D Face Reconstruction, Regression

Synthesizing Normalized Faces from Facial Identity Features

1 code implementation CVPR 2017 Forrester Cole, David Belanger, Dilip Krishnan, Aaron Sarna, Inbar Mosseri, William T. Freeman

We present a method for synthesizing a frontal, neutral-expression image of a person's face given an input face photograph.
