2 code implementations • 21 Mar 2023 • Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie
Recent research has focused on weight sparsity in neural network training to reduce FLOPs, aiming for improved efficiency (test accuracy w.r.t. training FLOPs).
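As a rough illustration of the idea, here is a minimal PyTorch sketch of unstructured magnitude-based weight sparsity; the layer shape, the 80% sparsity level, and the masking schedule are illustrative assumptions, not this paper's recipe.

```python
import torch
import torch.nn as nn

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a 0/1 mask that zeroes out the smallest-magnitude weights."""
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

layer = nn.Linear(512, 512)
mask = magnitude_mask(layer.weight.data, sparsity=0.8)  # prune 80% of weights

# Re-apply the mask after each optimizer step so pruned weights stay zero.
# The FLOP savings are realized only on hardware/kernels that can exploit
# unstructured sparsity.
with torch.no_grad():
    layer.weight.mul_(mask)
```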
no code implementations • 18 Mar 2023 • Vithursan Thangarasa, Abhay Gupta, William Marshall, Tianda Li, Kevin Leong, Dennis Decoste, Sean Lie, Shreyas Saxena
In this work, we show the benefits of using unstructured weight sparsity to train only a subset of weights during pre-training (Sparse Pre-training) and then recover the representational capacity by allowing the zeroed weights to learn (Dense Fine-tuning).
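A minimal sketch of this sparse-pretrain / dense-finetune pattern, assuming PyTorch and a fixed random mask; the paper's actual mask selection, sparsity level, and training schedule may differ.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Fixed random masks, one per weight matrix (illustrative; 75% sparsity here).
masks = {n: (torch.rand_like(p) > 0.75).float()
         for n, p in model.named_parameters() if p.dim() > 1}

def sparse_pretrain_step(x, y):
    """One pre-training step: only unmasked weights may be nonzero."""
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])  # keep pruned weights pinned at zero

# Dense fine-tuning: simply stop applying the masks. The previously zeroed
# weights are now free to learn, recovering full representational capacity.
```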
1 code implementation • 28 Jun 2022 • Vitaliy Chiley, Vithursan Thangarasa, Abhay Gupta, Anshul Samar, Joel Hestness, Dennis Decoste
Training such networks, however, requires substantial accelerator memory for saving large, multi-resolution activations.
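Activation recomputation is one standard way to trade compute for activation memory. The sketch below uses PyTorch's checkpoint utility as a stand-in; it is not this paper's reversible design, which reconstructs activations exactly from layer outputs during the backward pass so that none need to be stored at all.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x)) + x

blocks = nn.ModuleList(Block(64) for _ in range(8))
x = torch.randn(2, 64, 128, 128, requires_grad=True)

# Checkpointing discards intermediate activations in the forward pass and
# recomputes them during backward, cutting activation memory at the cost
# of extra compute.
for blk in blocks:
    x = checkpoint(blk, x, use_reentrant=False)
x.sum().backward()
```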
no code implementations • 16 Nov 2016 • Abhay Gupta
We propose a method to aggregate noisy labels collected from a crowd of workers or annotators.
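The snippet does not spell out the proposed method; as a point of reference only, here is a minimal majority-vote baseline for crowd-label aggregation. The example data is hypothetical, and methods in this line of work typically also estimate per-annotator reliability rather than weighting all workers equally.

```python
from collections import Counter

def majority_vote(annotations: dict[str, list[str]]) -> dict[str, str]:
    """Aggregate each item's crowd labels by simple majority."""
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in annotations.items()}

# Hypothetical example: three workers label two images.
votes = {"img_1": ["cat", "cat", "dog"], "img_2": ["dog", "dog", "dog"]}
print(majority_vote(votes))  # {'img_1': 'cat', 'img_2': 'dog'}
```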
no code implementations • 7 Sep 2016 • Abhay Gupta, Arjun D'Cunha, Kamal Awasthi, Vineeth Balasubramanian
We introduce DAiSEE, the first multi-label video classification dataset, comprising 9068 video snippets captured from 112 users, for recognizing the user affective states of boredom, confusion, engagement, and frustration in the wild.
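For concreteness, a minimal sketch of encoding a clip's affective states as a multi-label target vector over the four states named above. The binary present/absent encoding is an assumption for illustration; a real loader should follow the dataset's released annotation files, which grade each state rather than flagging it.

```python
import torch

STATES = ["boredom", "confusion", "engagement", "frustration"]

def encode_labels(active_states: list[str]) -> torch.Tensor:
    """Multi-hot target: 1.0 for each affective state present in a clip."""
    target = torch.zeros(len(STATES))
    for s in active_states:
        target[STATES.index(s)] = 1.0
    return target

# A clip showing both confusion and frustration:
print(encode_labels(["confusion", "frustration"]))  # tensor([0., 1., 0., 1.])
```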