no code implementations • 3 Oct 2023 • Samyadeep Basu, Mehrdad Saberi, Shweta Bhardwaj, Atoosa Malemir Chegini, Daniela Massiceti, Maziar Sanjabi, Shell Xu Hu, Soheil Feizi
From both the human study and the automated evaluation, we find that: (i) Instruct-Pix2Pix, Null-Text and SINE are the top-performing methods averaged across different edit types; however, only Instruct-Pix2Pix and Null-Text are able to preserve original image properties; (ii) most of the editing methods fail at edits involving spatial operations (e.g., changing the position of an object).
1 code implementation • 20 Jul 2023 • Neha Kalibhat, Shweta Bhardwaj, Bayan Bruss, Hamed Firooz, Maziar Sanjabi, Soheil Feizi
Although many existing approaches interpret features independently, we observe that in state-of-the-art self-supervised and supervised models, less than 20% of the representation space can be explained by individual features.
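As a rough illustration of how little of a representation space individual features can explain, the following sketch (not the authors' metric; the synthetic data and all variable names are assumptions) contrasts the variance captured by the top axis-aligned features with the variance captured by the same number of principal components:

```python
# A minimal sketch, not the paper's method: compare variance explained by
# individual (axis-aligned) features against principal components.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for model representations: n_samples x d (hypothetical data).
Z = rng.standard_normal((1000, 64)) @ rng.standard_normal((64, 64))

Zc = Z - Z.mean(axis=0)                       # center the representations
total_var = Zc.var(axis=0).sum()              # total variance in the space

# Variance explained by the top-k individual features.
per_feature = np.sort(Zc.var(axis=0))[::-1]
k = 10
frac_features = per_feature[:k].sum() / total_var

# Variance explained by the top-k principal components (optimal directions).
sing_vals = np.linalg.svd(Zc, compute_uv=False)
frac_pcs = (sing_vals[:k] ** 2).sum() / (sing_vals ** 2).sum()

print(f"top-{k} individual features explain {frac_features:.1%} of variance")
print(f"top-{k} principal components explain {frac_pcs:.1%} of variance")
```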
1 code implementation • CVPR 2019 • Shweta Bhardwaj, Mukundhan Srinivasan, Mitesh M. Khapra
We focus on building compute-efficient video classification models that process fewer frames and hence require fewer FLOPs.
Ranked #2 on Video Classification on YouTube-8M
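A minimal sketch of the fewer-frames idea, assuming a PyTorch-style per-frame encoder (the class and parameter names here are hypothetical, not the paper's architecture):

```python
# A minimal sketch (assumptions, not the paper's model): a video classifier
# that uniformly samples k of the N input frames, so the per-video compute of
# the frame encoder scales with k rather than N.
import torch
import torch.nn as nn

class FewFrameClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, num_classes: int, k: int = 8):
        super().__init__()
        self.encoder = encoder        # per-frame feature extractor
        self.k = k                    # number of frames actually processed
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, N, C, H, W)
        n = video.shape[1]
        idx = torch.linspace(0, n - 1, self.k).long()        # uniform frame sampling
        frames = video[:, idx]                                # (batch, k, C, H, W)
        b, k, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * k, c, h, w))  # (b*k, feat_dim)
        feats = feats.reshape(b, k, -1).mean(dim=1)           # average-pool over frames
        return self.head(feats)
```

Since the per-frame encoder dominates the FLOP count, processing k of N frames cuts compute roughly by a factor of N/k.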
1 code implementation • 26 Dec 2018 • Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran
In this work, we report experiments suggesting that the comparable performance of the pruned network is due not to the specific pruning criterion chosen but to the inherent plasticity of deep neural networks, which allows them to recover from the loss of pruned filters once the remaining filters are fine-tuned.
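A minimal sketch of this setup, assuming PyTorch (the helper names are hypothetical, not the authors' code): prune a random subset of conv filters by zeroing them, then fine-tune while keeping the pruned filters at zero so the surviving filters can recover the lost capacity.

```python
# A minimal sketch, not the paper's implementation: random filter pruning
# followed by fine-tuning of the remaining filters.
import torch
import torch.nn as nn

def random_filter_prune(conv: nn.Conv2d, frac: float = 0.5) -> torch.Tensor:
    """Zero out a random fraction of output filters; return the kept-filter mask."""
    keep = torch.rand(conv.out_channels) >= frac   # random criterion
    with torch.no_grad():
        conv.weight[~keep] = 0.0
        if conv.bias is not None:
            conv.bias[~keep] = 0.0
    return keep

def finetune_step(model, conv, keep, batch, loss_fn, opt):
    """One fine-tuning step; re-apply the mask so pruned filters stay at zero."""
    x, y = batch
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                          # keep pruned filters pruned
        conv.weight[~keep] = 0.0
        if conv.bias is not None:
            conv.bias[~keep] = 0.0
    return loss.item()
```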
no code implementations • 12 May 2018 • Shweta Bhardwaj, Mitesh M. Khapra
We then train a student network whose objective is to process only a small fraction of the frames in the video and still produce a representation very close to the one computed by the teacher network.
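A minimal sketch of such a matching objective, assuming PyTorch (teacher, student, and frame_frac are hypothetical names; the paper may use a different loss):

```python
# A minimal sketch, not the paper's exact objective: the student sees only a
# small fraction of frames and is trained to match the teacher's
# representation of the full video via an MSE matching loss.
import torch
import torch.nn.functional as F

def distillation_loss(teacher, student, video, frame_frac=0.1):
    # video: (batch, N, C, H, W)
    n = video.shape[1]
    k = max(1, int(frame_frac * n))
    idx = torch.linspace(0, n - 1, k).long()   # student sees k << N frames
    with torch.no_grad():
        target = teacher(video)                # full-video representation
    pred = student(video[:, idx])              # representation from few frames
    return F.mse_loss(pred, target)
```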