no code implementations • 18 Jan 2024 • Namitha Padmanabhan, Matthew Gwilliam, Pulkit Kumar, Shishira R Maiya, Max Ehrlich, Abhinav Shrivastava
We call the aggregate of these contribution maps the Implicit Neural Canvas and we use this concept to demonstrate that the INRs we study learn to "see" the frames they represent in surprising ways.
no code implementations • 30 Nov 2023 • Matthew Gwilliam, Michael Cogswell, Meng Ye, Karan Sikka, Abhinav Shrivastava, Ajay Divakaran
To provide a more thorough evaluation of the capabilities of long video retrieval systems, we propose a pipeline that leverages state-of-the-art large language models to carefully generate a diverse set of synthetic captions for long videos.
1 code implementation • 29 Nov 2023 • Soumik Mukhopadhyay, Matthew Gwilliam, Yosuke Yamaguchi, Vatsal Agarwal, Namitha Padmanabhan, Archana Swaminathan, Tianyi Zhou, Abhinav Shrivastava
We find that the intermediate feature maps of the U-Net are diverse, discriminative feature representations.
1 code implementation • 17 Jul 2023 • Soumik Mukhopadhyay, Matthew Gwilliam, Vatsal Agarwal, Namitha Padmanabhan, Archana Swaminathan, Srinidhi Hegde, Tianyi Zhou, Abhinav Shrivastava
We explore optimal methods for extracting and using these embeddings for classification tasks, demonstrating promising results on the ImageNet classification task.
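Extracting intermediate activations from a frozen network is a common way to obtain such embeddings. The toy sketch below illustrates the general pattern of wrapping layers so their outputs can be read out mid-forward-pass; the "network" is a stand-in stack of functions, not an actual diffusion U-Net, and the layer names are hypothetical.

```python
# Hedged sketch: capturing intermediate activations from a layered model,
# in the spirit of reading out mid-network feature maps for downstream
# classification. The "network" is a toy stack of scalar functions.

captured = {}

def capture(name, layer_fn):
    """Wrap a layer so its output is stored under `name` whenever it runs."""
    def wrapped(x):
        out = layer_fn(x)
        captured[name] = out
        return out
    return wrapped

# Toy three-"layer" network; in practice the mid-network activations are
# the candidates one would probe for discriminative features.
layers = [
    capture("down", lambda x: x * 2),
    capture("mid", lambda x: x + 1),
    capture("up", lambda x: x * 3),
]

def forward(x):
    for layer in layers:
        x = layer(x)
    return x

out = forward(5)
print(out, captured["mid"])  # 33 11
```

In a real framework this role is played by forward hooks on the chosen blocks; the captured tensors are then pooled and fed to a lightweight classifier.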
1 code implementation • CVPR 2022 • Matthew Gwilliam, Abhinav Shrivastava
In this paper, we compare methods using performance-based benchmarks such as linear evaluation, nearest neighbor classification, and clustering for several different datasets, demonstrating the lack of a clear front-runner within the current state-of-the-art.
1 code implementation • 7 Sep 2021 • Matthew Gwilliam, Srinidhi Hegde, Lade Tinubu, Alex Hanson
Many existing works have made great strides towards reducing racial bias in face recognition.
no code implementations • 7 Sep 2021 • Matthew Gwilliam, Adam Teuscher, Connor Anderson, Ryan Farrell
From this analysis, we both highlight the importance of reporting and comparing methods using information beyond overall accuracy, and point out techniques that mitigate variance in FGVC results.
no code implementations • EACL 2021 • Eva Vanmassenhove, Dimitar Shterionov, Matthew Gwilliam
Recent studies in the field of Machine Translation (MT) and Natural Language Processing (NLP) have shown that existing models amplify biases observed in the training data.