1 code implementation • 22 Jul 2024 • Jingchen Sun, Rohan Sharma, Vishnu Suresh Lokhande, Changyou Chen
Experiments on four different prompt tuning structures consistently show the improvement of our method, with gains of up to $6.1\%$ in the Base-to-Novel generalization task, $5.8\%$ in the group robustness task, and $2.7\%$ in the out-of-distribution tasks.
no code implementations • 3 Jun 2024 • Mingzhen Huang, Jialing Cai, Shan Jia, Vishnu Suresh Lokhande, Siwei Lyu
This dataset is a benchmark for evaluating text-driven image editing methods in multifaceted scenarios.
no code implementations • 5 Mar 2024 • Sotirios Panagiotis Chytas, Vishnu Suresh Lokhande, Peiran Li, Vikas Singh
Further, we discuss how this style of formulation offers a unified perspective on at least five distinct problem settings, from self-supervised learning to matching problems in 3D reconstruction.
1 code implementation • CVPR 2022 • Vishnu Suresh Lokhande, Rudrasis Chakraborty, Sathya N. Ravi, Vikas Singh
Pooling multiple neuroimaging datasets across institutions often enables improvements in statistical power when evaluating associations (e.g., between risk factors and disease outcomes) that may otherwise be too weak to detect.
no code implementations • 19 Feb 2022 • Jurijs Nazarovs, Ronak R. Mehta, Vishnu Suresh Lokhande, Vikas Singh
This is directly related to the structure of the computation graph, which can grow linearly as a function of the number of MC samples needed.
no code implementations • 10 Jan 2022 • Vishnu Suresh Lokhande, Kihyuk Sohn, Jinsung Yoon, Madeleine Udell, Chen-Yu Lee, Tomas Pfister
Such a requirement is impractical in situations where the data labeling efforts for minority or rare groups are significantly laborious or where the individuals comprising the dataset choose to conceal sensitive information.
no code implementations • 29 Sep 2021 • Vishnu Suresh Lokhande, Kihyuk Sohn, Jinsung Yoon, Madeleine Udell, Chen-Yu Lee, Tomas Pfister
Such a requirement is impractical in situations where the data labelling efforts for minority or rare groups are significantly laborious or where the individuals comprising the dataset choose to conceal sensitive information.
1 code implementation • 16 Feb 2021 • Aditya Kumar Akash, Vishnu Suresh Lokhande, Sathya N. Ravi, Vikas Singh
Learning invariant representations is a critical first step in a number of machine learning tasks.
1 code implementation • ECCV 2020 • Vishnu Suresh Lokhande, Aditya Kumar Akash, Sathya N. Ravi, Vikas Singh
We provide a detailed technical analysis and present experiments demonstrating that various fairness measures from the literature can be reliably imposed on a number of training tasks in vision in a manner that is interpretable.
no code implementations • 10 Oct 2019 • Muni Sreenivas Pydi, Vishnu Suresh Lokhande
We consider an active learning setting where the algorithm has access to a large pool of unlabeled data and a small pool of labeled data.
1 code implementation • 12 Sep 2019 • Vishnu Suresh Lokhande, Shaofei Wang, Maneesh Singh, Julian Yarkony
We tackle optimization of weighted set packing by relaxing integrality in our ILP formulation.
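To make the weighted set packing ILP concrete, here is a minimal brute-force sketch of the integer program itself (maximize total weight subject to chosen sets being pairwise disjoint, $x_i \in \{0,1\}$); the paper instead relaxes integrality to $x_i \in [0,1]$ and solves the resulting LP. The toy instance and helper name are hypothetical, for illustration only.

```python
from itertools import product

def weighted_set_packing_ilp(sets, weights):
    """Brute-force the ILP: maximize sum(w_i * x_i) subject to the
    packing constraint (chosen sets pairwise disjoint), x_i in {0, 1}.
    The LP relaxation replaces x_i in {0, 1} with x_i in [0, 1]."""
    best_value, best_x = 0, [0] * len(sets)
    for x in product((0, 1), repeat=len(sets)):
        chosen = [s for xi, s in zip(x, sets) if xi]
        union = set().union(*chosen) if chosen else set()
        # disjointness holds iff no element is covered twice
        if sum(len(s) for s in chosen) == len(union):
            value = sum(w for xi, w in zip(x, weights) if xi)
            if value > best_value:
                best_value, best_x = value, list(x)
    return best_value, best_x

# hypothetical toy instance over the universe {0, 1, 2, 3}
sets = [{0, 1}, {1, 2}, {2, 3}, {3}]
weights = [2, 3, 2, 1]
value, x = weighted_set_packing_ilp(sets, weights)  # value = 4
```

Brute force is exponential in the number of sets, which is exactly why one relaxes to an LP for large instances and then rounds or prices the fractional solution.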
1 code implementation • CVPR 2020 • Vishnu Suresh Lokhande, Songwong Tasneeyapant, Abhay Venkatesh, Sathya N. Ravi, Vikas Singh
Rectified Linear Units (ReLUs) are among the most widely used activation functions in a broad variety of tasks in vision.
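For reference, the ReLU activation is simply the elementwise map $x \mapsto \max(0, x)$; a minimal NumPy sketch (not tied to the paper's method):

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
y = relu(x)  # negative entries clamp to 0, positives pass through
```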