1 code implementation • 28 Nov 2023 • Vaidehi Patil, Adyasha Maharana, Mohit Bansal
In this paper, we study bias arising from confounders in a causal graph for multimodal data and examine a novel approach that leverages causally-motivated information minimization to learn the confounder representations.
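The information-minimization idea can be illustrated with a toy objective that penalizes the mutual information I(Z; C) between a learned representation Z and a confounder C. This is a minimal sketch with a plug-in estimator over discrete samples; the function names and the penalty form are illustrative assumptions, not the paper's actual method:

```python
import math
from collections import Counter

def mutual_information(z, c):
    """Plug-in estimate of I(Z; C) in nats from paired discrete samples."""
    n = len(z)
    pz, pc, pzc = Counter(z), Counter(c), Counter(zip(z, c))
    mi = 0.0
    for (zi, ci), n_zc in pzc.items():
        p_joint = n_zc / n
        p_indep = (pz[zi] / n) * (pc[ci] / n)
        mi += p_joint * math.log(p_joint / p_indep)
    return mi

def regularized_loss(task_loss, z, c, lam=1.0):
    """Toy causally-motivated objective: task loss plus lambda * I(Z; C),
    so minimizing it discourages Z from carrying confounder information."""
    return task_loss + lam * mutual_information(z, c)
```

For independent samples the penalty is near zero; when Z simply copies a balanced binary C, the penalty equals log 2, so the regularizer pushes the representation away from the confounder.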
1 code implementation • 29 Sep 2023 • Vaidehi Patil, Peter Hase, Mohit Bansal
Experimentally, we show that even state-of-the-art model editing methods such as ROME struggle to truly delete factual information from models like GPT-J, as our whitebox and blackbox attacks can recover "deleted" information from an edited model 38% of the time.
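The blackbox attack idea, that a "deleted" fact may still surface among an edited model's likely outputs, can be sketched as a top-k candidate probe. The model interface below is a stand-in stub, not the paper's actual attack or any real model API:

```python
def deleted_info_recoverable(get_candidates, prompt, deleted_answer, k=10):
    """Blackbox probe: the fact counts as recoverable if the edited model
    still ranks the 'deleted' answer among its top-k candidate outputs."""
    candidates = get_candidates(prompt, k)  # k strings, most likely first
    return deleted_answer in candidates

# Stand-in for an edited model's top-k decoding (purely illustrative):
# the edit demoted the true answer but did not remove it from the ranking.
def toy_model_topk(prompt, k):
    ranked = ["London", "Paris", "Berlin", "Madrid", "Rome"]
    return ranked[:k]
```

With k=1 the toy edit looks successful, but widening the candidate set to k=3 recovers the demoted answer, mirroring how larger candidate budgets make deletion attacks more effective.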
no code implementations • 8 Jul 2022 • Rishi Agarwal, Tirupati Saketh Chandra, Vaidehi Patil, Aniruddha Mahapatra, Kuldeep Kulkarni, Vishwa Vinay
To this end, we formulate scene graph expansion as a sequential prediction task: at each step, the model first predicts a new node and then predicts the set of relationships between that node and the existing nodes in the graph.
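The two-step loop, predict a node and then predict its relations to the existing nodes, can be sketched as follows. The predictor functions are toy stubs standing in for the learned model; their names and behavior are assumptions for illustration:

```python
def expand_scene_graph(nodes, edges, predict_node, predict_relations, steps):
    """Sequentially grow a scene graph: each step adds one predicted node,
    then predicts that node's relationships to all previous nodes."""
    nodes, edges = list(nodes), list(edges)
    for _ in range(steps):
        new_node = predict_node(nodes, edges)
        nodes.append(new_node)
        for prev in nodes[:-1]:
            rel = predict_relations(new_node, prev, nodes, edges)
            if rel is not None:  # None means "no relation predicted"
                edges.append((new_node, rel, prev))
    return nodes, edges

# Toy stubs standing in for learned predictors (illustrative only).
def toy_predict_node(nodes, edges):
    return f"object_{len(nodes)}"

def toy_predict_relations(new, prev, nodes, edges):
    return "near" if prev == nodes[0] else None
```

Because each step conditions on the graph built so far, later nodes and relations can depend on earlier predictions, which is the point of casting expansion as sequential prediction rather than one-shot generation.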
1 code implementation • ACL 2022 • Vaidehi Patil, Partha Talukdar, Sunita Sarawagi
This results in improved zero-shot transfer from related HRLs to LRLs without degrading HRL representation quality or accuracy.
1 code implementation • ACL 2021 • Yash Khemchandani, Sarvesh Mehtani, Vaidehi Patil, Abhijeet Awasthi, Partha Talukdar, Sunita Sarawagi
RelateLM uses transliteration to convert the unseen script of limited LRL text into the script of a Related Prominent Language (RPL) (Hindi in our case).
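The idea of mapping LRL text into the RPL's script can be illustrated with a toy character-level transliterator. The mapping fragment below (Gujarati to Devanagari) is a made-up illustrative table, and RelateLM's actual transliteration system is more involved than a per-character lookup:

```python
def transliterate(text, char_map):
    """Character-level transliteration: map each source-script character
    to its target-script counterpart, passing unknown characters through."""
    return "".join(char_map.get(ch, ch) for ch in text)

# Hypothetical fragment of a Gujarati -> Devanagari character table
# (illustrative only, not RelateLM's transliteration module).
GUJARATI_TO_DEVANAGARI = {"ક": "क", "મ": "म", "લ": "ल"}
```

After transliteration, LRL text shares a script (and hence subword vocabulary) with the RPL, which is what lets a Hindi-pretrained model read the limited LRL data.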