no code implementations • 18 Oct 2023 • Jatin Chauhan, Xiaoxuan Wang, Wei Wang
We present one of the preliminary NLP works under the challenging setup of Learning from Label Proportions (LLP), where the data is provided in an aggregate form called bags and only the proportion of samples in each class is given as the ground truth.
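The LLP setup above can be sketched as follows: the model never sees instance-level labels, only per-bag class proportions, so a natural bag-level objective compares the mean predicted distribution of a bag to its known proportions. This is a minimal illustrative sketch, not the paper's actual loss; all names are hypothetical.

```python
import numpy as np

def bag_proportion_loss(instance_probs, bag_proportions):
    """Cross-entropy between the mean predicted class distribution
    of a bag and its ground-truth label proportions."""
    mean_probs = instance_probs.mean(axis=0)  # average over instances in the bag
    eps = 1e-12  # numerical guard for log(0)
    return -np.sum(bag_proportions * np.log(mean_probs + eps))

# A bag of 4 instances over 2 classes; only the proportions [0.75, 0.25]
# are known as supervision, not which instance belongs to which class.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.6, 0.4]])
loss = bag_proportion_loss(probs, np.array([0.75, 0.25]))
```

When the bag's mean prediction exactly matches the proportions, this loss reduces to the entropy of the proportion vector, its minimum over predictions with that mean.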
1 code implementation • 11 Oct 2023 • Shreyas Havaldar, Jatin Chauhan, Karthikeyan Shanmugam, Jay Nandy, Aravindan Raghuveer
Our third contribution is theoretical, where we show that our weighted entropy term along with prediction loss on the training set approximates test loss under covariate shift.
1 code implementation • 29 Sep 2023 • Yanqiao Zhu, Jeehyun Hwang, Keir Adams, Zhen Liu, Bozhao Nan, Brock Stenfors, Yuanqi Du, Jatin Chauhan, Olaf Wiest, Olexandr Isayev, Connor W. Coley, Yizhou Sun, Wei Wang
Molecular Representation Learning (MRL) has proven impactful in numerous biochemical applications such as drug discovery and enzyme design.
1 code implementation • 25 Jun 2022 • Jatin Chauhan, Aravindan Raghuveer, Rishi Saket, Jay Nandy, Balaraman Ravindran
Through systematic experiments across 4 datasets and 5 forecast models, we show that our technique is able to recover close to 95% performance of the models even when only 15% of the original variables are present.
1 code implementation • 2 May 2022 • Jatin Chauhan, Manohar Kaul
Proposing scoring functions to effectively understand, analyze and learn various properties of high dimensional hidden representations of large-scale transformer models like BERT can be a challenging task.
2 code implementations • 25 Oct 2021 • Jatin Chauhan, Priyanshu Gupta, Pasquale Minervini
We present NNMFAug, a probabilistic framework to perform data augmentation for the task of knowledge graph completion to counter the problem of data scarcity, which can enhance the learning process of neural link predictors.
no code implementations • 13 Jun 2021 • Jatin Chauhan, Karan Bhukar, Manohar Kaul
Despite significant improvements in natural language understanding models with the advent of models like BERT and XLNet, these neural-network based classifiers are vulnerable to blackbox adversarial attacks, where the attacker is only allowed to query the target model outputs.
1 code implementation • 19 May 2020 • Charu Sharma, Jatin Chauhan, Manohar Kaul
Several state-of-the-art neural graph embedding methods are based on short random walks (stochastic processes) because of their ease of computation, simplicity in capturing complex local graph properties, scalability, and interpretability.
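The short random walks these methods build on (as in DeepWalk-style embeddings) can be sketched in a few lines; this is an illustrative fragment, not code from the paper, and the function and graph are hypothetical.

```python
import random

def random_walk(adj, start, length, rng=random):
    """Generate one short random walk on a graph given as an
    adjacency-list dict {node: [neighbors]}."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = adj[walk[-1]]
        if not neighbors:  # dead end: stop early
            break
        walk.append(rng.choice(neighbors))
    return walk

# Tiny triangle graph: every node connects to the other two.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
walk = random_walk(adj, start=0, length=5)
```

Embedding methods then treat such walks like sentences, feeding them to a skip-gram-style model so that nodes co-occurring on walks get similar vectors.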
1 code implementation • ICLR 2020 • Jatin Chauhan, Deepak Nathani, Manohar Kaul
We propose to study the problem of few shot graph classification in graph neural networks (GNNs) to recognize unseen classes, given limited labeled graph examples.
2 code implementations • ACL 2019 • Deepak Nathani, Jatin Chauhan, Charu Sharma, Manohar Kaul
The recent proliferation of knowledge graphs (KGs) coupled with incomplete or partial information, in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction).
Ranked #1 on Knowledge Graph Completion on FB15k-237