no code implementations • WMT (EMNLP) 2020 • Lovish Madaan, Soumya Sharma, Parag Singla
In this paper, we describe IIT Delhi’s submissions to the WMT 2020 task on Similar Language Translation for four language directions: Hindi <-> Marathi and Spanish <-> Portuguese.
no code implementations • 7 Mar 2024 • Rohith Peddi, Saksham Singh, Saurabh, Parag Singla, Vibhav Gogate
In SceneSayer, we leverage object-centric representations of relationships to reason about the observed video frames and model the evolution of relationships between objects.
no code implementations • 4 Feb 2024 • Chinmay Mittal, Krishna Kartik, Mausam, Parag Singla
Recent works show that the largest of the large language models (LLMs) can solve many simple reasoning tasks expressed in natural language, without any/much supervision.
no code implementations • 7 Nov 2023 • Ananjan Nandi, Navdeep Kaur, Parag Singla, Mausam
We consider two popular approaches to Knowledge Graph Completion (KGC): textual models that rely on textual entity descriptions, and structure-based models that exploit the connectivity structure of the Knowledge Graph (KG).
1 code implementation • 25 Oct 2023 • Vipul Rathore, Rajdeep Dhingra, Parag Singla, Mausam
We posit that for more effective cross-lingual transfer, instead of just one source LA, we need to leverage LAs of multiple (linguistically or geographically related) source languages, both at train and test time - which we investigate via our novel neural architecture, ZGUL.
no code implementations • 16 Oct 2023 • Anand Brahmbhatt, Vipul Rathore, Mausam, Parag Singla
Further, we show that ensuring group-wise calibration with respect to the sensitive attributes automatically results in a fair model under our definition.
no code implementations • 3 Oct 2023 • Aniruddha Deb, Neeva Oza, Sarthak Singla, Dinesh Khandelwal, Dinesh Garg, Parag Singla
Utilizing the specific format of this task, we propose three novel techniques that improve performance: Rephrase reformulates the given problem into a forward reasoning problem; PAL-Tools combines the idea of Program-Aided LLMs to produce a set of equations that can be solved by an external solver; and Check your Work exploits the availability of a natural verifier of high accuracy in the forward direction, interleaving solving and verification steps.
no code implementations • 23 May 2023 • Harman Singh, Poorva Garg, Mohit Gupta, Kevin Shah, Ashish Goswami, Satyam Modi, Arnab Kumar Mondal, Dinesh Khandelwal, Dinesh Garg, Parag Singla
We are interested in image manipulation via natural language text -- a task that is useful for multiple AI applications but requires complex reasoning over multi-modal spaces.
no code implementations • 12 Nov 2022 • Namasivayam Kalithasan, Himanshu Singh, Vishal Bindal, Arnav Tuli, Vishwajeet Agrawal, Rahul Jain, Parag Singla, Rohan Paul
Given a natural language instruction and an input scene, our goal is to train a model to output a manipulation program that can be executed by the robot.
1 code implementation • 17 Oct 2022 • Yatin Nandwani, Rishabh Ranjan, Mausam, Parag Singla
Experiments on several problems, both perceptual as well as symbolic, which require learning the constraints of an ILP, show that our approach has superior performance and scales much better compared to purely neural baselines and other state-of-the-art models that require solver-based training.
no code implementations • ICLR 2022 • Yatin Nandwani, Vidit Jain, Mausam, Parag Singla
One drawback of the proposed architectures, which are often based on Graph Neural Networks (GNN), is that they cannot generalize across the size of the output space from which variables are assigned a value, for example, the set of colors in a GCP, or the board size in Sudoku.
1 code implementation • ACL 2022 • Vipul Rathore, Kartikeya Badola, Mausam, Parag Singla
The contextual embeddings of tokens are aggregated using attention with the candidate relation as query -- this summary of the whole passage predicts the candidate relation.
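The attention-based aggregation described above can be sketched as follows. This is a minimal NumPy illustration of query-based attention pooling, not the paper's actual implementation; all names (`attention_pool`, `rel_query`) and the toy dimensions are assumptions for the example.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(token_embs, rel_query):
    """Aggregate contextual token embeddings into one passage summary,
    using the candidate relation embedding as the attention query."""
    scores = token_embs @ rel_query          # one score per token
    weights = softmax(scores)                # attention distribution over tokens
    return weights @ token_embs              # weighted sum = passage summary

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))   # 5 tokens, embedding dim 8
query = rng.normal(size=8)         # candidate relation embedding
summary = attention_pool(tokens, query)
```

The resulting `summary` vector would then be scored against the candidate relation to make the prediction.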
no code implementations • AKBC Workshop CSKB 2021 • Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, Dinesh Garg
We human-annotate a first-of-its-kind dataset (called ECQA) of positive and negative properties, as well as free-flow explanations, for 11K QA pairs taken from the CQA dataset.
1 code implementation • 16 Jul 2021 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP
The basic idea in RAEs is to learn a non-linear mapping from the high-dimensional data space to a low-dimensional latent space and vice-versa, simultaneously imposing a distributional prior on the latent space, which brings in a regularization effect.
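The RAE objective sketched above has two terms: a reconstruction error and a regularizer imposing a prior on the latent space. Below is a schematic version with a simple moment-matching penalty toward a standard-normal prior; actual RAEs typically use MMD or adversarial regularizers, so treat the function name and penalty form as illustrative assumptions.

```python
import numpy as np

def regularized_ae_loss(x, x_hat, z, lam=1.0):
    """Schematic RAE objective: reconstruction error plus a penalty
    pushing the aggregate latent codes z toward N(0, I).
    (Moment matching here stands in for MMD/adversarial terms.)"""
    recon = np.mean((x - x_hat) ** 2)
    # match the first two moments of the latent batch to N(0, I)
    prior_penalty = np.mean(z.mean(axis=0) ** 2) \
        + np.mean((z.var(axis=0) - 1.0) ** 2)
    return recon + lam * prior_penalty

# toy usage: perfect reconstruction, degenerate (all-zero) latents
x = np.ones((4, 3))
z = np.zeros((4, 2))
loss = regularized_ae_loss(x, x, z)
```

In the toy usage the reconstruction term vanishes, so the loss is driven entirely by the latent codes' deviation from unit variance, which is exactly the regularization effect the abstract refers to.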
no code implementations • 16 Jul 2021 • Rushil Gupta, Vishal Sharma, Yash Jain, Yitao Liang, Guy Van Den Broeck, Parag Singla
We work with models which are object-centric, i.e., explicitly work with object representations, and propagate a loss in the latent space.
no code implementations • 18 Apr 2021 • Keshav Kolluru, Mayank Singh Chauhan, Yatin Nandwani, Parag Singla, Mausam
Pre-trained language models (LMs) like BERT have been shown to store factual knowledge about the world.
1 code implementation • 28 Sep 2020 • Danish Contractor, Shashank Goel, Mausam, Parag Singla
In response, we develop the first joint spatio-textual reasoning model, which combines geo-spatial knowledge with information in textual corpora to answer questions.
no code implementations • ICLR 2021 • Yatin Nandwani, Deepanshu Jindal, Mausam, Parag Singla
Our framework uses a selection module, whose goal is to dynamically determine, for every input, the solution that is most effective for training the network parameters in any given learning iteration.
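A greedy stand-in for such a selection module is to pick, among the valid solutions for an input, the one with minimum loss under the current model. The sketch below is an assumed simplification for illustration (the paper learns the selection), and the helper names are hypothetical.

```python
import numpy as np

def select_target(logits, solutions):
    """Given model logits of shape (positions, classes) and several
    equally valid target solutions, return the solution that is
    currently cheapest (minimum cross-entropy) for the network --
    a greedy approximation of a learned selection module."""
    def xent(logits, target):
        # softmax cross-entropy summed over output positions
        z = logits - logits.max(axis=-1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
        return -logp[np.arange(len(target)), target].sum()
    losses = [xent(logits, s) for s in solutions]
    return solutions[int(np.argmin(losses))]

# toy usage: the model strongly prefers the first solution
logits = np.array([[5.0, 0.0], [0.0, 5.0]])
sols = [np.array([0, 1]), np.array([1, 0])]
chosen = select_target(logits, sols)
```

Training then backpropagates only through the chosen solution, avoiding conflicting gradients from the other valid outputs.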
no code implementations • 10 Jun 2020 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP
Specifically, we consider the class of RAEs with deterministic Encoder-Decoder pairs, Wasserstein Auto-Encoders (WAE), and show that having a fixed prior distribution, a priori, oblivious to the dimensionality of the 'true' latent space, will lead to the infeasibility of the optimization problem considered.
1 code implementation • AKBC 2020 • Yatin Nandwani, Ankesh Gupta, Aman Agrawal, Mayank Singh Chauhan, Parag Singla, Mausam
State-of-the-art models for Knowledge Base Completion (KBC) are based on tensor factorization (TF), e.g., DistMult, ComplEx.
no code implementations • 10 Dec 2019 • Arnab Kumar Mondal, Sankalan Pal Chowdhury, Aravind Jayendran, Parag Singla, Himanshu Asnani, Prathosh AP
The field of neural generative models is dominated by the highly successful Generative Adversarial Networks (GANs) despite their challenges, such as training instability and mode collapse.
1 code implementation • NeurIPS 2019 • Yatin Nandwani, Abhishek Pathak, Mausam, Parag Singla
In this paper, we present a constrained optimization formulation for training a deep network with a given set of hard constraints on output labels.
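One standard way to cast hard output constraints as a trainable objective is a Lagrangian-style relaxation: each constraint g_i(y) <= 0 contributes a hinge penalty weighted by a multiplier. The sketch below shows only the penalized objective, assuming this relaxation; the function name and the fixed multipliers are illustrative (the full method alternates updates of network weights and multipliers).

```python
import numpy as np

def constrained_loss(task_loss, constraint_values, lambdas):
    """Lagrangian-style relaxation of hard constraints g_i(y) <= 0:
    add lambda_i * max(0, g_i(y)) for each constraint to the task loss.
    Satisfied constraints (g_i <= 0) contribute nothing."""
    violations = np.maximum(0.0, constraint_values)
    return task_loss + float(np.dot(lambdas, violations))

# toy usage: one satisfied constraint (-0.5) and one violated (0.2)
loss = constrained_loss(1.0,
                        np.array([-0.5, 0.2]),
                        np.array([1.0, 2.0]))
```

Raising a multiplier makes the corresponding violation more expensive, which is how the optimization drives the network toward constraint-satisfying outputs.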
no code implementations • 8 Sep 2019 • Danish Contractor, Krunal Shah, Aditi Partap, Mausam, Parag Singla
We introduce the novel task of answering entity-seeking recommendation questions using a collection of reviews that describe candidate answer entities.
no code implementations • 24 Nov 2018 • Dinesh Khandelwal, Suyash Agrawal, Parag Singla, Chetan Arora
Designing such a network, as well as collecting jointly labeled data for training is a non-trivial task.
no code implementations • 3 Jul 2018 • Happy Mittal, Ayush Bhardwaj, Vibhav Gogate, Parag Singla
Experiments on the benchmark Friends & Smokers domain show that our approach results in significantly higher accuracies compared to existing methods when testing on domains whose sizes differ from those seen during training.
no code implementations • 2 Jul 2018 • Vishal Sharma, Noman Ahmed Sheikh, Happy Mittal, Vibhav Gogate, Parag Singla
Lifted inference reduces the complexity of inference in relational probabilistic models by identifying groups of constants (or atoms) which behave symmetrically to each other.
1 code implementation • 2 Jul 2018 • Gagan Madan, Ankit Anand, Mausam, Parag Singla
These orbits are represented compactly using permutations over variables, and variable-value (VV) pairs, but they can miss several state symmetries in a domain.
no code implementations • 5 Jan 2018 • Danish Contractor, Barun Patra, Mausam, Parag Singla
We introduce the first system towards the novel task of answering complex multisentence recommendation questions in the tourism domain.
1 code implementation • 27 Jul 2017 • Ankit Anand, Ritesh Noothigattu, Parag Singla, Mausam
Moreover, algorithms for lifted inference in multi-valued domains also compute a multi-valued extension of count symmetries only.
1 code implementation • 22 Jul 2017 • Haroun Habeeb, Ankit Anand, Mausam, Parag Singla
We demonstrate the performance of C2F inference by developing lifted versions of two near state-of-the-art CV algorithms for stereo vision and interactive image segmentation.
no code implementations • 30 Jun 2016 • Ankit Anand, Aditya Grover, Mausam, Parag Singla
We extend previous work on exploiting symmetries in the MCMC framework to the case of contextual symmetries.
no code implementations • 30 Jun 2016 • David Smith, Parag Singla, Vibhav Gogate
Due to the intractable nature of exact lifted inference, research has recently focused on the discovery of accurate and efficient approximate inference algorithms in Statistical Relational Models (SRMs), such as Lifted First-Order Belief Propagation.
no code implementations • CVPR 2016 • Ishant Shanu, Chetan Arora, Parag Singla
Current state-of-the-art inference algorithms for general submodular functions take many hours for problems with clique size 16, and fail to scale beyond that.
no code implementations • NeurIPS 2015 • Timothy Kopp, Parag Singla, Henry Kautz
Symmetry breaking is a technique for speeding up propositional satisfiability testing by adding constraints to the theory that restrict the search space while preserving satisfiability.
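A classic symmetry-breaking constraint of the kind described above is the lex-leader predicate: among the assignments related by a symmetry permutation, only the lexicographically smallest one is kept, which prunes the search space without losing satisfiability. The checker below is a minimal illustration (names are assumptions, not from the paper).

```python
def lex_leader_ok(assignment, perm):
    """Lex-leader symmetry-breaking predicate: accept an assignment
    only if it is lexicographically no greater than its image under
    the symmetry permutation `perm` (perm[i] = index mapped to i)."""
    image = [assignment[p] for p in perm]
    return list(assignment) <= image

# toy usage: the swap symmetry (x0 x1) over a 2-variable assignment
keep = lex_leader_ok([0, 1], [1, 0])      # [0,1] <= its image [1,0]
prune = lex_leader_ok([1, 0], [1, 0])     # [1,0] >  its image [0,1]
```

In a SAT encoding, this predicate would be compiled into clauses added to the theory, one lex constraint per symmetry generator.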
no code implementations • NeurIPS 2015 • Somdeb Sarkhel, Parag Singla, Vibhav G. Gogate
A key advantage of these lifted algorithms is that they have much smaller computational complexity than propositional algorithms when symmetries are present in the MLN and these symmetries can be detected using lifted inference rules.
no code implementations • NeurIPS 2015 • Happy Mittal, Anuj Mahajan, Vibhav G. Gogate, Parag Singla
Lifted inference rules exploit symmetries for fast reasoning in statistical relational models.
no code implementations • NeurIPS 2014 • Happy Mittal, Prasoon Goyal, Vibhav G. Gogate, Parag Singla
In this paper, we present two new lifting rules, which enable fast MAP inference in a large class of MLNs.
no code implementations • NeurIPS 2014 • Somdeb Sarkhel, Deepak Venugopal, Parag Singla, Vibhav G. Gogate
In this paper, we present a new approach for lifted MAP inference in Markov logic networks (MLNs).