no code implementations • WMT (EMNLP) 2020 • Lovish Madaan, Soumya Sharma, Parag Singla

In this paper, we describe IIT Delhi’s submissions to the WMT 2020 task on Similar Language Translation for four language directions: Hindi <-> Marathi and Spanish <-> Portuguese.

no code implementations • 3 Jul 2024 • Sanket Gandhi, Atul, Samanyu Mahajan, Vishal Sharma, Rushil Gupta, Arnab Kumar Mondal, Parag Singla

Recent work has shown that object-centric representations can greatly help improve the accuracy of learning dynamics while also bringing interpretability.

1 code implementation • 2 Jul 2024 • Ananjan Nandi, Navdeep Kaur, Parag Singla, Mausam

High-quality and high-coverage rule sets are imperative to the success of Neuro-Symbolic Knowledge Graph Completion (NS-KGC) models, because they form the basis of all symbolic inferences.

1 code implementation • 27 Jun 2024 • Vipul Rathore, Aniruddha Deb, Ankish Chandresh, Parag Singla, Mausam

We investigate their effectiveness for NLP tasks in low-resource languages (LRLs), especially in the setting of zero-labelled cross-lingual transfer (0-CLT), where no labelled training data is available for the target language; however, training data from one or more related medium-resource languages (MRLs) is utilized, alongside the available unlabelled test data for the target language.

no code implementations • 29 May 2024 • Namasivayam Kalithasan, Arnav Tuli, Vishal Bindal, Himanshu Gaurav Singh, Parag Singla, Rohan Paul

Automatically detecting and recovering from failures is an important but challenging problem for autonomous robots.

no code implementations • 11 Apr 2024 • Namasivayam Kalithasan, Sachit Sachdeva, Himanshu Gaurav Singh, Vishal Bindal, Arnav Tuli, Gurarmaan Singh Panjeta, Divyanshu Aggarwal, Rohan Paul, Parag Singla

Additionally, the approach should generalize inductively to novel structures of different sizes or complex structures expressed as a hierarchical composition of previously learned concepts.

1 code implementation • 7 Mar 2024 • Rohith Peddi, Saksham Singh, Saurabh, Parag Singla, Vibhav Gogate

In SceneSayer, we leverage object-centric representations of relationships to reason about the observed video frames and model the evolution of relationships between objects.

no code implementations • 4 Feb 2024 • Chinmay Mittal, Krishna Kartik, Mausam, Parag Singla

Recent works show that the largest of the large language models (LLMs) can solve many simple reasoning tasks expressed in natural language, with little or no supervision.

1 code implementation • 7 Nov 2023 • Ananjan Nandi, Navdeep Kaur, Parag Singla, Mausam

We consider two popular approaches to Knowledge Graph Completion (KGC): textual models that rely on textual entity descriptions, and structure-based models that exploit the connectivity structure of the Knowledge Graph (KG).

1 code implementation • 25 Oct 2023 • Vipul Rathore, Rajdeep Dhingra, Parag Singla, Mausam

We posit that for more effective cross-lingual transfer, instead of just one source LA, we need to leverage LAs of multiple (linguistically or geographically related) source languages, both at train and test time, which we investigate via our novel neural architecture, ZGUL.

no code implementations • 16 Oct 2023 • Anand Brahmbhatt, Vipul Rathore, Mausam, Parag Singla

Further, we show that ensuring group-wise calibration with respect to the sensitive attributes automatically results in a fair model under our definition.
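As an illustrative notion of group-wise calibration (a minimal sketch, not necessarily the paper's exact definitions; all names here are hypothetical), a model is calibrated per group when its mean predicted probability matches that group's empirical positive rate:

```python
import numpy as np

def groupwise_calibration_gap(probs, labels, groups):
    """Max over groups of |mean predicted prob - empirical positive rate|."""
    gaps = []
    for g in np.unique(groups):
        m = groups == g
        gaps.append(abs(probs[m].mean() - labels[m].mean()))
    return max(gaps)

# Toy data: predictions, binary labels, and a sensitive-attribute grouping
probs = np.array([0.8, 0.7, 0.2, 0.3, 0.9, 0.1])
labels = np.array([1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 1, 1, 1])
gap = groupwise_calibration_gap(probs, labels, groups)
```

A gap of zero across all groups would mean the model is group-wise calibrated in this toy sense.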

no code implementations • 3 Oct 2023 • Aniruddha Deb, Neeva Oza, Sarthak Singla, Dinesh Khandelwal, Dinesh Garg, Parag Singla

Extensive experimentation demonstrates successive improvement in the performance of LLMs on the backward reasoning task, using our strategies, with our ensemble-based method resulting in significant performance gains compared to the SOTA forward reasoning strategies we adapt.

no code implementations • 23 May 2023 • Harman Singh, Poorva Garg, Mohit Gupta, Kevin Shah, Ashish Goswami, Satyam Modi, Arnab Kumar Mondal, Dinesh Khandelwal, Dinesh Garg, Parag Singla

We are interested in image manipulation via natural language text -- a task that is useful for multiple AI applications but requires complex reasoning over multi-modal spaces.

no code implementations • 12 Nov 2022 • Namasivayam Kalithasan, Himanshu Singh, Vishal Bindal, Arnav Tuli, Vishwajeet Agrawal, Rahul Jain, Parag Singla, Rohan Paul

Given a natural language instruction and an input scene, our goal is to train a model to output a manipulation program that can be executed by the robot.

1 code implementation • 17 Oct 2022 • Yatin Nandwani, Rishabh Ranjan, Mausam, Parag Singla

Experiments on several problems, both perceptual as well as symbolic, which require learning the constraints of an ILP, show that our approach has superior performance and scales much better compared to purely neural baselines and other state-of-the-art models that require solver-based training.

no code implementations • ICLR 2022 • Yatin Nandwani, Vidit Jain, Mausam, Parag Singla

One drawback of the proposed architectures, which are often based on Graph Neural Networks (GNNs), is that they cannot generalize across the size of the output space from which variables are assigned a value, for example, the set of colors in a GCP, or the board size in Sudoku.

1 code implementation • ACL 2022 • Vipul Rathore, Kartikeya Badola, Mausam, Parag Singla

The contextual embeddings of tokens are aggregated using attention with the candidate relation as query -- this summary of whole passage predicts the candidate relation.
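This aggregation step can be sketched in numpy, assuming a single candidate-relation embedding serves as the attention query (function and variable names are hypothetical, not from the paper's code):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def relation_pooled_summary(token_embs, relation_query):
    """Aggregate token embeddings with attention, using the
    candidate relation embedding as the query vector."""
    scores = token_embs @ relation_query  # one score per token
    weights = softmax(scores)             # attention weights over the passage
    return weights @ token_embs           # weighted summary of the passage

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))  # 5 contextual token embeddings, dim 8
query = rng.normal(size=8)        # candidate relation embedding
summary = relation_pooled_summary(tokens, query)
```

The resulting summary vector would then feed a classifier that scores the candidate relation.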

no code implementations • AKBC Workshop CSKB 2021 • Shourya Aggarwal, Divyanshu Mandowara, Vishwajeet Agrawal, Dinesh Khandelwal, Parag Singla, Dinesh Garg

We human-annotate a first-of-its-kind dataset (called ECQA) of positive and negative properties, as well as free-flow explanations, for $11K$ QA pairs taken from the CQA dataset.

no code implementations • 16 Jul 2021 • Rushil Gupta, Vishal Sharma, Yash Jain, Yitao Liang, Guy Van Den Broeck, Parag Singla

We work with object-centric models, i.e., models that explicitly operate on object representations, and propagate a loss in the latent space.

1 code implementation • 16 Jul 2021 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP

The basic idea in RAEs is to learn a non-linear mapping from the high-dimensional data space to a low-dimensional latent space and vice-versa, simultaneously imposing a distributional prior on the latent space, which brings in a regularization effect.
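The RAE objective described above can be sketched as a reconstruction term plus a latent regularizer; the toy linear encoder/decoder and squared-norm penalty toward a standard normal prior below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(scale=0.1, size=(16, 2))  # toy linear encoder (16-d -> 2-d)
W_dec = rng.normal(scale=0.1, size=(2, 16))  # toy linear decoder (2-d -> 16-d)

def rae_loss(x, lam=1.0):
    z = x @ W_enc                      # map data to the low-dimensional latent space
    x_hat = z @ W_dec                  # map latents back to the data space
    recon = np.mean((x - x_hat) ** 2)  # reconstruction error
    prior = np.mean(z ** 2)            # penalty pulling latents toward N(0, I)
    return recon + lam * prior         # regularized objective

x = rng.normal(size=(32, 16))
loss = rae_loss(x)
```

Setting `lam=0` recovers a plain autoencoder; increasing it trades reconstruction quality for agreement with the prior.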

no code implementations • 18 Apr 2021 • Keshav Kolluru, Mayank Singh Chauhan, Yatin Nandwani, Parag Singla, Mausam

Pre-trained language models (LMs) like BERT have been shown to store factual knowledge about the world.

1 code implementation • 28 Sep 2020 • Danish Contractor, Shashank Goel, Mausam, Parag Singla

In response, we develop the first joint spatio-textual reasoning model, which combines geo-spatial knowledge with information in textual corpora to answer questions.

no code implementations • ICLR 2021 • Yatin Nandwani, Deepanshu Jindal, Mausam, Parag Singla

Our framework uses a selection module, whose goal is to dynamically determine, for every input, the solution that is most effective for training the network parameters in any given learning iteration.

no code implementations • 10 Jun 2020 • Arnab Kumar Mondal, Himanshu Asnani, Parag Singla, Prathosh AP

Specifically, we consider the class of RAEs with deterministic Encoder-Decoder pairs, Wasserstein Auto-Encoders (WAE), and show that having a fixed prior distribution, a priori, oblivious to the dimensionality of the 'true' latent space, will lead to the infeasibility of the optimization problem considered.

1 code implementation • AKBC 2020 • Yatin Nandwani, Ankesh Gupta, Aman Agrawal, Mayank Singh Chauhan, Parag Singla, Mausam

State-of-the-art models for Knowledge Base Completion (KBC) are based on tensor factorization (TF), e.g., DistMult, ComplEx.
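For concreteness, the published scoring functions of these two TF models can be sketched in numpy: DistMult uses a diagonal bilinear product, which is symmetric in head and tail, while ComplEx breaks that symmetry with complex-valued embeddings:

```python
import numpy as np

def distmult_score(h, r, t):
    """DistMult: bilinear score with a diagonal relation matrix."""
    return np.sum(h * r * t)

def complex_score(h, r, t):
    """ComplEx: real part of a trilinear product over complex embeddings."""
    return np.real(np.sum(h * r * np.conj(t)))

rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 4))  # real head, relation, tail embeddings
s_dm = distmult_score(h, r, t)

hc = rng.normal(size=4) + 1j * rng.normal(size=4)
rc = rng.normal(size=4) + 1j * rng.normal(size=4)
tc = rng.normal(size=4) + 1j * rng.normal(size=4)
s_cx = complex_score(hc, rc, tc)
```

DistMult's head/tail symmetry is a known limitation for asymmetric relations, which is one motivation for ComplEx.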

no code implementations • 10 Dec 2019 • Arnab Kumar Mondal, Sankalan Pal Chowdhury, Aravind Jayendran, Parag Singla, Himanshu Asnani, Prathosh AP

The field of neural generative models is dominated by the highly successful Generative Adversarial Networks (GANs) despite their challenges, such as training instability and mode collapse.

1 code implementation • NeurIPS 2019 • Yatin Nandwani, Abhishek Pathak, Mausam, Parag Singla

In this paper, we present a constrained optimization formulation for training a deep network with a given set of hard constraints on output labels.
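One common way to realize such a formulation (a penalty-method sketch under assumed semantics, not necessarily the paper's exact construction) is to add a weighted constraint-violation term to the task loss:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def constrained_loss(logits, target, lam):
    """Cross-entropy plus a penalty for violating a label constraint.

    Toy constraint (hypothetical): the probability mass on the first two
    labels must sum to at least 0.5, standing in for a hard output constraint.
    """
    p = softmax(logits)
    ce = -np.log(p[target])                      # standard task loss
    violation = max(0.0, 0.5 - (p[0] + p[1]))    # 0 when the constraint holds
    return ce + lam * violation

logits = np.array([0.2, 0.1, 2.0])
loss_soft = constrained_loss(logits, target=2, lam=0.0)   # unconstrained
loss_hard = constrained_loss(logits, target=2, lam=10.0)  # penalized
```

Increasing the multiplier `lam` drives the network toward outputs that satisfy the constraint, approximating the hard-constrained problem.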

no code implementations • 8 Sep 2019 • Danish Contractor, Krunal Shah, Aditi Partap, Mausam, Parag Singla

We introduce the novel task of answering entity-seeking recommendation questions using a collection of reviews that describe candidate answer entities.

no code implementations • 24 Nov 2018 • Dinesh Khandelwal, Suyash Agrawal, Parag Singla, Chetan Arora

Designing such a network, as well as collecting jointly labeled data for training is a non-trivial task.

no code implementations • 3 Jul 2018 • Happy Mittal, Ayush Bhardwaj, Vibhav Gogate, Parag Singla

Experiments on the benchmark Friends & Smokers domain show that our approach results in significantly higher accuracies compared to existing methods when testing on domains whose sizes differ from those seen during training.

no code implementations • 2 Jul 2018 • Vishal Sharma, Noman Ahmed Sheikh, Happy Mittal, Vibhav Gogate, Parag Singla

Lifted inference reduces the complexity of inference in relational probabilistic models by identifying groups of constants (or atoms) which behave symmetric to each other.

1 code implementation • 2 Jul 2018 • Gagan Madan, Ankit Anand, Mausam, Parag Singla

These orbits are represented compactly using permutations over variables, and variable-value (VV) pairs, but they can miss several state symmetries in a domain.

no code implementations • 5 Jan 2018 • Danish Contractor, Barun Patra, Mausam, Parag Singla

We introduce the first system towards the novel task of answering complex multisentence recommendation questions in the tourism domain.

1 code implementation • 27 Jul 2017 • Ankit Anand, Ritesh Noothigattu, Parag Singla, Mausam

Moreover, algorithms for lifted inference in multi-valued domains also compute a multi-valued extension of count symmetries only.

1 code implementation • 22 Jul 2017 • Haroun Habeeb, Ankit Anand, Mausam, Parag Singla

We demonstrate the performance of C2F inference by developing lifted versions of two near state-of-the-art CV algorithms for stereo vision and interactive image segmentation.

no code implementations • 30 Jun 2016 • Ankit Anand, Aditya Grover, Mausam, Parag Singla

We extend previous work on exploiting symmetries in the MCMC framework to the case of contextual symmetries.

no code implementations • 30 Jun 2016 • David Smith, Parag Singla, Vibhav Gogate

Due to the intractable nature of exact lifted inference, research has recently focused on the discovery of accurate and efficient approximate inference algorithms in Statistical Relational Models (SRMs), such as Lifted First-Order Belief Propagation.

no code implementations • CVPR 2016 • Ishant Shanu, Chetan Arora, Parag Singla

Current state-of-the-art inference algorithms for general submodular functions take many hours for problems with clique size 16, and fail to scale beyond that.

no code implementations • NeurIPS 2015 • Happy Mittal, Anuj Mahajan, Vibhav G. Gogate, Parag Singla

Lifted inference rules exploit symmetries for fast reasoning in statistical relational models.

no code implementations • NeurIPS 2015 • Timothy Kopp, Parag Singla, Henry Kautz

Symmetry breaking is a technique for speeding up propositional satisfiability testing by adding constraints to the theory that restrict the search space while preserving satisfiability.
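A tiny brute-force illustration of the idea (a toy sketch, not any particular encoding): for a theory whose two variables are interchangeable, adding a lex-leader clause preserves satisfiability while pruning symmetric assignments:

```python
from itertools import product

def models(clauses, n):
    """All satisfying assignments; literal i > 0 means variable i is true."""
    return [a for a in product([False, True], repeat=n)
            if all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)]

# Toy theory: exactly one of two interchangeable variables x1, x2 is true.
theory = [[1, 2], [-1, -2]]

# Swapping x1 and x2 maps models to models, so the lex-leader constraint
# x1 >= x2 -- i.e., the clause (x1 OR NOT x2) -- can be added safely.
broken = theory + [[1, -2]]
```

Here `models(theory, 2)` contains two mirror-image models, while `models(broken, 2)` keeps exactly one of them: satisfiability is preserved and the search space shrinks.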

no code implementations • NeurIPS 2015 • Somdeb Sarkhel, Parag Singla, Vibhav G. Gogate

A key advantage of these lifted algorithms is that they have much smaller computational complexity than propositional algorithms when symmetries are present in the MLN and these symmetries can be detected using lifted inference rules.

no code implementations • NeurIPS 2014 • Somdeb Sarkhel, Deepak Venugopal, Parag Singla, Vibhav G. Gogate

In this paper, we present a new approach for lifted MAP inference in Markov logic networks (MLNs).

no code implementations • NeurIPS 2014 • Happy Mittal, Prasoon Goyal, Vibhav G. Gogate, Parag Singla

In this paper, we present two new lifting rules, which enable fast MAP inference in a large class of MLNs.
