1 code implementation • 27 Feb 2024 • Debrup Das, Debopriyo Banerjee, Somak Aditya, Ashish Kulkarni
While tool-augmented language models (TALMs) have been successfully employed on a variety of question-answering benchmarks, their efficacy on complex mathematical reasoning benchmarks, and the potential complementary benefits offered by tools for knowledge retrieval and mathematical equation solving, remain open research questions.
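As a rough illustration of the tool-use loop such systems rely on, here is a minimal sketch in which a model-emitted call is routed to an equation-solving tool built on sympy. The dispatch format and tool names are illustrative assumptions, not the paper's actual protocol.

```python
# Minimal sketch of one tool-augmented reasoning step: the LM emits a
# structured tool call, the harness executes it, and the result is fed
# back into the model's context. Tool names/format are assumptions.
import sympy as sp

def solve_equation(expr: str, var: str = "x"):
    """Equation-solving tool: roots of expr == 0 in the given variable."""
    return sp.solve(sp.sympify(expr), sp.Symbol(var))

TOOLS = {"solve": solve_equation}

def dispatch(tool_call: dict):
    """Route a model-emitted tool call to the matching tool."""
    return TOOLS[tool_call["tool"]](**tool_call["args"])

# A hypothetical call the LM might emit for "solve x^2 - 5x + 6 = 0":
print(dispatch({"tool": "solve", "args": {"expr": "x**2 - 5*x + 6"}}))  # [2, 3]
```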
1 code implementation • 20 Feb 2024 • Sayantan Adak, Daivik Agrawal, Animesh Mukherjee, Somak Aditya
We investigate the knowledge of object affordances in pre-trained language models (LMs) and pre-trained Vision-Language models (VLMs).
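As a concrete illustration of this kind of probing, the sketch below queries a masked LM with a cloze-style affordance prompt via Hugging Face's fill-mask pipeline; the prompt wording and checkpoint are illustrative assumptions, not the paper's probe set.

```python
# Probe a pre-trained masked LM for object-affordance knowledge with a
# cloze prompt; the top predictions hint at what the model thinks a
# knife affords. Prompt and checkpoint are illustrative assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("A knife can be used to [MASK] bread.", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```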
1 code implementation • 18 Jan 2024 • Haritz Puerto, Martin Tutek, Somak Aditya, Xiaodan Zhu, Iryna Gurevych
Reasoning is a fundamental component of language understanding.
1 code implementation • 17 Jan 2024 • Pengfei Hong, Deepanway Ghosal, Navonil Majumder, Somak Aditya, Rada Mihalcea, Soujanya Poria
Recent advancements in Large Language Models (LLMs) have showcased striking results on existing logical reasoning benchmarks, with some models even surpassing human performance.
no code implementations • 2 Oct 2023 • Man Luo, Shrinidhi Kumbhar, Ming Shen, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, Somak Aditya, Chitta Baral
This work strives to understand the proficiency of LLMs in logical reasoning by offering a brief review of the latest progress in this area, with a focus on the logical reasoning datasets, tasks, and the methods adopted to utilize LLMs for reasoning.
1 code implementation • 24 May 2023 • Abhinav Rao, Sachin Vashistha, Atharva Naik, Somak Aditya, Monojit Choudhury
Recent explorations with commercial Large Language Models (LLMs) have shown that non-expert users can jailbreak LLMs by simply manipulating their prompts, resulting in degenerate output behavior, privacy and security breaches, offensive outputs, and violations of content regulator policies.
no code implementations • 4 May 2023 • Pengfei Hong, Rishabh Bhardwaj, Navonil Majumder, Somak Aditya, Soujanya Poria
Our experiments empirically show that the counterfactual samples sourced from our masked text lead to improved domain transfer on 10 out of 12 domain sentiment classification settings, with an average of 2% accuracy improvement over the state-of-the-art for unsupervised domain adaptation (UDA).
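As a rough sketch of the general masking-and-refilling idea (the pivot selection below is a toy assumption, not the paper's masking procedure), one can mask a domain-indicative token and let a masked LM propose counterfactual variants:

```python
# Generate counterfactual variants of a source-domain sentence by masking
# a domain-specific token and refilling it with a masked LM.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
review = "The battery life of this laptop is disappointing."
masked = review.replace("laptop", "[MASK]")  # toy pivot selection
for pred in fill(masked, top_k=3):
    print(pred["sequence"])
```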
no code implementations • 31 Aug 2022 • Deepanway Ghosal, Somak Aditya, Monojit Choudhury
The Natural Language Inference (NLI) task often requires reasoning over multiple steps to reach the conclusion.
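As an illustration of how such multi-step inference can be decomposed into single-step entailment checks, here is a minimal sketch using roberta-large-mnli, a standard off-the-shelf NLI checkpoint used purely for illustration, not the paper's setup:

```python
# Chain single-step entailment checks with an off-the-shelf NLI model:
# accumulate premises and test whether they entail the conclusion.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entails(premise: str, hypothesis: str) -> bool:
    inputs = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[logits.argmax(-1).item()]
    return label == "ENTAILMENT"  # labels: CONTRADICTION / NEUTRAL / ENTAILMENT

premises = ["All birds can fly.", "A sparrow is a bird."]
print(entails(" ".join(premises), "A sparrow can fly."))
```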
no code implementations • 24 Mar 2022 • Karthikeyan K, Shaily Bhatt, Pankaj Singh, Somak Aditya, Sandipan Dandapat, Sunayana Sitaram, Monojit Choudhury
We compare the TEA CheckLists with CheckLists created with different levels of human intervention.
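For context, a CheckList is a suite of templated behavioral tests. A minimal sketch of the hand-written templating that automatically extracted templates are compared against, using the checklist library (marcotcr/checklist), might look like this; the template contents are illustrative:

```python
# CheckList-style templated test generation; {first_name} is a built-in
# lexicon in the library, and {a:adj} fills in the correct article.
from checklist.editor import Editor

editor = Editor()
tests = editor.template("{first_name} is {a:adj} person.", adj=["honest", "rude"])
print(tests.data[:3])  # a few generated test sentences
```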
no code implementations • 4 Dec 2021 • Ishan Tarunesh, Somak Aditya, Monojit Choudhury
Natural Language Inference (NLI) is considered a representative task to test natural language understanding (NLU).
1 code implementation • EMNLP (MRL) 2021 • Karthikeyan K, Aalok Sathe, Somak Aditya, Monojit Choudhury
Multilingual language models achieve impressive zero-shot accuracies in many languages in complex tasks such as Natural Language Inference (NLI).
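A minimal sketch of this zero-shot transfer setting: a multilingual NLI model fine-tuned on (mostly English) NLI data is applied directly to a premise-hypothesis pair in another language. The checkpoint name is an assumption for illustration.

```python
# Zero-shot cross-lingual NLI: a multilingual model classifies a German
# premise-hypothesis pair without any German-specific fine-tuning.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "joeddav/xlm-roberta-large-xnli"     # checkpoint choice is an assumption
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "Der Hund schläft auf dem Sofa."  # "The dog sleeps on the sofa."
hypothesis = "Ein Tier ruht sich aus."      # "An animal is resting."
inputs = tok(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # expect: entailment
```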
1 code implementation • 15 Jul 2021 • Ishan Tarunesh, Somak Aditya, Monojit Choudhury
Recent state-of-the-art natural language understanding (NLU) systems often behave unpredictably, failing on simpler reasoning examples.
no code implementations • 29 Apr 2021 • Vishesh Agarwal, Somak Aditya, Navin Goyal
To understand Transformers' abilities in such tasks in a fine-grained manner, we deviate from traditional end-to-end settings, and explore a step-wise polynomial simplification task.
no code implementations • 1 Jan 2021 • Vishesh Agarwal, Somak Aditya, Navin Goyal
For a polynomial which is not necessarily in this normal form, a sequence of simplification steps is applied to reach the fully simplified (i.e., in the normal form) polynomial.
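As a rough illustration of such a step sequence (the decomposition below is generated with sympy for illustration, not the paper's dataset procedure):

```python
# Step-wise simplification of a polynomial toward its normal form:
# expand one product term per step, then collect like terms.
import sympy as sp

x = sp.Symbol("x")
expr = (x + 1)*(x + 2) + x*(x + 3)              # not in normal form
step1 = sp.expand((x + 1)*(x + 2)) + x*(x + 3)  # expand the first product
step2 = sp.expand(step1)                        # expand the rest, collect terms

for i, s in enumerate([expr, step1, step2]):
    print(f"step {i}: {s}")
# step 2 prints the normal form: 2*x**2 + 6*x + 2
```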
1 code implementation • CONLL 2020 • Pratik Joshi, Somak Aditya, Aalok Sathe, Monojit Choudhury
Pre-trained Transformer-based neural architectures have consistently achieved state-of-the-art performance in the Natural Language Inference (NLI) task.
no code implementations • 18 Dec 2019 • Somak Aditya, Atanu Sinha
Online behaviors of consumers and marketers generate massive marketing data, which increasingly sophisticated models attempt to turn into insights that aid marketers' decisions.
no code implementations • 24 Jun 2019 • Somak Aditya, Yezhou Yang, Chitta Baral
Deep learning based data-driven approaches have been successfully applied in various image understanding applications, ranging from object recognition and semantic segmentation to visual question answering.
no code implementations • 10 Dec 2018 • Somak Aditya, Rudra Saha, Yezhou Yang, Chitta Baral
We propose a framework that combines recent advances in knowledge distillation (teacher-student framework), relational reasoning and probabilistic logical languages to incorporate such knowledge in existing neural networks for the task of Visual Question Answering.
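As a rough sketch of the teacher-student component (the temperature and loss weighting are illustrative assumptions; the paper's full framework also involves relational reasoning and probabilistic logical languages):

```python
# Standard knowledge-distillation objective: KL divergence between the
# temperature-softened teacher and student answer distributions, mixed
# with the usual supervised cross-entropy on gold answers.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                   # rescale gradient magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(4, 10)        # student logits over 10 candidate answers
teacher = torch.randn(4, 10)        # teacher logits
gold = torch.randint(0, 10, (4,))   # gold answer indices
print(distillation_loss(student, teacher, gold))
```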
no code implementations • 23 Mar 2018 • Somak Aditya, Yezhou Yang, Chitta Baral
Here we adopt Visual Question Answering (VQA) as an example task, where a system is expected to answer a question in natural language about an image.
no code implementations • 17 Nov 2016 • Somak Aditya, Yezhou Yang, Chitta Baral, Yiannis Aloimonos
We compile a dataset of over 3k riddles where each riddle consists of 4 images and a ground-truth answer.
no code implementations • 10 Nov 2015 • Somak Aditya, Yezhou Yang, Chitta Baral, Cornelia Fermuller, Yiannis Aloimonos
Specifically, commonsense reasoning is applied to (a) detections obtained from existing perception methods on given images, (b) a "commonsense" knowledge base constructed using natural language processing of image annotations, and (c) lexical ontological knowledge from resources such as WordNet.
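The WordNet component, at least, is easy to illustrate; a minimal sketch of pulling lexical ontological knowledge via NLTK:

```python
# Look up lexical ontological knowledge (definition and hypernyms,
# i.e. more general concepts) for a detected object in WordNet.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

dog = wn.synsets("dog")[0]                  # first sense: dog.n.01
print(dog.definition())
print([h.name() for h in dog.hypernyms()])  # e.g. canine.n.02, ...
```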