Search Results for author: Somak Aditya

Found 23 papers, 8 papers with code

MATHSENSEI: A Tool-Augmented Large Language Model for Mathematical Reasoning

1 code implementation • 27 Feb 2024 • Debrup Das, Debopriyo Banerjee, Somak Aditya, Ashish Kulkarni

While TALMs have been successfully employed in different question-answering benchmarks, their efficacy on complex mathematical reasoning benchmarks, and the potential complementary benefits offered by tools for knowledge retrieval and mathematical equation solving, remain open research questions.

GSM8K Language Modelling +5

GRAFFORD: A Benchmark Dataset for Testing the Knowledge of Object Affordances of Language and Vision Models

1 code implementation • 20 Feb 2024 • Sayantan Adak, Daivik Agrawal, Animesh Mukherjee, Somak Aditya

We investigate the knowledge of object affordances in pre-trained language models (LMs) and pre-trained Vision-Language models (VLMs).

Object

Caught in the Quicksand of Reasoning, Far from AGI Summit: Evaluating LLMs' Mathematical and Coding Competency through Ontology-guided Interventions

1 code implementation • 17 Jan 2024 • Pengfei Hong, Deepanway Ghosal, Navonil Majumder, Somak Aditya, Rada Mihalcea, Soujanya Poria

Recent advancements in Large Language Models (LLMs) have showcased striking results on existing logical reasoning benchmarks, with some models even surpassing human performance.

Arithmetic Reasoning Code Generation +3

Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models

no code implementations • 2 Oct 2023 • Man Luo, Shrinidhi Kumbhar, Ming Shen, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, Somak Aditya, Chitta Baral

This work strives to understand the proficiency of LLMs in logical reasoning by offering a brief review of the latest progress in this area, with a focus on logical reasoning datasets, tasks, and the methods adopted to utilize LLMs for reasoning.

Knowledge Distillation Language Modelling +1

Tricking LLMs into Disobedience: Formalizing, Analyzing, and Detecting Jailbreaks

1 code implementation • 24 May 2023 • Abhinav Rao, Sachin Vashistha, Atharva Naik, Somak Aditya, Monojit Choudhury

Recent explorations with commercial Large Language Models (LLMs) have shown that non-expert users can jailbreak LLMs by simply manipulating their prompts, resulting in degenerate output behavior, privacy and security breaches, offensive outputs, and violations of content-regulator policies.

ReMask: A Robust Information-Masking Approach for Domain Counterfactual Generation

no code implementations • 4 May 2023 • Pengfei Hong, Rishabh Bhardwaj, Navonil Majumder, Somak Aditya, Soujanya Poria

Our experiments empirically show that the counterfactual samples sourced from our masked text lead to improved domain transfer on 10 out of 12 domain sentiment classification settings, with an average of 2% accuracy improvement over the state-of-the-art for unsupervised domain adaptation (UDA).

counterfactual intent-classification +5

Generating Intermediate Steps for NLI with Next-Step Supervision

no code implementations • 31 Aug 2022 • Deepanway Ghosal, Somak Aditya, Monojit Choudhury

The Natural Language Inference (NLI) task often requires reasoning over multiple steps to reach the conclusion.

Data Augmentation Natural Language Inference

Analyzing the Effects of Reasoning Types on Cross-Lingual Transfer Performance

1 code implementation • EMNLP (MRL) 2021 • Karthikeyan K, Aalok Sathe, Somak Aditya, Monojit Choudhury

Multilingual language models achieve impressive zero-shot accuracies in many languages in complex tasks such as Natural Language Inference (NLI).

Cross-Lingual Transfer Natural Language Inference

Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task

1 code implementation • 15 Jul 2021 • Ishan Tarunesh, Somak Aditya, Monojit Choudhury

The recent state-of-the-art natural language understanding (NLU) systems often behave unpredictably, failing on simpler reasoning examples.

Natural Language Inference Natural Language Understanding

Analyzing the Nuances of Transformers' Polynomial Simplification Abilities

no code implementations • 29 Apr 2021 • Vishesh Agarwal, Somak Aditya, Navin Goyal

To understand Transformers' abilities in such tasks in a fine-grained manner, we deviate from traditional end-to-end settings, and explore a step-wise polynomial simplification task.

Do Transformers Understand Polynomial Simplification?

no code implementations • 1 Jan 2021 • Vishesh Agarwal, Somak Aditya, Navin Goyal

For a polynomial which is not necessarily in this normal form, a sequence of simplification steps is applied to reach the fully simplified (i.e., in the normal form) polynomial.
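The step-wise setting described above can be illustrated with a minimal sketch (not the paper's code or data format): a polynomial is held as a list of (coefficient, exponent) terms, and each simplification step merges one pair of like terms until the normal form is reached. The representation and helper names here are hypothetical.

```python
def simplify_step(terms):
    """Merge the first pair of terms sharing an exponent (one step).

    Returns the new term list and whether a merge happened.
    """
    seen = {}
    for i, (coeff, exp) in enumerate(terms):
        if exp in seen:
            j = seen[exp]  # earlier term with the same exponent
            merged = (terms[:j] + [(terms[j][0] + coeff, exp)]
                      + terms[j + 1:i] + terms[i + 1:])
            return merged, True
        seen[exp] = i
    return terms, False

def to_normal_form(terms):
    """Apply simplification steps until no like terms remain."""
    steps, changed = 0, True
    while changed:
        terms, changed = simplify_step(terms)
        steps += changed
    # Normal form: one term per exponent, highest degree first.
    return sorted(terms, key=lambda t: -t[1]), steps

# (x**2 + 3*x + 2) + (3*x**2 - 3*x): five terms, not in normal form.
terms = [(1, 2), (3, 1), (2, 0), (3, 2), (-3, 1)]
normal, steps = to_normal_form(terms)
print(normal, steps)  # [(4, 2), (0, 1), (2, 0)] 2
```

Keeping each merge as a separate step mirrors the paper's deviation from end-to-end prediction: the model (or here, the helper) is supervised on one simplification move at a time rather than on the final polynomial alone.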

TaxiNLI: Taking a Ride up the NLU Hill

1 code implementation • CONLL 2020 • Pratik Joshi, Somak Aditya, Aalok Sathe, Monojit Choudhury

Pre-trained Transformer-based neural architectures have consistently achieved state-of-the-art performance in the Natural Language Inference (NLI) task.

Natural Language Inference

Uncovering Relations for Marketing Knowledge Representation

no code implementations • 18 Dec 2019 • Somak Aditya, Atanu Sinha

Online behaviors of consumers and marketers generate massive marketing data, which increasingly sophisticated models attempt to turn into insights that aid marketers' decisions.

Marketing Relation

Integrating Knowledge and Reasoning in Image Understanding

no code implementations • 24 Jun 2019 • Somak Aditya, Yezhou Yang, Chitta Baral

Deep learning based data-driven approaches have been successfully applied in various image understanding applications, ranging from object recognition and semantic segmentation to visual question answering.

Object Recognition Question Answering +2

Spatial Knowledge Distillation to aid Visual Reasoning

no code implementations • 10 Dec 2018 • Somak Aditya, Rudra Saha, Yezhou Yang, Chitta Baral

We propose a framework that combines recent advances in knowledge distillation (teacher-student framework), relational reasoning and probabilistic logical languages to incorporate such knowledge in existing neural networks for the task of Visual Question Answering.

Knowledge Distillation Question Answering +3

Explicit Reasoning over End-to-End Neural Architectures for Visual Question Answering

no code implementations • 23 Mar 2018 • Somak Aditya, Yezhou Yang, Chitta Baral

Here we adopt Visual Question Answering (VQA) as an example task, where a system is expected to answer a question in natural language about an image.

Question Answering Visual Question Answering

From Images to Sentences through Scene Description Graphs using Commonsense Reasoning and Knowledge

no code implementations • 10 Nov 2015 • Somak Aditya, Yezhou Yang, Chitta Baral, Cornelia Fermuller, Yiannis Aloimonos

Specifically, commonsense reasoning is applied on (a) detections obtained from existing perception methods on given images, (b) a "commonsense" knowledge base constructed using natural language processing of image annotations and (c) lexical ontological knowledge from resources such as WordNet.

image-sentence alignment Sentence
