Search Results for author: Neeraj Varshney

Found 26 papers, 4 papers with code

Unsupervised Natural Language Inference Using PHL Triplet Generation

1 code implementation · Findings (ACL) 2022 · Neeraj Varshney, Pratyay Banerjee, Tejas Gokhale, Chitta Baral

Transformer-based models achieve impressive performance on numerous Natural Language Inference (NLI) benchmarks when trained on respective training datasets.

Natural Language Inference · Sentence

ILDAE: Instance-Level Difficulty Analysis of Evaluation Data

1 code implementation · ACL 2022 · Neeraj Varshney, Swaroop Mishra, Chitta Baral

Knowledge of questions' difficulty level helps a teacher in several ways, such as quickly estimating students' potential by asking carefully selected questions and improving the quality of an examination by modifying trivial and hard questions.

Towards Question Format Independent Numerical Reasoning: A Set of Prerequisite Tasks

no code implementations · 18 May 2020 · Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Chitta Baral

However, there is a strong need for a benchmark that can evaluate models' ability to perform question-format-independent numerical reasoning, because (i) the numerical reasoning capabilities we want to teach are not controlled by question formats, and (ii) for numerical reasoning technology to have the best possible application, it must be able to process language and reason in a way that is not exclusive to a single format, task, dataset, or domain.

Natural Language Inference · Question Answering +1

Towards Improving Selective Prediction Ability of NLP Systems

no code implementations · RepL4NLP (ACL) 2022 · Neeraj Varshney, Swaroop Mishra, Chitta Baral

In (IID, OOD) settings, we show that the representations learned by our calibrator result in an improvement of (15.81%, 5.64%) and (6.19%, 13.9%) over 'MaxProb' -- a selective prediction baseline -- on NLI and DD tasks respectively.

Natural Language Inference
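The 'MaxProb' baseline mentioned above is simple to sketch: answer only when the model's top softmax probability clears a confidence threshold, and abstain otherwise. A minimal illustration (function name and threshold are my own, not the paper's):

```python
import numpy as np

def maxprob_selective_prediction(logits, threshold=0.9):
    """Answer only when the top softmax probability clears the threshold."""
    # numerically stable softmax over the last axis
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    confidence = probs.max(axis=-1)
    predictions = probs.argmax(axis=-1)
    # -1 marks an abstention
    return np.where(confidence >= threshold, predictions, -1)

logits = np.array([[4.0, 0.5, 0.2],   # confident -> predict class 0
                   [1.0, 0.9, 0.8]])  # near-uniform -> abstain
print(maxprob_selective_prediction(logits))
```

The calibrator proposed in the paper replaces this raw softmax confidence with a learned estimate; the abstention logic stays the same.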

Beamformed Energy Detection in the Presence of an Interferer for Cognitive mmWave Network

no code implementations · 31 Jul 2020 · Madhuri Latha Mannedu, Sai Krishna Charan Dara, Sachin Chaudhari, Neeraj Varshney

To demonstrate the bound on the system performance, the proposed sensing scheme is designed under the knowledge of full channel state information (CSI) at the SU for the PU-SU and Interferer-SU channels.
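At its core, the spectrum-sensing scheme builds on the classic energy detector: compare the average received signal energy against a threshold to decide whether the primary user is transmitting. A minimal sketch of the plain (non-beamformed, interferer-free) detector with illustrative numbers; the paper's beamformed, CSI-aware scheme is considerably more involved:

```python
import numpy as np

def energy_detect(samples, threshold):
    """Classic energy detector: declare the primary user present when
    the average received energy exceeds the threshold."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold

rng = np.random.default_rng(0)
# complex Gaussian noise, unit variance per component -> mean energy ~2
noise = rng.normal(0, 1, 1000) + 1j * rng.normal(0, 1, 1000)
signal = noise + 2.0  # add a constant signal component -> mean energy ~6
print(energy_detect(noise, threshold=3.0), energy_detect(signal, threshold=3.0))
```

The threshold is normally chosen from the noise statistics to meet a target false-alarm probability; it is hard-coded here only for illustration.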

Can Transformers Reason About Effects of Actions?

no code implementations · 17 Dec 2020 · Pratyay Banerjee, Chitta Baral, Man Luo, Arindam Mitra, Kuntal Pal, Tran C. Son, Neeraj Varshney

A recent work has shown that transformers are able to "reason" with facts and rules in a limited setting where the rules are natural language expressions of conjunctions of conditions implying a conclusion.

Common Sense Reasoning · Question Answering

On the Performance of the Primary and Secondary Links in a 3-D Underlay Cognitive Molecular Communication

no code implementations · 11 Feb 2021 · Nithin V. Sabu, Neeraj Varshney, Abhishek K. Gupta

In this work, we consider a system in three-dimensional (3-D) space with two coexisting communication links, each between a point transmitter and a fully-absorbing spherical receiver (FAR), where one link (termed primary) has priority over the other (termed secondary).

Information Theory

Interviewer-Candidate Role Play: Towards Developing Real-World NLP Systems

no code implementations · 1 Jul 2021 · Neeraj Varshney, Swaroop Mishra, Chitta Baral

However, our task poses a significant challenge for NLP researchers: further improving OOD performance at each stage.

Natural Language Inference

Hybrid Transceiver Design for Tera-Hertz MIMO Systems Relying on Bayesian Learning Aided Sparse Channel Estimation

no code implementations · 20 Sep 2021 · Suraj Srivastava, Ajeet Tripathi, Neeraj Varshney, Aditya K. Jagannatham, Lajos Hanzo

Hybrid transceiver design in multiple-input multiple-output (MIMO) Tera-Hertz (THz) systems relying on sparse channel state information (CSI) estimation techniques is conceived.

Benchmarking

Let the Model Decide its Curriculum for Multitask Learning

no code implementations · DeepLo 2022 · Neeraj Varshney, Swaroop Mishra, Chitta Baral

Curriculum learning strategies in prior multi-task learning approaches arrange datasets in a difficulty hierarchy either based on human perception or by exhaustively searching the optimal arrangement.

Multi-Task Learning

Model Cascading: Towards Jointly Improving Efficiency and Accuracy of NLP Systems

no code implementations · 11 Oct 2022 · Neeraj Varshney, Chitta Baral

Through comprehensive experiments in multiple task settings that differ in the number of models available for cascading (K value), we show that cascading improves both the computational efficiency and the prediction accuracy.

Computational Efficiency
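The cascading idea can be sketched as: invoke models from cheapest to most expensive and return the first prediction whose confidence clears a threshold, deferring to the largest model only when needed. The API below is hypothetical (each model returns a `(label, confidence)` pair); the paper studies several cascading strategies beyond this simple one:

```python
def cascade_predict(models, x, threshold=0.8):
    """Try models from cheapest to most expensive; return the first
    sufficiently confident prediction."""
    for model in models[:-1]:
        label, conf = model(x)
        if conf >= threshold:
            return label  # early exit saves the cost of larger models
    # fall back to the largest model's answer unconditionally
    return models[-1](x)[0]

# toy stand-ins for a small and a large sentiment classifier
small = lambda x: ("positive", 0.95) if "great" in x else ("negative", 0.55)
large = lambda x: ("positive", 0.99)
print(cascade_predict([small, large], "a great movie"))  # small model suffices
print(cascade_predict([small, large], "a dull movie"))   # deferred to large
```

Accuracy can improve alongside efficiency because low-confidence small-model predictions, which are the most error-prone, are exactly the ones re-routed to the stronger model.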

Performance Analysis of LEO Satellite-Based IoT Networks in the Presence of Interference

no code implementations · 8 Nov 2022 · Ayush Kumar Dwivedi, Sachin Chaudhari, Neeraj Varshney, Pramod K. Varshney

The paper also presents simplified expressions for the OP under a high signal-to-noise ratio (SNR) assumption, which are utilized to optimize the system parameters for achieving a target OP.

Can Open-Domain QA Reader Utilize External Knowledge Efficiently like Humans?

no code implementations · 23 Nov 2022 · Neeraj Varshney, Man Luo, Chitta Baral

Compared with the FiD reader, this approach matches its accuracy while utilizing just 18.32% of its reader inference cost, and also outperforms it by achieving up to 55.10% accuracy on NQ Open.

Open-Domain Question Answering · TriviaQA

Methods and Mechanisms for Interactive Novelty Handling in Adversarial Environments

no code implementations · 28 Feb 2023 · Tung Thai, Ming Shen, Mayank Garg, Ayush Kalani, Nakul Vaidya, Utkarsh Soni, Mudit Verma, Sriram Gopalakrishnan, Neeraj Varshney, Chitta Baral, Subbarao Kambhampati, Jivko Sinapov, Matthias Scheutz

Learning to detect, characterize, and accommodate novelties is a challenge that agents operating in open-world domains must address to guarantee satisfactory task performance.

Novelty Detection

Post-Abstention: Towards Reliably Re-Attempting the Abstained Instances in QA

no code implementations · 2 May 2023 · Neeraj Varshney, Chitta Baral

Despite remarkable progress made in natural language processing, even the state-of-the-art models often make incorrect predictions.

A Unified Evaluation Framework for Novelty Detection and Accommodation in NLP with an Instantiation in Authorship Attribution

no code implementations · 8 May 2023 · Neeraj Varshney, Himanshu Gupta, Eric Robertson, Bing Liu, Chitta Baral

To initiate a systematic research in this important area of 'dealing with novelties', we introduce 'NoveltyTask', a multi-stage task to evaluate a system's performance on pipelined novelty 'detection' and 'accommodation' tasks.

Authorship Attribution · Novelty Detection

A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation

no code implementations · 8 Jul 2023 · Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, Dong Yu

Specifically, the detection technique achieves a recall of ~88%, and the mitigation technique successfully mitigates 57.6% of the correctly detected hallucinations.

Hallucination
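The detection step can be sketched as flagging low-probability tokens in the model's generation so that the statements containing them can be validated (e.g. against retrieved evidence) before generation continues. The function name and threshold below are illustrative, not the authors' implementation:

```python
def flag_low_confidence_spans(tokens, token_probs, threshold=0.5):
    """Flag generated tokens whose probability falls below a threshold;
    in the paper's pipeline these trigger a validation step, and failed
    validations are repaired before generation proceeds."""
    return [(i, tok)
            for i, (tok, p) in enumerate(zip(tokens, token_probs))
            if p < threshold]

tokens = ["Paris", "is", "the", "capital", "of", "Germany"]
probs  = [0.96, 0.99, 0.99, 0.97, 0.99, 0.31]  # low confidence on "Germany"
print(flag_low_confidence_spans(tokens, probs))
```

Validating during generation, rather than after the full output is produced, is what lets an early detected error be corrected before it propagates into later sentences.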

Can NLP Models 'Identify', 'Distinguish', and 'Justify' Questions that Don't have a Definitive Answer?

no code implementations · 8 Sep 2023 · Ayushi Agarwal, Nisarg Patel, Neeraj Varshney, Mihir Parmar, Pavan Mallina, Aryan Bhavin Shah, Srihari Raju Sangaraju, Tirth Patel, Nihar Thakkar, Chitta Baral

Though state-of-the-art (SOTA) NLP systems have achieved remarkable performance on a variety of language understanding tasks, they primarily focus on questions that have a correct and a definitive answer.

Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models

no code implementations · 2 Oct 2023 · Man Luo, Shrinidhi Kumbhar, Ming Shen, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, Somak Aditya, Chitta Baral

This work strives to understand the proficiency of LLMs in logical reasoning by offering a brief review of the latest progress in this area, with a focus on logical reasoning datasets, tasks, and the methods adopted to leverage LLMs for reasoning.

Knowledge Distillation · Language Modelling +1

Accelerating LLaMA Inference by Enabling Intermediate Layer Decoding via Instruction Tuning with LITE

no code implementations · 28 Oct 2023 · Neeraj Varshney, Agneet Chatterjee, Mihir Parmar, Chitta Baral

Large Language Models (LLMs) have achieved remarkable performance across a wide variety of natural language tasks; however, their large size makes their inference slow and computationally expensive.

Semantic Similarity · Semantic Textual Similarity +1

The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness

no code implementations · 30 Dec 2023 · Neeraj Varshney, Pavel Dolin, Agastya Seth, Chitta Baral

As Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications, their safety concerns become critical areas of NLP research.
