QNLI

6 papers with code • 0 benchmarks • 0 datasets

QNLI (Question-answering Natural Language Inference) is a sentence-pair classification task from the GLUE benchmark, constructed from SQuAD: given a question and a sentence drawn from the corresponding Wikipedia passage, the model must decide whether the sentence contains the answer to the question.

Most implemented papers

Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space

dayihengliu/CRQDA • EMNLP 2020

In this paper, we propose a novel data augmentation method, referred to as Controllable Rewriting based Question Data Augmentation (CRQDA), for machine reading comprehension (MRC), question generation, and question-answering natural language inference tasks.
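
As a rough intuition for what "rewriting in continuous space" means, the toy sketch below perturbs a question embedding along the gradient of a task loss and decodes the result back to tokens. Every component here (the vocabulary, the mean-pooled encoder, the nearest-neighbour decoder, the linear task head) is a hypothetical stand-in, not the authors' CRQDA pipeline.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-ins for CRQDA's pretrained components.
vocab = ["what", "who", "is", "the", "capital", "president", "of", "france"]
emb = torch.nn.Embedding(len(vocab), 16)
task_head = torch.nn.Linear(16, 2)  # stand-in for an MRC/answerability model

def encode(tokens):
    ids = torch.tensor([vocab.index(t) for t in tokens])
    return emb(ids).mean(dim=0)  # crude mean-pooled sentence embedding

def decode(vec, length):
    # Nearest-neighbour "decoding": the vocabulary tokens whose embeddings
    # are closest to the vector. A real system would use a trained decoder.
    sims = F.cosine_similarity(vec.unsqueeze(0), emb.weight, dim=-1)
    return [vocab[i] for i in sims.topk(length).indices]

question = ["what", "is", "the", "capital", "of", "france"]
z = encode(question).detach().requires_grad_(True)

# Core idea: rewrite by taking a gradient step in embedding space toward a
# target label (here, a hypothetical "unanswerable" class), then decode.
target = torch.tensor([1])
loss = F.cross_entropy(task_head(z).unsqueeze(0), target)
loss.backward()
z_new = z - 0.5 * z.grad

print("rewritten tokens:", decode(z_new.detach(), len(question)))
```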

Learning Rate Curriculum

croitorualin/lerac • 18 May 2022

In this work, we propose a novel curriculum learning approach termed Learning Rate Curriculum (LeRaC), which assigns a different learning rate to each layer of a neural network to create a data-agnostic curriculum during the initial training epochs.
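
A minimal PyTorch sketch of the described mechanism: each layer starts at its own learning rate (lower for deeper layers) and all rates converge to a common value over the first epochs. The decay factor and schedule length below are illustrative assumptions, not the paper's hyper-parameters.

```python
import torch

# Toy network with three trainable layers, ordered input -> output.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 8), torch.nn.ReLU(),
    torch.nn.Linear(8, 8), torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)

base_lr = 1e-2
decay = 10.0       # assumption: each deeper layer starts 10x slower
warmup_epochs = 5  # assumption: length of the curriculum phase

layers = [m for m in model if isinstance(m, torch.nn.Linear)]
opt = torch.optim.SGD(
    [{"params": l.parameters(), "lr": base_lr / decay**k}
     for k, l in enumerate(layers)],
    lr=base_lr,
)

def lerac_update(epoch):
    # Geometrically raise each group's LR toward base_lr during the first
    # warmup_epochs; afterwards every layer trains at the same rate.
    for k, g in enumerate(opt.param_groups):
        if epoch >= warmup_epochs:
            g["lr"] = base_lr
        else:
            start = base_lr / decay**k
            g["lr"] = start * (base_lr / start) ** (epoch / warmup_epochs)

for epoch in range(7):
    lerac_update(epoch)
    print(epoch, [round(g["lr"], 5) for g in opt.param_groups])
    # ... usual forward/backward/opt.step() loop goes here ...
```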

Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning

strong-ai-lab/logical-equivalence-driven-amr-data-augmentation-for-representation-learning • 21 May 2023

Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner.
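
The excerpt above is motivational; as a flavour of what logical-equivalence-driven augmentation can produce (a simplified guess from the title, since the paper itself operates on AMR graphs rather than raw strings), one standard equivalence is contraposition:

```python
# Toy augmentation via contraposition: "if A then B" == "if not B then not A".
# Rules are (antecedent, consequent) pairs; negation is kept symbolic.
def contrapositive(rule):
    a, b = rule
    return (f"not ({b})", f"not ({a})")

rule = ("the alarm rings", "everyone evacuates")
print(contrapositive(rule))
# -> ('not (everyone evacuates)', 'not (the alarm rings)')
```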

How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives

mainlp/how-to-distill-your-bert • 24 May 2023

To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings.
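
As background for readers unfamiliar with the objectives such studies compare, a standard soft-label distillation loss (the classic Hinton-style objective, shown here as a generic example rather than this paper's specific setup):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft term: KL divergence between temperature-scaled distributions;
    # the T*T factor keeps its gradient scale comparable to the hard term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard term: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(4, 3)
teacher = torch.randn(4, 3)
labels = torch.randint(0, 3, (4,))
print(distillation_loss(student, teacher, labels))
```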

Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge

ansharora7/model-merge-backdoor • 29 Feb 2024

The democratization of pre-trained language models through open-source initiatives has rapidly advanced innovation and expanded access to cutting-edge technologies.
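
In its simplest form, the model merge of the title is parameter averaging across checkpoints that share an architecture. A minimal sketch, assuming uniform averaging (the paper's exact merge scheme may differ):

```python
import torch

def merge_state_dicts(state_dicts):
    # Uniformly average each parameter tensor across the given models.
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

# Hypothetical usage: blend a possibly-backdoored checkpoint with clean ones.
models = [torch.nn.Linear(4, 2) for _ in range(3)]
merged = torch.nn.Linear(4, 2)
merged.load_state_dict(merge_state_dicts([m.state_dict() for m in models]))
```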

Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study

ary4n99/llm-robustness • 3 Apr 2025

Large Language Models (LLMs) are highly vulnerable to input perturbations, as even a small prompt change may result in a substantially different output.
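
To make "small prompt change" concrete, here is a toy character-level perturbation of the kind robustness studies apply (an illustrative example, not this paper's specific perturbation suite):

```python
import random

def perturb(prompt: str, n_swaps: int = 1, seed: int = 0) -> str:
    # Transpose n random adjacent character pairs -- a tiny edit that can
    # nonetheless change a model's output substantially.
    rng = random.Random(seed)
    chars = list(prompt)
    for _ in range(n_swaps):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

print(perturb("Summarize the following article in one sentence."))
```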