QNLI
6 papers with code • 0 benchmarks • 0 datasets
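QNLI (Question-answering Natural Language Inference, part of the GLUE benchmark) recasts SQuAD question–paragraph pairs as binary sentence-pair classification: given a question and a single context sentence, decide whether the sentence contains the answer (entailment) or not (not_entailment). A minimal sketch of this data format with a trivial majority-class baseline; the example pairs and field names below are illustrative, not drawn from the actual dataset:

```python
# QNLI-style examples: each item pairs a question with a candidate context
# sentence, labeled "entailment" if the sentence answers the question.
# (Toy examples for illustration, not real QNLI data.)
examples = [
    {"question": "What is the capital of France?",
     "sentence": "Paris is the capital and largest city of France.",
     "label": "entailment"},
    {"question": "What is the capital of France?",
     "sentence": "France is a country in Western Europe.",
     "label": "not_entailment"},
]

def majority_baseline(example):
    """Trivial baseline: always predict the majority class."""
    return "not_entailment"

def accuracy(predict, data):
    """Fraction of examples where the prediction matches the gold label."""
    correct = sum(predict(ex) == ex["label"] for ex in data)
    return correct / len(data)

print(accuracy(majority_baseline, examples))  # 0.5 on this toy split
```

Models on this task are evaluated by exactly this kind of accuracy over the question–sentence pairs.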
Most implemented papers
Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space
In this paper, we propose a novel data augmentation method, referred to as Controllable Rewriting based Question Data Augmentation (CRQDA), for machine reading comprehension (MRC), question generation, and question-answering natural language inference tasks.
Learning Rate Curriculum
In this work, we propose a novel curriculum learning approach termed Learning Rate Curriculum (LeRaC), which assigns a different learning rate to each layer of a neural network to create a data-agnostic curriculum during the initial training epochs.
Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning
Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner.
How to Distill your BERT: An Empirical Study on the Impact of Weight Initialisation and Distillation Objectives
To the best of our knowledge, this is the first work comprehensively evaluating distillation objectives in both settings.
Here's a Free Lunch: Sanitizing Backdoored Models with Model Merge
The democratization of pre-trained language models through open-source initiatives has rapidly advanced innovation and expanded access to cutting-edge technologies.
Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study
Large Language Models (LLMs) are highly vulnerable to input perturbations, as even a small prompt change may result in a substantially different output.