RTE

21 papers with code • 1 benchmark • 3 datasets

Recognizing Textual Entailment (RTE) is the task of determining whether the meaning of one text fragment (the hypothesis) can be inferred from another (the premise). In the GLUE benchmark, RTE is framed as binary classification over sentence pairs: entailment or not_entailment.
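As a reference point, RTE examples can be represented as labelled sentence pairs scored by plain accuracy; the toy sentences and the majority-class baseline below are illustrative, not drawn from any dataset:

```python
from typing import NamedTuple

class RTEExample(NamedTuple):
    """One sentence pair in the binary (GLUE-style) RTE format."""
    premise: str
    hypothesis: str
    label: str  # "entailment" or "not_entailment"

# Hypothetical toy examples for illustration only.
examples = [
    RTEExample(
        premise="A dog is sleeping on the couch.",
        hypothesis="An animal is resting indoors.",
        label="entailment",
    ),
    RTEExample(
        premise="A dog is sleeping on the couch.",
        hypothesis="The dog is chasing a ball.",
        label="not_entailment",
    ),
]

def accuracy(predictions, gold):
    """Standard RTE metric: fraction of correctly labelled pairs."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# A trivial majority-class baseline predicts "not_entailment" everywhere.
preds = ["not_entailment"] * len(examples)
print(accuracy(preds, [e.label for e in examples]))  # 0.5 on this toy set
```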

Most implemented papers

Finetuned Language Models Are Zero-Shot Learners

google-research/flan ICLR 2022

We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks.
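Instruction tuning recasts each task as a natural-language instruction, so an RTE pair becomes a zero-shot prompt. A minimal sketch of such a template follows; the exact wording is an illustrative assumption, not the template used in the paper:

```python
def rte_instruction_prompt(premise: str, hypothesis: str) -> str:
    """Format an RTE pair as a zero-shot instruction (illustrative template)."""
    return (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis? Answer yes or no."
    )

prompt = rte_instruction_prompt(
    "A dog is sleeping on the couch.",
    "An animal is resting indoors.",
)
print(prompt)
```

An instruction-tuned model would then be asked to complete this prompt directly, with no RTE-specific finetuning.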

Representing Meaning with a Combination of Logical and Distributional Models

ibeltagy/rrr CL 2016

In this paper, we focus on three components of a practical system integrating logical and distributional models; the first, parsing and task representation, is the logic-based part, where input problems are represented in probabilistic logic.

Reset-free Trial-and-Error Learning for Robot Damage Recovery

resibots/chatzilygeroudis_2018_rte 13 Oct 2016

However, the best RL algorithms for robotics require the robot and the environment to be reset to an initial state after each episode, that is, the robot is not learning autonomously.

Acquisition of Phrase Correspondences using Natural Deduction Proofs

mynlp/ccg2lambda NAACL 2018

How to identify, extract, and use phrasal knowledge is a crucial problem for the task of Recognizing Textual Entailment (RTE).

End-Task Oriented Textual Entailment via Deep Explorations of Inter-Sentence Interactions

yinwenpeng/SciTail ACL 2018

This work deals with SciTail, a natural entailment challenge derived from a multi-choice question answering problem.

Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference

masashi-y/abduction_kbc 15 Nov 2018

In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data.

Adaptive Prior Selection for Repertoire-based Online Adaptation in Robotics

resibots/kaushik_2019_aprol 16 Jul 2019

Repertoire-based learning is a data-efficient adaptation approach based on a two-step process in which (1) a large and diverse set of policies is learned in simulation, and (2) a planning or learning algorithm chooses the most appropriate policies according to the current situation (e.g., a damaged robot, a new object, etc.).
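The two-step process can be sketched in a few lines; the repertoire contents, scores, and penalty mechanism below are toy assumptions, not the paper's algorithm:

```python
import random

random.seed(0)

# Step 1 (offline, "in simulation"): build a diverse repertoire of policies.
# Here a policy is just a parameter vector with a stored simulated score.
repertoire = {
    f"policy_{i}": {
        "params": [random.uniform(-1, 1) for _ in range(3)],
        "sim_score": random.uniform(0, 1),
    }
    for i in range(10)
}

def adapt(repertoire, penalty):
    """Step 2 (online): pick the policy with the best expected performance.

    `penalty` maps a policy name to an observed performance drop on the real
    system (e.g. due to damage); unobserved policies fall back to their
    simulated score.
    """
    def expected(name):
        return repertoire[name]["sim_score"] - penalty.get(name, 0.0)
    return max(repertoire, key=expected)

# If the damaged robot performs far worse than simulation predicted with one
# policy, adaptation steers the choice toward a different one.
best = adapt(repertoire, penalty={})
print(best)
```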

Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference

ZJULearning/DMP ACL 2018

We observe that people usually use some discourse markers such as "so" or "but" to represent the logical relationship between two sentences.
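As a rough illustration of the signal this paper exploits, one can check whether a sentence opens with a connective such as "so" or "but"; the marker list and matching rule here are simplified assumptions, not the paper's method:

```python
import re

# A few of the discourse markers discussed; the paper uses a larger inventory.
MARKERS = {"so", "but", "because", "although", "if"}

def leading_marker(sentence: str):
    """Return the discourse marker a sentence starts with, if any."""
    match = re.match(r"\s*(\w+)", sentence.lower())
    return match.group(1) if match and match.group(1) in MARKERS else None

print(leading_marker("But the premise does not say that."))   # "but"
print(leading_marker("The premise says nothing about it."))   # None
```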

Investigating Entity Knowledge in BERT with Simple Neural End-To-End Entity Linking

samuelbroscheit/entity_knowledge_in_bert CONLL 2019

We show on an entity linking benchmark that (i) this model improves the entity representations over plain BERT, (ii) that it outperforms entity linking architectures that optimize the tasks separately and (iii) that it only comes second to the current state-of-the-art that does mention detection and entity disambiguation jointly.

Pretraining with Contrastive Sentence Objectives Improves Discourse Performance of Language Models

google-research/language ACL 2020

Recent models for unsupervised representation learning of text have employed a number of techniques to improve contextual word representations but have put little focus on discourse-level representations.