Sentence Completion

16 papers with code • 1 benchmark • 1 dataset

Sentence completion is the task of predicting the word or phrase that most plausibly completes a given sentence. It is commonly framed as choosing among candidate endings and is used to probe a language model's syntactic and commonsense knowledge.
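
As a quick illustration (not tied to any particular paper below), here is a minimal sketch of sentence completion with an off-the-shelf masked language model; the model name and example sentence are arbitrary choices:

```python
# A minimal sketch of sentence completion with a masked language model,
# using the Hugging Face `transformers` fill-mask pipeline. The model and
# the example sentence are illustrative, not part of any benchmark.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The pipeline returns candidate tokens ranked by probability.
for candidate in fill("The cat sat on the [MASK]."):
    print(f"{candidate['token_str']:>10}  {candidate['score']:.3f}")
```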

Most implemented papers

Language Models are Few-Shot Learners

openai/gpt-3 NeurIPS 2020

By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do.
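
As a loose illustration of the few-shot setup the paper describes, the sketch below simply builds a prompt containing solved demonstrations followed by an unsolved query; the task framing and examples are invented, and the model receives no gradient updates:

```python
# A sketch of few-shot prompting: the model is shown a handful of solved
# examples in its context window and must complete the next one. The
# demonstrations below are made up for illustration.

def few_shot_prompt(examples, query):
    """Format (sentence, completion) pairs followed by an unsolved query."""
    lines = ["Complete each sentence with the most plausible word."]
    for sentence, completion in examples:
        lines.append(f"Sentence: {sentence}\nCompletion: {completion}")
    lines.append(f"Sentence: {query}\nCompletion:")
    return "\n\n".join(lines)

demos = [
    ("She poured the coffee into her ___.", "mug"),
    ("He locked the door with his ___.", "key"),
]
print(few_shot_prompt(demos, "The pianist sat down at the ___."))
# The resulting string would be sent to a language model, which is
# expected to infer the task from the demonstrations alone.
```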

Finetuned Language Models Are Zero-Shot Learners

google-research/flan ICLR 2022

We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks.
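
A hedged sketch of the data-formatting idea: supervised examples are rendered as natural-language instructions before finetuning. The templates and examples below are invented placeholders, not FLAN's actual templates:

```python
# Recasting supervised datasets as instructions, in the spirit of
# instruction tuning. Task names, templates, and fields are illustrative.

TEMPLATES = {
    "sentence_completion": "Choose the best ending for the sentence.\n"
                           "Sentence: {text}\nOptions: {options}\nAnswer:",
    "sentiment": "Is the following review positive or negative?\n"
                 "Review: {text}\nAnswer:",
}

def to_instruction(task, target=None, **fields):
    """Render one supervised example as an (instruction, target) pair."""
    return TEMPLATES[task].format(**fields), target

prompt, answer = to_instruction(
    "sentence_completion",
    text="The chef tasted the soup and reached for the ___.",
    options="salt, violin, passport",
    target="salt",
)
print(prompt, answer)
# Finetuning on many such instruction-formatted tasks is what the paper
# shows improves zero-shot generalization to unseen tasks.
```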

Recurrent Memory Networks for Language Modeling

ketranm/RMN NAACL 2016

In this paper, we propose the Recurrent Memory Network (RMN), a novel RNN architecture that not only amplifies the power of RNNs but also facilitates our understanding of their internal functioning and allows us to discover underlying patterns in data.
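
A rough PyTorch sketch of the memory-block idea, assuming attention from the LSTM hidden state over the embeddings of the most recent n words; sizes and wiring are simplified guesses, not the paper's exact configuration:

```python
# Sketch: at each step, the hidden state attends over the embeddings of
# the n most recent words, and the attended summary augments prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryBlock(nn.Module):
    def __init__(self, dim, n=5):
        super().__init__()
        self.n = n
        self.key = nn.Linear(dim, dim)  # project memories into key space

    def forward(self, h, memory):            # h: (B, D), memory: (B, n, D)
        scores = torch.bmm(self.key(memory), h.unsqueeze(2))  # (B, n, 1)
        attn = F.softmax(scores.squeeze(2), dim=1)            # (B, n)
        return (attn.unsqueeze(2) * memory).sum(dim=1)        # (B, D)

# Usage with an LSTM over a batch of embedded tokens (dummy data).
B, T, D, V = 2, 7, 32, 100
emb = nn.Embedding(V, D)
lstm = nn.LSTM(D, D, batch_first=True)
mem_block = MemoryBlock(D, n=5)
out_proj = nn.Linear(2 * D, V)

tokens = torch.randint(V, (B, T))
x = emb(tokens)                                    # (B, T, D)
h_seq, _ = lstm(x)                                 # (B, T, D)
t = T - 1
window = x[:, max(0, t - mem_block.n + 1): t + 1]  # last n embeddings
m = mem_block(h_seq[:, t], window)                 # attended memory summary
logits = out_proj(torch.cat([h_seq[:, t], m], dim=1))  # next-word scores
```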

Muppet: Massive Multi-task Representations with Pre-Finetuning

facebook/muppet-roberta-base EMNLP 2021

We propose pre-finetuning, an additional large-scale learning stage between language model pre-training and fine-tuning.
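
A minimal sketch of what such a stage can look like: one shared encoder trained on batches sampled from several supervised tasks before any task-specific finetuning. The tasks, heads, and loss scaling below are placeholders; Muppet's actual recipe (around 50 tasks, with care taken to balance heterogeneous objectives) is more involved:

```python
# Toy multi-task pre-finetuning loop with a shared encoder and
# per-task classification heads. All data here is randomly generated.
import random
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Embedding(1000, 64), nn.Flatten(1), nn.Linear(64 * 16, 64)
)
heads = {"nli": nn.Linear(64, 3), "sentiment": nn.Linear(64, 2)}
params = list(encoder.parameters()) + [p for h in heads.values()
                                       for p in h.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def fake_batch(task):  # stand-in for real per-task dataloaders
    n_classes = heads[task].out_features
    return torch.randint(1000, (8, 16)), torch.randint(n_classes, (8,))

for step in range(100):
    task = random.choice(list(heads))      # sample a task each step
    x, y = fake_batch(task)
    loss = loss_fn(heads[task](encoder(x)), y)
    # Scaling each loss by its number of classes is one way to balance
    # heterogeneous tasks, loosely following the paper's motivation.
    (loss / heads[task].out_features).backward()
    opt.step()
    opt.zero_grad()
```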

Dependency Recurrent Neural Language Models for Sentence Completion

piotrmirowski/DependencyTreeRnn IJCNLP 2015

Recent work on language modelling has shifted focus from count-based models to neural models.

Top-down Tree Long Short-Term Memory Networks

XingxingZhang/td-treelstm NAACL 2016

Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have been successfully applied to a variety of sequence modeling tasks.

A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations

jastfkjg/semantic-matching 26 Nov 2015

Our model has several advantages: (1) by using a Bi-LSTM, rich context from the whole sentence is leveraged to capture contextualized local information in each positional sentence representation; (2) by matching against multiple positional sentence representations, the model can flexibly aggregate the important contextualized local information in a sentence to support the matching; (3) experiments on tasks such as question answering and sentence completion demonstrate the superiority of our model.
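
A rough sketch of the matching idea under these assumptions: a shared Bi-LSTM produces a representation at every position of each sentence, all position pairs are scored, and the strongest interactions are pooled into a match score. Dimensions and the scoring function simplify the paper's model:

```python
# Matching with multiple positional sentence representations: score all
# position pairs, keep the k strongest interactions, aggregate with an MLP.
import torch
import torch.nn as nn

class PositionalMatcher(nn.Module):
    def __init__(self, vocab, dim=32, k=5):
        super().__init__()
        self.k = k
        self.emb = nn.Embedding(vocab, dim)
        self.bilstm = nn.LSTM(dim, dim, batch_first=True,
                              bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(k, 16), nn.ReLU(),
                                 nn.Linear(16, 1))

    def forward(self, s1, s2):              # (B, T1), (B, T2) token ids
        h1, _ = self.bilstm(self.emb(s1))   # (B, T1, 2*dim) positional reps
        h2, _ = self.bilstm(self.emb(s2))   # (B, T2, 2*dim)
        inter = torch.bmm(h1, h2.transpose(1, 2))      # (B, T1, T2) scores
        top, _ = inter.flatten(1).topk(self.k, dim=1)  # strongest k pairs
        return self.mlp(top).squeeze(1)                # (B,) match score

matcher = PositionalMatcher(vocab=500)
score = matcher(torch.randint(500, (4, 9)), torch.randint(500, (4, 7)))
```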

Learning Semantically and Additively Compositional Distributional Representations

tianran/vecdcs ACL 2016

This paper connects a vector-based composition model to a formal semantics, the Dependency-based Compositional Semantics (DCS).
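
As a toy illustration of additive composition in general (not the paper's specific DCS construction), a phrase vector can be built by summing word vectors transformed by role-specific matrices:

```python
# Additive composition sketch: v = sum_i M_role(i) @ v_word(i).
# Vocabulary, roles, and vectors are random stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
word_vec = {w: rng.normal(size=dim) for w in ["dogs", "chase", "cats"]}
role_mat = {r: rng.normal(size=(dim, dim)) for r in ["subj", "head", "obj"]}

def compose(parsed):
    """Sum role-transformed word vectors over (word, role) pairs."""
    return sum(role_mat[role] @ word_vec[word] for word, role in parsed)

sentence_vec = compose([("dogs", "subj"), ("chase", "head"),
                        ("cats", "obj")])
```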

CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense

Websail-NU/CODAH WS 2019

To produce a more difficult dataset, we introduce a novel procedure for question acquisition in which workers author questions designed to target weaknesses of state-of-the-art neural question answering systems.
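
A small sketch of how a CODAH-style multiple-choice item can be scored with an off-the-shelf causal language model: each candidate completion is appended to the prompt and ranked by the model's average per-token loss. The item below is invented, and gpt2 is just a convenient small public model:

```python
# Rank candidate sentence completions by causal-LM cross-entropy.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "The climber reached the summit and"
options = [" planted a flag.",
           " dissolved into the rock.",
           " filed her taxes mid-air."]

with torch.no_grad():
    losses = []
    for option in options:
        ids = tok(prompt + option, return_tensors="pt").input_ids
        # labels=ids makes the model return mean cross-entropy over the
        # full sequence (a simplification: prompt tokens are included).
        losses.append(model(ids, labels=ids).loss.item())

print(options[losses.index(min(losses))])  # lowest loss = most plausible
```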