Sentence Completion

22 papers with code • 1 benchmark • 2 datasets

Sentence completion is the task of choosing or generating the word, phrase, or continuation that best completes a given sentence.


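As a concrete illustration of the task (not tied to any particular paper below), sentence completion can be framed as scoring each candidate filler for a blank and picking the highest-scoring one. A minimal sketch using a toy bigram model; the corpus and scoring function are hypothetical:

```python
from collections import Counter

def train_bigrams(corpus):
    """Count adjacent word pairs in a whitespace-tokenized corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        counts.update(zip(words, words[1:]))
    return counts

def complete(prev_word, next_word, candidates, bigrams):
    """Pick the candidate whose (prev, cand) and (cand, next) bigrams are most frequent."""
    def score(cand):
        return bigrams[(prev_word, cand)] + bigrams[(cand, next_word)]
    return max(candidates, key=score)

corpus = [
    "the cat sat on the mat",
    "the cat sat quietly",
    "the dog sat on the rug",
    "the cat chased the dog",
]
bigrams = train_bigrams(corpus)
# Fill the blank in "the ___ sat": "cat" follows "the" and precedes "sat"
# more often than the other candidates in this corpus.
print(complete("the", "sat", ["cat", "dog", "mat"], bigrams))  # → cat
```

Modern systems replace the bigram counts with a neural language model's probabilities, but the selection step is the same.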

Most implemented papers

Language Models are Few-Shot Learners

openai/gpt-3 NeurIPS 2020

Humans can generally perform a new language task from only a few examples or from simple instructions, something which current NLP systems still largely struggle to do.

QLoRA: Efficient Finetuning of Quantized LLMs

artidoro/qlora 23 May 2023

Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU.
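The key idea behind QLoRA is to freeze the base model in a 4-bit quantized form and train only small low-rank adapters on top of it. A minimal, self-contained sketch of blockwise absmax 4-bit quantization; note this uses a uniform level grid purely for illustration, whereas the paper's NormalFloat4 places levels at quantiles of a normal distribution:

```python
import numpy as np

LEVELS = np.linspace(-1.0, 1.0, 15)  # 4-bit code book (one of 16 codes unused)

def quantize_4bit(weights, block_size=4):
    """Blockwise absmax quantization: scale each block into [-1, 1],
    then store the index of the nearest code-book level per weight."""
    blocks = weights.reshape(-1, block_size)
    absmax = np.abs(blocks).max(axis=1, keepdims=True)   # per-block scale
    normed = blocks / absmax
    codes = np.abs(normed[..., None] - LEVELS).argmin(axis=-1)
    return codes.astype(np.uint8), absmax

def dequantize_4bit(codes, absmax):
    """Recover approximate weights from codes and per-block scales."""
    return (LEVELS[codes] * absmax).reshape(-1)

w = np.array([0.9, -0.45, 0.1, 0.0, 2.0, -1.0, 0.5, 0.25])
codes, scales = quantize_4bit(w)
w_hat = dequantize_4bit(codes, scales)
print(np.max(np.abs(w - w_hat)))  # error bounded by absmax / 14 per block
```

During QLoRA finetuning, only the low-rank adapter matrices receive gradients; the quantized base weights are dequantized on the fly for the forward pass.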

Finetuned Language Models Are Zero-Shot Learners

google-research/flan ICLR 2022

We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks.
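Instruction tuning of this kind hinges on rewriting each supervised example as a natural-language instruction. A minimal sketch of such a template (the wording here is hypothetical, not FLAN's exact phrasing):

```python
def to_instruction(task_description, input_text, answer=None):
    """Render a (task, input[, answer]) triple as an instruction-style prompt.

    At training time the answer is appended as the target; at inference
    it is left blank for the model to complete.
    """
    prompt = f"{task_description}\n\nInput: {input_text}\nAnswer:"
    if answer is not None:
        prompt += f" {answer}"
    return prompt

# A sentence-completion example rendered as an instruction.
print(to_instruction(
    "Choose the word that best completes the sentence.",
    "The cat sat on the ___ (mat / sky)",
    "mat",
))
```

Because the instruction describes the task in plain language, a model finetuned on many such templates can generalize to task types it never saw during tuning.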

Llama 2: Open Foundation and Fine-Tuned Chat Models

facebookresearch/llama 18 Jul 2023

In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.

Factuality Enhanced Language Models for Open-Ended Text Generation

nayeon7lee/factualityprompt 9 Jun 2022

In this work, we measure and improve the factual accuracy of large-scale LMs for open-ended text generation.

Recurrent Memory Networks for Language Modeling

ketranm/RMN NAACL 2016

In this paper, we propose the Recurrent Memory Network (RMN), a novel RNN architecture that not only amplifies the power of RNNs but also facilitates our understanding of their internal functioning and allows us to discover underlying patterns in data.

Muppet: Massive Multi-task Representations with Pre-Finetuning

facebook/muppet-roberta-base EMNLP 2021

We propose pre-finetuning, an additional large-scale learning stage between language model pre-training and fine-tuning.
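Pre-finetuning trains one model on many tasks at once, which in practice means interleaving batches from heterogeneous datasets. A minimal round-robin sketch of such interleaving; the scheduling here is illustrative only, as Muppet additionally uses per-task loss scaling and far larger task mixtures:

```python
from itertools import cycle

def interleave_tasks(task_batches):
    """Yield (task_name, batch) pairs round-robin until every task is exhausted."""
    iters = {name: iter(batches) for name, batches in task_batches.items()}
    exhausted = set()
    for name in cycle(list(iters)):
        if len(exhausted) == len(iters):
            break  # every task has run out of batches
        if name in exhausted:
            continue
        try:
            yield name, next(iters[name])
        except StopIteration:
            exhausted.add(name)

schedule = list(interleave_tasks({
    "qa": [["q1"], ["q2"]],
    "nli": [["n1"]],
    "completion": [["c1"], ["c2"], ["c3"]],
}))
print([name for name, _ in schedule])
# → ['qa', 'nli', 'completion', 'qa', 'completion', 'completion']
```

A real training loop would draw a batch from this iterator each step and accumulate the per-task losses into one shared objective.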

Dependency Recurrent Neural Language Models for Sentence Completion

piotrmirowski/DependencyTreeRnn IJCNLP 2015

Recent work on language modelling has shifted focus from count-based models to neural models.

Top-down Tree Long Short-Term Memory Networks

XingxingZhang/td-treelstm NAACL 2016

Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have been successfully applied to a variety of sequence modeling tasks.

A Deep Architecture for Semantic Matching with Multiple Positional Sentence Representations

jastfkjg/semantic-matching 26 Nov 2015

Our model has several advantages: (1) by using a Bi-LSTM, rich context from the whole sentence is leveraged to capture contextualized local information in each positional sentence representation; (2) by matching against multiple positional sentence representations, the model can flexibly aggregate the contextualized local information most important for the match; (3) experiments on tasks such as question answering and sentence completion demonstrate the superiority of our model.