Language Modelling

4439 papers with code • 51 benchmarks • 157 datasets

Language Modeling is the task of predicting the next word or character in a document. This technique can be used to train language models that can then be applied to a wide range of natural language tasks like text generation, text classification, and question answering.

Historically, language modelling was done with N-gram language models (which still have niche uses); neural language models took over in the 2010s, and since the early 2020s state-of-the-art results have come almost exclusively from large language models (LLMs).
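To make the N-gram approach concrete, a bigram model estimates the probability of the next word purely from counts of adjacent word pairs. The sketch below is a toy illustration (the corpus, function name, and `<s>`/`</s>` boundary markers are our own choices, not from any particular library):

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count adjacent word pairs and normalize into P(next | previous)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

model = train_bigram(["the cat sat", "the dog sat", "the cat ran"])
# "the" is followed by "cat" in 2 of its 3 occurrences, so P(cat | the) = 2/3
print(model["the"]["cat"])
```

Real N-gram systems add smoothing (e.g. Kneser-Ney) so that unseen pairs do not receive zero probability, but the counting core is the same.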

A model's language modeling capability is typically measured with cross-entropy and perplexity, where perplexity is the exponential of the per-token cross-entropy. Common evaluation datasets include WikiText-103, One Billion Word, Text8, C4, and The Pile, among others.
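The relationship between the two metrics is direct: perplexity is the exponential of the average negative log-likelihood the model assigns to the observed tokens. A minimal sketch (the helper function is our own, not from any evaluation library):

```python
import math

def perplexity(token_probs):
    """exp of the average negative log-likelihood (natural-log cross-entropy)."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to every token has perplexity 4:
# it is, on average, as uncertain as a uniform choice among 4 tokens.
print(perplexity([0.25, 0.25, 0.25, 0.25]))
```

Lower is better on both metrics; a perfect model that assigns probability 1.0 to every observed token reaches the minimum perplexity of 1.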

(Image credit: Exploring the Limits of Language Modeling)

Libraries

Use these libraries to find Language Modelling models and implementations

Latest papers with no code

Prompt Optimizer of Text-to-Image Diffusion Models for Abstract Concept Understanding

no code yet • 17 Apr 2024

The dataset is created with GPT-4 to extend the abstract concept to a scene and concrete objects.

LLMTune: Accelerate Database Knob Tuning with Large Language Models

no code yet • 17 Apr 2024

Database knob tuning is a critical challenge in the database community, aiming to optimize knob values to enhance database performance for specific workloads.

Characterizing and modeling harms from interactions with design patterns in AI interfaces

no code yet • 17 Apr 2024

The proliferation of applications using artificial intelligence (AI) systems has led to a growing number of users interacting with these systems through sophisticated interfaces.

Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives

no code yet • 17 Apr 2024

The Composed Image Retrieval (CIR) task aims to retrieve target images using a composed query consisting of a reference image and a modified text.

RD2Bench: Toward Data-Centric Automatic R&D

no code yet • 17 Apr 2024

The progress of humanity is driven by successful discoveries accompanied by countless failed experiments.

Prompt-Guided Generation of Structured Chest X-Ray Report Using a Pre-trained LLM

no code yet • 17 Apr 2024

Our method introduces a prompt-guided approach to generate structured chest X-ray reports using a pre-trained large language model (LLM).

A Progressive Framework of Vision-language Knowledge Distillation and Alignment for Multilingual Scene

no code yet • 17 Apr 2024

Pre-trained vision-language (V-L) models such as CLIP have shown excellent performance in many downstream cross-modal tasks.

Lightweight Unsupervised Federated Learning with Pretrained Vision Language Model

no code yet • 17 Apr 2024

To address these two inherent challenges in supervised federated learning, we propose a novel lightweight unsupervised federated learning approach that leverages unlabeled data on each client to perform lightweight model training and communication by harnessing pretrained vision-language models, such as CLIP.

On the Scalability of GNNs for Molecular Graphs

no code yet • 17 Apr 2024

However, structure-based architectures such as Graph Neural Networks (GNNs) are yet to show the benefits of scale mainly due to the lower efficiency of sparse operations, large data requirements, and lack of clarity about the effectiveness of various architectures.

LongVQ: Long Sequence Modeling with Vector Quantization on Structured Memory

no code yet • 17 Apr 2024

Transformer models have been successful in various sequence processing tasks, but the self-attention mechanism's computational cost limits its practicality for long sequences.