# Topic Models

170 papers with code • 3 benchmarks • 7 datasets

A topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for the discovery of hidden semantic structures in a text body.
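As a concrete illustration, here is a minimal topic-modeling sketch using scikit-learn's LDA implementation on a tiny made-up corpus (the corpus, topic count, and hyperparameters are all assumptions for demonstration, not from any paper listed below):

```python
# Minimal topic-modeling sketch: bag-of-words counts + LDA.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus with two loose themes (space, cooking) -- purely illustrative.
docs = [
    "the rocket launched into orbit around the planet",
    "astronauts aboard the station observed the planet",
    "simmer the sauce and season the pasta with basil",
    "bake the bread and season the soup with herbs",
]

# Convert documents to word counts, then fit a 2-topic LDA model.
counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)

# Each row of doc_topics is a document's topic mixture and sums to 1.
doc_topics = lda.fit_transform(counts)
print(doc_topics.shape)  # (4, 2)
```

The per-topic word distributions are then available in `lda.components_`, whose top-weighted words give each topic its human-readable interpretation.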

## Libraries

Use these libraries to find topic model implementations.

## Datasets

## Most implemented papers

### Topic Modeling in Embedding Spaces

To this end, we develop the Embedded Topic Model (ETM), a generative model of documents that marries traditional topic models with word embeddings.
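The ETM's core idea can be sketched in a few lines of numpy: each topic's word distribution is a softmax over inner products between a topic embedding and the word embeddings. The sizes and random values below are assumptions for illustration, not the paper's trained parameters:

```python
# Sketch of the ETM decoder: topic-word distributions from embeddings.
import numpy as np

rng = np.random.default_rng(0)
V, K, D = 1000, 10, 50           # vocab size, topics, embedding dim (assumed)
rho = rng.normal(size=(V, D))    # word embeddings (one row per vocab word)
alpha = rng.normal(size=(K, D))  # topic embeddings (one row per topic)

# Similarity of every topic to every word, then a row-wise softmax.
logits = alpha @ rho.T                                   # shape (K, V)
beta = np.exp(logits - logits.max(axis=1, keepdims=True))
beta /= beta.sum(axis=1, keepdims=True)                  # rows are distributions

print(beta.shape)  # (10, 1000)
```

Because topics live in the same space as words, a topic's nearest word embeddings are exactly its highest-probability words.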

### Neural Variational Inference for Text Processing

We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering.

### Autoencoding Variational Inference For Topic Models

A promising approach to address this problem is autoencoding variational Bayes (AEVB), but it has proven difficult to apply to topic models in practice.

### Mixing Dirichlet Topic Models and Word Embeddings to Make lda2vec

Distributed dense word vectors have been shown to be effective at capturing token-level semantic and syntactic regularities in language, while topic models can form interpretable representations over documents.

### Adapting Text Embeddings for Causal Inference

To address this challenge, we develop causally sufficient embeddings, low-dimensional document representations that preserve sufficient information for causal identification and allow for efficient estimation of causal effects.

### Neural Models for Documents with Metadata

Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information.

### An Unsupervised Neural Attention Model for Aspect Extraction

Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space.

### Topic Discovery in Massive Text Corpora Based on Min-Hashing

This paper describes an alternative approach to discovering topics based on Min-Hashing, which can handle massive text corpora and large vocabularies on modest computer hardware and does not require fixing the number of topics in advance.
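To give the Min-Hashing primitive some concrete footing, here is a toy sketch (not the paper's algorithm): min-hash signatures whose agreement rate approximates the Jaccard similarity between two word sets, which is the building block such approaches use to group co-occurring words:

```python
# Toy MinHash: signature agreement approximates Jaccard similarity.
import random

def minhash_signature(tokens, num_hashes=128, seed=0):
    """One min-hash per (a, b) hash function: min over (a*h(t) + b) mod p."""
    rng = random.Random(seed)
    prime = (1 << 61) - 1
    params = [(rng.randrange(1, prime), rng.randrange(prime))
              for _ in range(num_hashes)]
    return [min((a * hash(t) + b) % prime for t in tokens) for a, b in params]

def estimated_jaccard(sig1, sig2):
    """Fraction of positions where two signatures collide."""
    return sum(x == y for x, y in zip(sig1, sig2)) / len(sig1)

a = set("the cat sat on the mat".split())
b = set("the cat sat on the hat".split())
true_j = len(a & b) / len(a | b)        # exact Jaccard: 4/6
est_j = estimated_jaccard(minhash_signature(a), minhash_signature(b))
print(round(true_j, 2), round(est_j, 2))
```

With more hash functions the estimate concentrates around the true Jaccard similarity, and crucially the signatures have fixed size regardless of vocabulary size, which is what makes the approach scale to massive corpora.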

### Pre-training is a Hot Topic: Contextualized Document Embeddings Improve Topic Coherence

Topic models extract groups of words from documents; interpreting these groups as topics hopefully allows for a better understanding of the data.

### Latent Dirichlet Allocation

Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities.
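LDA's generative story can be sketched directly in numpy: draw a document's topic mixture from a Dirichlet prior, then for each word position draw a topic and then a word from that topic's distribution. All sizes and prior values below are made-up toy settings:

```python
# Sketch of LDA's generative process for a single document.
import numpy as np

rng = np.random.default_rng(42)
K, V, N = 3, 8, 20          # topics, vocab size, words per document (assumed)
alpha = np.full(K, 0.5)     # Dirichlet prior over topic mixtures
beta = rng.dirichlet(np.full(V, 0.1), size=K)  # per-topic word distributions

theta = rng.dirichlet(alpha)            # this document's topic mixture
z = rng.choice(K, size=N, p=theta)      # topic assignment for each position
words = [int(rng.choice(V, p=beta[zi])) for zi in z]  # sampled word ids

print(len(words), theta.round(2))
```

Inference inverts this process: given only the observed words, it recovers posterior estimates of the mixtures `theta` and the topic distributions `beta`.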