Subword Segmentation

WordPiece is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with the individual characters of the language; then the pair of units in the current vocabulary whose merge most increases the likelihood of the training data under a language model is iteratively added as a new unit. The process (sketched in code after the list below) is:

  1. Initialize the word unit inventory with all the characters in the text.
  2. Build a language model on the training data using the inventory from step 1.
  3. Generate a new word unit by combining two units from the current inventory, growing the inventory by one. Among all candidate combinations, choose the one that most increases the likelihood of the training data when added to the model.
  4. Go to step 2 until a predefined number of word units is reached or the likelihood increase falls below a given threshold.
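A toy trainer following these steps might look like the sketch below. It is illustrative only: it assumes a BERT-style "##" continuation prefix and approximates the likelihood gain of each candidate merge with the pairwise score count(ab) / (count(a) * count(b)), rather than retraining a full language model at every iteration; the function name, corpus, and parameters are made up for the example.

    from collections import Counter

    def train_wordpiece(corpus, vocab_size, min_gain=0.0):
        # Step 1: split words into characters; non-initial characters get a
        # "##" continuation prefix (BERT-style convention, assumed here).
        word_freqs = Counter(corpus.split())
        splits = {w: [c if i == 0 else "##" + c for i, c in enumerate(w)]
                  for w in word_freqs}
        vocab = {u for units in splits.values() for u in units}

        while len(vocab) < vocab_size:
            # Step 2 (approximated): collect unigram and pair statistics
            # instead of rebuilding a full language model.
            unit_counts, pair_counts = Counter(), Counter()
            for w, freq in word_freqs.items():
                units = splits[w]
                for u in units:
                    unit_counts[u] += freq
                for pair in zip(units, units[1:]):
                    pair_counts[pair] += freq
            if not pair_counts:
                break

            # Step 3: pick the merge with the largest likelihood-gain proxy.
            def gain(pair):
                a, b = pair
                return pair_counts[pair] / (unit_counts[a] * unit_counts[b])
            best = max(pair_counts, key=gain)

            # Step 4: stop when the gain drops below the threshold.
            if gain(best) <= min_gain:
                break

            a, b = best
            merged = a + (b[2:] if b.startswith("##") else b)
            for w, units in splits.items():
                out, i = [], 0
                while i < len(units):
                    if i + 1 < len(units) and (units[i], units[i + 1]) == best:
                        out.append(merged)
                        i += 2
                    else:
                        out.append(units[i])
                        i += 1
                splits[w] = out
            vocab.add(merged)
        return vocab

    corpus = "low low low lower lower lowest newer newer newest wide wider widest"
    print(sorted(train_wordpiece(corpus, vocab_size=40)))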


Image: WordPiece as used in BERT

Source: Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
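At inference time, BERT applies a trained WordPiece vocabulary with a greedy longest-match-first scan over each word. The sketch below shows that segmentation step under the same "##" convention; the function name and toy vocabulary are hypothetical, and the "[UNK]" fallback mirrors BERT's handling of words it cannot cover.

    def wordpiece_tokenize(word, vocab, unk_token="[UNK]"):
        # Greedy longest-match-first segmentation over a trained vocabulary.
        tokens, start = [], 0
        while start < len(word):
            end, piece = len(word), None
            while start < end:
                candidate = word[start:end]
                if start > 0:
                    candidate = "##" + candidate
                if candidate in vocab:
                    piece = candidate
                    break
                end -= 1
            if piece is None:  # no known subword covers this position
                return [unk_token]
            tokens.append(piece)
            start = end
        return tokens

    # Toy vocabulary; "unaffable" -> ['un', '##aff', '##able']
    vocab = {"un", "aff", "##aff", "##able", "able"}
    print(wordpiece_tokenize("unaffable", vocab))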

Tasks


  Task                    Papers    Share
  Language Modelling         111   12.73%
  Retrieval                   84    9.63%
  Question Answering          47    5.39%
  Text Classification         37    4.24%
  Sentence                    35    4.01%
  Large Language Model        33    3.78%
  Sentiment Analysis          30    3.44%
  NER                         20    2.29%
  Information Retrieval       17    1.95%
