Subword Segmentation

WordPiece is a subword segmentation algorithm used in natural language processing. The vocabulary is initialized with the individual characters in the language, and then the combinations of symbols that most improve a language model over the training data are iteratively added to it. The process, sketched in code below the list, is:

  1. Initialize the word unit inventory with all the characters in the text.
  2. Build a language model on the training data using the inventory from step 1.
  3. Generate a new word unit by combining two units from the current inventory, growing the inventory by one. Of all possible candidates, choose the new word unit that most increases the likelihood of the training data when added to the model.
  4. Go to step 2 until a predefined limit of word units is reached or the likelihood increase falls below a certain threshold.
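As an illustration, here is a minimal Python sketch of this loop, not the production algorithm. It assumes whitespace-pretokenized text, uses BERT's "##" continuation-prefix convention, and scores candidate merges with the commonly cited proxy freq(pair) / (freq(left) · freq(right)), which approximates the likelihood gain under a unigram language model; real implementations differ in the language model used and in many details. The function name and toy corpus are illustrative.

```python
from collections import Counter

def train_wordpiece(corpus, vocab_size):
    """Toy WordPiece-style vocabulary learning (unigram-LM proxy scoring)."""
    # Count whole words, then split each word into character units;
    # non-initial characters carry the "##" continuation prefix.
    word_freqs = Counter(corpus.split())
    splits = {w: [w[0]] + ["##" + c for c in w[1:]] for w in word_freqs}

    # Step 1: initialize the unit inventory with all characters in the text.
    vocab = {unit for units in splits.values() for unit in units}

    while len(vocab) < vocab_size:
        # Steps 2-3: score every adjacent pair. The pair maximizing
        # freq(pair) / (freq(left) * freq(right)) approximates the merge
        # that most increases training likelihood under a unigram LM.
        unit_freqs, pair_freqs = Counter(), Counter()
        for word, units in splits.items():
            f = word_freqs[word]
            for unit in units:
                unit_freqs[unit] += f
            for a, b in zip(units, units[1:]):
                pair_freqs[(a, b)] += f
        if not pair_freqs:
            break  # every word is a single unit; nothing left to merge

        best = max(pair_freqs,
                   key=lambda p: pair_freqs[p] / (unit_freqs[p[0]] * unit_freqs[p[1]]))
        new_unit = best[0] + best[1].removeprefix("##")
        vocab.add(new_unit)

        # Apply the chosen merge everywhere, then repeat (step 4).
        for word, units in splits.items():
            merged, i = [], 0
            while i < len(units):
                if i + 1 < len(units) and (units[i], units[i + 1]) == best:
                    merged.append(new_unit)
                    i += 2
                else:
                    merged.append(units[i])
                    i += 1
            splits[word] = merged
    return vocab

print(sorted(train_wordpiece("low low low lower lowest newer newest", 20)))
```

On the toy corpus in the example call, pairs whose parts almost always occur together, such as "##s" + "##t", score highest and are merged first, after which longer units like "##est" can be built up in later iterations.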


Image: WordPiece as used in BERT

Source: Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation

Tasks


Task                             Papers   Share
Language Modelling                   94   11.84%
Text Classification                  42    5.29%
Sentiment Analysis                   41    5.16%
Retrieval                            34    4.28%
Question Answering                   28    3.53%
Classification                       25    3.15%
NER                                  21    2.64%
Large Language Model                 18    2.27%
Named Entity Recognition (NER)       15    1.89%
