Skip-gram Word2Vec is an architecture for computing word embeddings. Instead of using the surrounding words to predict the center word, as in CBOW Word2Vec, Skip-gram Word2Vec uses the center word to predict the surrounding words.
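A minimal Python sketch of the training pairs this architecture implies: each center word is paired with every word within a window of ±`n` positions. The function name and toy corpus are illustrative, not from the paper.

```python
def skipgram_pairs(tokens, n=2):
    """Yield (center, context) pairs for every position and every offset within ±n."""
    pairs = []
    for t, center in enumerate(tokens):
        for j in range(-n, n + 1):
            if j == 0:
                continue  # skip the center word itself
            if 0 <= t + j < len(tokens):
                pairs.append((center, tokens[t + j]))
    return pairs

tokens = "the quick brown fox jumps".split()
print(skipgram_pairs(tokens, n=2))
# e.g. ('brown', 'the'), ('brown', 'quick'), ('brown', 'fox'), ('brown', 'jumps'), ...
```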
The skip-gram objective function sums the log probabilities of the $n$ surrounding words to the left and right of the target word $w_{t}$, averaged over all $T$ positions in the corpus:
$$J_\theta = \frac{1}{T}\sum^{T}_{t=1}\sum_{-n\leq{j}\leq{n},\, j\neq 0}\log p\left(w_{t+j}\mid w_{t}\right)$$
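A minimal NumPy sketch that evaluates this objective directly, assuming two illustrative embedding matrices `W_in` (center-word vectors) and `W_out` (context-word vectors) and a full softmax over the vocabulary. This is only to make the formula concrete; for efficiency the paper itself avoids the full softmax (e.g. via hierarchical softmax).

```python
import numpy as np

def skipgram_objective(token_ids, W_in, W_out, n=2):
    """Average of log p(w_{t+j} | w_t) over the corpus, with a full-softmax
    log-probability. Illustrative sketch; names W_in/W_out are assumptions."""
    T = len(token_ids)
    total = 0.0
    for t in range(T):
        v_c = W_in[token_ids[t]]                 # center word vector
        scores = W_out @ v_c                     # score for every vocabulary word
        m = scores.max()                         # stabilized log-softmax
        log_probs = scores - m - np.log(np.sum(np.exp(scores - m)))
        for j in range(-n, n + 1):
            if j == 0 or not (0 <= t + j < T):
                continue
            total += log_probs[token_ids[t + j]]
    return total / T                             # the 1/T averaging in J_theta

# Toy usage with a random vocabulary of 10 words and 5-dimensional vectors.
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(10, 5))
W_out = rng.normal(scale=0.1, size=(10, 5))
print(skipgram_objective([0, 3, 7, 2, 3], W_in, W_out, n=2))
```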
Source: Efficient Estimation of Word Representations in Vector Space
Tasks addressed by papers that use this method:

| Task | Papers | Share |
|---|---|---|
| Graph Embedding | 2 | 6.90% |
| Knowledge Graph Embedding | 2 | 6.90% |
| Knowledge Graphs | 2 | 6.90% |
| Benchmarking | 1 | 3.45% |
| Knowledge Graph Embeddings | 1 | 3.45% |
| Link Prediction | 1 | 3.45% |
| Node Classification | 1 | 3.45% |
| Classification | 1 | 3.45% |
| Text Classification | 1 | 3.45% |