Word embeddings have been widely adopted across a broad range of NLP applications.
Learning word embeddings from large unlabeled corpora has proven successful at improving many natural language tasks. However, existing approaches typically assume that each word occurs sufficiently often in the corpus, so that its representation can be accurately estimated from its contexts. In this paper, we investigate the task of learning word embeddings from very sparse data in an incremental, cognitively plausible way. Our key insight is that word embedding can be naturally cast as a ranking problem: a good embedding should rank a word's true contexts above spurious ones. Based on this insight, we propose WordRank, a novel framework that efficiently estimates word representations via robust ranking, in which an attention mechanism and robustness to noise are readily achieved through DCG-like ranking losses.
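To make the DCG-like weighting concrete, the following is a minimal NumPy sketch of one plausible ranking-based embedding update, assuming a log2 discount rho(t) = log2(1 + t) and margin-based rank estimates from negative sampling. The names (rho, rank_estimate, sgd_step) and all hyperparameters are illustrative assumptions for exposition, not the WordRank reference implementation.

```python
import numpy as np

# Illustrative sketch (assumed names and hyperparameters, not the paper's code).
rng = np.random.default_rng(0)

V, D = 50, 16                               # vocabulary size, embedding dimension
U = 0.1 * rng.standard_normal((V, D))       # word ("input") vectors
C = 0.1 * rng.standard_normal((V, D))       # context ("output") vectors

def rho(t):
    """Concave, DCG-like discount applied to the rank of a true context."""
    return np.log2(1.0 + t)

def rho_prime(t):
    """Derivative of rho; decays with rank, which drives the weighting below."""
    return 1.0 / ((1.0 + t) * np.log(2.0))

def rank_estimate(w, c, n_neg=10):
    """Estimate the margin-based rank of true context c for word w by counting
    sampled negatives that violate the margin (a standard sampling shortcut;
    for simplicity we do not exclude c itself from the negatives)."""
    negs = rng.integers(0, V, size=n_neg)
    s_pos = U[w] @ C[c]
    s_neg = C[negs] @ U[w]
    viol = (s_neg - s_pos + 1.0) > 0        # margin violations
    rank = (V / n_neg) * viol.sum()          # scale sample count up to full vocab
    return rank, negs, viol

def sgd_step(w, c, lr=0.05):
    """One stochastic update on pair (w, c): a hinge-style push on violated
    negatives, rescaled by rho'(rank) as the DCG-like attention weight."""
    rank, negs, viol = rank_estimate(w, c)
    weight = rho_prime(rank)
    for n, v in zip(negs, viol):
        if v:                                # only violated negatives contribute
            uw = U[w].copy()
            U[w] += weight * lr * (C[c] - C[n])
            C[c] += weight * lr * uw
            C[n] -= weight * lr * uw

# Toy corpus of (word, context) co-occurrence pairs.
pairs = [(int(rng.integers(0, V)), int(rng.integers(0, V))) for _ in range(200)]
for epoch in range(5):
    for w, c in pairs:
        sgd_step(w, c)

# Monitor the mean discounted rank on the toy pairs.
loss = np.mean([rho(rank_estimate(w, c)[0]) for w, c in pairs])
print(f"mean rho(rank) after training: {loss:.3f}")
```

Because rho is concave, rho'(rank) decays with the rank: pairs whose true context already sits near the top of the list dominate the updates (the attention effect, as in DCG), while pairs with very large ranks, which are likely noise, are heavily discounted (the robustness effect).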