Why do embedding spaces look as they do?

29 Sep 2021 · Xingzhi Guo, Baojian Zhou, Haochen Chen, Sergiy Verstyuk, Steven Skiena

The power of embedding representations is a curious phenomenon. For embeddings to work effectively as feature representations, there must be substantial latent structure in the domain being encoded. Language vocabularies and Wikipedia topics are human-generated structures that reflect how people organize their world and what they find important. The structure of the resulting embedding spaces reflects the human evolution of language formation and the cultural processes shaping our world. This paper studies what the observed structure of embeddings can tell us about the natural processes that generate new knowledge or concepts. We demonstrate that word and graph embeddings trained on standard datasets using several popular algorithms consistently share two distinct properties: (1) a neighbor frequency concentration that decreases with rank, and (2) specific clustering velocities and power-law-based community structures. We then assess a variety of generative models of embedding spaces by these criteria, and conclude that incremental insertion processes based on Barabási-Albert network generation best model the observed phenomenon on language and network data.
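
To make property (1) concrete, the following sketch shows one way such a measurement could be computed. This is not the paper's code: it assumes an embedding matrix whose rows are ordered by descending corpus frequency, and the function name `neighbor_frequency_concentration` and the parameters `k` and `top_m` are illustrative choices, not names from the paper.

```python
import numpy as np

def neighbor_frequency_concentration(embeddings, k=10, top_m=500):
    """For each word, the fraction of its k nearest neighbors (by cosine
    similarity) that lie among the top_m most frequent words.

    Assumes the rows of `embeddings` are ordered by descending corpus
    frequency, so a row index doubles as a frequency rank.
    """
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)        # never count a word as its own neighbor
    neighbors = np.argpartition(-sims, k, axis=1)[:, :k]
    return (neighbors < top_m).mean(axis=1)

# Toy run with random vectors standing in for frequency-ordered embeddings.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(2000, 64))
conc = neighbor_frequency_concentration(vectors)
print(conc[:5].mean(), conc[-5:].mean())   # head vs. tail of the rank order
```

On real word or graph embeddings, property (1) corresponds to this concentration score falling as the query's frequency rank grows; the random vectors above only exercise the computation.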

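The conclusion names the Barabási-Albert generation process as the best-fitting class of generative models. For readers unfamiliar with it, here is a minimal, self-contained sketch of that incremental insertion mechanism (preferential attachment); it is a generic illustration rather than the paper's experimental setup, and the function `barabasi_albert` and its parameters are illustrative.

```python
import random
from collections import Counter

def barabasi_albert(n, m, seed=0):
    """Grow a graph by incremental insertion with preferential attachment:
    each arriving node links to m existing nodes chosen with probability
    proportional to their current degree. Returns the edge list."""
    rng = random.Random(seed)
    edges = []
    # Each node appears in this pool once per incident edge (plus once for
    # the seed nodes), so uniform sampling from it is degree-proportional.
    pool = list(range(m))                  # seed nodes 0 .. m-1
    for new_node in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(pool))
        for t in targets:
            edges.append((new_node, t))
            pool.extend((new_node, t))
    return edges

edges = barabasi_albert(n=10_000, m=3)
degree = Counter(v for e in edges for v in e)
# A heavy-tailed (approximately power-law) degree distribution emerges:
print("max degree:", max(degree.values()),
      "median degree:", sorted(degree.values())[len(degree) // 2])
```

The key point of such incremental insertion is that early-arriving, well-connected nodes keep attracting links, producing the power-law community structure the abstract refers to.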