Interpretable Entity Representations through Large-Scale Typing

30 Apr 2020 · Yasumasa Onoe, Greg Durrett

In standard methodology for natural language processing, entities in text are typically embedded in dense vector spaces with pre-trained models. Such approaches are strong building blocks for entity-related tasks, but the embeddings they produce require extensive additional processing in neural models, and these entity embeddings are fundamentally difficult to interpret...
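The contrast the abstract draws — opaque dense embeddings versus interpretable entity representations — can be sketched as a vector whose dimensions correspond to human-readable entity types and whose values are type probabilities. The following is a minimal illustration of that idea; the type inventory, entity names, and probability values are invented for this sketch and are not taken from the paper.

```python
# Hypothetical illustration: each dimension of an entity vector is a
# named type, so the representation can be read off directly, unlike
# a dense embedding whose dimensions carry no standalone meaning.

TYPES = ["person", "politician", "athlete", "city", "country"]

def type_vector(probs):
    """Map a {type: probability} dict to a fixed-order probability vector."""
    return [probs.get(t, 0.0) for t in TYPES]

def similarity(u, v):
    """Dot product of two type-probability vectors."""
    return sum(a * b for a, b in zip(u, v))

# Invented example entities with invented type probabilities.
obama = type_vector({"person": 0.99, "politician": 0.95, "athlete": 0.02})
paris = type_vector({"city": 0.98, "country": 0.03})

# Entities sharing high-probability types score high; unrelated
# entities score near zero, and each score is directly explainable
# by inspecting which named types overlap.
```

Because every dimension is labeled, a similarity score can be attributed to specific shared types rather than to opaque latent features.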



