Putting words in context: LSTM language models and lexical ambiguity

ACL 2019 · Laura Aina, Kristina Gulordava, Gemma Boleda

In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial...
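The pipeline the abstract describes (a fixed embedding per word type, contextualized by recurrent hidden layers) can be made concrete with a small sketch. Below is a minimal PyTorch illustration of that setup, not the paper's actual model: the vocabulary size, dimensions, and two-layer depth are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    """Minimal LSTM language model: context-invariant word embeddings
    are put in context by the recurrent hidden layers."""

    def __init__(self, vocab_size: int, embed_dim: int = 300, hidden_dim: int = 650):
        super().__init__()
        # One fixed vector per word type, regardless of context.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # The LSTM mixes each embedding with the preceding context,
        # yielding a context-dependent representation per token.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids: torch.Tensor):
        embedded = self.embedding(token_ids)    # (batch, seq, embed_dim)
        hidden_states, _ = self.lstm(embedded)  # (batch, seq, hidden_dim)
        logits = self.decoder(hidden_states)    # next-word predictions
        # Returning the hidden states as well is what makes it possible
        # to probe them for lexical vs. contextual information.
        return logits, hidden_states

# An ambiguous word (e.g. "bank") receives the same embedding in every
# sentence, but its hidden state differs once context is applied.
model = LSTMLanguageModel(vocab_size=10_000)
tokens = torch.randint(0, 10_000, (2, 5))  # two toy 5-token sequences
logits, states = model(tokens)
print(states.shape)  # torch.Size([2, 5, 650])
```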


Code


No code implementations yet.


Methods used in the Paper

LSTM