
Putting words in context: LSTM language models and lexical ambiguity

In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial...
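The mechanism the abstract describes — one static embedding per word, contextualized by the recurrent hidden layer — can be sketched with a toy recurrent net. This is only an illustration: a plain Elman RNN with random weights stands in for the paper's LSTM, and the vocabulary, dimensions, and example sentences are invented for the sketch.

```python
import numpy as np

# Illustrative setup (assumption: tiny vocabulary and random weights,
# a simple Elman RNN in place of an LSTM).
rng = np.random.default_rng(0)
vocab = {"the": 0, "river": 1, "bank": 2, "loan": 3, "from": 4}
d = 8                                     # embedding / hidden size
E = rng.normal(size=(len(vocab), d))      # context-invariant word embeddings
Wx = rng.normal(size=(d, d)) * 0.1        # input-to-hidden weights
Wh = rng.normal(size=(d, d)) * 0.1        # hidden-to-hidden weights

def contextual_states(words):
    """Run the RNN left to right; each hidden state mixes the word's
    static embedding with the preceding context."""
    h = np.zeros(d)
    states = []
    for w in words:
        h = np.tanh(E[vocab[w]] @ Wx + h @ Wh)
        states.append(h)
    return states

# The ambiguous word "bank" has a single static embedding, but its
# hidden state differs depending on the preceding words.
h_river = contextual_states(["the", "river", "bank"])[-1]
h_loan = contextual_states(["the", "loan", "from", "bank"])[-1]
print(np.allclose(h_river, h_loan))  # the two contextual states differ
```

The point of the sketch is that lexical disambiguation, if it happens at all, must happen in the hidden layer: the embedding matrix `E` assigns "bank" one vector regardless of sense, and only the recurrent state distinguishes the river reading from the financial one.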
