To keep a knowledge graph up to date, an extractor needs not only the ability to recall the triples it encountered during training, but also the ability to extract new triples from contexts it has never seen before.
In this paper, we show that although existing extraction models can memorize and recall already-seen triples, they fail to generalize effectively to unseen triples.
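One simple way to make this seen/unseen distinction concrete (a hypothetical evaluation sketch, not the paper's protocol; the function and variable names are assumptions) is to partition the gold triples by whether they occurred in the training data and score recall on each partition separately:

```python
def split_recall(predicted, gold, train_triples):
    """Measure recall separately on triples seen during training
    (memorization) and triples never seen (generalization).
    All arguments are sets of (head, relation, tail) tuples."""
    seen = gold & train_triples      # memorization cases
    unseen = gold - train_triples    # generalization cases
    recall = lambda ref: len(predicted & ref) / len(ref) if ref else 0.0
    return recall(seen), recall(unseen)


# Example: an extractor that only reproduces training triples
train = {("Paris", "capital_of", "France")}
gold = {("Paris", "capital_of", "France"), ("Kyoto", "located_in", "Japan")}
pred = {("Paris", "capital_of", "France")}
print(split_recall(pred, gold, train))  # (1.0, 0.0): recalls seen, misses unseen
```

A gap between the two recall values is exactly the memorization-versus-generalization failure the abstract describes.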
Transformers with soft attention have been widely adopted for various sequence-to-sequence tasks.
no code implementations • 14 Aug 2020 • Taewoo Lee, Min-Joong Lee, Tae Gyoon Kang, Seokyeoung Jung, Minseok Kwon, Yeona Hong, Jungin Lee, Kyoung-Gu Woo, Ho-Gyeong Kim, Jiseung Jeong, Ji-Hyun Lee, Hosik Lee, Young Sang Choi
We propose an adapter-based multi-domain Transformer language model (LM) for Transformer ASR.
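As a rough illustration of the adapter idea (a minimal sketch, not the authors' implementation; the bottleneck size, placement after the base layer, and all class names here are assumptions), a per-domain bottleneck adapter attached to a frozen Transformer layer might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    with a residual connection. One small adapter per target domain."""
    def __init__(self, d_model: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual path preserves the base LM's representation;
        # only the adapter's few parameters are trained per domain.
        return x + self.up(self.act(self.down(x)))

class AdaptedTransformerLayer(nn.Module):
    """A frozen base Transformer layer followed by a domain-selected adapter."""
    def __init__(self, base_layer: nn.TransformerEncoderLayer,
                 num_domains: int, d_model: int):
        super().__init__()
        self.base = base_layer
        for p in self.base.parameters():
            p.requires_grad = False  # the shared base LM stays fixed
        self.adapters = nn.ModuleList([Adapter(d_model) for _ in range(num_domains)])

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        return self.adapters[domain](self.base(x))
```

The appeal of this design is that adapting the LM to a new domain only requires training the small adapter weights, leaving the shared Transformer parameters untouched.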