no code implementations • RANLP 2021 • Kyoumoto Matsushita, Takuya Makino, Tomoya Iwakura
Most neural-based NLP models receive only vectors encoded from a sequence of subwords obtained from an input text.
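The subword-to-vector pipeline mentioned above can be sketched minimally as follows. The greedy longest-match segmentation, the toy vocabulary, and the random embedding table are all illustrative assumptions, not the tokenizer or model from the paper:

```python
# Minimal sketch: segment a word into subwords with greedy longest-match
# over a toy vocabulary (hypothetical), then look up one vector per subword.
import numpy as np

VOCAB = {"un": 0, "believ": 1, "able": 2, "token": 3, "ize": 4, "s": 5}

def tokenize(word, vocab=VOCAB):
    """Greedily split a word into the longest known subwords."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest match first
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no subword covers position {i} in {word!r}")
    return pieces

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(VOCAB), 8))   # one 8-dim vector per subword

subwords = tokenize("unbelievable")             # ['un', 'believ', 'able']
vectors = embeddings[[VOCAB[s] for s in subwords]]
print(subwords, vectors.shape)                  # (3, 8)
```

The model then sees only `vectors`, never the raw characters, which is the limitation the paper's observation points at.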
no code implementations • IJCNLP 2019 • Taiki Watanabe, Akihiro Tamura, Takashi Ninomiya, Takuya Makino, Tomoya Iwakura
We propose a method to improve named entity recognition (NER) for chemical compounds using multi-task learning by jointly training a chemical NER model and a chemical compound paraphrase model.
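The multi-task setup described above, a shared encoder feeding both a NER tagging head and a paraphrase head, can be sketched as below. All dimensions, weight names, and the plain-NumPy "encoder" are illustrative assumptions, not the paper's actual architecture:

```python
# Hedged sketch of multi-task learning with a shared encoder:
# one representation feeds both a chemical-NER head and a paraphrase head,
# so gradients from both tasks update the shared weights during training.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, n_tags, vocab_size = 16, 32, 5, 100   # toy sizes (assumptions)

W_enc = rng.normal(size=(d_in, d_hid)) * 0.1       # shared encoder weights
W_ner = rng.normal(size=(d_hid, n_tags)) * 0.1     # NER head: per-token tag scores
W_par = rng.normal(size=(d_hid, vocab_size)) * 0.1 # paraphrase head: vocab logits

def encode(x):
    """Shared representation used by both tasks."""
    return np.tanh(x @ W_enc)

def ner_logits(x):
    return encode(x) @ W_ner

def paraphrase_logits(x):
    return encode(x) @ W_par

tokens = rng.normal(size=(7, d_in))                # 7 subword vectors
print(ner_logits(tokens).shape)                    # (7, 5)
print(paraphrase_logits(tokens).shape)             # (7, 100)
```

In training, the two task losses would be summed (or interleaved per batch) so the shared encoder learns features useful for both recognizing and paraphrasing chemical compound names.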
no code implementations • ACL 2019 • Takuya Makino, Tomoya Iwakura, Hiroya Takamura, Manabu Okumura
The experimental results show that a state-of-the-art neural summarization model optimized with GOLC generates fewer overlength summaries while maintaining the fastest processing speed: only 6.70% overlength summaries on CNN/Daily Mail and 7.8% on the long-summary setting of Mainichi, compared with approximately 20% to 50% on CNN/Daily Mail and 10% to 30% on Mainichi for the other optimization methods.