no code implementations • 17 Feb 2022 • Jin Sakuma, Tatsuya Komatsu, Robin Scheibler
We propose multi-layer perceptron (MLP)-based architectures suitable for variable-length input.
Automatic Speech Recognition (ASR) +2
no code implementations • 29 Sep 2021 • Jin Sakuma, Tatsuya Komatsu, Robin Scheibler
We propose three approaches to extend MLP-based architectures for use with sequences of arbitrary length.
Automatic Speech Recognition (ASR) +2
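Neither entry above spells out how the MLP is made length-agnostic, so here is a minimal, hypothetical sketch of one common mechanism: restricting token mixing to a fixed local window so that the same weights apply to any sequence length. The class name, window size, and dimensions are illustrative assumptions, not the papers' actual design.

```python
# Hypothetical sketch: global token mixing in an MLP-Mixer-style block
# ties the weights to one fixed sequence length; mixing over a fixed
# local window instead (a depthwise 1-D convolution, i.e. a per-channel
# MLP over neighboring frames) works for any length.
import torch
import torch.nn as nn

class LocalTokenMixingMLP(nn.Module):
    def __init__(self, dim: int, window: int = 15):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # groups=dim -> one small mixing filter per feature channel,
        # with weights independent of the input length.
        self.mix = nn.Conv1d(dim, dim, kernel_size=window,
                             padding=window // 2, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim), where `time` may vary per batch
        y = self.norm(x).transpose(1, 2)   # (batch, dim, time)
        y = self.mix(y).transpose(1, 2)    # (batch, time, dim)
        return x + y                       # residual connection

feats = torch.randn(2, 137, 80)            # any number of frames works
print(LocalTokenMixingMLP(dim=80)(feats).shape)  # torch.Size([2, 137, 80])
```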
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Shoetsu Sato, Jin Sakuma, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa
Prior to fine-tuning, our method replaces the embedding layers of the NMT model by projecting general word embeddings induced from monolingual data in a target domain onto a source-domain embedding space.
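As a rough illustration of the projection step described above (not the authors' released implementation), one can fit a least-squares linear map from target-domain embeddings to the source-domain space on the words the two vocabularies share, then project every target-domain vector before re-initializing the NMT embedding layer. All names below are hypothetical.

```python
# Hedged sketch of the embedding-projection idea: map target-domain
# word embeddings into the source-domain embedding space using the
# shared vocabulary as anchors. The least-squares choice and all names
# are illustrative assumptions, not the paper's code.
import numpy as np

def project_embeddings(tgt_emb, src_emb, shared_words, tgt_vocab, src_vocab):
    # Anchor pairs: vectors for words present in both vocabularies.
    X = np.stack([tgt_emb[tgt_vocab[w]] for w in shared_words])
    Y = np.stack([src_emb[src_vocab[w]] for w in shared_words])
    # Linear map W minimizing ||X @ W - Y||^2.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # Project the full target-domain table; the result would replace
    # the NMT model's embedding layer prior to fine-tuning.
    return tgt_emb @ W
```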
no code implementations • CoNLL 2019 • Jin Sakuma, Naoki Yoshinaga
We present a method for applying a neural network trained on one (resource-rich) language for a given task to other (resource-poor) languages.
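The abstract does not detail the transfer mechanism, but one recipe it is compatible with is to train the task model on source-language embeddings and, at test time, swap in target-language embeddings that have been aligned into the same space. The sketch below assumes such pre-aligned tables; every name and number in it is a stand-in for illustration.

```python
# Hypothetical sketch of cross-lingual transfer through a shared
# embedding space: the trained task model is reused unchanged and only
# the embedding table is swapped. Random matrices stand in for real,
# pre-aligned embeddings; nothing here is the paper's actual method.
import numpy as np

rng = np.random.default_rng(0)
emb_rich = rng.normal(size=(1000, 50))  # resource-rich language table
emb_poor = rng.normal(size=(800, 50))   # resource-poor table, same space
w = rng.normal(size=50)                 # classifier trained on the rich language

def predict(token_ids, emb):
    # Bag-of-embeddings representation fed to the fixed linear classifier.
    return float(emb[token_ids].mean(axis=0) @ w > 0)

print(predict([3, 17, 42], emb_rich))   # source-language input
print(predict([5, 9], emb_poor))        # target-language input, same model
```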
no code implementations • EMNLP 2020 • Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, Yuji Matsumoto
The embeddings of entities in a large knowledge base (e.g., Wikipedia) are highly beneficial for solving various natural language tasks that involve real-world knowledge.
1 code implementation • WS 2017 • Masato Neishi, Jin Sakuma, Satoshi Tohda, Shonosuke Ishiwatari, Naoki Yoshinaga, Masashi Toyoda
In this paper, we describe the team UT-IIS's system and results for the WAT 2017 translation tasks.