Search Results for author: Seiichi Yamamoto

Found 7 papers, 0 papers with code

Investigating context features hidden in End-to-End TTS

no code implementations • 4 Nov 2018 • Kohki Mametani, Tsuneo Kato, Seiichi Yamamoto

Recent studies have introduced end-to-end TTS, which integrates the production of context and acoustic features in statistical parametric speech synthesis.

Feature Engineering • Speech Synthesis

Utterance Intent Classification of a Spoken Dialogue System with Efficiently Untied Recursive Autoencoders

no code implementations • WS 2017 • Tsuneo Kato, Atsushi Nagai, Naoki Noda, Ryosuke Sumitomo, Jianming Wu, Seiichi Yamamoto

Recursive autoencoders (RAEs) for compositionality of a vector space model were applied to utterance intent classification of a smartphone-based Japanese-language spoken dialogue system.

Automatic Speech Recognition (ASR) • Classification • +5
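The entry above only names the technique, so here is a minimal, untrained sketch of the greedy recursive-autoencoder composition idea (in the style of Socher et al.): adjacent word vectors are merged bottom-up by choosing the pair whose parent reconstructs them best, and the root vector would feed an intent classifier. This is an illustrative assumption, not the paper's "efficiently untied" RAE; the dimensions and random weights below are invented for the sketch.

    # Minimal, untrained sketch of greedy RAE composition (illustration only,
    # not the "efficiently untied" variant from the paper). Weights are random.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 8  # embedding size, arbitrary for the sketch

    # Encoder/decoder weights; a real system would train these jointly with
    # the intent classifier.
    W_enc = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
    b_enc = np.zeros(DIM)
    W_dec = rng.normal(scale=0.1, size=(2 * DIM, DIM))
    b_dec = np.zeros(2 * DIM)

    def encode(c1, c2):
        """Compose two child vectors into one parent vector."""
        return np.tanh(W_enc @ np.concatenate([c1, c2]) + b_enc)

    def reconstruction_error(c1, c2, parent):
        """How well the parent reconstructs its two children."""
        recon = np.tanh(W_dec @ parent + b_dec)
        return np.sum((recon - np.concatenate([c1, c2])) ** 2)

    def rae_compose(word_vectors):
        """Greedily merge the adjacent pair with the lowest reconstruction
        error until a single utterance-level vector remains."""
        nodes = list(word_vectors)
        while len(nodes) > 1:
            candidates = []
            for i in range(len(nodes) - 1):
                parent = encode(nodes[i], nodes[i + 1])
                err = reconstruction_error(nodes[i], nodes[i + 1], parent)
                candidates.append((err, i, parent))
            _, i, parent = min(candidates, key=lambda t: t[0])
            nodes[i:i + 2] = [parent]
        return nodes[0]  # would be fed to an intent classifier

    # Toy usage: five random "word" embeddings stand in for a tokenized utterance.
    utterance = [rng.normal(size=DIM) for _ in range(5)]
    print(rae_compose(utterance).shape)  # (8,)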

What topic do you want to hear about? A bilingual talking robot using English and Japanese Wikipedias

no code implementations • COLING 2016 • Graham Wilcock, Kristiina Jokinen, Seiichi Yamamoto

We demonstrate a bilingual robot application, WikiTalk, that can talk fluently in both English and Japanese about almost any topic using information from English and Japanese Wikipedias.

Navigate

Joining-in-type Humanoid Robot Assisted Language Learning System

no code implementations • LREC 2016 • AlBara Khalifa, Tsuneo Kato, Seiichi Yamamoto

Dialogue robots are attractive to people, and in language learning systems they motivate learners and let them practice conversational skills in a more realistic environment.

Automatic Speech Recognition (ASR) • +2

Quantitative Analysis of Gazes and Grounding Acts in L1 and L2 Conversations

no code implementations • LREC 2016 • Ichiro Umata, Koki Ijuin, Mitsuru Ishida, Moe Takeuchi, Seiichi Yamamoto

The listener's gazing activities during utterances were analyzed in a face-to-face three-party conversation setting.

Phoneme Set Design Using English Speech Database by Japanese for Dialogue-Based English CALL Systems

no code implementations • LREC 2014 • Xiaoyun Wang, Jinsong Zhang, Masafumi Nishida, Seiichi Yamamoto

This paper describes a method of generating a reduced phoneme set for dialogue-based computer-assisted language learning (CALL) systems.

Language Modelling • Speech Recognition • +1
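As a rough illustration of what a "reduced phoneme set" means here: phones that a learner population tends to confuse are collapsed into shared classes before recognition and modelling. The merges below (e.g. /l/-/r/) are hypothetical examples chosen for the sketch, not the set the paper derives from its Japanese-accented English speech database.

    # Hypothetical reduced-phoneme mapping (illustration only; the actual
    # merges in the paper are data-driven, not the ones listed here).
    REDUCED = {
        "L": "L_R", "R": "L_R",    # /l/-/r/ confusion
        "B": "B_V", "V": "B_V",    # /b/-/v/ confusion
        "S": "S_TH", "TH": "S_TH", # /s/-/th/ confusion
    }

    def reduce_phonemes(arpabet_seq):
        """Map an ARPAbet-style phone sequence onto the reduced set."""
        return [REDUCED.get(p, p) for p in arpabet_seq]

    # "really" -> R IY L IY becomes L_R IY L_R IY
    print(reduce_phonemes(["R", "IY", "L", "IY"]))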

Multimodal Corpus of Multi-party Conversations in Second Language

no code implementations • LREC 2012 • Shota Yamasaki, Hirohisa Furukawa, Masafumi Nishida, Kristiina Jokinen, Seiichi Yamamoto

We collected a multimodal corpus of multi-party conversations in English as a second language to investigate the differences in communication styles.

Speech Recognition
