Search Results for author: Takashi Morita

Found 9 papers, 5 papers with code

Positional Encoding Helps Recurrent Neural Networks Handle a Large Vocabulary

no code implementations 31 Jan 2024 Takashi Morita

This study uses synthetic benchmarks to examine the effects of positional encoding on recurrent neural networks (RNNs).

Time Series
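
The entry above concerns positional encoding for RNNs. Below is a minimal, hypothetical sketch (PyTorch, not the paper's code) of one common way to combine the two: concatenating a sinusoidal position table with the token embeddings before a recurrent layer. The GRU, the dimensions, and the vocabulary size are illustrative assumptions.

import torch
import torch.nn as nn

def sinusoidal_encoding(seq_len, dim):
    # Standard Transformer-style sinusoidal table; the paper may use a different variant.
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)   # (seq_len, 1)
    idx = torch.arange(0, dim, 2, dtype=torch.float32)              # (dim/2,)
    angles = pos / (10000.0 ** (idx / dim))                         # (seq_len, dim/2)
    pe = torch.zeros(seq_len, dim)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe                                                       # (seq_len, dim)

class PositionalRNN(nn.Module):
    def __init__(self, vocab_size, dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(input_size=2 * dim, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):                                      # tokens: (batch, seq_len)
        emb = self.embed(tokens)                                    # (batch, seq_len, dim)
        pe = sinusoidal_encoding(tokens.size(1), emb.size(-1)).to(emb.device)
        x = torch.cat([emb, pe.expand(emb.size(0), -1, -1)], dim=-1)  # append positions to embeddings
        h, _ = self.rnn(x)
        return self.out(h)                                          # next-token logits

logits = PositionalRNN(vocab_size=50_000)(torch.randint(0, 50_000, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 50000])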

Adaptive Uncertainty-Guided Model Selection for Data-Driven PDE Discovery

1 code implementation 20 Aug 2023 Pongpisit Thanasutives, Takashi Morita, Masayuki Numao, Ken-ichi Fukui

We propose a new parameter-adaptive uncertainty-penalized Bayesian information criterion (UBIC) to prioritize the parsimonious partial differential equation (PDE) that sufficiently governs noisy spatial-temporal observed data with few reliable terms.

Denoising Model Discovery +1
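
For context on the entry above, the following is a minimal sketch of information-criterion-based model selection over a library of candidate PDE terms. It uses the standard BIC only; the paper's UBIC additionally penalizes parameter uncertainty, and its exact form is not reproduced here. The candidate library, term names, and data are invented for illustration.

import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Hypothetical candidate library: each column is a candidate term (e.g. u, u_x, u_xx, u*u_x).
library = rng.normal(size=(n, 4))
term_names = ["u", "u_x", "u_xx", "u*u_x"]
# Synthetic target u_t built from two of the terms plus noise.
u_t = 0.5 * library[:, 1] - 1.2 * library[:, 2] + 0.05 * rng.normal(size=n)

def bic(y, X):
    # Fit the subset by least squares and score it with BIC = n*ln(RSS/n) + k*ln(n).
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    k = X.shape[1]
    return len(y) * np.log(rss / len(y)) + k * np.log(len(y)), coef

best = None
for size in range(1, library.shape[1] + 1):
    for subset in itertools.combinations(range(library.shape[1]), size):
        score, coef = bic(u_t, library[:, list(subset)])
        if best is None or score < best[0]:
            best = (score, subset, coef)

score, subset, coef = best
print("selected terms:", [term_names[i] for i in subset], "coefficients:", np.round(coef, 2))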

Exploring TTS without T Using Biologically/Psychologically Motivated Neural Network Modules (ZeroSpeech 2020)

1 code implementation 11 May 2020 Takashi Morita, Hiroki Koda

In this study, we report our exploration of Text-To-Speech without Text (TTS without T) in the Zero Resource Speech Challenge 2020, in which participants proposed an end-to-end, unsupervised system that learned speech recognition and TTS together.

Clustering Speech Recognition +1

Neural Language Models as Psycholinguistic Subjects: Representations of Syntactic State

2 code implementations NAACL 2019 Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, Roger Levy

We deploy the methods of controlled psycholinguistic experimentation to shed light on the extent to which the behavior of neural network language models reflects incremental representations of syntactic state.
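
The standard dependent measure in this line of work is per-word surprisal, -log2 P(word | context), read off a trained language model. The sketch below shows only how that quantity is computed; the LSTM here is untrained (so the numbers are meaningless), and the vocabulary and architecture are illustrative assumptions rather than the paper's models.

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab = {w: i for i, w in enumerate(["<s>", "the", "dog", "that", "barked", "ran"])}

class TinyLM(nn.Module):
    def __init__(self, v, dim=32):
        super().__init__()
        self.emb = nn.Embedding(v, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, v)

    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.out(h)          # logits for the next word at each position

def surprisals(model, words):
    ids = torch.tensor([[vocab[w] for w in ["<s>"] + words]])
    logp = F.log_softmax(model(ids), dim=-1)
    # Surprisal of word t is read from the prediction made after word t-1 (nats -> bits).
    ln2 = torch.log(torch.tensor(2.0)).item()
    return [-logp[0, t, vocab[w]].item() / ln2 for t, w in enumerate(words)]

model = TinyLM(len(vocab))
print(list(zip(["the", "dog", "barked"], surprisals(model, ["the", "dog", "barked"]))))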

Superregular grammars do not provide additional explanatory power but allow for a compact analysis of animal song

no code implementations 5 Nov 2018 Takashi Morita, Hiroki Koda

A pervasive belief regarding the differences between human language and animal vocal sequences (song) is that they belong to different classes of computational complexity: animal song is confined to the regular languages, whereas human language is superregular.
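
To make the regular-versus-superregular contrast in the abstract above concrete, here is a toy sketch (not from the paper): a regular "song" pattern recognizable by a regex/finite-state machine, versus the superregular pattern a^n b^n, whose recognition in general requires unbounded memory (a counter or stack).

import re

# Regular: a "song" made of one or more A-notes followed by one or more B-notes.
regular_song = re.compile(r"^A+B+$")

def is_anbn(s):
    # Superregular (context-free): equal numbers of A's then B's;
    # no finite-state device can recognize this pattern for unbounded n.
    half = len(s) // 2
    return len(s) % 2 == 0 and s[:half] == "A" * half and s[half:] == "B" * half

for s in ["AAB", "AABB", "AAABB"]:
    print(s, bool(regular_song.match(s)), is_anbn(s))
# AAB   True  False
# AABB  True  True
# AAABB True  False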

What do RNN Language Models Learn about Filler-Gap Dependencies?

no code implementations WS 2018 Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.

Language Modelling Machine Translation
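
Work on filler-gap dependencies typically uses a 2x2 design crossing the presence of a wh-filler with the presence of a gap, and measures surprisal at the critical region. The worked example below uses invented surprisal values and example sentences purely for illustration; the exact definition of the licensing interaction in the paper may differ in sign convention.

surprisal = {                      # hypothetical surprisals (bits) at the critical region
    ("+filler", "+gap"): 6.0,      # "I know what the lion devoured __ yesterday."
    ("-filler", "+gap"): 11.5,     # "I know that the lion devoured __ yesterday."
    ("+filler", "-gap"): 10.0,     # "I know what the lion devoured the gazelle yesterday."
    ("-filler", "-gap"): 6.5,      # "I know that the lion devoured the gazelle yesterday."
}
# Difference-in-differences: how much a filler reduces gap surprisal,
# plus how much it raises surprisal when no gap appears.
interaction = ((surprisal[("-filler", "+gap")] - surprisal[("+filler", "+gap")])
               + (surprisal[("+filler", "-gap")] - surprisal[("-filler", "-gap")]))
print(f"licensing interaction: {interaction:.1f} bits")   # 9.0 with these toy numbers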

RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency

1 code implementation 5 Sep 2018 Richard Futrell, Ethan Wilcox, Takashi Morita, Roger Levy

Recurrent neural networks (RNNs) are the state of the art in sequence modeling for natural language.

Language Modelling

What do RNN Language Models Learn about Filler-Gap Dependencies?

no code implementations 31 Aug 2018 Ethan Wilcox, Roger Levy, Takashi Morita, Richard Futrell

RNN language models have achieved state-of-the-art perplexity results and have proven useful in a suite of NLP tasks, but it is as yet unclear what syntactic generalizations they learn.
