Search Results for author: Yoshimasa Tsuruoka

Found 39 papers, 7 papers with code

Zero-pronoun Data Augmentation for Japanese-to-English Translation

no code implementations 1 Jul 2021 Ryokan Ri, Toshiaki Nakazawa, Yoshimasa Tsuruoka

For Japanese-to-English translation, zero pronouns in Japanese pose a challenge, since the model needs to infer and produce the corresponding pronoun on the target (English) side.

Data Augmentation Machine Translation

Modeling Target-side Inflection in Placeholder Translation

1 code implementation 1 Jul 2021 Ryokan Ri, Toshiaki Nakazawa, Yoshimasa Tsuruoka

Placeholder translation systems enable users to specify how a specific phrase is translated in the output sentence.

Utilizing Skipped Frames in Action Repeats via Pseudo-Actions

no code implementations 7 May 2021 Taisei Hashimoto, Yoshimasa Tsuruoka

The key idea of our method is to make the transitions between action-decision points usable as training data by introducing pseudo-actions.

Continuous Control OpenAI Gym
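The idea above can be sketched as a minimal transition-expansion routine. The function name and the choice of the held action as the pseudo-action label are illustrative assumptions, not the paper's exact formulation:

```python
def pseudo_action_transitions(frames, action, rewards):
    """Expand one action-repeat step into per-frame transitions (sketch).

    frames:  list of k+1 observations seen while `action` was held for k frames.
    rewards: list of k per-frame rewards received during the repeat.
    Treats the held action as the pseudo-action for each skipped frame,
    so intermediate frames also become (s, a, r, s') training data.
    """
    transitions = []
    for i in range(len(frames) - 1):
        transitions.append((frames[i], action, rewards[i], frames[i + 1]))
    return transitions
```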

Meta-Model-Based Meta-Policy Optimization

no code implementations 4 Jun 2020 Takuya Hiraoka, Takahisa Imagawa, Voot Tangkaratt, Takayuki Osa, Takashi Onishi, Yoshimasa Tsuruoka

Model-based meta-reinforcement learning (RL) methods have recently been shown to be a promising approach to improving the sample efficiency of RL in multi-task settings.

Continuous Control Meta-Learning +1

Data Augmentation with Unsupervised Machine Translation Improves the Structural Similarity of Cross-lingual Word Embeddings

no code implementations ACL 2021 Sosuke Nishikawa, Ryokan Ri, Yoshimasa Tsuruoka

Unsupervised cross-lingual word embedding (CLWE) methods learn a linear transformation matrix that maps two monolingual embedding spaces that are separately trained with monolingual corpora.

Data Augmentation Unsupervised Machine Translation +1
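The linear-mapping setup described in the abstract is commonly solved with orthogonal Procrustes over a seed dictionary of aligned word pairs. The following is a generic sketch of that standard baseline, not the paper's specific augmentation method:

```python
import numpy as np

def procrustes_map(X, Y):
    """Orthogonal W minimizing ||XW - Y||_F for aligned embedding matrices.

    X: source-language embeddings of seed pairs, shape (n, d)
    Y: target-language embeddings of the same pairs, shape (n, d)
    The closed-form solution comes from the SVD of X^T Y.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt
```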

Revisiting the Context Window for Cross-lingual Word Embeddings

no code implementations ACL 2020 Ryokan Ri, Yoshimasa Tsuruoka

Existing approaches to mapping-based cross-lingual word embeddings are based on the assumption that the source and target embedding spaces are structurally similar.

Bilingual Lexicon Induction Word Embeddings

Optimistic Proximal Policy Optimization

no code implementations 25 Jun 2019 Takahisa Imagawa, Takuya Hiraoka, Yoshimasa Tsuruoka

Reinforcement learning, a machine learning framework for training an autonomous agent based on rewards, has shown outstanding results in various domains.

Building a Computer Mahjong Player via Deep Convolutional Neural Networks

no code implementations 5 Jun 2019 Shiqi Gao, Fuminori Okuya, Yoshihiro Kawahara, Yoshimasa Tsuruoka

The evaluation function for imperfect information games is hard to define, yet it has a significant impact on the playing strength of a program.

Game of Go

Learning Robust Options by Conditional Value at Risk Optimization

1 code implementation NeurIPS 2019 Takuya Hiraoka, Takahisa Imagawa, Tatsuya Mori, Takashi Onishi, Yoshimasa Tsuruoka

While there are several methods to learn options that are robust against the uncertainty of model parameters, these methods only consider either the worst case or the average (ordinary) case for learning options.
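CVaR, the risk measure named in the title, interpolates between the worst case and the average case: it is the mean of the worst alpha-fraction of outcomes. A minimal empirical estimator (function name assumed, not taken from the paper's code):

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Conditional Value at Risk: mean of the worst alpha-fraction of returns.

    alpha -> 0 approaches the worst case; alpha = 1 is the ordinary average,
    which is the spectrum the robust-options method optimizes over.
    """
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()
```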

Synthesizing Chemical Plant Operation Procedures using Knowledge, Dynamic Simulation and Deep Reinforcement Learning

no code implementations 6 Mar 2019 Shumpei Kubosawa, Takashi Onishi, Yoshimasa Tsuruoka

Chemical plants are complex dynamical systems consisting of many components for manipulation and sensing, whose state transitions depend on various factors such as time, disturbances, and operation procedures.

Neural Fictitious Self-Play on ELF Mini-RTS

no code implementations 6 Feb 2019 Keigo Kawamura, Yoshimasa Tsuruoka

Despite the notable successes in video games such as Atari 2600, current AI is yet to defeat human champions in the domain of real-time strategy (RTS) games.

Partially Non-Recurrent Controllers for Memory-Augmented Neural Networks

no code implementations 30 Dec 2018 Naoya Taguchi, Yoshimasa Tsuruoka

Memory-Augmented Neural Networks (MANNs) are a class of neural networks equipped with an external memory, and are reported to be effective for tasks requiring a large long-term memory and its selective use.

Refining Manually-Designed Symbol Grounding and High-Level Planning by Policy Gradients

no code implementations 29 Sep 2018 Takuya Hiraoka, Takashi Onishi, Takahisa Imagawa, Yoshimasa Tsuruoka

In this paper, we propose a framework that can automatically refine symbol grounding functions and a high-level planner to reduce human effort for designing these modules.

Decision Making

Multilingual Extractive Reading Comprehension by Runtime Machine Translation

1 code implementation 10 Sep 2018 Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka

Given a target language without RC training data and a pivot language with RC training data (e.g., English), our method leverages existing RC resources in the pivot language by combining a competitive RC model in the pivot language with an attentive Neural Machine Translation (NMT) model.

Machine Translation Reading Comprehension

Monte Carlo Tree Search with Scalable Simulation Periods for Continuously Running Tasks

no code implementations 7 Sep 2018 Seydou Ba, Takuya Hiraoka, Takashi Onishi, Toru Nakata, Yoshimasa Tsuruoka

The evaluation results show that, with variable simulation times, the proposed approach outperforms the conventional MCTS in the evaluated continuous decision space tasks and improves the performance of MCTS in most of the ALE tasks.

Atari Games

Accelerated Reinforcement Learning for Sentence Generation by Vocabulary Prediction

1 code implementation NAACL 2019 Kazuma Hashimoto, Yoshimasa Tsuruoka

A major obstacle in reinforcement learning-based sentence generation is the large action space whose size is equal to the vocabulary size of the target-side language.

Image Captioning Machine Translation
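Restricting the softmax to a small predicted candidate vocabulary is one way to shrink that action space from |V| to a few hundred tokens. A sketch of the restricted softmax step; the function name and how candidates are obtained are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def restricted_softmax(logits, candidate_ids):
    """Softmax over only a predicted candidate vocabulary (sketch).

    logits:        full-vocabulary scores, shape (|V|,)
    candidate_ids: indices of the per-sentence candidate tokens
    Returns the candidate ids with their renormalized probabilities,
    so sampling/updates touch only |candidates| actions instead of |V|.
    """
    sub = logits[candidate_ids]
    sub = sub - sub.max()          # stabilize the exponentials
    p = np.exp(sub)
    return candidate_ids, p / p.sum()
```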

Hierarchical Reinforcement Learning with Abductive Planning

no code implementations 28 Jun 2018 Kazeto Yamamoto, Takashi Onishi, Yoshimasa Tsuruoka

One potential solution to this problem is to combine reinforcement learning with automated symbol planning and utilize prior knowledge on the domain.

Hierarchical Reinforcement Learning

Learning to Parse and Translate Improves Neural Machine Translation

1 code implementation ACL 2017 Akiko Eriguchi, Yoshimasa Tsuruoka, Kyunghyun Cho

There has been relatively little attention to incorporating linguistic priors into neural machine translation.

Machine Translation

Neural Machine Translation with Source-Side Latent Graph Parsing

no code implementations EMNLP 2017 Kazuma Hashimoto, Yoshimasa Tsuruoka

This paper presents a novel neural machine translation model which jointly learns translation and source-side latent graph representations of sentences.

Machine Translation

Character-based Decoding in Tree-to-Sequence Attention-based Neural Machine Translation

no code implementations WS 2016 Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka

This paper reports our systems (UT-AKY) submitted to the 3rd Workshop on Asian Translation 2016 (WAT'16) and their results in the English-to-Japanese translation task.

Machine Translation

Domain Adaptation for Neural Networks by Parameter Augmentation

no code implementations WS 2016 Yusuke Watanabe, Kazuma Hashimoto, Yoshimasa Tsuruoka

Recently, recurrent neural networks have been shown to be successful on a variety of NLP tasks such as caption generation; however, existing domain adaptation techniques are limited to (1) tuning the model parameters on the target dataset after training on the source dataset, or (2) designing the network with dual outputs, one for the source domain and the other for the target domain.

Domain Adaptation

Asymmetric Move Selection Strategies in Monte-Carlo Tree Search: Minimizing the Simple Regret at Max Nodes

no code implementations 8 May 2016 Yun-Ching Liu, Yoshimasa Tsuruoka

We develop the Asymmetric-MCTS algorithm, which is an MCTS variant that applies a simple regret algorithm on max nodes, and the UCB algorithm on min nodes.
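A minimal sketch of asymmetric child selection, assuming a sqrt-exploration rule as the simple-regret strategy at max nodes and standard UCB at min nodes; the exact formulas and constants in the paper may differ:

```python
import math

def select_child(children, is_max_node, c=1.4):
    """Pick a child index. children: list of (total_value, visits),
    with values from the max player's perspective.

    Max nodes use a wider sqrt-exploration term (assumed simple-regret form);
    min nodes use standard UCB on negated values, minimizing the max
    player's value while controlling cumulative regret.
    """
    N = sum(n for _, n in children)
    best, best_score = None, -math.inf
    for i, (w, n) in enumerate(children):
        q = w / n
        if is_max_node:
            score = q + c * math.sqrt(math.sqrt(N) / n)
        else:
            score = -q + c * math.sqrt(math.log(N) / n)
        if score > best_score:
            best, best_score = i, score
    return best
```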

Adaptive Joint Learning of Compositional and Non-Compositional Phrase Embeddings

no code implementations ACL 2016 Kazuma Hashimoto, Yoshimasa Tsuruoka

We present a novel method for jointly learning compositional and non-compositional phrase embeddings by adaptively weighting both types of embeddings using a compositionality scoring function.
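The adaptive weighting can be sketched as a sigmoid compositionality score gating the two embedding types. The scoring function's inputs and parameterization here are an assumption for illustration, not the paper's exact model:

```python
import numpy as np

def phrase_embedding(comp_vec, noncomp_vec, score_w, score_b):
    """Adaptively mix compositional and non-compositional embeddings (sketch).

    comp_vec:    embedding composed from the phrase's word vectors
    noncomp_vec: directly learned embedding for the whole phrase
    alpha in (0, 1) is a learned compositionality score; compositional
    phrases push alpha toward 1, idiomatic ones toward 0.
    """
    alpha = 1.0 / (1.0 + np.exp(-(score_w @ comp_vec + score_b)))
    return alpha * comp_vec + (1.0 - alpha) * noncomp_vec
```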

Tree-to-Sequence Attentional Neural Machine Translation

1 code implementation ACL 2016 Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka

Most of the existing Neural Machine Translation (NMT) models focus on the conversion of sequential data and do not directly use syntactic information.

Machine Translation

Adapting Improved Upper Confidence Bounds for Monte-Carlo Tree Search

no code implementations 11 May 2015 Yun-Ching Liu, Yoshimasa Tsuruoka

The UCT algorithm, which combines the UCB algorithm and Monte-Carlo Tree Search (MCTS), is currently the most widely used variant of MCTS.
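For reference, the standard UCB1 rule that UCT applies when selecting a child at each tree node:

```python
import math

def ucb1(total_reward, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score: exploitation (mean reward) plus an exploration bonus
    that shrinks as a child is visited relative to its parent."""
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)
```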
