Search Results for author: Kotaro Nakayama

Found 8 papers, 1 paper with code

Improving Bi-directional Generation between Different Modalities with Variational Autoencoders

no code implementations 26 Jan 2018 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

However, we found that when this model attempts to generate a high-dimensional modality that is missing at the input, the joint representation collapses and this modality cannot be generated successfully.

Censoring Representations with Multiple-Adversaries over Random Subspaces

no code implementations ICLR 2018 Yusuke Iwasawa, Kotaro Nakayama, Yutaka Matsuo

AFL learns such representations by training the network to deceive an adversary that predicts the sensitive information from the network; the success of AFL therefore relies heavily on the choice of adversary.
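The training dynamic is easier to see in code. Below is a minimal PyTorch sketch of adversarial feature learning with multiple adversaries over random subspaces: each adversary tries to predict the sensitive attribute from its own random slice of the representation, while the encoder is trained to solve its task and deceive all of them. The toy data, layer sizes, number of adversaries, and loss weight are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Illustrative sketch of AFL with multiple adversaries over random
# subspaces. Dimensions, data, and the 0.1 loss weight are assumptions.
torch.manual_seed(0)

feat_dim, n_adv, sub_dim = 32, 3, 16
encoder = nn.Sequential(nn.Linear(10, feat_dim), nn.ReLU())
task_head = nn.Linear(feat_dim, 2)               # main prediction task
# Each adversary sees only a fixed random subspace of the representation.
subspaces = [torch.randperm(feat_dim)[:sub_dim] for _ in range(n_adv)]
adversaries = [nn.Linear(sub_dim, 2) for _ in range(n_adv)]  # predict sensitive label

opt_enc = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(
    [p for a in adversaries for p in a.parameters()], lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(64, 10)
y_task = torch.randint(0, 2, (64,))      # task labels (toy data)
y_sens = torch.randint(0, 2, (64,))      # sensitive attribute to censor

for step in range(100):
    # 1) Update adversaries to predict the sensitive attribute from z.
    z = encoder(x).detach()
    adv_loss = sum(ce(a(z[:, idx]), y_sens)
                   for a, idx in zip(adversaries, subspaces))
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
    # 2) Update encoder: solve the task while deceiving every adversary.
    z = encoder(x)
    deceive = sum(ce(a(z[:, idx]), y_sens)
                  for a, idx in zip(adversaries, subspaces))
    enc_loss = ce(task_head(z), y_task) - 0.1 * deceive  # maximize adversary error
    opt_enc.zero_grad(); enc_loss.backward(); opt_enc.step()
```

Splitting the representation across several adversaries is meant to give the encoder a more robust opponent than a single network, which the abstract identifies as the weak point of plain AFL.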

Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection

no code implementations 9 Jun 2017 Mohammadamin Barekatain, Miquel Martí, Hsueh-Fu Shih, Samuel Murray, Kotaro Nakayama, Yutaka Matsuo, Helmut Prendinger

Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios.

Action Detection

Neural Machine Translation with Latent Semantic of Image and Text

no code implementations 25 Nov 2016 Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

Earlier studies introduced a latent variable to capture the entire meaning of a sentence and achieved improvements in attention-based Neural Machine Translation.

Machine Translation, Sentence

Joint Multimodal Learning with Deep Generative Models

2 code implementations 7 Nov 2016 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

We propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on a joint representation.
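For intuition, here is a minimal PyTorch sketch of a JMVAE-style model: a single latent code is inferred jointly from two modalities, and each modality is decoded from that code independently. All dimensions, architectures, and the toy inputs are assumptions for illustration; the authors' released code implements the real model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative JMVAE-style model: one latent z inferred from both
# modalities, each modality decoded independently from z. All sizes
# and the toy data below are assumptions, not the paper's setup.
x_dim, w_dim, z_dim = 784, 10, 20
joint_enc = nn.Linear(x_dim + w_dim, 2 * z_dim)   # outputs mean and log-variance
dec_x = nn.Linear(z_dim, x_dim)                   # p(x|z)
dec_w = nn.Linear(z_dim, w_dim)                   # p(w|z)

def elbo(x, w):
    mu, logvar = joint_enc(torch.cat([x, w], dim=1)).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    rec_x = F.binary_cross_entropy_with_logits(dec_x(z), x, reduction="sum")
    rec_w = F.cross_entropy(dec_w(z), w.argmax(dim=1), reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return -(rec_x + rec_w + kl)                  # ELBO to be maximized

x = torch.rand(8, x_dim)                          # toy "image" modality
w = F.one_hot(torch.randint(0, w_dim, (8,)), w_dim).float()  # toy "label" modality
loss = -elbo(x, w)
loss.backward()
```

With a model of this shape, a missing modality can in principle be generated by inferring z from the available modality alone and decoding the other, which is the bi-directional generation setting that the 2018 paper above revisits.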
