Search Results for author: Masahiro Suzuki

Found 18 papers, 2 papers with code

Learning shared manifold representation of images and attributes for generalized zero-shot learning

no code implementations ICLR 2019 Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo

To solve this, we propose learning a mapping that embeds both images and attributes into a shared representation space that generalizes even to unseen classes by interpolating from the information of seen classes, an approach we refer to as shared manifold learning.

Generalized Zero-Shot Learning

A survey of multimodal deep generative models

no code implementations 5 Jul 2022 Masahiro Suzuki, Yutaka Matsuo

In recent years, deep generative models, i.e., generative models whose distributions are parameterized by deep neural networks, have attracted much attention; variational autoencoders in particular are well suited to the above challenges because they can accommodate heterogeneity and infer good representations of data.
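For reference, the variational autoencoder mentioned in this abstract maximizes the evidence lower bound. In standard notation (an assumption — this is the textbook form, not an equation quoted from the survey), with encoder $q_\phi(z \mid x)$, decoder $p_\theta(x \mid z)$, and prior $p(z)$:

```latex
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - D_{\mathrm{KL}}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)
```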

Score Transformer: Generating Musical Score from Note-level Representation

1 code implementation 1 Dec 2021 Masahiro Suzuki

In this paper, we explore the tokenized representation of musical scores using the Transformer model to automatically generate musical scores.

Improving the Robustness to Variations of Objects and Instructions with a Neuro-Symbolic Approach for Interactive Instruction Following

no code implementations 13 Oct 2021 Kazutoshi Shinoda, Yuki Takezawa, Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo

An interactive instruction following task has been proposed as a benchmark for learning to map natural language instructions and first-person vision into sequences of actions to interact with objects in 3D environments.

Instruction Following

Scalable multimodal variational autoencoders with surrogate joint posterior

no code implementations 29 Sep 2021 Masahiro Suzuki, Yutaka Matsuo

A state-of-the-art approach to learning this aggregation of experts is to encourage all modalities to be reconstructed and cross-generated from arbitrary subsets.
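One widely used aggregation of experts (an assumption here, since the abstract does not spell out the form, but standard in product-of-experts multimodal VAEs) combines the unimodal encoders $\tilde{q}(z \mid x_m)$ into a joint posterior:

```latex
q(z \mid x_1, \ldots, x_M) \;\propto\; p(z) \prod_{m=1}^{M} \tilde{q}(z \mid x_m)
```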

Learning Global Spatial Information for Multi-View Object-Centric Models

no code implementations 29 Sep 2021 Yuya Kobayashi, Masahiro Suzuki, Yutaka Matsuo

Therefore, we introduce several crucial components that aid inference and training with the proposed model.

Novel View Synthesis

Pixyz: a library for developing deep generative models

no code implementations 28 Jul 2021 Masahiro Suzuki, Takaaki Kaneko, Yutaka Matsuo

With the recent rapid progress in the study of deep generative models (DGMs), there is a need for a framework that can implement them in a simple and generic way.

Probabilistic Programming
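The abstract above motivates a framework where deep generative models can be built "in a simple and generic way". The following is a hypothetical, stdlib-only sketch of that design idea — distributions as first-class objects from which objectives are composed. It deliberately does not reproduce Pixyz's actual API; all class and function names here are illustrative assumptions.

```python
import math
import random

class Gaussian:
    """A toy distribution object: holds parameters, exposes sampling and log-density."""
    def __init__(self, mean, std):
        self.mean, self.std = mean, std

    def sample(self):
        return random.gauss(self.mean, self.std)

    def log_prob(self, x):
        return (-0.5 * math.log(2 * math.pi * self.std ** 2)
                - (x - self.mean) ** 2 / (2 * self.std ** 2))

def kl_gaussian(q, p):
    """Closed-form KL(q || p) between two univariate Gaussians."""
    return (math.log(p.std / q.std)
            + (q.std ** 2 + (q.mean - p.mean) ** 2) / (2 * p.std ** 2)
            - 0.5)

# Composing an objective term from distribution objects, in the spirit of
# "define distributions first, then build losses from them":
prior = Gaussian(0.0, 1.0)
posterior = Gaussian(0.5, 0.8)
print(kl_gaussian(posterior, prior))  # KL term of a VAE-style objective, ~0.168
```

The point of the design is that the objective never touches raw network outputs directly: it is written against the distribution interface, so swapping the model only means swapping the distribution objects.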

A Whole Brain Probabilistic Generative Model: Toward Realizing Cognitive Architectures for Developmental Robots

no code implementations 15 Mar 2021 Tadahiro Taniguchi, Hiroshi Yamakawa, Takayuki Nagai, Kenji Doya, Masamichi Sakagami, Masahiro Suzuki, Tomoaki Nakamura, Akira Taniguchi

This approach is based on two ideas: (1) brain-inspired AI, which learns from human brain architecture to build human-level intelligence, and (2) a probabilistic generative model (PGM)-based cognitive system, which develops a cognitive system for developmental robots by integrating PGMs.

Iterative Image Inpainting with Structural Similarity Mask for Anomaly Detection

no code implementations 1 Jan 2021 Hitoshi Nakanishi, Masahiro Suzuki, Yutaka Matsuo

Moreover, there is an objective mismatch: models are trained to minimize the total reconstruction error, while we expect small deviations on normal pixels and large deviations on anomalous pixels.

Image Inpainting, Unsupervised Anomaly Detection
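The per-pixel deviation idea in the abstract above is easiest to see with a reconstruction-error mask. A minimal stdlib-only illustration — the paper uses a structural-similarity mask, whereas plain absolute error is used here for brevity:

```python
def anomaly_mask(original, reconstruction, threshold):
    """Flag pixels whose absolute reconstruction error exceeds a threshold.

    `original` and `reconstruction` are equal-sized 2D lists of floats.
    Returns a 2D boolean mask: True marks a candidate anomalous pixel.
    """
    return [
        [abs(o - r) > threshold for o, r in zip(orow, rrow)]
        for orow, rrow in zip(original, reconstruction)
    ]

# A normal region reconstructs well; an anomalous pixel deviates strongly.
img   = [[0.1, 0.1], [0.1, 0.9]]
recon = [[0.1, 0.12], [0.09, 0.2]]
mask = anomaly_mask(img, recon, threshold=0.3)
print(mask)  # -> [[False, False], [False, True]]
```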

Out-of-Distribution Detection Using Layerwise Uncertainty in Deep Neural Networks

no code implementations ICLR 2020 Hirono Okamoto, Masahiro Suzuki, Yutaka Matsuo

However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples.

Classification, General Classification +1
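The failure mode described in the abstract above concerns confidence-based detectors. As a reference point, a common baseline of this kind (an assumption — the abstract does not name it explicitly) is the maximum softmax probability, which scores a sample by its top class probability and is therefore low near the decision boundary for in-distribution and OOD samples alike:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def msp_score(logits):
    """Maximum softmax probability: high for confident samples, low near the
    decision boundary -- exactly where this baseline confuses hard
    in-distribution samples with OOD ones."""
    return max(softmax(logits))

print(msp_score([5.0, 0.1, 0.1]))  # confident sample -> score near 1
print(msp_score([1.0, 0.9, 1.1]))  # near the boundary -> score near 1/3
```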

Relation-based Generalized Zero-shot Classification with the Domain Discriminator on the shared representation

no code implementations 25 Sep 2019 Masahiro Suzuki, Yutaka Matsuo

However, this relation-based approach presents a difficulty: many of the test images are predicted as biased toward the seen domain, i.e., the domain bias problem.

Generalized Zero-Shot Learning

Variational Domain Adaptation

no code implementations ICLR 2019 Hirono Okamoto, Shohei Ohsawa, Itto Higuchi, Haruka Murakami, Mizuki Sango, Zhenghang Cui, Masahiro Suzuki, Hiroshi Kajino, Yutaka Matsuo

It reformulates the posterior with a natural pairing $\langle \cdot, \cdot \rangle: \mathcal{Z} \times \mathcal{Z}^* \rightarrow \mathbb{R}$, which can be extended to uncountably infinite domains such as continuous domains, as well as to interpolation.

Bayesian Inference, Domain Adaptation +2

Dual Space Learning with Variational Autoencoders

no code implementations ICLR Workshop DeepGenStruct 2019 Hirono Okamoto, Masahiro Suzuki, Itto Higuchi, Shohei Ohsawa, Yutaka Matsuo

However, when the dimension of the multiclass labels is large, these models cannot change images according to their labels, because transferring an image requires learning a separate distribution for each class.

Improving Bi-directional Generation between Different Modalities with Variational Autoencoders

no code implementations 26 Jan 2018 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

However, we found that when this model attempts to generate a high-dimensional modality that is missing at the input, the joint representation collapses and that modality cannot be generated successfully.

Neural Machine Translation with Latent Semantic of Image and Text

no code implementations 25 Nov 2016 Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

Earlier studies introduced a latent variable to capture the entire meaning of a sentence and achieved improvements in attention-based Neural Machine Translation.

Machine Translation, Translation

Joint Multimodal Learning with Deep Generative Models

1 code implementation 7 Nov 2016 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

As described herein, we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on a joint representation.
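The objective of such a model can be sketched as a standard ELBO over the joint encoder (notation assumed here — modalities $x$ and $w$, joint encoder $q(z \mid x, w)$ — rather than quoted from the paper):

```latex
\mathcal{L}_{\mathrm{JM}}(x, w)
  = \mathbb{E}_{q(z \mid x, w)}\!\left[ \log p(x \mid z) + \log p(w \mid z) \right]
  - D_{\mathrm{KL}}\!\left( q(z \mid x, w) \,\|\, p(z) \right)
```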
