Search Results for author: Masahiro Suzuki

Found 15 papers, 2 papers with code

Learning shared manifold representation of images and attributes for generalized zero-shot learning

no code implementations ICLR 2019 Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo

To solve this, we propose learning a mapping that embeds both images and attributes into a shared representation space, which can generalize even to unseen classes by interpolating from the information of seen classes; we refer to this as shared manifold learning.
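The idea above can be illustrated with a minimal sketch: embed images and class attributes into one shared space, then assign an image to the class whose attribute embedding is nearest. All vectors and class names below are made-up toy values for illustration, not the paper's model.

```python
# Toy sketch of classification in a shared image/attribute embedding space.
# Embeddings here are hypothetical; a real model would learn them.

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

# Shared-space embeddings of class attribute vectors (seen and unseen classes).
class_embeddings = {
    "zebra": [0.9, 0.1, 0.8],   # unseen class
    "horse": [0.8, 0.2, 0.1],   # seen class
    "tiger": [0.1, 0.9, 0.7],   # seen class
}

def predict(image_embedding):
    # Pick the class whose attribute embedding is nearest in the shared space.
    return max(class_embeddings,
               key=lambda c: cosine(image_embedding, class_embeddings[c]))

# An image embedded near the unseen class "zebra" is assigned that class,
# even though no zebra images were seen during training.
print(predict([0.85, 0.15, 0.75]))  # → zebra
```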

Generalized Zero-Shot Learning

Score Transformer: Generating Musical Score from Note-level Representation

1 code implementation 1 Dec 2021 Masahiro Suzuki

In this paper, we explore the tokenized representation of musical scores using the Transformer model to automatically generate musical scores.
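A tokenized score representation of the kind described above can be sketched as follows. The token vocabulary here (bar, position, pitch, and duration markers) is a simplified assumption for illustration, not the paper's actual tokenization scheme.

```python
# Toy note-level-to-token encoding in the spirit of tokenized score
# representations. The vocabulary is hypothetical.

def tokenize(notes):
    """notes: list of (bar, beat, pitch, duration) tuples, sorted by time."""
    tokens = []
    current_bar = None
    for bar, beat, pitch, duration in notes:
        if bar != current_bar:
            # Emit a bar marker only when a new bar starts.
            tokens.append(f"Bar_{bar}")
            current_bar = bar
        tokens.append(f"Pos_{beat}")
        tokens.append(f"Pitch_{pitch}")
        tokens.append(f"Dur_{duration}")
    return tokens

notes = [(1, 0, "C4", "quarter"), (1, 1, "E4", "quarter"), (2, 0, "G4", "half")]
print(tokenize(notes))
```

A sequence model such as a Transformer can then be trained on these token sequences to map note-level input to score-level output.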

Pixyz: a library for developing deep generative models

no code implementations 28 Jul 2021 Masahiro Suzuki, Takaaki Kaneko, Yutaka Matsuo

With the recent rapid progress in the study of deep generative models (DGMs), there is a need for a framework that can implement them in a simple and generic way.

Whole brain Probabilistic Generative Model toward Realizing Cognitive Architecture for Developmental Robots

no code implementations 15 Mar 2021 Tadahiro Taniguchi, Hiroshi Yamakawa, Takayuki Nagai, Kenji Doya, Masamichi Sakagami, Masahiro Suzuki, Tomoaki Nakamura, Akira Taniguchi

Building a humanlike integrative artificial cognitive system, that is, an artificial general intelligence, is one of the goals in artificial intelligence and developmental robotics.

Iterative Image Inpainting with Structural Similarity Mask for Anomaly Detection

no code implementations 1 Jan 2021 Hitoshi Nakanishi, Masahiro Suzuki, Yutaka Matsuo

Moreover, there is an objective mismatch: models are trained to minimize the total reconstruction error, whereas we expect small deviations on normal pixels and large deviations on anomalous pixels.

Image Inpainting Unsupervised Anomaly Detection

Out-of-Distribution Detection Using Layerwise Uncertainty in Deep Neural Networks

no code implementations ICLR 2020 Hirono Okamoto, Masahiro Suzuki, Yutaka Matsuo

However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples.

Classification General Classification +1

Relation-based Generalized Zero-shot Classification with the Domain Discriminator on the shared representation

no code implementations 25 Sep 2019 Masahiro Suzuki, Yutaka Matsuo

However, this relation-based approach presents a difficulty: many of the test images are predicted with a bias toward the seen domain, i.e., the domain bias problem.

Generalized Zero-Shot Learning

Variational Domain Adaptation

no code implementations ICLR 2019 Hirono Okamoto, Shohei Ohsawa, Itto Higuchi, Haruka Murakami, Mizuki Sango, Zhenghang Cui, Masahiro Suzuki, Hiroshi Kajino, Yutaka Matsuo

It reformulates the posterior with a natural pairing $\langle \cdot, \cdot \rangle: \mathcal{Z} \times \mathcal{Z}^* \rightarrow \mathbb{R}$, which can be extended to uncountably infinite domains, such as continuous domains, as well as to interpolation.

Bayesian Inference Domain Adaptation +2

Dual Space Learning with Variational Autoencoders

no code implementations ICLR Workshop DeepGenStruct 2019 Hirono Okamoto, Masahiro Suzuki, Itto Higuchi, Shohei Ohsawa, Yutaka Matsuo

However, when the dimension of the multiclass labels is large, these models cannot change images according to their labels, because transferring an image requires learning a separate distribution for each corresponding class.

Improving Bi-directional Generation between Different Modalities with Variational Autoencoders

no code implementations 26 Jan 2018 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

However, we found that when this model attempts to generate a large-dimensional modality that is missing at the input, the joint representation collapses and that modality cannot be generated successfully.

Neural Machine Translation with Latent Semantic of Image and Text

no code implementations 25 Nov 2016 Joji Toyama, Masanori Misono, Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

Earlier studies introduced a latent variable to capture the entire meaning of a sentence and achieved improvements in attention-based Neural Machine Translation.

Machine Translation Translation

Joint Multimodal Learning with Deep Generative Models

1 code implementation 7 Nov 2016 Masahiro Suzuki, Kotaro Nakayama, Yutaka Matsuo

As described herein, we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on a joint representation.
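The JMVAE objective can be written as a standard evidence lower bound over both modalities; the form below follows the usual formulation, writing $x$ and $w$ for the two modalities and $z$ for the joint latent variable:

```latex
\mathcal{L}_{\mathrm{JMVAE}}(x, w)
  = \mathbb{E}_{q_\phi(z \mid x, w)}
      \left[ \log p_\theta(x \mid z) + \log p_\theta(w \mid z) \right]
  - D_{\mathrm{KL}}\!\left( q_\phi(z \mid x, w) \,\|\, p(z) \right)
```

Because each modality is decoded independently from $z$, either modality can be generated from the joint representation at test time.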
