Search Results for author: Masanori Koyama

Found 22 papers, 10 papers with code

Contrastive Representation Learning with Trainable Augmentation Channel

no code implementations15 Nov 2021 Masanori Koyama, Kentaro Minami, Takeru Miyato, Yarin Gal

In contrastive representation learning, a data representation is trained so that it can classify image instances even when the images are altered by augmentations.

Representation Learning

Reconnaissance for Reinforcement Learning with Safety Constraints

no code implementations1 Jan 2021 Shin-ichi Maeda, Hayato Watahiki, Yi Ouyang, Shintarou Okada, Masanori Koyama

In this study, we consider a situation in which the agent has access to a generative model that provides a next-state sample for any given state-action pair, and propose a method that solves a CMDP problem by decomposing the CMDP into a pair of MDPs: a reconnaissance MDP (R-MDP) and a planning MDP (P-MDP).

When is invariance useful in an Out-of-Distribution Generalization problem?

1 code implementation4 Aug 2020 Masanori Koyama, Shoichiro Yamaguchi

Popular approaches in this field use the hypothesis that such a predictor shall be an invariant predictor that captures the mechanism that remains constant across environments.

Learning Structured Latent Factors from Dependent Data: A Generative Model Framework from Information-Theoretic Perspective

no code implementations ICML 2020 Ruixiang Zhang, Masanori Koyama, Katsuhiko Ishiguro

Learning controllable and generalizable representation of multivariate data with desired structural properties remains a fundamental problem in machine learning.

Fairness

Meta Learning as Bayes Risk Minimization

no code implementations2 Jun 2020 Shin-ichi Maeda, Toshiki Nakanishi, Masanori Koyama

However, the posterior distribution in the Neural Process violates the way the posterior distribution should change with the contextual dataset.

Meta-Learning

Reconnaissance and Planning algorithm for constrained MDP

no code implementations20 Sep 2019 Shin-ichi Maeda, Hayato Watahiki, Shintarou Okada, Masanori Koyama

Practical reinforcement learning problems are often formulated as constrained Markov decision process (CMDP) problems, in which the agent has to maximize the expected return while satisfying a set of prescribed safety constraints.

Optuna: A Next-generation Hyperparameter Optimization Framework

9 code implementations25 Jul 2019 Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, Masanori Koyama

We present the design techniques that became necessary in the development of software that meets the above criteria, and demonstrate the power of our new design through experimental results and real-world applications.

Distributed Computing, Hyperparameter Optimization
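Optuna's central idea is a define-by-run API: the search space is declared dynamically, while the objective executes, rather than up front. A stdlib-only sketch of that idea follows — the `Trial` class and `optimize` loop here are hypothetical stand-ins using naive random search, not Optuna's actual implementation (which offers samplers such as TPE and pruning on top of this interface):

```python
import random

class Trial:
    """Hypothetical stand-in for an Optuna-style trial object:
    parameters are declared inside the objective as it runs."""
    def __init__(self):
        self.params = {}

    def suggest_float(self, name, low, high):
        # Define-by-run: the search space is built during execution.
        value = random.uniform(low, high)
        self.params[name] = value
        return value

def optimize(objective, n_trials):
    """Naive random-search optimization loop over trial objects."""
    best_trial, best_value = None, float("inf")
    for _ in range(n_trials):
        trial = Trial()
        value = objective(trial)
        if value < best_value:
            best_trial, best_value = trial, value
    return best_trial, best_value

# Toy objective: minimize (x - 2)^2 over [-10, 10].
def objective(trial):
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2

best_trial, best_value = optimize(objective, n_trials=200)
```

Because parameters are sampled inside the objective, conditional and loop-dependent search spaces (e.g. a per-layer hyperparameter for a variable number of layers) come for free — the property the paper highlights over define-and-run frameworks.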

A Graph Theoretic Framework of Recomputation Algorithms for Memory-Efficient Backpropagation

1 code implementation NeurIPS 2019 Mitsuru Kusumoto, Takuya Inoue, Gentaro Watanabe, Takuya Akiba, Masanori Koyama

Recomputation algorithms collectively refer to a family of methods that reduce the memory consumption of backpropagation by selectively discarding intermediate results of the forward propagation and recomputing the discarded results as needed.
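The basic trade-off can be sketched on a toy chain of scalar functions: keep only every k-th activation during the forward pass, then recompute the discarded ones segment by segment during the backward pass. This uniform-segment schedule is one simple special case of the recomputation schedules the paper studies; the function names here are illustrative, not the paper's API:

```python
import math

def forward_with_checkpoints(funcs, x, every):
    """Forward pass over a chain of (f, df) pairs, storing only the input
    and every `every`-th activation instead of all of them."""
    saved = {0: x}
    h = x
    for i, (f, _df) in enumerate(funcs):
        h = f(h)
        if (i + 1) % every == 0:
            saved[i + 1] = h
    return h, saved

def backward_with_recomputation(funcs, saved, every):
    """Backward pass: recompute the discarded activations one segment at a
    time from the nearest checkpoint, then apply the chain rule."""
    n = len(funcs)
    grad = 1.0
    for start in range(((n - 1) // every) * every, -1, -every):
        # Recompute the activations inside this segment from its checkpoint.
        h = saved[start]
        acts = [h]
        for f, _df in funcs[start:start + every]:
            h = f(h)
            acts.append(h)
        # Accumulate local derivatives within the segment (chain rule).
        segment = funcs[start:start + every]
        for i in range(len(segment) - 1, -1, -1):
            _f, df = segment[i]
            grad *= df(acts[i])
    return grad

# Chain y = 3 * sin(x^2): plain backprop would store every activation;
# here only the checkpoints {0, 2} are kept and the rest are recomputed.
funcs = [
    (lambda h: h * h, lambda h: 2 * h),
    (math.sin, math.cos),
    (lambda h: 3 * h, lambda h: 3.0),
]
y, saved = forward_with_checkpoints(funcs, 1.5, every=2)
grad = backward_with_recomputation(funcs, saved, every=2)
```

For a chain of length n this stores O(n / k) activations at the cost of one extra forward pass per segment; the paper's contribution is choosing *which* results to discard optimally for general computation graphs, not just chains.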

Robustness to Adversarial Perturbations in Learning from Incomplete Data

no code implementations NeurIPS 2019 Amir Najafi, Shin-ichi Maeda, Masanori Koyama, Takeru Miyato

What is the role of unlabeled data in an inference problem, when the presumed underlying distribution is adversarially perturbed?

Distributional Concavity Regularization for GANs

no code implementations ICLR 2019 Shoichiro Yamaguchi, Masanori Koyama

We propose Distributional Concavity (DC) regularization for Generative Adversarial Networks (GANs), a functional gradient-based method that promotes the entropy of the generator distribution and works against mode collapse.

A Wrapped Normal Distribution on Hyperbolic Space for Gradient-Based Learning

no code implementations8 Feb 2019 Yoshihiro Nagano, Shoichiro Yamaguchi, Yasuhiro Fujita, Masanori Koyama

Hyperbolic space is a geometry that is known to be well-suited for representation learning of data with an underlying hierarchical structure.

Hierarchical structure, Representation Learning

Graph Warp Module: an Auxiliary Module for Boosting the Power of Graph Neural Networks in Molecular Graph Analysis

1 code implementation4 Feb 2019 Katsuhiko Ishiguro, Shin-ichi Maeda, Masanori Koyama

The Graph Neural Network (GNN) is a popular architecture for the analysis of chemical molecules, and it has numerous applications in materials and medicinal science.

Spatially Controllable Image Synthesis with Internal Representation Collaging

1 code implementation26 Nov 2018 Ryohei Suzuki, Masanori Koyama, Takeru Miyato, Taizan Yonetsuji, Huachun Zhu

We present a novel CNN-based image editing strategy that allows the user to change the semantic information of an image over an arbitrary region by manipulating the feature-space representation of the image in a trained GAN model.

Image Generation

Train Sparsely, Generate Densely: Memory-efficient Unsupervised Training of High-resolution Temporal GAN

2 code implementations22 Nov 2018 Masaki Saito, Shunta Saito, Masanori Koyama, Sosuke Kobayashi

Training a Generative Adversarial Network (GAN) on a video dataset is challenging because of the sheer size of the dataset and the complexity of each observation.

Video Generation

cGANs with Projection Discriminator

9 code implementations ICLR 2018 Takeru Miyato, Masanori Koyama

We propose a novel, projection-based way to incorporate conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model.

Conditional Image Generation, Super-Resolution
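The projection discriminator scores a pair (x, y) as f(x, y) = ⟨v_y, φ(x)⟩ + ψ(φ(x)): the class label enters only through an inner product between a learned class embedding v_y and the image features φ(x), rather than by concatenation. A minimal dependency-free sketch — the toy φ, ψ, and embedding table below are placeholders for learned networks:

```python
def projection_discriminator(phi, psi, class_embeddings, x, y):
    """f(x, y) = <v_y, phi(x)> + psi(phi(x)): the conditional part is a
    projection of the features onto the embedding of class y, added to an
    unconditional score of the features."""
    features = phi(x)
    v_y = class_embeddings[y]
    projection = sum(a * b for a, b in zip(v_y, features))
    return projection + psi(features)

# Toy stand-ins for the learned components.
phi = lambda x: [x[0] + x[1], x[0] - x[1]]   # feature extractor
psi = lambda f: 0.5 * f[0]                   # unconditional head
class_embeddings = {0: [1.0, 0.0], 1: [0.0, 1.0]}

score = projection_discriminator(phi, psi, class_embeddings, [2.0, 1.0], y=0)
```

The appeal of this form is that it falls out of the log-likelihood ratio of the conditional model when p(y|x) is log-linear in the features, which is the "respects the underlying probabilistic model" claim of the abstract.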

Distributional Smoothing with Virtual Adversarial Training

5 code implementations2 Jul 2015 Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii

We propose local distributional smoothness (LDS), a new notion of smoothness for statistical models that can be used as a regularization term to promote the smoothness of the model distribution.
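Concretely, LDS at a point x is the negative KL divergence between the model's output distribution at x and at x + r*, where r* is the perturbation of a given norm that changes the output the most. The sketch below finds r* by naive random search purely for illustration — the paper instead uses an efficient power-iteration approximation — and the toy softmax model is a placeholder:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def local_distributional_smoothness(model, x, eps, n_candidates=100):
    """LDS(x) = -KL(p(y|x) || p(y|x + r*)), with r* the perturbation of
    norm eps that most changes the output distribution.  r* is found here
    by random search; the paper uses a power-iteration approximation."""
    p = model(x)
    worst = 0.0
    for _ in range(n_candidates):
        r = [random.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(v * v for v in r)) or 1.0
        x_adv = [xi + eps * ri / norm for xi, ri in zip(x, r)]
        worst = max(worst, kl(p, model(x_adv)))
    return -worst

# Toy model: softmax over the two input coordinates.
model = lambda x: softmax([x[0], x[1]])
random.seed(0)
lds = local_distributional_smoothness(model, [1.0, 0.0], eps=0.5)
```

LDS is always non-positive and equals zero exactly when no small perturbation can move the output distribution, so adding it to the training objective pushes the model toward local smoothness — without using any labels, which is why the method extends naturally to semi-supervised learning.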

Deep learning of fMRI big data: a novel approach to subject-transfer decoding

no code implementations31 Jan 2015 Sotetsu Koyamada, Yumi Shikauchi, Ken Nakae, Masanori Koyama, Shin Ishii

Our PSA successfully visualized the subject-independent features contributing to the subject-transferability of the trained decoder.

Brain Decoding

Principal Sensitivity Analysis

no code implementations21 Dec 2014 Sotetsu Koyamada, Masanori Koyama, Ken Nakae, Shin Ishii

We then visualize the PSMs to demonstrate the PSA's ability to decompose the knowledge acquired by the trained classifiers.
