no code implementations • 31 May 2024 • Kenji Fukumizu, Taiji Suzuki, Noboru Isobe, Kazusato Oko, Masanori Koyama
Flow matching (FM) has gained significant attention as a simulation-free generative model.
no code implementations • 29 Feb 2024 • Noboru Isobe, Masanori Koyama, Jinzhe Zhang, Kohei Hayashi, Kenji Fukumizu
We show that an inductive bias can be introduced into conditional generation through the matrix field, and demonstrate this with MMOT-EFM, a version of EFM that aims to minimize the Dirichlet energy, i.e., the sensitivity of the generated distribution with respect to the conditions.
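For reference, the Dirichlet energy of a condition-to-distribution map c ↦ p_c can be written schematically as below. This is the generic definition only; the specific geometry used by MMOT-EFM (e.g., a Wasserstein-type metric over distributions) may differ.

```latex
% Schematic Dirichlet energy of the map from conditions c to distributions p_c;
% the norm is taken in whatever geometry over distributions is assumed
% (an illustrative choice, not the paper's exact construction).
\mathcal{E}(p) \;=\; \frac{1}{2} \int \bigl\lVert \nabla_c \, p_c \bigr\rVert^{2} \, \mathrm{d}c
```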
no code implementations • 29 May 2023 • Masanori Koyama, Kenji Fukumizu, Kohei Hayashi, Takeru Miyato
Symmetry learning has proven to be an effective approach for extracting the hidden structure of data, with the concept of equivariance playing a central role.
no code implementations • 13 Oct 2022 • Masanori Koyama, Takeru Miyato, Kenji Fukumizu
Recent years have witnessed the effectiveness of contrastive learning in obtaining representations of data that are useful for interpretation and downstream tasks.
1 code implementation • 12 Oct 2022 • Takeru Miyato, Masanori Koyama, Kenji Fukumizu
In this study, we present meta-sequential prediction (MSP), an unsupervised framework for learning symmetry from time sequences of length at least three.
no code implementations • 15 Nov 2021 • Masanori Koyama, Kentaro Minami, Takeru Miyato, Yarin Gal
In contrastive representation learning, a data representation is trained so that it can classify image instances even when the images are altered by augmentations.
no code implementations • 1 Jan 2021 • Shin-ichi Maeda, Hayato Watahiki, Yi Ouyang, Shintarou Okada, Masanori Koyama
In this study, we consider a situation in which the agent has access to a generative model that provides a next-state sample for any given state-action pair, and propose a method that solves a CMDP problem by decomposing the CMDP into a pair of MDPs: a reconnaissance MDP (R-MDP) and a planning MDP (P-MDP).
no code implementations • 1 Jan 2021 • Masanori Koyama, Toshiki Nakanishi, Shin-ichi Maeda, Vitor Campagnolo Guizilini, Adrien Gaidon
Estimating the 3D shape of real-world objects is a key perceptual challenge.
1 code implementation • 4 Aug 2020 • Masanori Koyama, Shoichiro Yamaguchi
Popular approaches in this field hypothesize that such a predictor should be an invariant predictor that captures the mechanism remaining constant across environments.
no code implementations • ICML 2020 • Ruixiang Zhang, Masanori Koyama, Katsuhiko Ishiguro
Learning controllable and generalizable representation of multivariate data with desired structural properties remains a fundamental problem in machine learning.
no code implementations • 2 Jun 2020 • Shin-ichi Maeda, Toshiki Nakanishi, Masanori Koyama
However, the posterior distribution in the Neural Process does not change with the contextual dataset in the way a true posterior distribution should.
no code implementations • 20 Sep 2019 • Shin-ichi Maeda, Hayato Watahiki, Shintarou Okada, Masanori Koyama
Practical reinforcement learning problems are often formulated as constrained Markov decision process (CMDP) problems, in which the agent has to maximize the expected return while satisfying a set of prescribed safety constraints.
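For reference, a CMDP can be written in the following standard form (generic notation, not taken from the paper): the agent maximizes the expected discounted return subject to bounds on expected discounted costs.

```latex
% Standard CMDP formulation with reward r, cost functions c_i, thresholds d_i,
% and discount factor gamma (generic notation).
\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r(s_t, a_t)\right]
\quad \text{subject to} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} c_i(s_t, a_t)\right] \le d_i,
\qquad i = 1, \dots, k.
```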
11 code implementations • 25 Jul 2019 • Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, Masanori Koyama
We present the design techniques that were necessary to develop software meeting the above criteria, and demonstrate the power of our new design through experimental results and real-world applications.
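As a usage illustration (this is the standard Optuna API rather than an example taken from the paper), a define-by-run objective is written as an ordinary Python function and optimized as follows:

```python
# Minimal Optuna usage sketch: the search space is defined on the fly
# ("define-by-run") inside the objective function itself.
import optuna

def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10.0, 10.0)    # continuous parameter
    layers = trial.suggest_int("layers", 1, 4)   # integer parameter
    return (x - 2.0) ** 2 + 0.1 * layers         # value to minimize

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params)
```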
3 code implementations • NeurIPS 2019 • Mitsuru Kusumoto, Takuya Inoue, Gentaro Watanabe, Takuya Akiba, Masanori Koyama
Recomputation algorithms collectively refer to a family of methods that aim to reduce the memory consumption of backpropagation by selectively discarding intermediate results of the forward propagation and recomputing the discarded results as needed.
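The sketch below illustrates the basic recomputation trade-off with PyTorch's built-in checkpointing utility. The paper's contribution is a more general algorithm for deciding which intermediate results to discard; this simple per-block scheme does not capture that and is only meant to show the underlying idea.

```python
# Minimal recomputation sketch: activations inside each checkpointed block are
# not stored during the forward pass; they are recomputed during backward,
# trading extra compute for lower memory.
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class CheckpointedMLP(nn.Module):
    def __init__(self, dim: int = 256, depth: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = checkpoint(block, x, use_reentrant=False)
        return x

model = CheckpointedMLP()
x = torch.randn(32, 256, requires_grad=True)
model(x).sum().backward()
```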
no code implementations • NeurIPS 2019 • Amir Najafi, Shin-ichi Maeda, Masanori Koyama, Takeru Miyato
What is the role of unlabeled data in an inference problem, when the presumed underlying distribution is adversarially perturbed?
no code implementations • ICLR 2019 • Ken Nakanishi, Shin-ichi Maeda, Takeru Miyato, Masanori Koyama
We propose Adaptive Sample-space & Adaptive Probability (ASAP) coding, an efficient neural-network based method for lossy data compression.
no code implementations • ICLR 2019 • Shoichiro Yamaguchi, Masanori Koyama
We propose Distributional Concavity (DC) regularization for Generative Adversarial Networks (GANs), a functional gradient-based method that promotes the entropy of the generator distribution and works against mode collapse.
1 code implementation • 8 Feb 2019 • Yoshihiro Nagano, Shoichiro Yamaguchi, Yasuhiro Fujita, Masanori Koyama
Hyperbolic space is a geometry that is known to be well-suited for representation learning of data with an underlying hierarchical structure.
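For background (this is standard hyperbolic geometry, not the paper's specific construction), points are mapped from a tangent space onto the Lorentz-model hyperboloid via the exponential map:

```latex
% Exponential map on the Lorentz (hyperboloid) model: mu is a point on the
% hyperboloid, u a tangent vector at mu, and <.,.>_L the Lorentzian inner product.
\exp_{\mu}(u) \;=\; \cosh\!\bigl(\lVert u \rVert_{\mathcal{L}}\bigr)\,\mu
\;+\; \sinh\!\bigl(\lVert u \rVert_{\mathcal{L}}\bigr)\,\frac{u}{\lVert u \rVert_{\mathcal{L}}},
\qquad \lVert u \rVert_{\mathcal{L}} = \sqrt{\langle u, u \rangle_{\mathcal{L}}}.
```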
1 code implementation • 4 Feb 2019 • Katsuhiko Ishiguro, Shin-ichi Maeda, Masanori Koyama
The Graph Neural Network (GNN) is a popular architecture for the analysis of chemical molecules, with numerous applications in materials and medicinal science.
1 code implementation • 26 Nov 2018 • Ryohei Suzuki, Masanori Koyama, Takeru Miyato, Taizan Yonetsuji, Huachun Zhu
We present a novel CNN-based image editing strategy that allows the user to change the semantic information of an image over an arbitrary region by manipulating the feature-space representation of the image in a trained GAN model.
2 code implementations • 22 Nov 2018 • Masaki Saito, Shunta Saito, Masanori Koyama, Sosuke Kobayashi
Training a Generative Adversarial Network (GAN) on a video dataset is challenging because of the sheer size of the dataset and the complexity of each observation.
38 code implementations • ICLR 2018 • Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida
One of the challenges in the study of generative adversarial networks is the instability of their training.
Ranked #26 on Image Generation on STL-10
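This entry corresponds to work on stabilizing GAN training; one widely used technique for this is spectral normalization, which divides each weight matrix by an estimate of its largest singular value obtained via power iteration. The sketch below is a generic illustration of that estimate (a single matrix, one iteration), not necessarily this paper's exact procedure; in PyTorch, torch.nn.utils.spectral_norm provides this as a ready-made layer wrapper.

```python
# Power-iteration estimate of the largest singular value, used to rescale a
# weight tensor so that its spectral norm is (approximately) one.
import torch

def spectral_normalize(W: torch.Tensor, u: torch.Tensor, n_iters: int = 1):
    """Return W / sigma(W) and the updated left singular vector estimate."""
    W_mat = W.reshape(W.shape[0], -1)        # flatten conv kernels to 2-D
    for _ in range(n_iters):
        v = torch.nn.functional.normalize(W_mat.t() @ u, dim=0)
        u = torch.nn.functional.normalize(W_mat @ v, dim=0)
    sigma = u @ W_mat @ v                    # largest singular value estimate
    return W / sigma, u

W = torch.randn(64, 128)
u = torch.randn(64)
W_sn, u = spectral_normalize(W, u)
```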
12 code implementations • ICLR 2018 • Takeru Miyato, Masanori Koyama
We propose a novel, projection-based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model.
Ranked #16 on Conditional Image Generation on CIFAR-10
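In this projection-based formulation, the conditional information enters the discriminator through an inner product between a class embedding and the image features; schematically (notation assumed rather than copied from the paper):

```latex
% Projection discriminator: phi is the image feature extractor, V an embedding
% matrix applied to the (one-hot) class label y, and psi an unconditional
% scalar head on the features.
f(x, y) \;=\; y^{\top} V \,\phi(x) \;+\; \psi\bigl(\phi(x)\bigr)
```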
14 code implementations • 13 Apr 2017 • Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Shin Ishii
In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets.
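Below is a minimal sketch of a VAT-style regularizer, assuming a single power-iteration step to estimate the virtual adversarial direction; the hyperparameters (xi, eps) are illustrative defaults rather than the paper's settings.

```python
# VAT-style regularizer sketch: find the local perturbation that most changes
# the model's predictive distribution, then penalize that change.
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi: float = 1e-6, eps: float = 8.0):
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)                 # current predictions
    # Random unit direction, refined by one power-iteration step.
    d = torch.randn_like(x)
    d = d / (d.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-12)
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p, reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]
    r_adv = eps * grad / (grad.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-12)
    # Penalize divergence between predictions at x and at the perturbed input.
    return F.kl_div(F.log_softmax(model(x + r_adv.detach()), dim=1), p, reduction="batchmean")
```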
6 code implementations • 2 Jul 2015 • Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii
We propose local distributional smoothness (LDS), a new notion of smoothness for statistical models that can be used as a regularization term to promote the smoothness of the model distribution.
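Schematically, LDS at an input x is the negated divergence between the model distribution at x and at the most sensitive nearby point (notation assumed):

```latex
% Local distributional smoothness: r_vadv is the "virtual adversarial"
% perturbation within an epsilon-ball that maximally changes the model output.
\mathrm{LDS}(x, \theta) \;=\; -\,D_{\mathrm{KL}}\!\bigl[\, p(y \mid x, \theta) \,\big\|\, p(y \mid x + r_{\mathrm{vadv}}, \theta) \,\bigr],
\qquad
r_{\mathrm{vadv}} \;=\; \arg\max_{\lVert r \rVert_2 \le \epsilon} D_{\mathrm{KL}}\!\bigl[\, p(y \mid x, \theta) \,\big\|\, p(y \mid x + r, \theta) \,\bigr].
```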
no code implementations • 31 Jan 2015 • Sotetsu Koyamada, Yumi Shikauchi, Ken Nakae, Masanori Koyama, Shin Ishii
Our PSA successfully visualized the subject-independent features contributing to the subject-transferability of the trained decoder.
no code implementations • 21 Dec 2014 • Sotetsu Koyamada, Masanori Koyama, Ken Nakae, Shin Ishii
We then visualize the PSMs to demonstrate the PSA's ability to decompose the knowledge acquired by the trained classifiers.