Search Results for author: Shin-ichi Maeda

Found 29 papers, 11 papers with code

Deep Bayesian Filter for Bayes-faithful Data Assimilation

no code implementations · 29 May 2024 · Yuta Tarumi, Keisuke Fukuda, Shin-ichi Maeda

State estimation for nonlinear state space models is a challenging task.

Virtual Human Generative Model: Masked Modeling Approach for Learning Human Characteristics

no code implementations · 19 Jun 2023 · Kenta Oono, Nontawat Charoenphakdee, Kotatsu Bito, Zhengyan Gao, Yoshiaki Ota, Shoichiro Yamaguchi, Yohei Sugawara, Shin-ichi Maeda, Kunihiko Miyoshi, Yuki Saito, Koki Tsuda, Hiroshi Maruyama, Kohei Hayashi

In this paper, we propose Virtual Human Generative Model (VHGM), a machine learning model for estimating attributes about healthcare, lifestyles, and personalities.

Controlling Posterior Collapse by an Inverse Lipschitz Constraint on the Decoder Network

no code implementations · 25 Apr 2023 · Yuri Kinoshita, Kenta Oono, Kenji Fukumizu, Yuichi Yoshida, Shin-ichi Maeda

Variational autoencoders (VAEs) are among the deep generative models that have seen enormous success over the past decade.


A Scaling Law for Syn-to-Real Transfer: How Much Is Your Pre-training Effective?

no code implementations · 29 Sep 2021 · Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, Shin-ichi Maeda, Kohei Hayashi

Synthetic-to-real transfer learning is a framework in which a synthetically generated dataset is used to pre-train a model to improve its performance on real vision tasks.

Image Generation · Transfer Learning

A Scaling Law for Synthetic-to-Real Transfer: How Much Is Your Pre-training Effective?

1 code implementation · 25 Aug 2021 · Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, Shin-ichi Maeda, Kohei Hayashi

Synthetic-to-real transfer learning is a framework in which a synthetically generated dataset is used to pre-train a model to improve its performance on real vision tasks.

Image Generation · Transfer Learning


no code implementations · 1 Jan 2021 · Shin-ichi Maeda, Hayato Watahiki, Yi Ouyang, Shintarou Okada, Masanori Koyama

In this study, we consider a setting in which the agent has access to a generative model that provides a next-state sample for any given state-action pair, and we propose to solve a CMDP by decomposing it into a pair of MDPs: a reconnaissance MDP (R-MDP) and a planning MDP (P-MDP).

Reinforcement Learning (RL)

Meta Learning as Bayes Risk Minimization

no code implementations · 2 Jun 2020 · Shin-ichi Maeda, Toshiki Nakanishi, Masanori Koyama

However, the posterior distribution in the Neural Process does not change with the context dataset in the way a true posterior should.

Meta-Learning · Philosophy

MANGA: Method Agnostic Neural-policy Generalization and Adaptation

no code implementations · 19 Nov 2019 · Homanga Bharadhwaj, Shoichiro Yamaguchi, Shin-ichi Maeda

Efficiently transferring learned policies to an unknown environment with changed dynamics configurations and motor noise is important for operating robots in the real world; our work is a novel attempt in that direction.

Imitation Learning · Reinforcement Learning (RL)

Reconnaissance and Planning algorithm for constrained MDP

no code implementations · 20 Sep 2019 · Shin-ichi Maeda, Hayato Watahiki, Shintarou Okada, Masanori Koyama

Practical reinforcement learning problems are often formulated as constrained Markov decision process (CMDP) problems, in which the agent has to maximize the expected return while satisfying a set of prescribed safety constraints.
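In a generic formulation (notation assumed here, not quoted from the paper), the agent maximizes the expected discounted return subject to bounds on expected discounted costs:

```latex
\max_{\pi}\ \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c_i(s_t, a_t)\right] \le d_i,
\quad i = 1, \dots, m
```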

Robustness to Adversarial Perturbations in Learning from Incomplete Data

no code implementations · NeurIPS 2019 · Amir Najafi, Shin-ichi Maeda, Masanori Koyama, Takeru Miyato

What is the role of unlabeled data in an inference problem, when the presumed underlying distribution is adversarially perturbed?

Graph Warp Module: an Auxiliary Module for Boosting the Power of Graph Neural Networks in Molecular Graph Analysis

1 code implementation · 4 Feb 2019 · Katsuhiko Ishiguro, Shin-ichi Maeda, Masanori Koyama

Graph Neural Network (GNN) is a popular architecture for the analysis of chemical molecules, and it has numerous applications in material and medicinal science.

Graph Neural Network

DQN-TAMER: Human-in-the-Loop Reinforcement Learning with Intractable Feedback

1 code implementation · 28 Oct 2018 · Riku Arakawa, Sosuke Kobayashi, Yuya Unno, Yuta Tsuboi, Shin-ichi Maeda

A remedy for this is to train an agent with real-time feedback from a human observer who immediately gives rewards for some actions.

Reinforcement Learning (RL)

BayesGrad: Explaining Predictions of Graph Convolutional Networks

1 code implementation · 4 Jul 2018 · Hirotaka Akita, Kosuke Nakago, Tomoki Komatsu, Yohei Sugawara, Shin-ichi Maeda, Yukino Baba, Hisashi Kashima

A possible approach to answer this question is to visualize evidence substructures responsible for the predictions.

Property Prediction

Neural Multi-scale Image Compression

no code implementations · 16 May 2018 · Ken Nakanishi, Shin-ichi Maeda, Takeru Miyato, Daisuke Okanohara

This study presents a new lossy image compression method that utilizes the multi-scale features of natural images.

Image Compression

Clipped Action Policy Gradient

1 code implementation · ICML 2018 · Yasuhiro Fujita, Shin-ichi Maeda

We propose a policy gradient estimator that exploits the knowledge of actions being clipped to reduce the variance in estimation.

Continuous Control · Policy Gradient Methods
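The core observation can be sketched as follows (a hypothetical 1-D illustration assuming a Gaussian policy; names and details are mine, not taken from the paper): when the environment clips a sampled action to [low, high], every sample outside the bounds maps onto the boundary, so the boundary action's log-probability is a log-CDF mass rather than a log-density.

```python
# Hypothetical sketch: log-probability of a *clipped* action under a
# 1-D Gaussian policy N(mu, sigma^2), with env clipping to [low, high].
import math

def normal_log_pdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def clipped_log_prob(action, mu, sigma, low, high):
    """Log-probability of the clipped action under N(mu, sigma^2)."""
    if action <= low:    # all mass below `low` maps to the lower bound
        return math.log(normal_cdf(low, mu, sigma))
    if action >= high:   # all mass above `high` maps to the upper bound
        return math.log(1.0 - normal_cdf(high, mu, sigma))
    return normal_log_pdf(action, mu, sigma)
```

Using boundary masses instead of densities in the score function is what lets an estimator in this spirit reduce variance for out-of-bounds samples.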

Semi-supervised learning of hierarchical representations of molecules using neural message passing

1 code implementation · 28 Nov 2017 · Hai Nguyen, Shin-ichi Maeda, Kenta Oono

With the rapid increase of compound databases available in medicinal and material science, there is a growing need for learning representations of molecules in a semi-supervised manner.

Neural Sequence Model Training via α-divergence Minimization

1 code implementation · 30 Jun 2017 · Sotetsu Koyamada, Yuta Kikuchi, Atsunori Kanemura, Shin-ichi Maeda, Shin Ishii

We propose a new neural sequence model training method in which the objective function is defined by an α-divergence.

Machine Translation · Reinforcement Learning (RL) · +2
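For reference, one common parametrization of the α-divergence (Amari's convention; the paper's exact convention may differ) is:

```latex
D_{\alpha}(p \,\|\, q) = \frac{1}{\alpha(1 - \alpha)} \left( 1 - \int p(x)^{\alpha}\, q(x)^{1 - \alpha}\, dx \right)
```

which recovers KL(q ∥ p) as α → 0 and KL(p ∥ q) as α → 1, so a single parameter interpolates between the two KL directions.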

Bayesian Masking: Sparse Bayesian Estimation with Weaker Shrinkage Bias

no code implementations · 3 Sep 2015 · Yohei Kondo, Kohei Hayashi, Shin-ichi Maeda

A common strategy for sparse linear regression is to introduce regularization, which eliminates irrelevant features by letting the corresponding weights be zeros.

Bayesian Inference · Feature Selection
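The regularization strategy described above can be sketched with an L1 (lasso) penalty, whose soft-thresholding step sets the weights of irrelevant features exactly to zero (an illustrative coordinate-descent sketch, not the paper's Bayesian masking method):

```python
# Coordinate-descent lasso: minimize 0.5*||y - Xw||^2 + lam*||w||_1.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n_features = X.shape[1]
    w = np.zeros(n_features)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(n_features):
            # residual with feature j's current contribution added back
            r_j = y - X @ w + X[:, j] * w[j]
            w[j] = soft_threshold(X[:, j] @ r_j, lam) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 1.0]        # only 3 of 10 features are relevant
y = X @ true_w + 0.1 * rng.normal(size=200)
w = lasso_cd(X, y, lam=20.0)         # irrelevant weights land at exactly 0
```

The exact zeros illustrate the elimination of irrelevant features; the surviving weights are also shrunk toward zero, which is the shrinkage bias the paper aims to weaken.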

Distributional Smoothing with Virtual Adversarial Training

5 code implementations · 2 Jul 2015 · Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii

We propose local distributional smoothness (LDS), a new notion of smoothness for statistical models that can be used as a regularization term to promote the smoothness of the model distribution.
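Based on the abstract (the paper's exact definitions may differ), LDS at an input x can be written as the negative KL divergence from the model distribution to itself under a worst-case local ("virtual adversarial") perturbation:

```latex
r_{\mathrm{vadv}} = \arg\max_{\|r\|_2 \le \epsilon} D_{\mathrm{KL}}\!\left( p(y \mid x, \theta)\, \middle\|\, p(y \mid x + r, \theta) \right),
\qquad
\mathrm{LDS}(x, \theta) = -\,D_{\mathrm{KL}}\!\left( p(y \mid x, \theta)\, \middle\|\, p(y \mid x + r_{\mathrm{vadv}}, \theta) \right)
```

Because the definition involves no labels, the term can be averaged over unlabeled as well as labeled inputs, which is what makes it usable as a semi-supervised regularizer.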

Rebuilding Factorized Information Criterion: Asymptotically Accurate Marginal Likelihood

no code implementations · 22 Apr 2015 · Kohei Hayashi, Shin-ichi Maeda, Ryohei Fujimaki

Our analysis provides a formal justification of FIC as a model selection criterion for LVMs and also a systematic procedure for pruning redundant latent variables that have been removed heuristically in previous studies.

Model Selection

A Bayesian encourages dropout

no code implementations · 22 Dec 2014 · Shin-ichi Maeda

Dropout is one of the key techniques for preventing overfitting.

L2 Regularization
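As a minimal illustration of the technique (my sketch, not taken from the paper), "inverted dropout" zeroes each unit with probability p during training and rescales the survivors by 1/(1-p) so that expected activations match test time:

```python
# Inverted dropout: zero each value with probability p during training,
# rescale survivors so the expectation is unchanged; identity at test time.
import random

def dropout(values, p, training=True):
    if not training or p == 0.0:
        return list(values)
    scale = 1.0 / (1.0 - p)
    return [v * scale if random.random() >= p else 0.0 for v in values]
```

At test time the layer is the identity, which is why the train-time rescaling is needed.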
