no code implementations • 29 Feb 2024 • Noboru Isobe, Masanori Koyama, Jinzhe Zhang, Kohei Hayashi, Kenji Fukumizu
We show that we can introduce an inductive bias into conditional generation through the matrix field, and we demonstrate this with MMOT-EFM, a version of EFM that aims to minimize the Dirichlet energy, i.e., the sensitivity of the distribution with respect to the conditions.
no code implementations • 29 Jan 2024 • Kei Nakagawa, Kohei Hayashi, Yugo Fujimoto
This approach incorporates fractional Brownian motion (fBm) to effectively identify positive or negative correlations in topic and word distributions over time, revealing long-term dependency or roughness.
no code implementations • 19 Jun 2023 • Kenta Oono, Nontawat Charoenphakdee, Kotatsu Bito, Zhengyan Gao, Yoshiaki Ota, Shoichiro Yamaguchi, Yohei Sugawara, Shin-ichi Maeda, Kunihiko Miyoshi, Yuki Saito, Koki Tsuda, Hiroshi Maruyama, Kohei Hayashi
In this paper, we propose the Virtual Human Generative Model (VHGM), a machine learning model for estimating healthcare, lifestyle, and personality attributes.
no code implementations • 29 May 2023 • Masanori Koyama, Kenji Fukumizu, Kohei Hayashi, Takeru Miyato
Symmetry learning has proven to be an effective approach for extracting the hidden structure of data, with the concept of equivariance playing the central role.
1 code implementation • 28 Mar 2023 • Soma Onishi, Kenta Oono, Kohei Hayashi
We present TabRet, a pre-trainable Transformer-based model for tabular data.
no code implementations • 16 Jan 2022 • Kohei Hayashi, Kei Nakagawa
It generalizes the neural stochastic differential equation model by using fractional Brownian motion with a Hurst index larger than one half, which exhibits the long-range dependence (LRD) property.
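As a rough, self-contained illustration of fBm with long-range dependence (not the paper's neural SDE model), the NumPy sketch below samples an fBm path from its exact covariance via a Cholesky factorization; the step count, Hurst index, and seed are arbitrary.

```python
import numpy as np

def fbm_path(n_steps: int, hurst: float, T: float = 1.0, seed: int = 0) -> np.ndarray:
    """Sample one fractional Brownian motion path on [0, T] via the Cholesky method."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n_steps, T, n_steps)  # time grid, excluding t=0 where B_H(0)=0
    # Exact fBm covariance: Cov(B_H(s), B_H(u)) = 0.5 * (s^2H + u^2H - |s-u|^2H)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst) - np.abs(s - u) ** (2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # small jitter for stability
    path = L @ rng.standard_normal(n_steps)
    return np.concatenate([[0.0], path])

# Hurst > 0.5 gives positively correlated increments (long-range dependence).
x = fbm_path(n_steps=500, hurst=0.7)
increments = np.diff(x)
print("lag-1 autocorrelation:", np.corrcoef(increments[:-1], increments[1:])[0, 1])
```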
1 code implementation • 25 Aug 2021 • Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, Shin-ichi Maeda, Kohei Hayashi
Synthetic-to-real transfer learning is a framework in which a synthetically generated dataset is used to pre-train a model to improve its performance on real vision tasks.
no code implementations • 12 Jun 2020 • Katsuhiko Ishiguro, Kenta Oono, Kohei Hayashi
A graph neural network (GNN) is a good choice for predicting the chemical properties of molecules.
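As a generic illustration of the message-passing idea behind GNNs (not the specific architecture studied in this paper), the sketch below implements a single GCN-style layer in NumPy on a toy 4-atom "molecule"; all shapes and features are made up.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: normalized neighborhood averaging, linear map, ReLU."""
    A_hat = A + np.eye(len(A))                     # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Toy "molecule": 4 atoms in a chain, 5 input features per atom, 8 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.randn(4, 5)
W = np.random.randn(5, 8)
print(gcn_layer(A, H, W).shape)  # (4, 8)
```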
1 code implementation • NeurIPS 2019 • Kohei Hayashi, Taiki Yamaguchi, Yohei Sugawara, Shin-ichi Maeda
Tensor decomposition methods are widely used for model compression and fast inference in convolutional neural networks (CNNs).
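To make the compression idea concrete, here is a minimal NumPy sketch of a truncated Tucker decomposition (HOSVD) of a hypothetical 4-D convolution weight; the layer shape and ranks are illustrative, and this is not the specific decomposition proposed in the paper.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: one factor per mode, then the projected core."""
    factors = []
    for mode, rank in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :rank])
    core = tensor
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Hypothetical conv weight: (out_channels, in_channels, kH, kW).
W = np.random.randn(64, 32, 3, 3)
core, factors = hosvd(W, ranks=(16, 8, 3, 3))
n_params = core.size + sum(U.size for U in factors)
print("compression ratio:", W.size / n_params)
```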
no code implementations • ICLR Workshop LLD 2019 • Takuya Shimada, Shoichiro Yamaguchi, Kohei Hayashi, Sosuke Kobayashi
Data augmentation by mixing samples, such as Mixup, has been widely used, typically for classification tasks.
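For reference, a minimal NumPy sketch of the standard Mixup operation on a toy batch with one-hot labels; the Beta parameter and batch contents are arbitrary.

```python
import numpy as np

def mixup(x, y_onehot, alpha=0.2, rng=None):
    """Mix each sample (and its one-hot label) with a randomly paired sample."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))          # random partner for each sample
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

# Toy batch: 4 samples, 3 features, 2 classes.
x = np.random.randn(4, 3)
y = np.eye(2)[np.array([0, 1, 1, 0])]
x_mix, y_mix = mixup(x, y)
print(x_mix.shape, y_mix.shape)  # (4, 3) (4, 2)
```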
no code implementations • 28 Jan 2019 • Kohei Hayashi, Masaaki Imaizumi, Yuichi Yoshida
In this paper, we study random subsampling of Gaussian process regression, one of the simplest approximation baselines, from a theoretical perspective.
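A minimal NumPy sketch of the subsampling baseline discussed here: fit an exact GP regressor on a random subset of the training points. The RBF kernel, noise level, and subset size are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0):
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def subsampled_gp_predict(X, y, X_test, m=100, noise=0.1, seed=0):
    """Exact GP posterior mean, trained only on m randomly subsampled points."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(m, len(X)), replace=False)
    Xs, ys = X[idx], y[idx]
    K = rbf_kernel(Xs, Xs) + noise**2 * np.eye(len(Xs))
    alpha = np.linalg.solve(K, ys)
    return rbf_kernel(X_test, Xs) @ alpha

# Toy 1-D regression data.
X = np.random.uniform(-3, 3, size=(1000, 1))
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(1000)
X_test = np.linspace(-3, 3, 50)[:, None]
print(subsampled_gp_predict(X, y, X_test, m=100)[:5])
```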
no code implementations • COLING 2018 • Huda Hakami, Kohei Hayashi, Danushka Bollegala
We show that, if the word embeddings are standardised and uncorrelated, such an operator will be independent of bilinear terms and can be simplified to a linear form, where PairDiff is a special case.
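A minimal NumPy sketch of the PairDiff operator, which represents the relation of a word pair by the difference of its embeddings. The random vectors below are stand-ins for trained embeddings, so the printed similarity is not meaningful on its own.

```python
import numpy as np

def pair_diff(a, b):
    """PairDiff: represent the relation between words a and b by b - a."""
    return b - a

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Random stand-ins for word embeddings; real analogies require trained vectors.
dim = 50
emb = {w: np.random.randn(dim) for w in ["man", "woman", "king", "queen"]}
r1 = pair_diff(emb["man"], emb["woman"])
r2 = pair_diff(emb["king"], emb["queen"])
print("relational similarity:", cosine(r1, r2))
```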
1 code implementation • NeurIPS 2017 • Kohei Hayashi, Yuichi Yoshida
Then, we show that the residual error of the Tucker decomposition of $\tilde{X}$ is sufficiently close to that of $X$ with high probability.
no code implementations • NeurIPS 2017 • Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi
Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors.
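A minimal NumPy sketch of the standard TT-SVD procedure, which builds the tensor-train cores by sequential truncated SVDs; the tensor shape and maximum rank are arbitrary, and this is illustrative rather than the algorithm analyzed in the paper.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Tensor-train decomposition via sequential truncated SVDs (TT-SVD)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))   # TT core G_k
        mat = (np.diag(S[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))            # last core
    return cores

X = np.random.randn(6, 7, 8, 9)
cores = tt_svd(X, max_rank=5)
print([c.shape for c in cores])  # [(1, 6, 5), (5, 7, 5), (5, 8, 5), (5, 9, 1)]
```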
1 code implementation • 19 Sep 2017 • Danushka Bollegala, Kohei Hayashi, Ken-ichi Kawarabayashi
Distributed word embeddings have shown superior performance in numerous Natural Language Processing (NLP) tasks.
no code implementations • ICML 2017 • Masaaki Imaizumi, Kohei Hayashi
Real data tensors are usually high dimensional, but their intrinsic information is preserved in a low-dimensional space, which motivates the use of tensor decompositions such as the Tucker decomposition.
1 code implementation • NeurIPS 2016 • Kohei Hayashi, Yuichi Yoshida
A sampling-based optimization method for quadratic functions is proposed.
1 code implementation • 29 Jun 2016 • Satoshi Hara, Kohei Hayashi
In this study, we present a method to make a complex tree ensemble interpretable by simplifying the model.
no code implementations • 17 Jun 2016 • Satoshi Hara, Kohei Hayashi
Tree ensembles, such as random forests and boosted trees, are renowned for their high prediction performance, whereas their interpretability is critically limited.
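The paper's own simplification algorithm is not reproduced here; as a generic illustration of extracting a small interpretable model from an ensemble, the scikit-learn sketch below fits one shallow surrogate tree to a random forest's predictions (a common mimic-model baseline, not this paper's method). The dataset and hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a complex ensemble, then mimic its predictions with one shallow tree.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, forest.predict(X))     # train on the ensemble's outputs, not the labels

agreement = np.mean(surrogate.predict(X) == forest.predict(X))
print(f"fidelity to the forest: {agreement:.2f}")
print(export_text(surrogate))           # small, human-readable rule list
```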
no code implementations • 6 Feb 2016 • Kohei Hayashi, Takuya Konishi, Tatsuro Kawamoto
The stochastic block model (SBM) is a generative model that reveals macroscopic structure in graphs.
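A minimal NumPy sketch of sampling a graph from a two-block SBM, just to make the generative model concrete; the block sizes and edge probabilities are arbitrary.

```python
import numpy as np

def sample_sbm(block_sizes, p_matrix, seed=0):
    """Sample an undirected graph from a stochastic block model."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)  # block of each node
    probs = p_matrix[labels][:, labels]                           # edge prob. per pair
    upper = np.triu(rng.random(probs.shape) < probs, k=1)         # sample upper triangle
    return (upper | upper.T).astype(int), labels

# Two assortative blocks: dense within, sparse between.
P = np.array([[0.30, 0.02],
              [0.02, 0.30]])
A, z = sample_sbm([50, 50], P)
print("within-block density:", A[:50, :50].mean(), "between:", A[:50, 50:].mean())
```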
no code implementations • 3 Sep 2015 • Yohei Kondo, Kohei Hayashi, Shin-ichi Maeda
A common strategy for sparse linear regression is to introduce regularization, which eliminates irrelevant features by letting the corresponding weights be zeros.
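A minimal scikit-learn sketch of this strategy using the L1 (Lasso) penalty on synthetic data with a sparse ground truth; the regularization strength and data sizes are arbitrary.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Sparse ground truth: only 3 of 20 features matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(200)

model = Lasso(alpha=0.1).fit(X, y)   # the L1 penalty drives irrelevant weights to zero
print("nonzero weights:", np.flatnonzero(model.coef_))
```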
no code implementations • 19 Jun 2015 • Masaaki Imaizumi, Kohei Hayashi
A nonparametric extension of tensor regression is proposed.
no code implementations • 22 Apr 2015 • Kohei Hayashi, Shin-ichi Maeda, Ryohei Fujimaki
Our analysis provides a formal justification of the factorized information criterion (FIC) as a model selection criterion for latent variable models (LVMs) and also a systematic procedure for pruning redundant latent variables that have been removed heuristically in previous studies.
no code implementations • NeurIPS 2013 • Kohei Hayashi, Ryohei Fujimaki
This paper extends factorized asymptotic Bayesian (FAB) inference for latent feature models (LFMs).
no code implementations • NeurIPS 2012 • Tsuyoshi Ueno, Kohei Hayashi, Takashi Washio, Yoshinobu Kawahara
Reinforcement learning (RL) methods based on direct policy search (DPS) have been actively studied as an efficient approach to complicated Markov decision processes (MDPs).
no code implementations • NeurIPS 2011 • Ryota Tomioka, Taiji Suzuki, Kohei Hayashi, Hisashi Kashima
We analyze the statistical performance of a recently proposed convex tensor decomposition algorithm.
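For context, a minimal NumPy sketch of the overlapped trace norm (the sum of nuclear norms of all mode-wise unfoldings), the regularizer commonly used in convex tensor decomposition; the tensors below are toy examples.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def overlapped_trace_norm(tensor):
    """Sum of the nuclear norms of all mode-wise unfoldings."""
    return sum(np.linalg.norm(unfold(tensor, m), ord="nuc") for m in range(tensor.ndim))

# A rank-1 tensor has a much smaller overlapped trace norm than noise of the same scale.
a, b, c = np.random.randn(10), np.random.randn(10), np.random.randn(10)
low_rank = np.einsum("i,j,k->ijk", a, b, c)
noise = np.random.randn(10, 10, 10) * np.std(low_rank)
print(overlapped_trace_norm(low_rank), overlapped_trace_norm(noise))
```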