Search Results for author: Kohei Hayashi

Found 24 papers, 7 papers with code

Fractional SDE-Net: Generation of Time Series Data with Long-term Memory

no code implementations 16 Jan 2022 Kohei Hayashi, Kei Nakagawa

In this paper, we focus on the generation of time-series data using neural networks.

Time Series
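
The long-term memory in the title is typically modeled with fractional Brownian motion, whose increments are positively correlated when the Hurst parameter H exceeds 1/2. Below is a minimal sketch of sampling such a path via the Cholesky factor of its covariance; this is a generic construction for illustration, not the paper's SDE-Net architecture.

```python
import numpy as np

def sample_fbm(n_steps, hurst, T=1.0, rng=None):
    # Sample fractional Brownian motion on a uniform grid using the
    # Cholesky factor of its covariance; H > 0.5 yields long memory.
    rng = rng or np.random.default_rng()
    t = np.linspace(T / n_steps, T, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # jitter for stability
    return np.concatenate([[0.0], L @ rng.standard_normal(n_steps)])

path = sample_fbm(500, hurst=0.7)  # 501 points with long-range dependence
```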

A Scaling Law for Syn-to-Real Transfer: How Much Is Your Pre-training Effective?

no code implementations 29 Sep 2021 Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, Shin-ichi Maeda, Kohei Hayashi

Synthetic-to-real transfer learning is a framework in which a synthetically generated dataset is used to pre-train a model to improve its performance on real vision tasks.

Image Generation Transfer Learning

A Scaling Law for Synthetic-to-Real Transfer: How Much Is Your Pre-training Effective?

1 code implementation 25 Aug 2021 Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, Shin-ichi Maeda, Kohei Hayashi

Synthetic-to-real transfer learning is a framework in which a synthetically generated dataset is used to pre-train a model to improve its performance on real vision tasks.

Image Generation Transfer Learning
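
The scaling law referred to in the two entries above describes how the loss after fine-tuning decays with the pre-training set size. The sketch below fits a generic power law L(n) = a·n^(-b) + c to hypothetical (size, loss) pairs; the functional form and the numbers are illustrative assumptions, not the paper's exact parameterization or results.

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical (pre-training set size, fine-tuning loss) measurements
sizes = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
losses = np.array([0.52, 0.41, 0.33, 0.28, 0.25])

def power_law(n, a, b, c):
    # loss decays polynomially in n and saturates at an irreducible floor c
    return a * n**(-b) + c

(a, b, c), _ = curve_fit(power_law, sizes, losses, p0=(1.0, 0.3, 0.2))
print(f"decay exponent b = {b:.2f}, irreducible loss c = {c:.2f}")
```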

Weisfeiler-Lehman Embedding for Molecular Graph Neural Networks

no code implementations 12 Jun 2020 Katsuhiko Ishiguro, Kenta Oono, Kohei Hayashi

A graph neural network (GNN) is a good choice for predicting the chemical properties of molecules.

Feature Engineering Link Prediction
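
The Weisfeiler-Lehman (WL) scheme named in the title iteratively relabels each node by hashing its own label together with its sorted neighbor labels. The sketch below shows that generic relabeling step on a toy graph (networkx is assumed for the graph structure); it is not the paper's molecular embedding.

```python
import networkx as nx
from hashlib import blake2b

def wl_labels(G, iters=2):
    # Weisfeiler-Lehman relabeling: each node's new label hashes its
    # current label together with the sorted labels of its neighbors.
    labels = {v: str(d) for v, d in G.degree()}
    for _ in range(iters):
        new = {}
        for v in G:
            neigh = sorted(labels[u] for u in G[v])
            new[v] = blake2b((labels[v] + "|" + "|".join(neigh)).encode(),
                             digest_size=4).hexdigest()
        labels = new
    return labels

G = nx.cycle_graph(6)
print(wl_labels(G))  # all nodes share one label: the graph is vertex-transitive
```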

On Random Subsampling of Gaussian Process Regression: A Graphon-Based Analysis

no code implementations 28 Jan 2019 Kohei Hayashi, Masaaki Imaizumi, Yuichi Yoshida

In this paper, we study random subsampling of Gaussian process regression, one of the simplest approximation baselines, from a theoretical perspective.
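The baseline under analysis is simple: fit the Gaussian process on a uniformly sampled subset of the training data rather than on all of it. The paper's contribution is theoretical; the sketch below merely shows the estimator being analyzed, using scikit-learn and synthetic data as assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(2000, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(2000)

# random subsampling: train on m << n points drawn uniformly without replacement
m = 200
idx = rng.choice(len(X), size=m, replace=False)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gpr.fit(X[idx], y[idx])

X_test = np.linspace(0, 10, 100).reshape(-1, 1)
mean, std = gpr.predict(X_test, return_std=True)  # posterior mean and std
```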

Why does PairDiff work? - A Mathematical Analysis of Bilinear Relational Compositional Operators for Analogy Detection

no code implementations COLING 2018 Huda Hakami, Kohei Hayashi, Danushka Bollegala

We show that, if the word embeddings are standardised and uncorrelated, such an operator will be independent of bilinear terms, and can be simplified to a linear form, where PairDiff is a special case.

Information Retrieval Knowledge Base Completion +1
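
PairDiff represents the relation between words a and b as the difference of their embeddings, b − a; the entry above says that, under standardised and uncorrelated embeddings, more general bilinear operators reduce to linear forms of this kind. A minimal sketch of analogy scoring with PairDiff, on hypothetical embeddings:

```python
import numpy as np

def pairdiff(a, b):
    # PairDiff: the relation between words a and b is the vector difference
    return b - a

def analogy_score(a, b, c, d):
    # cosine similarity between the two relation vectors
    r1, r2 = pairdiff(a, b), pairdiff(c, d)
    return r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))

# hypothetical embeddings: (king, queen) vs. (man, woman)
rng = np.random.default_rng(0)
king, man = rng.standard_normal(50), rng.standard_normal(50)
gender = rng.standard_normal(50)        # shared relation direction
queen, woman = king + gender, man + gender
print(analogy_score(king, queen, man, woman))  # close to 1.0
```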

Fitting Low-Rank Tensors in Constant Time

1 code implementation NeurIPS 2017 Kohei Hayashi, Yuichi Yoshida

Then, we show that the residual error of the Tucker decomposition of $\tilde{X}$ is sufficiently close to that of $X$ with high probability.

Tensor Decomposition
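
The constant-time idea is to subsample a fixed number of indices along each mode, forming a small sub-tensor $\tilde{X}$, and to use its Tucker residual as a proxy for that of the full tensor $X$. A rough sketch with TensorLy, simplifying the paper's sampling scheme to plain uniform index sampling:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
X = tl.tensor(rng.standard_normal((60, 60, 60)))  # stand-in for a huge tensor

def tucker_residual(T, rank=(5, 5, 5)):
    # relative residual of a rank-constrained Tucker decomposition
    core, factors = tucker(T, rank=rank)
    return tl.norm(T - tl.tucker_to_tensor((core, factors))) / tl.norm(T)

# constant-size sketch: s indices sampled uniformly along each mode
s = 20
idx = [rng.choice(dim, size=s, replace=False) for dim in X.shape]
X_sub = X[np.ix_(*idx)]

print(tucker_residual(X))      # residual of the full tensor
print(tucker_residual(X_sub))  # residual of the sampled sub-tensor
```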

On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm

no code implementations 1 Aug 2017 Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi

Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors.
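
The space efficiency comes from storing a d-way tensor as a chain of 3-way cores, so the parameter count grows linearly in d rather than exponentially. A minimal TensorLy sketch of the representation itself; this is not the paper's rank-minimization algorithm.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tensor_train

X = tl.tensor(np.random.default_rng(0).standard_normal((8, 8, 8, 8)))

tt = tensor_train(X, rank=[1, 4, 4, 4, 1])  # boundary TT-ranks must be 1
X_hat = tl.tt_to_tensor(tt)                 # reconstruct from the 3-way cores

# storage: sum of core sizes r_{k-1} * n_k * r_k vs. the full 8^4 entries
ranks = [1, 4, 4, 4, 1]
tt_params = sum(r0 * n * r1 for r0, n, r1 in zip(ranks[:-1], X.shape, ranks[1:]))
print(tt_params, X.size)                    # 320 vs. 4096
print(tl.norm(X - X_hat) / tl.norm(X))      # relative reconstruction error
```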

Tensor Decomposition with Smoothness

no code implementations ICML 2017 Masaaki Imaizumi, Kohei Hayashi

Real data tensors are usually high dimensional, but their intrinsic information is preserved in a low-dimensional space, which motivates the use of tensor decompositions such as the Tucker decomposition.

Tensor Decomposition

Minimizing Quadratic Functions in Constant Time

1 code implementation NeurIPS 2016 Kohei Hayashi, Yuichi Yoshida

A sampling-based optimization method for quadratic functions is proposed.
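The key trick is that the minimum of a suitably normalized quadratic can be estimated from a small random principal submatrix. The sketch below assumes a graphon-style normalization (quadratic term scaled by 1/m², diagonal and linear terms by 1/m, for dimension m), which simplifies the paper's exact setting; inputs are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 200

# graphon-style inputs: dense O(1) entries generated from latent positions
u = rng.uniform(0, 1, n)
A = np.exp(-np.abs(u[:, None] - u[None, :]))   # positive-definite kernel matrix
d = np.ones(n)
b = np.sin(2 * np.pi * u)

def min_value(A, d, b):
    # minimum of f(x) = x^T A x / m^2 + sum(d * x^2) / m + b^T x / m
    m = len(b)
    x = np.linalg.solve(2 * A / m + 2 * np.diag(d), -b)  # stationarity condition
    return x @ A @ x / m**2 + d @ (x * x) / m + b @ x / m

S = rng.choice(n, size=k, replace=False)       # constant-size index sample
print(min_value(A, d, b))                      # full n-dimensional problem
print(min_value(A[np.ix_(S, S)], d[S], b[S]))  # k-dimensional sub-problem
```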

Making Tree Ensembles Interpretable: A Bayesian Model Selection Approach

1 code implementation 29 Jun 2016 Satoshi Hara, Kohei Hayashi

In this study, we present a method to make a complex tree ensemble interpretable by simplifying the model.

Model Selection
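
The paper selects a compact rule set via Bayesian model selection; a much simpler way to convey the same simplification goal is distillation, i.e. fitting a single shallow tree to mimic the ensemble's predictions. The sketch below shows that heuristic (it is not the paper's method) using scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# complex ensemble: accurate but hard to read
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# simplification by distillation: a shallow tree fit to the forest's outputs
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, forest.predict(X))

print(export_text(proxy))  # a handful of human-readable rules
print("fidelity:", (proxy.predict(X) == forest.predict(X)).mean())
```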

Making Tree Ensembles Interpretable

no code implementations 17 Jun 2016 Satoshi Hara, Kohei Hayashi

Tree ensembles, such as random forest and boosted trees, are renowned for their high prediction performance, whereas their interpretability is critically limited.

Bayesian Masking: Sparse Bayesian Estimation with Weaker Shrinkage Bias

no code implementations 3 Sep 2015 Yohei Kondo, Kohei Hayashi, Shin-ichi Maeda

A common strategy for sparse linear regression is to introduce regularization, which eliminates irrelevant features by letting the corresponding weights be zeros.

Bayesian Inference Feature Selection
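
The regularization strategy described in the entry above is exemplified by the Lasso, whose l1 penalty drives irrelevant weights exactly to zero while also shrinking the surviving weights (the bias the paper's Bayesian masking aims to avoid). A minimal sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.5, 1.0]          # only 3 of 20 features are relevant
y = X @ w_true + 0.1 * rng.standard_normal(n)

model = Lasso(alpha=0.1).fit(X, y)
print(np.nonzero(model.coef_)[0])      # irrelevant weights are exactly zero
print(model.coef_[:3])                 # survivors are shrunk toward zero
```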

Rebuilding Factorized Information Criterion: Asymptotically Accurate Marginal Likelihood

no code implementations 22 Apr 2015 Kohei Hayashi, Shin-ichi Maeda, Ryohei Fujimaki

Our analysis provides a formal justification of FIC as a model selection criterion for LVMs and also a systematic procedure for pruning redundant latent variables that have been removed heuristically in previous studies.

Model Selection

Weighted Likelihood Policy Search with Model Selection

no code implementations NeurIPS 2012 Tsuyoshi Ueno, Kohei Hayashi, Takashi Washio, Yoshinobu Kawahara

Reinforcement learning (RL) methods based on direct policy search (DPS) have been actively studied as an efficient approach to complicated Markov decision processes (MDPs).

Model Selection Reinforcement Learning
