Search Results for author: Tomoyuki Obuchi

Found 14 papers, 3 papers with code

On Model Selection Consistency of Lasso for High-Dimensional Ising Models

no code implementations • 16 Oct 2021 Xiangming Meng, Tomoyuki Obuchi, Yoshiyuki Kabashima

Moreover, we provide a rigorous proof of the model selection consistency of Lasso with post-thresholding for general tree-like graphs in the paramagnetic phase without further assumptions on the dependency and incoherence conditions.

Tasks: Model Selection, regression (+1)
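
A minimal sketch of the two-stage procedure described above, assuming $\pm 1$ spin samples; the helper name, regularization strength, and threshold are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch: recover the neighborhood of one spin via Lasso
# (l1-regularized linear regression), then post-threshold the estimates.
import numpy as np
from sklearn.linear_model import Lasso

def lasso_neighborhood(samples, i, lam=0.05, threshold=0.1):
    """samples: (M, N) array of +/-1 spins; returns estimated neighbors of spin i."""
    y = samples[:, i]
    X = np.delete(samples, i, axis=1)
    coef = Lasso(alpha=lam).fit(X, y).coef_
    others = [j for j in range(samples.shape[1]) if j != i]
    # Post-thresholding: keep only couplings whose magnitude survives the cut.
    return {j for j, c in zip(others, coef) if abs(c) > threshold}
```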

Ising Model Selection Using $\ell_{1}$-Regularized Linear Regression: A Statistical Mechanics Analysis

no code implementations • NeurIPS 2021 Xiangming Meng, Tomoyuki Obuchi, Yoshiyuki Kabashima

We theoretically analyze the typical learning performance of $\ell_{1}$-regularized linear regression ($\ell_1$-LinR) for Ising model selection using the replica method from statistical mechanics.

Tasks: Model Selection, regression
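
For reference, the $\ell_1$-LinR estimator under analysis can be written schematically as follows, regressing one spin $s_0$ on the remaining spins with a quadratic loss and an $\ell_1$ penalty (the notation here is assumed for illustration, not copied from the paper):

$$\hat{J} = \arg\min_{J} \; \frac{1}{2M} \sum_{\mu=1}^{M} \Bigl( s_0^{(\mu)} - \sum_{j \neq 0} J_j s_j^{(\mu)} \Bigr)^{2} + \lambda \sum_{j \neq 0} |J_j|$$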

Structure Learning in Inverse Ising Problems Using $\ell_2$-Regularized Linear Estimator

no code implementations • 19 Aug 2020 Xiangming Meng, Tomoyuki Obuchi, Yoshiyuki Kabashima

Further, to access the underdetermined region $M < N$, we examine the effect of $\ell_2$ regularization, and find that biases appear in all the coupling estimates, preventing perfect identification of the network structure.

Tasks: regression
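
A hedged sketch of the $\ell_2$-regularized linear estimator in closed form: it stays well-defined in the underdetermined region $M < N$, but the shrinkage biases every coupling estimate, as the abstract notes. Names and the regularizer value are assumptions:

```python
# Closed-form ridge (l2-regularized) estimate of one spin's couplings.
import numpy as np

def l2_coupling_estimator(X, y, lam=0.1):
    """X: (M, N-1) spins of the other sites; y: (M,) spin of the target site."""
    N = X.shape[1]
    # Well-posed even when M < N thanks to lam * I, but every entry is shrunk.
    return np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
```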

Reconstructing Sparse Signals via Greedy Monte-Carlo Search

no code implementations • 7 Aug 2020 Kao Hayashi, Tomoyuki Obuchi, Yoshiyuki Kabashima

We propose a Monte-Carlo-based method for reconstructing sparse signals in the formulation of sparse linear regression in a high-dimensional setting.
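
As a hedged illustration of the idea (the move set and the purely greedy acceptance rule are assumptions, not the authors' exact algorithm), a greedy Monte-Carlo search over supports of fixed size $K$ might look like:

```python
# Toy greedy Monte-Carlo search: randomly swap one index of the current
# support and keep the swap only if the least-squares residual improves.
import numpy as np

def greedy_mc_support(A, y, K, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    M, N = A.shape
    support = list(rng.choice(N, size=K, replace=False))

    def residual(S):
        x, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        return np.linalg.norm(y - A[:, S] @ x)

    best = residual(support)
    for _ in range(steps):
        swap_pos = rng.integers(K)               # which support slot to replace
        outside = [j for j in range(N) if j not in support]
        trial = support.copy()
        trial[swap_pos] = outside[rng.integers(len(outside))]
        r = residual(trial)
        if r < best:                             # greedy: accept only improvements
            support, best = trial, r
    return sorted(support), best
```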

Learning performance in inverse Ising problems with sparse teacher couplings

no code implementations • 25 Dec 2019 Alia Abbara, Yoshiyuki Kabashima, Tomoyuki Obuchi, Yingying Xu

These results are considered to be exact in the thermodynamic limit on locally tree-like networks, such as regular random or Erdős–Rényi graphs.

Empirical Bayes Method for Boltzmann Machines

no code implementations • 14 Jun 2019 Muneki Yasuda, Tomoyuki Obuchi

In this study, we consider an empirical Bayes method for Boltzmann machines and propose an algorithm for it.

Cross validation in sparse linear regression with piecewise continuous nonconvex penalties and its acceleration

1 code implementation • 27 Feb 2019 Tomoyuki Obuchi, Ayaka Sakata

Second, we develop an approximate formula that efficiently computes the cross-validation error without actually conducting the cross-validation, and that is also applicable to the non-i.i.d. case.

Tasks: regression
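
The paper's formula targets piecewise continuous nonconvex penalties; as a hedged analogue of "CV error without conducting CV", here is the classical closed-form leave-one-out identity for ridge regression (a deliberately simpler, swapped-in setting), which likewise avoids refitting $M$ times:

```python
# Closed-form leave-one-out error for ridge regression: one fit, no refits.
import numpy as np

def ridge_loo_error(X, y, lam=0.1):
    M, N = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(N), X.T)  # hat matrix
    resid = y - H @ y
    loo = resid / (1.0 - np.diag(H))    # exact leave-one-out residuals
    return np.mean(loo ** 2)
```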

Perfect reconstruction of sparse signals with piecewise continuous nonconvex penalties and nonconvexity control

no code implementations • 20 Feb 2019 Ayaka Sakata, Tomoyuki Obuchi

Part of the discrepancy is resolved by introducing control of the nonconvexity parameters to guide the AMP trajectory to the basin of attraction.
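
A rough sketch of the nonconvexity-control idea, with two loudly flagged substitutions: plain iterative thresholding (ISTA) stands in for AMP, and the MCP penalty with parameter $\gamma$ stands in for the paper's penalty family; the annealing schedule is also an assumption:

```python
# Anneal the nonconvexity parameter gamma from a large (near-convex,
# Lasso-like) value down to its target, guiding the iterate to a good basin.
import numpy as np

def mcp_prox(z, lam, gamma, t):
    """Proximal operator of t * MCP(lam, gamma); requires gamma > t."""
    soft = np.sign(z) * np.maximum(np.abs(z) - lam * t, 0.0)
    firm = soft / (1.0 - t / gamma)
    return np.where(np.abs(z) <= gamma * lam, firm, z)

def ista_mcp_annealed(A, y, lam=0.1, gamma_target=3.0, iters=300, anneal=200):
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for it in range(iters):
        gamma = gamma_target + (50.0 - gamma_target) * max(0.0, 1.0 - it / anneal)
        grad = A.T @ (A @ x - y)
        x = mcp_prox(x - grad / L, lam, gamma, 1.0 / L)
    return x
```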

Statistical mechanical analysis of sparse linear regression as a variable selection problem

no code implementations • 29 May 2018 Tomoyuki Obuchi, Yoshinori Nakanishi-Ohno, Masato Okada, Yoshiyuki Kabashima

The analysis is conducted by evaluating the entropy, the exponential growth rate of the number of combinations of variables that achieve a given fit error on the data, which are assumed to be generated from a linear process using the design matrix.

Tasks: regression, Variable Selection
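
A toy brute-force version of the quantity being analyzed, at assumed illustrative sizes: enumerate every support of size $K$, record its best fit error, and histogram the results (the entropy is the exponential growth rate of these counts as the system size grows):

```python
# Enumerate all K-subsets of a small design matrix and histogram the
# minimal fit error of each subset; toy-scale stand-in for the entropy.
import itertools
import numpy as np

M, N, K = 8, 12, 3
rng = np.random.default_rng(1)
A = rng.standard_normal((M, N))
y = A[:, :K] @ np.ones(K) + 0.1 * rng.standard_normal(M)

errors = []
for S in itertools.combinations(range(N), K):
    x, *_ = np.linalg.lstsq(A[:, list(S)], y, rcond=None)
    errors.append(np.linalg.norm(y - A[:, list(S)] @ x) ** 2 / M)

counts, edges = np.histogram(errors, bins=20)  # counts per fit-error level
```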

Semi-Analytic Resampling in Lasso

1 code implementation • 28 Feb 2018 Tomoyuki Obuchi, Yoshiyuki Kabashima

An approximate method is developed for conducting resampling in Lasso, the $\ell_1$-penalized linear regression, in a semi-analytic manner: the average over resampled datasets is computed directly, without repeated numerical sampling. This yields inference free of the statistical fluctuations caused by the finiteness of sampling, as well as a significant reduction in computational time.

Tasks: Variable Selection
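
For contrast, the naive numerical bootstrap that the semi-analytic method replaces might be written as below; the per-variable selection probability is an assumed choice of summary:

```python
# Naive bootstrap resampling of Lasso: refit on each resampled dataset
# and record how often each variable is selected.
import numpy as np
from sklearn.linear_model import Lasso

def bootstrap_lasso_stability(X, y, lam=0.1, B=100, seed=0):
    rng = np.random.default_rng(seed)
    M, N = X.shape
    selected = np.zeros(N)
    for _ in range(B):
        idx = rng.integers(0, M, size=M)   # resample rows with replacement
        coef = Lasso(alpha=lam).fit(X[idx], y[idx]).coef_
        selected += (coef != 0)
    return selected / B                    # per-variable selection probability
```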

Accelerating Cross-Validation in Multinomial Logistic Regression with $\ell_1$-Regularization

2 code implementations • 15 Nov 2017 Tomoyuki Obuchi, Yoshiyuki Kabashima

We develop an approximate formula for evaluating a cross-validation estimator of predictive likelihood for multinomial logistic regression regularized by an $\ell_1$-norm.

Tasks: BIG-bench Machine Learning, regression
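
The literal K-fold cross-validation that the approximate formula accelerates, sketched with scikit-learn; the solver, `C`, and fold count are assumptions:

```python
# Brute-force K-fold CV of the predictive log-likelihood for l1-regularized
# multinomial logistic regression: the costly baseline being approximated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import KFold

def cv_predictive_loglik(X, y, C=1.0, n_splits=10):
    total = 0.0
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        clf = LogisticRegression(penalty="l1", solver="saga", C=C, max_iter=5000)
        clf.fit(X[train], y[train])
        proba = clf.predict_proba(X[test])
        # log_loss returns the mean negative log-likelihood on the fold.
        total -= log_loss(y[test], proba, labels=clf.classes_) * len(test)
    return total / len(y)
```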

Approximate cross-validation formula for Bayesian linear regression

no code implementations • 25 Oct 2016 Yoshiyuki Kabashima, Tomoyuki Obuchi, Makoto Uemura

Cross-validation (CV) is a technique for evaluating the predictive ability of statistical models/learning systems on the basis of a given data set.

Tasks: regression
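
The exact but expensive leave-one-out computation that the approximate formula targets, under an assumed Gaussian prior (precision `alpha`) and Gaussian noise (precision `beta`):

```python
# Literal leave-one-out evaluation of the Bayesian predictive density:
# refit the Gaussian posterior M times, once per held-out point.
import numpy as np

def bayes_linreg_posterior(X, y, alpha=1.0, beta=25.0):
    """Posterior N(mean, cov) for weights under prior N(0, I/alpha)."""
    N = X.shape[1]
    cov = np.linalg.inv(alpha * np.eye(N) + beta * X.T @ X)
    mean = beta * cov @ X.T @ y
    return mean, cov

def loo_log_predictive(X, y, alpha=1.0, beta=25.0):
    M = X.shape[0]
    total = 0.0
    for i in range(M):
        mask = np.arange(M) != i
        mean, cov = bayes_linreg_posterior(X[mask], y[mask], alpha, beta)
        mu = X[i] @ mean                          # predictive mean at x_i
        var = 1.0 / beta + X[i] @ cov @ X[i]      # predictive variance at x_i
        total += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return total / M
```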
