Search Results for author: Hironori Fujisawa

Found 15 papers, 1 paper with code

Adaptive Lasso, Transfer Lasso, and Beyond: An Asymptotic Perspective

1 code implementation • 30 Aug 2023 • Masaaki Takada, Hironori Fujisawa

This paper presents a comprehensive exploration of the theoretical properties inherent in the Adaptive Lasso and the Transfer Lasso.

Variable Selection

Outlier Robust and Sparse Estimation of Linear Regression Coefficients

no code implementations • 24 Aug 2022 • Takeyuki Sasai, Hironori Fujisawa

We consider outlier-robust and sparse estimation of linear regression coefficients when the covariates and the noise are contaminated by adversarial outliers and the noise is sampled from a heavy-tailed distribution.

regression

Adversarial robust weighted Huber regression

no code implementations • 22 Feb 2021 • Takeyuki Sasai, Hironori Fujisawa

We consider a robust estimation of linear regression coefficients.

regression
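
As a toy illustration of the Huber-loss idea behind this line of work (not the paper's adversarially robust weighted estimator, which also handles contaminated covariates), a minimal unweighted Huber regression fit by gradient descent might look like:

```python
import numpy as np

# Illustrative sketch only: plain Huber-loss linear regression.
# The Huber loss is quadratic for small residuals and linear for
# large ones, which caps the influence of gross output outliers.
def huber_grad(r, delta=1.345):
    # Gradient of the Huber loss with respect to the residual r.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def huber_regression(X, y, delta=1.345, lr=0.1, n_iter=500):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = X @ beta - y
        beta -= lr * X.T @ huber_grad(r, delta) / len(y)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=200)
y[:10] += 20.0  # a few adversarially large output outliers
beta_hat = huber_regression(X, y)
```

The capped gradient keeps the ten corrupted responses from dragging the fit away from the clean data, which ordinary least squares would not survive.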

Adversarial Robust Low Rank Matrix Estimation: Compressed Sensing and Matrix Completion

no code implementations • 25 Oct 2020 • Takeyuki Sasai, Hironori Fujisawa

We deal with matrix compressed sensing, which includes the lasso as a special case, and with matrix completion, and we obtain sharp estimation error bounds.

Matrix Completion, regression
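
For intuition, a standard nuclear-norm heuristic for matrix completion is iterative SVD soft-thresholding ("soft-impute"); the sketch below is this textbook method, not the paper's adversarially robust estimator:

```python
import numpy as np

# Soft-impute sketch: repeatedly fill missing entries with the current
# low-rank estimate, then soft-threshold the singular values. This is
# a generic nuclear-norm heuristic, shown only for intuition.
def soft_impute(M, mask, lam=0.5, n_iter=200):
    Z = np.zeros_like(M)
    for _ in range(n_iter):
        filled = np.where(mask, M, Z)  # observed entries + current guess
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)   # soft-threshold singular values
        Z = (U * s) @ Vt
    return Z

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))  # rank-2 ground truth
mask = rng.random(A.shape) < 0.6                         # observe 60% of entries
A_hat = soft_impute(A, mask)
```

With a rank-2 target and 60% of entries observed, the iteration recovers the missing entries to within a small relative error.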

Estimation of Structural Causal Model via Sparsely Mixing Independent Component Analysis

no code implementations • 7 Sep 2020 • Kazuharu Harada, Hironori Fujisawa

To address these issues, we propose a new estimation method for a linear DAG model with non-Gaussian noises.

Transfer Learning via $\ell_1$ Regularization

no code implementations • NeurIPS 2020 • Masaaki Takada, Hironori Fujisawa

The proposed method has a tight estimation error bound under a stationary environment, and the estimate remains unchanged from the source estimate under small residuals.

Transfer Learning
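
A plausible toy version of transfer via $\ell_1$ regularization, assuming the objective is least squares plus an $\ell_1$ penalty on the coefficients and on their deviation from a source-domain estimate (the paper's exact formulation and solver may differ), is:

```python
import numpy as np

# Hypothetical sketch of the Transfer Lasso idea: penalize both
# ||beta||_1 and ||beta - beta_src||_1, so with small residuals the
# estimate can stay exactly at the source estimate. Solved here by
# plain (sub)gradient descent for clarity, not the paper's algorithm.
def transfer_lasso(X, y, beta_src, lam1=0.01, lam2=0.1, lr=0.01, n_iter=2000):
    beta = beta_src.copy()
    n = len(y)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        grad += lam1 * np.sign(beta)              # sparsity penalty
        grad += lam2 * np.sign(beta - beta_src)   # stay-near-source penalty
        beta -= lr * grad
    return beta

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, 0.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.1, size=50)
beta_src = np.array([0.9, 0.0, -1.1])  # hypothetical source-domain estimate
beta_hat = transfer_lasso(X, y, beta_src)
```

With only 50 target samples, the source penalty regularizes the fit toward the (nearly correct) source coefficients.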

Robust estimation with Lasso when outputs are adversarially contaminated

no code implementations • 13 Apr 2020 • Takeyuki Sasai, Hironori Fujisawa

Nguyen and Tran (2012) proposed an extended Lasso for robust parameter estimation and showed the convergence rate of its estimation error.
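
The extended-Lasso modeling idea can be sketched as follows, assuming the Nguyen-Tran formulation y = Xβ + √n·θ + ε with a sparse outlier vector θ and a joint ℓ1 penalty, solved here by plain ISTA rather than their algorithm:

```python
import numpy as np

# Sketch of the extended Lasso: augment the design with sqrt(n) * I so
# each observation gets its own potential outlier coefficient, then run
# an ordinary l1-penalized least squares (ISTA) on the stacked vector.
def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def extended_lasso(X, y, lam=0.05, n_iter=1000):
    n, p = X.shape
    Z = np.hstack([X, np.sqrt(n) * np.eye(n)])  # augmented design [X, sqrt(n) I]
    w = np.zeros(p + n)                         # stacked (beta, theta / sqrt(n))
    L = np.linalg.norm(Z, 2) ** 2 / n           # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Z.T @ (Z @ w - y) / n
        w = soft(w - grad / L, lam / L)
    return w[:p], np.sqrt(n) * w[p:]            # (coefficients, outlier estimates)

rng = np.random.default_rng(4)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)
y[:5] += 10.0  # adversarial corruption of five outputs
beta_hat, theta_hat = extended_lasso(X, y)
```

The l1 penalty zeroes the outlier coefficients of clean observations while absorbing the five corrupted responses into theta, so beta is estimated from effectively clean data.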

HMLasso: Lasso with High Missing Rate

no code implementations • 1 Nov 2018 • Masaaki Takada, Hironori Fujisawa, Takeichiro Nishikawa

Convex Conditioned Lasso (CoCoLasso) has been proposed for high-dimensional data with missing values, but it performs poorly when many values are missing, so the high-missing-rate problem remains unresolved.

regression
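
The covariance-based skeleton shared by CoCoLasso-style methods can be sketched as below: estimate the Gram matrix from pairwise-complete entries, project it onto the positive semidefinite cone, and run a Lasso in covariance form. HMLasso's distinguishing weighting by missing ratios is omitted here, so this is only the common baseline:

```python
import numpy as np

# Pairwise-complete covariance for zero-mean data with NaNs for
# missing values; individual entries may make the matrix indefinite.
def pairwise_cov(X):
    p = X.shape[1]
    S = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            ok = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
            S[i, j] = np.mean(X[ok, i] * X[ok, j])
    return S

def psd_project(S):
    # Nearest PSD matrix: symmetrize and clip negative eigenvalues.
    w, V = np.linalg.eigh((S + S.T) / 2)
    return (V * np.maximum(w, 0)) @ V.T

def cov_lasso(Sigma, rho, lam=0.05, n_iter=500):
    # Proximal gradient on 0.5 * b' Sigma b - rho' b + lam * ||b||_1.
    L = np.linalg.eigvalsh(Sigma).max()
    b = np.zeros(len(rho))
    for _ in range(n_iter):
        v = b - (Sigma @ b - rho) / L
        b = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
    return b

rng = np.random.default_rng(2)
n, p = 400, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.0, 2.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)
Xm = X.copy()
Xm[rng.random(X.shape) < 0.3] = np.nan  # 30% of covariate entries missing
Sigma = psd_project(pairwise_cov(Xm))
rho = np.array([np.nanmean(Xm[:, j] * y) for j in range(p)])
beta_hat = cov_lasso(Sigma, rho)
```

Working with (Sigma, rho) rather than the raw design is what lets these methods use every observed entry despite the missingness.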

Stochastic Gradient Descent for Stochastic Doubly-Nonconvex Composite Optimization

no code implementations • 21 May 2018 • Takayuki Kawashima, Hironori Fujisawa

There is no convergence property when both composite functions are nonconvex, a setting we call the doubly-nonconvex case. To overcome this difficulty, we assume the simple and weak condition that the penalty function is quasiconvex, and we then obtain convergence properties for the stochastic doubly-nonconvex composite optimization problem. The convergence rate obtained here is of the same order as in existing work. We analyze the convergence rate in depth with respect to the constant step size and mini-batch size, and derive the optimal convergence rate under appropriate choices of both, which is superior to that of existing work.
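
The composite setting can be illustrated with a generic proximal SGD skeleton: a mini-batch gradient step on the smooth loss followed by the proximal operator of the penalty. For concreteness the smooth part below is least squares and the penalty is the ℓ1 norm, so this toy does not reproduce the doubly-nonconvex case the paper actually analyzes:

```python
import numpy as np

# Generic proximal SGD for a composite objective f(b) + g(b).
def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd(X, y, lam=0.05, lr=0.05, batch=32, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        idx = rng.integers(0, len(y), size=batch)         # sample a mini-batch
        grad = X[idx].T @ (X[idx] @ b - y[idx]) / batch   # stochastic gradient of f
        b = prox_l1(b - lr * grad, lr * lam)              # proximal step for g
    return b

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 5))
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.1, size=500)
beta_hat = prox_sgd(X, y)
```

The paper's analysis concerns exactly this iteration pattern, with both f and g allowed to be nonconvex and the constant step size and mini-batch size tuned for the optimal rate.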

Robust and Sparse Regression in GLM by Stochastic Optimization

no code implementations • 9 Feb 2018 • Takayuki Kawashima, Hironori Fujisawa

In particular, we show linear regression, logistic regression, and Poisson regression with $L_1$ regularization in detail as specific examples of robust and sparse GLMs.

regression, Stochastic Optimization
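
As a baseline sketch of the sparse-GLM component only (the paper's robust divergence-based loss and stochastic algorithm are not reproduced), ℓ1-regularized logistic regression by proximal gradient descent looks like:

```python
import numpy as np

# Plain l1-regularized logistic regression: gradient step on the
# negative log-likelihood, then soft-thresholding for sparsity.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_logistic(X, y, lam=0.01, lr=0.5, n_iter=3000):
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (sigmoid(X @ b) - y) / len(y)
        b = b - lr * grad
        b = np.sign(b) * np.maximum(np.abs(b) - lr * lam, 0.0)
    return b

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 3))
beta_true = np.array([2.0, 0.0, -2.0])
y = (rng.random(1000) < sigmoid(X @ beta_true)).astype(float)
beta_hat = sparse_logistic(X, y)
```

Swapping the squared-error gradient for the GLM score is all that changes relative to the linear case, which is why the robust-and-sparse framework extends naturally across linear, logistic, and Poisson regression.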

Independently Interpretable Lasso: A New Regularizer for Sparse Regression with Uncorrelated Variables

no code implementations • 6 Nov 2017 • Masaaki Takada, Taiji Suzuki, Hironori Fujisawa

However, one of the biggest issues in sparse regularization is that its performance is quite sensitive to correlations between features.

regression
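
One hedged reading of the idea is to augment the ℓ1 penalty with a term weighted by pairwise absolute correlations, so that strongly correlated features resist being active together; the exact penalty form and solver below are illustrative guesses, not the paper's:

```python
import numpy as np

# Toy "interaction-penalized" lasso: the extra term
# lam * alpha * sum_{i != j} R_ij |b_i| |b_j|, with R_ij the absolute
# feature correlations, discourages co-activation of correlated
# features. Solved by subgradient descent for simplicity.
def ii_lasso(X, y, lam=0.05, alpha=1.0, lr=0.02, n_iter=5000):
    n, p = X.shape
    R = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(R, 0.0)  # penalize only distinct pairs
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        grad += lam * np.sign(b)                             # l1 part
        grad += lam * alpha * np.sign(b) * (R @ np.abs(b))   # interaction part
        b -= lr * grad
    return b

rng = np.random.default_rng(7)
n = 500
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.3 * rng.normal(size=n)  # strongly correlated with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + x3 + rng.normal(scale=0.1, size=n)
beta_hat = ii_lasso(X, y)
```

Because x1 and x2 are nearly collinear, the interaction term raises the cost of keeping both active, nudging the redundant x2 coefficient toward zero and leaving one interpretable representative per correlated group.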

Robust and Sparse Regression via $\gamma$-divergence

no code implementations • 22 Apr 2016 • Takayuki Kawashima, Hironori Fujisawa

The loss function is constructed by an empirical estimate of the $\gamma$-divergence with sparse regularization and the parameter estimate is defined as the minimizer of the loss function.

regression
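
A minimal sketch of a $\gamma$-divergence-type loss for linear regression, assuming a Gaussian error model with fixed scale sigma: each residual r receives a weight exp(-gamma * r^2 / (2 sigma^2)), so gross outliers are automatically downweighted. The sparse regularization and the paper's estimation scheme are omitted:

```python
import numpy as np

# Hypothetical gamma-divergence-flavored regression sketch: iterate
# weighted gradient steps where the weights decay exponentially in the
# squared residual, so outliers contribute essentially nothing.
def gamma_regression(X, y, gamma=0.5, sigma=1.0, lr=0.5, n_iter=1000):
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.exp(-gamma * r**2 / (2 * sigma**2))
        w /= w.sum()                   # normalized robust weights
        beta += lr * X.T @ (w * r)     # weighted gradient step
    return beta

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 2))
beta_true = np.array([1.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.1, size=200)
y[:20] += 15.0  # 10% gross outliers
beta_hat = gamma_regression(X, y)
```

The weight of a residual of size 15 is astronomically small here, which is the mechanism by which the $\gamma$-divergence achieves strong robustness without explicitly identifying outliers.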

Sparse principal component regression with adaptive loading

no code implementations • 26 Feb 2014 • Shuichi Kawano, Hironori Fujisawa, Toyoyuki Takada, Toshihiko Shiroishi

Principal component regression (PCR) is a two-stage procedure that selects some principal components and then constructs a regression model regarding them as new explanatory variables.

regression
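
The two-stage baseline described here is easy to sketch (the paper's sparse PCR with adaptive loadings is not reproduced): PCA on the centered design, then least squares on the top-k component scores.

```python
import numpy as np

# Minimal two-stage principal component regression: stage 1 computes
# principal component scores, stage 2 regresses y on those scores, and
# the coefficients are mapped back to the original feature space.
def pcr(X, y, k=2):
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                    # top-k principal axes
    scores = Xc @ V                 # stage 1: component scores
    gamma = np.linalg.lstsq(scores, y - y.mean(), rcond=None)[0]
    return V @ gamma                # coefficients in the original space

rng = np.random.default_rng(9)
X = rng.normal(size=(100, 3))
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.1, size=100)
beta_hat = pcr(X, y, k=3)  # with k = p this reduces to ordinary least squares
```

The two stages are decoupled: the components are chosen without looking at y, which is exactly the weakness the adaptive-loading approach targets.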

Affine Invariant Divergences associated with Composite Scores and its Applications

no code implementations • 11 May 2013 • Takafumi Kanamori, Hironori Fujisawa

By using estimators that are equivariant under affine transformations, one can obtain estimators that do not essentially depend on the choice of the system of units of measurement.
