1 code implementation • 30 Aug 2023 • Masaaki Takada, Hironori Fujisawa
This paper presents a comprehensive exploration of the theoretical properties inherent in the Adaptive Lasso and the Transfer Lasso.
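The paper studies theoretical properties; as background, the Adaptive Lasso estimator itself (Zou, 2006) can be sketched as a weighted L1 problem solved by coordinate descent. This is an illustrative sketch, not the paper's analysis: the function name, the OLS pilot estimate, and the fixed sweep count are our choices.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the scalar prox of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def adaptive_lasso(X, y, lam, n_sweeps=200):
    """Adaptive Lasso sketch: penalize each coefficient by 1/|beta_ols_j|,
    then solve the weighted lasso
        (1/2n)||y - X b||^2 + lam * sum_j w_j |b_j|
    by cyclic coordinate descent."""
    n, p = X.shape
    beta_init, *_ = np.linalg.lstsq(X, y, rcond=None)  # pilot OLS estimate
    w = 1.0 / (np.abs(beta_init) + 1e-8)               # adaptive weights
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual excluding feature j.
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, n * lam * w[j]) / col_sq[j]
    return beta
```

Large pilot coefficients get small weights (light shrinkage), while near-zero pilot coefficients get large weights and are thresholded to exactly zero, which is the source of the oracle-type behavior the paper discusses.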
no code implementations • 24 Aug 2022 • Takeyuki Sasai, Hironori Fujisawa
We consider outlier-robust and sparse estimation of linear regression coefficients when the covariates are contaminated by adversarial outliers and the noises are sampled from a heavy-tailed distribution.
no code implementations • 22 Feb 2021 • Takeyuki Sasai, Hironori Fujisawa
We consider a robust estimation of linear regression coefficients.
no code implementations • 25 Oct 2020 • Takeyuki Sasai, Hironori Fujisawa
We deal with matrix compressed sensing, which includes the lasso as a special case, and matrix completion, and obtain sharp estimation error bounds.
no code implementations • 7 Sep 2020 • Kazuharu Harada, Hironori Fujisawa
To address these issues, we propose a new estimation method for a linear DAG model with non-Gaussian noises.
no code implementations • NeurIPS 2020 • Masaaki Takada, Hironori Fujisawa
The proposed method has a tight estimation error bound under a stationary environment, and the estimate remains unchanged from the source estimate under small residuals.
no code implementations • 13 Apr 2020 • Takeyuki Sasai, Hironori Fujisawa
Nguyen and Tran (2012) proposed an extended Lasso for robust parameter estimation and showed the convergence rate of the estimation error.
no code implementations • 1 Nov 2018 • Masaaki Takada, Hironori Fujisawa, Takeichiro Nishikawa
Convex Conditioned Lasso (CoCoLasso) has been proposed for dealing with high-dimensional data with missing values, but it performs poorly when many values are missing, leaving the high-missing-rate problem unresolved.
no code implementations • 21 May 2018 • Takayuki Kawashima, Hironori Fujisawa
There is no known convergence property when both composite functions are nonconvex, which we call the \textit{doubly-nonconvex} case. To overcome this difficulty, we assume a simple and weak condition that the penalty function is \textit{quasiconvex}, and then obtain convergence properties for the stochastic doubly-nonconvex composite optimization problem. The convergence rate obtained here is of the same order as that of existing work. We analyze the convergence rate in depth with respect to the constant step size and the mini-batch size, and give the optimal convergence rate with appropriate choices of both, which is superior to existing work.
no code implementations • 9 Feb 2018 • Takayuki Kawashima, Hironori Fujisawa
In particular, we present linear regression, logistic regression, and Poisson regression with $L_1$ regularization in detail as specific examples of robust and sparse GLMs.
no code implementations • 6 Nov 2017 • Masaaki Takada, Taiji Suzuki, Hironori Fujisawa
However, one of the biggest issues in sparse regularization is that its performance is quite sensitive to correlations between features.
no code implementations • 28 Sep 2016 • Shuichi Kawano, Hironori Fujisawa, Toyoyuki Takada, Toshihiko Shiroishi
The basic loss function combines a regression loss and a PCA loss.
no code implementations • 22 Apr 2016 • Takayuki Kawashima, Hironori Fujisawa
The loss function is constructed from an empirical estimate of the $\gamma$-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of the loss function.
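For a Gaussian regression model with fixed scale, minimizing a $\gamma$-divergence-based loss admits an iteratively reweighted least-squares view: observations with large residuals receive exponentially small weights. The sketch below illustrates only this reweighting mechanism; it omits the sparse regularization term, fixes $\sigma$, and uses names of our own choosing, so it is not the paper's exact estimator.

```python
import numpy as np

def gamma_robust_fit(X, y, gamma=1.0, sigma=1.0, n_iter=50):
    """Robust linear regression via iterative reweighting, motivated by the
    gamma-divergence loss for a Gaussian model with fixed sigma. A stationary
    point satisfies sum_i w_i * r_i * x_i = 0 with w_i = exp(-gamma r_i^2 / (2 sigma^2)),
    i.e. a weighted least-squares condition in which outliers are downweighted."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS initialization
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.exp(-gamma * r**2 / (2 * sigma**2))   # outliers get weight ~ 0
        sw = np.sqrt(w)
        # Weighted least squares with the current weights.
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta
```

Because the weights decay exponentially in the squared residual, gross outliers contribute essentially nothing to the fit, which is the robustness property the $\gamma$-divergence is chosen for.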
no code implementations • 26 Feb 2014 • Shuichi Kawano, Hironori Fujisawa, Toyoyuki Takada, Toshihiko Shiroishi
Principal component regression (PCR) is a two-stage procedure that selects some principal components and then constructs a regression model regarding them as new explanatory variables.
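The two-stage procedure described here can be sketched directly: extract the top-$k$ principal components of the design matrix, then run ordinary least squares on the component scores. This is a minimal sketch of standard PCR (function name and interface are ours), not the paper's extension of it.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression in two stages:
    (1) project centered X onto its top-k principal components;
    (2) regress centered y on the component scores;
    then map the coefficients back to the original feature space."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    # Principal directions from the SVD of the centered design matrix.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V_k = Vt[:k].T                                    # top-k loading vectors
    Z = Xc @ V_k                                      # component scores
    gamma, *_ = np.linalg.lstsq(Z, yc, rcond=None)    # OLS on the scores
    beta = V_k @ gamma                                # back to feature space
    intercept = y_mean - x_mean @ beta
    return beta, intercept
```

With `k` equal to the full rank, PCR reduces to ordinary least squares; choosing `k` smaller discards low-variance directions, which trades bias for variance.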
no code implementations • 11 May 2013 • Takafumi Kanamori, Hironori Fujisawa
By using estimators that are equivariant under affine transformations, one can obtain estimators that do not essentially depend on the choice of the system of units of measurement.