no code implementations • 21 May 2018 • Takayuki Kawashima, Hironori Fujisawa
No convergence guarantee was previously known when both composite functions are nonconvex, a setting we call the \textit{doubly-nonconvex} case. To overcome this difficulty, we assume a simple and weak condition, namely that the penalty function is \textit{quasiconvex}, and then obtain convergence properties for the stochastic doubly-nonconvex composite optimization problem. The convergence rate obtained here is of the same order as in existing work. We analyze the convergence rate in terms of the constant step size and mini-batch size in detail and derive the optimal convergence rate under appropriately chosen sizes, which improves on existing results.
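The setting the abstract analyzes, mini-batch stochastic optimization of a composite objective with a constant step size and a constant mini-batch size, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the smooth part is a least-squares fit, and the penalty's proximal step uses $L_1$ soft-thresholding as a convex stand-in (the paper itself covers nonconvex quasiconvex penalties); all function names and parameter values here are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 -- a convex stand-in for the
    # (possibly nonconvex, quasiconvex) penalties treated in the paper.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_gradient(X, y, lam=0.1, step=0.01, batch=16, iters=500, seed=0):
    # Mini-batch stochastic proximal gradient with constant step size and
    # constant mini-batch size, the regime analyzed in the abstract.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / batch  # stochastic gradient of the smooth part
        w = soft_threshold(w - step * grad, step * lam)  # proximal step on the penalty
    return w
```

Larger mini-batches reduce gradient variance while a larger constant step speeds early progress; the paper's contribution is characterizing the optimal trade-off between these two sizes.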
no code implementations • 9 Feb 2018 • Takayuki Kawashima, Hironori Fujisawa
In particular, we present linear regression, logistic regression, and Poisson regression with $L_1$ regularization in detail as specific examples of robust and sparse GLMs.
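To make the logistic-regression instance concrete, the sketch below writes down one common empirical form of the $\gamma$-cross entropy (Fujisawa–Eguchi type) with an $L_1$ penalty; for binary $y \in \{0,1\}$ the integral term reduces to the finite sum $p^{1+\gamma} + (1-p)^{1+\gamma}$. This is an illustrative loss definition under those assumptions, not the paper's exact formulation, and the function names are invented here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gamma_logistic_loss(beta, X, y, gamma, lam):
    # Empirical gamma-cross entropy for logistic regression plus an L1
    # penalty. For y in {0,1}, the normalizing integral over y is the
    # finite sum p^(1+gamma) + (1-p)^(1+gamma).
    p = sigmoid(X @ beta)
    fy = np.where(y == 1, p, 1.0 - p)  # f(y_i | x_i; beta)
    term1 = -np.log(np.mean(fy ** gamma)) / gamma
    term2 = np.log(np.mean(p ** (1 + gamma) + (1 - p) ** (1 + gamma))) / (1 + gamma)
    return term1 + term2 + lam * np.abs(beta).sum()
```

Because each observation enters through $f(y_i\,|\,x_i)^{\gamma}$, grossly mislabeled points contribute with heavily damped weight, which is the mechanism behind the robustness; the $L_1$ term supplies sparsity.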
no code implementations • 22 Apr 2016 • Takayuki Kawashima, Hironori Fujisawa
The loss function is constructed from an empirical estimate of the $\gamma$-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of this loss function.
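For linear regression with Gaussian errors and fixed unit variance, the normalizing term of the $\gamma$-cross entropy does not depend on the coefficients, so the empirical loss reduces (up to constants) to a log of exponentially down-weighted squared residuals. The sketch below minimizes that loss plus an $L_1$ penalty by proximal gradient descent; it is a minimal illustration under those assumptions (fixed $\sigma = 1$, illustrative $\gamma$, $\lambda$, and step size), not the paper's estimation procedure.

```python
import numpy as np

def gamma_loss(beta, X, y, gamma):
    # Empirical gamma-cross entropy for Gaussian linear regression with
    # fixed unit error variance; the beta-free normalizing constant is dropped.
    r = y - X @ beta
    return -np.log(np.mean(np.exp(-gamma * r ** 2 / 2.0))) / gamma

def gamma_lasso(X, y, gamma=0.5, lam=0.05, step=0.1, iters=1000):
    # Proximal gradient descent on gamma_loss + lam * ||beta||_1.
    # Outlying observations receive exponentially small weights w_i,
    # which is the source of the estimator's robustness.
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(iters):
        r = y - X @ beta
        w = np.exp(-gamma * r ** 2 / 2.0)
        w /= w.sum()
        grad = -X.T @ (w * r)  # exact gradient of the smooth gamma loss
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # L1 prox
    return beta
```

The weighted-residual gradient makes the connection to ordinary lasso explicit: as $\gamma \to 0$ the weights become uniform and the update reduces to a standard proximal gradient step on least squares.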