Search Results for author: Takayuki Kawashima

Found 3 papers, 0 papers with code

Stochastic Gradient Descent for Stochastic Doubly-Nonconvex Composite Optimization

no code implementations · 21 May 2018 · Takayuki Kawashima, Hironori Fujisawa

No convergence property is known when both functions of the composite objective are nonconvex, a setting named the doubly-nonconvex case. To overcome this difficulty, we assume a simple and weak condition, namely that the penalty function is quasiconvex, and then obtain convergence properties for the stochastic doubly-nonconvex composite optimization problem. The convergence rate obtained here is of the same order as that of existing work. We analyze the convergence rate in depth with respect to the constant step size and mini-batch size, and give the optimal convergence rate under appropriately chosen sizes, which is superior to existing work.
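The objective here is a composite F(x) = f(x) + g(x) minimized by stochastic gradient steps. As a point of reference, the following is a minimal Python sketch of the standard stochastic proximal gradient template that such methods build on, using an $L_1$ prox as a simple convex stand-in for the quasiconvex penalties the paper treats; the step size, mini-batch size, and synthetic data are illustrative assumptions, not the authors' algorithm or settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def prox_l1(v, t):
    """Soft-thresholding: prox of t * ||.||_1 (a convex stand-in for
    the more general quasiconvex penalties treated in the paper)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_sgd(grad_f, prox_g, x0, step=0.05, lam=0.1,
                 n_iters=2000, batch_size=32, n=500):
    """Stochastic proximal gradient template:
       x <- prox_{step*g}(x - step * stochastic gradient of f at x)."""
    x = x0.copy()
    for _ in range(n_iters):
        idx = rng.integers(0, n, size=batch_size)  # mini-batch indices
        x = prox_g(x - step * grad_f(x, idx), step * lam)
    return x

# Toy smooth part f: least squares on synthetic sparse data.
A = rng.normal(size=(500, 20))
beta_true = np.zeros(20); beta_true[:3] = [2.0, -1.0, 0.5]
y = A @ beta_true + 0.1 * rng.normal(size=500)

def grad_f(x, idx):
    Ab, yb = A[idx], y[idx]
    return Ab.T @ (Ab @ x - yb) / len(idx)

x_hat = proximal_sgd(grad_f, prox_l1, np.zeros(20))
print(np.round(x_hat, 2))
```

The constant step size and mini-batch size appearing as arguments above are exactly the two quantities whose interplay the abstract says is analyzed to obtain the optimal rate.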

Robust and Sparse Regression in GLM by Stochastic Optimization

no code implementations · 9 Feb 2018 · Takayuki Kawashima, Hironori Fujisawa

In particular, we present linear regression, logistic regression, and Poisson regression with $L_1$ regularization in detail as specific examples of robust and sparse GLMs.
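As an illustration of one of the GLM examples named above, here is a short sketch of $L_1$-penalized logistic regression fitted by stochastic proximal gradient steps. Note that this uses the ordinary logistic log-likelihood, not the robust divergence-based loss of the paper, and all names and data are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad(beta, X, y):
    # Gradient of the (non-robust) logistic negative log-likelihood;
    # the paper replaces this loss with a robust one to resist outliers.
    return X.T @ (sigmoid(X @ beta) - y) / len(y)

def prox_l1(v, t):
    # Soft-thresholding, the prox of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Synthetic logistic data with a sparse true coefficient vector.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
beta_true = np.zeros(10); beta_true[:2] = [1.5, -2.0]
y = (rng.random(400) < sigmoid(X @ beta_true)).astype(float)

# Stochastic proximal gradient loop on mini-batches.
beta, step, lam = np.zeros(10), 0.1, 0.01
for _ in range(3000):
    idx = rng.integers(0, 400, size=32)
    beta = prox_l1(beta - step * logistic_grad(beta, X[idx], y[idx]),
                   step * lam)
print(np.round(beta, 2))
```

Swapping `logistic_grad` for a Gaussian or Poisson log-likelihood gradient gives the other two GLM instances the abstract mentions.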

regression · Stochastic Optimization

Robust and Sparse Regression via $\gamma$-divergence

no code implementations · 22 Apr 2016 · Takayuki Kawashima, Hironori Fujisawa

The loss function is constructed from an empirical estimate of the $\gamma$-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of this loss function.
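To make the construction concrete, the sketch below writes out one common form of the empirical $\gamma$-cross entropy for a Gaussian regression model, together with the $L_1$-regularized objective whose minimizer defines the estimate. The normalization convention and all function names here are assumptions made for illustration and may differ in detail from the paper.

```python
import numpy as np

def gamma_loss(beta, sigma, X, y, gamma=0.5):
    """Empirical gamma-cross-entropy loss for the Gaussian model
    f(y|x) = N(y; x.beta, sigma^2) (one common formulation):
      -(1/gamma) * log( mean_i f(y_i|x_i)^gamma / C^{gamma/(1+gamma)} )
    with C = int f(y|x)^{1+gamma} dy
           = (2*pi*sigma^2)^(-gamma/2) / sqrt(1+gamma).
    Large residuals make f(y_i|x_i)^gamma tiny, so outliers are
    automatically down-weighted -- the source of the robustness."""
    mu = X @ beta
    dens = (np.exp(-0.5 * (y - mu) ** 2 / sigma ** 2)
            / np.sqrt(2 * np.pi * sigma ** 2))
    C = (2 * np.pi * sigma ** 2) ** (-gamma / 2) / np.sqrt(1 + gamma)
    return -np.log(np.mean(dens ** gamma)
                   / C ** (gamma / (1 + gamma))) / gamma

def objective(beta, sigma, X, y, lam, gamma=0.5):
    # Loss plus sparse (L1) regularization; the parameter estimate
    # is defined as the minimizer of this objective.
    return gamma_loss(beta, sigma, X, y, gamma) + lam * np.abs(beta).sum()
```

As gamma tends to 0 the loss approaches the ordinary negative log-likelihood, so gamma controls the trade-off between efficiency and robustness.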

regression
