no code implementations • 21 Mar 2017 • Atsushi Shibagaki, Ichiro Takeuchi
We study primal-dual type stochastic optimization algorithms with non-uniform sampling.
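The flavor of non-uniform sampling can be sketched with plain SGD and importance weights (this is a generic illustration, not the paper's primal-dual algorithm; all data and names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data with labels in {-1, +1}.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d))

# Non-uniform sampling: draw example i with probability p_i proportional
# to its row norm (a common importance measure), then reweight the
# stochastic gradient by 1 / (n * p_i) so the estimator stays unbiased.
p = np.linalg.norm(X, axis=1)
p /= p.sum()

def loss(w):
    # Numerically stable mean logistic loss.
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

w = np.zeros(d)
for t in range(3000):
    i = rng.choice(n, p=p)
    margin = y[i] * (X[i] @ w)
    grad_i = -y[i] * X[i] / (1.0 + np.exp(margin))  # gradient of loss term i
    w -= (0.5 / (1.0 + 0.01 * t)) * grad_i / (n * p[i])

print(loss(np.zeros(d)), loss(w))  # loss should drop well below log(2)
```

Sampling proportionally to per-example "importance" (here, row norms) reduces the variance of the stochastic gradient when examples contribute unevenly to the objective.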
no code implementations • 1 Jun 2016 • Hiroyuki Hanada, Atsushi Shibagaki, Jun Sakuma, Ichiro Takeuchi
We study large-scale classification problems in changing environments where a small part of the dataset is modified, and the effect of the data modification must be quickly incorporated into the classifier.
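Why incremental updating helps can be illustrated with a warm-started solver: after a small part of the data changes, restarting from the old solution converges much faster than solving from scratch. This is a generic sketch on ridge regression, not the paper's method; every quantity below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 300, 10
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
lam = 1.0

def fit(X, y, w0, tol=1e-8):
    """Gradient descent on the ridge objective, returning iteration count."""
    L = np.linalg.norm(X, 2) ** 2 + lam   # Lipschitz constant of the gradient
    w = w0.copy()
    for it in range(100000):
        g = X.T @ (X @ w - y) + lam * w
        if np.linalg.norm(g) < tol:
            break
        w -= g / L
    return w, it

w_old, _ = fit(X, y, np.zeros(d))

# A small part of the dataset is modified: 5 of 300 rows are replaced.
X2, y2 = X.copy(), y.copy()
X2[:5] = rng.normal(size=(5, d))
y2[:5] = X2[:5] @ rng.normal(size=d)

_, iters_cold = fit(X2, y2, np.zeros(d))  # solve from scratch
_, iters_warm = fit(X2, y2, w_old)        # warm start from the old solution

print(iters_cold, iters_warm)  # warm start needs fewer iterations
```

Because only a few rows changed, the old solution is already close to the new optimum, so a linearly convergent solver needs far fewer iterations from the warm start.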
no code implementations • 8 Feb 2016 • Atsushi Shibagaki, Masayuki Karasuyama, Kohei Hatano, Ichiro Takeuchi
A significant advantage of performing safe feature screening and safe sample screening simultaneously rather than individually is that the two have a synergy effect: the results of a previous safe feature screening step can be exploited to improve the next safe sample screening step, and vice versa.
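The flavor of a safe screening rule can be sketched in the Lasso setting: a duality-gap-based bound certifies that some coefficients are exactly zero at the optimum, so the corresponding features can be discarded without changing the solution. This is a generic gap-safe feature-screening sketch, not the paper's simultaneous feature/sample rules; all quantities below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 50
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]            # only 3 truly relevant features
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 0.5 * np.max(np.abs(X.T @ y))      # fairly strong regularization

def primal(w):
    return 0.5 * np.sum((y - X @ w) ** 2) + lam * np.sum(np.abs(w))

# A few ISTA steps give a reasonable (not exact) primal iterate.
L = np.linalg.norm(X, 2) ** 2
w = np.zeros(d)
for _ in range(500):
    z = w - X.T @ (X @ w - y) / L
    w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

# Rescale the residual into a dual-feasible point and compute the duality gap.
res = y - X @ w
theta = res / max(lam, np.max(np.abs(X.T @ res)))
dual = 0.5 * np.sum(y ** 2) - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
gap = primal(w) - dual
radius = np.sqrt(2.0 * gap) / lam

# Gap-safe rule: feature j is provably zero at the optimum whenever
# |x_j^T theta| + radius * ||x_j|| < 1.
scores = np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0)
screened = scores < 1.0
print(screened.sum(), "of", d, "features safely discarded")
```

The tighter the duality gap of the current iterate, the smaller the safe radius and the more features (or, in the sample-screening analogue, more training examples) can be eliminated.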
1 code implementation • NeurIPS 2015 • Atsushi Shibagaki, Yoshiki Suzuki, Masayuki Karasuyama, Ichiro Takeuchi
Careful tuning of a regularization parameter is indispensable in many machine learning tasks because it has a significant impact on generalization performance.
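The basic tuning problem can be sketched by sweeping a grid of regularization parameters and tracking held-out error (a plain grid search on ridge regression, not the paper's lower-bound machinery; the split and grid below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 20
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(size=n)

# Hold-out split (a stand-in for proper cross-validation).
X_tr, y_tr = X[:150], y[:150]
X_va, y_va = X[150:], y[150:]

# Sweep a grid of regularization parameters; ridge has a closed form,
# so each lambda is solved exactly and its validation error recorded.
lambdas = np.logspace(-3, 3, 25)
errors = []
for lam in lambdas:
    w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)
    errors.append(np.mean((X_va @ w - y_va) ** 2))

best = lambdas[int(np.argmin(errors))]
print("best lambda:", best, "validation MSE:", min(errors))
```

Each grid point costs a full solve, which is why bounds on how the validation error can vary between grid points are valuable: they let a tuner certify near-optimality without solving at every candidate parameter.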