no code implementations • 7 Jun 2022 • Shinya Suzumura, Hitoshi Abe
In online advertising, a set of candidate advertisements is ranked by an auction system, and typically the top-1 advertisement is selected and displayed in an advertising slot.
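The top-1 selection described above can be sketched as a simple argmax over auction scores; the scoring rule below (bid × predicted click-through rate) and the field names are illustrative assumptions, not details from the paper.

```python
def select_ad(ads):
    """Pick the top-1 ad by auction score (hypothetical score = bid * pCTR)."""
    return max(ads, key=lambda ad: ad["bid"] * ad["pctr"])

ads = [
    {"id": "a", "bid": 2.0, "pctr": 0.010},  # score 0.020
    {"id": "b", "bid": 1.5, "pctr": 0.020},  # score 0.030 (highest)
    {"id": "c", "bid": 3.0, "pctr": 0.005},  # score 0.015
]
print(select_ad(ads)["id"])  # "b"
```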
no code implementations • ICML 2017 • Shinya Suzumura, Kazuya Nakagawa, Yuta Umezu, Koji Tsuda, Ichiro Takeuchi
Finding statistically significant high-order interactions in predictive modeling is an important but challenging task because the number of possible high-order interactions is extremely large (e.g., $> 10^{17}$).
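To see the scale of the search space: the number of distinct interactions among $d$ features up to order $k$ is $\sum_{j=1}^{k} \binom{d}{j}$. A short sketch (the specific values of $d$ and $k$ are illustrative, not taken from the paper):

```python
from math import comb

def interaction_count(d, max_order):
    """Number of distinct feature interactions of order 1..max_order
    among d features, i.e. sum of binomial coefficients C(d, k)."""
    return sum(comb(d, k) for k in range(1, max_order + 1))

# With 10,000 features and interactions up to order 5, exhaustive
# enumeration already exceeds the 10^17 scale mentioned above.
print(interaction_count(10_000, 5) > 10**17)  # True
```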
no code implementations • 15 Feb 2016 • Kazuya Nakagawa, Shinya Suzumura, Masayuki Karasuyama, Koji Tsuda, Ichiro Takeuchi
The SPP method allows us to efficiently find a superset of all the predictive patterns in the database that are needed for the optimal predictive model.
no code implementations • 15 Feb 2016 • Shinya Suzumura, Kazuya Nakagawa, Mahito Sugiyama, Koji Tsuda, Ichiro Takeuchi
The main obstacle in this problem is the difficulty of taking into account the selection bias, i.e., the bias arising from the fact that patterns are selected from an extremely large number of candidates in databases.
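The selection bias can be illustrated with a small simulation: even when the response is pure noise, the best-scoring pattern out of many candidates looks deceptively strong. The scoring function and problem sizes below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.RandomState(0)
n, n_patterns = 100, 10_000

# Null setting: the (centered) response is independent of every
# candidate pattern, so no pattern has a real effect.
y = rng.randn(n)
y -= y.mean()
patterns = rng.randint(0, 2, size=(n_patterns, n))  # binary patterns

# Score each pattern, then look only at the best one: the selection
# step alone makes the winner look far stronger than a typical pattern.
scores = np.abs(patterns @ y) / n
print(f"selected: {scores.max():.3f}  typical: {scores.mean():.3f}")
```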
no code implementations • 12 Jul 2015 • Shinya Suzumura, Kohei Ogawa, Masashi Sugiyama, Masayuki Karasuyama, Ichiro Takeuchi
An advantage of our homotopy approach is that it can be interpreted as simulated annealing, a common approach for finding a good local optimum in non-convex optimization problems.
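A minimal sketch of the warm-started homotopy idea on a toy one-dimensional problem; the objective, the linear homotopy schedule, and the finite-difference gradient-descent inner solver are illustrative assumptions, not the paper's algorithm.

```python
def homotopy_minimize(f_target, f_convex, x0, steps=20, inner_iters=200, lr=0.01):
    """Gradually morph a convex surrogate into the non-convex target,
    warm-starting each stage from the previous stage's minimizer."""
    x = float(x0)
    for i in range(steps):
        t = i / (steps - 1)  # homotopy parameter: 0 (convex) -> 1 (target)
        f = lambda z: (1 - t) * f_convex(z) + t * f_target(z)
        for _ in range(inner_iters):
            # crude central-difference gradient descent (illustration only)
            g = (f(x + 1e-5) - f(x - 1e-5)) / 2e-5
            x -= lr * g
    return x

# Tilted double well: the global minimum sits in the left basin near x ≈ -1.03,
# and the homotopy path guides the iterate into that basin.
x_star = homotopy_minimize(lambda x: (x**2 - 1) ** 2 + 0.3 * x,
                           lambda x: x**2, x0=0.0)
print(round(x_star, 2))
```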
no code implementations • 26 Jun 2015 • Kazuya Nakagawa, Shinya Suzumura, Masayuki Karasuyama, Koji Tsuda, Ichiro Takeuchi
An SFS rule has the property that, if a feature satisfies the rule, then the feature is guaranteed to be non-active in the LASSO solution, meaning that it can be safely screened out prior to the LASSO training process.
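As a concrete illustration of safe screening, the classic SAFE rule of El Ghaoui et al. works in the same spirit (this is a stand-in, not the SFS rule proposed in the paper): feature $j$ can be discarded whenever $|x_j^\top y| < \lambda - \|x_j\|\,\|y\|\,(\lambda_{\max}-\lambda)/\lambda_{\max}$.

```python
import numpy as np

def safe_screen(X, y, lam):
    """SAFE rule (El Ghaoui et al.): any feature whose score falls below
    the threshold is guaranteed a zero coefficient in the LASSO solution
    min_w 0.5 * ||y - X w||^2 + lam * ||w||_1, so it can be discarded."""
    scores = np.abs(X.T @ y)              # correlation with the response
    lam_max = scores.max()                # smallest lam for which w = 0
    radius = np.linalg.norm(y) * (lam_max - lam) / lam_max
    threshold = lam - np.linalg.norm(X, axis=0) * radius
    return scores >= threshold            # True = keep, False = screened out

# Feature 0 is strongly correlated with y; feature 1 is nearly orthogonal
# and gets screened out for lam close to lam_max.
X = np.array([[1.0, 0.01], [1.0, -0.01], [-1.0, 0.01], [-1.0, -0.01]])
y = np.array([1.0, 1.0, -1.0, -1.0])
print(safe_screen(X, y, lam=3.8))  # [ True False]
```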
no code implementations • 27 Jan 2014 • Kohei Ogawa, Yoshiki Suzuki, Shinya Suzumura, Ichiro Takeuchi
Sparse classifiers such as the support vector machine (SVM) are efficient in the test phase because the classifier is characterized only by a subset of the samples, called support vectors (SVs); the remaining samples (non-SVs) have no influence on the classification result.
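This property can be checked empirically, here using scikit-learn's SVC as a stand-in: refitting on the support vectors alone reproduces the original predictions. The toy data and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data: two well-separated Gaussian blobs.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) - 2, rng.randn(50, 2) + 2])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only the samples near the margin become support vectors...
print(len(clf.support_), "support vectors out of", len(X))

# ...and refitting on the support vectors alone yields the same predictions,
# since the non-SVs have zero dual weight in the original solution.
clf_sv = SVC(kernel="linear", C=1.0).fit(X[clf.support_], y[clf.support_])
print((clf.predict(X) == clf_sv.predict(X)).all())
```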