Search Results for author: Shinya Suzumura

Found 7 papers, 0 papers with code

An Analysis of Selection Bias Issue for Online Advertising

no code implementations • 7 Jun 2022 • Shinya Suzumura, Hitoshi Abe

In online advertising, a set of candidate advertisements can be ranked by an auction system, and usually only the top-1 advertisement is selected and displayed in the advertising space.

Multi-Task Learning • Selection bias
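The selection issue the abstract points at can be seen in a toy simulation (this is only an illustration of top-1 selection bias, not the paper's method): when the ad with the highest noisy score wins each auction, the winning score systematically overestimates the winner's true value.

```python
import numpy as np

# Toy illustration (not the paper's model): top-1 selection by a noisy
# ranking score induces an upward bias on the selected ad ("winner's curse").
rng = np.random.default_rng(0)
n_auctions, n_ads = 10_000, 10

true_value = rng.normal(size=(n_auctions, n_ads))  # latent quality of each ad
noise = rng.normal(size=(n_auctions, n_ads))       # estimation error in the ranker
score = true_value + noise                         # score used for ranking

winner = np.argmax(score, axis=1)                  # top-1 selection per auction
rows = np.arange(n_auctions)
bias = (score[rows, winner] - true_value[rows, winner]).mean()

print(f"mean(score - true value) over selected ads: {bias:.2f}")  # clearly positive
```

Averaged over unselected ads the score error would be zero; conditioning on winning is what creates the bias.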

Selective Inference for Sparse High-Order Interaction Models

no code implementations • ICML 2017 • Shinya Suzumura, Kazuya Nakagawa, Yuta Umezu, Koji Tsuda, Ichiro Takeuchi

Finding statistically significant high-order interactions in predictive modeling is an important but challenging task because the number of possible high-order interactions is extremely large (e.g., $> 10^{17}$).

Drug Response Prediction • feature selection • +1
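The quoted search-space size is easy to sanity-check: with a plausible number of original features (the choice of 1000 features and order 7 here is mine, not from the paper), the count of interaction terms quickly exceeds $10^{17}$, so exhaustive enumeration is hopeless.

```python
from math import comb

# Number of interaction terms of order 1..7 among p = 1000 features.
# (Hypothetical p and order, chosen only to reproduce the >10^17 scale.)
p, max_order = 1000, 7
n_interactions = sum(comb(p, k) for k in range(1, max_order + 1))
print(f"{n_interactions:.3e}")
```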

Safe Pattern Pruning: An Efficient Approach for Predictive Pattern Mining

no code implementations • 15 Feb 2016 • Kazuya Nakagawa, Shinya Suzumura, Masayuki Karasuyama, Koji Tsuda, Ichiro Takeuchi

The SPP method allows us to efficiently find a superset of all the predictive patterns in the database that are needed for the optimal predictive model.

Graph Mining
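The core pruning idea can be sketched on made-up data (a simplified stand-in, not the paper's exact SPP criterion): score a pattern by the absolute sum of residuals over the transactions containing it, and use an upper bound that can only shrink as the pattern grows. Whenever the bound drops below the threshold, the whole subtree of supersets is pruned, provably without missing any high-scoring pattern.

```python
import numpy as np
from itertools import combinations

# Hypothetical data and scoring, for illustration only. For a pattern P with
# indicator z_i(P) = 1 iff transaction i contains all items of P,
#   score(P) = |sum_i r_i z_i(P)|  <=  max(sum of positive r_i, -sum of negative r_i)
# over the supporting transactions, and the bound is anti-monotone in P.
rng = np.random.default_rng(1)
n, m = 40, 8
X = rng.random((n, m)) < 0.5        # binary transaction/item matrix
r = rng.normal(size=n)              # e.g. residuals of a working model

def support(pattern):
    return X[:, list(pattern)].all(axis=1)

def score(pattern):
    return abs(r[support(pattern)].sum())

def bound(pattern):
    rz = r[support(pattern)]
    return max(rz[rz > 0].sum(), -rz[rz < 0].sum(), 0.0)

tau = 2.0
visited, mined = 0, set()

def dfs(pattern, next_item):
    global visited
    visited += 1
    if score(pattern) >= tau:
        mined.add(pattern)
    if bound(pattern) < tau:        # safe pruning: no superset can reach tau
        return
    for j in range(next_item, m):
        dfs(pattern + (j,), j + 1)

for j in range(m):
    dfs((j,), j + 1)

# Brute force over all 2^m - 1 patterns confirms nothing was missed.
all_hits = {p for k in range(1, m + 1)
            for p in combinations(range(m), k) if score(p) >= tau}
print(len(mined), len(all_hits), visited)
```

Since the bound of any superset never exceeds the bound of the pattern where pruning occurred, the mined set is guaranteed to contain every pattern with score at least tau.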

Selective Inference Approach for Statistically Sound Predictive Pattern Mining

no code implementations • 15 Feb 2016 • Shinya Suzumura, Kazuya Nakagawa, Mahito Sugiyama, Koji Tsuda, Ichiro Takeuchi

The main obstacle in this problem is the difficulty of taking the selection bias into account, i.e., the bias arising from the fact that patterns are selected from an extremely large number of candidates in the database.

Selection bias • Two-sample testing
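A small null simulation makes the abstract's obstacle concrete (an illustration of why naive inference after selection fails, not the paper's selective inference procedure): if the most extreme statistic among many candidates is tested with an ordinary p-value, the false positive rate is far above the nominal level.

```python
import numpy as np
from math import erf, sqrt

# Under a global null, test only the most extreme of many candidate
# z-statistics with a naive two-sided p-value. (Toy setup, not the paper's.)
rng = np.random.default_rng(2)
n_trials, n_candidates, alpha = 1000, 100, 0.05

def naive_p(z):
    # two-sided p-value for a standard normal statistic
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

rejections = 0
for _ in range(n_trials):
    z = rng.normal(size=n_candidates)   # all candidates are pure noise
    best = z[np.argmax(np.abs(z))]      # selection step: keep the extreme one
    if naive_p(best) < alpha:
        rejections += 1

rate = rejections / n_trials
print(f"naive false positive rate: {rate:.2f} (nominal {alpha})")
```

Correcting for the selection event (the goal of selective inference) is what restores validity.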

Homotopy Continuation Approaches for Robust SV Classification and Regression

no code implementations • 12 Jul 2015 • Shinya Suzumura, Kohei Ogawa, Masashi Sugiyama, Masayuki Karasuyama, Ichiro Takeuchi

An advantage of our homotopy approach is that it can be interpreted as simulated annealing, a common approach for finding a good local optimum in non-convex optimization problems.

Classification • General Classification • +3
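The annealing interpretation can be sketched on a made-up one-dimensional objective (my own toy function, not the paper's robust SV loss): start from an easy convex problem and gradually morph it into the non-convex target, warm-starting each step at the previous solution. Plain gradient descent on the target can get trapped in a local minimum that the homotopy path avoids.

```python
import numpy as np

def g(x):            # non-convex target: minima near x = -1.30 (global) and 1.13
    return x**4 - 3 * x**2 + x

def grad_f(x, t):    # gradient of f_t(x) = (1 - t) * x^2 + t * g(x)
    return (1 - t) * 2 * x + t * (4 * x**3 - 6 * x + 1)

def descend(x, t, steps=500, lr=0.01):
    for _ in range(steps):
        x -= lr * grad_f(x, t)
    return x

# Direct descent on the target (t = 1) from x = 1.0 ends in the local minimum.
x_direct = descend(1.0, 1.0)

# Homotopy: track the minimizer as t goes 0 -> 1, warm-starting each step.
x_homo = 0.0
for t in np.linspace(0.0, 1.0, 21):
    x_homo = descend(x_homo, t)

print(f"direct:   x = {x_direct:.3f}, g = {g(x_direct):.3f}")
print(f"homotopy: x = {x_homo:.3f}, g = {g(x_homo):.3f}")
```

The gradually sharpened objective plays the role of the temperature schedule in simulated annealing.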

Safe Feature Pruning for Sparse High-Order Interaction Models

no code implementations • 26 Jun 2015 • Kazuya Nakagawa, Shinya Suzumura, Masayuki Karasuyama, Koji Tsuda, Ichiro Takeuchi

An SFS rule has the property that, if a feature satisfies the rule, the feature is guaranteed to be non-active in the LASSO solution, meaning that it can be safely screened out prior to the LASSO training process.

Sparse Learning • Vocal Bursts Intensity Prediction
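The flavor of such a guarantee can be shown with the simplest safe rule I know of, the basic SAFE rule of El Ghaoui et al. for the plain Lasso (not the paper's SFS rule for high-order interactions): a feature $j$ with $|x_j^\top y| < \lambda - \|x_j\|\,\|y\|\,(\lambda_{\max}-\lambda)/\lambda_{\max}$ provably has a zero coefficient at the optimum and can be discarded before training. A tiny coordinate-descent solver on synthetic data confirms it.

```python
import numpy as np

# Synthetic sparse regression problem (all sizes and the 0.8*lam_max choice
# are illustrative assumptions).
rng = np.random.default_rng(3)
n, p = 50, 200
X = rng.normal(size=(n, p))
w_true = np.zeros(p); w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.normal(size=n)

corr = np.abs(X.T @ y)
lam_max = corr.max()           # smallest lambda with all-zero solution
lam = 0.8 * lam_max

# Basic SAFE rule: screened features are provably inactive at this lambda.
thresh = lam - np.linalg.norm(X, axis=0) * np.linalg.norm(y) * (lam_max - lam) / lam_max
screened = corr < thresh

# Coordinate descent for min_w 0.5 * ||y - X w||^2 + lam * ||w||_1.
w = np.zeros(p)
col_sq = (X ** 2).sum(axis=0)
for _ in range(200):
    for j in range(p):
        rho = X[:, j] @ (y - X @ w + X[:, j] * w[j])
        w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]

print(f"screened {screened.sum()} of {p} features; "
      f"max |coef| among screened: {np.abs(w[screened]).max():.1e}")
```

Every screened feature indeed ends up with a zero coefficient, so the screening could have been applied before solving.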

Safe Sample Screening for Support Vector Machines

no code implementations • 27 Jan 2014 • Kohei Ogawa, Yoshiki Suzuki, Shinya Suzumura, Ichiro Takeuchi

Sparse classifiers such as the support vector machine (SVM) are efficient in the test phase because the classifier is characterized only by a subset of the samples called support vectors (SVs); the rest of the samples (non-SVs) have no influence on the classification result.
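This property is easy to verify with scikit-learn (an illustration of the SV characterization, not the paper's screening method): dropping all non-SVs and retraining leaves the predictions unchanged, because samples with zero dual coefficients never enter the decision function.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters; dataset parameters are arbitrary choices.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.5, random_state=0)

clf_full = SVC(kernel="linear", C=1.0).fit(X, y)
sv = clf_full.support_                       # indices of the support vectors

# Retrain using only the support vectors.
clf_sv = SVC(kernel="linear", C=1.0).fit(X[sv], y[sv])

same = bool((clf_full.predict(X) == clf_sv.predict(X)).all())
print(f"{len(sv)} of {len(X)} samples are SVs; predictions identical: {same}")
```

This is exactly what makes sample screening attractive: non-SVs can be discarded before training without changing the resulting classifier.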
