Search Results for author: Shin Matsushima

Found 9 papers, 5 papers with code

Detection of Unobserved Common Causes based on NML Code in Discrete, Mixed, and Continuous Variables

1 code implementation • 11 Mar 2024 • Masatoshi Kobayashi, Kohei Miyaguchi, Shin Matsushima

Causal discovery in the presence of unobserved common causes from observational data only is a crucial but challenging problem.

Causal Discovery • Model Selection

Selective Sampling-based Scalable Sparse Subspace Clustering

1 code implementation • NeurIPS 2019 • Shin Matsushima, Maria Brbic

Sparse subspace clustering (SSC) represents each data point as a sparse linear combination of other data points in the dataset.
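The self-representation idea behind SSC can be sketched as an $\ell_1$-regularized regression of each point on the remaining points. The sketch below is a minimal illustration of that idea using plain ISTA; it is not the selective-sampling algorithm from the paper, and the choices of `lam` and `n_iter` are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ssc_coefficients(X, lam=0.01, n_iter=500):
    """For each column x_i of X (shape d x n), approximately solve
    min_c 0.5*||x_i - X c||^2 + lam*||c||_1 subject to c_i = 0,
    via ISTA. Returns the n x n coefficient matrix C."""
    d, n = X.shape
    C = np.zeros((n, n))
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
    for i in range(n):
        c = np.zeros(n)
        for _ in range(n_iter):
            grad = X.T @ (X @ c - X[:, i])
            c = soft_threshold(c - grad / L, lam / L)
            c[i] = 0.0  # exclude the trivial self-representation
        C[:, i] = c
    return C

# Toy data: two 1-D subspaces in R^2 (the coordinate axes).
X = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 2.0]])
C = ssc_coefficients(X)
```

In this toy example each point is represented only by points from its own subspace, so the nonzero pattern of `C` reveals the cluster structure, which is what SSC's subsequent spectral clustering step exploits.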

Clustering • Representation Learning

Statistical Learnability of Generalized Additive Models based on Total Variation Regularization

no code implementations • 8 Feb 2018 • Shin Matsushima

A generalized additive model (GAM; Hastie and Tibshirani, 1987) is a nonparametric model given by a sum of univariate functions of the explanatory variables, i.e., $f({\mathbf x}) = \sum_j f_j(x_j)$, where $x_j \in \mathbb{R}$ is the $j$-th component of a sample ${\mathbf x} \in \mathbb{R}^p$.
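The additive structure $f({\mathbf x}) = \sum_j f_j(x_j)$ can be made concrete with a minimal sketch; the component functions below are illustrative choices, not the total-variation-regularized estimators studied in the paper.

```python
import numpy as np

def gam_predict(x, components):
    """Evaluate an additive model f(x) = sum_j f_j(x_j) at a point x in R^p.
    `components` is a list of univariate functions, one per coordinate."""
    return sum(f_j(x_j) for f_j, x_j in zip(components, x))

# Example with p = 2 and illustrative components: f(x) = x_1^2 + sin(x_2)
components = [lambda t: t ** 2, np.sin]
x = np.array([2.0, 0.0])
print(gam_predict(x, components))  # 4.0
```

Because each $f_j$ depends on a single coordinate, the model stays interpretable: the effect of each variable can be plotted and regularized separately.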

Additive models • General Classification

Grafting for Combinatorial Boolean Model using Frequent Itemset Mining

1 code implementation • 7 Nov 2017 • Taito Lee, Shin Matsushima, Kenji Yamanishi

To overcome this computational difficulty, we propose an algorithm GRAB (GRAfting for Boolean datasets), which efficiently learns the combinatorial Boolean model (CBM) within the $L_1$-regularized loss minimization framework.

Computational Efficiency

WordRank: Learning Word Embeddings via Robust Ranking

2 code implementations • EMNLP 2016 • Shihao Ji, Hyokun Yun, Pinar Yanardag, Shin Matsushima, S. V. N. Vishwanathan

Then, based on this insight, we propose a novel framework WordRank that efficiently estimates word representations via robust ranking, in which the attention mechanism and robustness to noise are readily achieved via the DCG-like ranking losses.

Learning Word Embeddings • Word Similarity

Totally Corrective Boosting with Cardinality Penalization

no code implementations • 7 Apr 2015 • Vasil S. Denchev, Nan Ding, Shin Matsushima, S. V. N. Vishwanathan, Hartmut Neven

If actual quantum optimization were to be used with this algorithm in the future, we would expect equivalent or superior results at much smaller time and energy costs during training.

Benchmarking • Combinatorial Optimization

Distributed Stochastic Optimization of the Regularized Risk

no code implementations • 17 Jun 2014 • Shin Matsushima, Hyokun Yun, Xinhua Zhang, S. V. N. Vishwanathan

Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task.

Stochastic Optimization
