1 code implementation • 11 Mar 2024 • Masatoshi Kobayashi, Kohei Miyaguchi, Shin Matsushima
Causal discovery from observational data alone, in the presence of unobserved common causes, is a crucial but challenging problem.
no code implementations • 27 Mar 2022 • Toyotaro Suzumura, Akiyoshi Sugiki, Hiroyuki Takizawa, Akira Imakura, Hiroshi Nakamura, Kenjiro Taura, Tomohiro Kudoh, Toshihiro Hanawa, Yuji Sekiya, Hiroki Kobayashi, Shin Matsushima, Yohei Kuga, Ryo Nakamura, Renhe Jiang, Junya Kawase, Masatoshi Hanai, Hiroshi Miyazaki, Tsutomu Ishizaki, Daisuke Shimotoku, Daisuke Miyamoto, Kento Aida, Atsuko Takefusa, Takashi Kurimoto, Koji Sasayama, Naoya Kitagawa, Ikki Fujiwara, Yusuke Tanimura, Takayuki Aoki, Toshio Endo, Satoshi Ohshima, Keiichiro Fukazawa, Susumu Date, Toshihiro Uchibayashi
The growing amount of data and advances in data science have created a need for a new kind of cloud platform that provides users with flexibility, strong security, and the ability to couple with supercomputers and edge devices through high-performance networks.
1 code implementation • NeurIPS 2019 • Shin Matsushima, Maria Brbic
Sparse subspace clustering (SSC) represents each data point as a sparse linear combination of other data points in the dataset.
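The self-expressiveness idea behind SSC can be sketched in a few lines: each data point is regressed on all the other points with an L1 penalty, with its own coefficient forced to zero. This is a minimal ISTA-based sketch, not the paper's algorithm; the function name and parameters are illustrative.

```python
import numpy as np

def ssc_coefficients(X, lam=0.1, n_iter=200):
    """Sketch of sparse self-representation: each column x_i of X (one data
    point) is written as a sparse combination of the other columns, via ISTA
    (proximal gradient) on  min_c 0.5*||x_i - X c||^2 + lam*||c||_1, c_i = 0."""
    d, n = X.shape
    C = np.zeros((n, n))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)  # 1/L, L = sigma_max(X)^2
    for i in range(n):
        c = np.zeros(n)
        for _ in range(n_iter):
            grad = X.T @ (X @ c - X[:, i])
            c = c - step * grad
            c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft threshold
            c[i] = 0.0  # forbid the trivial self-representation
        C[:, i] = c
    return C
```

In full SSC, the resulting coefficient matrix C would then define an affinity graph on which spectral clustering recovers the subspaces.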
no code implementations • 8 Feb 2018 • Shin Matsushima
A generalized additive model (GAM; Hastie and Tibshirani (1987)) is a nonparametric model expressed as a sum of univariate functions, one per explanatory variable, i.e., $f({\mathbf x}) = \sum f_j(x_j)$, where $x_j\in\mathbb{R}$ is the $j$-th component of a sample ${\mathbf x}\in \mathbb{R}^p$.
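The additive structure $f({\mathbf x}) = \sum f_j(x_j)$ can be made concrete with a toy prediction function; the component functions below are purely illustrative, not the ones learned by any particular method.

```python
import numpy as np

# Hypothetical univariate component functions f_j: in an additive model each
# feature contributes through its own one-dimensional curve, and the
# prediction is simply the sum of these contributions.
components = [np.sin, np.square, lambda t: np.log1p(np.abs(t))]

def additive_predict(x):
    """Evaluate f(x) = sum_j f_j(x_j) for one sample x in R^p (p = 3 here)."""
    return sum(f_j(x_j) for f_j, x_j in zip(components, x))
```

Because each $f_j$ depends on a single coordinate, the model remains interpretable: the effect of each variable can be plotted as a curve.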
1 code implementation • 7 Nov 2017 • Taito Lee, Shin Matsushima, Kenji Yamanishi
To overcome this computational difficulty, we propose an algorithm GRAB (GRAfting for Boolean datasets), which efficiently learns CBM within the $L_1$-regularized loss minimization framework.
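The grafting strategy underlying GRAB can be illustrated on ordinary L1-regularized least squares: features stay out of the model until the gradient at zero exceeds the penalty. This is a sketch of the grafting idea only, not the GRAB implementation; all names and parameters are illustrative.

```python
import numpy as np

def grafting_lasso(X, y, lam=0.1, n_iter=100, lr=0.01):
    """Sketch of grafting for L1-regularized squared loss: maintain an active
    set of features, and only 'graft' a feature in when the gradient w.r.t.
    its (currently zero) weight exceeds lam -- otherwise the L1 optimum keeps
    that weight at exactly zero, so it can be skipped."""
    n, p = X.shape
    w = np.zeros(p)
    active = set()
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        # graft in the most violating feature, if it is inactive
        j = int(np.argmax(np.abs(grad)))
        if j not in active and abs(grad[j]) > lam:
            active.add(j)
        # proximal gradient step restricted to the active set
        for k in active:
            w[k] -= lr * grad[k]
            w[k] = np.sign(w[k]) * max(abs(w[k]) - lr * lam, 0.0)
    return w
```

The payoff is that updates touch only the (typically small) active set, which is what makes the approach attractive on large sparse Boolean data.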
1 code implementation • 16 Apr 2016 • Parameswaran Raman, Sriram Srinivasan, Shin Matsushima, Xinhua Zhang, Hyokun Yun, S. V. N. Vishwanathan
Scaling multinomial logistic regression to datasets with a very large number of data points and classes is challenging.
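The source of the difficulty is visible in the objective itself: the log-partition term couples all K classes for every sample. A minimal, numerically stable sketch of the loss (illustrative names; not the paper's distributed algorithm):

```python
import numpy as np

def softmax_nll(W, X, y):
    """Negative log-likelihood of multinomial logistic regression.
    The log-sum-exp normalizer sums over all K classes for every sample,
    which is exactly the term that makes scaling to huge K (and huge n) hard."""
    scores = X @ W                               # (n, K) class scores
    scores = scores - scores.max(axis=1, keepdims=True)  # stabilize exp
    log_z = np.log(np.exp(scores).sum(axis=1))   # log-partition per sample
    return float(np.mean(log_z - scores[np.arange(len(y)), y]))
```

With W = 0 the loss reduces to log K, a handy sanity check when debugging large-scale implementations.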
2 code implementations • EMNLP 2016 • Shihao Ji, Hyokun Yun, Pinar Yanardag, Shin Matsushima, S. V. N. Vishwanathan
Then, based on this insight, we propose a novel framework WordRank that efficiently estimates word representations via robust ranking, in which the attention mechanism and robustness to noise are readily achieved via the DCG-like ranking losses.
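The flavor of a DCG-like ranking loss can be sketched as follows: the rank of the correct word is passed through a concave, slowly growing transform, so the loss concentrates on the top of the ranking (attention) while damping the influence of very large ranks (robustness to noise). This is an illustrative sketch, not WordRank's actual objective.

```python
import numpy as np

def dcg_like_loss(score_pos, scores_neg):
    """DCG-style ranking loss sketch: compute the rank of the positive item
    among the negatives, then apply a concave log transform so that errors
    near the top of the list dominate and outliers are damped."""
    # rank of the positive: how many negatives score at least as high, plus 1
    rank = 1.0 + np.sum(scores_neg >= score_pos)
    return float(np.log2(1.0 + rank))  # concave in the rank, as in DCG
```

A convex surrogate of the rank is what such frameworks actually optimize, since the exact rank is a non-differentiable step function of the scores.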
no code implementations • 7 Apr 2015 • Vasil S. Denchev, Nan Ding, Shin Matsushima, S. V. N. Vishwanathan, Hartmut Neven
If actual quantum optimization were to be used with this algorithm in the future, we would expect equivalent or superior results at much smaller time and energy costs during training.
no code implementations • 17 Jun 2014 • Shin Matsushima, Hyokun Yun, Xinhua Zhang, S. V. N. Vishwanathan
Many machine learning algorithms minimize a regularized risk, and stochastic optimization is widely used for this task.
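The regularized-risk setup can be sketched with plain SGD on a ridge-penalized squared loss; one randomly chosen sample drives each update. This is a generic sketch of the problem class, not the parallel algorithm the paper proposes, and all names and step sizes are illustrative.

```python
import numpy as np

def sgd_regularized_risk(X, y, lam=0.1, lr=0.05, epochs=20, seed=0):
    """Minimal SGD sketch for the regularized risk
        min_w  (lam/2)*||w||^2 + (1/n) * sum_i 0.5*(x_i.w - y_i)^2 ,
    updating w with the gradient of a single random sample at a time."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(epochs):
        for i in rng.permutation(n):  # one pass over shuffled samples
            grad_i = (X[i] @ w - y[i]) * X[i] + lam * w
            w -= lr * grad_i
    return w
```

Each update costs O(p) regardless of n, which is why stochastic optimization is the method of choice for large-scale regularized risk minimization.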