no code implementations • 11 Dec 2022 • Masatsugu Yamada, Mahito Sugiyama
Designing molecular structures with desired chemical properties is an essential task in drug discovery and material design.
no code implementations • 25 May 2022 • Ryuichi Kanoh, Mahito Sugiyama
This kernel leads to the remarkable finding that only the number of leaves at each depth is relevant for the tree architecture in ensemble learning with an infinite number of trees.
1 code implementation • 25 Oct 2021 • Kazu Ghalamkari, Mahito Sugiyama
We propose a fast non-gradient-based method of rank-1 non-negative matrix factorization (NMF) for missing data, called A1GM, that minimizes the KL divergence from an input matrix to the reconstructed rank-1 matrix.
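A1GM's handling of missing entries is the paper's contribution and is not reproduced here, but the complete-data case it builds on has a well-known closed form: the rank-1 non-negative matrix minimizing the KL divergence from an input matrix is the outer product of its row and column sums, normalized by the total sum. A minimal sketch:

```python
import numpy as np

def rank1_kl(P):
    """KL-optimal rank-1 reconstruction of a non-negative matrix.

    For complete data, the rank-1 matrix minimizing KL(P || Q) is the
    outer product of the row and column sums of P divided by its total
    sum (the independence model in information geometry).
    """
    r = P.sum(axis=1)  # row sums
    c = P.sum(axis=0)  # column sums
    return np.outer(r, c) / P.sum()

P = np.random.rand(4, 5)
Q = rank1_kl(P)
assert np.linalg.matrix_rank(Q) == 1
```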
no code implementations • 29 Sep 2021 • Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama
We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in Poisson processes using projections into lower-dimensional space.
no code implementations • ICLR 2022 • Ryuichi Kanoh, Mahito Sugiyama
In practice, tree ensembles are among the most popular models, along with neural networks.
no code implementations • 5 Mar 2021 • Ryuichi Kanoh, Mahito Sugiyama
A multiplicative constant scaling factor is often applied to the model output to adjust the dynamics of neural network parameters.
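A one-line chain-rule calculation (background, not a result of the paper) shows why such a factor interacts with training dynamics: scaling the output by a constant rescales every parameter gradient by the same constant, acting like a change of effective learning rate.

```latex
% Scaling the output by a constant \alpha rescales every parameter
% gradient by the same \alpha (chain rule):
\frac{\partial}{\partial \theta}\,\ell\!\bigl(\alpha f(x;\theta)\bigr)
  \;=\; \alpha\,\ell'\!\bigl(\alpha f(x;\theta)\bigr)\,
        \frac{\partial f(x;\theta)}{\partial \theta}
```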
1 code implementation • NeurIPS 2021 • Kazu Ghalamkari, Mahito Sugiyama
We present an efficient low-rank approximation algorithm for non-negative tensors.
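The paper's full rank-reduction algorithm is not reproduced here; for the rank-1 case, however, the KL-optimal reconstruction has a closed form (the mean-field, or independence, model), a direct generalization of the matrix case above. A hedged sketch:

```python
import numpy as np
from functools import reduce

def rank1_kl_tensor(P):
    """Rank-1 approximation of a non-negative tensor under KL divergence.

    The minimizer is the outer product of the mode marginals of P,
    normalized by the total sum (the mean-field / independence model);
    a sketch, not the paper's full rank-reduction algorithm.
    """
    d = P.ndim
    T = P.sum()
    marginals = [P.sum(axis=tuple(j for j in range(d) if j != k))
                 for k in range(d)]
    return reduce(np.multiply.outer, marginals) / T ** (d - 1)

P = np.random.rand(3, 4, 5)
Q = rank1_kl_tensor(P)
print(Q.shape, np.isclose(Q.sum(), P.sum()))
```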
no code implementations • NeurIPS Workshop DL-IG 2020 • Prasad Cheema, Mahito Sugiyama
The double descent risk phenomenon has received much interest in the machine learning and statistics community.
no code implementations • NeurIPS Workshop DL-IG 2020 • Mahito Sugiyama, Koji Tsuda, Hiroyuki Nakahara
We present a lightweight variant of Boltzmann machines via sample space truncation, called a truncated Boltzmann machine (TBM), which has not been investigated before yet arises naturally from the log-linear model viewpoint.
no code implementations • NeurIPS Workshop DL-IG 2020 • Simon Luo, Sally Cripps, Mahito Sugiyama
We present a novel perspective on deep learning architectures using a partial order structure, which is naturally incorporated into the information geometric formulation of the log-linear model.
no code implementations • NeurIPS Workshop DL-IG 2020 • Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama
Learning of the model is achieved via convex optimization, thanks to the dually flat statistical manifold generated by the log-linear model.
no code implementations • 16 Jun 2020 • Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama
We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in stochastic processes using lower-dimensional projections.
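Schematically (my reading, not the paper's exact notation), the idea parallels generalized additive models: the log intensity of a multivariate process is decomposed into lower-order terms, for example in two dimensions:

```latex
% First-order (additive) approximation of a two-dimensional intensity;
% higher-order interaction terms such as f_{12}(t_1, t_2) are added
% back as needed.
\log \lambda(t_1, t_2) \;\approx\; \theta_\emptyset + f_1(t_1) + f_2(t_2)
```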
1 code implementation • 9 Jun 2020 • Kazu Ghalamkari, Mahito Sugiyama
We propose an efficient matrix rank reduction method for non-negative matrices, whose time complexity is quadratic in the number of rows or columns of a matrix.
no code implementations • 8 Jun 2020 • Prasad Cheema, Mahito Sugiyama
The double-descent risk phenomenon has received growing interest in the machine learning and statistics community, as it challenges the well-understood picture behind the classical U-shaped train-test error curve.
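A minimal illustration (not the paper's method) of the phenomenon, using minimum-norm least squares on random ReLU features: the test error typically peaks near the interpolation threshold, where the number of features equals the number of training samples, and then descends again.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20

# Ground-truth linear teacher with label noise.
w = rng.normal(size=d)
X_tr = rng.normal(size=(n_train, d))
X_te = rng.normal(size=(n_test, d))
y_tr = X_tr @ w + 0.5 * rng.normal(size=n_train)
y_te = X_te @ w

V = rng.normal(size=(d, 800))  # fixed random first layer

for p in (10, 50, 90, 100, 110, 200, 800):
    # Random ReLU features; minimum-norm least squares via pseudoinverse.
    F_tr = np.maximum(X_tr @ V[:, :p], 0)
    F_te = np.maximum(X_te @ V[:, :p], 0)
    beta = np.linalg.pinv(F_tr) @ y_tr
    err = np.mean((F_te @ beta - y_te) ** 2)
    print(f"features={p:4d}  test MSE={err:.2f}")
```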
1 code implementation • 25 Sep 2019 • Simon Luo, Lamiae Azizi, Mahito Sugiyama
We present a novel blind source separation (BSS) method, called information geometric blind source separation (IGBSS).
1 code implementation • 28 Jun 2019 • Simon Luo, Mahito Sugiyama
However, it is well known that increasing the number of parameters also increases the complexity of the model, which leads to a bias-variance trade-off.
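For reference, the classical decomposition behind this trade-off, for squared loss with irreducible noise variance \(\sigma^2\):

```latex
\mathbb{E}\bigl[(y - \hat f(x))^2\bigr]
  = \underbrace{\bigl(\mathbb{E}[\hat f(x)] - f(x)\bigr)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\Bigl[\bigl(\hat f(x) - \mathbb{E}[\hat f(x)]\bigr)^2\Bigr]}_{\text{variance}}
  + \sigma^2
```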
no code implementations • 8 Dec 2018 • Yuka Yoneda, Mahito Sugiyama, Takashi Washio
We present a novel method that can learn a graph representation from multivariate data.
no code implementations • 21 May 2018 • Mahito Sugiyama, Koji Tsuda, Hiroyuki Nakahara
We present transductive Boltzmann machines (TBMs), which are the first to achieve transductive learning of the Gibbs distribution.
1 code implementation • NeurIPS 2018 • Mahito Sugiyama, Hiroyuki Nakahara, Koji Tsuda
We present a novel nonnegative tensor decomposition method, called Legendre decomposition, which factorizes an input tensor into a multiplicative combination of parameters.
no code implementations • ICLR 2018 • Mahito Sugiyama, Koji Tsuda, Hiroyuki Nakahara
We achieve bias-variance decomposition for Boltzmann machines using an information geometric formulation.
no code implementations • 28 Feb 2017 • Mahito Sugiyama, Karsten Borgwardt
The search for higher-order feature interactions that are statistically significantly associated with a class variable is highly relevant in fields such as genetics and healthcare, but the combinatorial explosion of the candidate space makes this problem extremely challenging in terms of computational efficiency and proper correction for multiple testing.
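A hedged sketch of the testability bound (Tarone's trick) that such methods build on; the mining algorithm itself is not reproduced. A pattern with support x has a minimum attainable Fisher exact-test p-value; for supports below the minor-class size, this bound worsens monotonically as support shrinks, so an untestable pattern and all its supersets (which have smaller support) can be pruned. The candidate count K below is hypothetical.

```python
from scipy.stats import hypergeom

def min_pvalue(x, n, n1):
    """Minimum attainable one-sided Fisher p-value for a pattern with
    support x, given n samples of which n1 are in the minor class.
    The most extreme table puts as many occurrences as possible in
    the minor class."""
    a = min(x, n1)  # most extreme achievable cell count
    return hypergeom.pmf(a, n, n1, x)

n, n1, alpha = 1000, 100, 0.05
K = 10000  # hypothetical number of candidate patterns
for x in (1, 2, 5, 10, 50):
    p = min_pvalue(x, n, n1)
    print(f"support={x:3d}  min p={p:.2e}  testable={p <= alpha / K}")
```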
1 code implementation • ICML 2017 • Mahito Sugiyama, Hiroyuki Nakahara, Koji Tsuda
To theoretically prove the correctness of the algorithm, we model tensors as probability distributions in a statistical manifold and realize tensor balancing as projection onto a submanifold.
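The tensor case is the paper's contribution; for intuition, here is the classical matrix-balancing iteration (Sinkhorn-Knopp) that it generalizes, which alternately rescales rows and columns until both sets of marginals are uniform:

```python
import numpy as np

def sinkhorn_balance(A, n_iter=500):
    """Classical Sinkhorn-Knopp matrix balancing: rescale rows and
    columns of a positive square matrix until every row and column
    sums to one (doubly stochastic)."""
    A = A.copy()
    for _ in range(n_iter):
        A /= A.sum(axis=1, keepdims=True)  # normalize rows
        A /= A.sum(axis=0, keepdims=True)  # normalize columns
    return A

A = np.random.rand(5, 5) + 0.1
B = sinkhorn_balance(A)
print(B.sum(axis=0), B.sum(axis=1))  # both close to all-ones
```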
no code implementations • 15 Feb 2016 • Shinya Suzumura, Kazuya Nakagawa, Mahito Sugiyama, Koji Tsuda, Ichiro Takeuchi
The main obstacle in this problem is the difficulty of accounting for the selection bias, i.e., the bias arising from the fact that patterns are selected from an extremely large number of candidates in databases.
no code implementations • NeurIPS 2015 • Mahito Sugiyama, Karsten Borgwardt
Random walk kernels measure graph similarity by counting matching walks in two graphs.
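A minimal sketch of the standard geometric random walk kernel (the construction the paper analyzes, not its contribution): walks in the direct product graph, whose adjacency matrix is the Kronecker product of the two input adjacency matrices, correspond exactly to matching walks in the two graphs, and the down-weighted count over all lengths has a closed form.

```python
import numpy as np

def geometric_rw_kernel(A1, A2, lam=0.01):
    """Geometric random walk kernel for unlabeled graphs.

    Walks in the direct product graph (adjacency = Kronecker product)
    correspond exactly to matching walks in the two input graphs; the
    geometric series sum_k lam^k W^k is evaluated in closed form.
    Requires lam < 1 / spectral_radius(W) for convergence.
    """
    W = np.kron(A1, A2)                      # direct product graph
    n = W.shape[0]
    M = np.linalg.inv(np.eye(n) - lam * W)   # sum of lam^k W^k
    return M.sum()

# Two small example graphs: a triangle and a path.
A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
print(geometric_rw_kernel(A1, A2))
```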
no code implementations • 15 Feb 2015 • Felipe Llinares López, Mahito Sugiyama, Laetitia Papaxanthos, Karsten M. Borgwardt
Westfall-Young light opens the door to significant pattern mining on large datasets for which previous methods incurred prohibitive runtime or memory costs.
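For context, a plain sketch of the Westfall-Young permutation procedure that the method accelerates (none of the paper's speed-ups appear here): permute the class labels, record the minimum p-value over all hypotheses for each permutation, and take the alpha-quantile of these minima as the corrected significance threshold.

```python
import numpy as np
from scipy.stats import fisher_exact

def westfall_young_threshold(X, y, n_perm=100, alpha=0.05, seed=0):
    """Naive Westfall-Young permutation testing.

    X: binary feature matrix (n_samples, n_features); y: binary labels.
    Returns the permutation-based corrected significance threshold.
    """
    rng = np.random.default_rng(seed)
    Xb = X.astype(bool)
    min_pvals = []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        pvals = []
        for j in range(Xb.shape[1]):
            x = Xb[:, j]
            a = int(np.sum(x & (y_perm == 1)))
            b = int(np.sum(x & (y_perm == 0)))
            c = int(np.sum(~x & (y_perm == 1)))
            d = int(np.sum(~x & (y_perm == 0)))
            pvals.append(fisher_exact([[a, b], [c, d]])[1])
        min_pvals.append(min(pvals))  # most extreme p per permutation
    return np.quantile(min_pvals, alpha)

X = np.random.default_rng(1).integers(0, 2, size=(60, 8))
y = np.random.default_rng(2).integers(0, 2, size=60)
print(westfall_young_threshold(X, y, n_perm=50))
```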
no code implementations • 4 Jul 2014 • Felipe Llinares, Mahito Sugiyama, Karsten M. Borgwardt
Finding statistically significant interactions between binary variables is computationally and statistically challenging in high-dimensional settings, due to the combinatorial explosion in the number of hypotheses.
no code implementations • 1 Jul 2014 • Mahito Sugiyama, Felipe Llinares López, Niklas Kasenburg, Karsten M. Borgwardt
An open question, however, is whether this strategy of excluding untestable hypotheses also leads to greater statistical power in subgraph mining, in which the number of hypotheses is much larger than in itemset mining.
no code implementations • NeurIPS 2013 • Mahito Sugiyama, Karsten Borgwardt
Distance-based approaches to outlier detection are popular in data mining, as they do not require modeling the underlying probability distribution, which is particularly challenging for high-dimensional data.
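A hedged sketch of the sampling-based scoring scheme studied in this line of work, as I read it: each point is scored by its distance to the nearest point in a single small random sample; the parameter choices below are illustrative, not the paper's.

```python
import numpy as np

def sample_outlier_scores(X, sample_size=20, seed=0):
    """Distance-based outlier scores via one small random sample:
    each point is scored by its distance to the nearest sampled point
    (a sketch; parameter choices are illustrative)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=sample_size, replace=False)
    S = X[idx]
    # Pairwise distances from every point to the sample.
    D = np.linalg.norm(X[:, None, :] - S[None, :, :], axis=2)
    D[idx, np.arange(sample_size)] = np.inf  # ignore self-distances
    return D.min(axis=1)

X = np.vstack([np.random.default_rng(1).normal(size=(200, 2)),
               [[8.0, 8.0]]])  # one obvious planted outlier
scores = sample_outlier_scores(X)
print(np.argmax(scores))  # index 200: the planted outlier
```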
no code implementations • 10 Nov 2012 • Chloé-Agathe Azencott, Dominik Grimm, Mahito Sugiyama, Yoshinobu Kawahara, Karsten M. Borgwardt
We present SConES, a new efficient method to discover sets of genetic loci that are maximally associated with a phenotype, while being connected in an underlying network.
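Schematically (notation illustrative, not lifted from the paper), the selection trades off per-locus association scores against sparsity and connectivity on the network, an objective of the form below over binary indicator vectors, which can be optimized exactly via a graph min-cut:

```latex
% c_i = association score of locus i, L = Laplacian of the network,
% f \in \{0,1\}^p indicates the selected loci; f^\top L f counts
% edges cut between selected and unselected loci.
\max_{f \in \{0,1\}^p} \; c^{\top} f \;-\; \eta\,\|f\|_0 \;-\; \lambda\, f^{\top} L f
```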