Search Results for author: Mahito Sugiyama

Found 30 papers, 7 papers with code

StiefelGen: A Simple, Model Agnostic Approach for Time Series Data Augmentation over Riemannian Manifolds

no code implementations29 Feb 2024 Prasad Cheema, Mahito Sugiyama

However, for many practical industrial problems involving time series data: (i) one usually does not have access to a robust physical model, and (ii) the addition of noise can in and of itself require strong or difficult assumptions (for example, what probability distribution should be used?).

Data Augmentation Time Series

Molecular Graph Generation by Decomposition and Reassembling

no code implementations11 Dec 2022 Masatsugu Yamada, Mahito Sugiyama

Designing molecular structures with desired chemical properties is an essential task in drug discovery and material design.

Drug Discovery Graph Generation +2

Analyzing Tree Architectures in Ensembles via Neural Tangent Kernel

no code implementations25 May 2022 Ryuichi Kanoh, Mahito Sugiyama

This kernel leads to the remarkable finding that only the number of leaves at each depth is relevant for the tree architecture in ensemble learning with an infinite number of trees.

Ensemble Learning

Fast Rank-1 NMF for Missing Data with KL Divergence

1 code implementation25 Oct 2021 Kazu Ghalamkari, Mahito Sugiyama

We propose a fast non-gradient-based method of rank-1 non-negative matrix factorization (NMF) for missing data, called A1GM, that minimizes the KL divergence from an input matrix to the reconstructed rank-1 matrix.

Matrix Factorization / Decomposition
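As a point of reference for the fully observed case (A1GM's contribution is handling missing entries, which this sketch does not cover), the rank-1 reconstruction minimizing the KL divergence from a non-negative matrix has a well-known closed form: the outer product of the row and column sums divided by the grand total.

```python
import numpy as np

def rank1_kl(X):
    """Closed-form rank-1 KL-minimizing reconstruction of a fully
    observed non-negative matrix: outer product of the row sums and
    column sums, divided by the grand total."""
    r = X.sum(axis=1)  # row sums
    c = X.sum(axis=0)  # column sums
    return np.outer(r, c) / X.sum()

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
X1 = rank1_kl(X)  # rank-1 matrix with the same row and column sums as X
```

The reconstruction preserves all row and column sums of the input, which is what makes the KL-optimal solution cheap to compute compared with gradient-based NMF updates.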

Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Poisson Processes

no code implementations29 Sep 2021 Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama

We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in Poisson processes using projections into lower-dimensional space.

Additive models

A Neural Tangent Kernel Perspective of Infinite Tree Ensembles

no code implementations ICLR 2022 Ryuichi Kanoh, Mahito Sugiyama

In practical situations, tree ensembles are among the most popular models, along with neural networks.

Unintended Effects on Adaptive Learning Rate for Training Neural Network with Output Scale Change

no code implementations5 Mar 2021 Ryuichi Kanoh, Mahito Sugiyama

A multiplicative constant scaling factor is often applied to the model output to adjust the dynamics of neural network parameters.
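A toy illustration (not the paper's analysis) of why such a scale factor matters for optimization: under a squared loss, the gradient magnitude grows with the output scale, which adaptive learning rates such as Adam then renormalize, changing the effective parameter dynamics.

```python
# For the hypothetical one-parameter model f(x) = alpha * w * x with
# squared loss L(w) = 0.5 * (alpha * w * x - y)**2, the gradient is
#   dL/dw = alpha * x * (alpha * w * x - y),
# so the output scale alpha directly rescales gradient magnitudes.
def grad(w, x, y, alpha):
    return alpha * x * (alpha * w * x - y)

g1 = grad(0.0, 1.0, 1.0, alpha=1.0)    # gradient with no output scaling
g10 = grad(0.0, 1.0, 1.0, alpha=10.0)  # ten times larger at the same w
```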

The Volume of Non-Restricted Boltzmann Machines and Their Double Descent Model Complexity

no code implementations NeurIPS Workshop DL-IG 2020 Prasad Cheema, Mahito Sugiyama

The double descent risk phenomenon has received much interest in the machine learning and statistics community.

Learning Joint Intensity in a Multivariate Poisson Process on Statistical Manifolds

no code implementations NeurIPS Workshop DL-IG 2020 Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama

Learning of the model is achieved via convex optimization, thanks to the dually flat statistical manifold generated by the log-linear model.

Additive models

A Deep Architecture for Log-Linear Models

no code implementations NeurIPS Workshop DL-IG 2020 Simon Luo, Sally Cripps, Mahito Sugiyama

We present a novel perspective on deep learning architectures using a partial order structure, which is naturally incorporated into the information geometric formulation of the log-linear model.

Sample Space Truncation on Boltzmann Machines

no code implementations NeurIPS Workshop DL-IG 2020 Mahito Sugiyama, Koji Tsuda, Hiroyuki Nakahara

We present a lightweight variant of Boltzmann machines via sample space truncation, called a truncated Boltzmann machine (TBM), which has not been investigated before but can be naturally introduced from the log-linear model viewpoint.

Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Stochastic Processes

no code implementations16 Jun 2020 Simon Luo, Feng Zhou, Lamiae Azizi, Mahito Sugiyama

We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in stochastic processes using lower-dimensional projections.

Additive models

Fast Rank Reduction for Non-negative Matrices via Mean Field Theory

1 code implementation9 Jun 2020 Kazu Ghalamkari, Mahito Sugiyama

We propose an efficient matrix rank reduction method for non-negative matrices, whose time complexity is quadratic in the number of rows or columns of a matrix.

Matrix Factorization / Decomposition

Double Descent Risk and Volume Saturation Effects: A Geometric Perspective

no code implementations8 Jun 2020 Prasad Cheema, Mahito Sugiyama

The appearance of the double-descent risk phenomenon has received growing interest in the machine learning and statistics community, as it challenges well-understood notions behind the U-shaped train-test curves.

Model Selection regression

Hierarchical Probabilistic Model for Blind Source Separation via Legendre Transformation

1 code implementation25 Sep 2019 Simon Luo, Lamiae Azizi, Mahito Sugiyama

We present a novel blind source separation (BSS) method, called information geometric blind source separation (IGBSS).

blind source separation Time Series +1

Bias-Variance Trade-Off in Hierarchical Probabilistic Models Using Higher-Order Feature Interactions

1 code implementation28 Jun 2019 Simon Luo, Mahito Sugiyama

However, it is well known that increasing the number of parameters also increases the complexity of the model which leads to a bias-variance trade-off.

Learning Graph Representation via Formal Concept Analysis

no code implementations8 Dec 2018 Yuka Yoneda, Mahito Sugiyama, Takashi Washio

We present a novel method that can learn a graph representation from multivariate data.

Transductive Boltzmann Machines

no code implementations21 May 2018 Mahito Sugiyama, Koji Tsuda, Hiroyuki Nakahara

We present transductive Boltzmann machines (TBMs), which are the first to achieve transductive learning of the Gibbs distribution.

Transductive Learning

Legendre Decomposition for Tensors

1 code implementation NeurIPS 2018 Mahito Sugiyama, Hiroyuki Nakahara, Koji Tsuda

We present a novel nonnegative tensor decomposition method, called Legendre decomposition, which factorizes an input tensor into a multiplicative combination of parameters.

Tensor Decomposition

Bias-Variance Decomposition for Boltzmann Machines

no code implementations ICLR 2018 Mahito Sugiyama, Koji Tsuda, Hiroyuki Nakahara

We achieve bias-variance decomposition for Boltzmann machines using an information geometric formulation.

Finding Statistically Significant Interactions between Continuous Features

no code implementations28 Feb 2017 Mahito Sugiyama, Karsten Borgwardt

The search for higher-order feature interactions that are statistically significantly associated with a class variable is of high relevance in fields such as genetics or healthcare, but the combinatorial explosion of the candidate space makes this problem extremely challenging in terms of computational efficiency and proper correction for multiple testing.

Computational Efficiency

Tensor Balancing on Statistical Manifold

1 code implementation ICML 2017 Mahito Sugiyama, Hiroyuki Nakahara, Koji Tsuda

To theoretically prove the correctness of the algorithm, we model tensors as probability distributions in a statistical manifold and realize tensor balancing as projection onto a submanifold.
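Tensor balancing generalizes classical matrix balancing. As a reference for the order-2 case only, the well-known Sinkhorn-Knopp iteration sketched below alternately normalizes rows and columns of a positive matrix until it is approximately doubly stochastic; the paper's Newton-type algorithm on the statistical manifold is a different, faster-converging approach.

```python
import numpy as np

def sinkhorn_balance(A, iters=1000, tol=1e-10):
    """Sinkhorn-Knopp iteration: alternately normalize rows and columns
    of a positive matrix until it is approximately doubly stochastic
    (all row sums and column sums equal to 1)."""
    P = A.astype(float)
    for _ in range(iters):
        P = P / P.sum(axis=1, keepdims=True)  # make row sums 1
        P = P / P.sum(axis=0, keepdims=True)  # make column sums 1
        if np.abs(P.sum(axis=1) - 1).max() < tol:
            break
    return P

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
P = sinkhorn_balance(A)  # doubly stochastic rescaling of A
```

For strictly positive matrices this iteration is guaranteed to converge, but only linearly, which is the limitation that Newton-type balancing methods address.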

Selective Inference Approach for Statistically Sound Predictive Pattern Mining

no code implementations15 Feb 2016 Shinya Suzumura, Kazuya Nakagawa, Mahito Sugiyama, Koji Tsuda, Ichiro Takeuchi

The main obstacle of this problem is in the difficulty of taking into account the selection bias, i.e., the bias arising from the fact that patterns are selected from an extremely large number of candidates in databases.

Selection bias Two-sample testing

Fast and Memory-Efficient Significant Pattern Mining via Permutation Testing

no code implementations15 Feb 2015 Felipe Llinares López, Mahito Sugiyama, Laetitia Papaxanthos, Karsten M. Borgwardt

Westfall-Young light opens the door to significant pattern mining on large datasets that previously led to prohibitive runtime or memory costs.

Two-sample testing

Identifying Higher-order Combinations of Binary Features

no code implementations4 Jul 2014 Felipe Llinares, Mahito Sugiyama, Karsten M. Borgwardt

Finding statistically significant interactions between binary variables is computationally and statistically challenging in high-dimensional settings, due to the combinatorial explosion in the number of hypotheses.

Significant Subgraph Mining with Multiple Testing Correction

no code implementations1 Jul 2014 Mahito Sugiyama, Felipe Llinares López, Niklas Kasenburg, Karsten M. Borgwardt

An open question, however, is whether this strategy of excluding untestable hypotheses also leads to greater statistical power in subgraph mining, in which the number of hypotheses is much larger than in itemset mining.

Open-Ended Question Answering Two-sample testing

Rapid Distance-Based Outlier Detection via Sampling

no code implementations NeurIPS 2013 Mahito Sugiyama, Karsten Borgwardt

Distance-based approaches to outlier detection are popular in data mining, as they do not require modeling the underlying probability distribution, which is particularly challenging for high-dimensional data.

Outlier Detection
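A minimal sketch of sampling-based distance scoring in the spirit of this approach, assuming Euclidean distance and a hypothetical sample size `s`: each point is scored by its distance to the nearest point in one small random sample, so no density model is fit and only O(n·s) distances are computed.

```python
import numpy as np

def sample_outlier_scores(X, s=20, seed=0):
    """Score each point by its distance to the nearest point within one
    small random sample of the data; larger score = more outlying."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=min(s, len(X)), replace=False)
    S = X[idx]
    # distances from every point to every sampled point: shape (n, s)
    d = np.linalg.norm(X[:, None, :] - S[None, :, :], axis=2)
    # a sampled point should not match itself at distance zero
    d[np.arange(len(X))[:, None] == idx[None, :]] = np.inf
    return d.min(axis=1)

# toy demo with hypothetical data: a Gaussian cluster plus one planted outlier
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, size=(100, 2)), [[10.0, 10.0]]])
scores = sample_outlier_scores(X, s=20, seed=0)
```

Because the sample is small and drawn once, the method avoids both the quadratic cost of all-pairs nearest-neighbor search and any distributional assumptions.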

Efficient network-guided multi-locus association mapping with graph cuts

no code implementations10 Nov 2012 Chloé-Agathe Azencott, Dominik Grimm, Mahito Sugiyama, Yoshinobu Kawahara, Karsten M. Borgwardt

We present SConES, a new efficient method to discover sets of genetic loci that are maximally associated with a phenotype, while being connected in an underlying network.
