Search Results for author: Hiroshi Mamitsuka

Found 14 papers, 4 papers with code

Wasserstein Gradient Flow over Variational Parameter Space for Variational Inference

no code implementations25 Oct 2023 Dai Hai Nguyen, Tetsuya Sakurai, Hiroshi Mamitsuka

Notably, two standard optimization techniques, black-box VI and natural-gradient VI, can be reinterpreted as specific instances of the proposed Wasserstein gradient descent.

Variational Inference

Central-Smoothing Hypergraph Neural Networks for Predicting Drug-Drug Interactions

no code implementations15 Dec 2021 Duc Anh Nguyen, Canh Hao Nguyen, Hiroshi Mamitsuka

This problem can be formulated as predicting labels (i.e., side effects) for each pair of nodes in a DDI graph, in which nodes are drugs and edges connect pairs of interacting drugs with known labels.

Learning subtree pattern importance for Weisfeiler-Lehman based graph kernels

1 code implementation8 Jun 2021 Dai Hai Nguyen, Canh Hao Nguyen, Hiroshi Mamitsuka

Graphs are a common representation of relational data, which are ubiquitous in many domains such as molecules and biological and social networks.
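Weisfeiler-Lehman kernels count subtree patterns produced by iteratively relabeling each node with its neighbours' labels; the paper learns importance weights over those patterns. A minimal sketch of the relabeling step, using dict-based graphs and an illustrative string encoding in place of a compressed hash:

```python
from collections import defaultdict

def wl_relabel(adjacency, labels, iterations=2):
    """Weisfeiler-Lehman relabeling sketch: at each iteration a node's new
    label combines its own label with the sorted multiset of its
    neighbours' labels; feature counts over all labels seen form the
    subtree-pattern representation a WL kernel compares."""
    # adjacency: dict node -> list of neighbour nodes
    # labels: dict node -> initial (e.g. atom-type) label
    features = defaultdict(int)
    for lab in labels.values():
        features[lab] += 1
    for _ in range(iterations):
        new_labels = {}
        for node in adjacency:
            neigh = sorted(labels[n] for n in adjacency[node])
            new_labels[node] = f"{labels[node]}|{','.join(map(str, neigh))}"
        labels = new_labels
        for lab in labels.values():
            features[lab] += 1
    return labels, dict(features)
```

The base WL kernel treats every pattern count equally; learning per-pattern importance, as the paper proposes, amounts to reweighting the entries of `features` before comparing graphs.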

Graph Classification

DIVERSE: Bayesian Data IntegratiVE learning for precise drug ResponSE prediction

no code implementations31 Mar 2021 Betül Güvenç Paltun, Samuel Kaski, Hiroshi Mamitsuka

More specifically, we sequentially integrate five different data sets, which have not all been combined in earlier bioinformatic methods for predicting drug responses.

Drug Response Prediction

Scalable Probabilistic Matrix Factorization with Graph-Based Priors

1 code implementation25 Aug 2019 Jonathan Strahl, Jaakko Peltonen, Hiroshi Mamitsuka, Samuel Kaski

The identification and removal of contested edges adds no computational complexity to state-of-the-art graph-regularized matrix factorization, remaining linear with respect to the number of non-zeros.
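The graph prior referred to here is commonly the Laplacian smoothness penalty tr(UᵀLU) added to the matrix-factorization loss. A sketch of that penalty (the paper's contribution, identifying and removing contested edges from W before the penalty is applied, is not shown):

```python
import numpy as np

def graph_reg_penalty(U, W):
    """Graph-prior penalty used in graph-regularized matrix factorization:
    tr(U^T L U) with L = D - W the unnormalized graph Laplacian.
    Equivalently 0.5 * sum_ij W_ij * ||u_i - u_j||^2, so latent rows of
    U connected by heavy edges are pulled together."""
    L = np.diag(W.sum(axis=1)) - W
    return float(np.trace(U.T @ L @ U))
```

A "contested" edge is one where the data disagree with the prior; pruning such edges from W means the penalty no longer forces dissimilar rows together, at no extra asymptotic cost.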

 Ranked #1 on Recommendation Systems on YahooMusic (using extra training data)

Matrix Completion Recommendation Systems

AttentionXML: Label Tree-based Attention-Aware Deep Model for High-Performance Extreme Multi-Label Text Classification

3 code implementations NeurIPS 2019 Ronghui You, Zihan Zhang, Ziye Wang, Suyang Dai, Hiroshi Mamitsuka, Shanfeng Zhu

We propose a new label tree-based deep learning model for XMTC, called AttentionXML, with two unique features: 1) a multi-label attention mechanism with raw text as input, which captures the most relevant parts of the text for each label; and 2) a shallow and wide probabilistic label tree (PLT), which makes it possible to handle millions of labels, especially "tail labels".
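The multi-label attention in 1) can be sketched as one attention pooling per label over the token representations; the names `H` (token states) and `A` (per-label attention query vectors) are illustrative, not the paper's code:

```python
import numpy as np

def label_attention(H, A):
    """Per-label attention pooling: H is (seq_len, hidden) token states,
    A is (num_labels, hidden) label query vectors. Each label softmaxes
    its scores over the tokens and gets its own pooled representation,
    so different labels can attend to different parts of the text."""
    scores = A @ H.T                                     # (num_labels, seq_len)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # softmax over tokens
    return weights @ H                                   # (num_labels, hidden)
```

With millions of labels this per-label pooling is only affordable for the candidate labels proposed by the PLT, which is why the two components are paired.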

General Classification Multi-Label Text Classification +3

Learning on Hypergraphs with Sparsity

no code implementations3 Apr 2018 Canh Hao Nguyen, Hiroshi Mamitsuka

On a hypergraph, a generalization of a graph, one wishes to learn a function that is smooth with respect to its topology.
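One common way to make "smooth with respect to its topology" concrete is a total-variation-style penalty per hyperedge; the following is a generic sketch, not necessarily the exact sparse functional the paper proposes:

```python
def hypergraph_smoothness(f, hyperedges, weights=None):
    """Total-variation-style smoothness on a hypergraph: each hyperedge
    contributes the squared spread of f over its nodes,
    max_{u,v in e} (f[u] - f[v])**2, optionally weighted per edge.
    Small values mean f is nearly constant on every hyperedge."""
    weights = weights or [1.0] * len(hyperedges)
    total = 0.0
    for w, edge in zip(weights, hyperedges):
        vals = [f[v] for v in edge]
        total += w * (max(vals) - min(vals)) ** 2
    return total
```

Minimizing a loss plus this penalty drives f toward agreement within each hyperedge, which is the sense of "smooth" used above.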

Sparse Learning

Convex Coupled Matrix and Tensor Completion

no code implementations15 May 2017 Kishan Wimalawarne, Makoto Yamada, Hiroshi Mamitsuka

We propose a set of convex low-rank-inducing norms for coupled matrices and tensors (hereafter, coupled tensors), which share information between matrices and tensors through common modes.

Convex Factorization Machine for Regression

1 code implementation4 Jul 2015 Makoto Yamada, Wenzhao Lian, Amit Goyal, Jianhui Chen, Kishan Wimalawarne, Suleiman A. Khan, Samuel Kaski, Hiroshi Mamitsuka, Yi Chang

We propose the convex factorization machine (CFM), which is a convex variant of the widely used Factorization Machines (FMs).
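An FM scores a feature vector with a bias, a linear term, and pairwise interaction terms; in a standard FM the interaction weight is the low-rank inner product ⟨v_i, v_j⟩, while the convex variant uses entries of a full matrix Z learned under a nuclear-norm penalty. A scoring sketch with illustrative parameter names:

```python
import numpy as np

def cfm_predict(x, w0, w, Z):
    """Convex FM score: bias + linear term + pairwise interactions.
    Z is a full symmetric interaction matrix (trained with a
    nuclear-norm penalty in the CFM) rather than the rank-k
    factorization V V^T of a standard FM."""
    pairwise = sum(Z[i, j] * x[i] * x[j]
                   for i in range(len(x)) for j in range(i + 1, len(x)))
    return w0 + float(np.dot(w, x)) + pairwise
```

Because the objective is convex in (w0, w, Z), training has no dependence on initialization, unlike the non-convex factorized FM.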

regression

Sparse Learning over Infinite Subgraph Features

no code implementations20 Mar 2014 Ichigaku Takigawa, Hiroshi Mamitsuka

We present a supervised-learning algorithm from graph data (a set of graphs) for arbitrary twice-differentiable loss functions and sparse linear models over all possible subgraph features.

Sparse Learning

Manifold-based Similarity Adaptation for Label Propagation

no code implementations NeurIPS 2013 Masayuki Karasuyama, Hiroshi Mamitsuka

In this approach, edge weights represent both similarity and local reconstruction weight simultaneously, both being reasonable for label propagation.
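For reference, once the edge weights are fixed, standard label propagation (Zhou et al.-style) iterates a normalized diffusion anchored to the labelled seeds; the paper's contribution is learning the weights W themselves, which this sketch simply takes as given:

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, iters=100):
    """Label propagation on a similarity graph. W: (n, n) nonnegative
    edge weights with positive degrees; Y: (n, c) one-hot rows for
    labelled nodes, zero rows for unlabelled ones. Iterates
    F <- alpha * S @ F + (1 - alpha) * Y and returns predicted classes."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))          # D^{-1/2} W D^{-1/2}
    F = Y.astype(float)
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y  # diffuse, stay anchored to seeds
    return F.argmax(axis=1)
```

With manifold-adapted weights, W doubles as a local reconstruction operator, so the same iteration both smooths labels and respects the data manifold.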
