Search Results for author: Takanori Maehara

Found 28 papers, 10 papers with code

Self-Supervised Pretraining for Heterogeneous Hypergraph Neural Networks

no code implementations19 Nov 2023 Abdalgader Abubaker, Takanori Maehara, Madhav Nimishakavi, Vassilis Plachouras

SPHH consists of two self-supervised pretraining tasks that aim to simultaneously learn both local and global representations of the entities in the hypergraph, using informative representations derived from the hypergraph structure.

Link Prediction Node Classification

Abelian Neural Networks

no code implementations24 Feb 2021 Kenshin Abe, Takanori Maehara, Issei Sato

We study the problem of modeling a binary operation that satisfies some algebraic requirements.

Word Embeddings

Adaptive Stacked Graph Filter

no code implementations1 Jan 2021 Hoang NT, Takanori Maehara, Tsuyoshi Murata

We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing the difference between learning graph filters with fully connected weights and learning them with trainable polynomial coefficients.

Classification General Classification

Stacked Graph Filter

1 code implementation22 Nov 2020 Hoang NT, Takanori Maehara, Tsuyoshi Murata

We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing the difference between learning graph filters with fully connected weights and learning them with trainable polynomial coefficients.

Classification General Classification
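The entry above contrasts filters with free weights against filters parameterized by polynomial coefficients. A minimal sketch of the latter idea, assuming a symmetrically normalized adjacency with self-loops (the function name and toy graph are illustrative, not the paper's exact model):

```python
import numpy as np

# Hypothetical illustration: a graph filter parameterized by polynomial
# coefficients applied to node features, i.e. sum_k coeffs[k] * A_hat^k @ X.
def polynomial_graph_filter(adj, features, coeffs):
    """Apply a polynomial of the normalized adjacency to the features."""
    n = adj.shape[0]
    a = adj + np.eye(n)                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt  # symmetric normalization
    out = np.zeros_like(features, dtype=float)
    power = features.astype(float)
    for c in coeffs:
        out += c * power
        power = a_hat @ power            # next power of A_hat
    return out

# Toy graph: a path on 3 nodes with scalar features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.array([[1.0], [0.0], [0.0]])
filtered = polynomial_graph_filter(adj, x, coeffs=[0.5, 0.5])
```

With only a handful of coefficients, the filter is far cheaper to learn than a fully connected weight matrix over all node pairs.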

Graph Homomorphism Convolution

1 code implementation ICML 2020 Hoang NT, Takanori Maehara

In this paper, we study the graph classification problem from the graph homomorphism perspective.

General Classification Graph Classification
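Homomorphism counts of small pattern graphs can serve directly as graph-level features. A sketch in the spirit of the entry above, using standard identities for walk and triangle homomorphism counts (the feature construction is an assumption, not the paper's exact method):

```python
import numpy as np

def hom_count_features(adj, max_len=3):
    """Homomorphism counts of simple patterns into a graph, from powers of
    the adjacency matrix A:
      - hom(path with k edges, G) = 1^T A^k 1  (number of length-k walks)
      - hom(triangle, G)          = trace(A^3)
    Collected into a graph-level feature vector."""
    a = adj.astype(float)
    ones = np.ones(a.shape[0])
    feats = []
    p = a.copy()
    for _ in range(max_len):
        feats.append(ones @ p @ ones)   # walk counts = path homomorphisms
        p = p @ a
    feats.append(np.trace(np.linalg.matrix_power(a, 3)))  # triangle homs
    return np.array(feats)

# Triangle graph K3: an edge has 6 homomorphisms (2|E|), a triangle also 6.
k3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
f = hom_count_features(k3)
```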

Tightly Robust Optimization via Empirical Domain Reduction

no code implementations29 Feb 2020 Akihiro Yabe, Takanori Maehara

Data-driven decision-making is performed by solving a parameterized optimization problem, and the optimal decision is given by an optimal solution under the unknown true parameters.

Decision Making

Learning Directly from Grammar Compressed Text

1 code implementation28 Feb 2020 Yoichi Sasaki, Kosuke Akimoto, Takanori Maehara

Neural networks trained on large amounts of text data have been successfully applied to a variety of tasks.

Computational Efficiency

A Simple Proof of the Universality of Invariant/Equivariant Graph Neural Networks

no code implementations9 Oct 2019 Takanori Maehara, Hoang NT

We present a simple proof for the universality of invariant and equivariant tensorized graph neural networks.

Frequency Analysis for Graph Convolution Network

no code implementations25 Sep 2019 Hoang NT, Takanori Maehara

In this work, we develop quantitative results on the learnability of a two-layer Graph Convolutional Network (GCN).

Empirical Hypothesis Space Reduction

no code implementations4 Sep 2019 Akihiro Yabe, Takanori Maehara

Selecting appropriate regularization coefficients is critical to the performance of regularized empirical risk minimization.

Data Cleansing for Models Trained with SGD

1 code implementation NeurIPS 2019 Satoshi Hara, Atsushi Nitanda, Takanori Maehara

Data cleansing is a typical approach to improving the accuracy of machine learning models; however, it requires extensive domain knowledge to identify the influential instances that affect the models.

BIG-bench Machine Learning

Revisiting Graph Neural Networks: All We Have is Low-Pass Filters

2 code implementations23 May 2019 Hoang NT, Takanori Maehara

However, we find that the feature vectors of benchmark datasets are already quite informative for the classification task, and the graph structure only provides a means to denoise the data.

General Classification
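The entry above argues that the graph structure mainly denoises already-informative features. One way this observation is often operationalized is to low-pass filter the features once, up front, and then use any standard classifier. A minimal sketch under that assumption (the augmented normalized adjacency as the filter is my choice of illustration):

```python
import numpy as np

def low_pass_features(adj, features, k=2):
    """Smooth node features by repeatedly multiplying with the augmented
    normalized adjacency (a low-pass graph filter). The smoothed features
    can then be fed to any ordinary classifier."""
    n = adj.shape[0]
    a = adj + np.eye(n)                  # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt
    out = features.astype(float)
    for _ in range(k):
        out = a_hat @ out                # each step attenuates high frequencies
    return out

# Noisy scalar features on a path graph: smoothing pulls neighbors together.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.array([[1.0], [-1.0], [1.0]])
smoothed = low_pass_features(adj, x, k=2)
```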

Faking Fairness via Stealthily Biased Sampling

2 code implementations24 Jan 2019 Kazuto Fukuchi, Satoshi Hara, Takanori Maehara

The focus of this study is to raise an awareness of the risk of malicious decision-makers who fake fairness by abusing the auditing tools and thereby deceiving the social communities.

Fairness

Joint Learning of Hierarchical Word Embeddings from a Corpus and a Taxonomy

no code implementations AKBC 2019 Mohammed Alsuhaibani, Takanori Maehara, Danushka Bollegala

To learn the word embeddings, the proposed method considers not only the hypernym relations that exist between words in a taxonomy, but also their contextual information in a large text corpus.

Word Embeddings

Convex Hull Approximation of Nearly Optimal Lasso Solutions

1 code implementation14 Oct 2018 Satoshi Hara, Takanori Maehara

To this end, we formulate the problem as finding a small number of solutions such that the convex hull of these solutions approximates the set of nearly optimal solutions.

feature selection

A Simple Way to Deal with Cherry-picking

no code implementations11 Oct 2018 Junpei Komiyama, Takanori Maehara

Statistical hypothesis testing serves as statistical evidence for scientific innovation.

Selection bias Two-sample testing

Feature Attribution As Feature Selection

no code implementations27 Sep 2018 Satoshi Hara, Koichi Ikeno, Tasuku Soma, Takanori Maehara

In this study, we formalize the feature attribution problem as a feature selection problem.

feature selection

Maximally Invariant Data Perturbation as Explanation

1 code implementation19 Jun 2018 Satoshi Hara, Kouichi Ikeno, Tasuku Soma, Takanori Maehara

In adversarial examples, one seeks the smallest data perturbation that changes the model's output.

Image Classification

ClassiNet -- Predicting Missing Features for Short-Text Classification

no code implementations14 Apr 2018 Danushka Bollegala, Vincent Atanasov, Takanori Maehara, Ken-ichi Kawarabayashi

We propose \emph{ClassiNet} -- a network of classifiers trained for predicting missing features in a given instance, to overcome the feature sparseness problem.

General Classification text-classification +1

Neural Inverse Rendering for General Reflectance Photometric Stereo

no code implementations ICML 2018 Tatsunori Taniai, Takanori Maehara

We present a novel convolutional neural network architecture for photometric stereo (Woodham, 1980), a problem of recovering 3D object surface normals from multiple images observed under varying illuminations.

Inverse Rendering

On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm

no code implementations1 Aug 2017 Masaaki Imaizumi, Takanori Maehara, Kohei Hayashi

Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors.
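To make the space efficiency concrete, here is a minimal sketch of the TT format for a 3rd-order tensor (assumed core shapes; this reconstructs from cores and is not the paper's rank-minimization algorithm):

```python
import numpy as np

# A 3rd-order tensor in tensor-train (TT) format is stored as three cores
#   G1: (1, n1, r1),  G2: (r1, n2, r2),  G3: (r2, n3, 1)
# and entry (i, j, k) equals G1[:, i, :] @ G2[:, j, :] @ G3[:, k, :].
def tt_reconstruct(cores):
    """Contract a list of TT cores back into the full tensor."""
    result = cores[0]                        # shape (1, n1, r1)
    for core in cores[1:]:
        # merge the trailing rank index of `result` with the
        # leading rank index of the next core
        result = np.tensordot(result, core, axes=([-1], [0]))
    return result.squeeze(axis=(0, -1))      # drop boundary ranks of size 1

rng = np.random.default_rng(0)
n, r = 4, 2
g1 = rng.standard_normal((1, n, r))
g2 = rng.standard_normal((r, n, r))
g3 = rng.standard_normal((r, n, 1))
full = tt_reconstruct([g1, g2, g3])          # shape (n, n, n)
```

Storage drops from n^3 entries for the full tensor to O(n r^2) for the cores, which is the space saving the abstract refers to.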

Finding Alternate Features in Lasso

1 code implementation18 Nov 2016 Satoshi Hara, Takanori Maehara

We propose a method for finding alternate features missing in the Lasso optimal solution.

Joint Word Representation Learning using a Corpus and a Semantic Lexicon

1 code implementation19 Nov 2015 Danushka Bollegala, Alsuhaibani Mohammed, Takanori Maehara, Ken-ichi Kawarabayashi

For this purpose, we propose a joint word representation learning method that simultaneously predicts the co-occurrences of two words in a sentence, subject to the relational constraints given by the semantic lexicon.

Representation Learning Semantic Similarity +2

Unsupervised Cross-Domain Word Representation Learning

no code implementations IJCNLP 2015 Danushka Bollegala, Takanori Maehara, Ken-ichi Kawarabayashi

Given a pair of \emph{source}-\emph{target} domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domain-specific aspects of word semantics.

Domain Adaptation Representation Learning +2

Embedding Semantic Relations into Word Representations

no code implementations1 May 2015 Danushka Bollegala, Takanori Maehara, Ken-ichi Kawarabayashi

We propose an unsupervised method for learning vector representations for words such that the learnt representations are sensitive to the semantic relations that exist between two words.

Relation Classification

Learning Word Representations from Relational Graphs

no code implementations7 Dec 2014 Danushka Bollegala, Takanori Maehara, Yuichi Yoshida, Ken-ichi Kawarabayashi

To evaluate the accuracy of the word representations learnt using the proposed method, we use the learnt word representations to solve semantic word analogy problems.

Representation Learning
