Search Results for author: Chihiro Watanabe

Found 10 papers, 0 papers with code

AutoLL: Automatic Linear Layout of Graphs based on Deep Neural Network

no code implementations · 5 Aug 2021 · Chihiro Watanabe, Taiji Suzuki

However, it is limited to two-mode reordering (i.e., the rows and columns are reordered separately) and cannot be applied in the one-mode setting (i.e., the same node order is used for reordering both rows and columns), owing to the characteristics of its model architecture.
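
The two-mode/one-mode distinction can be illustrated with a toy numpy sketch (this is an illustration of the setting only, not the paper's DNN-based method; sorting by mean value stands in for a learned criterion):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((6, 6))  # toy relational data matrix

# Two-mode reordering: rows and columns get independent permutations.
row_order = np.argsort(A.mean(axis=1))
col_order = np.argsort(A.mean(axis=0))
A_two_mode = A[np.ix_(row_order, col_order)]

# One-mode reordering: a single node order permutes rows AND columns,
# which keeps a symmetric adjacency matrix symmetric.
S = (A + A.T) / 2                 # symmetrize to mimic a graph adjacency
order = np.argsort(S.mean(axis=1))
S_one_mode = S[np.ix_(order, order)]
```

One-mode reordering is the natural choice for graph adjacency matrices, where rows and columns refer to the same set of nodes.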

Deep Two-Way Matrix Reordering for Relational Data Analysis

no code implementations · 26 Mar 2021 · Chihiro Watanabe, Taiji Suzuki

This denoised mean matrix can be used to visualize the global structure of the reordered observed matrix.
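
The idea of a denoised mean matrix can be sketched with numpy on a toy example (the cluster labels are assumed known here; the paper learns the reordering, so this is only an illustration of the visualization step):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy observed matrix with a planted 2x2 block-mean structure plus noise.
row_labels = np.repeat([0, 1], 5)        # 10 rows in 2 row clusters
col_labels = np.repeat([0, 1], 4)        # 8 columns in 2 column clusters
means = np.array([[0.2, 0.8],
                  [0.7, 0.3]])           # block-wise means
X = means[np.ix_(row_labels, col_labels)] + 0.05 * rng.standard_normal((10, 8))

# Denoised mean matrix: replace every entry by the mean of its block,
# which makes the global structure of the reordered matrix visible.
denoised = np.empty_like(X)
for i in range(2):
    for j in range(2):
        blk = np.ix_(row_labels == i, col_labels == j)
        denoised[blk] = X[blk].mean()
```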

A Goodness-of-fit Test on the Number of Biclusters in a Relational Data Matrix

no code implementations · 23 Feb 2021 · Chihiro Watanabe, Taiji Suzuki

Biclustering is a method for detecting homogeneous submatrices in a given observed matrix, and it is an effective tool for relational data analysis.
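
What "detecting a homogeneous submatrix" means can be seen in a crude numpy toy (this greedy mean-based pick is purely illustrative and has nothing to do with the paper's goodness-of-fit test):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 8))
X[:4, :4] += 5.0          # plant one homogeneous submatrix (a bicluster)

# Crude detector: pick the rows and columns with the highest means; the
# selected submatrix is the planted homogeneous block.
rows = np.sort(np.argsort(X.mean(axis=1))[-4:])
cols = np.sort(np.argsort(X.mean(axis=0))[-4:])
block = X[np.ix_(rows, cols)]
```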

X-DC: Explainable Deep Clustering based on Learnable Spectrogram Templates

no code implementations · 18 Sep 2020 · Chihiro Watanabe, Hirokazu Kameoka

Particularly, it has been shown that a monaural speech separation task can be successfully solved with a DNN-based method called deep clustering (DC), which uses a DNN to describe the process of assigning a continuous vector to each time-frequency (TF) bin and measure how likely each pair of TF bins is to be dominated by the same speaker.
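
The affinity idea the excerpt describes can be sketched with the standard deep-clustering objective: make pairwise embedding affinities match ideal same-speaker affinities. A minimal numpy sketch (random placeholders stand in for the DNN's outputs and the true assignments):

```python
import numpy as np

rng = np.random.default_rng(3)
n_bins, embed_dim, n_spk = 80, 4, 2  # TF bins, embedding dim, speakers

# V: one unit-norm embedding per TF bin (what the DNN would output).
V = rng.standard_normal((n_bins, embed_dim))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Y: one-hot dominant-speaker indicator per TF bin (ideal assignment).
Y = np.eye(n_spk)[rng.integers(0, n_spk, size=n_bins)]

# Deep-clustering objective: pairwise affinities V V^T should match the
# same-speaker affinities Y Y^T (squared Frobenius norm of the difference).
loss = np.linalg.norm(V @ V.T - Y @ Y.T, "fro") ** 2
```

Minimizing this loss pushes embeddings of same-speaker TF bins together and different-speaker bins apart, so clustering the embeddings recovers the speaker masks.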

Tasks: Deep Clustering, Speech Separation

Selective Inference for Latent Block Models

no code implementations · 27 May 2020 · Chihiro Watanabe, Taiji Suzuki

In this case, it becomes crucial to consider the selective bias in the block structure, that is, the block structure is selected from all the possible cluster memberships based on some criterion by the clustering algorithm.

Tasks: Model Selection

Goodness-of-fit Test for Latent Block Models

no code implementations · 10 Jun 2019 · Chihiro Watanabe, Taiji Suzuki

Latent block models are used for probabilistic biclustering, which has been shown to be an effective method for analyzing various relational data sets.
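
A latent block model can be sketched generatively: each row and column carries a latent cluster label, and each entry's distribution depends only on its block. This minimal Bernoulli variant is an illustration under assumed parameters, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, K, L = 12, 10, 2, 3        # rows, columns, row clusters, column clusters

g = rng.integers(0, K, size=n)   # latent row memberships
h = rng.integers(0, L, size=m)   # latent column memberships
P = rng.random((K, L))           # block-wise Bernoulli parameters

# Each entry X[i, j] ~ Bernoulli(P[g[i], h[j]]).
X = (rng.random((n, m)) < P[np.ix_(g, h)]).astype(int)
```

A goodness-of-fit test for such a model asks whether an observed matrix is consistent with some block structure of a given size.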

Interpreting Layered Neural Networks via Hierarchical Modular Representation

no code implementations · 3 Oct 2018 · Chihiro Watanabe

Interpreting the prediction mechanism of complex models is currently one of the most important tasks in the machine learning field, especially with layered neural networks, which have achieved high predictive performance with various practical data sets.

Knowledge Discovery from Layered Neural Networks based on Non-negative Task Decomposition

no code implementations · 18 May 2018 · Chihiro Watanabe, Kaoru Hiramatsu, Kunio Kashino

Interpretability has become an important issue in the machine learning field, along with the success of layered neural networks in various practical tasks.

Understanding Community Structure in Layered Neural Networks

no code implementations · 13 Apr 2018 · Chihiro Watanabe, Kaoru Hiramatsu, Kunio Kashino

We show experimentally that our proposed method can reveal the role of each part of a layered neural network: we train the networks on three types of data sets, extract communities from the trained networks, and apply the proposed method to the resulting community structure.

Modular Representation of Layered Neural Networks

no code implementations · 1 Mar 2017 · Chihiro Watanabe, Kaoru Hiramatsu, Kunio Kashino

(3) Data analysis: applied to practical data, it reveals the community structure in the input, hidden, and output layers, which serves as a clue for discovering knowledge from a trained neural network.

Tasks: Speech Recognition
