Search Results for author: Hideaki Hayashi

Found 21 papers, 3 papers with code

Pseudo-label Learning with Calibrated Confidence Using an Energy-based Model

no code implementations15 Apr 2024 Masahito Toba, Seiichi Uchida, Hideaki Hayashi

In pseudo-labeling (PL), a type of semi-supervised learning, pseudo-labels are assigned based on the confidence scores provided by the classifier; accurate confidence estimates are therefore important for successful PL.

Pseudo Label · Semi-Supervised Image Classification
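The vanilla PL step the abstract refers to can be sketched as simple confidence thresholding. This is a generic illustration, not the paper's energy-based calibration; the `threshold` value is an assumed hyperparameter:

```python
import numpy as np

def assign_pseudo_labels(probs, threshold=0.95):
    """Assign pseudo-labels to unlabeled samples whose top predicted
    class probability exceeds a confidence threshold.

    probs: (n_samples, n_classes) array of classifier confidences.
    Returns (indices, labels) for the samples that pass the threshold.
    """
    confidence = probs.max(axis=1)   # top-class confidence per sample
    labels = probs.argmax(axis=1)    # predicted class per sample
    selected = np.where(confidence >= threshold)[0]
    return selected, labels[selected]

# Example: only the first sample is confident enough to be pseudo-labeled.
probs = np.array([[0.97, 0.03],
                  [0.60, 0.40]])
idx, lab = assign_pseudo_labels(probs)
```

If the classifier is miscalibrated, this thresholding admits wrong labels, which is the failure mode the paper's calibrated confidence targets.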

Multi-Scale Spatio-Temporal Graph Convolutional Network for Facial Expression Spotting

no code implementations24 Mar 2024 Yicheng Deng, Hideaki Hayashi, Hajime Nagahara

In this paper, we propose a Multi-Scale Spatio-Temporal Graph Convolutional Network (SpoT-GCN) for facial expression spotting.

Contrastive Learning · Micro-Expression Spotting

A Hybrid of Generative and Discriminative Models Based on the Gaussian-coupled Softmax Layer

no code implementations10 May 2023 Hideaki Hayashi

In this paper, we propose a method to train a hybrid of discriminative and generative models in a single neural network (NN), which exhibits the characteristics of both models.

Deep Bayesian Active-Learning-to-Rank for Endoscopic Image Data

no code implementations5 Aug 2022 Takeaki Kadota, Hideaki Hayashi, Ryoma Bise, Kiyohito Tanaka, Seiichi Uchida

This paper proposes a deep Bayesian active-learning-to-rank, which trains a Bayesian convolutional neural network while automatically selecting appropriate pairs for relative annotation.

Active Learning · Learning-To-Rank

Meta-learning of Pooling Layers for Character Recognition

1 code implementation17 Mar 2021 Takato Otsuzuki, Heon Song, Seiichi Uchida, Hideaki Hayashi

As part of our framework, a parameterized pooling layer is proposed in which the kernel shape and pooling operation are trainable using two parameters, thereby allowing flexible pooling of the input data.

Dimensionality Reduction · Meta-Learning
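One standard way to make a pooling operation trainable, in the spirit of the parameterized pooling described above, is generalized-mean pooling, where a single exponent interpolates between average and max pooling. This is a minimal sketch of the idea, not the paper's two-parameter formulation:

```python
import numpy as np

def generalized_mean_pool(x, p):
    """Generalized-mean (power-mean) pooling over the last axis.

    p = 1      -> average pooling
    p -> inf   -> approaches max pooling
    Making p trainable yields a pooling operation that can move
    between the two extremes during learning.
    """
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p, axis=-1) ** (1.0 / p)

window = [1.0, 2.0, 3.0, 4.0]
avg = generalized_mean_pool(window, p=1)        # average pooling: 2.5
near_max = generalized_mean_pool(window, p=100) # close to max (4.0)
```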

Layer-Wise Interpretation of Deep Neural Networks Using Identity Initialization

no code implementations26 Feb 2021 Shohei Kubota, Hideaki Hayashi, Tomohiro Hayase, Seiichi Uchida

The interpretability of neural networks (NNs) is a challenging but essential topic for transparency in the decision-making process using machine learning.

Classification · Decision Making +1

Handwriting Prediction Considering Inter-Class Bifurcation Structures

no code implementations27 Sep 2020 Masaki Yamagata, Hideaki Hayashi, Seiichi Uchida

In this paper, we propose a temporal prediction model that can deal with this bifurcation structure.

Regularized Pooling

no code implementations6 May 2020 Takato Otsuzuki, Hideaki Hayashi, Yuchen Zheng, Seiichi Uchida

This means that max pooling is too flexible to compensate for actual deformations.

Dimensionality Reduction

A Neural Network Based on the Johnson $S_\mathrm{U}$ Translation System and Related Application to Electromyogram Classification

no code implementations14 Nov 2019 Hideaki Hayashi, Taro Shibanoki, Toshio Tsuji

In this study, a discriminative model based on the multivariate Johnson $S_\mathrm{U}$ translation system is transformed into a linear combination of coefficients and input vectors using log-linearization.

Classification · General Classification +2

A Discriminative Gaussian Mixture Model with Sparsity

no code implementations ICLR 2021 Hideaki Hayashi, Seiichi Uchida

We propose a sparse classifier based on a discriminative GMM, referred to as a sparse discriminative Gaussian mixture (SDGM).

Sparse Learning
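The generative view underlying a discriminative GMM classifier is class posteriors obtained from Gaussian class-conditionals via Bayes' rule; the SDGM trains such posteriors directly and sparsifies them. A minimal 1-D sketch with illustrative parameters (not the paper's multivariate, sparse-Bayesian formulation):

```python
import numpy as np

def gaussian_posterior(x, means, sigmas, priors):
    """Class posteriors from 1-D Gaussian class-conditionals
    via Bayes' rule. All parameters are illustrative."""
    means, sigmas, priors = map(np.asarray, (means, sigmas, priors))
    dens = np.exp(-0.5 * ((x - means) / sigmas) ** 2) \
           / (sigmas * np.sqrt(2 * np.pi))
    joint = priors * dens                 # p(x, class)
    return joint / joint.sum()            # p(class | x)

# Symmetric two-class setup: a point midway between the means
# receives equal posteriors.
post = gaussian_posterior(0.0, means=[-1.0, 1.0],
                          sigmas=[1.0, 1.0], priors=[0.5, 0.5])
```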

A Recurrent Probabilistic Neural Network with Dimensionality Reduction Based on Time-series Discriminant Component Analysis

no code implementations14 Nov 2019 Hideaki Hayashi, Taro Shibanoki, Keisuke Shima, Yuichi Kurita, Toshio Tsuji

This paper proposes a probabilistic neural network developed on the basis of time-series discriminant component analysis (TSDCA) that can be used to classify high-dimensional time-series patterns.

Dimensionality Reduction · EEG +3

SDGM: Sparse Bayesian Classifier Based on a Discriminative Gaussian Mixture Model

no code implementations25 Sep 2019 Hideaki Hayashi, Seiichi Uchida

In the SDGM, a GMM-based discriminative model is trained by sparse Bayesian learning.

Modality Conversion of Handwritten Patterns by Cross Variational Autoencoders

no code implementations14 Jun 2019 Taichi Sumi, Brian Kenji Iwana, Hideaki Hayashi, Seiichi Uchida

This research attempts to construct a network that can convert between online and offline handwritten characters.

Combining Noise-to-Image and Image-to-Image GANs: Brain MR Image Augmentation for Tumor Detection

no code implementations31 May 2019 Changhee Han, Leonardo Rundo, Ryosuke Araki, Yudai Nagano, Yujiro Furukawa, Giancarlo Mauri, Hideki Nakayama, Hideaki Hayashi

In this context, Generative Adversarial Networks (GANs) can synthesize realistic, diverse additional training images to fill gaps in the real image distribution; researchers have improved classification by augmenting data with noise-to-image GANs (e.g., random noise samples to diverse pathological images) or image-to-image GANs (e.g., a benign image to a malignant one).

General Classification · Image Augmentation +2

A Trainable Multiplication Layer for Auto-correlation and Co-occurrence Extraction

no code implementations30 May 2019 Hideaki Hayashi, Seiichi Uchida

In this paper, we propose a trainable multiplication layer (TML) for a neural network that can be used to compute products between input features.

Network Interpretation
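Multiplication-based feature extraction of the kind the TML targets can be illustrated with fixed pairwise feature products, which capture auto-correlation and co-occurrence statistics. This is a non-trainable sketch; the TML instead learns which products to form:

```python
import numpy as np

def pairwise_products(x):
    """Co-occurrence-style features: products of all (unordered)
    feature pairs, including squares (auto-correlation terms)."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(x.shape[-1])   # upper-triangular index pairs
    return x[..., i] * x[..., j]

feats = np.array([1.0, 2.0, 3.0])
# products: x0*x0, x0*x1, x0*x2, x1*x1, x1*x2, x2*x2
out = pairwise_products(feats)
```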

GlyphGAN: Style-Consistent Font Generation Based on Generative Adversarial Networks

1 code implementation29 May 2019 Hideaki Hayashi, Kohtaro Abe, Seiichi Uchida

In GlyphGAN, the input vector for the generator network consists of two parts: a character-class vector and a style vector.

Font Generation
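The two-part generator input described above can be sketched as the concatenation of a one-hot character-class vector and a random style vector; keeping the style vector fixed while varying the class vector is what yields style-consistent glyphs. The dimensions below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def make_generator_input(char_class, num_classes, style_dim, rng=None):
    """Concatenate a one-hot character-class vector with a random
    style vector to form a generator input (dimensions assumed)."""
    if rng is None:
        rng = np.random.default_rng(0)
    class_vec = np.zeros(num_classes)
    class_vec[char_class] = 1.0           # one-hot character class
    style_vec = rng.normal(size=style_dim)  # reuse across chars for one style
    return np.concatenate([class_vec, style_vec])

# Input for character index 2 out of 26 classes, with a 100-D style vector.
z = make_generator_input(char_class=2, num_classes=26, style_dim=100)
```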

ProbAct: A Probabilistic Activation Function for Deep Neural Networks

1 code implementation26 May 2019 Kumar Shridhar, Joonho Lee, Hideaki Hayashi, Purvanshi Mehta, Brian Kenji Iwana, Seokjun Kang, Seiichi Uchida, Sheraz Ahmed, Andreas Dengel

We show that ProbAct increases the classification accuracy by +2-3% compared to ReLU or other conventional activation functions on both original datasets and when datasets are reduced to 50% and 25% of the original size.

Image Classification
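A probabilistic activation in the spirit of ProbAct can be sketched as a deterministic mean (here ReLU) plus Gaussian noise scaled by a parameter sigma, which the paper makes trainable. This is an illustrative reading of the idea, with sigma fixed:

```python
import numpy as np

def probact(x, sigma=1.0, rng=None):
    """Stochastic activation: ReLU mean plus Gaussian noise scaled
    by sigma (trainable in the paper; a fixed scalar here)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.asarray(x, dtype=float)
    mean = np.maximum(x, 0.0)                     # ReLU mean
    return mean + sigma * rng.normal(size=x.shape)

x = np.array([-1.0, 2.0])
deterministic = probact(x, sigma=0.0)   # sigma=0 reduces to plain ReLU
```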

Biosignal Generation and Latent Variable Analysis with Recurrent Generative Adversarial Networks

no code implementations17 May 2019 Shota Harada, Hideaki Hayashi, Seiichi Uchida

GAN-based generative models learn only the mapping from a random input distribution to the distribution of the training data. The relationship between the input and the generated data is therefore unclear, and the characteristics of the data generated by such a model cannot be controlled.

Data Augmentation · Time Series +1

Infinite Brain MR Images: PGGAN-based Data Augmentation for Tumor Detection

no code implementations29 Mar 2019 Changhee Han, Leonardo Rundo, Ryosuke Araki, Yujiro Furukawa, Giancarlo Mauri, Hideki Nakayama, Hideaki Hayashi

Due to the lack of available annotated medical images, accurate computer-assisted diagnosis requires intensive Data Augmentation (DA) techniques, such as geometric/intensity transformations of original images; however, those transformed images intrinsically have a similar distribution to the original ones, leading to limited performance improvement.

Data Augmentation
