Search Results for author: Nakamasa Inoue

Found 11 papers, 3 papers with code

PoF: Post-Training of Feature Extractor for Improving Generalization

1 code implementation • 5 Jul 2022 • Ikuro Sato, Ryota Yamada, Masayuki Tanaka, Nakamasa Inoue, Rei Kawakami

We developed a training algorithm called PoF: Post-Training of Feature Extractor, which updates the feature extractor part of an already-trained deep model to search for a flatter minimum.
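
The precise PoF update rule is given in the paper; as a rough illustration of the idea of updating only the feature extractor of an already-trained model with a flatness-seeking step, here is a minimal PyTorch sketch that applies a SAM-style perturb-then-descend update to the feature extractor while keeping the classifier head frozen. The model split, the use of a SAM-style step, and the radius rho are assumptions for illustration, not the authors' exact algorithm.

```python
# Hypothetical sketch: flatness-seeking post-training of the feature extractor only.
# The SAM-style perturbation and the rho value are illustrative assumptions,
# not the exact PoF update rule from the paper.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
head = nn.Linear(256, 10)             # already-trained classifier head, kept frozen
for p in head.parameters():
    p.requires_grad_(False)

opt = torch.optim.SGD(feature_extractor.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
rho = 0.05                            # perturbation radius (assumed)

def post_train_step(x, y):
    # 1) gradient at the current point
    loss = loss_fn(head(feature_extractor(x)), y)
    grads = torch.autograd.grad(loss, list(feature_extractor.parameters()))
    # 2) climb to a nearby point within an L2 ball of radius rho (sharpest direction)
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
    with torch.no_grad():
        for p, g in zip(feature_extractor.parameters(), grads):
            p.add_(rho * g / norm)
    # 3) take the descent step using the gradient at the perturbed point
    opt.zero_grad()
    loss_fn(head(feature_extractor(x)), y).backward()
    with torch.no_grad():
        for p, g in zip(feature_extractor.parameters(), grads):
            p.sub_(rho * g / norm)    # undo the perturbation before updating
    opt.step()

x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
post_train_step(x, y)
```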

Replacing Labeled Real-image Datasets with Auto-generated Contours

no code implementations • CVPR 2022 • Hirokatsu Kataoka, Ryo Hayamizu, Ryosuke Yamada, Kodai Nakashima, Sora Takashima, Xinyu Zhang, Edgar Josafat Martinez-Noriega, Nakamasa Inoue, Rio Yokota

In the present work, we show that the performance of formula-driven supervised learning (FDSL) can match or even exceed that of ImageNet-21k without the use of real images, human-, and self-supervision during the pre-training of Vision Transformers (ViTs).
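
Formula-driven supervised learning builds both the images and their labels from mathematical formulas, so no real images or human annotation are required. The generator below is only a toy stand-in for the paper's auto-generated contour datasets: it renders simple radial contours with NumPy and Pillow and labels each image by the formula parameter (number of lobes) that produced it.

```python
# Illustrative formula-driven data generator (an assumption, not the paper's generator):
# each "class" is a set of formula parameters, and images are rendered from the formula.
import numpy as np
from PIL import Image, ImageDraw

def render_contour(n_lobes: int, size: int = 224, rng=None) -> Image.Image:
    """Draw a closed radial contour r(t) = r0 * (1 + a*sin(n_lobes * t))."""
    rng = rng or np.random.default_rng()
    t = np.linspace(0, 2 * np.pi, 512)
    r0 = size * 0.3
    a = rng.uniform(0.2, 0.5)                     # random amplitude -> intra-class variety
    r = r0 * (1.0 + a * np.sin(n_lobes * t))
    xs = size / 2 + r * np.cos(t)
    ys = size / 2 + r * np.sin(t)
    img = Image.new("L", (size, size), 0)
    ImageDraw.Draw(img).line(list(zip(xs, ys)), fill=255, width=2)
    return img

def make_dataset(n_classes: int = 10, per_class: int = 5, seed: int = 0):
    rng = np.random.default_rng(seed)
    images, labels = [], []
    for c in range(n_classes):                    # label = number of lobes (formula parameter)
        for _ in range(per_class):
            images.append(np.asarray(render_contour(c + 3, rng=rng)))
            labels.append(c)
    return np.stack(images), np.array(labels)

X, y = make_dataset()
print(X.shape, y.shape)   # (50, 224, 224) (50,)
```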

Can Vision Transformers Learn without Natural Images?

1 code implementation • 24 Mar 2021 • Kodai Nakashima, Hirokatsu Kataoka, Asato Matsumoto, Kenji Iwata, Nakamasa Inoue

Moreover, although the ViT pre-trained without natural images produces visualizations that differ somewhat from those of the ImageNet pre-trained ViT, it can interpret natural image datasets to a large extent.

Fairness • Self-Supervised Learning

Pre-training without Natural Images

2 code implementations • 21 Jan 2021 • Hirokatsu Kataoka, Kazushige Okayasu, Asato Matsumoto, Eisuke Yamagata, Ryosuke Yamada, Nakamasa Inoue, Akio Nakamura, Yutaka Satoh

Is it possible to use convolutional neural networks pre-trained without any natural images to assist natural image understanding?
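
The question is answered by pre-training on formula-generated images (FractalDB in this paper) and then transferring to natural-image tasks. The two-stage recipe below is a generic PyTorch sketch of that transfer setup; the model choice, loaders, and hyperparameters are placeholders rather than the paper's configuration.

```python
# Generic two-stage recipe (illustrative; paths, model choice, and hyperparameters are
# placeholders): 1) pre-train on synthetic categories, 2) replace the head and fine-tune.
import torch
import torch.nn as nn
from torchvision.models import resnet50

def pretrain_on_synthetic(model, loader, epochs=1, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:            # synthetic images, formula-derived labels
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model

def finetune_on_natural(model, loader, num_classes, epochs=1, lr=0.01):
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head for the real task
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:            # real labeled images of the target task
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model

model = resnet50(num_classes=1000)               # e.g., 1,000 synthetic categories
# model = pretrain_on_synthetic(model, synthetic_loader)   # synthetic_loader: placeholder
# model = finetune_on_natural(model, natural_loader, num_classes=100)  # placeholder loader
```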

Initialization Using Perlin Noise for Training Networks with a Limited Amount of Data

no code implementations • 19 Jan 2021 • Nakamasa Inoue, Eisuke Yamagata, Hirokatsu Kataoka

Our main idea is to initialize the network parameters by solving an artificial noise classification problem, where the aim is to classify Perlin noise samples into their noise categories.

Classification • General Classification +1
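
The initialization recipe above is concrete enough to sketch: train a network on an artificial noise-classification task, then reuse the resulting weights as the initialization for the real, data-limited task. The sketch below is only an approximation of the idea: it uses bilinearly upsampled value noise at several spatial scales as a stand-in for the paper's Perlin noise categories, and a toy CNN in place of the networks studied in the paper.

```python
# Illustrative sketch of initialization-by-noise-classification. Simple "value noise"
# at different spatial scales stands in for Perlin noise categories; the paper's actual
# noise parameterization and networks differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

def noise_image(grid: int, size: int = 64) -> torch.Tensor:
    """Smooth random noise: random values on a coarse grid, bilinearly upsampled."""
    coarse = torch.rand(1, 1, grid, grid)
    return F.interpolate(coarse, size=(size, size), mode="bilinear", align_corners=False)

def noise_batch(batch_size: int = 32, grids=(2, 4, 8, 16)):
    """Each grid resolution defines one artificial 'noise category'."""
    labels = torch.randint(0, len(grids), (batch_size,))
    images = torch.cat([noise_image(grids[int(l)]) for l in labels], dim=0)
    return images, labels

net = nn.Sequential(                       # small CNN whose weights we want to initialize
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 4),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                    # solve the artificial noise-classification task
    x, y = noise_batch()
    opt.zero_grad()
    loss_fn(net(x), y).backward()
    opt.step()

# The trained conv weights now serve as the initialization for the real task
# (replace the final Linear layer with a head for the target classes before training).
```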

Augmented Cyclic Consistency Regularization for Unpaired Image-to-Image Translation

no code implementations • 29 Feb 2020 • Takehiko Ohkawa, Naoto Inoue, Hirokatsu Kataoka, Nakamasa Inoue

Herein, we propose Augmented Cyclic Consistency Regularization (ACCR), a novel regularization method for unpaired I2I translation.

Data Augmentation • Image-to-Image Translation +1
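
The exact ACCR objective is defined in the paper; the sketch below only illustrates the general ingredient of consistency regularization on a CycleGAN-style discriminator, penalizing it for changing its output when the input (real, translated, or cycle-reconstructed) is augmented. The augmentation choice, the set of regularized images, and the weight lambda_cr are assumptions, not the paper's formulation.

```python
# Hedged sketch of a consistency-regularization term in a CycleGAN-style setup.
import torch
import torch.nn as nn

def augment(x: torch.Tensor) -> torch.Tensor:
    return torch.flip(x, dims=[-1])           # horizontal flip as a stand-in augmentation

def consistency_loss(D: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """|| D(x) - D(augment(x)) ||^2: D should not change its verdict under augmentation."""
    return (D(x) - D(augment(x))).pow(2).mean()

D = nn.Sequential(                            # toy discriminator with a per-image score
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

real = torch.rand(4, 3, 128, 128)             # real images from one domain
fake = torch.rand(4, 3, 128, 128)             # stand-in for G(x): translated images
rec = torch.rand(4, 3, 128, 128)              # stand-in for F(G(x)): cycle reconstructions

lambda_cr = 1.0                               # weight of the regularizer (assumed)
d_reg = lambda_cr * (consistency_loss(D, real)
                     + consistency_loss(D, fake.detach())
                     + consistency_loss(D, rec.detach()))
# d_total = d_adversarial_loss + d_reg        # added to the usual discriminator loss
print(d_reg.item())
```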

Sequence-Level Knowledge Distillation for Model Compression of Attention-based Sequence-to-Sequence Speech Recognition

no code implementations • 12 Nov 2018 • Raden Mu'az Mun'im, Nakamasa Inoue, Koichi Shinoda

We investigate the feasibility of sequence-level knowledge distillation of Sequence-to-Sequence (Seq2Seq) models for Large Vocabulary Continuous Speech Recognition (LVCSR).

Knowledge Distillation • Model Compression +2
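
Sequence-level knowledge distillation, in the sense of Kim and Rush (2016), has a simple recipe: let the trained teacher decode the training utterances, then train the smaller student on the teacher's output sequences as pseudo-transcripts. The toy models, greedy decoding, and fixed output length in the sketch below are stand-ins for the attention-based Seq2Seq ASR models and beam search used in the paper.

```python
# Minimal sketch of sequence-level knowledge distillation:
# the teacher decodes each training utterance, and the student is trained on the
# teacher's output sequences as if they were the ground-truth transcripts.
import torch
import torch.nn as nn

VOCAB, HID, MAXLEN = 32, 64, 10

class ToySeq2Seq(nn.Module):
    """Encoder-decoder stand-in: encodes features, then predicts a token per step."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(16, HID, batch_first=True)
        self.decoder = nn.GRU(HID, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, feats):                       # feats: (B, T, 16) acoustic features
        _, h = self.encoder(feats)
        dec_in = h.transpose(0, 1).repeat(1, MAXLEN, 1)  # crude attention-free decoding
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out)                    # (B, MAXLEN, VOCAB) logits

teacher, student = ToySeq2Seq(), ToySeq2Seq()
teacher.eval()

feats = torch.randn(8, 50, 16)                      # a batch of utterances
with torch.no_grad():                               # 1) teacher decodes pseudo-transcripts
    pseudo = teacher(feats).argmax(dim=-1)          #    (beam search in practice; argmax here)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
logits = student(feats)                             # 2) student fits the teacher's sequences
loss = loss_fn(logits.reshape(-1, VOCAB), pseudo.reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()
```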

Few-Shot Adaptation for Multimedia Semantic Indexing

no code implementations • 19 Jul 2018 • Nakamasa Inoue, Koichi Shinoda

Few-shot adaptation provides robust parameter estimation with few training examples, by optimizing the parameters of zero-shot learning and supervised many-shot learning simultaneously.

Few-Shot Learning • Zero-Shot Learning
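
One way to read "optimizing the parameters of zero-shot learning and supervised many-shot learning simultaneously" is as a joint objective that mixes a zero-shot loss (aligning visual features with class semantic embeddings) with an ordinary supervised loss on the few labeled examples. The sketch below implements that reading; the loss forms, the projection layer, and the weight lam are illustrative assumptions, not the paper's estimator.

```python
# Hedged sketch: jointly optimizing a zero-shot objective and a supervised objective
# on a handful of labeled examples. All modelling choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES, FEAT_DIM, EMB_DIM = 5, 128, 50

proj = nn.Linear(FEAT_DIM, EMB_DIM)               # maps visual features to semantic space
class_emb = torch.randn(N_CLASSES, EMB_DIM)       # e.g., word embeddings of class names

def zero_shot_logits(feats):
    """Similarity between projected features and class embeddings."""
    return F.normalize(proj(feats), dim=-1) @ F.normalize(class_emb, dim=-1).T

few_x = torch.randn(3 * N_CLASSES, FEAT_DIM)      # a few labeled examples per class
few_y = torch.arange(N_CLASSES).repeat(3)

head = nn.Linear(FEAT_DIM, N_CLASSES)             # supervised (many-shot style) classifier
opt = torch.optim.Adam(list(proj.parameters()) + list(head.parameters()), lr=1e-3)
lam = 0.5                                         # balance between the objectives (assumed)

for step in range(100):
    sup_loss = F.cross_entropy(head(few_x), few_y)
    zs_loss = F.cross_entropy(zero_shot_logits(few_x) / 0.1, few_y)
    opt.zero_grad()
    (sup_loss + lam * zs_loss).backward()
    opt.step()

# At test time, predictions can combine both scores, e.g. head(x) + lam * zero_shot_logits(x).
```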

I-vector Transformation Using Conditional Generative Adversarial Networks for Short Utterance Speaker Verification

no code implementations • 1 Apr 2018 • Jiacen Zhang, Nakamasa Inoue, Koichi Shinoda

I-vector based text-independent speaker verification (SV) systems often have poor performance with short utterances, as the biased phonetic distribution in a short utterance makes the extracted i-vector unreliable.

Text-Independent Speaker Verification
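
The title describes the method: a conditional GAN whose generator maps a short-utterance i-vector toward the i-vector a long utterance of the same speaker would produce, with the discriminator judging (short, long) pairs. The sketch below is a generic conditional-GAN setup under that reading; the i-vector dimensionality, network sizes, and the added L2 term and its weight are assumptions, not the paper's architecture.

```python
# Hedged sketch of a conditional GAN that maps short-utterance i-vectors toward
# long-utterance ones. Sizes and the reconstruction term are illustrative assumptions.
import torch
import torch.nn as nn

IVEC = 400                                        # i-vector dimensionality (assumed)

G = nn.Sequential(nn.Linear(IVEC, 512), nn.ReLU(), nn.Linear(512, IVEC))
D = nn.Sequential(nn.Linear(IVEC * 2, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

short_iv = torch.randn(64, IVEC)                  # i-vectors from short utterances
long_iv = torch.randn(64, IVEC)                   # paired i-vectors from long utterances

# --- discriminator step: real (short, long) pairs vs. fake (short, G(short)) pairs ---
fake_iv = G(short_iv).detach()
d_loss = bce(D(torch.cat([short_iv, long_iv], dim=1)), torch.ones(64, 1)) + \
         bce(D(torch.cat([short_iv, fake_iv], dim=1)), torch.zeros(64, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# --- generator step: fool D and stay close to the long-utterance i-vector ---
fake_iv = G(short_iv)
g_loss = bce(D(torch.cat([short_iv, fake_iv], dim=1)), torch.ones(64, 1)) + \
         10.0 * (fake_iv - long_iv).pow(2).mean() # L2 term and weight are assumptions
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```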
