Search Results for author: Hideki Nakayama

Found 53 papers, 9 papers with code

A Visually-grounded First-person Dialogue Dataset with Verbal and Non-verbal Responses

no code implementations EMNLP 2020 Hisashi Kamezawa, Noriki Nishida, Nobuyuki Shimizu, Takashi Miyazaki, Hideki Nakayama

The results demonstrate that first-person vision helps neural network models correctly understand human intentions, and that producing non-verbal responses is as challenging as producing verbal ones.

SciXGen: A Scientific Paper Dataset for Context-Aware Text Generation

no code implementations Findings (EMNLP) 2021 Hong Chen, Hiroya Takamura, Hideki Nakayama

Generating texts in scientific papers requires not only capturing the content contained within the given input but also frequently acquiring the external information called "context".

Text Generation

Gradient-based Hyperparameter Optimization without Validation Data for Learning from Limited Labels

no code implementations 29 Sep 2021 Ryuichiro Hataya, Hideki Nakayama

Optimizing the hyperparameters of machine learning algorithms is important but difficult, especially with limited labeled data, because obtaining enough validation data is then practically impossible.

Hyperparameter Optimization Model Selection

Graph Energy-based Model for Substructure Preserving Molecular Design

no code implementations 9 Feb 2021 Ryuichiro Hataya, Hideki Nakayama, Kazuki Yoshizoe

It is common practice for chemists to search chemical databases based on substructures of compounds for finding molecules with desired properties.

GraphPlan: Story Generation by Planning with Event Graph

no code implementations INLG (ACL) 2021 Hong Chen, Raphael Shu, Hiroya Takamura, Hideki Nakayama

In this paper, we focus on planning a sequence of events assisted by event graphs, and use the events to guide the generator.

Story Generation

Commonsense Knowledge Aware Concept Selection For Diverse and Informative Visual Storytelling

no code implementations 5 Feb 2021 Hong Chen, Yifei Huang, Hiroya Takamura, Hideki Nakayama

To enrich the candidate concepts, a commonsense knowledge graph is created for each image sequence from which the concept candidates are proposed.

Informativeness Visual Storytelling

DJMix: Unsupervised Task-agnostic Augmentation for Improving Robustness

no code implementations 1 Jan 2021 Ryuichiro Hataya, Hideki Nakayama

Convolutional Neural Networks (CNNs) are vulnerable to unseen noise on input images at test time, and thus improving their robustness is crucial.

Data Augmentation Semantic Segmentation

Supervised Visual Attention for Multimodal Neural Machine Translation

no code implementations COLING 2020 Tetsuro Nishihara, Akihiro Tamura, Takashi Ninomiya, Yutaro Omote, Hideki Nakayama

This paper proposed a supervised visual attention mechanism for multimodal neural machine translation (MNMT), trained with constraints based on manual alignments between words in a sentence and their corresponding regions of an image.

Machine Translation Translation

Graph-based Heuristic Search for Module Selection Procedure in Neural Module Network

no code implementations 30 Sep 2020 Yuxuan Wu, Hideki Nakayama

To overcome this problem, existing work either included ground-truth programs in the training data or applied reinforcement learning to explore programs.

Question Answering Visual Question Answering

MADGAN: unsupervised Medical Anomaly Detection GAN using multiple adjacent brain MRI slice reconstruction

no code implementations 24 Jul 2020 Changhee Han, Leonardo Rundo, Kohei Murao, Tomoyuki Noguchi, Yuki Shimahara, Zoltan Adam Milacski, Saori Koshino, Evis Sala, Hideki Nakayama, Shinichi Satoh

Therefore, we propose unsupervised Medical Anomaly Detection Generative Adversarial Network (MADGAN), a novel two-step method using GAN-based multiple adjacent brain MRI slice reconstruction to detect brain anomalies at different stages on multi-sequence structural MRI: (Reconstruction) a Wasserstein loss with gradient penalty + 100 × L1 loss, trained on 3 healthy brain axial MRI slices to reconstruct the next 3, reconstructs unseen healthy/abnormal scans; (Diagnosis) the average L2 loss per scan discriminates them, comparing the ground-truth and reconstructed slices.

MRI Reconstruction Unsupervised Anomaly Detection
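The diagnosis step described above can be sketched in a few lines. This is an illustrative toy, not the paper's code: a scan is scored by the average L2 (Euclidean) distance between its ground-truth slices and their GAN reconstructions, and higher scores suggest anomalies; the flat-list slice representation is an assumption for brevity.

```python
# Hypothetical sketch of MADGAN's diagnosis step: score a scan by the
# average per-slice L2 distance between ground-truth and reconstructed
# slices. Slices are flattened to plain lists of floats here.
def anomaly_score(truth_slices, recon_slices):
    """Average L2 (Euclidean) distance over corresponding slice pairs."""
    assert len(truth_slices) == len(recon_slices)
    total = 0.0
    for t, r in zip(truth_slices, recon_slices):
        total += sum((tv - rv) ** 2 for tv, rv in zip(t, r)) ** 0.5
    return total / len(truth_slices)

# Toy example: a healthy scan reconstructs well, an abnormal one does not.
healthy = anomaly_score([[0.1, 0.2], [0.3, 0.4]], [[0.1, 0.2], [0.3, 0.4]])
abnormal = anomaly_score([[0.1, 0.2], [0.3, 0.4]], [[0.9, 0.8], [0.7, 0.6]])
```

A threshold on this score then separates healthy from abnormal scans.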

Meta Approach to Data Augmentation Optimization

no code implementations 14 Jun 2020 Ryuichiro Hataya, Jan Zdenek, Kazuki Yoshizoe, Hideki Nakayama

Data augmentation policies drastically improve the performance of image recognition tasks, especially when the policies are optimized for the target data and tasks.

Data Augmentation General Classification +1

Single Model Ensemble using Pseudo-Tags and Distinct Vectors

no code implementations ACL 2020 Ryosuke Kuwabara, Jun Suzuki, Hideki Nakayama

Model ensemble techniques often increase task performance in neural networks; however, they require increased time, memory, and management effort.

Text Classification

A Visually-Grounded Parallel Corpus with Phrase-to-Region Linking

no code implementations LREC 2020 Hideki Nakayama, Akihiro Tamura, Takashi Ninomiya

To verify our dataset, we performed phrase localization experiments in both languages and investigated the effectiveness of our Japanese annotations as well as multilingual learning realized by our dataset.

Image Captioning Multimodal Machine Translation +1

Bridging the gap between AI and Healthcare sides: towards developing clinically relevant AI-powered diagnosis systems

no code implementations 12 Jan 2020 Changhee Han, Leonardo Rundo, Kohei Murao, Takafumi Nemoto, Hideki Nakayama

Then, a questionnaire survey for physicians evaluates our pathology-aware Generative Adversarial Network (GAN)-based image augmentation projects in terms of Data Augmentation and physician training.

Image Augmentation Image Generation

Incorporating Horizontal Connections in Convolution by Spatial Shuffling

no code implementations ICLR 2020 Ikki Kishida, Hideki Nakayama

From the viewpoint of the regular convolution's receptive field (RF), the outputs of neurons in lower layers with smaller RFs are bundled to create neurons in higher layers with larger RFs.

Empirical Study of Easy and Hard Examples in CNN Training

no code implementations ICLR 2019 Ikki Kishida, Hideki Nakayama

In this work, we study the similarities of easy and hard examples respectively for different Convolutional Neural Network (CNN) architectures, assessing how those examples contribute to generalization.

Faster AutoAugment: Learning Augmentation Strategies using Backpropagation

1 code implementation ECCV 2020 Ryuichiro Hataya, Jan Zdenek, Kazuki Yoshizoe, Hideki Nakayama

In this paper, we propose a differentiable policy search pipeline for data augmentation, which is much faster than previous methods.

Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior

1 code implementation 20 Aug 2019 Raphael Shu, Jason Lee, Hideki Nakayama, Kyunghyun Cho

By decoding multiple initial latent variables in parallel and rescoring with a teacher model, the proposed model further brings the gap down to 1.0 BLEU point on the WMT'14 En-De task with a 6.8x speedup.

Machine Translation Translation
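The parallel-decode-then-rescore idea in the snippet above can be sketched generically. This is a hedged illustration, not the paper's model: `decode` and `teacher_score` are toy stand-ins for the non-autoregressive decoder and the autoregressive teacher.

```python
# Hypothetical sketch of the rescoring step: decode candidate translations
# from several initial latent variables (in parallel, in practice), then
# keep the one the teacher model scores highest.
def rescore(latents, decode, teacher_score):
    candidates = [decode(z) for z in latents]
    return max(candidates, key=teacher_score)

# Toy stand-ins: "decoding" maps a latent id to a string; the "teacher"
# here simply prefers longer outputs.
best = rescore([1, 2, 3],
               decode=lambda z: "token " * z,
               teacher_score=len)
```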

Generating Diverse Translations with Sentence Codes

no code implementations ACL 2019 Raphael Shu, Hideki Nakayama, Kyunghyun Cho

In this work, we attempt to obtain diverse translations by using sentence codes to condition the sentence generation.

Machine Translation Translation

Combining Noise-to-Image and Image-to-Image GANs: Brain MR Image Augmentation for Tumor Detection

no code implementations 31 May 2019 Changhee Han, Leonardo Rundo, Ryosuke Araki, Yudai Nagano, Yujiro Furukawa, Giancarlo Mauri, Hideki Nakayama, Hideaki Hayashi

In this context, Generative Adversarial Networks (GANs) can synthesize realistic/diverse additional training images to fill gaps in the real image distribution; researchers have improved classification by augmenting data with noise-to-image GANs (e.g., random noise samples to diverse pathological images) or image-to-image GANs (e.g., a benign image to a malignant one).

General Classification Image Augmentation +2

Investigating CNNs' Learning Representation under label noise

no code implementations ICLR 2019 Ryuichiro Hataya, Hideki Nakayama

Deep convolutional neural networks (CNNs) are known to be robust against label noise on extensive datasets.

USE-Net: incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets

no code implementations 17 Apr 2019 Leonardo Rundo, Changhee Han, Yudai Nagano, Jin Zhang, Ryuichiro Hataya, Carmelo Militello, Andrea Tangherloni, Marco S. Nobile, Claudio Ferretti, Daniela Besozzi, Maria Carla Gilardi, Salvatore Vitabile, Giancarlo Mauri, Hideki Nakayama, Paolo Cazzaniga

The following mixed scheme is used for training/testing: (i) training on either each individual dataset or multiple prostate MRI datasets and (ii) testing on all three datasets with all possible training/testing combinations.
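The mixed training/testing scheme above can be enumerated mechanically. A minimal sketch, assuming three placeholder dataset names (the paper's actual datasets are not named in this snippet): train on every non-empty subset of the three datasets and test on each one.

```python
from itertools import combinations

# Illustrative sketch (not the paper's code): enumerate training on each
# individual dataset or any combination of them, then testing on all three.
datasets = ["A", "B", "C"]  # placeholder names for the three MRI datasets
train_sets = [c for r in range(1, len(datasets) + 1)
              for c in combinations(datasets, r)]
schemes = [(train, test) for train in train_sets for test in datasets]
```

With three datasets this yields 7 training subsets and 21 train/test combinations.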

Learning More with Less: GAN-based Medical Image Augmentation

no code implementations 29 Mar 2019 Changhee Han, Kohei Murao, Shin'ichi Satoh, Hideki Nakayama

Convolutional Neural Network (CNN)-based accurate prediction typically requires large-scale annotated training data.

Image Augmentation Object Detection

Infinite Brain MR Images: PGGAN-based Data Augmentation for Tumor Detection

no code implementations 29 Mar 2019 Changhee Han, Leonardo Rundo, Ryosuke Araki, Yujiro Furukawa, Giancarlo Mauri, Hideki Nakayama, Hideaki Hayashi

Due to the lack of available annotated medical images, accurate computer-assisted diagnosis requires intensive Data Augmentation (DA) techniques, such as geometric/intensity transformations of original images; however, those transformed images intrinsically have a similar distribution to the original ones, leading to limited performance improvement.

Data Augmentation

Unifying semi-supervised and robust learning by mixup

no code implementations ICLR Workshop LLD 2019 Ryuichiro Hataya, Hideki Nakayama

In this study, we consider learning from bi-quality data as a generalization of these studies, in which a small portion of data is cleanly labeled, and the rest is corrupt.

Learning with noisy labels
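The mixup operation this paper builds on is a one-line interpolation of example pairs and their labels. A minimal sketch, assuming list-valued inputs and one-hot labels; the Beta(α, α) mixing coefficient is drawn via two stdlib Gamma samples.

```python
import random

# Minimal mixup sketch: interpolate two examples and their labels with a
# coefficient lambda, x~ = lam*x1 + (1-lam)*x2 (same for labels).
def mixup(x1, y1, x2, y2, lam):
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

def sample_lambda(alpha=0.2):
    # Beta(alpha, alpha) via two Gamma(alpha, 1) draws from the stdlib.
    g1 = random.gammavariate(alpha, 1.0)
    g2 = random.gammavariate(alpha, 1.0)
    return g1 / (g1 + g2)

x, y = mixup([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1], lam=0.7)
```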

Real-time Neural-based Input Method

no code implementations ICLR 2019 Jiali Yao, Raphael Shu, Xinjian Li, Katsutoshi Ohtsuki, Hideki Nakayama

The input method is an essential service on every mobile and desktop device that provides text suggestions.

Language Modelling

Semantic Aware Attention Based Deep Object Co-segmentation

3 code implementations 16 Oct 2018 Hong Chen, Yifei Huang, Hideki Nakayama

Object co-segmentation is the task of segmenting the same objects from multiple images.

Discrete Structural Planning for Generating Diverse Translations

no code implementations 27 Sep 2018 Raphael Shu, Hideki Nakayama

Planning is important for humans when producing complex language, yet it is a missing component in current language generation models.

Machine Translation Text Generation +1

Discrete Structural Planning for Neural Machine Translation

no code implementations 14 Aug 2018 Raphael Shu, Hideki Nakayama

Structural planning is important for producing long sentences, yet it is a missing component in current language generation models.

Machine Translation Text Generation +1

Coherence Modeling Improves Implicit Discourse Relation Recognition

no code implementations WS 2018 Noriki Nishida, Hideki Nakayama

The research described in this paper examines how to learn linguistic knowledge associated with discourse relations from unlabeled corpora.

Implicit Discourse Relation Classification Transfer Learning

Improving Beam Search by Removing Monotonic Constraint for Neural Machine Translation

no code implementations ACL 2018 Raphael Shu, Hideki Nakayama

However, as the algorithm produces hypotheses in a monotonic left-to-right order, a hypothesis cannot be revisited once it is discarded.

Language Modelling Machine Translation +1
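The monotonic constraint the paper targets shows up directly in a textbook beam search. A minimal sketch under toy assumptions (the scoring model below is invented for illustration): at each step only the top-`beam` partial hypotheses survive, and anything pruned is never revisited.

```python
# Minimal left-to-right beam search illustrating the monotonic constraint:
# pruned hypotheses can never re-enter the beam at a later step.
def beam_search(vocab, step_logprob, length, beam=2):
    hyps = [((), 0.0)]  # (token sequence, cumulative log-probability)
    for _ in range(length):
        expanded = [(seq + (tok,), score + step_logprob(seq, tok))
                    for seq, score in hyps for tok in vocab]
        # Monotonic pruning: keep only the best `beam` hypotheses.
        hyps = sorted(expanded, key=lambda h: h[1], reverse=True)[:beam]
    return hyps[0][0]

# Toy model: prefers repeating the previous token, starts best with "a".
def toy_logprob(seq, tok):
    if not seq:
        return -1.0 if tok == "a" else -2.0
    return -0.5 if tok == seq[-1] else -1.5

best = beam_search(["a", "b"], toy_logprob, length=3)
```

Removing the constraint means allowing a previously discarded prefix back into `hyps` if its extensions later look promising.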

Deep Learning for Forecasting Stock Returns in the Cross-Section

no code implementations 3 Jan 2018 Masaya Abe, Hideki Nakayama

Many studies have been undertaken by using machine learning techniques, including neural networks, to predict stock returns.


Parameter Reference Loss for Unsupervised Domain Adaptation

no code implementations 20 Nov 2017 Jiren Jin, Richard G. Calland, Takeru Miyato, Brian K. Vogel, Hideki Nakayama

Unsupervised domain adaptation (UDA) aims to utilize labeled data from a source domain to learn a model that generalizes to a target domain of unlabeled data.

Model Selection Unsupervised Domain Adaptation

Word Ordering as Unsupervised Learning Towards Syntactically Plausible Word Representations

1 code implementation IJCNLP 2017 Noriki Nishida, Hideki Nakayama

The research question we explore in this study is how to obtain syntactically plausible word representations without using human annotations.

Dependency Parsing Part-Of-Speech Tagging +1

Single-Queue Decoding for Neural Machine Translation

1 code implementation 6 Jul 2017 Raphael Shu, Hideki Nakayama

Neural machine translation models rely on the beam search algorithm for decoding.

Machine Translation Translation

Later-stage Minimum Bayes-Risk Decoding for Neural Machine Translation

no code implementations 11 Apr 2017 Raphael Shu, Hideki Nakayama

For extended periods of time, sequence generation models have relied on the beam search algorithm to generate output sequences.

Machine Translation Translation

Zero-resource Machine Translation by Multimodal Encoder-decoder Network with Multimedia Pivot

no code implementations 14 Nov 2016 Hideki Nakayama, Noriki Nishida

We propose an approach to build a neural machine translation system with no supervised resources (i.e., no parallel corpora) using multimodal embedded representation over texts and images.

Machine Translation Translation

Annotation Order Matters: Recurrent Image Annotator for Arbitrary Length Image Tagging

1 code implementation 18 Apr 2016 Jiren Jin, Hideki Nakayama

In addition to comparing our model with existing methods using the conventional top-k evaluation measures, we also provide our model as a high quality baseline for the arbitrary length image tagging task.

Image Captioning Machine Translation +2
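The "conventional top-k evaluation measures" mentioned above are straightforward to compute. A minimal sketch of one of them, precision@k, over ranked predicted tags against a ground-truth tag set (the tag names are invented for illustration):

```python
# Precision@k for image tagging: fraction of the top-k ranked predicted
# tags that appear in the ground-truth tag set.
def precision_at_k(ranked_tags, true_tags, k):
    top = ranked_tags[:k]
    return sum(1 for t in top if t in true_tags) / k

p = precision_at_k(["dog", "grass", "cat", "sky"], {"dog", "sky"}, k=2)
```

Recall@k is the analogous quantity divided by the number of ground-truth tags instead of k.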
