no code implementations • EMNLP 2020 • Hisashi Kamezawa, Noriki Nishida, Nobuyuki Shimizu, Takashi Miyazaki, Hideki Nakayama
The results demonstrate that first-person vision helps neural network models correctly understand human intentions, and that producing non-verbal responses is as challenging a task as producing verbal responses.
1 code implementation • COLING 2022 • Yuxuan Wu, Hideki Nakayama
Existing work has suggested solving this task with a two-phase approach, in which the model first predicts formulas from questions and then calculates answers from those formulas.
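The following is a minimal, hypothetical sketch of such a two-phase pipeline: phase 1 predicts a formula from the question, phase 2 evaluates that formula to obtain the answer. The prefix-notation format and the stub predictor are assumptions for illustration, not the paper's actual interface.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def evaluate(tokens):
    """Evaluate a prefix-notation formula, e.g. ['+', '3', '4'] -> 7.0."""
    tok = tokens.pop(0)
    if tok in OPS:
        # Left operand is consumed first, then the right operand.
        return OPS[tok](evaluate(tokens), evaluate(tokens))
    return float(tok)

def answer(question, predict_formula):
    formula = predict_formula(question)   # phase 1: question -> formula tokens
    return evaluate(list(formula))        # phase 2: formula -> numeric answer

# Example with a stand-in predictor:
print(answer("What is 3 plus 4?", lambda q: ["+", "3", "4"]))  # 7.0
```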
no code implementations • NAACL (ACL) 2022 • Jun Takeuchi, Noriki Nishida, Hideki Nakayama
Therefore, we propose a novel method to extend a given HNN in a single space to a product of hyperbolic spaces.
no code implementations • NAACL (ACL) 2022 • Yi-Pei Chen, Nobuyuki Shimizu, Takashi Miyazaki, Hideki Nakayama
This paper explores how humans conduct conversations with images by investigating an open-domain image conversation dataset, ImageChat.
no code implementations • ACL 2022 • Hisashi Kamezawa, Noriki Nishida, Nobuyuki Shimizu, Takashi Miyazaki, Hideki Nakayama
A release note is a technical document that describes the latest changes to a software product and is crucial in open source software development.
no code implementations • ACL (WAT) 2021 • Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ondřej Bojar, Chenhui Chu, Akiko Eriguchi, Kaori Abe, Yusuke Oda, Sadao Kurohashi
This paper presents the results of the shared tasks from the 8th workshop on Asian translation (WAT2021).
no code implementations • AACL (WAT) 2020 • Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ondřej Bojar, Sadao Kurohashi
This paper presents the results of the shared tasks from the 7th workshop on Asian translation (WAT2020).
no code implementations • 31 May 2023 • Kai Katsumata, Duc Minh Vo, Bei Liu, Hideki Nakayama
The exploration of the latent space in StyleGANs and GAN inversion exemplify impressive real-world image editing, yet the trade-off between reconstruction quality and editing quality remains an open problem.
1 code implementation • 17 Apr 2023 • Yi-Pei Chen, An-Zi Yen, Hen-Hsen Huang, Hideki Nakayama, Hsin-Hsi Chen
Our proposed life event dialog dataset and in-depth analysis of IE frameworks will facilitate future research on life event extraction from conversations.
no code implementations • CVPR 2023 • Duc Minh Vo, Quoc-An Luong, Akihiro Sugimoto, Hideki Nakayama
Humans possess the capacity to reason about the future based on a sparse collection of visual cues acquired over time.
2 code implementations • 16 Oct 2022 • Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama, Nanyun Peng
This task requires machines to 1) understand long text inputs and 2) produce a globally consistent image sequence that illustrates the contents of the story.
1 code implementation • 16 Oct 2022 • Hong Chen, Duc Minh Vo, Hiroya Takamura, Yusuke Miyao, Hideki Nakayama
Existing automatic story evaluation methods place a premium on story lexical level coherence, deviating from human preference.
1 code implementation • 26 Sep 2022 • Erica K. Shimomoto, Edison Marrese-Taylor, Hiroya Takamura, Ichiro Kobayashi, Hideki Nakayama, Yusuke Miyao
This paper explores the task of Temporal Video Grounding (TVG) where, given an untrimmed video and a natural language sentence query, the goal is to recognize and determine temporal boundaries of action instances in the video described by the query.
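Predictions for this task are commonly scored against ground-truth segments with temporal IoU; the snippet below is a minimal sketch of that standard metric, not code from this paper.

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) segments given in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

print(temporal_iou((5.0, 12.0), (6.0, 14.0)))  # 6/9 ~= 0.667
```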
1 code implementation • 13 Jun 2022 • Ikki Kishida, Hideki Nakayama
Some approaches tackle the problem of adversarial perturbations by replacing pixel values with binary embeddings, and they successfully improve robustness.
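As one concrete illustration of replacing pixel values with binary embeddings, the sketch below expands each 8-bit pixel into its bit vector; this encoding is an assumption for illustration and is not necessarily the one used in the paper.

```python
import numpy as np

def binary_embed(image_uint8: np.ndarray) -> np.ndarray:
    """Map each 8-bit pixel value to an 8-dimensional {0, 1} bit vector."""
    return ((image_uint8[..., None] >> np.arange(8)) & 1).astype(np.float32)

# Stand-in image: 32x32 RGB.
img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(binary_embed(img).shape)  # (32, 32, 3, 8)
```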
1 code implementation • CVPR 2022 • Kai Katsumata, Duc Minh Vo, Hideki Nakayama
We introduce a challenging training scheme of conditional GANs, called open-set semi-supervised image generation, where the training dataset consists of two parts: (i) labeled data and (ii) unlabeled data with samples belonging to one of the labeled data classes, namely, a closed-set, and samples not belonging to any of the labeled data classes, namely, an open-set.
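To make that data setup concrete, here is a minimal sketch of how such a training set could be partitioned; the class split and sample sizes are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)      # stand-in dataset with 10 classes
closed_classes = np.arange(6)                # assumption: classes 0-5 are closed-set

closed_idx = np.flatnonzero(np.isin(labels, closed_classes))
open_idx = np.flatnonzero(~np.isin(labels, closed_classes))

labeled_idx = rng.choice(closed_idx, size=100, replace=False)           # (i) labeled data
unlabeled_idx = np.concatenate([np.setdiff1d(closed_idx, labeled_idx),   # (ii) unlabeled data:
                                open_idx])                               # closed-set + open-set
print(len(labeled_idx), len(unlabeled_idx))
```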
no code implementations • CVPR 2022 • Duc Minh Vo, Hong Chen, Akihiro Sugimoto, Hideki Nakayama
We propose an end-to-end Novel Object Captioning with Retrieved vocabulary from External Knowledge method (NOC-REK), which simultaneously learns vocabulary retrieval and caption generation, successfully describing novel objects outside of the training dataset.
no code implementations • 16 Mar 2022 • Duc Minh Vo, Akihiro Sugimoto, Hideki Nakayama
We push forward neural network compression research by exploiting a novel challenging task of large-scale conditional generative adversarial networks (GANs) compression.
no code implementations • Findings (EMNLP) 2021 • Hong Chen, Hiroya Takamura, Hideki Nakayama
Generating texts in scientific papers requires not only capturing the content contained within the given input but also frequently acquiring external information called "context".
no code implementations • 29 Sep 2021 • Ryuichiro Hataya, Hideki Nakayama
Optimizing the hyperparameters of machine learning algorithms is important but difficult, especially with limited labeled data, because obtaining enough validation data is then practically impossible.
no code implementations • ICLR Workshop EBM 2021 • Ryuichiro Hataya, Hideki Nakayama, Kazuki Yoshizoe
We present Graph Energy-based Model (GEM), an energy-based model for molecular graph generation.
no code implementations • 9 Feb 2021 • Ryuichiro Hataya, Hideki Nakayama, Kazuki Yoshizoe
It is common practice for chemists to search chemical databases based on substructures of compounds in order to find molecules with desired properties.
no code implementations • 5 Feb 2021 • Hong Chen, Yifei HUANG, Hiroya Takamura, Hideki Nakayama
To enrich the candidate concepts, a commonsense knowledge graph is created for each image sequence, from which the concept candidates are proposed.
no code implementations • INLG (ACL) 2021 • Hong Chen, Raphael Shu, Hiroya Takamura, Hideki Nakayama
In this paper, we focus on planning a sequence of events assisted by event graphs, and use the events to guide the generator.
no code implementations • 1 Jan 2021 • Ryuichiro Hataya, Hideki Nakayama
Convolutional Neural Networks (CNNs) are vulnerable to unseen noise on input images at the test time, and thus improving the robustness is crucial.
no code implementations • COLING 2020 • Tetsuro Nishihara, Akihiro Tamura, Takashi Ninomiya, Yutaro Omote, Hideki Nakayama
This paper proposes a supervised visual attention mechanism for multimodal neural machine translation (MNMT), trained with constraints based on manual alignments between words in a sentence and their corresponding regions of an image.
no code implementations • 30 Sep 2020 • Yuxuan Wu, Hideki Nakayama
To overcome this problem, existing work has either included ground-truth programs in the training data or applied reinforcement learning to explore the space of programs.
no code implementations • 24 Jul 2020 • Changhee Han, Leonardo Rundo, Kohei Murao, Tomoyuki Noguchi, Yuki Shimahara, Zoltan Adam Milacski, Saori Koshino, Evis Sala, Hideki Nakayama, Shinichi Satoh
Therefore, we propose the unsupervised Medical Anomaly Detection Generative Adversarial Network (MADGAN), a novel two-step method that uses GAN-based reconstruction of multiple adjacent brain MRI slices to detect brain anomalies at different stages on multi-sequence structural MRI: (Reconstruction) a model, trained with Wasserstein loss with Gradient Penalty plus 100 × L1 loss on 3 healthy brain axial MRI slices to reconstruct the next 3, reconstructs unseen healthy/abnormal scans; (Diagnosis) the average L2 loss per scan discriminates them by comparing ground-truth and reconstructed slices.
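A minimal sketch of the (Diagnosis) step described above, assuming reconstructed slices from the trained model are already available; the average L2 loss per scan serves as the anomaly score.

```python
import numpy as np

def anomaly_score(real_slices: np.ndarray, recon_slices: np.ndarray) -> float:
    """Average L2 loss over a scan's slices; higher values suggest an anomaly."""
    diffs = (real_slices - recon_slices).reshape(len(real_slices), -1)
    per_slice_l2 = np.sqrt((diffs ** 2).sum(axis=1))
    return float(per_slice_l2.mean())

# Stand-in data: one scan with 30 axial slices of 256x256 pixels.
real = np.random.rand(30, 256, 256)
recon = np.random.rand(30, 256, 256)
print(anomaly_score(real, recon))
```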
no code implementations • 14 Jun 2020 • Ryuichiro Hataya, Jan Zdenek, Kazuki Yoshizoe, Hideki Nakayama
Data augmentation policies drastically improve the performance of image recognition tasks, especially when the policies are optimized for the target data and tasks.
no code implementations • ACL 2020 • Ryosuke Kuwabara, Jun Suzuki, Hideki Nakayama
Model ensemble techniques often increase task performance in neural networks; however, they require increased time, memory, and management effort.
no code implementations • LREC 2020 • Hideki Nakayama, Akihiro Tamura, Takashi Ninomiya
To verify our dataset, we performed phrase localization experiments in both languages and investigated the effectiveness of our Japanese annotations as well as multilingual learning realized by our dataset.
no code implementations • 12 Jan 2020 • Changhee Han, Leonardo Rundo, Kohei Murao, Takafumi Nemoto, Hideki Nakayama
Then, a questionnaire survey for physicians evaluates our pathology-aware Generative Adversarial Network (GAN)-based image augmentation projects in terms of Data Augmentation and physician training.
1 code implementation • TACL 2020 • Noriki Nishida, Hideki Nakayama
In this paper, we introduce an unsupervised discourse constituency parsing algorithm.
no code implementations • ICLR 2020 • Ikki Kishida, Hideki Nakayama
From the viewpoint of the regular convolution's receptive field (RF), the outputs of neurons in lower layers with smaller RFs are bundled to create neurons in higher layers with larger RFs.
no code implementations • ICLR 2019 • Ikki Kishida, Hideki Nakayama
In this work, we study the similarities of easy and hard examples respectively for different Convolutional Neural Network (CNN) architectures, assessing how those examples contribute to generalization.
1 code implementation • ECCV 2020 • Ryuichiro Hataya, Jan Zdenek, Kazuki Yoshizoe, Hideki Nakayama
In this paper, we propose a differentiable policy search pipeline for data augmentation, which is much faster than previous methods.
Ranked #1 on Data Augmentation on CIFAR-10
1 code implementation • 20 Aug 2019 • Raphael Shu, Jason Lee, Hideki Nakayama, Kyunghyun Cho
By decoding multiple initial latent variables in parallel and rescoring with a teacher model, the proposed model further brings the gap down to 1.0 BLEU point on the WMT'14 En-De task with a 6.8x speedup.
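A hypothetical sketch of the decode-then-rescore idea mentioned above; `decode` and `teacher_score` stand in for the model components and are not the paper's actual API.

```python
def best_translation(source, latents, decode, teacher_score):
    """Decode each initial latent variable (in parallel in practice) and keep
    the candidate that the teacher model scores highest."""
    candidates = [decode(source, z) for z in latents]
    return max(candidates, key=lambda hyp: teacher_score(source, hyp))
```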
no code implementations • ACL 2019 • Raphael Shu, Hideki Nakayama, Kyunghyun Cho
In this work, we attempt to obtain diverse translations by using sentence codes to condition the sentence generation.
no code implementations • 14 Jun 2019 • Changhee Han, Leonardo Rundo, Kohei Murao, Zoltán Ádám Milacski, Kazuki Umemoto, Evis Sala, Hideki Nakayama, Shin'ichi Satoh
Unsupervised learning can discover various unseen diseases, relying on large-scale unannotated medical images of healthy subjects.
no code implementations • 12 Jun 2019 • Changhee Han, Yoshiro Kitamura, Akira Kudo, Akimichi Ichinose, Leonardo Rundo, Yujiro Furukawa, Kazuki Umemoto, Yuanzhong Li, Hideki Nakayama
Accurate Computer-Assisted Diagnosis, relying on large-scale annotated pathological images, can alleviate the risk of overlooking the diagnosis.
1 code implementation • NAACL 2019 • Jiali Yao, Raphael Shu, Xinjian Li, Katsutoshi Ohtsuki, Hideki Nakayama
An input method editor (IME) converts sequential alphabet key inputs into words in a target language.
no code implementations • 31 May 2019 • Changhee Han, Leonardo Rundo, Ryosuke Araki, Yudai Nagano, Yujiro Furukawa, Giancarlo Mauri, Hideki Nakayama, Hideaki Hayashi
In this context, Generative Adversarial Networks (GANs) can synthesize realistic/diverse additional training images to fill gaps in the real image distribution; researchers have improved classification by augmenting data with noise-to-image GANs (e.g., random noise samples to diverse pathological images) or image-to-image GANs (e.g., a benign image to a malignant one).
no code implementations • ICLR 2019 • Ryuichiro Hataya, Hideki Nakayama
Deep convolutional neural networks (CNNs) are known to be robust against label noise on extensive datasets.
no code implementations • 17 Apr 2019 • Leonardo Rundo, Changhee Han, Yudai Nagano, Jin Zhang, Ryuichiro Hataya, Carmelo Militello, Andrea Tangherloni, Marco S. Nobile, Claudio Ferretti, Daniela Besozzi, Maria Carla Gilardi, Salvatore Vitabile, Giancarlo Mauri, Hideki Nakayama, Paolo Cazzaniga
The following mixed scheme is used for training/testing: (i) training on either each individual dataset or multiple prostate MRI datasets and (ii) testing on all three datasets with all possible training/testing combinations.
no code implementations • 29 Mar 2019 • Changhee Han, Kohei Murao, Shin'ichi Satoh, Hideki Nakayama
Convolutional Neural Network (CNN)-based accurate prediction typically requires large-scale annotated training data.
no code implementations • 29 Mar 2019 • Changhee Han, Leonardo Rundo, Ryosuke Araki, Yujiro Furukawa, Giancarlo Mauri, Hideki Nakayama, Hideaki Hayashi
Due to the lack of available annotated medical images, accurate computer-assisted diagnosis requires intensive Data Augmentation (DA) techniques, such as geometric/intensity transformations of original images; however, those transformed images intrinsically have a similar distribution to the original ones, leading to limited performance improvement.
no code implementations • 29 Mar 2019 • Leonardo Rundo, Changhee Han, Jin Zhang, Ryuichiro Hataya, Yudai Nagano, Carmelo Militello, Claudio Ferretti, Marco S. Nobile, Andrea Tangherloni, Maria Carla Gilardi, Salvatore Vitabile, Hideki Nakayama, Giancarlo Mauri
Prostate cancer is the most common cancer among US men.
no code implementations • ICLR Workshop LLD 2019 • Ryuichiro Hataya, Hideki Nakayama
In this study, we consider learning from bi-quality data as a generalization of these studies, in which a small portion of data is cleanly labeled, and the rest is corrupt.
no code implementations • 26 Feb 2019 • Changhee Han, Kohei Murao, Tomoyuki Noguchi, Yusuke Kawata, Fumiya Uchiyama, Leonardo Rundo, Hideki Nakayama, Shin'ichi Satoh
Accurate Computer-Assisted Diagnosis, associated with proper data wrangling, can alleviate the risk of overlooking the diagnosis in a clinical environment.
no code implementations • ICLR 2019 • Jiali Yao, Raphael Shu, Xinjian Li, Katsutoshi Ohtsuki, Hideki Nakayama
The input method is an essential service on mobile and desktop devices, providing text suggestions.
3 code implementations • 16 Oct 2018 • Hong Chen, Yifei HUANG, Hideki Nakayama
Object co-segmentation is the task of segmenting the same objects from multiple images.
no code implementations • 27 Sep 2018 • Raphael Shu, Hideki Nakayama
Planning is important for humans when producing complex language, but it is missing from current language generation models.
no code implementations • 14 Aug 2018 • Raphael Shu, Hideki Nakayama
Structural planning is important for producing long sentences, but it is missing from current language generation models.
no code implementations • ACL 2018 • Raphael Shu, Hideki Nakayama
However, as the algorithm produces hypotheses in a monotonic left-to-right order, a hypothesis cannot be revisited once it is discarded.
no code implementations • WS 2018 • Noriki Nishida, Hideki Nakayama
The research described in this paper examines how to learn linguistic knowledge associated with discourse relations from unlabeled corpora.
Implicit Discourse Relation Classification • Transfer Learning
no code implementations • 3 Jan 2018 • Masaya Abe, Hideki Nakayama
Many studies have applied machine learning techniques, including neural networks, to predict stock returns.
no code implementations • 20 Nov 2017 • Jiren Jin, Richard G. Calland, Takeru Miyato, Brian K. Vogel, Hideki Nakayama
Unsupervised domain adaptation (UDA) aims to utilize labeled data from a source domain to learn a model that generalizes to a target domain of unlabeled data.
3 code implementations • ICLR 2018 • Raphael Shu, Hideki Nakayama
For each word, the composition of basis vectors is determined by a hash code.
Ranked #10 on Machine Translation on IWSLT2015 German-English
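A minimal sketch of composing a word embedding from basis vectors selected by a hash code, as the entry above describes; the codebook sizes and the random codebooks are assumptions for illustration, not the learned parameters.

```python
import numpy as np

M, K, D = 8, 16, 64                 # assumed: M codebooks, K basis vectors each, dimension D
codebooks = np.random.randn(M, K, D)

def compose_embedding(hash_code):
    """hash_code: length-M sequence of integers in [0, K); returns a D-dim vector."""
    return sum(codebooks[m, c] for m, c in enumerate(hash_code))

word_code = (3, 15, 0, 7, 9, 2, 11, 4)     # hypothetical code for one word
print(compose_embedding(word_code).shape)  # (64,)
```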
1 code implementation • IJCNLP 2017 • Noriki Nishida, Hideki Nakayama
The research question we explore in this study is how to obtain syntactically plausible word representations without using human annotations.
1 code implementation • 6 Jul 2017 • Raphael Shu, Hideki Nakayama
Neural machine translation models rely on the beam search algorithm for decoding.
no code implementations • 11 Apr 2017 • Raphael Shu, Hideki Nakayama
Sequence generation models have long relied on the beam search algorithm to generate output sequences.
no code implementations • WS 2017 • Raphael Shu, Hideki Nakayama
Recently, the attention mechanism has played a key role in achieving high performance in Neural Machine Translation models.
no code implementations • COLING 2016 • Natsuda Laokulrat, Sang Phan, Noriki Nishida, Raphael Shu, Yo Ehara, Naoaki Okazaki, Yusuke Miyao, Hideki Nakayama
Automatic video description generation has recently been getting attention after rapid advancement in image caption generation.
no code implementations • 14 Nov 2016 • Hideki Nakayama, Noriki Nishida
We propose an approach to build a neural machine translation system with no supervised resources (i.e., no parallel corpora) using multimodal embedded representation over texts and images.
1 code implementation • 18 Apr 2016 • Jiren Jin, Hideki Nakayama
In addition to comparing our model with existing methods using the conventional top-k evaluation measures, we also provide our model as a high-quality baseline for the arbitrary-length image tagging task.