Search Results for author: Hidetaka Kamigaito

Found 35 papers, 9 papers with code

SODA: Story Oriented Dense Video Captioning Evaluation Framework

1 code implementation · ECCV 2020 · Soichiro Fujita, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

This paper proposes a new evaluation framework, Story Oriented Dense video cAptioning evaluation framework (SODA), for measuring the performance of video story description systems.

Dense Video Captioning

Abstractive Document Summarization with Word Embedding Reconstruction

no code implementations · RANLP 2021 · Jingyi You, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

Neural sequence-to-sequence (Seq2Seq) models and BERT have achieved substantial improvements in abstractive document summarization (ADS) without and with pre-training, respectively.

Document Summarization · Word Embeddings

Making Your Tweets More Fancy: Emoji Insertion to Texts

no code implementations · RANLP 2021 · Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

The results demonstrate that the position of emojis in texts provides a useful clue for boosting the performance of emoji label prediction.

Improving Character-Aware Neural Language Model by Warming up Character Encoder under Skip-gram Architecture

no code implementations · RANLP 2021 · Yukun Feng, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

Character-aware neural language models can capture the relationship between words by exploiting character-level information and are particularly effective for languages with rich morphology.

Language Modelling
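As a rough illustration of the idea behind this entry, the sketch below composes word vectors from character vectors, so that morphologically related words share parameters. The vocabulary, dimension, random initialization, and summation-based composition are all illustrative assumptions, not the paper's trained character encoder.

```python
import numpy as np

# Minimal sketch: build each word's vector from its characters.
# The paper warms up a trained character encoder; a plain sum of
# character embeddings stands in for it here.

rng = np.random.default_rng(0)
DIM = 8
char_emb = {c: rng.normal(size=DIM) for c in "abcdefghijklmnopqrstuvwxyz"}

def word_vector(word):
    """Compose a word representation from its character vectors."""
    return sum(char_emb[c] for c in word)

# Morphologically related words share characters, so their
# composed vectors are correlated.
v1, v2 = word_vector("walk"), word_vector("walking")
cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(f"cosine(walk, walking) = {cos:.2f}")
```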

A Language Model-based Generative Classifier for Sentence-level Discourse Parsing

no code implementations · EMNLP 2021 · Ying Zhang, Hidetaka Kamigaito, Manabu Okumura

Discourse segmentation and sentence-level discourse parsing play important roles for various NLP tasks to consider textual coherence.

Discourse Parsing Language Modelling

Why does Negative Sampling not Work Well? Analysis of Convexity in Negative Sampling

no code implementations · 29 Sep 2021 · Hidetaka Kamigaito, Katsuhiko Hayashi

Properties of the negative sampling (NS) loss function that are considered important for learning, such as the relationship between the noise distribution and the number of negative samples, have not been investigated theoretically.

Knowledge Graph Embedding
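For context, a generic form of the NS objective (as popularized by word2vec and widely used in knowledge graph embedding) is shown below; the scoring function s, noise distribution p_n, and sample count are generic symbols and may differ from the exact formulation analyzed in the paper.

\[
\ell_{\mathrm{NS}}(x, y) = -\log \sigma\big(s(x, y)\big) - \sum_{i=1}^{\nu} \mathbb{E}_{y'_i \sim p_n}\!\left[ \log \sigma\big(-s(x, y'_i)\big) \right]
\]

where \(\sigma\) is the sigmoid function, \(s(x, y)\) scores an observed pair, \(p_n\) is the noise distribution, and \(\nu\) is the number of negative samples per positive example.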

Towards Table-to-Text Generation with Numerical Reasoning

1 code implementation · ACL 2021 · Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura, Hiroya Takamura

In summary, our contributions are (1) a new dataset for numerical table-to-text generation using pairs of a table and a paragraph of a table description with richer inference from scientific papers, and (2) a table-to-text generation framework enriched with numerical reasoning.

Fine-tuning · Table-to-Text Generation

An Empirical Study of Generating Texts for Search Engine Advertising

no code implementations · NAACL 2021 · Hidetaka Kamigaito, Peinan Zhang, Hiroya Takamura, Manabu Okumura

Although there are many studies on neural language generation (NLG), few have been tried in real-world settings, especially in the advertising domain.

Text Generation

A New Surprise Measure for Extracting Interesting Relationships between Persons

no code implementations · EACL 2021 · Hidetaka Kamigaito, Jingun Kwon, Young-In Song, Manabu Okumura

We propose a method for extracting interesting relationships between persons from natural language texts by focusing on their surprisingness.

Generating Weather Comments from Meteorological Simulations

1 code implementation · EACL 2021 · Soichiro Murakami, Sora Tanaka, Masatsugu Hangyo, Hidetaka Kamigaito, Kotaro Funakoshi, Hiroya Takamura, Manabu Okumura

The task of generating weather-forecast comments from meteorological simulations has the following requirements: (i) the changes in numerical values for various physical quantities need to be considered, (ii) the weather comments should be dependent on delivery time and area information, and (iii) the comments should provide useful information for users.

Top-Down RST Parsing Utilizing Granularity Levels in Documents

1 code implementation · 3 Apr 2020 · Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

To obtain better discourse dependency trees, we need to improve the accuracy of RST trees at the upper parts of the structures.

Discourse Parsing

Syntactically Look-Ahead Attention Network for Sentence Compression

1 code implementation · 4 Feb 2020 · Hidetaka Kamigaito, Manabu Okumura

Sentence compression is the task of compressing a long sentence into a short one by deleting redundant words.

Sentence Compression
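To make the task framing concrete, here is a minimal sketch of deletion-based compression as binary keep/delete tagging over tokens. The sentence and gold labels are hand-written for illustration; this is the task definition only, not the paper's look-ahead attention model.

```python
# Deletion-based sentence compression as binary token tagging:
# keep tokens labeled 1, delete tokens labeled 0.

def compress(tokens, keep_labels):
    """Return the compressed sentence formed by the kept tokens."""
    return " ".join(t for t, k in zip(tokens, keep_labels) if k == 1)

sentence = "the company said on Monday that it will cut jobs".split()
labels   = [1, 1, 1, 0, 0, 0, 1, 1, 1, 1]  # hypothetical gold labels
print(compress(sentence, labels))  # -> "the company said it will cut jobs"
```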

Context-aware Neural Machine Translation with Coreference Information

no code implementations · WS 2019 · Takumi Ohtani, Hidetaka Kamigaito, Masaaki Nagata, Manabu Okumura

We present neural machine translation models that translate a sentence in a text by using a graph-based encoder that can explicitly consider coreference relations provided within the text.

Machine Translation · Translation
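As a rough sketch of the kind of input a graph-based encoder can consume, the snippet below builds a token graph whose adjacency matrix combines sequential edges with a coreference edge. The sentence, mention indices, and link are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Encode coreference links as extra edges in a token graph.

tokens = ["Mary", "said", "she", "would", "come"]
coref_links = [(0, 2)]  # hypothetical link: "Mary" <-> "she"

n = len(tokens)
adj = np.eye(n)                      # self-loops
for i in range(n - 1):               # sequential (word-order) edges
    adj[i, i + 1] = adj[i + 1, i] = 1
for i, j in coref_links:             # coreference edges
    adj[i, j] = adj[j, i] = 1

print(adj)
```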

Split or Merge: Which is Better for Unsupervised RST Parsing?

no code implementations · IJCNLP 2019 · Naoki Kobayashi, Tsutomu Hirao, Kengo Nakamura, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

The first of the two unsupervised methods builds the optimal tree in terms of a dissimilarity score function that is defined for splitting a text span into smaller ones.
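To illustrate the top-down splitting idea, here is a minimal sketch that recursively splits a span at the boundary maximizing a dissimilarity score. The toy score (difference of span means over scalar stand-in features) is illustrative only, not the paper's score function.

```python
# Top-down unsupervised tree building: recursively split a span
# at the boundary with the highest dissimilarity between the two
# resulting sub-spans.

def dissimilarity(feats, i, j, k):
    """Toy dissimilarity between spans [i, k) and [k, j)."""
    left = sum(feats[i:k]) / (k - i)
    right = sum(feats[k:j]) / (j - k)
    return abs(left - right)

def build_tree(feats, i=0, j=None):
    if j is None:
        j = len(feats)
    if j - i == 1:          # single unit: leaf
        return i
    # choose the split point with the highest dissimilarity
    k = max(range(i + 1, j), key=lambda k: dissimilarity(feats, i, j, k))
    return (build_tree(feats, i, k), build_tree(feats, k, j))

print(build_tree([0.1, 0.2, 0.9, 1.0]))  # -> ((0, 1), (2, 3))
```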

Discourse-Aware Hierarchical Attention Network for Extractive Single-Document Summarization

no code implementations · RANLP 2019 · Tatsuya Ishigaki, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

To incorporate the information of a discourse tree structure into the neural network-based summarizers, we propose a discourse-aware neural extractive summarizer which can explicitly take into account the discourse dependency tree structure of the source document.

Document Summarization

Higher-Order Syntactic Attention Network for Longer Sentence Compression

no code implementations · NAACL 2018 · Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Masaaki Nagata

To solve this problem, we propose a higher-order syntactic attention network (HiSAN) that can handle higher-order dependency features as an attention distribution on LSTM hidden states.

Machine Translation · Sentence Compression
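As a generic illustration of treating a probability distribution over candidate heads as attention weights over hidden states, the sketch below normalizes scores with a softmax and takes a weighted sum. The scores and vectors are random stand-ins; this is not the HiSAN architecture itself.

```python
import numpy as np

# Attention as a distribution over hidden states: softmax the
# head scores, then form an attention-weighted summary vector.

rng = np.random.default_rng(0)
T, D = 5, 4
hidden = rng.normal(size=(T, D))   # LSTM hidden states (stand-in)
head_scores = rng.normal(size=T)   # scores for candidate heads (stand-in)

attn = np.exp(head_scores)
attn /= attn.sum()                 # softmax -> attention distribution
context = attn @ hidden            # weighted sum of hidden states
print(attn.round(3), context.round(3))
```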
