Search Results for author: Tsutomu Hirao

Found 31 papers, 5 papers with code

SODA: Story Oriented Dense Video Captioning Evaluation Framework

1 code implementation • ECCV 2020 • Soichiro Fujita, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

This paper proposes a new evaluation framework, Story Oriented Dense video cAptioning evaluation framework (SODA), for measuring the performance of video story description systems.

Dense Video Captioning

WikiSplit++: Easy Data Refinement for Split and Rephrase

1 code implementation • 13 Apr 2024 • Hayato Tsukagoshi, Tsutomu Hirao, Makoto Morishita, Katsuki Chousa, Ryohei Sasano, Koichi Takeda

The task of Split and Rephrase, which splits a complex sentence into multiple simple sentences with the same meaning, improves readability and enhances the performance of downstream tasks in natural language processing (NLP).

Decoder, Sentence, +2

Can we obtain significant success in RST discourse parsing by using Large Language Models?

1 code implementation • 8 Mar 2024 • Aru Maekawa, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura

Recently, decoder-only pre-trained large language models (LLMs), with several tens of billion parameters, have significantly impacted a wide range of natural language processing (NLP) tasks.

Decoder, Discourse Parsing

A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing

1 code implementation • 15 Oct 2022 • Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

To promote and further develop RST-style discourse parsing models, we need a strong baseline that can be regarded as a reference for reporting reliable experimental results.

Discourse Parsing

Top-Down RST Parsing Utilizing Granularity Levels in Documents

1 code implementation • 3 Apr 2020 • Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

To obtain better discourse dependency trees, we need to improve the accuracy of RST trees at the upper parts of the structures.

Discourse Parsing, Relation

Recovery command generation towards automatic recovery in ICT systems by Seq2Seq learning

no code implementations • 24 Mar 2020 • Hiroki Ikeuchi, Akio Watanabe, Tsutomu Hirao, Makoto Morishita, Masaaki Nishino, Yoichi Matsuo, Keishiro Watanabe

With the increase in scale and complexity of ICT systems, their operation increasingly requires automatic recovery from failures.

Split or Merge: Which is Better for Unsupervised RST Parsing?

no code implementations • IJCNLP 2019 • Naoki Kobayashi, Tsutomu Hirao, Kengo Nakamura, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

The first one builds the optimal tree in terms of a dissimilarity score function that is defined for splitting a text span into smaller ones.

Higher-Order Syntactic Attention Network for Longer Sentence Compression

no code implementations • NAACL 2018 • Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Masaaki Nagata

To solve this problem, we propose a higher-order syntactic attention network (HiSAN) that can handle higher-order dependency features as an attention distribution on LSTM hidden states.

Informativeness, Machine Translation, +2

Pruning Basic Elements for Better Automatic Evaluation of Summaries

no code implementations • NAACL 2018 • Ukyo Honda, Tsutomu Hirao, Masaaki Nagata

We propose a simple but highly effective automatic evaluation measure of summarization, pruned Basic Elements (pBE).

Word Embeddings, Word Similarity

Provable Fast Greedy Compressive Summarization with Any Monotone Submodular Function

no code implementations • NAACL 2018 • Shinsaku Sakaue, Tsutomu Hirao, Masaaki Nishino, Masaaki Nagata

This approach is known to have three advantages: its applicability to many useful submodular objective functions, the efficiency of the greedy algorithm, and the provable performance guarantee.

Document Summarization, Extractive Summarization, +1
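The abstract above refers to the efficiency and performance guarantee of the greedy algorithm for monotone submodular objectives. As a rough illustration only, the following is a minimal cost-scaled greedy sketch; the word-coverage objective and all function names here are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of greedy summarization under a monotone submodular
# objective. Word coverage is used as one illustrative submodular function;
# the paper's objective and compression machinery are not reproduced here.

def coverage(selected, sentences):
    """Monotone submodular objective: number of distinct words covered."""
    words = set()
    for i in selected:
        words.update(sentences[i].split())
    return len(words)

def greedy_summarize(sentences, budget):
    """Greedily pick the sentence with the best marginal gain per unit cost
    (cost = sentence length in words) until the budget is exhausted."""
    selected, length = [], 0
    remaining = set(range(len(sentences)))
    while remaining:
        best, best_ratio = None, 0.0
        base = coverage(selected, sentences)
        for i in sorted(remaining):
            cost = len(sentences[i].split())
            if length + cost > budget:
                continue  # sentence would exceed the length budget
            gain = coverage(selected + [i], sentences) - base
            if gain / cost > best_ratio:
                best, best_ratio = i, gain / cost
        if best is None:
            break  # no feasible sentence improves the objective
        selected.append(best)
        length += len(sentences[best].split())
        remaining.remove(best)
    return selected
```

For example, with sentences `["a b c", "a b", "d e f g"]` and a 7-word budget, the sketch picks the two non-redundant sentences rather than the overlapping pair.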

Oracle Summaries of Compressive Summarization

no code implementations ACL 2017 Tsutomu Hirao, Masaaki Nishino, Masaaki Nagata

This paper derives an Integer Linear Programming (ILP) formulation to obtain an oracle summary of the compressive summarization paradigm in terms of ROUGE.

Sentence Compression

Enumeration of Extractive Oracle Summaries

no code implementations EACL 2017 Tsutomu Hirao, Masaaki Nishino, Jun Suzuki, Masaaki Nagata

To analyze the limitations and future directions of the extractive summarization paradigm, this paper proposes an Integer Linear Programming (ILP) formulation to obtain extractive oracle summaries in terms of ROUGE-N. We also propose an algorithm that enumerates all of the oracle summaries for a set of reference summaries, so that F-measures can be computed that evaluate how many sentences of a system summary also appear in an oracle summary.

Document Understanding, Extractive Summarization
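The oracle objective described above can be made concrete with a small sketch. The paper solves it exactly via ILP; since that requires a solver, the brute-force version below only illustrates the objective (maximize ROUGE-1 recall of a fixed-size sentence subset against a reference). All names and the ROUGE-1 simplification are assumptions for illustration.

```python
# Hypothetical sketch of the extractive-oracle objective: choose the subset
# of source sentences maximizing ROUGE-1 recall against a reference summary.
# Brute force shown for clarity; the paper uses an ILP formulation instead.
from collections import Counter
from itertools import combinations

def rouge1_recall(summary_sents, reference):
    """Clipped unigram recall of the candidate sentences vs. the reference."""
    ref = Counter(reference.split())
    cand = Counter(w for s in summary_sents for w in s.split())
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    return overlap / sum(ref.values())

def oracle_summary(sentences, reference, k):
    """Exhaustively find the k-sentence subset with the best ROUGE-1 recall."""
    best, best_score = None, -1.0
    for subset in combinations(range(len(sentences)), k):
        score = rouge1_recall([sentences[i] for i in subset], reference)
        if score > best_score:
            best, best_score = subset, score
    return list(best), best_score
```

Enumerating all subsets that attain the best score (rather than returning just one) is what makes the F-measure analysis in the abstract possible.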
