Search Results for author: Manabu Okumura

Found 71 papers, 15 papers with code

SODA: Story Oriented Dense Video Captioning Evaluation Framework

1 code implementation ECCV 2020 Soichiro Fujita, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

This paper proposes a new evaluation framework, Story Oriented Dense video cAptioning evaluation framework (SODA), for measuring the performance of video story description systems.

Dense Video Captioning

A-TIP: Attribute-aware Text Infilling via Pre-trained Language Model

no code implementations COLING 2022 Dongyuan Li, Jingyi You, Kotaro Funakoshi, Manabu Okumura

Text infilling aims to restore incomplete texts by filling in blanks, which has attracted more attention recently because of its wide application in ancient text restoration and text rewriting.

Ancient Text Restoration Attribute +2

JPG - Jointly Learn to Align: Automated Disease Prediction and Radiology Report Generation

no code implementations COLING 2022 Jingyi You, Dongyuan Li, Manabu Okumura, Kenji Suzuki

Automated radiology report generation aims to generate paragraphs that describe fine-grained visual differences among cases, especially those between the normal and the diseased.

Disease Prediction Image Captioning +1

Joint Learning-based Heterogeneous Graph Attention Network for Timeline Summarization

no code implementations NAACL 2022 Jingyi You, Dongyuan Li, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura

Previous studies on the timeline summarization (TLS) task ignored the information interaction between sentences and dates, and adopted pre-defined unlearnable representations for them.

Event Detection Graph Attention +1

Abstractive Document Summarization with Word Embedding Reconstruction

no code implementations RANLP 2021 Jingyi You, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

Neural sequence-to-sequence (Seq2Seq) models and BERT have achieved substantial improvements in abstractive document summarization (ADS) without and with pre-training, respectively.

Document Summarization Word Embeddings

Making Your Tweets More Fancy: Emoji Insertion to Texts

no code implementations RANLP 2021 Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

The results demonstrate that the position of emojis in texts is a good clue to boost the performance of emoji label prediction.

Position

Improving Character-Aware Neural Language Model by Warming up Character Encoder under Skip-gram Architecture

no code implementations RANLP 2021 Yukun Feng, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

Character-aware neural language models can capture the relationship between words by exploiting character-level information and are particularly effective for languages with rich morphology.

Language Modelling

Can we obtain significant success in RST discourse parsing by using Large Language Models?

1 code implementation 8 Mar 2024 Aru Maekawa, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura

Recently, decoder-only pre-trained large language models (LLMs), with several tens of billion parameters, have significantly impacted a wide range of natural language processing (NLP) tasks.

Discourse Parsing

MRKE: The Multi-hop Reasoning Evaluation of LLMs by Knowledge Edition

no code implementations 19 Feb 2024 Jian Wu, Linyi Yang, Manabu Okumura, Yue Zhang

Although Large Language Models (LLMs) have shown strong performance in Multi-hop Question Answering (MHQA) tasks, their real reasoning ability remains underexplored.

Multi-hop Question Answering Question Answering

Joyful: Joint Modality Fusion and Graph Contrastive Learning for Multimodal Emotion Recognition

no code implementations 18 Nov 2023 Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura

In this paper, we propose a method for joint modality fusion and graph contrastive learning for multimodal emotion recognition (Joyful), where multimodality fusion, contrastive learning, and emotion recognition are jointly optimized.

Contrastive Learning Multimodal Emotion Recognition

All Data on the Table: Novel Dataset and Benchmark for Cross-Modality Scientific Information Extraction

no code implementations 14 Nov 2023 Yuhan Li, Jian Wu, Zhiwei Yu, Börje F. Karlsson, Wei Shen, Manabu Okumura, Chin-Yew Lin

To close this gap in data availability and enable cross-modality IE, while alleviating labeling costs, we propose a semi-supervised pipeline for annotating entities in text, as well as entities and relations in tables, in an iterative procedure.

Active Learning Based Fine-Tuning Framework for Speech Emotion Recognition

no code implementations 30 Sep 2023 Dongyuan Li, Yusong Wang, Kotaro Funakoshi, Manabu Okumura

However, existing SER methods ignore the information gap between the pre-training speech recognition task and the downstream SER task, leading to sub-optimal performance.

Active Learning Speech Emotion Recognition +2

Automatic Answerability Evaluation for Question Generation

no code implementations 22 Sep 2023 Zifan Wang, Kotaro Funakoshi, Manabu Okumura

This work proposes PMAN (Prompting-based Metric on ANswerability), a novel automatic evaluation metric to assess whether the generated questions are answerable by the reference answers for the QG tasks.

Question Generation Question-Generation +1

Focused Prefix Tuning for Controllable Text Generation

no code implementations 1 Jun 2023 Congda Ma, Tianyu Zhao, Makoto Shing, Kei Sawada, Manabu Okumura

In a controllable text generation dataset, there exist unannotated attributes that could provide irrelevant learning signals to models that use it for training and thus degrade their performance.

Attribute Text Generation

LATTE: Lattice ATTentive Encoding for Character-based Word Segmentation

2 code implementations Journal of Natural Language Processing 2023 Thodsaporn Chay-intr, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura

Our model employs the lattice structure to handle segmentation alternatives and utilizes graph neural networks along with an attention mechanism to attentively extract multi-granularity representation from the lattice for complementing character representations.

Ranked #1 on Chinese Word Segmentation on CTB6 (using extra training data)

Chinese Word Segmentation Japanese Word Segmentation +2

TACR: A Table-alignment-based Cell-selection and Reasoning Model for Hybrid Question-Answering

no code implementations 24 May 2023 Jian Wu, Yicheng Xu, Yan Gao, Jian-Guang Lou, Börje F. Karlsson, Manabu Okumura

A common challenge in HQA and other passage-table QA datasets is that it is generally unrealistic to iterate over all table rows, columns, and linked passages to retrieve evidence.

Question Answering Retrieval

Bidirectional Transformer Reranker for Grammatical Error Correction

1 code implementation 22 May 2023 Ying Zhang, Hidetaka Kamigaito, Manabu Okumura

Pre-trained seq2seq models have achieved state-of-the-art results in the grammatical error correction task.

Grammatical Error Correction Language Modelling +2

A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing

1 code implementation 15 Oct 2022 Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

To promote and further develop RST-style discourse parsing models, we need a strong baseline that can be regarded as a reference for reporting reliable experimental results.

Discourse Parsing

Aspect-based Analysis of Advertising Appeals for Search Engine Advertising

no code implementations NAACL (ACL) 2022 Soichiro Murakami, Peinan Zhang, Sho Hoshino, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

Writing an ad text that attracts people and persuades them to click or act is essential for the success of search engine advertising.

FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling

2 code implementations NeurIPS 2021 BoWen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, Takahiro Shinozaki

However, like other modern SSL algorithms, FixMatch uses a pre-defined constant threshold for all classes to select the unlabeled data that contribute to training, thus failing to account for the different learning statuses and learning difficulties of different classes.

Semi-Supervised Image Classification
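The snippet above contrasts FixMatch's single constant confidence threshold with per-class thresholds that track how well each class has been learned. A minimal illustrative sketch of that selection rule follows; it is not the authors' implementation, and the function name, `base_threshold` value, and `class_status` scaling are assumptions chosen for illustration.

```python
import numpy as np

def select_pseudo_labels(probs, base_threshold=0.95, class_status=None):
    """Keep predictions whose confidence exceeds a (possibly per-class)
    threshold. `probs` holds softmax outputs for a batch of unlabeled
    examples. `class_status` in [0, 1] estimates how well each class has
    been learned; scaling the threshold by it mimics the curriculum-style
    per-class thresholds, while `class_status=None` mimics a single
    FixMatch-style constant threshold."""
    preds = probs.argmax(axis=1)       # hard pseudo-labels
    conf = probs.max(axis=1)           # confidence of each prediction
    if class_status is None:
        thresholds = np.full(probs.shape[1], base_threshold)
    else:
        thresholds = base_threshold * np.asarray(class_status)
    mask = conf > thresholds[preds]    # per-example threshold by class
    return preds[mask], np.nonzero(mask)[0]

# With a lowered threshold for the less-learned class 1, its confident
# prediction is kept; a constant 0.95 threshold would have rejected it.
probs = np.array([[0.97, 0.03], [0.60, 0.40], [0.10, 0.90]])
labels, kept = select_pseudo_labels(probs, class_status=[1.0, 0.85])
```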

Towards Table-to-Text Generation with Numerical Reasoning

1 code implementation ACL 2021 Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura, Hiroya Takamura

In summary, our contributions are (1) a new dataset for numerical table-to-text generation using pairs of a table and a paragraph of a table description with richer inference from scientific papers, and (2) a table-to-text generation framework enriched with numerical reasoning.

Descriptive Table-to-Text Generation

An Empirical Study of Generating Texts for Search Engine Advertising

no code implementations NAACL 2021 Hidetaka Kamigaito, Peinan Zhang, Hiroya Takamura, Manabu Okumura

Although there are many studies on neural language generation (NLG), few trials are put into the real world, especially in the advertising domain.

Text Generation

Generating Weather Comments from Meteorological Simulations

1 code implementation EACL 2021 Soichiro Murakami, Sora Tanaka, Masatsugu Hangyo, Hidetaka Kamigaito, Kotaro Funakoshi, Hiroya Takamura, Manabu Okumura

The task of generating weather-forecast comments from meteorological simulations has the following requirements: (i) the changes in numerical values for various physical quantities need to be considered, (ii) the weather comments should be dependent on delivery time and area information, and (iii) the comments should provide useful information for users.

Informativeness

A New Surprise Measure for Extracting Interesting Relationships between Persons

no code implementations EACL 2021 Hidetaka Kamigaito, Jingun Kwon, Young-In Song, Manabu Okumura

We therefore propose a method for extracting interesting relationships between persons from natural language texts by focusing on their surprisingness.

Diverse and Non-redundant Answer Set Extraction on Community QA based on DPPs

no code implementations COLING 2020 Shogo Fujita, Tomohide Shibata, Manabu Okumura

In community-based question answering (CQA) platforms, it takes time for a user to get useful information from among many answers.

Point Processes Question Answering

Top-Down RST Parsing Utilizing Granularity Levels in Documents

1 code implementation 3 Apr 2020 Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

To obtain better discourse dependency trees, we need to improve the accuracy of RST trees at the upper parts of the structures.

Discourse Parsing Relation

Syntactically Look-Ahead Attention Network for Sentence Compression

1 code implementation 4 Feb 2020 Hidetaka Kamigaito, Manabu Okumura

Sentence compression is the task of compressing a long sentence into a short one by deleting redundant words.

Informativeness Sentence +1

Split or Merge: Which is Better for Unsupervised RST Parsing?

no code implementations IJCNLP 2019 Naoki Kobayashi, Tsutomu Hirao, Kengo Nakamura, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

The first one builds the optimal tree in terms of a dissimilarity score function that is defined for splitting a text span into smaller ones.

Context-aware Neural Machine Translation with Coreference Information

no code implementations WS 2019 Takumi Ohtani, Hidetaka Kamigaito, Masaaki Nagata, Manabu Okumura

We present neural machine translation models for translating a sentence in a text by using a graph-based encoder which can consider coreference relations provided within the text explicitly.

Machine Translation Sentence +1

Discourse-Aware Hierarchical Attention Network for Extractive Single-Document Summarization

no code implementations RANLP 2019 Tatsuya Ishigaki, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

To incorporate the information of a discourse tree structure into the neural network-based summarizers, we propose a discourse-aware neural extractive summarizer which can explicitly take into account the discourse dependency tree structure of the source document.

Document Summarization Sentence

Global Optimization under Length Constraint for Neural Text Summarization

no code implementations ACL 2019 Takuya Makino, Tomoya Iwakura, Hiroya Takamura, Manabu Okumura

The experimental results show that a state-of-the-art neural summarization model optimized with GOLC generates fewer overlength summaries while maintaining the fastest processing speed: only 6.70% overlength summaries on CNN/Daily Mail and 7.8% on the long summaries of Mainichi, compared with approximately 20% to 50% on CNN/Daily Mail and 10% to 30% on Mainichi for the other optimization methods.

Document Summarization

Dataset Creation for Ranking Constructive News Comments

no code implementations ACL 2019 Soichiro Fujita, Hayato Kobayashi, Manabu Okumura

Ranking comments on an online news service is a practically important task for the service provider, and thus there have been many studies on this task.

A Large-Scale Multi-Length Headline Corpus for Analyzing Length-Constrained Headline Generation Model Evaluation

no code implementations WS 2019 Yuta Hitomi, Yuya Taguchi, Hideaki Tamori, Ko Kikuta, Jiro Nishitoba, Naoaki Okazaki, Kentaro Inui, Manabu Okumura

However, because there is no corpus of headlines of multiple lengths for a given article, previous research on controlling output length in headline generation has not discussed whether the system outputs could be adequately evaluated without multiple references of different lengths.

Headline Generation

Neural Machine Translation Incorporating Named Entity

no code implementations COLING 2018 Arata Ugawa, Akihiro Tamura, Takashi Ninomiya, Hiroya Takamura, Manabu Okumura

To alleviate these problems, the encoder of the proposed model encodes the input word on the basis of its NE tag at each time step, which could reduce the ambiguity of the input word.

Machine Translation NMT +3

Distinguishing Japanese Non-standard Usages from Standard Ones

no code implementations EMNLP 2017 Tatsuya Aoki, Ryohei Sasano, Hiroya Takamura, Manabu Okumura

Our experimental results show that the model leveraging the context embedding outperforms other methods and provide us with findings, for example, on how to construct context embeddings and which corpus to use.

Machine Translation Word Embeddings

Japanese Sentence Compression with a Large Training Dataset

no code implementations ACL 2017 Shun Hasegawa, Yuta Kikuchi, Hiroya Takamura, Manabu Okumura

In English, high-quality sentence compression models by deleting words have been trained on automatically created large training datasets.

Sentence Sentence Compression
