Search Results for author: Hidetaka Kamigaito

Found 46 papers, 16 papers with code

Making Your Tweets More Fancy: Emoji Insertion to Texts

no code implementations RANLP 2021 Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

The results demonstrate that the position of emojis in texts is a good clue to boost the performance of emoji label prediction.

Joint Learning-based Heterogeneous Graph Attention Network for Timeline Summarization

no code implementations NAACL 2022 Jingyi You, Dongyuan Li, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura

Previous studies on the timeline summarization (TLS) task ignored the information interaction between sentences and dates, and adopted pre-defined unlearnable representations for them.

Event Detection, Graph Attention, +1

SODA: Story Oriented Dense Video Captioning Evaluation Framework

1 code implementation ECCV 2020 Soichiro Fujita, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

This paper proposes a new evaluation framework, Story Oriented Dense video cAptioning evaluation framework (SODA), for measuring the performance of video story description systems.

Dense Video Captioning

Abstractive Document Summarization with Word Embedding Reconstruction

no code implementations RANLP 2021 Jingyi You, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

Neural sequence-to-sequence (Seq2Seq) models and BERT have achieved substantial improvements in abstractive document summarization (ADS) without and with pre-training, respectively.

Document Summarization, Word Embeddings

Improving Character-Aware Neural Language Model by Warming up Character Encoder under Skip-gram Architecture

no code implementations RANLP 2021 Yukun Feng, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

Character-aware neural language models can capture the relationship between words by exploiting character-level information and are particularly effective for languages with rich morphology.

Language Modelling

Model-based Subsampling for Knowledge Graph Completion

1 code implementation 17 Sep 2023 Xincan Feng, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe

Subsampling is effective in Knowledge Graph Embedding (KGE) for reducing overfitting caused by the sparsity in Knowledge Graph (KG) datasets.

Knowledge Graph Completion, Knowledge Graph Embedding
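As background for this entry: the simplest form of subsampling for sparse KG datasets follows word2vec's frequency-based discard rule. The sketch below is an illustrative frequency-based variant applied to triples, under my own assumptions; the paper itself proposes replacing such counts with model-based estimates, which is not shown here.

```python
import math
import random
from collections import Counter

def subsample_triples(triples, t=1e-3, seed=0):
    """Word2vec-style frequency-based subsampling applied to KG triples
    (an illustrative baseline, not the paper's model-based method).
    Triples whose entities are frequent are kept with probability
    roughly sqrt(t / f), thinning out the head of the distribution."""
    rng = random.Random(seed)
    # Count how often each entity appears as a head or tail.
    freq = Counter()
    for h, r, tail in triples:
        freq[h] += 1
        freq[tail] += 1
    total = sum(freq.values())
    kept = []
    for h, r, tail in triples:
        # Base the keep-probability on the rarer of the two entities.
        f = min(freq[h], freq[tail]) / total
        p_keep = min(1.0, math.sqrt(t / f))
        if rng.random() < p_keep:
            kept.append((h, r, tail))
    return kept

triples = [("A", "likes", "B")] * 50 + [("C", "knows", "D")]
print(len(subsample_triples(triples)))  # frequent triples are heavily thinned
```

With a frequency-based rule like this, rare triples survive at a much higher rate than frequent ones, which is the overfitting-reduction effect the abstract refers to.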

Table and Image Generation for Investigating Knowledge of Entities in Pre-trained Vision and Language Models

1 code implementation 3 Jun 2023 Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe

This task consists of two parts: the first is to generate a table containing knowledge about an entity and its related image, and the second is to generate an image from an entity with a caption and a table containing related knowledge of the entity.

Image Generation

LATTE: Lattice ATTentive Encoding for Character-based Word Segmentation

2 code implementations Journal of Natural Language Processing 2023 Thodsaporn Chay-intr, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura

Our model employs the lattice structure to handle segmentation alternatives and utilizes graph neural networks along with an attention mechanism to attentively extract multi-granularity representation from the lattice for complementing character representations.

Ranked #1 on Chinese Word Segmentation on CTB6 (using extra training data)

Chinese Word Segmentation, Japanese Word Segmentation, +2

Bidirectional Transformer Reranker for Grammatical Error Correction

1 code implementation 22 May 2023 Ying Zhang, Hidetaka Kamigaito, Manabu Okumura

Pre-trained seq2seq models have achieved state-of-the-art results in the grammatical error correction task.

Grammatical Error Correction, Language Modelling, +1

A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing

1 code implementation 15 Oct 2022 Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

To promote and further develop RST-style discourse parsing models, we need a strong baseline that can be regarded as a reference for reporting reliable experimental results.

Discourse Parsing

Subsampling for Knowledge Graph Embedding Explained

no code implementations13 Sep 2022 Hidetaka Kamigaito, Katsuhiko Hayashi

In this article, we explain the recent advance of subsampling methods in knowledge graph embedding (KGE) starting from the original one used in word2vec.

Knowledge Graph Embedding

Comprehensive Analysis of Negative Sampling in Knowledge Graph Representation Learning

1 code implementation 21 Jun 2022 Hidetaka Kamigaito, Katsuhiko Hayashi

To solve this problem, we theoretically analyzed NS loss to assist hyperparameter tuning and understand the better use of the NS loss in KGE learning.

Knowledge Graph Embedding
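For readers unfamiliar with the objective this entry analyzes: the basic negative sampling (NS) loss, in the word2vec form also used by KGE models, scores one positive against several sampled negatives through a sigmoid. The sketch below shows that general objective under my own simplifications (scalar scores, mean over negatives); it is not the paper's derivation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ns_loss(pos_score, neg_scores):
    """Basic NS loss for one positive example (a sketch of the general
    objective the analysis covers, not the paper's exact formulation):
        L = -log sigma(s_pos) - (1/nu) * sum_i log sigma(-s_neg_i)
    where nu is the number of negative samples."""
    nu = len(neg_scores)
    pos_term = -math.log(sigmoid(pos_score))
    neg_term = -sum(math.log(sigmoid(-s)) for s in neg_scores) / nu
    return pos_term + neg_term

# A confidently scored positive with low-scoring negatives yields a small loss.
print(ns_loss(5.0, [-5.0, -4.0, -6.0]))
```

The number of negatives `nu` and the distribution the negatives are drawn from are exactly the hyperparameters whose interaction the paper studies theoretically.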

Aspect-based Analysis of Advertising Appeals for Search Engine Advertising

no code implementations NAACL (ACL) 2022 Soichiro Murakami, Peinan Zhang, Sho Hoshino, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

Writing an ad text that attracts people and persuades them to click or act is essential for the success of search engine advertising.

Why does Negative Sampling not Work Well? Analysis of Convexity in Negative Sampling

no code implementations29 Sep 2021 Hidetaka Kamigaito, Katsuhiko Hayashi

On the other hand, properties of the NS loss function that are considered important for learning, such as the relationship between the noise distribution and the number of negative samples, have not been investigated theoretically.

Knowledge Graph Embedding

Towards Table-to-Text Generation with Numerical Reasoning

1 code implementation ACL 2021 Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura, Hiroya Takamura

In summary, our contributions are (1) a new dataset for numerical table-to-text generation using pairs of a table and a paragraph of a table description with richer inference from scientific papers, and (2) a table-to-text generation framework enriched with numerical reasoning.

Descriptive Table-to-Text Generation
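To make the "numerical reasoning" in this entry concrete, the toy function below computes a difference between two table cells and verbalizes it. All names and values here are hypothetical illustrations of the kind of inference the dataset targets, not the paper's model or data.

```python
def describe_delta(table, metric, sys_a, sys_b):
    """Numerical reasoning over a results table: compare two systems on
    one metric and verbalize the difference (a toy illustration of the
    inference type the dataset targets, not the paper's generator)."""
    a, b = table[sys_a][metric], table[sys_b][metric]
    diff = round(a - b, 2)
    comp = "higher" if diff > 0 else "lower"
    return f"{sys_a} achieves a {metric} of {a}, {abs(diff)} points {comp} than {sys_b}."

table = {"Ours": {"BLEU": 34.2}, "Baseline": {"BLEU": 31.9}}
print(describe_delta(table, "BLEU", "Ours", "Baseline"))
```

Plain table-to-text systems copy cell values verbatim; sentences like the one produced here additionally require arithmetic over cells, which is what the proposed framework is enriched to handle.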

An Empirical Study of Generating Texts for Search Engine Advertising

no code implementations NAACL 2021 Hidetaka Kamigaito, Peinan Zhang, Hiroya Takamura, Manabu Okumura

Although there are many studies on neural language generation (NLG), few have been put to use in the real world, especially in the advertising domain.

Text Generation

Generating Weather Comments from Meteorological Simulations

1 code implementation EACL 2021 Soichiro Murakami, Sora Tanaka, Masatsugu Hangyo, Hidetaka Kamigaito, Kotaro Funakoshi, Hiroya Takamura, Manabu Okumura

The task of generating weather-forecast comments from meteorological simulations has the following requirements: (i) the changes in numerical values for various physical quantities need to be considered, (ii) the weather comments should be dependent on delivery time and area information, and (iii) the comments should provide useful information for users.

Informativeness

A New Surprise Measure for Extracting Interesting Relationships between Persons

no code implementations EACL 2021 Hidetaka Kamigaito, Jingun Kwon, Young-In Song, Manabu Okumura

We therefore propose a method for extracting interesting relationships between persons from natural language texts by focusing on their surprisingness.

Top-Down RST Parsing Utilizing Granularity Levels in Documents

1 code implementation 3 Apr 2020 Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

To obtain better discourse dependency trees, we need to improve the accuracy of RST trees at the upper parts of the structures.

Discourse Parsing

Syntactically Look-Ahead Attention Network for Sentence Compression

1 code implementation 4 Feb 2020 Hidetaka Kamigaito, Manabu Okumura

Sentence compression is the task of compressing a long sentence into a short one by deleting redundant words.

Informativeness, Sentence Compression
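The deletion-based formulation mentioned in this entry reduces compression to a binary keep/drop label per token. The sketch below hand-codes the labels to show the task format; in the paper, a neural tagger with syntactic look-ahead attention predicts them.

```python
def compress(tokens, keep_mask):
    """Deletion-based sentence compression: each token carries a binary
    keep/drop label (supplied by hand here; predicted by a model in
    practice). The compression is the subsequence of kept tokens."""
    return " ".join(t for t, keep in zip(tokens, keep_mask) if keep)

sentence = "The quick brown fox easily jumped over the very lazy dog".split()
mask = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1]
print(compress(sentence, mask))  # "The fox jumped over the dog"
```

Because output tokens are a subsequence of the input, deletion-based compressions are guaranteed to stay grammatically close to the source, which is why the formulation remains a strong baseline.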

Split or Merge: Which is Better for Unsupervised RST Parsing?

no code implementations IJCNLP 2019 Naoki Kobayashi, Tsutomu Hirao, Kengo Nakamura, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata

The first one builds the optimal tree in terms of a dissimilarity score function that is defined for splitting a text span into smaller ones.

Context-aware Neural Machine Translation with Coreference Information

no code implementations WS 2019 Takumi Ohtani, Hidetaka Kamigaito, Masaaki Nagata, Manabu Okumura

We present neural machine translation models for translating a sentence in a text by using a graph-based encoder which can consider coreference relations provided within the text explicitly.

Machine Translation, Translation

Discourse-Aware Hierarchical Attention Network for Extractive Single-Document Summarization

no code implementations RANLP 2019 Tatsuya Ishigaki, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura

To incorporate the information of a discourse tree structure into the neural network-based summarizers, we propose a discourse-aware neural extractive summarizer which can explicitly take into account the discourse dependency tree structure of the source document.

Document Summarization

Higher-Order Syntactic Attention Network for Longer Sentence Compression

no code implementations NAACL 2018 Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Masaaki Nagata

To solve this problem, we propose a higher-order syntactic attention network (HiSAN) that can handle higher-order dependency features as an attention distribution on LSTM hidden states.

Informativeness, Machine Translation, +1
