no code implementations • INLG (ACL) 2021 • Tatsuya Ishigaki, Goran Topic, Yumi Hamazono, Hiroshi Noji, Ichiro Kobayashi, Yusuke Miyao, Hiroya Takamura
In this study, we introduce a new large-scale dataset that contains aligned video data, structured numerical data, and transcribed commentaries consisting of 129,226 utterances across 1,389 races in a game.
no code implementations • LREC 2022 • Tatsuya Ishigaki, Suzuko Nishino, Sohei Washino, Hiroki Igarashi, Yukari Nagai, Yuichi Washida, Akihiko Murai
The contributions are threefold: 1) we propose document retrieval and comment generation tasks for horizon scanning, 2) we create and analyze a new dataset, and 3) we report the performance of several models, showing that the comment generation task is challenging.
1 code implementation • 11 Jul 2025 • Keisuke Ueda, Wataru Hirota, Takuto Asakura, Takahiro Omi, Kosuke Takahashi, Kosuke Arima, Tatsuya Ishigaki
Our results show that enlarging the agent cohort, deepening the interaction depth, and broadening agent persona heterogeneity each enrich the diversity of generated ideas.
no code implementations • 12 Apr 2024 • Kosuke Takahashi, Takahiro Omi, Kosuke Arima, Tatsuya Ishigaki
The development of Large Language Models (LLMs) in various languages has been advancing, but the combination of non-English languages with domain-specific contexts remains underexplored.
1 code implementation • 3 Apr 2024 • Masayuki Kawarada, Tatsuya Ishigaki, Hiroya Takamura
Large language models (LLMs) have been applied to a wide range of data-to-text generation tasks, including table-to-text, graph-to-text, and time-series numerical data-to-text settings.
no code implementations • 12 Oct 2023 • Kosuke Takahashi, Takahiro Omi, Kosuke Arima, Tatsuya Ishigaki
This paper presents a simple and cost-effective method for synthesizing data to train question-answering systems.
1 code implementation • COLING 2020 • Yui Uehara, Tatsuya Ishigaki, Kasumi Aoki, Hiroshi Noji, Keiichi Goshima, Ichiro Kobayashi, Hiroya Takamura, Yusuke Miyao
Existing models for data-to-text tasks generate fluent but sometimes incorrect sentences, e.g., "Nikkei gains" is generated when "Nikkei drops" is expected.
no code implementations • WS 2019 • Kasumi Aoki, Akira Miyazawa, Tatsuya Ishigaki, Tatsuya Aoki, Hiroshi Noji, Keiichi Goshima, Ichiro Kobayashi, Hiroya Takamura, Yusuke Miyao
We propose a data-to-document generator that can easily control the contents of output texts based on a neural language model.
no code implementations • RANLP 2019 • Tatsuya Ishigaki, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
To incorporate the information of a discourse tree structure into the neural network-based summarizers, we propose a discourse-aware neural extractive summarizer which can explicitly take into account the discourse dependency tree structure of the source document.
2 code implementations • ACL 2019 • Hayate Iso, Yui Uehara, Tatsuya Ishigaki, Hiroshi Noji, Eiji Aramaki, Ichiro Kobayashi, Yusuke Miyao, Naoaki Okazaki, Hiroya Takamura
We propose a data-to-text generation model with two modules, one for tracking and the other for text generation.
1 code implementation • WS 2018 • Tatsuya Aoki, Akira Miyazawa, Tatsuya Ishigaki, Keiichi Goshima, Kasumi Aoki, Ichiro Kobayashi, Hiroya Takamura, Yusuke Miyao
Comments on a stock market often include the reason or cause of changes in stock prices, such as "Nikkei turns lower as yen's rise hits exporters."
no code implementations • IJCNLP 2017 • Tatsuya Ishigaki, Hiroya Takamura, Manabu Okumura
In this research, we propose the task of question summarization.
Abstractive Text Summarization
Community Question Answering