no code implementations • RANLP 2021 • Ying Zhang, Hidetaka Kamigaito, Tatsuya Aoki, Hiroya Takamura, Manabu Okumura
Encoder-decoder models have been commonly used for many tasks such as machine translation and response generation.
no code implementations • INLG (ACL) 2021 • Tatsuya Ishigaki, Goran Topic, Yumi Hamazono, Hiroshi Noji, Ichiro Kobayashi, Yusuke Miyao, Hiroya Takamura
In this study, we introduce a new large-scale dataset that contains aligned video data, structured numerical data, and transcribed commentaries that consist of 129,226 utterances in 1,389 races in a game.
no code implementations • ACL (WebNLG, INLG) 2020 • Natthawut Kertkeidkachorn, Hiroya Takamura
We describe our system for the RDF-to-Text task in English at the WebNLG 2020 Challenge.
no code implementations • COLING 2022 • Yoshifumi Kawasaki, Maëlys Salingre, Marzena Karpinska, Hiroya Takamura, Ryo Nagata
This article revisits statistical relationships across Romance cognates between lexical semantic shift and six intra-linguistic variables, such as frequency and polysemy.
no code implementations • INLG (ACL) 2020 • Yumi Hamazono, Yui Uehara, Hiroshi Noji, Yusuke Miyao, Hiroya Takamura, Ichiro Kobayashi
On top of this, we employ a copy mechanism that is suitable for referring to the content of data records in the market price data.
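Such a copy mechanism is typically implemented as a mixture of a vocabulary distribution and an attention-based copy distribution; the following is a minimal sketch of that idea (the gating formulation and names are illustrative assumptions, not the authors' exact model):

    import numpy as np

    def copy_mixture(p_vocab, attn, src_ids, p_gen):
        """Pointer-generator-style mixture of generating and copying.

        p_vocab : (V,) softmax over the output vocabulary
        attn    : (T,) attention weights over the T source tokens
        src_ids : (T,) vocabulary ids of the source tokens (e.g. data records)
        p_gen   : scalar in [0, 1], probability of generating rather than copying
        """
        p_copy = np.zeros_like(p_vocab)
        np.add.at(p_copy, src_ids, attn)  # scatter attention mass onto source token ids
        return p_gen * p_vocab + (1.0 - p_gen) * p_copy

    # toy example: three source tokens, vocabulary of size 5
    p_vocab = np.array([0.1, 0.2, 0.3, 0.2, 0.2])
    attn = np.array([0.5, 0.3, 0.2])
    print(copy_mixture(p_vocab, attn, np.array([4, 2, 4]), p_gen=0.7))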
no code implementations • RANLP 2021 • Yukun Feng, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
Character-aware neural language models can capture the relationship between words by exploiting character-level information and are particularly effective for languages with rich morphology.
no code implementations • RANLP 2021 • Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
The results demonstrate that the position of emojis in texts is a good clue to boost the performance of emoji label prediction.
no code implementations • RANLP 2021 • Jingyi You, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
Neural sequence-to-sequence (Seq2Seq) models and BERT have achieved substantial improvements in abstractive document summarization (ADS) without and with pre-training, respectively.
no code implementations • 19 May 2023 • Ryo Nagata, Hiroya Takamura, Naoki Otani, Yoshifumi Kawasaki
In this paper, we propose methods for discovering semantic differences in words appearing in two corpora based on the norms of contextualized word vectors.
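The core quantity is simply the norm of a word's contextualized vectors in each corpus; a minimal sketch, assuming the vectors have already been extracted with a contextualized encoder such as BERT:

    import numpy as np

    def mean_norm(vectors):
        """Mean L2 norm over a word's contextualized vectors (one row per occurrence)."""
        return np.linalg.norm(vectors, axis=1).mean()

    # vecs_a, vecs_b: (n_occurrences, dim) vectors of the same word in corpora A and B
    vecs_a = np.random.randn(100, 768)
    vecs_b = np.random.randn(80, 768)
    diff = abs(mean_norm(vecs_a) - mean_norm(vecs_b))  # a larger gap suggests a semantic difference (our reading)
    print(diff)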
no code implementations • 13 Nov 2022 • Chung-Chi Chen, Hiroya Takamura, Hsin-Hsi Chen
Making our research results positively impact society and the environment is one of the goals our community has been pursuing recently.
1 code implementation • 16 Oct 2022 • Hong Chen, Duc Minh Vo, Hiroya Takamura, Yusuke Miyao, Hideki Nakayama
Existing automatic story evaluation methods place a premium on a story's lexical-level coherence, deviating from human preference.
1 code implementation • 26 Sep 2022 • Erica K. Shimomoto, Edison Marrese-Taylor, Hiroya Takamura, Ichiro Kobayashi, Hideki Nakayama, Yusuke Miyao
This paper explores the task of Temporal Video Grounding (TVG) where, given an untrimmed video and a natural language sentence query, the goal is to recognize and determine temporal boundaries of action instances in the video described by the query.
no code implementations • NAACL (ACL) 2022 • Soichiro Murakami, Peinan Zhang, Sho Hoshino, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
Writing an ad text that attracts people and persuades them to click or act is essential for the success of search engine advertising.
no code implementations • 19 Dec 2021 • Cristian Rodriguez-Opazo, Edison Marrese-Taylor, Basura Fernando, Hiroya Takamura, Qi Wu
We propose LocFormer, a Transformer-based model for video grounding that operates at a constant memory footprint regardless of the video length, i.e., the number of frames.
no code implementations • Findings (EMNLP) 2021 • Hong Chen, Hiroya Takamura, Hideki Nakayama
Generating texts in scientific papers requires not only capturing the content contained within the given input but also frequently acquiring the external information called "context".
1 code implementation • ACL 2021 • Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura, Hiroya Takamura
In summary, our contributions are (1) a new dataset for numerical table-to-text generation using pairs of a table and a paragraph of a table description with richer inference from scientific papers, and (2) a table-to-text generation framework enriched with numerical reasoning.
no code implementations • NAACL 2021 • Hidetaka Kamigaito, Peinan Zhang, Hiroya Takamura, Manabu Okumura
Although there are many studies on neural language generation (NLG), few have been put to use in the real world, especially in the advertising domain.
1 code implementation • EACL 2021 • Soichiro Murakami, Sora Tanaka, Masatsugu Hangyo, Hidetaka Kamigaito, Kotaro Funakoshi, Hiroya Takamura, Manabu Okumura
The task of generating weather-forecast comments from meteorological simulations has the following requirements: (i) the changes in numerical values for various physical quantities need to be considered, (ii) the weather comments should be dependent on delivery time and area information, and (iii) the comments should provide useful information for users.
no code implementations • EACL 2021 • Chenlong Hu, Yukun Feng, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
This work presents multi-modal deep SVDD (mSVDD) for one-class text classification.
no code implementations • INLG (ACL) 2021 • Hong Chen, Raphael Shu, Hiroya Takamura, Hideki Nakayama
In this paper, we focus on planning a sequence of events assisted by event graphs, and use the events to guide the generator.
no code implementations • 5 Feb 2021 • Hong Chen, Yifei Huang, Hiroya Takamura, Hideki Nakayama
To enrich the candidate concepts, a commonsense knowledge graph is created for each image sequence, from which the concept candidates are proposed.
no code implementations • EACL 2021 • Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Manabu Okumura, Hiroya Takamura
Numerical tables are widely used to present experimental results in scientific papers.
1 code implementation • AACL 2020 • Yukun Feng, Chenlong Hu, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
We propose a simple and effective method for incorporating word clusters into the Continuous Bag-of-Words (CBOW) model.
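One straightforward way to realize this, sketched below under our own assumptions about the formulation, is to add a cluster embedding to each context word's embedding before the CBOW averaging step:

    import numpy as np

    rng = np.random.default_rng(0)
    V, K, D = 1000, 50, 100                   # vocabulary size, #clusters, embedding dim
    word_emb = rng.normal(size=(V, D))
    cluster_emb = rng.normal(size=(K, D))
    cluster_of = rng.integers(0, K, size=V)   # word id -> cluster id (e.g. Brown clusters)

    def cbow_context(context_ids):
        """Context representation: average of word-plus-cluster embeddings."""
        vecs = word_emb[context_ids] + cluster_emb[cluster_of[context_ids]]
        return vecs.mean(axis=0)

    print(cbow_context(np.array([3, 17, 512, 42])).shape)  # (100,)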
1 code implementation • COLING 2020 • Yui Uehara, Tatsuya Ishigaki, Kasumi Aoki, Hiroshi Noji, Keiichi Goshima, Ichiro Kobayashi, Hiroya Takamura, Yusuke Miyao
Existing models for data-to-text tasks generate fluent but sometimes incorrect sentences, e.g., "Nikkei gains" is generated when "Nikkei drops" is expected.
1 code implementation • COLING 2020 • Namgi Han, Goran Topic, Hiroshi Noji, Hiroya Takamura, Yusuke Miyao
Our analysis, including shifting of training and test datasets and training on a union of the datasets, suggests that our progress in solving the SimpleQuestions dataset does not indicate success at more general simple question answering.
no code implementations • COLING 2020 • Maolin Li, Hiroya Takamura, Sophia Ananiadou
To ensure high-quality data, it is crucial to infer the correct labels by aggregating the noisy labels.
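The simplest such aggregator is majority voting, sketched below; stronger aggregation models (e.g. Dawid-Skene) additionally estimate per-annotator reliability:

    from collections import Counter

    def majority_vote(labels):
        """Aggregate one item's noisy labels from several annotators (ties break arbitrarily)."""
        return Counter(labels).most_common(1)[0][0]

    print(majority_vote(["PER", "ORG", "PER"]))  # -> "PER"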
no code implementations • COLING 2020 • Shogo Fujita, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
We tackle the task of automatically generating a function name from source code.
no code implementations • COLING 2020 • Riku Kawamura, Tatsuya Aoki, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
We propose neural models that can normalize text by considering the similarities of word strings and sounds.
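String similarity is commonly measured with edit distance, and "sound" similarity with edit distance over a phonetic transcription; the blend below is a minimal sketch under our own assumptions, not the paper's exact model:

    def edit_distance(a, b):
        """Levenshtein distance via dynamic programming (one rolling row)."""
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
        return dp[-1]

    def blended_distance(word, cand, reading, cand_reading, alpha=0.5):
        """Weighted blend of surface-string and phonetic-string distances (weights assumed)."""
        return alpha * edit_distance(word, cand) + (1 - alpha) * edit_distance(reading, cand_reading)

    print(edit_distance("kitten", "sitting"))  # 3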
1 code implementation • WNUT 2020 • Mohammad Golam Sohrab, Anh-Khoa Duong Nguyen, Makoto Miwa, Hiroya Takamura
In the relation extraction task, we achieved an F-score of 80.46%, ranking as the top system.
Ranked #1 on Named Entity Recognition (NER) on WNUT 2020
1 code implementation • ACL 2020 • Hiroshi Noji, Hiroya Takamura
Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement.
no code implementations • WS 2019 • Hai-Long Trieu, Anh-Khoa Duong Nguyen, Nhung Nguyen, Makoto Miwa, Hiroya Takamura, Sophia Ananiadou
Additionally, the proposed model is able to detect coreferent pairs in long distances, even with a distance of more than 200 sentences.
no code implementations • CONLL 2019 • Yukun Feng, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
Our injection method can also be used together with previous methods.
no code implementations • WS 2019 • Mohammad Golam Sohrab, Minh Thang Pham, Makoto Miwa, Hiroya Takamura
We present a neural pipeline approach that performs named entity recognition (NER) and concept indexing (CI), which links them to concept unique identifiers (CUIs) in a knowledge base, for the PharmaCoNER shared task on pharmaceutical drugs and chemical entities.
no code implementations • WS 2019 • Kasumi Aoki, Akira Miyazawa, Tatsuya Ishigaki, Tatsuya Aoki, Hiroshi Noji, Keiichi Goshima, Ichiro Kobayashi, Hiroya Takamura, Yusuke Miyao
We propose a data-to-document generator that can easily control the contents of output texts based on a neural language model.
no code implementations • RANLP 2019 • Tatsuya Ishigaki, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
To incorporate the information of a discourse tree structure into the neural network-based summarizers, we propose a discourse-aware neural extractive summarizer which can explicitly take into account the discourse dependency tree structure of the source document.
2 code implementations • ACL 2019 • Hayate Iso, Yui Uehara, Tatsuya Ishigaki, Hiroshi Noji, Eiji Aramaki, Ichiro Kobayashi, Yusuke Miyao, Naoaki Okazaki, Hiroya Takamura
We propose a data-to-text generation model with two modules, one for tracking and the other for text generation.
1 code implementation • ACL 2019 • Chung-Chi Chen, Hen-Hsen Huang, Hiroya Takamura, Hsin-Hsi Chen
In this paper, we attempt to answer the question of whether neural network models can learn numeracy, which is the ability to predict the magnitude of a numeral at some specific position in a text description.
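One way to operationalize "magnitude" is the order of magnitude of the numeral, which turns the problem into a classification task; the target construction might look like the sketch below (our reading, not necessarily the paper's exact setup):

    import math

    def magnitude_class(numeral: str) -> int:
        """Order-of-magnitude label for a numeral, e.g. '250' -> 2, '0.5' -> -1."""
        return math.floor(math.log10(abs(float(numeral))))

    for tok in ["3", "250", "18000", "0.5"]:
        print(tok, magnitude_class(tok))  # 0, 2, 4, -1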
no code implementations • ACL 2019 • Takuya Makino, Tomoya Iwakura, Hiroya Takamura, Manabu Okumura
The experimental results show that a state-of-the-art neural summarization model optimized with GOLC generates fewer overlength summaries while maintaining the fastest processing speed: only 6.70% overlength summaries on CNN/Daily Mail and 7.8% on the long summaries of Mainichi, compared to approximately 20% to 50% on CNN/Daily Mail and 10% to 30% on Mainichi with the other optimization methods.
no code implementations • WS 2018 • Abdurrisyad Fikri, Hiroya Takamura, Manabu Okumura
Recent neural models for response generation show good results in terms of general responses.
1 code implementation • WS 2018 • Tatsuya Aoki, Akira Miyazawa, Tatsuya Ishigaki, Keiichi Goshima, Kasumi Aoki, Ichiro Kobayashi, Hiroya Takamura, Yusuke Miyao
Comments on a stock market often include the reason or cause of changes in stock prices, such as "Nikkei turns lower as yen's rise hits exporters."
no code implementations • 27 Sep 2018 • Ran Tian, Yash Agrawal, Kento Watanabe, Hiroya Takamura
Word embeddings are known to boost the performance of many NLP tasks such as text classification; meanwhile, they can be enhanced by document-level labels to capture nuanced meanings such as sentiment and topic.
no code implementations • COLING 2018 • Arata Ugawa, Akihiro Tamura, Takashi Ninomiya, Hiroya Takamura, Manabu Okumura
To alleviate these problems, the encoder of the proposed model encodes the input word on the basis of its NE tag at each time step, which could reduce the ambiguity of the input word.
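A straightforward realization of tag-conditioned encoding, assumed here purely for illustration, is to concatenate an NE-tag embedding to the word embedding at each time step:

    import numpy as np

    rng = np.random.default_rng(1)
    V, T, Dw, Dt = 1000, 5, 64, 16  # vocab size, #NE tags, word dim, tag dim
    word_emb = rng.normal(size=(V, Dw))
    tag_emb = rng.normal(size=(T, Dt))

    def encoder_inputs(word_ids, tag_ids):
        """Per-time-step encoder input: [word embedding ; NE-tag embedding]."""
        return np.concatenate([word_emb[word_ids], tag_emb[tag_ids]], axis=1)

    x = encoder_inputs(np.array([12, 7, 430]), np.array([0, 3, 3]))  # e.g. tags O, LOC, LOC
    print(x.shape)  # (3, 80)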
no code implementations • COLING 2018 • Ryo Nagata, Taisei Sato, Hiroya Takamura
This paper introduces and examines the hypothesis that lexical richness measures become unstable in learner English because of spelling errors.
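The instability is easy to see with the type-token ratio, where every misspelling counts as a new type and inflates apparent richness; a tiny illustration:

    def type_token_ratio(tokens):
        return len(set(tokens)) / len(tokens)

    clean = "i like the book because the book is fun".split()
    noisy = "i like the bok because the boook is fun".split()  # two misspellings of "book"
    print(type_token_ratio(clean), type_token_ratio(noisy))    # 0.778 vs 0.889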
no code implementations • IJCNLP 2017 • Tatsuya Ishigaki, Hiroya Takamura, Manabu Okumura
In this research, we propose the task of question summarization.
no code implementations • IJCNLP 2017 • Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Hiroya Takamura, Manabu Okumura, Masaaki Nagata
The sequence-to-sequence (Seq2Seq) model has been successfully applied to machine translation (MT).
no code implementations • EMNLP 2017 • Tatsuya Aoki, Ryohei Sasano, Hiroya Takamura, Manabu Okumura
Our experimental results show that the model leveraging the context embedding outperforms other methods and provides us with findings on, for example, how to construct context embeddings and which corpus to use.
no code implementations • SEMEVAL 2017 • Surya Agustian, Hiroya Takamura
We exploit word importance levels in sentences or questions as similarity features for classification and ranking with machine learning.
no code implementations • ACL 2017 • Shun Hasegawa, Yuta Kikuchi, Hiroya Takamura, Manabu Okumura
In English, high-quality sentence compression models by deleting words have been trained on automatically created large training datasets.
no code implementations • ACL 2017 • Soichiro Murakami, Akihiko Watanabe, Akira Miyazawa, Keiichi Goshima, Toshihiko Yanase, Hiroya Takamura, Yusuke Miyao
This paper presents a novel encoder-decoder model for automatically generating market comments from stock prices.
no code implementations • EACL 2017 • Hiroya Takamura, Ryo Nagata, Yoshifumi Kawasaki
We analyze semantic changes in loanwords from English that are used in Japanese (Japanese loanwords).
1 code implementation • EMNLP 2016 • Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, Manabu Okumura
Neural encoder-decoder models have shown great success in many sequence generation tasks.
no code implementations • LREC 2016 • Hiroya Takamura, Ryo Nagata, Yoshifumi Kawasaki
We address the task of automatically estimating the missing values of linguistic features by making use of the fact that some linguistic features in typological databases are informative to each other.