no code implementations • INLG (ACL) 2021 • Ryo Nagata, Masato Hagiwara, Kazuaki Hanawa, Masato Mita, Artem Chernodub, Olena Nahorna
In this paper, we propose a generation challenge called Feedback Comment Generation for Language Learners.
no code implementations • LREC 2022 • Yujin Takahashi, Masahiro Kaneko, Masato Mita, Mamoru Komachi
This study investigates how supervised quality estimation (QE) models of grammatical error correction (GEC) are affected by the proficiency of the learners who produced the data.
1 code implementation • 4 Oct 2024 • Akihiko Kato, Masato Mita, Soichiro Murakami, Ukyo Honda, Sho Hoshino, Peinan Zhang
In this study, we collaborate with in-house ad creators to refine the CAMERA references and develop an alternative ATG evaluation dataset called FaithCAMERA, in which the faithfulness of references is guaranteed.
1 code implementation • 12 Aug 2024 • Peinan Zhang, Yusuke Sakai, Masato Mita, Hiroki Ouchi, Taro Watanabe
With the increase in fluent ad texts automatically created by natural language generation technology, there is growing demand to verify the quality of these creatives in real-world settings.
1 code implementation • 17 Jun 2024 • Ukyo Honda, Tatsushi Oka, Peinan Zhang, Masato Mita
Recent models for natural language understanding are inclined to exploit simple patterns in datasets, commonly known as shortcuts.
no code implementations • 26 Mar 2024 • Masamune Kobayashi, Masato Mita, Mamoru Komachi
Large Language Models (LLMs) have been reported to outperform existing automatic evaluation metrics in some tasks, such as text summarization and machine translation.
2 code implementations • 5 Mar 2024 • Masamune Kobayashi, Masato Mita, Mamoru Komachi
The improved correlations obtained by aligning granularity in the sentence-level meta-evaluation suggest that edit-based metrics may have been underestimated in existing studies.
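The sentence-level meta-evaluation referred to here is, at its core, a correlation analysis between automatic metric scores and human judgments. The sketch below shows how such correlations are typically computed; all scores are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of sentence-level meta-evaluation for GEC metrics:
# correlate per-sentence metric scores with human judgments.
# The scores below are made up for illustration only.
from scipy.stats import pearsonr, spearmanr

metric_scores = [0.82, 0.45, 0.91, 0.30, 0.67]  # e.g. from an edit-based metric
human_scores = [4, 2, 5, 2, 3]                  # human quality ratings (1-5)

pearson, _ = pearsonr(metric_scores, human_scores)
spearman, _ = spearmanr(metric_scores, human_scores)
print(f"Pearson r = {pearson:.3f}, Spearman rho = {spearman:.3f}")
```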
1 code implementation • 21 Sep 2023 • Masato Mita, Soichiro Murakami, Akihiko Kato, Peinan Zhang
In response to the limitations of manual ad creation, significant research has been conducted in the field of automatic ad text generation (ATG).
2 code implementations • 30 Jun 2023 • Yusuke Ide, Masato Mita, Adam Nohejl, Hiroki Ouchi, Taro Watanabe
Lexical complexity prediction (LCP) is the task of predicting the complexity of words in a text on a continuous scale.
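LCP is commonly cast as a regression problem in which each target word, in context, is mapped to a complexity value on a 0-1 scale. The sketch below is a deliberately simple, hypothetical feature-based regressor; the features, training pairs, and model are assumptions for illustration, not those used in the paper.

```python
# Minimal sketch of lexical complexity prediction as regression.
# Features and training pairs are invented for illustration only.
from sklearn.linear_model import Ridge

def features(word, freq_per_million):
    # Longer and rarer words tend to be judged more complex.
    return [len(word), freq_per_million]

train_words = [("cat", 600.0, 0.05), ("photosynthesis", 1.2, 0.85),
               ("run", 900.0, 0.02), ("ubiquitous", 4.5, 0.70)]
X = [features(w, f) for w, f, _ in train_words]
y = [c for _, _, c in train_words]  # gold complexity on a 0-1 scale

model = Ridge().fit(X, y)
print(model.predict([features("serendipity", 2.0)]))  # predicted complexity
```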
1 code implementation • 23 May 2022 • Masato Mita, Keisuke Sakaguchi, Masato Hagiwara, Tomoya Mizumoto, Jun Suzuki, Kentaro Inui
Natural language processing technology has rapidly improved automated grammatical error correction, and the community has begun to explore document-level revision as one of the next challenges.
no code implementations • LREC 2022 • Daisuke Suzuki, Yujin Takahashi, Ikumi Yamashita, Taichi Aida, Tosho Hirasawa, Michitaka Nakatsuji, Masato Mita, Mamoru Komachi
In this study, we created a quality estimation dataset with manual evaluations to build an automatic evaluation model for Japanese GEC.
no code implementations • Findings (ACL) 2021 • Masato Mita, Hitomi Yanaka
There has been an increased interest in data generation approaches to grammatical error correction (GEC) using pseudo data.
1 code implementation • COLING 2020 • Takumi Gotou, Ryo Nagata, Masato Mita, Kazuaki Hanawa
The performance measures are based on the simple idea that the more systems successfully correct an error, the easier it is considered to be.
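Under this idea, the difficulty of an error can be estimated from the share of systems that successfully correct it. A minimal sketch of that computation follows; the variable names and the exact weighting are assumptions rather than the paper's definition.

```python
# Minimal sketch: the fewer systems that correct an error, the harder
# the error is considered to be. Corrections are invented for illustration.
corrections = {
    # error_id -> one boolean per system: did that system fix the error?
    "e1": [True, True, True, False],   # corrected by 3/4 systems -> easy
    "e2": [False, False, True, False], # corrected by 1/4 systems -> hard
}

for error_id, results in corrections.items():
    ease = sum(results) / len(results)  # share of systems that fixed it
    difficulty = 1.0 - ease             # simple assumed difficulty score
    print(error_id, f"ease={ease:.2f}", f"difficulty={difficulty:.2f}")
```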
1 code implementation • COLING 2020 • Ryo Fujii, Masato Mita, Kaori Abe, Kazuaki Hanawa, Makoto Morishita, Jun Suzuki, Kentaro Inui
Neural Machine Translation (NMT) has shown drastic improvement in its quality when translating clean input, such as text from the news domain.
no code implementations • Findings (EMNLP) 2020 • Masato Mita, Shun Kiyono, Masahiro Kaneko, Jun Suzuki, Kentaro Inui
Existing approaches for grammatical error correction (GEC) largely rely on supervised learning with manually created GEC datasets.
no code implementations • ACL 2020 • Hiroaki Funayama, Shota Sasaki, Yuichiroh Matsubayashi, Tomoya Mizumoto, Jun Suzuki, Masato Mita, Kentaro Inui
We introduce a new task formulation of short answer scoring (SAS) that matches its actual usage.
1 code implementation • ACL 2020 • Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, Kentaro Inui
The answer to this question is not as straightforward as one might expect, because previously common methods for incorporating a masked language model (MLM) into an encoder-decoder (EncDec) model have potential drawbacks when applied to GEC.
Ranked #2 on Grammatical Error Correction on JFLEG
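One common way to incorporate an MLM into an EncDec model, alluded to above, is to feed the MLM's contextual representations to the encoder-decoder as additional input features (BERT-fuse style integration). The sketch below only illustrates that general idea using the Hugging Face transformers library; the model name and the fusion step are illustrative assumptions, not the specific configuration examined in the paper.

```python
# Minimal sketch: use a pre-trained masked language model (MLM) to produce
# contextual features that an encoder-decoder GEC model could consume.
# The fusion by concatenation + projection is an assumed design choice.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModel.from_pretrained("bert-base-cased")

source = "She go to school every day ."
inputs = tokenizer(source, return_tensors="pt")
with torch.no_grad():
    mlm_states = mlm(**inputs).last_hidden_state  # (1, seq_len, 768)

encoder_states = torch.randn_like(mlm_states)     # stand-in EncDec encoder output
fused = torch.cat([encoder_states, mlm_states], dim=-1)
projection = torch.nn.Linear(fused.size(-1), mlm_states.size(-1))
print(projection(fused).shape)  # (1, seq_len, 768) features for the decoder
```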
1 code implementation • LREC 2020 • Masato Hagiwara, Masato Mita
The lack of large-scale datasets has been a major hindrance to the development of NLP tasks such as spelling correction and grammatical error correction (GEC).
1 code implementation • IJCNLP 2019 • Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui
The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models.
Ranked #13 on Grammatical Error Correction on CoNLL-2014 Shared Task
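Pseudo data for GEC is typically built by corrupting clean sentences so that (noisy source, clean target) pairs can serve as extra training examples; the paper studies how best to generate and incorporate such data. The rule-based noising below is only a generic, hypothetical illustration of the idea, not one of the paper's methods.

```python
# Minimal sketch of pseudo-data generation for GEC: corrupt a clean sentence
# to obtain a (noisy source, clean target) training pair. The noising rules
# and probabilities here are toy assumptions.
import random

def corrupt(tokens, p_drop=0.1, p_swap=0.1):
    noisy = []
    for token in tokens:
        if random.random() < p_drop:
            continue                      # simulate a missing word
        noisy.append(token)
    if len(noisy) > 1 and random.random() < p_swap:
        i = random.randrange(len(noisy) - 1)
        noisy[i], noisy[i + 1] = noisy[i + 1], noisy[i]  # simulate a word-order error
    return noisy

clean = "She goes to school every day .".split()
pseudo_source = corrupt(clean)
print(" ".join(pseudo_source), "->", " ".join(clean))
```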
no code implementations • WS 2019 • Hiroki Asano, Masato Mita, Tomoya Mizumoto, Jun Suzuki
We introduce the AIP-Tohoku grammatical error correction (GEC) system for the BEA-2019 shared task in Track 1 (Restricted Track) and Track 2 (Unrestricted Track) using the same system architecture.
no code implementations • NAACL 2019 • Masato Mita, Tomoya Mizumoto, Masahiro Kaneko, Ryo Nagata, Kentaro Inui
This study explores the necessity of performing cross-corpora evaluation for grammatical error correction (GEC) models.