no code implementations • COLING 2022 • Riki Fujihara, Tatsuki Kuribayashi, Kaori Abe, Ryoko Tokuhisa, Kentaro Inui
Humans use different wordings depending on the context to facilitate efficient communication.
1 code implementation • EMNLP (ArgMining) 2021 • Keshav Singh, Farjana Sultana Mim, Naoya Inoue, Shoichi Naito, Kentaro Inui
Annotation of implicit reasoning (i.e., warrant) in arguments is a critical resource for training models to gain a deeper understanding and correct interpretation of arguments.
1 code implementation • LREC 2022 • Keshav Singh, Naoya Inoue, Farjana Sultana Mim, Shoichi Naito, Kentaro Inui
To solve this problem, we hypothesize that as human reasoning is guided by an innate collection of domain-specific knowledge, it might be beneficial to create such a domain-specific corpus for machines.
1 code implementation • EMNLP 2021 • Kazuaki Hanawa, Ryo Nagata, Kentaro Inui
To shed light on these points, we investigate a wider range of methods for generating many feedback comments in this study.
no code implementations • EMNLP 2021 • Shun Kiyono, Sosuke Kobayashi, Jun Suzuki, Kentaro Inui
Position representation is crucial for building position-aware representations in Transformers.
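As a concrete illustration of what a position representation is, the sinusoidal encoding from the original Transformer (Vaswani et al., 2017) is the best-known scheme; the sketch below implements that generic encoding, not necessarily the variant this paper studies:

```python
import math

def sinusoidal_position_encoding(num_positions: int, d_model: int) -> list[list[float]]:
    """Standard sinusoidal position encodings: even dimensions use sine,
    odd dimensions use cosine, at geometrically spaced frequencies."""
    encodings = []
    for pos in range(num_positions):
        row = []
        for i in range(d_model):
            angle = pos / (10000 ** (2 * (i // 2) / d_model))
            row.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
        encodings.append(row)
    return encodings

pe = sinusoidal_position_encoding(num_positions=4, d_model=8)
# Position 0 encodes as [sin(0), cos(0), ...] = [0.0, 1.0, 0.0, 1.0, ...]
```

Because each position gets a distinct, deterministic vector, the model can distinguish token order even though self-attention itself is order-invariant.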
no code implementations • COLING 2022 • Shuhei Kurita, Hiroki Ouchi, Kentaro Inui, Satoshi Sekine
Semantic Role Labeling (SRL) is the task of labeling semantic arguments for marked semantic predicates.
no code implementations • LANTERN (COLING) 2020 • Diana Galvan-Sosa, Jun Suzuki, Kyosuke Nishida, Koji Matsuda, Kentaro Inui
Despite recent achievements in natural language understanding, reasoning over commonsense knowledge still represents a big challenge to AI systems.
no code implementations • 17 Apr 2024 • Yukiko Ishizuki, Tatsuki Kuribayashi, Yuichiroh Matsubayashi, Ryohei Sasano, Kentaro Inui
Speakers sometimes omit certain arguments of a predicate in a sentence; such omission is especially frequent in pro-drop languages.
no code implementations • 19 Mar 2024 • Shiki Sato, Reina Akama, Jun Suzuki, Kentaro Inui
In this paper, we build a large dataset of response generation models' contradictions for the first time.
1 code implementation • 15 Mar 2024 • Benjamin Heinzerling, Kentaro Inui
Language models (LMs) can express factual knowledge involving numeric properties, such as "Karl Popper was born in 1902".
no code implementations • 6 Mar 2024 • Naoki Miura, Hiroaki Funayama, Seiya Kikuchi, Yuichiroh Matsubayashi, Yuya Iwase, Kentaro Inui
Using this dataset, we demonstrate the performance of baselines including finetuned BERT and GPT models with few-shot in-context learning.
1 code implementation • 22 Feb 2024 • Kosuke Matsuzaki, Masaya Taniguchi, Kentaro Inui, Keisuke Sakaguchi
We introduce a Japanese Morphology dataset, J-UniMorph, developed based on the UniMorph feature schema.
1 code implementation • 26 Oct 2023 • Go Kamoda, Benjamin Heinzerling, Keisuke Sakaguchi, Kentaro Inui
Factual probing is a method that uses prompts to test if a language model "knows" certain world knowledge facts.
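Such probes are typically cloze-style prompts built from knowledge-base triples, LAMA-style. A minimal sketch of the prompt-construction step (the templates and relation names here are hypothetical examples, not this paper's actual probe set):

```python
def make_cloze_prompt(subject: str, relation_template: str) -> str:
    """Fill a relation template with a subject, leaving a [MASK] slot
    for a masked language model to predict."""
    return relation_template.format(subject=subject, mask="[MASK]")

# Hypothetical templates; a real probe would score a masked LM's
# predictions for the [MASK] slot against the known object.
templates = {
    "born-in": "{subject} was born in {mask}.",
    "capital-of": "The capital of {subject} is {mask}.",
}

prompt = make_cloze_prompt("Karl Popper", templates["born-in"])
# → "Karl Popper was born in [MASK]."
```

The model is said to "know" the fact if its top prediction for the masked slot matches the gold object (here, "1902").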
1 code implementation • 24 Oct 2023 • Hiroto Kurita, Goro Kobayashi, Sho Yokoi, Kentaro Inui
The performance of sentence encoders can be significantly improved through the simple practice of fine-tuning using contrastive loss.
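The contrastive objective commonly used for this kind of fine-tuning is an InfoNCE-style loss over in-batch pairs: each anchor sentence should be more similar to its own positive than to any other positive in the batch. A pure-Python sketch under that assumption (not necessarily the exact loss used in this paper):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def info_nce_loss(anchors, positives, temperature=0.05):
    """In-batch contrastive loss: cross-entropy of each anchor picking
    its own positive out of all positives in the batch."""
    loss = 0.0
    for i, anchor in enumerate(anchors):
        sims = [cosine(anchor, p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(s) for s in sims))
        loss += log_denom - sims[i]   # -log softmax of the matched pair
    return loss / len(anchors)

anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[0.9, 0.1], [0.1, 0.9]]
loss = info_nce_loss(anchors, positives)  # small: each anchor matches its own positive
```

Minimizing this loss pulls paired sentence embeddings together and pushes mismatched ones apart.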
1 code implementation • 2 Aug 2023 • Yunmeng Li, Jun Suzuki, Makoto Morishita, Kaori Abe, Ryoko Tokuhisa, Ana Brassard, Kentaro Inui
In this paper, we describe the development of a communication support system that detects erroneous translations, addressing the limitations of current machine chat translation methods in order to facilitate crosslingual communication.
no code implementations • 28 Jul 2023 • Camélia Guerraoui, Paul Reisert, Naoya Inoue, Farjana Sultana Mim, Shoichi Naito, Jungmin Choi, Irfan Robbani, Wenzhi Wang, Kentaro Inui
The use of argumentation in education has been shown to improve critical thinking skills for end-users such as students, and computational models for argumentation have been developed to assist in this process.
no code implementations • 29 May 2023 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Prediction head is a crucial component of Transformer language models.
no code implementations • EACL 2023 • Pride Kavumba, Ana Brassard, Benjamin Heinzerling, Kentaro Inui
Explanation prompts ask language models not only to assign a particular label to a given input (such as true, entailment, or contradiction in the case of natural language inference) but also to generate a free-text explanation that supports this label.
Ranked #1 on Natural Language Inference on ANLI test
no code implementations • 25 Mar 2023 • Steven Coyne, Keisuke Sakaguchi, Diana Galvan-Sosa, Michael Zock, Kentaro Inui
GPT-3 and GPT-4 models are powerful, achieving high performance on a variety of Natural Language Processing tasks.
1 code implementation • 16 Feb 2023 • Yoichi Aoki, Keito Kudo, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui
Neural reasoning accuracy improves when generating intermediate reasoning steps.
1 code implementation • 15 Feb 2023 • Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui
Compositionality is a pivotal property of symbolic reasoning.
1 code implementation • 1 Feb 2023 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Transformers are ubiquitous across a wide range of tasks.
1 code implementation • 17 Jan 2023 • Yuta Matsumoto, Benjamin Heinzerling, Masashi Yoshikawa, Kentaro Inui
Previous research has shown that information about intermediate values of these inputs can be extracted from the activations of the models, but it is unclear where that information is encoded and whether that information is indeed used during inference.
1 code implementation • 2 Nov 2022 • Qin Dai, Benjamin Heinzerling, Kentaro Inui
They either do not allow any sharing between the text encoder and the KG encoder at all or, in the case of models with KG-to-text attention, only share information in one direction.
1 code implementation • COLING 2022 • Yosuke Kishinami, Reina Akama, Shiki Sato, Ryoko Tokuhisa, Jun Suzuki, Kentaro Inui
Prior studies addressing target-oriented conversational tasks lack a crucial notion that has been intensively studied in the context of goal-oriented artificial intelligence agents, namely, planning.
1 code implementation • SIGDIAL (ACL) 2022 • Shiki Sato, Reina Akama, Hiroki Ouchi, Ryoko Tokuhisa, Jun Suzuki, Kentaro Inui
In this scenario, the quality of the n-best list considerably affects the occurrence of contradictions because the final response is chosen from this n-best list.
1 code implementation • NeurIPS 2023 • Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A. Smith, Yejin Choi, Kentaro Inui
We introduce REALTIME QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis (weekly in this version).
no code implementations • 16 Jun 2022 • Hiroaki Funayama, Tasuku Sato, Yuichiroh Matsubayashi, Tomoya Mizumoto, Jun Suzuki, Kentaro Inui
Toward guaranteeing high-quality predictions, we present the first study exploring a human-in-the-loop framework that minimizes grading cost while guaranteeing grading quality by allowing an SAS model to share the grading task with a human grader.
no code implementations • BigScience (ACL) 2022 • Sosuke Kobayashi, Shun Kiyono, Jun Suzuki, Kentaro Inui
Ensembling is a popular method used to improve performance as a last resort.
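In its simplest form, ensembling just averages the class-probability outputs of several independently trained models; a minimal illustration of that generic recipe:

```python
def ensemble_average(prob_lists):
    """Average the per-class probability outputs of several models."""
    n = len(prob_lists)
    return [sum(ps) / n for ps in zip(*prob_lists)]

# Two models' probability distributions over three classes:
model_a = [0.7, 0.2, 0.1]
model_b = [0.5, 0.4, 0.1]
avg = ensemble_average([model_a, model_b])  # ≈ [0.6, 0.3, 0.1]
```

Averaging tends to cancel out individual models' errors, which is why ensembling is a common last-resort accuracy boost despite its inference cost.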
1 code implementation • 23 May 2022 • Masato Mita, Keisuke Sakaguchi, Masato Hagiwara, Tomoya Mizumoto, Jun Suzuki, Kentaro Inui
Natural language processing technology has rapidly improved automated grammatical error correction tasks, and the community has begun to explore document-level revision as one of the next challenges.
1 code implementation • 23 May 2022 • Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
Language models (LMs) have been used in cognitive modeling as well as engineering studies -- they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading.
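The standard complexity metric of this kind is surprisal, the negative log-probability an LM assigns to each word given its context; a minimal sketch (the probabilities below are made-up stand-ins for real LM outputs):

```python
import math

def surprisal_bits(probability: float) -> float:
    """Surprisal in bits: -log2 p(word | context). Higher surprisal is
    taken as a proxy for greater cognitive load during reading."""
    return -math.log2(probability)

# Hypothetical next-word probabilities from some language model:
predictable = surprisal_bits(0.5)   # 1.0 bit: an expected word
surprising = surprisal_bits(0.01)   # ~6.64 bits: an unexpected word
```

In cognitive modeling, these per-word surprisal values are then correlated with human reading-time measurements.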
no code implementations • LREC 2022 • Farjana Sultana Mim, Naoya Inoue, Shoichi Naito, Keshav Singh, Kentaro Inui
Attacking is not always straightforward and often comprises complex rhetorical moves, such that arguers might agree with one line of an argument's logic while attacking another.
1 code implementation • LREC 2022 • Shoichi Naito, Shintaro Sawada, Chihiro Nakagawa, Naoya Inoue, Kenshi Yamaguchi, Iori Shimizu, Farjana Sultana Mim, Keshav Singh, Kentaro Inui
In this paper, we define three criteria that a template set should satisfy: expressiveness, informativeness, and uniqueness, and verify the feasibility of creating a template set that satisfies these criteria as a first trial.
1 code implementation • LREC 2022 • Ana Brassard, Benjamin Heinzerling, Pride Kavumba, Kentaro Inui
We present Semi-Structured Explanations for COPA (COPA-SSE), a new crowdsourced dataset of 9,747 semi-structured, English common sense explanations for Choice of Plausible Alternatives (COPA) questions.
no code implementations • 26 Oct 2021 • Keshav Singh, Naoya Inoue, Farjana Sultana Mim, Shoichi Naitoh, Kentaro Inui
Most existing work focusing on the identification of implicit knowledge in arguments represents such knowledge in the form of commonsense or factual knowledge.
no code implementations • 28 Sep 2021 • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Masashi Yoshikawa, Kentaro Inui
Interpretable rationales for model predictions are crucial in practical applications.
1 code implementation • EMNLP 2021 • Kosuke Yamada, Yuta Hitomi, Hideaki Tamori, Ryohei Sasano, Naoaki Okazaki, Kentaro Inui, Koichi Takeda
We also consider a new headline generation strategy that takes advantage of the controllable generation order of Transformer.
2 code implementations • EMNLP 2021 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Transformer architecture has become ubiquitous in the natural language processing field.
1 code implementation • EMNLP 2021 • Naoya Inoue, Harsh Trivedi, Steven Sinha, Niranjan Balasubramanian, Kentaro Inui
Instead, we advocate for an abstractive approach, where we propose to generate a question-focused, abstractive summary of input paragraphs and then feed it to an RC system.
1 code implementation • 13 Sep 2021 • Shun Kiyono, Sosuke Kobayashi, Jun Suzuki, Kentaro Inui
Position representation is crucial for building position-aware representations in Transformers.
1 code implementation • Findings (ACL) 2021 • Hitomi Yanaka, Koji Mineshima, Kentaro Inui
We also find that the generalization performance to unseen combinations is better when the form of meaning representations is simpler.
1 code implementation • ACL 2021 • Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui
Overall, our results suggest that a cross-lingual evaluation will be necessary to construct human-like computational models.
no code implementations • NAACL 2021 • Pride Kavumba, Benjamin Heinzerling, Ana Brassard, Kentaro Inui
Here, we propose to explicitly learn a model that does well on both the easy test set with superficial cues and hard test set without superficial cues.
no code implementations • 16 Apr 2021 • Keshav Singh, Paul Reisert, Naoya Inoue, Kentaro Inui
We construct a preliminary dataset of 6,000 warrants annotated over 600 arguments for 3 debatable topics.
1 code implementation • EMNLP 2021 • Ryuto Konno, Shun Kiyono, Yuichiroh Matsubayashi, Hiroki Ouchi, Kentaro Inui
Masked language models (MLMs) have contributed to drastic performance improvements with regard to zero anaphora resolution (ZAR).
1 code implementation • EACL 2021 • Qin Dai, Naoya Inoue, Ryo Takahashi, Kentaro Inui
This paper explores how the Distantly Supervised Relation Extraction (DS-RE) can benefit from the use of a Universal Graph (UG), the combination of a Knowledge Graph (KG) and a large-scale text collection.
1 code implementation • EACL 2021 • Hitomi Yanaka, Koji Mineshima, Kentaro Inui
Despite the recent success of deep neural networks in natural language processing, the extent to which they can demonstrate human-like generalization capacities for natural language understanding remains unclear.
no code implementations • EMNLP (sustainlp) 2020 • Sosuke Kobayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui
Understanding the influence of a training instance on a neural network model leads to improving interpretability.
1 code implementation • COLING 2020 • Ryo Fujii, Masato Mita, Kaori Abe, Kazuaki Hanawa, Makoto Morishita, Jun Suzuki, Kentaro Inui
Neural Machine Translation (NMT) has shown drastic improvement in its quality when translating clean input, such as text from the news domain.
no code implementations • COLING 2020 • Takaki Otake, Sho Yokoi, Naoya Inoue, Ryo Takahashi, Tatsuki Kuribayashi, Kentaro Inui
Events in a narrative differ in salience: some are more important to the story than others.
no code implementations • COLING 2020 • Ryuto Konno, Yuichiroh Matsubayashi, Shun Kiyono, Hiroki Ouchi, Ryo Takahashi, Kentaro Inui
This study addresses two underexplored issues in CDA: how to reduce the computational cost of data augmentation and how to ensure the quality of the generated data.
no code implementations • 13 Oct 2020 • Farjana Sultana Mim, Naoya Inoue, Paul Reisert, Hiroki Ouchi, Kentaro Inui
Existing approaches for automated essay scoring and document representation learning typically rely on discourse parsers to incorporate discourse structure into text representation.
no code implementations • EMNLP 2020 • Takumi Ito, Tatsuki Kuribayashi, Masatoshi Hidaka, Jun Suzuki, Kentaro Inui
Despite the current diversity and inclusion initiatives in the academic community, researchers with a non-native command of English still face significant obstacles when writing papers in English.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Masato Mita, Shun Kiyono, Masahiro Kaneko, Jun Suzuki, Kentaro Inui
Existing approaches for grammatical error correction (GEC) largely rely on supervised learning with manually created GEC datasets.
1 code implementation • EACL 2021 • Benjamin Heinzerling, Kentaro Inui
Pretrained language models have been suggested as a possible alternative or complement to structured knowledge bases.
no code implementations • EMNLP (NLP-COVID19) 2020 • Akiko Aizawa, Frederic Bergeron, Junjie Chen, Fei Cheng, Katsuhiko Hayashi, Kentaro Inui, Hiroyoshi Ito, Daisuke Kawahara, Masaru Kitsuregawa, Hirokazu Kiyomaru, Masaki Kobayashi, Takashi Kodama, Sadao Kurohashi, Qianying Liu, Masaki Matsubara, Yusuke Miyao, Atsuyuki Morishima, Yugo Murawaki, Kazumasa Omura, Haiyue Song, Eiichiro Sumita, Shinji Suzuki, Ribeka Tanaka, Yu Tanaka, Masashi Toyoda, Nobuhiro Ueda, Honai Ueoka, Masao Utiyama, Ying Zhong
The global pandemic of COVID-19 has made the public pay close attention to related news, covering various domains, such as sanitation, treatment, and effects on education.
no code implementations • ACL 2020 • Hiroaki Funayama, Shota Sasaki, Yuichiroh Matsubayashi, Tomoya Mizumoto, Jun Suzuki, Masato Mita, Kentaro Inui
We introduce a new task formulation of SAS that matches the actual usage.
2 code implementations • ICLR 2021 • Kazuaki Hanawa, Sho Yokoi, Satoshi Hara, Kentaro Inui
In this study, we investigated relevance metrics that can provide reasonable explanations to users.
1 code implementation • ACL 2020 • Takuma Kato, Kaori Abe, Hiroki Ouchi, Shumpei Miyawaki, Jun Suzuki, Kentaro Inui
In general, the labels used in sequence labeling consist of different types of elements.
1 code implementation • ACL 2020 • Masahiro Kaneko, Masato Mita, Shun Kiyono, Jun Suzuki, Kentaro Inui
The answer to this question is not as straightforward as one might expect, because the previous common methods for incorporating an MLM into an EncDec model have potential drawbacks when applied to GEC.
Ranked #2 on Grammatical Error Correction on JFLEG
1 code implementation • ACL 2020 • Tatsuki Kuribayashi, Takumi Ito, Jun Suzuki, Kentaro Inui
We examine a methodology using neural language models (LMs) for analyzing the word order of language.
no code implementations • LREC 2020 • Ryo Nagata, Kentaro Inui, Shin'ichiro Ishikawa
In this paper, we report on datasets that we created for research in feedback comment generation, a task of automatically generating feedback comments such as a hint or an explanatory note for writing learning.
1 code implementation • EMNLP 2020 • Sho Yokoi, Ryo Takahashi, Reina Akama, Jun Suzuki, Kentaro Inui
Accordingly, we propose a method that first decouples word vectors into their norm and direction, and then computes alignment-based similarity using earth mover's distance (i.e., optimal transport cost), which we refer to as word rotator's distance.
1 code implementation • ACL 2020 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui
This indicates that the generalization ability of neural models is limited to cases where the syntactic structures are nearly the same as those in the training set.
1 code implementation • EMNLP 2020 • Reina Akama, Sho Yokoi, Jun Suzuki, Kentaro Inui
Large-scale dialogue datasets have recently become available for training neural dialogue agents.
1 code implementation • ACL 2020 • Shiki Sato, Reina Akama, Hiroki Ouchi, Jun Suzuki, Kentaro Inui
Existing automatic evaluation metrics for open-domain dialogue response generation systems correlate poorly with human evaluation.
1 code implementation • ACL 2020 • Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Ryuto Konno, Kentaro Inui
Interpretable rationales for model predictions play a critical role in practical applications.
1 code implementation • EMNLP 2020 • Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, Kentaro Inui
Attention is a key component of Transformers, which have recently achieved considerable success in natural language processing.
no code implementations • 21 Nov 2019 • Saku Sugawara, Pontus Stenetorp, Kentaro Inui, Akiko Aizawa
Existing analysis work in machine reading comprehension (MRC) is largely concerned with evaluating the capabilities of systems.
no code implementations • WS 2019 • Tianqi Wang, Naoya Inoue, Hiroki Ouchi, Tomoya Mizumoto, Kentaro Inui
Most existing SAG systems predict scores based only on the answers, including the model used as the baseline in this paper, which gives state-of-the-art performance.
no code implementations • WS 2019 • Keshav Singh, Paul Reisert, Naoya Inoue, Pride Kavumba, Kentaro Inui
Recognizing the implicit link between a claim and a piece of evidence (i.e., warrant) is the key to improving the performance of evidence detection.
no code implementations • IJCNLP 2019 • Hiroki Ouchi, Jun Suzuki, Kentaro Inui
In transductive learning, an unlabeled test set is used for model training.
no code implementations • WS 2019 • Pride Kavumba, Naoya Inoue, Benjamin Heinzerling, Keshav Singh, Paul Reisert, Kentaro Inui
Pretrained language models, such as BERT and RoBERTa, have shown large improvements in the commonsense reasoning benchmark COPA.
1 code implementation • WS 2019 • Takumi Ito, Tatsuki Kuribayashi, Hayato Kobayashi, Ana Brassard, Masato Hagiwara, Jun Suzuki, Kentaro Inui
The writing process consists of several stages such as drafting, revising, editing, and proofreading.
no code implementations • ACL 2020 • Naoya Inoue, Pontus Stenetorp, Kentaro Inui
Recent studies have revealed that reading comprehension (RC) systems learn to exploit annotation artifacts and other biases in current datasets.
no code implementations • 8 Oct 2019 • Paul Reisert, Benjamin Heinzerling, Naoya Inoue, Shun Kiyono, Kentaro Inui
Counter-arguments (CAs), one form of constructive feedback, have been proven to be useful for critical thinking skills.
1 code implementation • IJCNLP 2019 • Xiaoyu Shen, Jun Suzuki, Kentaro Inui, Hui Su, Dietrich Klakow, Satoshi Sekine
As a result, the content to be described in the text cannot be explicitly controlled.
1 code implementation • IJCNLP 2019 • Masato Hagiwara, Takumi Ito, Tatsuki Kuribayashi, Jun Suzuki, Kentaro Inui
Language technologies play a key role in assisting people with their writing.
1 code implementation • IJCNLP 2019 • Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui
The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models.
Ranked #10 on Grammatical Error Correction on CoNLL-2014 Shared Task
no code implementations • WS 2019 • Tomoya Mizumoto, Hiroki Ouchi, Yoriko Isobe, Paul Reisert, Ryo Nagata, Satoshi Sekine, Kentaro Inui
This paper provides an analytical assessment of student short answer responses with a view to potential benefits in pedagogical contexts.
no code implementations • ACL 2019 • Tatsuki Kuribayashi, Hiroki Ouchi, Naoya Inoue, Paul Reisert, Toshinori Miyoshi, Jun Suzuki, Kentaro Inui
For several natural language processing (NLP) tasks, span representation design is attracting considerable attention as a promising new technique; a common basis for an effective design has been established.
1 code implementation • ACL 2019 • Farjana Sultana Mim, Naoya Inoue, Paul Reisert, Hiroki Ouchi, Kentaro Inui
Existing document embedding approaches mainly focus on capturing sequences of words in documents.
1 code implementation • WS 2019 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos
Monotonicity reasoning is one of the important reasoning skills for any intelligent natural language inference (NLI) model in that it requires the ability to capture the interaction between lexical and syntactic structures.
1 code implementation • NAACL 2019 • Shota Sasaki, Jun Suzuki, Kentaro Inui
The idea of subword-based word embeddings has been proposed in the literature, mainly for solving the out-of-vocabulary (OOV) word problem observed in standard word-based word embeddings.
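A fastText-style illustration of the idea: compose a word's vector from the vectors of its character n-grams, so even out-of-vocabulary words receive a representation. This is a generic sketch of the subword approach, not this paper's specific model; the per-n-gram random vectors are stand-ins for learned parameters:

```python
import random

def char_ngrams(word: str, n: int = 3) -> list[str]:
    """Character n-grams with boundary markers, e.g. 'cat' -> <ca, cat, at>."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def subword_vector(word: str, dim: int = 4) -> list[float]:
    """Average the vectors of a word's character n-grams.
    Seeding the RNG with the n-gram string stands in for a
    learned lookup table, keeping the sketch deterministic."""
    grams = char_ngrams(word)
    vecs = []
    for gram in grams:
        rng = random.Random(gram)
        vecs.append([rng.uniform(-1, 1) for _ in range(dim)])
    return [sum(c) / len(vecs) for c in zip(*vecs)]

v = subword_vector("unseenword")  # OOV words still get a vector
```

Because every word decomposes into n-grams that were seen during training, the OOV problem of word-level embeddings largely disappears.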
no code implementations • SEMEVAL 2019 • Kazuaki Hanawa, Shota Sasaki, Hiroki Ouchi, Jun Suzuki, Kentaro Inui
Our system achieved 80.9% accuracy on the test set for the formal run and got the 3rd place out of 42 teams.
1 code implementation • WS 2019 • Hono Shirai, Naoya Inoue, Jun Suzuki, Kentaro Inui
Specifically, we show how to adapt the targeted sentiment analysis task for pros/cons extraction in computer science papers and conduct an annotation study.
no code implementations • WS 2019 • Qin Dai, Naoya Inoue, Paul Reisert, Ryo Takahashi, Kentaro Inui
In this work, we first investigate the feasibility of this framework on scientific data, specifically on a biomedical dataset.
1 code implementation • SEMEVAL 2019 • Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos
To investigate this issue, we introduce a new dataset, called HELP, for handling entailments with lexical and logical phenomena.
no code implementations • NAACL 2019 • Masato Mita, Tomoya Mizumoto, Masahiro Kaneko, Ryo Nagata, Kentaro Inui
This study explores the necessity of performing cross-corpora evaluation for grammatical error correction (GEC) models.
no code implementations • WS 2019 • Yuta Hitomi, Yuya Taguchi, Hideaki Tamori, Ko Kikuta, Jiro Nishitoba, Naoaki Okazaki, Kentaro Inui, Manabu Okumura
However, because there is no corpus of headlines of multiple lengths for a given article, previous research on controlling output length in headline generation has not discussed whether the system outputs could be adequately evaluated without multiple references of different lengths.
1 code implementation • WS 2018 • Paul Reisert, Naoya Inoue, Tatsuki Kuribayashi, Kentaro Inui
Most existing work on argument mining casts the problem of argumentative structure identification as a set of classification tasks (e.g., attack-support relations, stance, explicit premise/claim).
no code implementations • WS 2018 • Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, Masaaki Nagata
Developing a method for understanding the inner workings of black-box neural methods is an important research endeavor.
1 code implementation • PACLIC 2018 • Tsubasa Tagami, Hiroki Ouchi, Hiroki Asano, Kazuaki Hanawa, Kaori Uchiyama, Kaito Suzuki, Kentaro Inui, Atsushi Komiya, Atsuo Fujimura, Hitofumi Yanai, Ryo Yamashita, Akinori Machino
We present a new task, suspicious news detection using micro blog text.
no code implementations • 13 Oct 2018 • Shun Kiyono, Jun Suzuki, Kentaro Inui
We also demonstrate that our method has the "more data, better performance" property, with promising scalability to the amount of unlabeled data.
no code implementations • WS 2018 • Diana Galvan, Naoaki Okazaki, Koji Matsuda, Kentaro Inui
Temporal reasoning remains an unsolved task for Natural Language Processing (NLP), as demonstrated particularly in the clinical domain.
no code implementations • EMNLP 2018 • Sho Yokoi, Sosuke Kobayashi, Kenji Fukumizu, Jun Suzuki, Kentaro Inui
As well as deriving PMI from mutual information, we derive this new measure from the Hilbert--Schmidt independence criterion (HSIC); thus, we call the new measure the pointwise HSIC (PHSIC).
1 code implementation • EMNLP 2018 • Saku Sugawara, Kentaro Inui, Satoshi Sekine, Akiko Aizawa
From this study, we observed that (i) the baseline performances for the hard subsets remarkably degrade compared to those of entire datasets, (ii) hard questions require knowledge inference and multiple-sentence reasoning in comparison with easy questions, and (iii) multiple-choice questions tend to require a broader range of reasoning skills than answer extraction and description questions.
no code implementations • COLING 2018 • Akira Sasaki, Kazuaki Hanawa, Naoaki Okazaki, Kentaro Inui
This paper presents an approach to detect the stance of a user toward a topic based on their stances toward other topics and the social media posts of the user.
no code implementations • COLING 2018 • Yuichiroh Matsubayashi, Kentaro Inui
Capturing interactions among multiple predicate-argument structures (PASs) is a crucial issue in the task of analyzing PAS in Japanese.
no code implementations • NAACL 2018 • Shota Sasaki, Shuo Sun, Shigehiko Schamoni, Kevin Duh, Kentaro Inui
Cross-lingual information retrieval (CLIR) is a document retrieval task where the documents are written in a language different from that of the user's query.
1 code implementation • NAACL 2018 • Kento Watanabe, Yuichiroh Matsubayashi, Satoru Fukayama, Masataka Goto, Kentaro Inui, Tomoyasu Nakano
This paper presents a novel, data-driven language model that produces entire lyrics for a given input melody.
1 code implementation • ACL 2018 • Ryo Takahashi, Ran Tian, Kentaro Inui
Embedding models for entities and relations are extremely useful for recovering missing facts in a knowledge base.
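A classic example of such a model is TransE, which scores a triple (head, relation, tail) by how closely head + relation approximates tail in the embedding space. The toy sketch below uses TransE purely as an illustration of the model family, not as this paper's method, and the embeddings are made up:

```python
import math

def transe_score(head, relation, tail):
    """TransE-style distance: a triple is plausible when
    head + relation ≈ tail, so lower scores are better."""
    return math.sqrt(sum((h + r - t) ** 2
                         for h, r, t in zip(head, relation, tail)))

# Toy embeddings where ("tokyo", "capital_of", "japan") should hold:
tokyo, japan = [1.0, 2.0], [3.0, 1.0]
capital_of = [2.0, -1.0]

good = transe_score(tokyo, capital_of, japan)  # 0.0: consistent triple
bad = transe_score(japan, capital_of, tokyo)   # larger: implausible triple
```

Ranking candidate tails by this score is how such models recover missing facts in a knowledge base.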
no code implementations • ACL 2018 • Reina Akama, Kento Watanabe, Sho Yokoi, Sosuke Kobayashi, Kentaro Inui
This paper presents the first study aimed at capturing stylistic similarity between words in an unsupervised manner.
no code implementations • 22 Dec 2017 • Shun Kiyono, Sho Takase, Jun Suzuki, Naoaki Okazaki, Kentaro Inui, Masaaki Nagata
The encoder-decoder model is widely used in natural language generation tasks.
no code implementations • 7 Dec 2017 • Paul Reisert, Naoya Inoue, Naoaki Okazaki, Kentaro Inui
Our coverage result of 74.6% indicates that argumentative relations can reasonably be explained by our small pattern set.
no code implementations • IJCNLP 2017 • Reina Akama, Kazuaki Inada, Naoya Inoue, Sosuke Kobayashi, Kentaro Inui
We propose a novel, data-driven, and stylistically consistent dialog response generation system.
no code implementations • IJCNLP 2017 • Hiroki Asano, Tomoya Mizumoto, Kentaro Inui
In grammatical error correction (GEC), automatically evaluating system outputs requires gold-standard references, which must be created manually and thus tend to be both expensive and limited in coverage.
no code implementations • IJCNLP 2017 • Yuta Hitomi, Hideaki Tamori, Naoaki Okazaki, Kentaro Inui
This paper explores the idea of robot editors, automated proofreaders that enable journalists to improve the quality of their articles.
no code implementations • IJCNLP 2017 • Yuichiroh Matsubayashi, Kentaro Inui
The research trend in Japanese predicate-argument structure (PAS) analysis is shifting from pointwise prediction models with local features to global models designed to search for globally optimal solutions.
1 code implementation • IJCNLP 2017 • Sosuke Kobayashi, Naoaki Okazaki, Kentaro Inui
This study addresses the problem of identifying the meaning of unknown words or entities in a discourse with respect to the word embedding approaches used in neural language models.
no code implementations • WS 2017 • Hideaki Tamori, Yuta Hitomi, Naoaki Okazaki, Kentaro Inui
We address the issue of the quality of journalism and analyze daily article revision logs from a Japanese newspaper company.
1 code implementation • ACL 2016 • Sho Takase, Naoaki Okazaki, Kentaro Inui
Learning distributed representations for relation instances is a central technique in downstream NLP applications.
no code implementations • ACL 2017 • Akira Sasaki, Kazuaki Hanawa, Naoaki Okazaki, Kentaro Inui
We present in this paper our approach for modeling inter-topic preferences of Twitter users: for example, those who agree with the Trans-Pacific Partnership (TPP) also agree with free trade.
no code implementations • COLING 2016 • Kento Watanabe, Yuichiroh Matsubayashi, Naho Orita, Naoaki Okazaki, Kentaro Inui, Satoru Fukayama, Tomoyasu Nakano, Jordan Smith, Masataka Goto
This study proposes a computational model of the discourse segments in lyrics to understand and to model the structure of lyrics.
no code implementations • COLING 2016 • Naoya Inoue, Yuichiroh Matsubayashi, Masayuki Ono, Naoaki Okazaki, Kentaro Inui
This paper proposes a novel problem setting of selectional preference (SP) between a predicate and its arguments, called context-sensitive SP (CSP).
1 code implementation • ACL 2016 • Ran Tian, Naoaki Okazaki, Kentaro Inui
This paper connects a vector-based composition model to a formal semantics, the Dependency-based Compositional Semantics (DCS).
1 code implementation • EACL 2017 • Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, Sebastian Riedel
In this work, we investigate several neural network architectures for fine-grained entity type classification.
no code implementations • LREC 2016 • Corentin Dumont, Ran Tian, Kentaro Inui
We chose a popular game called 'Minecraft', and created a QA corpus with a knowledge database related to this game and the ontology of a meaning representation that will be used to structure this database.
no code implementations • WS 2016 • Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, Sebastian Riedel
In this work we propose a novel attention-based neural network model for the task of fine-grained entity type classification that unlike previously proposed models recursively composes representations of entity mention contexts.
no code implementations • 26 Nov 2015 • Ran Tian, Naoaki Okazaki, Kentaro Inui
Additive composition (Foltz et al., 1998; Landauer and Dumais, 1997; Mitchell and Lapata, 2010) is a widely used method for computing meanings of phrases, which takes the average of vector representations of the constituent words.
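The method as described fits in a few lines: the phrase vector is simply the element-wise average of its constituent word vectors (the toy vectors below are illustrative, not from a trained model):

```python
def additive_composition(vectors):
    """Phrase meaning as the element-wise average of word vectors."""
    n = len(vectors)
    return [sum(component) / n for component in zip(*vectors)]

# Toy 3-dimensional word vectors:
red = [1.0, 0.0, 0.5]
car = [0.0, 1.0, 0.5]
red_car = additive_composition([red, car])  # [0.5, 0.5, 0.5]
```

Despite its simplicity, this averaging baseline is surprisingly competitive, which is what motivates the theoretical analysis of when and why it works.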