Search Results for author: Satoshi Sekine

Found 32 papers, 6 papers with code

What Makes Reading Comprehension Questions Easier?

1 code implementation EMNLP 2018 Saku Sugawara, Kentaro Inui, Satoshi Sekine, Akiko Aizawa

From this study, we observed that (i) baseline performance on the hard subsets degrades remarkably compared to performance on the entire datasets, (ii) hard questions require knowledge inference and multiple-sentence reasoning more often than easy questions do, and (iii) multiple-choice questions tend to require a broader range of reasoning skills than answer-extraction and description questions.

Machine Reading Comprehension Multiple-choice +1

Can neural networks understand monotonicity reasoning?

1 code implementation WS 2019 Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, Johan Bos

Monotonicity reasoning is an important reasoning skill for any intelligent natural language inference (NLI) model, in that it requires capturing the interaction between lexical and syntactic structures (a worked sketch follows this entry).

Data Augmentation Natural Language Inference
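
As a concrete illustration of the monotonicity reasoning described above: in an upward-monotone context (e.g. under "some") a word can be replaced by a more general one while preserving entailment, and in a downward-monotone context (e.g. under "no") by a more specific one. The minimal Python sketch below hard-codes a toy taxonomy purely for demonstration; none of it comes from the paper's dataset.

    # Minimal illustration of monotonicity reasoning for NLI.
    # The tiny hypernym table and polarity rules are hard-coded
    # assumptions for demonstration only.

    HYPERNYM = {"beagle": "dog", "dog": "animal"}  # child -> parent

    def is_more_general(a, b):
        """True if `a` is an ancestor (hypernym) of `b` in the toy taxonomy."""
        while b in HYPERNYM:
            b = HYPERNYM[b]
            if b == a:
                return True
        return False

    def entails(premise_noun, hypothesis_noun, polarity):
        """Entailment check under a monotone context.

        "up":   "Some <noun> barked" licenses generalization (dog -> animal).
        "down": "No <noun> barked" licenses specialization (animal -> dog).
        """
        if polarity == "up":
            return is_more_general(hypothesis_noun, premise_noun)
        if polarity == "down":
            return is_more_general(premise_noun, hypothesis_noun)
        raise ValueError("polarity must be 'up' or 'down'")

    # "Some dog barked" entails "Some animal barked" (upward monotone).
    assert entails("dog", "animal", "up")
    # "No animal barked" entails "No dog barked" (downward monotone).
    assert entails("animal", "dog", "down")
    # But "No dog barked" does not entail "No animal barked".
    assert not entails("dog", "animal", "down")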

An Entity-Based approach to Answering Recurrent and Non-Recurrent Questions with Past Answers

no code implementations WS 2016 Anietie Andy, Mugizi Rwebangira, Satoshi Sekine

For unanswered questions that do not have a past resolved question with a shared need, we propose using the best answer to a past resolved question with similar needs (a retrieval sketch follows this entry).

Community Question Answering Entity Linking
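
A minimal sketch of the reuse strategy above, with plain word overlap (Jaccard) standing in for the paper's entity-based matching; the archive, questions, and threshold are illustrative assumptions.

    # Toy sketch: answer a new question by reusing the best answer of the
    # most similar past resolved question. Jaccard word overlap stands in
    # for the paper's entity-based similarity.

    def jaccard(a, b):
        sa, sb = set(a.lower().split()), set(b.lower().split())
        return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

    # Hypothetical archive: past resolved question -> its best answer.
    ARCHIVE = {
        "how do i renew my passport in new york": "Visit a passport agency ...",
        "best way to learn python for beginners": "Start with the official tutorial ...",
    }

    def answer(new_question, threshold=0.3):
        best_q = max(ARCHIVE, key=lambda q: jaccard(q, new_question))
        if jaccard(best_q, new_question) >= threshold:
            return ARCHIVE[best_q]  # reuse the answer to a similar past need
        return None  # no sufficiently similar past question found

    print(answer("how can i renew a passport in new york"))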

An Empirical Study on Fine-Grained Named Entity Recognition

no code implementations COLING 2018 Khai Mai, Thai-Hoang Pham, Minh Trung Nguyen, Tuan Duc Nguyen, Danushka Bollegala, Ryohei Sasano, Satoshi Sekine

However, there is little research on fine-grained NER (FG-NER), in which hundreds of named entity categories must be recognized, especially for non-English languages.

Chatbot named-entity-recognition +3

Analytic Score Prediction and Justification Identification in Automated Short Answer Scoring

no code implementations WS 2019 Tomoya Mizumoto, Hiroki Ouchi, Yoriko Isobe, Paul Reisert, Ryo Nagata, Satoshi Sekine, Kentaro Inui

This paper provides an analytical assessment of student short answer responses with a view to potential benefits in pedagogical contexts.

SHINRA: Structuring Wikipedia by Collaborative Contribution

no code implementations AKBC 2019 Satoshi Sekine, Akio Kobayashi, Kouta Nakayama

We believe this situation can be improved by the following changes: (1) designing the shared task to construct a knowledge base, rather than only evaluating systems on limited test data; (2) making the outputs of all systems public, so that ensemble learning can produce better results than the best single system (see the voting sketch below); and (3) repeating the task, so that each round can use larger and better training data drawn from the output of the previous round (bootstrapping and active learning). We conducted “SHINRA2018” under this scheme, and in this paper we report the results and the future directions of the project.

Active Learning Attribute +1
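
Point 2 of the scheme rests on ensemble learning over the participants' outputs. A minimal majority-voting sketch, with fabricated system outputs in place of real SHINRA submissions (ties are broken arbitrarily):

    # Toy majority-vote ensemble over shared-task outputs: each system maps
    # a Wikipedia page to a named-entity category, and the most frequent
    # label wins. The outputs below are placeholders, not SHINRA results.
    from collections import Counter

    system_outputs = [
        {"Mount_Fuji": "Mountain", "Tokyo": "City"},      # system A
        {"Mount_Fuji": "Mountain", "Tokyo": "Province"},  # system B
        {"Mount_Fuji": "Island",   "Tokyo": "City"},      # system C
    ]

    def ensemble(outputs):
        merged = {}
        for page in set().union(*outputs):
            votes = Counter(o[page] for o in outputs if page in o)
            merged[page] = votes.most_common(1)[0][0]
        return merged

    print(ensemble(system_outputs))  # {'Mount_Fuji': 'Mountain', 'Tokyo': 'City'}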

SHINRA2020-ML: Categorizing 30-language Wikipedia into fine-grained NE based on "Resource by Collaborative Contribution" scheme

no code implementations AKBC 2021 Satoshi Sekine, Kouta Nakayama, Maya Ando, Yu Usami, Masako Nomoto, Koji Matsuda

In our "Resource by Collaborative Contribution (RbCC)" scheme, we conducted a shared task of structuring Wikipedia to attract participants but simultaneously submitted results are used to construct a knowledge base.

Ensemble Learning

Uncertainty Regularized Multi-Task Learning

no code implementations WASSA (ACL) 2022 Kourosh Meshgi, Maryam Sadat Mirzaei, Satoshi Sekine

By sharing parameters and providing task-independent shared features, multi-task deep neural networks are considered one of the most promising approaches to learning from different tasks and domains in parallel (a loss-weighting sketch follows this entry).

Multi-Task Learning text-classification +1
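
The title points to task losses regularized by learned uncertainty. Since the excerpt does not spell out the paper's exact formulation, the sketch below uses the well-known homoscedastic-uncertainty weighting of Kendall et al. (2018) as a stand-in, with dummy losses: each task loss L_i is scaled by exp(-s_i) for a learnable log-variance s_i, and s_i itself is added as a penalty so the weights cannot collapse to zero.

    # Sketch of an uncertainty-weighted multi-task loss (Kendall et al.,
    # 2018, used as a stand-in for the paper's regularizer):
    #   total = sum_i exp(-s_i) * L_i + s_i
    # Noisier tasks are automatically down-weighted.
    import torch

    class UncertaintyWeightedLoss(torch.nn.Module):
        def __init__(self, num_tasks):
            super().__init__()
            self.log_vars = torch.nn.Parameter(torch.zeros(num_tasks))

        def forward(self, task_losses):
            total = 0.0
            for i, loss in enumerate(task_losses):
                total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
            return total

    # Dummy per-task losses, e.g. a classification head and a tagging head.
    criterion = UncertaintyWeightedLoss(num_tasks=2)
    loss = criterion([torch.tensor(0.7), torch.tensor(2.3)])
    loss.backward()  # gradients flow into the learned log-variances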

Resource of Wikipedias in 31 Languages Categorized into Fine-Grained Named Entities

no code implementations COLING 2022 Satoshi Sekine, Kouta Nakayama, Masako Nomoto, Maya Ando, Asuka Sumida, Koji Matsuda

The training data were provided by the Japanese categorization and the interlanguage links, and the task was to categorize the Wikipedia pages in 30 languages that have no language links from Japanese Wikipedia (20M pages in total); a label-projection sketch follows this entry.

Attribute Attribute Extraction +2
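
A minimal sketch of how such silver training data can be built by projecting the Japanese labels through interlanguage links; the link table and labels are fabricated examples, and per-language classifiers trained on this output would then handle the pages that lack links.

    # Toy projection of Japanese fine-grained NE labels to other languages
    # via interlanguage links. All titles and labels are fabricated.

    ja_labels = {"富士山": "Mountain", "東京都": "Province"}

    # (Japanese title, target language) -> title in that language
    langlinks = {
        ("富士山", "en"): "Mount Fuji",
        ("富士山", "de"): "Fujisan",
        ("東京都", "en"): "Tokyo",
    }

    def project_labels(ja_labels, langlinks):
        """Build per-language silver training data from Japanese labels."""
        projected = {}
        for (ja_title, lang), title in langlinks.items():
            if ja_title in ja_labels:
                projected.setdefault(lang, {})[title] = ja_labels[ja_title]
        return projected

    print(project_labels(ja_labels, langlinks))
    # {'en': {'Mount Fuji': 'Mountain', 'Tokyo': 'Province'},
    #  'de': {'Fujisan': 'Mountain'}}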

Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance

no code implementations 22 Feb 2024 Ziqi Yin, Hao Wang, Kaito Horio, Daisuke Kawahara, Satoshi Sekine

We investigate the impact of politeness levels in prompts on the performance of large language models (LLMs).
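
A minimal sketch of the kind of harness such a study implies: the same question is wrapped in prompt templates at different politeness levels and accuracy is compared per level. The templates and the query_llm placeholder are assumptions, not the paper's actual prompts or models.

    # Toy harness for probing politeness effects on an LLM. `query_llm`
    # is a placeholder; swap in a real client before running `evaluate`.

    POLITENESS_TEMPLATES = {
        "very_polite": "Could you please kindly answer: {q} Thank you very much!",
        "neutral":     "Answer the following question: {q}",
        "rude":        "Answer this now: {q}",
    }

    def query_llm(prompt):
        """Placeholder for a real LLM API call."""
        raise NotImplementedError("plug in an actual model client here")

    def evaluate(question, gold, n_trials=5):
        scores = {}
        for level, template in POLITENESS_TEMPLATES.items():
            prompt = template.format(q=question)
            correct = sum(gold in query_llm(prompt) for _ in range(n_trials))
            scores[level] = correct / n_trials
        return scores  # e.g. {"very_polite": 0.8, "neutral": 0.8, "rude": 0.6}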
