Search Results for author: Kazutoshi Shinoda

Found 8 papers, 3 papers with code

Can Question Generation Debias Question Answering Models? A Case Study on Question–Context Lexical Overlap

no code implementations · EMNLP (MRQA) 2021 · Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa

Question answering (QA) models for reading comprehension have been demonstrated to exploit unintended dataset biases such as question–context lexical overlap.

Data Augmentation · Question Answering · +3

Which Shortcut Solution Do Question Answering Models Prefer to Learn?

1 code implementation · 29 Nov 2022 · Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa

We assume that the learnability of shortcuts, i.e., how easy a shortcut is to learn, is useful for mitigating the problem.

Multiple-choice Question Answering · +1

Look to the Right: Mitigating Relative Position Bias in Extractive Question Answering

no code implementations · 26 Oct 2022 · Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa

Specifically, we find that when the relative positions in a training set are biased, the performance on examples with relative positions unseen during training is significantly degraded.

Extractive Question-Answering · Position · +1

Improving the Robustness to Variations of Objects and Instructions with a Neuro-Symbolic Approach for Interactive Instruction Following

no code implementations · 13 Oct 2021 · Kazutoshi Shinoda, Yuki Takezawa, Masahiro Suzuki, Yusuke Iwasawa, Yutaka Matsuo

An interactive instruction following task has been proposed as a benchmark for learning to map natural language instructions and first-person vision into sequences of actions for interacting with objects in 3D environments.

Instruction Following

Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap

1 code implementation · 23 Sep 2021 · Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa

Question answering (QA) models for reading comprehension have been demonstrated to exploit unintended dataset biases such as question-context lexical overlap.

Data Augmentation · Question Answering · +3

Improving the Robustness of QA Models to Challenge Sets with Variational Question-Answer Pair Generation

1 code implementation · ACL 2021 · Kazutoshi Shinoda, Saku Sugawara, Akiko Aizawa

While most existing QAG methods aim to improve the quality of synthetic examples, we conjecture that diversity-promoting QAG can mitigate the sparsity of training sets and lead to better robustness.

Data Augmentation · Machine Reading Comprehension · +1

Multi-style Generative Reading Comprehension

no code implementations · ACL 2019 · Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita

Second, whereas previous studies built a separate model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a single model to improve the NLG capability for all styles involved.

Abstractive Text Summarization · Question Answering · +2
