no code implementations • SemEval (NAACL) 2022 • Gaku Morio, Hiroaki Ozaki, Atsuki Yamaguchi, Yasuhiro Sogawa
In this task, opinions must be parsed while accounting for both structure- and context-dependent subjective aspects, which differs from typical dependency parsing.
no code implementations • SIGDIAL (ACL) 2022 • Amalia Adiba, Takeshi Homma, Yasuhiro Sogawa
Therefore, unlike previous studies, we propose a domain-adaptation framework of MRC under the assumption that the only available data in the target domain are human conversations between a user asking questions and an expert answering the questions.
no code implementations • SemEval (NAACL) 2022 • Atsuki Yamaguchi, Gaku Morio, Hiroaki Ozaki, Yasuhiro Sogawa
In this paper, we describe our system for SemEval-2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding.
1 code implementation • 14 Nov 2023 • Yuichi Sasazawa, Kenichi Yokote, Osamu Imaichi, Yasuhiro Sogawa
We rank the documents with BM25 and language models, then re-rank documents with high similarity to the query using a model ensemble or a larger language model.
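The two-stage retrieve-then-rerank pipeline described in this snippet can be sketched as follows. This is an illustrative toy, not the authors' implementation: the minimal BM25 scorer is hand-rolled, and the hypothetical `strong_scorer` hook stands in for their model ensemble or larger language model.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Minimal BM25 over whitespace-tokenized docs (first-stage ranking)."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    avgdl = sum(len(t) for t in tokenized) / n
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for q in query.lower().split():
            if tf[q] == 0:
                continue
            idf = math.log(1 + (n - df[q] + 0.5) / (df[q] + 0.5))
            s += idf * tf[q] * (k1 + 1) / (
                tf[q] + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

def two_stage_rank(query, docs, top_k=2, strong_scorer=None):
    """Rank all docs with BM25, then re-rank only the top-k candidates
    with a stronger scorer (placeholder for a neural re-ranker)."""
    scores = bm25_scores(query, docs)
    order = sorted(range(len(docs)), key=scores.__getitem__, reverse=True)
    head, tail = order[:top_k], order[top_k:]
    if strong_scorer is not None:
        head = sorted(head, key=lambda i: strong_scorer(query, docs[i]),
                      reverse=True)
    return [docs[i] for i in head + tail]
```

Re-scoring only the high-similarity head keeps the expensive second-stage model off the bulk of the collection.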
1 code implementation • 11 Aug 2023 • Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa
We rethink this and adopt a well-grounded set of deduction rules based on formal logic theory, which can derive any other deduction rules when combined in a multistep way.
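The idea that a small set of basic rules can derive further conclusions when combined in multiple steps can be illustrated with a toy forward-chaining loop. This is not the authors' rule set or code; `forward_chain` and its inputs are hypothetical names.

```python
def forward_chain(facts, rules):
    """Repeatedly apply rules of the form (premises -> conclusion)
    until no new fact can be derived (a fixpoint).

    Chaining a basic rule like modus ponens across steps derives
    conclusions no single application could reach."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts
```

For example, from the fact `A` and the rules `A -> B` and `B -> C`, two chained steps derive `C`.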
1 code implementation • 6 Aug 2023 • Yuta Koreeda, Terufumi Morishita, Osamu Imaichi, Yasuhiro Sogawa
Writing a README is a crucial aspect of software development, as it plays a vital role in managing and reusing program code.
1 code implementation • 16 Jun 2023 • Takuro Fujii, Koki Shibata, Atsuki Yamaguchi, Terufumi Morishita, Yasuhiro Sogawa
This paper investigates the effect of tokenizers on the downstream performance of pretrained language models (PLMs) in scriptio continua languages where no explicit spaces exist between words, using Japanese as a case study.
1 code implementation • 18 May 2023 • Atsuki Yamaguchi, Hiroaki Ozaki, Terufumi Morishita, Gaku Morio, Yasuhiro Sogawa
Masked language modeling (MLM) is a widely used self-supervised pretraining objective, in which a model must predict the original token that has been replaced with a mask token, given the surrounding context.
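The masking step of the MLM objective described here can be sketched in a few lines. This is a simplified illustration (real implementations such as BERT's also sometimes keep or randomly replace the selected token); `mask_tokens` is a hypothetical name, not from the paper.

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Replace a random ~15% of tokens with a mask symbol.

    The model's training target is to recover the original token at
    each masked position; unmasked positions are not scored."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(mask_token)
            labels.append(tok)    # predict the original token here
        else:
            inputs.append(tok)
            labels.append(None)   # position excluded from the loss
    return inputs, labels
```

The loss is then computed only at positions whose label is not `None`.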
1 code implementation • 19 Apr 2023 • Yuichi Sasazawa, Terufumi Morishita, Hiroaki Ozaki, Osamu Imaichi, Yasuhiro Sogawa
In this paper, we tackle the novel task of controlling not only which keywords appear but also the position of each keyword in the generated text.
no code implementations • 3 Mar 2023 • Yuta Koreeda, Ken-ichi Yokote, Hiroaki Ozaki, Atsuki Yamaguchi, Masaya Tsunokake, Yasuhiro Sogawa
Based on the multilingual, multi-task nature of the task and the low-resource setting, we investigated different cross-lingual and multi-task strategies for training the pretrained language models.
1 code implementation • 10 Jul 2017 • Wei Qian, Wending Li, Yasuhiro Sogawa, Ryohei Fujimaki, Xitong Yang, Ji Liu
Sparsity learning with known grouping structure has received considerable attention due to wide modern applications in high-dimensional data analysis.