no code implementations • EMNLP (sdp) 2020 • Jiyi Li, Ayaka Sato, Kazuya Shimura, Fumiyo Fukumoto
To handle the small size of published datasets for the target score aspect, we propose a multi-task approach that leverages additional information from other score aspects to improve performance on the target.
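The general multi-task idea — one shared encoder feeding several aspect-specific heads, so a scarce target aspect benefits from auxiliary aspects — can be sketched as follows. This is an illustrative toy, not the authors' model; the encoder, head weights, and aspect names are all hypothetical.

```python
# Toy multi-task scoring sketch: a shared encoder feeds per-aspect heads,
# and the training loss sums over aspects so auxiliary data regularizes
# the shared representation. All names and weights are hypothetical.

def shared_encoder(text):
    # Stand-in for a real neural encoder shared by all aspect heads:
    # two crude surface features of the input text.
    return [len(text), sum(c.isalpha() for c in text)]

def make_head(w, b):
    # Each aspect (e.g. "target_aspect", "aux_aspect") has its own
    # linear scoring head on top of the shared features.
    return lambda feats: sum(wi * fi for wi, fi in zip(w, feats)) + b

heads = {
    "target_aspect": make_head([0.1, 0.2], 0.5),
    "aux_aspect": make_head([0.3, 0.1], 0.0),
}

def multitask_loss(text, gold, weights):
    # Weighted sum of per-aspect squared errors; the auxiliary aspect
    # contributes extra supervision through the shared encoder.
    feats = shared_encoder(text)
    return sum(weights[a] * (heads[a](feats) - gold[a]) ** 2 for a in gold)

loss = multitask_loss("a good essay",
                      {"target_aspect": 3.0, "aux_aspect": 2.0},
                      {"target_aspect": 1.0, "aux_aspect": 0.5})
```

In a real system the heads would be trained jointly by backpropagation; the point of the sketch is only the shared-encoder/per-head structure of the loss.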
no code implementations • EMNLP 2020 • Wenshuo Yang, Jiyi Li, Fumiyo Fukumoto, Yanming Ye
The data imbalance problem is a crucial issue for multi-label text classification.
1 code implementation • 10 Apr 2024 • Xinfeng Wang, Fumiyo Fukumoto, Jin Cui, Yoshimi Suzuki, Jiyi Li, Dongjin Yu
To tackle the skewed distribution, we propose two strategies for disentangling interactions: (1) modeling individual biases to learn unbiased item embeddings, and (2) incorporating item popularity with positional encoding.
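Strategy (2) above — encoding item popularity positionally — can be illustrated with a standard sinusoidal encoding indexed by popularity rank rather than sequence position. This is a hedged sketch of the general technique; the paper's exact formulation may differ, and the item names and counts below are made up.

```python
import math

# Illustrative sketch: inject item popularity into an embedding via a
# sinusoidal positional encoding over popularity rank, so head and
# long-tail items receive distinct signals. Hypothetical data.

def popularity_rank(counts):
    # Rank items by interaction count (0 = most popular).
    order = sorted(counts, key=counts.get, reverse=True)
    return {item: r for r, item in enumerate(order)}

def popularity_encoding(rank, dim):
    # Transformer-style sinusoidal encoding, indexed by popularity rank
    # instead of token position.
    enc = []
    for i in range(dim):
        angle = rank / (10000 ** (2 * (i // 2) / dim))
        enc.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return enc

counts = {"item_a": 500, "item_b": 20, "item_c": 3}
ranks = popularity_rank(counts)
enc = popularity_encoding(ranks["item_b"], dim=4)
```

The resulting vector would typically be added to (or concatenated with) the item embedding before the recommender's interaction layers.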
2 code implementations • 10 Apr 2024 • Xinfeng Wang, Fumiyo Fukumoto, Jin Cui, Yoshimi Suzuki, Dongjin Yu
In this paper, we propose a negative feedback-aware recommender model (NFARec) that maximizes the leverage of negative feedback.
no code implementations • 15 Mar 2024 • Jin Cui, Fumiyo Fukumoto, Xinfeng Wang, Yoshimi Suzuki, Jiyi Li, Noriko Tomuro, Wanzeng Kong
To address the issue of multiple aspect categories and sentiment entanglement, we propose a hierarchical disentanglement module to extract distinct categories and sentiment features.
1 code implementation • EMNLP 2021 • Zhiwei Zhang, Jiyi Li, Fumiyo Fukumoto, Yanming Ye
In addition, we enhance the information exchange and constraints among tasks by proposing a regularization term between the sentence attention scores of abstract retrieval and the estimated outputs of rationale selection.
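A cross-task regularizer of this kind can be sketched as a divergence penalty between the two tasks' sentence-level distributions. This is a minimal illustration assuming a KL-divergence penalty; the paper's actual term may use a different divergence, and the scores below are invented.

```python
import math

# Hedged sketch of a cross-task consistency regularizer: penalize the
# divergence between sentence attention scores from abstract retrieval
# and rationale-selection probabilities over the same sentences.

def normalize(scores):
    # Turn raw non-negative scores into a probability distribution.
    total = sum(scores)
    return [s / total for s in scores]

def kl_divergence(p, q, eps=1e-9):
    # KL(p || q); eps guards against log(0).
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

attention = normalize([0.5, 0.3, 0.2])    # abstract-retrieval attention
rationale = normalize([0.6, 0.25, 0.15])  # rationale-selection outputs
reg = kl_divergence(attention, rationale)
# reg shrinks toward zero as the two tasks agree on sentence importance
```

During training, `reg` would be added (with some weight) to the task losses, nudging the two heads toward consistent notions of which sentences matter.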
no code implementations • COLING 2020 • Panitan Muangkammuen, Sheng Xu, Fumiyo Fukumoto, Kanda Runapongsa Saikaew, Jiyi Li
Local coherence relations between two phrases/sentences, such as cause-effect and contrast, strongly influence whether a text is well-structured or not.
no code implementations • AACL 2020 • Panitan Muangkammuen, Fumiyo Fukumoto
Multi-task learning models, one of the deep learning techniques recently applied to many NLP tasks, show great potential for automated essay scoring (AES).
no code implementations • WS 2020 • Chuandong Su, Fumiyo Fukumoto, Xiaoxi Huang, Jiyi Li, Rongbo Wang, Zhiqun Chen
Machine metaphor understanding is one of the major topics in NLP.
no code implementations • LREC 2020 • Jiajun Xu, Kyosuke Masuda, Hiromitsu Nishizaki, Fumiyo Fukumoto, Yoshimi Suzuki
Therefore, this paper proposes a method for semi-automatically constructing an emotion corpus.
no code implementations • WS 2019 • Jiyi Li, Fumiyo Fukumoto
To ensure the quality of crowdsourced data, requesters can assign multiple workers to one question and then aggregate the multiple answers of diverse quality into a gold-standard one.
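The aggregation step described above is most simply realized by majority voting, the standard baseline for collapsing several noisy worker answers into one label. The sketch below shows that baseline only — the paper studies aggregation more broadly, and the questions and answers here are made up.

```python
from collections import Counter

# Minimal answer-aggregation sketch: majority voting over the answers
# that multiple crowd workers gave to each question. Hypothetical data.

def majority_vote(answers):
    # Return the most frequent answer; Counter breaks ties by the
    # order in which answers were first seen.
    return Counter(answers).most_common(1)[0][0]

def aggregate(question_answers):
    # Map each question to its aggregated (majority) label.
    return {q: majority_vote(a) for q, a in question_answers.items()}

labels = aggregate({
    "q1": ["cat", "cat", "dog"],
    "q2": ["pos", "neg", "pos", "pos"],
})
```

More refined aggregation methods additionally estimate per-worker reliability and weight votes accordingly, which matters when answer quality varies widely.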
1 code implementation • ACL 2019 • Kazuya Shimura, Jiyi Li, Fumiyo Fukumoto
Distributions of the senses of words are often highly skewed and strongly reflect the domain of a document.
1 code implementation • EMNLP 2018 • Kazuya Shimura, Jiyi Li, Fumiyo Fukumoto
The lower the HS level, the worse the categorization performance.
no code implementations • LREC 2014 • Suguru Matsuyoshi, Ryo Otsuki, Fumiyo Fukumoto
As a foundation for developing a negation focus detector for Japanese, we have annotated the text data of "Rakuten Travel: User review data" and the newspaper subcorpus of the "Balanced Corpus of Contemporary Written Japanese" with labels proposed in our annotation scheme.