no code implementations • EMNLP (ArgMining) 2021 • Takahiro Kondo, Koki Washio, Katsuhiko Hayashi, Yusuke Miyao
We propose a methodology for representing the reasoning structure of arguments using Bayesian networks and predicate logic facilitated by argumentation schemes.
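A minimal sketch (not the authors' actual model) of the core idea: encoding a toy argumentation scheme "a credible expert asserts a claim" as a two-node Bayesian network with pgmpy. The node names and probabilities are illustrative assumptions only.

```python
# Toy Bayesian-network encoding of an argumentation scheme (illustrative only).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Expert_Credible -> Claim_True
model = BayesianNetwork([("Expert_Credible", "Claim_True")])

cpd_expert = TabularCPD("Expert_Credible", 2, [[0.2], [0.8]])  # P(credible) = 0.8
cpd_claim = TabularCPD(
    "Claim_True", 2,
    # columns: Expert_Credible = 0, 1
    [[0.5, 0.1],   # P(claim false | ...)
     [0.5, 0.9]],  # P(claim true  | ...)
    evidence=["Expert_Credible"], evidence_card=[2],
)
model.add_cpds(cpd_expert, cpd_claim)
assert model.check_model()

# Belief in the claim after observing that the expert is credible.
posterior = VariableElimination(model).query(
    variables=["Claim_True"], evidence={"Expert_Credible": 1}
)
print(posterior)
```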
1 code implementation • 17 Jan 2025 • Kazuma Onishi, Katsuhiko Hayashi
Many current high-performance XML models involve a large number of hyperparameters, which complicates the tuning process.
no code implementations • 5 Jan 2025 • Takashi Harada, Takehiro Motomitsu, Katsuhiko Hayashi, Yusuke Sakai, Hidetaka Kamigaito
In recent years, there has been a notable increase in research on machine learning models for music retrieval and generation systems that are capable of taking natural language sentences as inputs.
no code implementations • 29 Dec 2024 • Shintaro Ozaki, Yuta Kato, Siyuan Feng, Masayo Tomita, Kazuki Hayashi, Ryoma Obara, Masafumi Oyamada, Katsuhiko Hayashi, Hidetaka Kamigaito, Taro Watanabe
Retrieval Augmented Generation (RAG) complements the knowledge of Large Language Models (LLMs) by leveraging external information to enhance response accuracy for queries.
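A minimal sketch of the RAG pattern described above, with a TF-IDF retriever standing in for a real vector store and a placeholder `generate()` standing in for an LLM call; both are assumptions, not the paper's setup.

```python
# Minimal RAG loop: retrieve supporting text, then condition generation on it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Mount Fuji is the highest mountain in Japan.",
    "The Great Wall of China is over 13,000 miles long.",
]

vectorizer = TfidfVectorizer().fit(docs)
doc_vecs = vectorizer.transform(docs)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([query]), doc_vecs)[0]
    return [docs[i] for i in sims.argsort()[::-1][:k]]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (any chat-completion API would go here)."""
    return f"<LLM response conditioned on: {prompt[:60]}...>"

query = "When was the Eiffel Tower finished?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}"))
```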
no code implementations • 26 Dec 2024 • Siyuan Feng, Teruya Yoshinaga, Katsuhiko Hayashi, Koki Washio, Hidetaka Kamigaito
Today, manga has gained worldwide popularity.
no code implementations • 19 Oct 2024 • Hidetaka Kamigaito, Hiroyuki Deguchi, Yusuke Sakai, Katsuhiko Hayashi, Taro Watanabe
We also introduce a new MBR approach, Metric-augmented MBR (MAMBR), which increases diversity by adjusting the behavior of utility functions without altering the pseudo-references.
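For orientation, a minimal sketch of plain MBR decoding (not MAMBR itself): each candidate is scored by its average utility against all sampled pseudo-references, with unigram F1 as a stand-in utility function.

```python
# Plain MBR decoding: pick the candidate with the highest expected utility.
def unigram_f1(hyp: str, ref: str) -> float:
    h, r = set(hyp.split()), set(ref.split())
    if not h or not r:
        return 0.0
    overlap = len(h & r)
    p, rec = overlap / len(h), overlap / len(r)
    return 2 * p * rec / (p + rec) if p + rec else 0.0

def mbr_decode(candidates, pseudo_references, utility=unigram_f1):
    # Expected utility of each candidate under the model's samples.
    return max(
        candidates,
        key=lambda c: sum(utility(c, r) for r in pseudo_references)
                      / len(pseudo_references),
    )

samples = ["the cat sat on the mat", "a cat sat on a mat", "the dog ran"]
print(mbr_decode(samples, samples))  # candidates double as pseudo-references
```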
no code implementations • 3 Sep 2024 • Shintaro Ozaki, Kazuki Hayashi, Yusuke Sakai, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
As the performance of Large-scale Vision Language Models (LVLMs) improves, they are increasingly capable of responding in multiple languages, and there is an expectation that the demand for explanations generated by LVLMs will grow.
1 code implementation • 8 Jul 2024 • Ken Nishida, Kojiro Machi, Kazuma Onishi, Katsuhiko Hayashi, Hidetaka Kamigaito
The extreme multi-label classification (XMC) task involves learning a classifier that can predict from a large label set the most relevant subset of labels for a data instance.
Extreme Multi-Label Classification • Multi-Label Classification +1
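A minimal sketch of the XMC setup just described: a one-vs-rest linear classifier over a multi-label set, returning the top-k labels per instance. Data shapes are toy assumptions; real XMC label sets are orders of magnitude larger.

```python
# Toy extreme multi-label classification: score all labels, keep the top k.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                 # 200 instances, 50 features
Y = (rng.random((200, 30)) < 0.1).astype(int)  # 30 labels, ~10% positive

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
scores = clf.predict_proba(X[:1])              # per-label probabilities
top_k = np.argsort(scores[0])[::-1][:5]        # 5 most relevant labels
print(top_k)
```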
1 code implementation • 5 Jul 2024 • Xincan Feng, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
This paper provides theoretical interpretations of the smoothing methods for the NS loss in KGE and induces a new NS loss, Triplet Adaptive Negative Sampling (TANS), that can cover the characteristics of the conventional smoothing methods.
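For context, a minimal sketch of the plain negative-sampling (NS) loss that these smoothing methods modify; TANS itself changes how negatives are drawn and weighted, which is not reproduced here. Scores and shapes are toy assumptions.

```python
# Plain NS loss: push positive-triple scores up, sampled-negative scores down.
import torch
import torch.nn.functional as F

def ns_loss(pos_score, neg_scores):
    """pos_score: (B,); neg_scores: (B, K) scores of K sampled negatives."""
    pos_term = F.logsigmoid(pos_score)            # reward true triples
    neg_term = F.logsigmoid(-neg_scores).mean(1)  # penalize negatives
    return -(pos_term + neg_term).mean()

loss = ns_loss(torch.randn(8), torch.randn(8, 5))
print(loss)
```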
no code implementations • 29 Feb 2024 • Kazuki Hayashi, Yusuke Sakai, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
To address this issue, we propose a new task: the artwork explanation generation task, along with its evaluation dataset and metric for quantitatively assessing the understanding and utilization of knowledge about artworks.
no code implementations • 19 Feb 2024 • Kazuki Hayashi, Kazuma Onishi, Toma Suzuki, Yusuke Ide, Seiji Gobara, Shigeki Saito, Yusuke Sakai, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
We validate it using a dataset of images from 15 categories, each with five critic review texts and annotated rankings in both English and Japanese, totaling over 2, 000 data instances.
no code implementations • 15 Nov 2023 • Yusuke Sakai, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
Knowledge Graph Completion (KGC) is the task of inferring unseen relationships between entities in a KG.
1 code implementation • 17 Sep 2023 • Xincan Feng, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
Subsampling is effective in Knowledge Graph Embedding (KGE) for reducing overfitting caused by the sparsity in Knowledge Graph (KG) datasets.
no code implementations • 15 Aug 2023 • Katsuhiko Hayashi, Kazuma Onishi
Recently, in the field of recommendation systems, linear regression (autoencoder) models have been investigated as a way to learn item similarity.
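A minimal sketch of a linear autoencoder for item similarity in the spirit described above, using the closed-form EASE solution (Steck, 2019) as a representative example; the paper's own model may differ.

```python
# EASE: closed-form linear item-item model with zeroed self-similarity.
import numpy as np

def ease(X: np.ndarray, lam: float = 100.0) -> np.ndarray:
    """X: (users, items) binary interaction matrix -> item-item weights B."""
    G = X.T @ X + lam * np.eye(X.shape[1])  # regularized Gram matrix
    P = np.linalg.inv(G)
    B = P / (-np.diag(P))                   # B_ij = -P_ij / P_jj
    np.fill_diagonal(B, 0.0)                # forbid trivial self-similarity
    return B

X = (np.random.default_rng(0).random((100, 20)) < 0.2).astype(float)
scores = X @ ease(X)                        # predicted item affinities
print(scores.shape)
```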
no code implementations • 14 Jun 2023 • Katsuhiko Hayashi
Wikipedia has high-quality articles on a variety of topics and has been used in diverse research areas.
1 code implementation • 3 Jun 2023 • Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
This task consists of two parts: the first is to generate a table containing knowledge about an entity and its related image, and the second is to generate an image from an entity with a caption and a table containing related knowledge of the entity.
no code implementations • 13 Sep 2022 • Hidetaka Kamigaito, Katsuhiko Hayashi
In this article, we explain recent advances in subsampling methods for knowledge graph embedding (KGE), starting from the original method used in word2vec.
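A minimal sketch of that word2vec starting point: frequency-based subsampling discards a word with probability 1 - sqrt(t / f(w)), so frequent items are kept less often. The threshold value below is the one suggested in the word2vec paper.

```python
# word2vec-style frequency subsampling: keep probability sqrt(t / f).
import math

def keep_probability(freq: float, t: float = 1e-5) -> float:
    """freq: relative frequency of a word/entity; t: subsampling threshold."""
    return min(1.0, math.sqrt(t / freq))

for f in (1e-2, 1e-4, 1e-6):
    print(f, round(keep_probability(f), 4))  # rarer items are kept more often
```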
1 code implementation • 21 Jun 2022 • Hidetaka Kamigaito, Katsuhiko Hayashi
To solve this problem, we theoretically analyzed the NS loss to assist hyperparameter tuning and to clarify how the NS loss can be used more effectively in KGE learning.
no code implementations • 29 Sep 2021 • Hidetaka Kamigaito, Katsuhiko Hayashi
On the other hand, properties of the NS loss function that are considered important for learning, such as the relationship between the noise distribution and the number of negative samples, have not been investigated theoretically.
1 code implementation • ACL 2021 • Hidetaka Kamigaito, Katsuhiko Hayashi
In knowledge graph embedding, the theoretical relationship between the softmax cross-entropy and negative sampling loss functions has not been investigated.
Ranked #15 on Link Prediction on FB15k-237
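A minimal numeric illustration (not the paper's derivation) of the two objectives being related: the softmax cross-entropy over all candidate entities versus the NS loss over a few sampled negatives, computed for the same toy scores.

```python
# Softmax cross-entropy vs. negative-sampling loss on the same scores.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(size=100)        # scores over 100 candidate entities
pos = 0                              # index of the true entity

sce = -scores[pos] + np.log(np.exp(scores).sum())  # softmax cross-entropy

neg = rng.choice(np.arange(1, 100), size=5, replace=False)  # 5 negatives
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
ns = -np.log(sigmoid(scores[pos])) - np.log(sigmoid(-scores[neg])).mean()

print(f"SCE: {sce:.3f}  NS: {ns:.3f}")
```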
no code implementations • Findings of the Association for Computational Linguistics 2020 • Katsuhiko Hayashi, Koki Kishimoto, Masashi Shimbo
This paper presents a simple and effective discrete optimization method for training binarized knowledge graph embedding model B-CP.
no code implementations • EMNLP (NLP-COVID19) 2020 • Akiko Aizawa, Frederic Bergeron, Junjie Chen, Fei Cheng, Katsuhiko Hayashi, Kentaro Inui, Hiroyoshi Ito, Daisuke Kawahara, Masaru Kitsuregawa, Hirokazu Kiyomaru, Masaki Kobayashi, Takashi Kodama, Sadao Kurohashi, Qianying Liu, Masaki Matsubara, Yusuke Miyao, Atsuyuki Morishima, Yugo Murawaki, Kazumasa Omura, Haiyue Song, Eiichiro Sumita, Shinji Suzuki, Ribeka Tanaka, Yu Tanaka, Masashi Toyoda, Nobuhiro Ueda, Honai Ueoka, Masao Utiyama, Ying Zhong
The global pandemic of COVID-19 has made the public pay close attention to related news, covering various domains, such as sanitation, treatment, and effects on education.
no code implementations • LREC 2020 • Namgi Han, Katsuhiko Hayashi, Yusuke Miyao
Many researchers have tried to predict the accuracy of word embeddings on extrinsic evaluations from their scores on intrinsic evaluations.
no code implementations • 4 Dec 2019 • Koki Kishimoto, Katsuhiko Hayashi, Genki Akai, Masashi Shimbo
Methods based on vector embeddings of knowledge graphs have been actively pursued as a promising approach to knowledge graph completion. However, embedding models generate storage-inefficient representations, particularly when the number of entities and relations and the dimensionality of the real-valued embedding vectors are large.
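A minimal sketch of the storage argument behind binarization approaches such as B-CP: a sign-binarized vector needs 1 bit per dimension instead of 32. The sizes below are illustrative assumptions; the B-CP training procedure itself is not reproduced here.

```python
# Back-of-the-envelope storage comparison: float32 vs. binarized embeddings.
import numpy as np

d, n_entities = 256, 1_000_000
real_bytes = n_entities * d * 4            # float32 embeddings
binary_bytes = n_entities * d // 8         # 1 bit per dimension

v = np.random.default_rng(0).normal(size=d)
b = np.sign(v)                             # binarized embedding in {-1, +1}

print(f"float32: {real_bytes / 2**20:.0f} MiB, "
      f"binary: {binary_bytes / 2**20:.0f} MiB")  # a 32x reduction
```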
no code implementations • IJCNLP 2019 • Katsuhiko Hayashi, Masashi Shimbo
Although these models perform well in predicting atomic relations, they cannot naturally model composite relations (relation paths) as products of relation matrices, because the product of diagonal matrices is commutative and hence invariant to the order of relations.
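A minimal numpy check of that point: products of diagonal matrices commute, so a composite relation's representation cannot depend on the order of its component relations.

```python
# Diagonal relation matrices commute, losing the order of relations in a path.
import numpy as np

rng = np.random.default_rng(0)
R1, R2 = np.diag(rng.normal(size=4)), np.diag(rng.normal(size=4))

assert np.allclose(R1 @ R2, R2 @ R1)  # order of relations is lost
```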
2 code implementations • 8 Feb 2019 • Koki Kishimoto, Katsuhiko Hayashi, Genki Akai, Masashi Shimbo, Kazunori Komatani
This limitation is expected to become more stringent as existing knowledge graphs, which are already huge, keep steadily growing in scale.
1 code implementation • PACLIC 2018 • Tomoki Matsuno, Katsuhiko Hayashi, Takahiro Ishihara, Hitoshi Manabe, Yuji Matsumoto
Recently, the biaffine classifier has been attracting attention as a method for introducing an attention mechanism into the modeling of binary relations.
no code implementations • 25 Aug 2018 • Hitoshi Manabe, Katsuhiko Hayashi, Masashi Shimbo
Embedding-based methods for knowledge base completion (KBC) learn representations of entities and relations in a vector space, along with the scoring function to estimate the likelihood of relations between entities.
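A minimal sketch of the setup described: entity and relation vectors plus a scoring function, here the TransE score -||h + r - t|| as a representative choice (the paper studies a different model family, so this is an assumption for illustration).

```python
# Toy KBC scoring: embeddings plus a TransE-style plausibility score.
import numpy as np

rng = np.random.default_rng(0)
E = {e: rng.normal(size=16) for e in ("tokyo", "japan")}   # entity vectors
R = {"capital_of": rng.normal(size=16)}                    # relation vectors

def score(h: str, r: str, t: str) -> float:
    """Higher score = triple judged more plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

print(score("tokyo", "capital_of", "japan"))
```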
no code implementations • NAACL 2018 • Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Masaaki Nagata
To solve this problem, we propose a higher-order syntactic attention network (HiSAN) that can handle higher-order dependency features as an attention distribution on LSTM hidden states.
Ranked #3 on Sentence Compression on Google Dataset
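A minimal sketch of the base mechanism HiSAN builds on, attention over LSTM hidden states; the higher-order dependency features themselves are not reproduced here, and all shapes are toy assumptions.

```python
# Attention distribution over encoder hidden states (the base mechanism only).
import torch
import torch.nn.functional as F

hidden = torch.randn(1, 10, 64)               # (batch, seq_len, dim)
query = torch.randn(1, 64)                    # decoder state

attn_logits = (hidden @ query.unsqueeze(-1)).squeeze(-1)  # (1, 10)
attn = F.softmax(attn_logits, dim=-1)         # attention distribution
context = (attn.unsqueeze(-1) * hidden).sum(1)  # weighted summary vector
print(context.shape)                          # torch.Size([1, 64])
```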
no code implementations • NAACL 2018 • Takahiro Ishihara, Katsuhiko Hayashi, Hitoshi Manabe, Masashi Shimbo, Masaaki Nagata
Although neural tensor networks (NTNs) have been successful in many NLP tasks, they require a large number of parameters to be estimated, which often leads to overfitting and a long training time.
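A minimal parameter-count illustration of the overfitting point: the bilinear tensor of an NTN alone has d*d*k parameters per relation (following Socher et al.'s formulation); d, k, and the relation count below are toy assumptions.

```python
# NTN parameter count per relation: bilinear tensor W plus V, b, u terms.
d, k, n_relations = 100, 4, 50
tensor_params = d * d * k                  # W in R^{d x d x k}
linear_params = 2 * d * k + k + k          # V (k x 2d), bias b, output u
per_relation = tensor_params + linear_params
print(tensor_params, per_relation * n_relations)  # 40000, ~2 million total
```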
no code implementations • IJCNLP 2017 • Hidetaka Kamigaito, Katsuhiko Hayashi, Tsutomu Hirao, Hiroya Takamura, Manabu Okumura, Masaaki Nagata
The sequence-to-sequence (Seq2Seq) model has been successfully applied to machine translation (MT).
no code implementations • WS 2017 • Takaaki Tanaka, Katsuhiko Hayashi, Masaaki Nagata
We introduce the following hierarchical word structures to dependency parsing in Japanese: morphological units (a short unit word, SUW) and syntactic units (a long unit word, LUW).
no code implementations • EACL 2017 • Katsuhiko Hayashi, Masaaki Nagata
This paper presents an efficient and optimal parsing algorithm for probabilistic context-free grammars (PCFGs).
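For reference, a minimal CKY sketch for PCFG parsing (the standard dynamic program, not the more efficient algorithm the paper proposes). The grammar is in Chomsky normal form; rules and probabilities are toy assumptions.

```python
# CKY Viterbi parsing for a toy PCFG in Chomsky normal form.
import math
from collections import defaultdict

binary = {("NP", "VP"): [("S", 1.0)], ("Det", "N"): [("NP", 1.0)]}
lexical = {"the": [("Det", 1.0)], "dog": [("N", 0.6), ("NP", 0.4)],
           "barks": [("VP", 1.0)]}

def cky(words):
    n = len(words)
    chart = defaultdict(lambda: -math.inf)           # (i, j, A) -> log prob
    for i, w in enumerate(words):                    # fill width-1 spans
        for A, p in lexical.get(w, []):
            chart[i, i + 1, A] = math.log(p)
    for span in range(2, n + 1):                     # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (B, C), parents in binary.items():
                    left, right = chart[i, k, B], chart[k, j, C]
                    if left > -math.inf and right > -math.inf:
                        for A, p in parents:
                            s = math.log(p) + left + right
                            if s > chart[i, j, A]:
                                chart[i, j, A] = s
    return chart[0, n, "S"]

print(math.exp(cky("the dog barks".split())))        # P(best parse) = 0.6
```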
no code implementations • ACL 2017 • Katsuhiko Hayashi, Masashi Shimbo
We show the equivalence of two state-of-the-art link prediction/knowledge graph completion methods: Nickel et al.'s holographic embedding and Trouillon et al.'s complex embedding.
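A minimal numpy sketch of the bridge behind the equivalence: the circular correlation used by holographic embeddings can be computed in the Fourier domain, which is where the connection to complex embeddings is made (the full proof is in the paper; this only checks the Fourier identity on random vectors).

```python
# HolE's circular-correlation score computed directly in the Fourier domain.
import numpy as np

rng = np.random.default_rng(0)
s, o, r = (rng.normal(size=8) for _ in range(3))

def circular_correlation(a, b):
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

hole_score = r @ circular_correlation(s, o)      # HolE scoring function

# Same score via Parseval: (1/n) * sum_k F(r)_k F(s)_k conj(F(o))_k.
fourier = (np.fft.fft(r) * np.fft.fft(s)
           * np.conj(np.fft.fft(o))).sum().real / len(r)
print(np.isclose(hole_score, fourier))           # True
```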
no code implementations • WS 2016 • Katsuhiko Hayashi, Tsutomu Hirao, Masaaki Nagata
Ranked #5 on Discourse Parsing on RST-DT (RST-Parseval (Full) metric)
no code implementations • TACL 2013 • Katsuhiko Hayashi, Shuhei Kondo, Yuji Matsumoto
This paper proposes a discriminative forest reranking algorithm for dependency parsing that can be seen as a form of efficient stacked parsing.