no code implementations • CVPR 2023 • Yutaro Shigeto, Masashi Shimbo, Yuya Yoshikawa, Akikazu Takeuchi
Barlow Twins and VICReg are self-supervised representation learning models that use regularizers to decorrelate features.
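The decorrelation idea can be sketched in a few lines — a minimal NumPy version of a Barlow-Twins-style loss (batch-normalize each feature, push the cross-correlation diagonal toward 1, penalize off-diagonal redundancy). The weight `lam` and the normalization details are illustrative, not the published hyperparameters of either model:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Decorrelation loss over two batches of embeddings (n x d).

    z1, z2: embeddings of two augmented views of the same inputs.
    """
    # standardize each feature dimension across the batch
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    n, d = z1.shape
    c = z1.T @ z2 / n                       # d x d cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()          # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag
```

Driving the off-diagonal term to zero is what decorrelates the feature dimensions; VICReg uses a covariance penalty of the same flavor alongside separate variance and invariance terms.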
no code implementations • 1 Nov 2021 • Yushi Hirose, Masashi Shimbo, Taro Watanabe
For knowledge graph completion, two major types of prediction models exist: one based on graph embeddings, and the other based on relation path rule induction.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Katsuhiko Hayashi, Koki Kishimoto, Masashi Shimbo
This paper presents a simple and effective discrete optimization method for training the binarized knowledge graph embedding model B-CP.
no code implementations • 4 Dec 2019 • Koki Kishimoto, Katsuhiko Hayashi, Genki Akai, Masashi Shimbo
Methods based on vector embeddings of knowledge graphs have been actively pursued as a promising approach to knowledge graph completion. However, embedding models produce storage-inefficient representations, particularly when the number of entities and relations and the dimensionality of the real-valued embedding vectors are large.
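One common remedy for this storage cost is to quantize each real-valued vector to sign bits plus a per-vector scale, packing 8 dimensions per byte. The sketch below is a generic sign-binarization in this spirit, not the specific quantizer of the papers above:

```python
import numpy as np

def binarize(v):
    # per-vector scale + sign bits (generic sketch, not the exact B-CP scheme)
    scale = np.abs(v).mean()
    bits = np.packbits(v >= 0)          # 1 bit per dimension
    return scale, bits

def binary_dot(sa, ba, sb, bb, d):
    # approximate dot product from the binarized representations
    a = np.unpackbits(ba)[:d].astype(np.int32) * 2 - 1   # back to +/-1
    b = np.unpackbits(bb)[:d].astype(np.int32) * 2 - 1
    return sa * sb * np.dot(a, b)
```

A 64-dimensional float64 vector takes 512 bytes; its binarized form takes 8 bytes of bits plus one scale, roughly a 60x reduction at the cost of approximation error in the scores.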
no code implementations • IJCNLP 2019 • Katsuhiko Hayashi, Masashi Shimbo
Although these models perform well in predicting atomic relations, they cannot naturally model composite relations (relation paths) by the product of relation matrices, as the product of diagonal matrices is commutative and hence invariant to the order of relations.
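The commutativity point is easy to verify numerically: with diagonal relation matrices, composing two relations in either order yields the same matrix, whereas general (non-diagonal) matrices distinguish the order. A small check with arbitrary example values:

```python
import numpy as np

# relation matrices restricted to diagonals (DistMult-style)
r1 = np.diag([1.0, 2.0, 3.0])
r2 = np.diag([4.0, 5.0, 6.0])

# both composition orders give the same matrix, so the path
# "r1 then r2" cannot be told apart from "r2 then r1"
diagonal_commutes = np.allclose(r1 @ r2, r2 @ r1)

# general matrices do not commute, so full-matrix models keep path order
g1 = np.array([[0.0, 1.0], [0.0, 0.0]])
g2 = np.array([[0.0, 0.0], [1.0, 0.0]])
general_commutes = np.allclose(g1 @ g2, g2 @ g1)
```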
2 code implementations • 8 Feb 2019 • Koki Kishimoto, Katsuhiko Hayashi, Genki Akai, Masashi Shimbo, Kazunori Komatani
This limitation is expected to become more stringent as existing knowledge graphs, which are already huge, keep steadily growing in scale.
no code implementations • 25 Aug 2018 • Hitoshi Manabe, Katsuhiko Hayashi, Masashi Shimbo
Embedding-based methods for knowledge base completion (KBC) learn representations of entities and relations in a vector space, along with the scoring function to estimate the likelihood of relations between entities.
1 code implementation • ACL 2018 • Van-Thuy Phi, Joan Santoso, Masashi Shimbo, Yuji Matsumoto
This paper addresses the tasks of automatic seed selection for bootstrapping relation extraction, and noise reduction for distantly supervised relation extraction.
no code implementations • 11 Jun 2018 • Yutaro Shigeto, Masashi Shimbo, Yuji Matsumoto
This paper proposes an inexpensive way to learn an effective dissimilarity function to be used for $k$-nearest neighbor ($k$-NN) classification.
no code implementations • NAACL 2018 • Takahiro Ishihara, Katsuhiko Hayashi, Hitoshi Manabe, Masashi Shimbo, Masaaki Nagata
Although neural tensor networks (NTNs) have been successful in many NLP tasks, they require a large number of parameters to be estimated, which often leads to overfitting and a long training time.
1 code implementation • 18 Jun 2017 • Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, Yuji Matsumoto
Knowledge base completion (KBC) aims to predict missing information in a knowledge base. In this paper, we address the out-of-knowledge-base (OOKB) entity problem in KBC: how to answer queries concerning test entities not observed at training time.
no code implementations • 22 Feb 2017 • Ai Azuma, Masashi Shimbo, Yuji Matsumoto
back propagation) on computation graphs with addition and multiplication, and so on.
no code implementations • ACL 2017 • Katsuhiko Hayashi, Masashi Shimbo
We show the equivalence of two state-of-the-art link prediction/knowledge graph completion methods: Nickel et al.'s holographic embedding and Trouillon et al.'s complex embedding.
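The connection runs through the FFT view of circular correlation: a holographic score ⟨r, a ★ b⟩ can be rewritten, via Parseval's theorem, as a ComplEx-style real part of a Hermitian product in the Fourier domain. A small NumPy check of this identity (random real vectors, dimension chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
r, a, b = rng.standard_normal((3, n))

# HolE-style score: <r, a (circ-corr) b>, with circular correlation via FFT
corr = np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real
hole_score = r @ corr

# same score in the Fourier domain: Re(sum_k R_k A_k conj(B_k)) / n,
# i.e. a ComplEx-style triple product of complex vectors
complex_score = np.real(
    np.sum(np.fft.fft(r) * np.fft.fft(a) * np.conj(np.fft.fft(b)))
) / n
```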
no code implementations • 3 Jul 2015 • Yutaro Shigeto, Ikumi Suzuki, Kazuo Hara, Masashi Shimbo, Yuji Matsumoto
This paper discusses the effect of hubness in zero-shot learning, when ridge regression is used to find a mapping from the example space to the label space.
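For reference, the ridge map in question has a closed form, W = (XᵀX + λI)⁻¹XᵀY for examples X and label-space targets Y. A minimal sketch (shapes and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 3
X = rng.standard_normal((n, d))   # example vectors (n examples, d features)
M = rng.standard_normal((d, k))   # an unknown linear map, for illustration
Y = X @ M                         # label-space targets

lam = 1e-8
# closed-form ridge solution: argmin_W ||X W - Y||^2 + lam * ||W||^2
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```

The ridge penalty shrinks the mapped vectors toward the origin, which is tied to the hub phenomenon the paper analyzes: a few label points become nearest neighbors of disproportionately many mapped examples.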
no code implementations • 7 Dec 2012 • Ilkka Kivimäki, Masashi Shimbo, Marco Saerens
In particular, we see that the results obtained with the free energy distance are among the best in all the experiments.