Search Results for author: Ikuya Yamada

Found 18 papers, 12 papers with code

EASE: Entity-Aware Contrastive Learning of Sentence Embedding

1 code implementation • 9 May 2022 • Sosuke Nishikawa, Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka, Isao Echizen

We present EASE, a novel method for learning sentence embeddings via contrastive learning between sentences and their related entities.

Contrastive Learning • Semantic Textual Similarity • +3
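A minimal sketch of the core objective as described, assuming an InfoNCE-style loss where each sentence's related entity is the positive and the other in-batch entities are negatives (function name and temperature value are illustrative, not the paper's exact setup):

```python
import torch
import torch.nn.functional as F

def entity_contrastive_loss(sent_emb, ent_emb, temperature=0.05):
    # sent_emb: (batch, dim) sentence embeddings from an encoder
    # ent_emb:  (batch, dim) embeddings of each sentence's related entity;
    #           the other entities in the batch act as in-batch negatives
    sent_emb = F.normalize(sent_emb, dim=-1)
    ent_emb = F.normalize(ent_emb, dim=-1)
    logits = sent_emb @ ent_emb.t() / temperature  # pairwise cosine similarities
    labels = torch.arange(logits.size(0), device=logits.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for encoder outputs.
loss = entity_contrastive_loss(torch.randn(8, 768), torch.randn(8, 768))
```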

A Multilingual Bag-of-Entities Model for Zero-Shot Cross-Lingual Text Classification

no code implementations • 15 Oct 2021 • Sosuke Nishikawa, Ikuya Yamada, Yoshimasa Tsuruoka, Isao Echizen

We present a multilingual bag-of-entities model that effectively boosts the performance of zero-shot cross-lingual text classification by extending a multilingual pre-trained language model (e.g., M-BERT).

Classification • Entity Typing • +3
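A rough sketch of the bag-of-entities idea: combine the pooled output of a multilingual encoder with an averaged embedding of language-independent entities detected in the text. The class, the mean pooling, and the concatenation are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class BagOfEntitiesClassifier(nn.Module):
    # Hypothetical sketch: the entity vocabulary, the detection step, and the
    # way entity and text features are combined are placeholders.
    def __init__(self, hidden_size, num_entities, num_labels):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, hidden_size)
        self.classifier = nn.Linear(hidden_size * 2, num_labels)

    def forward(self, text_repr, entity_ids):
        # text_repr: (batch, hidden) pooled output of e.g. M-BERT
        # entity_ids: (batch, n) IDs of entities detected in the document
        ent_vec = self.entity_emb(entity_ids).mean(dim=1)  # bag-of-entities vector
        return self.classifier(torch.cat([text_repr, ent_vec], dim=-1))

model = BagOfEntitiesClassifier(hidden_size=768, num_entities=50_000, num_labels=4)
logits = model(torch.randn(2, 768), torch.randint(0, 50_000, (2, 10)))
```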

mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models

1 code implementation • ACL 2022 • Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka

We train a multilingual language model with entity representations on 24 languages and show that it consistently outperforms word-based pretrained models in various cross-lingual transfer tasks.

 Ranked #1 on Cross-Lingual Question Answering on XQuAD (Average F1 metric, using extra training data)

Cross-Lingual Question Answering • Cross-Lingual Transfer • +2
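For reference, mLUKE checkpoints can be loaded through the Hugging Face `transformers` LUKE classes; a minimal usage sketch, assuming the publicly released `studio-ousia/mluke-base` checkpoint:

```python
from transformers import MLukeTokenizer, LukeModel

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
model = LukeModel.from_pretrained("studio-ousia/mluke-base")

text = "Beyoncé lives in Los Angeles."
entity_spans = [(0, 7)]  # character span of the entity mention "Beyoncé"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)
print(outputs.entity_last_hidden_state.shape)  # contextualized entity representation
```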

Efficient Passage Retrieval with Hashing for Open-domain Question Answering

1 code implementation • ACL 2021 • Ikuya Yamada, Akari Asai, Hannaneh Hajishirzi

Most state-of-the-art open-domain question answering systems use a neural retrieval model to encode passages into continuous vectors and extract them from a knowledge source.

Open-Domain Question Answering • Passage Retrieval
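This work cuts the memory cost of such retrievers by learning compact binary passage codes. A toy sketch of the retrieval side, with untrained sign thresholding standing in for the learned hash layer (purely illustrative, not the paper's implementation):

```python
import numpy as np

def binarize(x):
    # Stand-in for a learned hash layer: sign-threshold dense vectors
    # into binary codes (encoder and hash training are omitted).
    return (x > 0).astype(np.uint8)

def hamming_rank(query_code, passage_codes, top_k=5):
    # Rank passages by Hamming distance between binary codes.
    dists = (query_code[None, :] != passage_codes).sum(axis=1)
    return np.argsort(dists)[:top_k]

# Toy usage: 1,000 hypothetical 256-bit passage codes.
passage_codes = binarize(np.random.randn(1000, 256))
query_code = binarize(np.random.randn(256))
print(hamming_rank(query_code, passage_codes))
```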

Neural Attentive Bag-of-Entities Model for Text Classification

3 code implementations • CoNLL 2019 • Ikuya Yamada, Hiroyuki Shindo

This study proposes the Neural Attentive Bag-of-Entities model, a neural network that performs text classification using entities in a knowledge base.

Classification • General Classification • +2
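A sketch of the attentive bag-of-entities idea: attend over candidate KB entities detected in a document and classify from their weighted average. The attention here is a plain learned scorer; the paper's entity detection and attention features are not reproduced:

```python
import torch
import torch.nn as nn

class NeuralAttentiveBagOfEntities(nn.Module):
    def __init__(self, num_entities, dim, num_labels):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.attention = nn.Linear(dim, 1)  # scores each detected entity
        self.classifier = nn.Linear(dim, num_labels)

    def forward(self, entity_ids):
        vecs = self.entity_emb(entity_ids)              # (batch, n, dim)
        weights = torch.softmax(self.attention(vecs), dim=1)
        doc_vec = (weights * vecs).sum(dim=1)           # attention-weighted bag
        return self.classifier(doc_vec)

model = NeuralAttentiveBagOfEntities(num_entities=50_000, dim=300, num_labels=5)
logits = model(torch.randint(0, 50_000, (2, 12)))
```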

Wikipedia2Vec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from Wikipedia

no code implementations • EMNLP 2020 • Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, Yuji Matsumoto

The embeddings of entities in a large knowledge base (e.g., Wikipedia) are highly beneficial for solving various natural language tasks that involve real-world knowledge.
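Usage sketch for the released toolkit (`pip install wikipedia2vec`), assuming one of the pretrained model files distributed on the project page:

```python
from wikipedia2vec import Wikipedia2Vec

# The file name assumes a pretrained dump downloaded from the project page.
wiki2vec = Wikipedia2Vec.load("enwiki_20180420_100d.pkl")

word_vec = wiki2vec.get_word_vector("tokyo")      # word embedding
entity_vec = wiki2vec.get_entity_vector("Tokyo")  # entity embedding

# Words and entities share one vector space, so cross-type similarity works.
print(wiki2vec.most_similar(wiki2vec.get_entity("Tokyo"), 5))
```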

Representation Learning of Entities and Documents from Knowledge Base Descriptions

2 code implementations • COLING 2018 • Ikuya Yamada, Hiroyuki Shindo, Yoshiyasu Takefuji

In this paper, we describe TextEnt, a neural network model that learns distributed representations of entities and documents directly from a knowledge base (KB).

Entity Typing • General Classification • +2

Studio Ousia's Quiz Bowl Question Answering System

no code implementations • 23 Mar 2018 • Ikuya Yamada, Ryuji Tamaki, Hiroyuki Shindo, Yoshiyasu Takefuji

In this chapter, we describe our question answering system, which was the winning system at the Human-Computer Question Answering (HCQA) Competition at the Thirty-first Annual Conference on Neural Information Processing Systems (NIPS).

Information Retrieval • Question Answering

Segment-Level Neural Conditional Random Fields for Named Entity Recognition

no code implementations • IJCNLP 2017 • Motoki Sato, Hiroyuki Shindo, Ikuya Yamada, Yuji Matsumoto

We present Segment-level Neural CRF, which combines neural networks with a linear chain CRF for segment-level sequence modeling tasks such as named entity recognition (NER) and syntactic chunking.

Chunking • Morphological Tagging • +3
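Segment-level modeling scores whole spans rather than single tokens. A minimal semi-Markov Viterbi sketch over precomputed segment scores; label transitions, the neural scorer, and the maximum segment length are omitted, so this only illustrates the decoding idea:

```python
import numpy as np

def segment_viterbi(seg_score):
    # seg_score[i][k]: score of treating tokens i..k (inclusive) as one
    # segment under its best label (neural scoring omitted).
    n = len(seg_score)
    best = np.full(n + 1, -np.inf)
    best[0] = 0.0
    back = [0] * (n + 1)
    for j in range(1, n + 1):      # j = number of tokens covered so far
        for i in range(j):         # candidate segment spans tokens i..j-1
            s = best[i] + seg_score[i][j - 1]
            if s > best[j]:
                best[j], back[j] = s, i
    cuts, j = [], n                # recover the best segment boundaries
    while j > 0:
        cuts.append((back[j], j))
        j = back[j]
    return best[n], list(reversed(cuts))

rng = np.random.default_rng(0)
print(segment_viterbi(rng.normal(size=(4, 4)).tolist()))
```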

Ensemble of Neural Classifiers for Scoring Knowledge Base Triples

1 code implementation • 15 Mar 2017 • Ikuya Yamada, Motoki Sato, Hiroyuki Shindo

This paper describes our approach for the triple scoring task at the WSDM Cup 2017.

Entity Retrieval

Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation

1 code implementation • CoNLL 2016 • Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji

The method jointly maps words and entities into the same continuous vector space: the KB graph model learns the relatedness of entities using the link structure of the KB, whereas the anchor context model aims to align vectors such that similar words and entities occur close to one another in the vector space by leveraging KB anchors and their context words.

Entity Disambiguation • Entity Linking
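A toy sketch of the anchor context submodel: treat each KB anchor as an (entity → context words) skip-gram-style pair so that entity vectors land in the same space as word vectors. A full softmax stands in for negative sampling, and the KB graph model is omitted; names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorContextModel(nn.Module):
    def __init__(self, num_words, num_entities, dim):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.word_out = nn.Linear(dim, num_words)  # predicts context words

    def forward(self, entity_ids, context_word_ids):
        # entity_ids: (batch,) entity behind each anchor link
        # context_word_ids: (batch, c) words surrounding the anchor
        logits = self.word_out(self.entity_emb(entity_ids))  # (batch, num_words)
        c = context_word_ids.size(1)
        return F.cross_entropy(logits.repeat_interleave(c, dim=0),
                               context_word_ids.reshape(-1))

model = AnchorContextModel(num_words=30_000, num_entities=10_000, dim=100)
loss = model(torch.randint(0, 10_000, (4,)), torch.randint(0, 30_000, (4, 5)))
```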
