Search Results for author: Ikuya Yamada

Found 21 papers, 14 papers with code

Joint Learning of the Embedding of Words and Entities for Named Entity Disambiguation

1 code implementation · CoNLL 2016 · Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji

The KB graph model learns the relatedness of entities using the link structure of the KB, whereas the anchor context model aims to align vectors such that similar words and entities occur close to one another in the vector space by leveraging KB anchors and their context words.

Entity Disambiguation · Entity Linking
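Because words and entities are embedded into a single vector space, candidate entities can be scored directly against a mention's context words. A minimal sketch of that scoring idea, with toy numpy vectors standing in for the pretrained embeddings (all names and dimensions are hypothetical):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def score_candidates(context_word_vecs, candidate_entity_vecs):
    """Score candidate entities for a mention by cosine similarity
    between each entity vector and the mention's averaged context
    word vectors; valid only because both live in one vector space."""
    context = normalize(np.mean(context_word_vecs, axis=0))
    return {name: float(normalize(vec) @ context)
            for name, vec in candidate_entity_vecs.items()}

# Toy vectors standing in for the pretrained embeddings.
rng = np.random.default_rng(0)
context_words = [rng.normal(size=300) for _ in range(5)]
candidates = {"Apple_Inc.": rng.normal(size=300),
              "Apple_(fruit)": rng.normal(size=300)}
print(score_candidates(context_words, candidates))
```

The full system combines context-similarity features like this with the entity relatedness learned by the KB graph model; the sketch only shows why the shared space makes the comparison direct.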

Segment-Level Neural Conditional Random Fields for Named Entity Recognition

no code implementations · IJCNLP 2017 · Motoki Sato, Hiroyuki Shindo, Ikuya Yamada, Yuji Matsumoto

We present Segment-level Neural CRF, which combines neural networks with a linear chain CRF for segment-level sequence modeling tasks such as named entity recognition (NER) and syntactic chunking.

Chunking · Morphological Tagging +3
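In a segment-level (semi-Markov) CRF, scores attach to labeled spans rather than to single tokens, so normalization runs over all segmentations. A minimal numpy sketch of the forward algorithm under that formulation (segment scores would come from the neural encoder in the actual model; all shapes here are toy):

```python
import numpy as np
from scipy.special import logsumexp

def semi_crf_log_partition(seg_score, trans, max_seg_len):
    """Forward algorithm of a segment-level (semi-Markov) CRF.

    seg_score[i, j, y]: score of tokens i..j (inclusive) forming one
                        segment with label y.
    trans[y_prev, y]:   transition score between adjacent segments.
    Returns log Z, the log partition over all segmentations.
    """
    n, _, num_labels = seg_score.shape
    # alpha[j, y]: log-sum over segmentations of tokens 0..j-1 whose
    # last segment ends at position j-1 with label y.
    alpha = np.full((n + 1, num_labels), -np.inf)
    for j in range(1, n + 1):
        for i in range(max(0, j - max_seg_len), j):
            if i == 0:  # first segment: no preceding label
                inner = seg_score[0, j - 1]
            else:
                inner = logsumexp(alpha[i][:, None] + trans, axis=0) \
                        + seg_score[i, j - 1]
            alpha[j] = np.logaddexp(alpha[j], inner)
    return float(logsumexp(alpha[n]))

# Toy run: 6 tokens, 3 labels, segments up to 4 tokens long.
rng = np.random.default_rng(0)
scores = rng.normal(size=(6, 6, 3))
print(semi_crf_log_partition(scores, rng.normal(size=(3, 3)), max_seg_len=4))
```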

Studio Ousia's Quiz Bowl Question Answering System

no code implementations · 23 Mar 2018 · Ikuya Yamada, Ryuji Tamaki, Hiroyuki Shindo, Yoshiyasu Takefuji

In this chapter, we describe our question answering system, which won the Human-Computer Question Answering (HCQA) Competition held at the Thirty-first Annual Conference on Neural Information Processing Systems (NIPS).

BIG-bench Machine Learning · Information Retrieval +2

Representation Learning of Entities and Documents from Knowledge Base Descriptions

2 code implementations · COLING 2018 · Ikuya Yamada, Hiroyuki Shindo, Yoshiyasu Takefuji

In this paper, we describe TextEnt, a neural network model that learns distributed representations of entities and documents directly from a knowledge base (KB).

Entity Typing · General Classification +3
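As a hedged illustration of the objective, one can encode a KB description and train its vector to identify the entity it describes, which places documents and entities in one space. The sketch below is a deliberate reduction (a bag-of-words encoder and toy sizes), not the paper's architecture:

```python
import torch
import torch.nn as nn

class TextEntSketch(nn.Module):
    """Sketch: encode a KB description (a bag of words, for brevity)
    and score it against every entity, so that descriptions and the
    entities they describe end up close in one vector space."""

    def __init__(self, vocab_size, num_entities, dim):
        super().__init__()
        self.word_emb = nn.EmbeddingBag(vocab_size, dim)   # document encoder
        self.entity_emb = nn.Embedding(num_entities, dim)

    def forward(self, word_ids):                   # (batch, doc_len)
        doc_vec = self.word_emb(word_ids)          # (batch, dim)
        return doc_vec @ self.entity_emb.weight.T  # scores over entities

model = TextEntSketch(vocab_size=5000, num_entities=100, dim=64)
word_ids = torch.randint(0, 5000, (2, 30))   # 2 toy descriptions
targets = torch.tensor([3, 42])              # which entity each describes
loss = nn.functional.cross_entropy(model(word_ids), targets)
print(loss.item())
```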

Wikipedia2Vec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from Wikipedia

no code implementations · EMNLP 2020 · Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, Yuji Matsumoto

The embeddings of entities in a large knowledge base (e.g., Wikipedia) are highly beneficial for solving various natural language tasks that involve real-world knowledge.

World Knowledge
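The toolkit is open source and installable with pip; a short usage sketch based on its documented API follows (the model file is one of the pretrained files distributed on the project page):

```python
# pip install wikipedia2vec
from wikipedia2vec import Wikipedia2Vec

# Load one of the pretrained model files from the project page
# (the file name below is a stand-in).
wiki2vec = Wikipedia2Vec.load("enwiki_20180420_300d.pkl")

# Words and entities share a single vector space.
word_vec = wiki2vec.get_word_vector("tokyo")
entity_vec = wiki2vec.get_entity_vector("Tokyo")

# So the nearest neighbors of an entity can be words or entities.
for item, similarity in wiki2vec.most_similar(wiki2vec.get_entity("Tokyo"), 5):
    print(item, similarity)
```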

Neural Attentive Bag-of-Entities Model for Text Classification

3 code implementations · CoNLL 2019 · Ikuya Yamada, Hiroyuki Shindo

This study proposes the Neural Attentive Bag-of-Entities model, a neural network that performs text classification using entities in a knowledge base.

General Classification · Question Answering +1
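A hedged PyTorch sketch of the bag-of-entities idea: embed the entities detected in a document, attend over them, and classify from the attention-weighted average. The paper's attention mechanism uses richer features; here it is a plain learned scorer, and all sizes are toy:

```python
import torch
import torch.nn as nn

class NeuralBagOfEntities(nn.Module):
    """Sketch: classify a document from an attention-weighted average
    of the embeddings of entities detected in its text."""

    def __init__(self, num_entities, dim, num_classes):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.attn = nn.Linear(dim, 1)           # scores each entity
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, entity_ids):              # (batch, num_detected)
        vecs = self.entity_emb(entity_ids)      # (batch, n, dim)
        weights = torch.softmax(self.attn(vecs).squeeze(-1), dim=-1)
        doc_vec = (weights.unsqueeze(-1) * vecs).sum(dim=1)
        return self.classifier(doc_vec)

model = NeuralBagOfEntities(num_entities=1000, dim=64, num_classes=4)
logits = model(torch.randint(0, 1000, (2, 7)))  # 2 docs, 7 entities each
print(logits.shape)  # torch.Size([2, 4])
```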

Efficient Passage Retrieval with Hashing for Open-domain Question Answering

1 code implementation · ACL 2021 · Ikuya Yamada, Akari Asai, Hannaneh Hajishirzi

Most state-of-the-art open-domain question answering systems use a neural retrieval model to encode passages into continuous vectors and extract them from a knowledge source.

Natural Questions · Open-Domain Question Answering +3
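The paper's remedy (the Binary Passage Retriever) is to hash those continuous passage vectors into compact binary codes and shortlist candidates by Hamming distance before rescoring. A minimal numpy sketch of that idea, with unpacked ±1 codes for clarity:

```python
import numpy as np

def binarize(x):
    """Hash continuous embeddings into +-1 codes via the sign function
    (a real index packs these into bits; kept unpacked for clarity)."""
    return np.where(x >= 0, 1, -1)

def hamming_topk(query_code, passage_codes, k):
    """Candidate retrieval by Hamming distance. With +-1 codes,
    Hamming distance = (dim - dot product) / 2, so ranking by the
    dot product is equivalent."""
    dots = passage_codes @ query_code
    return np.argsort(-dots)[:k]

rng = np.random.default_rng(0)
passage_embs = rng.normal(size=(10000, 768))  # dense passage vectors
passage_codes = binarize(passage_embs)
query_code = binarize(rng.normal(size=768))

# Stage 1: cheap Hamming search over binary codes; stage 2 (omitted)
# rescores the shortlist with the continuous query vector.
print(hamming_topk(query_code, passage_codes, k=5))
```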

mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models

2 code implementations · ACL 2022 · Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka

We train a multilingual language model with entity representations on 24 languages and show that it consistently outperforms word-based pretrained models in various cross-lingual transfer tasks.

 Ranked #1 on Cross-Lingual Question Answering on XQuAD (Average F1 metric, using extra training data)

Cross-Lingual Question Answering · Cross-Lingual Transfer +1
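mLUKE checkpoints are published on the Hugging Face Hub and can be loaded through the transformers library, which exposes separate word and entity representations. A short usage sketch (illustrative, following the documented LUKE API; the checkpoint name is the authors' studio-ousia/mluke-base):

```python
from transformers import LukeModel, MLukeTokenizer

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
model = LukeModel.from_pretrained("studio-ousia/mluke-base")

text = "Tokyo is the capital of Japan."
entity_spans = [(0, 5)]  # character span of the mention "Tokyo"
inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
outputs = model(**inputs)

print(outputs.last_hidden_state.shape)         # word-token representations
print(outputs.entity_last_hidden_state.shape)  # entity representations
```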

A Multilingual Bag-of-Entities Model for Zero-Shot Cross-Lingual Text Classification

no code implementations · 15 Oct 2021 · Sosuke Nishikawa, Ikuya Yamada, Yoshimasa Tsuruoka, Isao Echizen

We present a multilingual bag-of-entities model that effectively boosts the performance of zero-shot cross-lingual text classification by extending a multilingual pre-trained language model (e.g., M-BERT).

Entity Typing · Language Modelling +3

EASE: Entity-Aware Contrastive Learning of Sentence Embedding

1 code implementation · NAACL 2022 · Sosuke Nishikawa, Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka, Isao Echizen

We present EASE, a novel method for learning sentence embeddings via contrastive learning between sentences and their related entities.

Clustering · Contrastive Learning +6
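A hedged PyTorch sketch of the entity-contrastive term: each sentence embedding is pulled toward the embedding of its related entity, with the other entities in the batch serving as negatives (an InfoNCE-style loss; the encoders and the method's other components are omitted):

```python
import torch
import torch.nn.functional as F

def entity_contrastive_loss(sent_emb, ent_emb, temperature=0.05):
    """In-batch contrastive loss between sentence embeddings and the
    embeddings of their related entities: the i-th entity is the
    positive for the i-th sentence, all others are negatives."""
    sent = F.normalize(sent_emb, dim=-1)
    ent = F.normalize(ent_emb, dim=-1)
    logits = sent @ ent.T / temperature   # (batch, batch) similarities
    labels = torch.arange(sent.size(0))   # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy batch: 8 sentence/entity pairs with 256-dim embeddings.
torch.manual_seed(0)
print(entity_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256)).item())
```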

MIA 2022 Shared Task: Evaluating Cross-lingual Open-Retrieval Question Answering for 16 Diverse Languages

no code implementations · NAACL (MIA) 2022 · Akari Asai, Shayne Longpre, Jungo Kasai, Chia-Hsuan Lee, Rui Zhang, Junjie Hu, Ikuya Yamada, Jonathan H. Clark, Eunsol Choi

We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, evaluating cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages.

Question Answering · Retrieval

LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation

1 code implementation · 18 Feb 2024 · Ikuya Yamada, Ryokan Ri

In this study, we introduce LEIA, a language adaptation tuning method that utilizes Wikipedia entity names aligned across languages.

Cross-Lingual Transfer · Data Augmentation +3
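A minimal sketch of the augmentation idea: attach the aligned English name of an entity next to its mention in target-language text, using Wikipedia's inter-language links. The link table and marker tokens below are illustrative, not the paper's exact ones:

```python
# Hypothetical cross-language link table (Japanese title -> English
# title), as could be derived from Wikipedia inter-language links.
INTERLANGUAGE_LINKS = {"東京": "Tokyo", "日本": "Japan"}

def augment_with_english_names(text, mentions):
    """Insert the aligned English entity name after each target-language
    mention. `mentions` maps a surface string to its entity title; the
    <ent> markers are illustrative stand-ins for special tokens."""
    for surface, title in mentions.items():
        english = INTERLANGUAGE_LINKS.get(title)
        if english:
            text = text.replace(surface, f"{surface}<ent>{english}</ent>", 1)
    return text

print(augment_with_english_names(
    "東京は日本の首都です。",
    {"東京": "東京", "日本": "日本"},
))
# -> 東京<ent>Tokyo</ent>は日本<ent>Japan</ent>の首都です。
```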
