Search Results for author: Akari Asai

Found 17 papers, 10 papers with code

MIA 2022 Shared Task: Evaluating Cross-lingual Open-Retrieval Question Answering for 16 Diverse Languages

no code implementations • 2 Jul 2022 • Akari Asai, Shayne Longpre, Jungo Kasai, Chia-Hsuan Lee, Rui Zhang, Junjie Hu, Ikuya Yamada, Jonathan H. Clark, Eunsol Choi

We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, evaluating cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages.

Question Answering

Attentional Mixtures of Soft Prompt Tuning for Parameter-efficient Multi-task Knowledge Sharing

1 code implementation • 24 May 2022 • Akari Asai, Mohammadreza Salehi, Matthew E. Peters, Hannaneh Hajishirzi

This work introduces ATTEMPT (Attentional Mixture of Prompt Tuning), a new modular, multi-task, and parameter-efficient language model (LM) tuning approach that combines knowledge transferred across different tasks via a mixture of soft prompts while keeping the original LM unchanged.
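
A minimal sketch of that prompt-mixture idea, assuming a frozen LM that consumes input embeddings; class and parameter names are illustrative, not the authors' released code:

```python
# Attention over pretrained source-task soft prompts produces an
# instance-specific prompt that is prepended to the frozen LM's inputs.
import torch
import torch.nn as nn

class PromptMixture(nn.Module):
    def __init__(self, source_prompts, prompt_len=100, dim=768):
        super().__init__()
        # source_prompts: (num_tasks, prompt_len, dim) soft prompts
        # pretrained on source tasks, kept frozen here.
        self.register_buffer("source_prompts", source_prompts)
        # Target-task prompt: the only newly trained prompt parameters.
        self.target_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        # Small attention module scoring each source prompt per instance.
        self.query_proj = nn.Linear(dim, dim)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, dim) embeddings from the frozen LM.
        query = self.query_proj(input_embeds.mean(dim=1))     # (batch, dim)
        keys = self.source_prompts.mean(dim=1)                 # (tasks, dim)
        attn = torch.softmax(query @ keys.T, dim=-1)           # (batch, tasks)
        mixed = torch.einsum("bt,tld->bld", attn, self.source_prompts)
        prompt = mixed + self.target_prompt                    # (batch, len, dim)
        # Prepend the instance-specific prompt; the LM itself stays unchanged.
        return torch.cat([prompt, input_embeds], dim=1)
```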

Language Modelling Multi-Task Learning

Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks

1 code implementation • 16 Dec 2021 • Akari Asai, Matt Gardner, Hannaneh Hajishirzi

We introduce a multi-task learning framework to jointly generate the final output and predict the evidentiality of each passage, leveraging a new task-agnostic method to obtain silver evidentiality labels for supervision.
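
A hedged sketch of what such a joint objective can look like; tensor shapes and the weighting term `alpha` are assumptions for illustration, not taken from the paper:

```python
# Joint training signal: answer-generation loss plus an auxiliary
# per-passage evidentiality classification loss on silver labels.
import torch.nn.functional as F

def joint_loss(gen_logits, answer_ids, evid_logits, silver_evid_labels, alpha=1.0):
    """gen_logits: (batch, tgt_len, vocab) decoder logits for the final output.
    answer_ids: (batch, tgt_len) gold output token ids.
    evid_logits: (batch, num_passages, 2) per-passage evidentiality scores.
    silver_evid_labels: (batch, num_passages) silver 0/1 labels obtained with
    a task-agnostic labelling method, as the abstract describes."""
    gen = F.cross_entropy(gen_logits.flatten(0, 1), answer_ids.flatten())
    evid = F.cross_entropy(evid_logits.flatten(0, 1), silver_evid_labels.flatten())
    return gen + alpha * evid
```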

Fact Verification Multi-Task Learning +1

One Question Answering Model for Many Languages with Cross-lingual Dense Passage Retrieval

1 code implementation • NeurIPS 2021 • Akari Asai, Xinyan Yu, Jungo Kasai, Hannaneh Hajishirzi

We present Cross-lingual Open-Retrieval Answer Generation (CORA), the first unified many-to-many question answering (QA) model that can answer questions across many languages, even for ones without language-specific annotated data or knowledge sources.

Answer Generation Passage Retrieval +2

Efficient Passage Retrieval with Hashing for Open-domain Question Answering

1 code implementation • ACL 2021 • Ikuya Yamada, Akari Asai, Hannaneh Hajishirzi

Most state-of-the-art open-domain question answering systems use a neural retrieval model to encode passages into continuous vectors and extract them from a knowledge source.
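
The paper's contribution (per its title) is hashing those continuous passage vectors into compact binary codes; the snippet below is a generic NumPy sketch of that idea, not the learned hashing objective from the paper:

```python
# Binarize dense passage/question vectors and retrieve candidates by
# Hamming distance over the packed binary codes (memory-efficient index).
import numpy as np

def binarize(embeddings):
    # Sign of each dimension -> bit, packed into uint8 to save memory.
    return np.packbits((embeddings > 0).astype(np.uint8), axis=-1)

def hamming_search(query_code, passage_codes, top_k=100):
    # Candidate generation: count differing bits between query and passages.
    xor = np.bitwise_xor(passage_codes, query_code)
    dists = np.unpackbits(xor, axis=-1).sum(axis=-1)
    return np.argsort(dists)[:top_k]
```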

Open-Domain Question Answering Passage Retrieval

Challenges in Information-Seeking QA: Unanswerable Questions and Paragraph Retrieval

no code implementations • ACL 2021 • Akari Asai, Eunsol Choi

However, datasets containing information-seeking queries, in which evidence documents are provided only after the queries have been written independently of them, remain challenging.

Language Modelling Pretrained Language Models +2

Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT

no code implementations • 27 Feb 2020 • Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, Caiming Xiong

There is an increasing amount of literature that claims the brittleness of deep neural networks in dealing with adversarial examples that are created maliciously.
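
As a toy illustration of the kind of character-level misspelling perturbation this line of work studies (not the paper's generation algorithm), one can swap two adjacent characters in a randomly chosen word:

```python
import random

def misspell(sentence, seed=0):
    # Swap two adjacent characters inside one randomly chosen longer word.
    rng = random.Random(seed)
    words = sentence.split()
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    if not candidates:
        return sentence
    idx = rng.choice(candidates)
    w = words[idx]
    i = rng.randrange(len(w) - 1)
    words[idx] = w[:i] + w[i + 1] + w[i] + w[i + 2:]
    return " ".join(words)

print(misspell("the movie was surprisingly entertaining"))
```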

Question Answering Sentiment Analysis

Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering

2 code implementations • ICLR 2020 • Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, Caiming Xiong

Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question.
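
A schematic of sequential path retrieval over a hyperlink graph, with `neighbours` and `score` as hypothetical injected callables standing in for the Wikipedia link structure and the learned retriever:

```python
# Beam search over reasoning paths: at each hop, extend a path with the
# hyperlinked neighbours of its last passage, scored given the question.
def retrieve_paths(question, seed_passages, neighbours, score, hops=2, beam=5):
    """neighbours(passage) -> passages hyperlinked from it.
    score(question, path, passage) -> relevance of extending the path."""
    beams = [([p], score(question, [], p)) for p in seed_passages]
    beams = sorted(beams, key=lambda x: x[1], reverse=True)[:beam]
    for _ in range(hops - 1):
        expanded = []
        for path, s in beams:
            for nxt in neighbours(path[-1]):
                expanded.append((path + [nxt], s + score(question, path, nxt)))
        beams = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam]
    return beams  # top-scoring evidence paths, to be passed to a reader
```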

Question Answering

Wikipedia2Vec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from Wikipedia

no code implementations • EMNLP 2020 • Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, Yuji Matsumoto

The embeddings of entities in a large knowledge base (e.g., Wikipedia) are highly beneficial for solving various natural language tasks that involve real world knowledge.
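
A usage sketch based on the toolkit's documented Python API; the pretrained model filename is a placeholder, so check the project page for current files and method names:

```python
from wikipedia2vec import Wikipedia2Vec

# Load a pretrained model file distributed by the project.
wiki2vec = Wikipedia2Vec.load("enwiki_20180420_300d.pkl")

word_vec = wiki2vec.get_word_vector("question")       # word embedding
entity_vec = wiki2vec.get_entity_vector("Wikipedia")  # entity embedding

# Words and entities share one vector space, so they can be compared directly.
print(wiki2vec.most_similar(wiki2vec.get_entity("Wikipedia"), 5))
```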

Multilingual Extractive Reading Comprehension by Runtime Machine Translation

1 code implementation • 10 Sep 2018 • Akari Asai, Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka

Given a target language without RC training data and a pivot language with RC training data (e.g., English), our method leverages existing RC resources in the pivot language by combining a competitive RC model in the pivot language with an attentive Neural Machine Translation (NMT) model.
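
A hypothetical outline of that runtime-translation pipeline, with all helper callables invented for illustration:

```python
# Translate the target-language inputs into the pivot language, run the
# pivot-language RC model, then project the extracted answer span back to
# the original text using the NMT model's attention weights.
def cross_lingual_rc(question, paragraph, translate, rc_model, align_back):
    """translate(text) -> (pivot_text, attention) from an attentive NMT model.
    rc_model(q, p) -> (start, end) answer span in the pivot-language paragraph.
    align_back(span, attention, original) -> answer string in the target language."""
    q_piv, _ = translate(question)
    p_piv, attn = translate(paragraph)
    span = rc_model(q_piv, p_piv)
    return align_back(span, attn, paragraph)
```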

Machine Translation Reading Comprehension +1

HappyDB: A Corpus of 100,000 Crowdsourced Happy Moments

2 code implementations • LREC 2018 • Akari Asai, Sara Evensen, Behzad Golshan, Alon Halevy, Vivian Li, Andrei Lopatenko, Daniela Stepanov, Yoshihiko Suhara, Wang-Chiew Tan, Yinzhan Xu

The science of happiness is an area of positive psychology concerned with understanding what behaviors make people happy in a sustainable fashion.

Art Analysis
