Search Results for author: Jun Araki

Found 23 papers, 10 papers with code

Learning to Filter Context for Retrieval-Augmented Generation

1 code implementation • 14 Nov 2023 • Zhiruo Wang, Jun Araki, Zhengbao Jiang, Md Rizwan Parvez, Graham Neubig

To alleviate these problems, we propose FILCO, a method that improves the quality of the context provided to the generator by (1) identifying useful context based on lexical and information-theoretic approaches, and (2) training context filtering models that can filter retrieved contexts at test time.

Extractive Question-Answering • Fact Verification • +2
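
As a rough illustration of the lexical filtering idea described in the FILCO abstract above, the sketch below scores each retrieved passage by unigram overlap with the query and keeps only the high-overlap ones. The overlap metric, the threshold, and the fallback behaviour are illustrative assumptions; FILCO additionally trains dedicated context-filtering models, which this sketch does not reproduce.

```python
# Minimal sketch of lexical context filtering, assuming unigram F1 overlap
# with the query as the usefulness signal (an assumption for illustration).

def unigram_f1(a: str, b: str) -> float:
    """Unigram F1 overlap between two whitespace-tokenized strings."""
    ta, tb = a.lower().split(), b.lower().split()
    common = sum(min(ta.count(w), tb.count(w)) for w in set(ta))
    if common == 0:
        return 0.0
    precision, recall = common / len(ta), common / len(tb)
    return 2 * precision * recall / (precision + recall)

def filter_context(query: str, passages: list[str], threshold: float = 0.2) -> list[str]:
    """Keep passages whose lexical overlap with the query exceeds a threshold."""
    scored = [(unigram_f1(p, query), p) for p in passages]
    kept = [p for score, p in scored if score >= threshold]
    # Fall back to the single best passage if everything was filtered out.
    return kept or [max(scored)[1]]

if __name__ == "__main__":
    passages = [
        "FILCO filters retrieved contexts before generation.",
        "The weather in Pittsburgh is mild in autumn.",
    ]
    print(filter_context("How does FILCO filter retrieved context?", passages))
```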

Knowledge-grounded Natural Language Recommendation Explanation

no code implementations • 30 Aug 2023 • Anthony Colas, Jun Araki, Zhengyu Zhou, Bingqing Wang, Zhe Feng

Explanations accompanying a recommendation can help users understand the decision made by the recommendation system, which in turn increases the user's confidence and trust in the system.

Collaborative Filtering • Explainable Recommendation • +1

SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains

1 code implementation • 14 Feb 2023 • Koustava Goswami, Lukas Lange, Jun Araki, Heike Adel

Prompting pre-trained language models leads to promising results across natural language processing tasks but is less effective when applied in low-resource domains, due to the domain gap between the pre-training data and the downstream task.

Language Modelling • text-classification • +1
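
The SwitchPrompt entry above concerns gated soft prompts for low-resource classification. The sketch below shows only the generic soft-prompt (prompt-tuning) building block: a small matrix of trainable embeddings prepended to a frozen encoder's token embeddings. The Hugging Face-style `inputs_embeds` interface, the prompt length, and the classification head are assumptions for illustration; the paper's domain-specific gating is not reproduced here.

```python
# Generic soft-prompt sketch: trainable prompt vectors are prepended to the
# frozen encoder's token embeddings before classification.
import torch
import torch.nn as nn

class SoftPromptClassifier(nn.Module):
    def __init__(self, encoder, hidden_size: int, num_labels: int, prompt_len: int = 20):
        super().__init__()
        self.encoder = encoder  # assumed: Hugging Face-style encoder accepting inputs_embeds
        for p in self.encoder.parameters():
            p.requires_grad = False  # keep the pre-trained weights frozen
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)
        self.head = nn.Linear(hidden_size, num_labels)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden) token embeddings from the encoder's embedding layer
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        x = torch.cat([prompt, input_embeds], dim=1)  # prepend the soft prompt
        hidden = self.encoder(inputs_embeds=x).last_hidden_state
        return self.head(hidden[:, 0])  # classify from the first (prompt) position
```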

Retrieval as Attention: End-to-end Learning of Retrieval and Reading within a Single Transformer

1 code implementation • 5 Dec 2022 • Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, Graham Neubig

Systems for knowledge-intensive tasks such as open-domain question answering (QA) usually consist of two stages: efficient retrieval of relevant documents from a large corpus and detailed reading of the selected documents to generate answers.

Open-Domain Question Answering • Passage Retrieval • +1
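
The abstract above describes the conventional two-stage retrieve-then-read pipeline that the paper folds into a single transformer. A minimal sketch of that conventional pipeline follows; the bag-of-words retriever and the placeholder reader are illustrative stand-ins, not the paper's model.

```python
# Two-stage open-domain QA sketch: (1) retrieve top-k passages, (2) read them.
from collections import Counter

def score(question: str, passage: str) -> int:
    """Bag-of-words overlap score between question and passage."""
    q, p = Counter(question.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Stage 1: return the k highest-scoring passages."""
    return sorted(corpus, key=lambda p: score(question, p), reverse=True)[:k]

def read(question: str, passages: list[str]) -> str:
    """Stage 2 (placeholder): a real system would run a reader model here."""
    return passages[0] if passages else ""

corpus = [
    "Pittsburgh is a city in Pennsylvania.",
    "Open-domain QA retrieves documents and then reads them.",
]
print(read("Where is Pittsburgh?", retrieve("Where is Pittsburgh?", corpus)))
```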

Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering

no code implementations • COLING 2022 • Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig

In sum, these results demonstrate that multi-hop reasoning does not emerge naturally in generative QA models, but can be encouraged by advances in training or modeling techniques.

Generative Question Answering

Explicitly Capturing Relations between Entity Mentions via Graph Neural Networks for Domain-specific Named Entity Recognition

1 code implementation • ACL 2021 • Pei Chen, Haibo Ding, Jun Araki, Ruihong Huang

Named entity recognition (NER) is well studied for the general domain, and recent systems have achieved human-level performance for identifying common entity types.

named-entity-recognition • Named Entity Recognition • +1

How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering

1 code implementation • 2 Dec 2020 • Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig

We examine this question from the point of view of calibration, the property of a probabilistic model's predicted probabilities actually being well correlated with the probabilities of correctness.

Common Sense Reasoning • Question Answering
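
Calibration, as defined in the abstract above, can be quantified with expected calibration error (ECE): bin the model's answer confidences and compare each bin's mean confidence against its empirical accuracy. The sketch below uses equal-width bins and toy predictions as illustrative assumptions, not the paper's exact evaluation setup.

```python
# Expected calibration error over answer confidences.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Gap between average confidence and accuracy in this bin,
            # weighted by the fraction of examples falling in the bin.
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: a well-calibrated model's confidence tracks its accuracy.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.3], [1, 1, 1, 0]))
```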

X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models

1 code implementation • EMNLP 2020 • Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, Graham Neubig

We further propose a code-switching-based method to improve the ability of multilingual LMs to access knowledge, and verify its effectiveness on several benchmark languages.

Retrieval

How Can We Know What Language Models Know?

1 code implementation • TACL 2020 • Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig

Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession".
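
The fill-in-the-blank probing described above can be reproduced in a few lines with a generic masked language model. The sketch below uses the Hugging Face `fill-mask` pipeline; the choice of `bert-base-cased` is an illustrative assumption, not the checkpoints or prompt sets studied in the paper.

```python
# Query a masked LM for factual knowledge via a cloze-style prompt.
from transformers import pipeline

unmask = pipeline("fill-mask", model="bert-base-cased")
for pred in unmask("Obama is a [MASK] by profession."):
    # Each prediction carries the filled token and the model's probability.
    print(f'{pred["token_str"]:>12}  {pred["score"]:.3f}')
```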

Generalizing Natural Language Analysis through Span-relation Representations

3 code implementations • ACL 2020 • Zhengbao Jiang, Wei Xu, Jun Araki, Graham Neubig

Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures.

Aspect-Based Sentiment Analysis • Aspect-Based Sentiment Analysis (ABSA) • +8

Open-Domain Event Detection using Distant Supervision

1 code implementation • COLING 2018 • Jun Araki, Teruko Mitamura

This paper introduces open-domain event detection, a new event detection paradigm to address issues of prior work on restricted domains and event annotation.

Event Detection • Open-Domain Question Answering
