no code implementations • 28 Feb 2025 • Kangda Wei, Zhengyu Zhou, Bingqing Wang, Jun Araki, Lukas Lange, Ruihong Huang, Zhe Feng
In recent years, online lecture videos have become an increasingly popular resource for acquiring new knowledge.
no code implementations • 8 Dec 2023 • Mobashir Sadat, Zhengyu Zhou, Lukas Lange, Jun Araki, Arsalan Gundroo, Bingqing Wang, Rakesh R Menon, Md Rizwan Parvez, Zhe Feng
Hallucination is a well-known phenomenon in text generated by large language models (LLMs).
2 code implementations • 14 Nov 2023 • Zhiruo Wang, Jun Araki, Zhengbao Jiang, Md Rizwan Parvez, Graham Neubig
To alleviate these problems, we propose FILCO, a method that improves the quality of the context provided to the generator by (1) identifying useful context based on lexical and information-theoretic approaches, and (2) training context filtering models that can filter retrieved contexts at test time.
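As a rough illustration of the lexical side of such context filtering (this is only a sketch of the general idea, not the authors' released FILCO code; function names and the overlap threshold are hypothetical):

```python
# Illustrative sketch: score retrieved passages by unigram overlap with the query
# and keep only the best-matching ones -- one simple instance of a lexical
# filtering measure. Not the authors' implementation.
from collections import Counter

def lexical_overlap(query: str, passage: str) -> float:
    """Unigram overlap between query and passage, normalized by query length."""
    q_tokens = Counter(query.lower().split())
    p_tokens = Counter(passage.lower().split())
    shared = sum((q_tokens & p_tokens).values())
    return shared / max(sum(q_tokens.values()), 1)

def filter_context(query: str, passages: list[str], threshold: float = 0.3) -> list[str]:
    """Keep only passages whose overlap with the query exceeds a threshold."""
    return [p for p in passages if lexical_overlap(query, p) >= threshold]

if __name__ == "__main__":
    passages = [
        "Barack Obama served as the 44th president of the United States.",
        "The Great Barrier Reef is the world's largest coral reef system.",
    ]
    print(filter_context("Who was the 44th president of the United States?", passages))
```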
no code implementations • 30 Aug 2023 • Anthony Colas, Jun Araki, Zhengyu Zhou, Bingqing Wang, Zhe Feng
Explanations accompanying a recommendation can help users understand the decision made by a recommendation system, which in turn increases their confidence and trust in the system.
1 code implementation • 14 Feb 2023 • Koustava Goswami, Lukas Lange, Jun Araki, Heike Adel
Prompting pre-trained language models leads to promising results across natural language processing tasks but is less effective when applied in low-resource domains, due to the domain gap between the pre-training data and the downstream task.
1 code implementation • 5 Dec 2022 • Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, Graham Neubig
Systems for knowledge-intensive tasks such as open-domain question answering (QA) usually consist of two stages: efficient retrieval of relevant documents from a large corpus and detailed reading of the selected documents to generate answers.
Ranked #1 on Passage Retrieval on Natural Questions
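A minimal sketch of the conventional two-stage pipeline described above (retrieve, then read); the paper itself studies tighter integration of the two stages, so this is only a baseline illustration, with TF-IDF retrieval and a placeholder reader:

```python
# Stage 1: retrieve candidate documents; Stage 2: read them to produce an answer.
# Requires scikit-learn. The corpus and reader here are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the highest mountain above sea level.",
    "The Pacific Ocean is the largest ocean on Earth.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Stage 1: rank documents by TF-IDF cosine similarity to the question."""
    vectorizer = TfidfVectorizer().fit(docs + [question])
    doc_vecs = vectorizer.transform(docs)
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vecs)[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

def read(question: str, passages: list[str]) -> str:
    """Stage 2 (placeholder): a real reader would extract or generate an answer."""
    return passages[0]

question = "Where is the Eiffel Tower?"
print(read(question, retrieve(question, corpus)))
```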
no code implementations • COLING 2022 • Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig
In sum, these results demonstrate that multi-hop reasoning does not emerge naturally in generative QA models, but can be encouraged by advances in training or modeling techniques.
1 code implementation • ACL 2021 • Pei Chen, Haibo Ding, Jun Araki, Ruihong Huang
Named entity recognition (NER) is well studied for the general domain, and recent systems have achieved human-level performance for identifying common entity types.
1 code implementation • 2 Dec 2020 • Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig
We examine this question from the point of view of calibration, the property of a probabilistic model's predicted probabilities actually being well correlated with the probabilities of correctness.
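To make the notion of calibration concrete, a common diagnostic is expected calibration error (ECE), which bins predictions by confidence and compares each bin's average confidence to its accuracy. The snippet below is a generic sketch of that metric, not the paper's evaluation code:

```python
# Expected calibration error: weight the |confidence - accuracy| gap in each
# confidence bin by the fraction of examples falling in that bin.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy example: a well-calibrated model's confidences track its accuracy.
print(expected_calibration_error([0.9, 0.8, 0.6, 0.3], [1, 1, 1, 0]))
```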
1 code implementation • EMNLP 2020 • Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, Graham Neubig
We further propose a code-switching-based method to improve the ability of multilingual LMs to access knowledge, and verify its effectiveness on several benchmark languages.
1 code implementation • AKBC 2020 • Zhengbao Jiang, Jun Araki, Donghan Yu, Ruohong Zhang, Wei Xu, Yiming Yang, Graham Neubig
We propose several methods that incorporate both structured and textual information to represent relations for this task.
1 code implementation • TACL 2020 • Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig
Recent work has presented intriguing results examining the knowledge contained in language models (LM) by having the LM fill in the blanks of prompts such as "Obama is a _ by profession".
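This kind of fill-in-the-blank probing is easy to reproduce with the Hugging Face `transformers` fill-mask pipeline; the model chosen below is just an example, as the paper experiments with several LMs and many prompt variants:

```python
# Probe a masked LM with a cloze-style prompt and inspect its top predictions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for prediction in fill_mask("Obama is a [MASK] by profession."):
    print(f"{prediction['token_str']:>12}  {prediction['score']:.3f}")
```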
3 code implementations • ACL 2020 • Zhengbao Jiang, Wei Xu, Jun Araki, Graham Neubig
Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures.
Ranked #1 on Relation Extraction on WLPC
Tasks: Aspect-Based Sentiment Analysis (ABSA), +8 more
1 code implementation • COLING 2018 • Jun Araki, Teruko Mitamura
This paper introduces open-domain event detection, a new event detection paradigm to address issues of prior work on restricted domains and event annotation.
no code implementations • COLING 2016 • Jun Araki, Dheeraj Rajagopal, Sreecharan Sankaranarayanan, Susan Holm, Yukari Yamakawa, Teruko Mitamura
We present a novel approach to automated question generation that improves upon prior work both from a technology perspective and from an assessment perspective.
no code implementations • LREC 2014 • Jun Araki, Zhengzhong Liu, Eduard Hovy, Teruko Mitamura
First, we introduce a multiclass logistic regression model that can detect subevent relations in addition to full coreference.
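A schematic sketch of a multiclass logistic regression over event-mention pairs, in the spirit of the model described above, is given below; the pairwise features and label set are toy placeholders rather than the feature set used in the paper:

```python
# Classify each pair of event mentions as coreference, subevent, or no relation.
# Requires scikit-learn. Features and data are illustrative only.
from sklearn.linear_model import LogisticRegression

# Hypothetical pairwise features, e.g. [lemma match, temporal overlap, argument overlap]
X_train = [
    [1.0, 1.0, 0.9],  # same event described twice   -> full coreference
    [0.2, 0.8, 0.5],  # one event is part of another -> subevent
    [0.0, 0.1, 0.0],  # unrelated mentions           -> no relation
    [0.9, 0.9, 0.8],
    [0.3, 0.7, 0.6],
    [0.1, 0.0, 0.1],
]
y_train = ["coreference", "subevent", "none", "coreference", "subevent", "none"]

clf = LogisticRegression().fit(X_train, y_train)
print(clf.predict([[0.25, 0.75, 0.55]]))  # expected to come out as "subevent"
```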
no code implementations • LREC 2014 • Zhengzhong Liu, Jun Araki, Eduard Hovy, Teruko Mitamura
Event coreference is an important task for full text analysis.