Search Results for author: Bingsheng Yao

Found 24 papers, 8 papers with code

Frustratingly Hard Evidence Retrieval for QA Over Books

no code implementations • WS 2020 • Xiangyang Mou, Mo Yu, Bingsheng Yao, Chenghao Yang, Xiaoxiao Guo, Saloni Potdar, Hui Su

Substantial progress has been made on question answering (QA) in recent years, but the particular problem of QA over narrative book stories has not been explored in depth.

Question Answering • Retrieval

Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study

3 code implementations • 7 Jun 2021 • Xiangyang Mou, Chenghao Yang, Mo Yu, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, Hui Su

Recent advancements in open-domain question answering (ODQA), i.e., finding answers in large open-domain corpora like Wikipedia, have led to human-level performance on many datasets.

Open-Domain Question Answering

StoryBuddy: A Human-AI Collaborative Chatbot for Parent-Child Interactive Storytelling with Flexible Parental Involvement

1 code implementation • 13 Feb 2022 • Zheng Zhang, Ying Xu, Yanhao Wang, Bingsheng Yao, Daniel Ritchie, Tongshuang Wu, Mo Yu, Dakuo Wang, Toby Jia-Jun Li

Despite its benefits for children's skill development and parent-child bonding, many parents rarely engage in interactive storytelling through story-related dialogues with their child, due to limited availability or difficulty coming up with appropriate questions.

Chatbot

Efficient Long Sequence Encoding via Synchronization

no code implementations • 15 Mar 2022 • Xiangyang Mou, Mo Yu, Bingsheng Yao, Lifu Huang

Pre-trained Transformer models have achieved success on a wide range of NLP tasks, but are inefficient when dealing with long input sequences.

GEMv2: Multilingual NLG Benchmarking in a Single Line of Code

no code implementations • 22 Jun 2022 • Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina McMillan-Major, Anna Shvets, Ashish Upadhyay, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh Dhole, Khyathi Raghavi Chandu, Laura Perez-Beltrachini, Leonardo F. R. Ribeiro, Lewis Tunstall, Li Zhang, Mahima Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Štajner, Sebastien Montella, Shailza, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, Yufang Hou

This problem is especially pertinent in natural language generation which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims.

Benchmarking • Text Generation

NECE: Narrative Event Chain Extraction Toolkit

no code implementations • 17 Aug 2022 • Guangxuan Xu, Paulina Toro Isaza, Moshi Li, Akintoye Oloko, Bingsheng Yao, Cassia Sanctos, Aminat Adebiyi, Yufang Hou, Nanyun Peng, Dakuo Wang

To understand a narrative, it is essential to comprehend the temporal event flows, especially those associated with main characters; however, this can be challenging with lengthy and unstructured narrative texts.

Question Answering

Beyond Labels: Empowering Human Annotators with Natural Language Explanations through a Novel Active-Learning Architecture

1 code implementation • 22 May 2023 • Bingsheng Yao, Ishan Jindal, Lucian Popa, Yannis Katsis, Sayan Ghosh, Lihong He, Yuxuan Lu, Shashank Srivastava, Yunyao Li, James Hendler, Dakuo Wang

Our AL architecture leverages an explanation-generation model that produces explanations guided by human explanations, a prediction model that faithfully uses the generated explanations for its predictions, and a novel data diversity-based AL sampling strategy that benefits from the explanation annotations.

Active Learning • Decision Making +2
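One common way to realize a data diversity-based AL sampling strategy like the one described above is greedy farthest-point selection over feature vectors: repeatedly pick the unlabeled example farthest from everything already selected. This is an illustrative sketch only, not the paper's implementation; the Euclidean distance, the plain-tuple feature vectors, and seeding with the first point are all assumptions.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def diversity_sample(pool, k):
    """Greedy farthest-point selection of k indices from a pool of
    feature vectors: each step adds the point whose minimum distance
    to the already-selected set is largest."""
    selected = [0]  # seed with the first point (an arbitrary choice)
    while len(selected) < k:
        best_i, best_d = None, -1.0
        for i in range(len(pool)):
            if i in selected:
                continue
            d = min(euclidean(pool[i], pool[j]) for j in selected)
            if d > best_d:
                best_i, best_d = i, d
        selected.append(best_i)
    return selected

# Two near-duplicates and two distant points: the sampler skips the
# near-duplicate of the seed and picks the spread-out points first.
print(diversity_sample([(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (5.0, 5.0)], 3))
```

In a real AL loop the feature vectors would typically be model embeddings, and the selected indices would be sent to human annotators for labels and explanations.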

Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data

1 code implementation • 26 Jul 2023 • Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, Saadia Gabriel, Hong Yu, James Hendler, Marzyeh Ghassemi, Anind K. Dey, Dakuo Wang

More importantly, our experiments show that instruction finetuning can significantly boost the performance of LLMs for all tasks simultaneously.

Language Modelling

Talk2Care: Facilitating Asynchronous Patient-Provider Communication with Large-Language-Model

no code implementations • 17 Sep 2023 • Ziqi Yang, Xuhai Xu, Bingsheng Yao, Shao Zhang, Ethan Rogers, Stephen Intille, Nawar Shara, Guodong Gordon Gao, Dakuo Wang

(2) For health providers, we built an LLM-based dashboard to summarize and present important health information based on older adults' conversations with the VA. We further conducted two user studies with older adults and providers to evaluate the usability of the system.

Language Modelling • Large Language Model

'Don't Get Too Technical with Me': A Discourse Structure-Based Framework for Science Journalism

1 code implementation • 23 Oct 2023 • Ronald Cardenas, Bingsheng Yao, Dakuo Wang, Yufang Hou

Science journalism refers to the task of reporting the technical findings of a scientific paper as a less technical news article for a general audience.

Human Still Wins over LLM: An Empirical Study of Active Learning on Domain-Specific Annotation Tasks

no code implementations • 16 Nov 2023 • Yuxuan Lu, Bingsheng Yao, Shao Zhang, Yun Wang, Peng Zhang, Tun Lu, Toby Jia-Jun Li, Dakuo Wang

Large Language Models (LLMs) have demonstrated considerable advances, and several claims have been made that they exceed human performance.

Active Learning

More Samples or More Prompt Inputs? Exploring Effective In-Context Sampling for LLM Few-Shot Prompt Engineering

no code implementations • 16 Nov 2023 • Bingsheng Yao, Guiming Chen, Ruishi Zou, Yuxuan Lu, Jiachen Li, Shao Zhang, Sijia Liu, James Hendler, Dakuo Wang

While most existing work on LLM prompt engineering focuses only on how to select a better set of data samples within one single prompt input (In-Context Learning, or ICL), why not design and leverage multiple prompt inputs together to further improve LLM performance?

In-Context Learning • Prompt Engineering
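The idea of leveraging multiple prompt inputs could be sketched roughly as follows: build several prompts from different demonstration subsets, query the model once per prompt, and aggregate the answers. This is an illustrative sketch of the general idea, not the paper's method; the `llm` callable, the prompt template, and majority voting as the aggregation rule are all assumptions.

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate answers from multiple prompt inputs by majority vote."""
    return Counter(answers).most_common(1)[0][0]

def query_with_multiple_prompts(question, sample_sets, llm):
    """Query `llm` (a callable: prompt string -> answer string) once per
    demonstration set, then aggregate the per-prompt answers."""
    prompts = [
        "\n".join(samples) + "\nQ: " + question + "\nA:"
        for samples in sample_sets
    ]
    return majority_vote([llm(p) for p in prompts])

# Usage with a stub "LLM" that ignores its prompt:
stub_llm = lambda prompt: "4"
print(query_with_multiple_prompts(
    "What is 2 + 2?",
    [["Q: 1 + 1?\nA: 2"], ["Q: 3 + 3?\nA: 6"], ["Q: 5 + 0?\nA: 5"]],
    stub_llm,
))
```

Compared with a single ICL prompt, this trades extra inference calls for robustness to any one unlucky choice of in-context samples.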

FairytaleCQA: Integrating a Commonsense Knowledge Graph into Children's Storybook Narratives

no code implementations • 16 Nov 2023 • Jiaju Chen, Yuxuan Lu, Shao Zhang, Bingsheng Yao, Yuanzhe Dong, Ying Xu, Yunyao Li, Qianwen Wang, Dakuo Wang, Yuling Sun

AI models (including LLMs) often rely on narrative question-answering (QA) datasets to provide customized QA functionality for downstream children's education applications; however, existing datasets include only QA pairs grounded in the given storybook content, whereas children can learn more when teachers relate the storybook content to real-world knowledge (e.g., commonsense knowledge).

Question Answering • World Knowledge

Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework

1 code implementation • 16 Nov 2023 • Matthew Pisano, Peter Ly, Abraham Sanders, Bingsheng Yao, Dakuo Wang, Tomek Strzalkowski, Mei Si

To help mitigate this issue, we introduce Bergeron: a framework designed to improve the robustness of LLMs against attacks without any additional parameter fine-tuning.

Human-Centered Privacy Research in the Age of Large Language Models

no code implementations • 3 Feb 2024 • Tianshi Li, Sauvik Das, Hao-Ping Lee, Dakuo Wang, Bingsheng Yao, Zhiping Zhang

The emergence of large language models (LLMs), and their increased use in user-facing systems, has led to substantial privacy concerns.

Memorization
