no code implementations • 16 Nov 2023 • Jiaju Chen, Yuxuan Lu, Shao Zhang, Bingsheng Yao, Yuanzhe Dong, Ying Xu, Yunyao Li, Qianwen Wang, Dakuo Wang, Yuling Sun
AI models (including LLMs) often rely on narrative question-answering (QA) datasets to provide customized QA functionalities for downstream children's education applications. However, existing datasets only include QA pairs grounded in the given storybook content, whereas children can learn more when teachers relate the storybook content to real-world knowledge (e.g., commonsense knowledge).
1 code implementation • 26 Jul 2023 • Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, Saadia Gabriel, Hong Yu, James Hendler, Marzyeh Ghassemi, Anind K. Dey, Dakuo Wang
More importantly, our experiments show that instruction finetuning can significantly boost the performance of LLMs for all tasks simultaneously.
1 code implementation • 20 Mar 2022 • Yu Qing Zhou, Xixuan Julie Liu, Yuanzhe Dong
In this paper, we show that our combination of the best architecture and data augmentation techniques achieves a 53.477 F1 score in the out-of-domain evaluation, a 9.52% performance gain over the baseline.
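As a quick sanity check on the reported numbers, the baseline F1 implied by a 53.477 score and a 9.52% relative gain can be recovered with simple arithmetic (the baseline value itself is not stated in the excerpt, so this is only an inferred figure):

```python
# Reported out-of-domain result and relative improvement from the abstract.
f1_score = 53.477
relative_gain = 0.0952  # 9.52%

# Implied baseline F1, assuming the gain is relative to the baseline:
# f1_score = baseline * (1 + relative_gain)
implied_baseline = f1_score / (1 + relative_gain)
print(f"Implied baseline F1: {implied_baseline:.2f}")  # ~48.83
```

This is only a consistency check on the excerpt's figures, not a value taken from the paper.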