Search Results for author: Yingxiu Zhao

Found 9 papers, 4 papers with code

Tapilot-Crossing: Benchmarking and Evolving LLMs Towards Interactive Data Analysis Agents

no code implementations • 8 Mar 2024 • Jinyang Li, Nan Huo, Yan Gao, Jiayi Shi, Yingxiu Zhao, Ge Qu, Yurong Wu, Chenhao Ma, Jian-Guang Lou, Reynold Cheng

The challenges and costs of collecting realistic interactive logs for data analysis hinder the quantitative evaluation of Large Language Model (LLM) agents in this task.

Benchmarking • Decision Making • +2

A Preliminary Study of the Intrinsic Relationship between Complexity and Alignment

1 code implementation • 10 Aug 2023 • Yingxiu Zhao, Bowen Yu, Binyuan Hui, Haiyang Yu, Fei Huang, Yongbin Li, Nevin L. Zhang

Training large language models (LLMs) with open-domain instruction data has yielded remarkable success in aligning to end tasks and human preferences.

Causal Document-Grounded Dialogue Pre-training

1 code implementation • 18 May 2023 • Yingxiu Zhao, Bowen Yu, Haiyang Yu, Bowen Li, Jinyang Li, Chao Wang, Fei Huang, Yongbin Li, Nevin L. Zhang

To tackle this issue, we are the first to present a causally-complete dataset construction strategy for building million-scale DocGD pre-training corpora.

Semi-Supervised Lifelong Language Learning

1 code implementation • 23 Nov 2022 • Yingxiu Zhao, Yinhe Zheng, Bowen Yu, Zhiliang Tian, Dongkyu Lee, Jian Sun, Haiyang Yu, Yongbin Li, Nevin L. Zhang

In this paper, we explore a novel setting, semi-supervised lifelong language learning (SSLL), where a model learns sequentially arriving language tasks with both labeled and unlabeled data.

Transfer Learning
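
The SSLL setting described above can be pictured with a short training loop. The sketch below is a hypothetical illustration of the setting only, not the paper's method: tasks arrive one at a time, and each task contributes a small labeled set plus unlabeled data that is folded in through confidence-thresholded pseudo-labels. The model, threshold, and synthetic data are all made up for the example.

```python
# Hypothetical sketch of the semi-supervised lifelong learning (SSLL) setting:
# tasks arrive sequentially, each with a small labeled set and a larger
# unlabeled set; pseudo-labeling stands in for the semi-supervised part.
# Illustration of the setting, not the paper's actual algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(16, 4)                      # toy classifier shared across tasks
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def make_task(n_labeled=32, n_unlabeled=128, dim=16, n_classes=4):
    """Generate a synthetic task with labeled and unlabeled splits."""
    x_l = torch.randn(n_labeled, dim)
    y_l = torch.randint(0, n_classes, (n_labeled,))
    x_u = torch.randn(n_unlabeled, dim)
    return x_l, y_l, x_u

for task_id in range(3):                      # tasks arrive one after another
    x_l, y_l, x_u = make_task()
    for epoch in range(5):
        # supervised loss on the labeled split
        loss = F.cross_entropy(model(x_l), y_l)
        # pseudo-label the unlabeled split with the current model,
        # keeping only confident predictions (threshold is arbitrary)
        with torch.no_grad():
            probs = F.softmax(model(x_u), dim=-1)
            conf, pseudo = probs.max(dim=-1)
            mask = conf > 0.8
        if mask.any():
            loss = loss + F.cross_entropy(model(x_u[mask]), pseudo[mask])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"finished task {task_id}")
```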

Hard Gate Knowledge Distillation -- Leverage Calibration for Robust and Reliable Language Model

no code implementations • 22 Oct 2022 • Dongkyu Lee, Zhiliang Tian, Yingxiu Zhao, Ka Chun Cheung, Nevin L. Zhang

Our work answers this question with the concept of model calibration: we view a teacher model not only as a source of knowledge but also as a gauge for detecting miscalibration in the student.

Knowledge Distillation • Language Modelling • +2
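
The teacher-as-gauge idea can be sketched as a per-example hard gate between ground-truth supervision and distillation. The gating rule below (distill from the teacher when the student looks more confident than the teacher on the gold label) is an assumed stand-in for calibration-based gating, not the rule from the paper; the function name, temperature, and tensor shapes are illustrative.

```python
# Hypothetical hard-gated distillation loss: per example, a binary gate picks
# either the one-hot ground truth or the teacher's soft targets. The gating
# rule (student more confident than the teacher on the gold label) is an
# illustrative stand-in for calibration-based gating, not the paper's rule.
import torch
import torch.nn.functional as F

def hard_gate_kd_loss(student_logits, teacher_logits, labels, temperature=2.0):
    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # per-example confidence of student and teacher on the gold label
    s_conf = F.softmax(student_logits, dim=-1).gather(1, labels.unsqueeze(1)).squeeze(1)
    t_conf = F.softmax(teacher_logits, dim=-1).gather(1, labels.unsqueeze(1)).squeeze(1)
    gate = (s_conf > t_conf).float()          # 1 -> distill from teacher, 0 -> use labels

    ce = F.cross_entropy(student_logits, labels, reduction="none")
    kd = F.kl_div(s_log_probs, t_probs, reduction="none").sum(dim=-1) * temperature ** 2
    return (gate * kd + (1.0 - gate) * ce).mean()

# usage with random tensors
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = hard_gate_kd_loss(student, teacher, labels)
loss.backward()
```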

SeqPATE: Differentially Private Text Generation via Knowledge Distillation

no code implementations • 29 Sep 2021 • Zhiliang Tian, Yingxiu Zhao, Ziyue Huang, Yu-Xiang Wang, Nevin Zhang, He He

Differentially private (DP) learning algorithms provide guarantees against identifying the existence of a training sample from model outputs.

Knowledge Distillation • Sentence • +2
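
SeqPATE builds on PATE-style private aggregation; as a point of reference only, the sketch below shows generic noisy-argmax teacher voting with the Laplace mechanism, not SeqPATE's text-generation-specific procedure. The teacher count, number of classes, and epsilon are arbitrary choices for the example.

```python
# Generic PATE-style noisy aggregation (the family of methods SeqPATE builds
# on), not SeqPATE itself: each teacher is trained on a disjoint shard of the
# private data, their votes on a public input are tallied, and Laplace noise
# is added to the counts before taking the argmax, so the released label
# carries a differential-privacy guarantee.
import numpy as np

rng = np.random.default_rng(0)

def noisy_vote(teacher_predictions, n_classes, epsilon=1.0):
    """teacher_predictions: array of class ids, one vote per teacher."""
    counts = np.bincount(teacher_predictions, minlength=n_classes).astype(float)
    counts += rng.laplace(scale=1.0 / epsilon, size=n_classes)  # Laplace mechanism
    return int(np.argmax(counts))

# 25 teachers vote on one public example; a student would train on the noisy label
votes = rng.integers(0, 5, size=25)
label_for_student = noisy_vote(votes, n_classes=5, epsilon=1.0)
print(label_for_student)
```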
