Search Results for author: Chengyuan Liu

Found 6 papers, 2 papers with code

Evolving Knowledge Distillation with Large Language Models and Active Learning

no code implementations11 Mar 2024 Chengyuan Liu, Yangyang Kang, Fubang Zhao, Kun Kuang, Zhuoren Jiang, Changlong Sun, Fei Wu

In this paper, we propose EvoKD: Evolving Knowledge Distillation, which leverages the concept of active learning to interactively enhance the process of data generation with large language models, while simultaneously improving the task capabilities of the small domain model (student model).

Active Learning Knowledge Distillation +5
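The snippet above describes an interactive loop: train the student, find what it still gets wrong, and ask the teacher LLM to generate new data targeting those weaknesses. A minimal sketch of such a loop, with entirely hypothetical callables standing in for the paper's actual components (the real EvoKD interface is not shown here):

```python
# Hypothetical EvoKD-style loop. `teacher_generate`, `train`, and
# `evaluate` are illustrative stand-ins, not the authors' actual API.
def evokd_loop(teacher_generate, train, evaluate, student, seed_samples, rounds=3):
    samples = list(seed_samples)
    for _ in range(rounds):
        train(student, samples)
        # Active learning: keep only samples the student currently fails on.
        hard = [s for s in samples if not evaluate(student, s)]
        if not hard:
            break
        # Ask the teacher LLM for new samples targeting those weaknesses.
        samples.extend(teacher_generate(hard))
    return samples
```

The active-learning step is what distinguishes this from one-shot distillation: each generation round is conditioned on the student's current failure cases rather than drawn blindly.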

Goal-Oriented Prompt Attack and Safety Evaluation for LLMs

1 code implementation21 Sep 2023 Chengyuan Liu, Fubang Zhao, Lizhi Qing, Yangyang Kang, Changlong Sun, Kun Kuang, Fei Wu

There are several black-box attack methods, such as Prompt Attack, which can change the behaviour of LLMs and induce them to generate unexpected answers with harmful content.

Investigating the Robustness of Natural Language Generation from Logical Forms via Counterfactual Samples

2 code implementations16 Oct 2022 Chengyuan Liu, Leilei Gan, Kun Kuang, Fei Wu

To verify this hypothesis, we manually construct a set of counterfactual samples, which modify the original logical forms to generate counterfactual logical forms with rarely co-occurring table headers and logical operators.

counterfactual Logical Reasoning +1
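The snippet above describes perturbing logical forms so that table headers and logical operators that rarely co-occur in training data are paired together. A toy illustration, assuming a logical form can be modelled as (operator, header) pairs (the paper's actual logical-form representation is not shown here):

```python
# Illustrative only: a "logical form" as a list of (operator, table_header)
# pairs. A counterfactual variant swaps in a rarely co-occurring header for
# a given operator, keeping the operators themselves unchanged.
def make_counterfactual(logical_form, rare_header_for_op):
    return [
        (op, rare_header_for_op.get(op, header))
        for op, header in logical_form
    ]
```

The point of such samples is to test whether a generation model has learned the compositional semantics of the logical form, or merely memorized frequent operator-header co-occurrences.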

ALL-IN-ONE: Multi-Task Learning BERT models for Evaluating Peer Assessments

no code implementations8 Oct 2021 Qinjin Jia, Jialin Cui, Yunkai Xiao, Chengyuan Liu, Parvez Rashid, Edward F. Gehringer

Peer assessment has been widely applied across diverse academic fields over the last few decades and has demonstrated its effectiveness.

Multi-Task Learning

Convolutional Recurrent Neural Networks for Glucose Prediction

no code implementations9 Jul 2018 Kezhi Li, John Daniels, Chengyuan Liu, Pau Herrero, Pantelis Georgiou

In addition, the model achieves a competitive effective prediction horizon ($PH_{eff}$) with minimal time lag, both on a simulated patient dataset ($PH_{eff}$ = 29.0 $\pm$ 0.7 for 30-min and $PH_{eff}$ = 49.8 $\pm$ 2.9 for 60-min) and on a real patient dataset ($PH_{eff}$ = 19.3 $\pm$ 3.1 for 30-min and $PH_{eff}$ = 29.3 $\pm$ 9.4 for 60-min).

Management
