Search Results for author: Hongyi Yuan

Found 19 papers, 15 papers with code

Speculative Contrastive Decoding

no code implementations • 15 Nov 2023 • Hongyi Yuan, Keming Lu, Fei Huang, Zheng Yuan, Chang Zhou

Large language models (LLMs) exhibit exceptional performance on language tasks, yet their auto-regressive inference is limited by high computational requirements and is sub-optimal due to exposure bias.

Routing to the Expert: Efficient Reward-guided Ensemble of Large Language Models

no code implementations • 15 Nov 2023 • Keming Lu, Hongyi Yuan, Runji Lin, Junyang Lin, Zheng Yuan, Chang Zhou, Jingren Zhou

Zooter is computationally efficient at inference, as it introduces only the minor overhead of a routing function, compared with reward-model ranking methods.
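As an illustration of this kind of reward-guided routing at inference time, the sketch below scores candidate LLMs with a lightweight router and lets only the top-ranked expert generate; the `router` and `experts` objects and their interfaces are hypothetical stand-ins, not Zooter's actual API.

```python
import torch

def route_and_generate(query_embedding, router, experts, query):
    """Minimal sketch of query routing: a small routing function scores each
    candidate LLM, and only the top-ranked expert runs generation, instead of
    generating with every expert and ranking the outputs with a reward model.
    `router`, `experts`, and their interfaces are illustrative assumptions."""
    with torch.no_grad():
        scores = router(query_embedding)      # one cheap forward pass of a small router
    best = int(torch.argmax(scores))          # pick the highest-scoring expert
    return experts[best].generate(query)      # only this model decodes the response
```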

TAG

How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition

2 code implementations • 9 Oct 2023 • Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, Jingren Zhou

We propose four intriguing research questions to explore the association between model performance and various factors including data amount, composition ratio, model size and SFT strategies.

Code Generation, Instruction Following, +2

Query and Response Augmentation Cannot Help Out-of-domain Math Reasoning Generalization

1 code implementation • 9 Oct 2023 • Chengpeng Li, Zheng Yuan, Hongyi Yuan, Guanting Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, Chang Zhou

In this paper, we investigate such data augmentation in math reasoning and aim to answer: (1) What strategies of data augmentation are more effective; (2) What is the scaling relationship between the amount of augmented data and model performance; and (3) Can data augmentation incentivize generalization to out-of-domain mathematical reasoning tasks?

Ranked #50 on Math Word Problem Solving on MATH (using extra training data)

Arithmetic Reasoning, Data Augmentation, +3

#InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models

1 code implementation • 14 Aug 2023 • Keming Lu, Hongyi Yuan, Zheng Yuan, Runji Lin, Junyang Lin, Chuanqi Tan, Chang Zhou, Jingren Zhou

Based on this observation, we propose a data selector based on InsTag to select 6K diverse and complex samples from open-source datasets and fine-tune models on InsTag-selected data.
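To illustrate what a tag-based selector of diverse and complex samples could look like, here is a minimal sketch; the greedy coverage heuristic and the `samples[...]["tags"]` structure are illustrative assumptions, not InsTag's exact selection algorithm.

```python
def select_diverse_complex(samples, k=6000):
    """Minimal sketch of a tag-based data selector: prefer complex samples
    (many tags) and greedily keep those that add unseen tags, then fill the
    remaining budget with the most complex leftovers. An illustrative
    approximation of diversity/complexity-driven selection."""
    ranked = sorted(samples, key=lambda s: len(s["tags"]), reverse=True)
    selected, covered, leftovers = [], set(), []
    for sample in ranked:
        new_tags = set(sample["tags"]) - covered
        if new_tags:                      # adds diversity: covers unseen tags
            selected.append(sample)
            covered |= new_tags
        else:
            leftovers.append(sample)      # complex but redundant in tag coverage
        if len(selected) == k:
            return selected
    return selected + leftovers[: k - len(selected)]
```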

Instruction Following, TAG

Scaling Relationship on Learning Mathematical Reasoning with Large Language Models

1 code implementation • 3 Aug 2023 • Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, Jingren Zhou

We find that with augmented samples containing more distinct reasoning paths, rejection sampling fine-tuning (RFT) improves mathematical reasoning performance more for LLMs.
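A minimal sketch of how rejection sampling can collect distinct reasoning paths for fine-tuning data; `generate` and `extract_answer` are hypothetical helpers and the sampling budget is illustrative, so this is an approximation of the approach rather than the paper's code.

```python
def collect_rft_data(question, gold_answer, generate, extract_answer, k=16):
    """Minimal sketch of rejection-sampling data collection: sample k
    chain-of-thought solutions, keep only those whose final answer is
    correct, and deduplicate so the retained reasoning paths are distinct."""
    kept, seen = [], set()
    for _ in range(k):
        path = generate(question)                 # sample one reasoning path
        if extract_answer(path) != gold_answer:
            continue                              # reject wrong final answers
        if path in seen:
            continue                              # keep only distinct paths
        seen.add(path)
        kept.append({"query": question, "response": path})
    return kept
```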

Ranked #100 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning, GSM8K, +1

RRHF: Rank Responses to Align Language Models with Human Feedback without tears

1 code implementation • 11 Apr 2023 • Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, Fei Huang

Reinforcement Learning from Human Feedback (RLHF) facilitates the alignment of large language models with human preferences, significantly enhancing the quality of interactions between humans and models.

Language Modelling, Large Language Model

Revisiting Automatic Question Summarization Evaluation in the Biomedical Domain

no code implementations • 18 Mar 2023 • Hongyi Yuan, Yaoyun Zhang, Fei Huang, Songfang Huang

To better understand whether commonly used evaluation metrics are capable of evaluating automatic summarization in the biomedical domain, we conduct human evaluations of summarization quality from four different aspects of a biomedical question summarization task.

Text Generation

Exploring Partial Knowledge Base Inference in Biomedical Entity Linking

1 code implementation • 18 Mar 2023 • Hongyi Yuan, Keming Lu, Zheng Yuan

Biomedical entity linking (EL) consists of named entity recognition (NER) and named entity disambiguation (NED).

Entity Disambiguation, Entity Linking, +3

How well do Large Language Models perform in Arithmetic tasks?

1 code implementation • 16 Mar 2023 • Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang

Large language models have emergent abilities, including chain-of-thought reasoning, that enable them to answer math word problems step by step.

Math

EHRDiff: Exploring Realistic EHR Synthesis with Diffusion Models

1 code implementation • 10 Mar 2023 • Hongyi Yuan, Songchi Zhou, Sheng Yu

Electronic health records (EHR) contain a wealth of biomedical information, serving as valuable resources for the development of precision medicine systems.

Generative Adversarial Network, Image Generation

RAMM: Retrieval-augmented Biomedical Visual Question Answering with Multi-modal Pre-training

1 code implementation • 1 Mar 2023 • Zheng Yuan, Qiao Jin, Chuanqi Tan, Zhengyun Zhao, Hongyi Yuan, Fei Huang, Songfang Huang

We propose to retrieve similar image-text pairs, based on image-text contrastive (ITC) similarity, from pretraining datasets and introduce a novel retrieval-attention module to fuse the representation of the image and the question with the retrieved images and texts.
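For illustration, a minimal sketch of ITC-style retrieval over a bank of pretraining image-text embeddings; the symmetric cross-modal scoring rule and the tensor layout are assumptions, and the retrieval-attention fusion module itself is not shown.

```python
import torch
import torch.nn.functional as F

def retrieve_similar_pairs(query_img_emb, query_txt_emb,
                           bank_img_embs, bank_txt_embs, top_k=4):
    """Minimal sketch of ITC-based retrieval: score pretraining image-text
    pairs by cosine similarity against the query image and question
    embeddings, and return the indices of the top-k pairs for downstream
    fusion."""
    q_img = F.normalize(query_img_emb, dim=-1)        # (1, d)
    q_txt = F.normalize(query_txt_emb, dim=-1)        # (1, d)
    b_img = F.normalize(bank_img_embs, dim=-1)        # (N, d)
    b_txt = F.normalize(bank_txt_embs, dim=-1)        # (N, d)
    scores = (q_img @ b_txt.T + q_txt @ b_img.T).squeeze(0)  # cross-modal similarity
    return torch.topk(scores, k=top_k).indices        # indices of retrieved pairs
```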

Question Answering, Retrieval, +1

HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation

1 code implementation • 17 Dec 2022 • Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Fei Huang, Songfang Huang

Unlike previous works that only add noise to inputs or parameters, we argue that the hidden representations of Transformer layers convey more diverse and meaningful language information.
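A minimal sketch of hidden-representation perturbation during fine-tuning; the Gaussian noise distribution, scale, and call site (e.g., on each encoder layer's output) are illustrative choices, not necessarily the paper's exact settings.

```python
import torch

def hype_perturb(hidden_states, noise_std=1e-5, training=True):
    """Minimal sketch of hidden-representation perturbation: during
    fine-tuning, add small random noise to the hidden states passed between
    Transformer layers, rather than to the inputs or parameters."""
    if not training:
        return hidden_states                      # no perturbation at inference
    noise = torch.randn_like(hidden_states) * noise_std
    return hidden_states + noise
```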

Language Modelling, Natural Language Inference

Generative Biomedical Entity Linking via Knowledge Base-Guided Pre-training and Synonyms-Aware Fine-tuning

1 code implementation • NAACL 2022 • Hongyi Yuan, Zheng Yuan, Sheng Yu

Entities lie at the heart of biomedical natural language understanding, and the biomedical entity linking (EL) task remains challenging due to the fine-grained and diversiform concept names.

Entity Linking, Natural Language Understanding

BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model

1 code implementation • BioNLP (ACL) 2022 • Hongyi Yuan, Zheng Yuan, Ruyi Gan, Jiaxing Zhang, Yutao Xie, Sheng Yu

Furthermore, we conduct ablation studies on the pretraining tasks for BioBART and find that sentence permutation has negative effects on downstream tasks.

Entity Linking, Language Modelling, +6

BIOS: An Algorithmically Generated Biomedical Knowledge Graph

no code implementations • 18 Mar 2022 • Sheng Yu, Zheng Yuan, Jun Xia, Shengxuan Luo, Huaiyuan Ying, Sihang Zeng, Jingyi Ren, Hongyi Yuan, Zhengyun Zhao, Yucong Lin, Keming Lu, Jing Wang, Yutao Xie, Heung-Yeung Shum

For decades, these knowledge graphs have been developed via expert curation; however, this method can no longer keep up with today's AI development, and a transition to algorithmically generated BioMedKGs is necessary.

BIG-bench Machine Learning, Knowledge Graphs, +3

Efficient Symptom Inquiring and Diagnosis via Adaptive Alignment of Reinforcement Learning and Classification

1 code implementation • 1 Dec 2021 • Hongyi Yuan, Sheng Yu

To address this issue, we devise an adaptive mechanism to align reinforcement learning and classification methods using distribution entropy as the medium.
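To illustrate the idea of using distribution entropy as the medium between the classifier and the policy, here is a minimal sketch; the entropy threshold and the `agent.select_symptom` interface are hypothetical, so this is an approximation of the mechanism rather than the paper's implementation.

```python
import torch

def inquire_or_diagnose(disease_probs, agent, state, entropy_threshold=1.0):
    """Minimal sketch of entropy-based switching: when the disease
    classifier's predictive entropy is low enough, commit to a diagnosis;
    otherwise let the RL agent choose the next symptom to inquire about."""
    probs = disease_probs.clamp_min(1e-12)
    entropy = -(probs * probs.log()).sum()
    if entropy < entropy_threshold:
        return "diagnose", int(torch.argmax(disease_probs))
    return "inquire", agent.select_symptom(state)     # hypothetical agent API
```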

Decision Making, Medical Diagnosis, +2
