Search Results for author: Qingfeng Sun

Found 9 papers, 6 papers with code

WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct

1 code implementation • 18 Aug 2023 • Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, JianGuang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang

Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model.

GSM8K Math +1

WizardCoder: Empowering Code Large Language Models with Evol-Instruct

1 code implementation • 14 Jun 2023 • Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, Daxin Jiang

Moreover, our model even outperforms the largest closed LLMs, Anthropic's Claude and Google's Bard, on HumanEval and HumanEval+.

Code Generation

Self-Supervised Multi-Modal Sequential Recommendation

1 code implementation • 26 Apr 2023 • Kunzhe Song, Qingfeng Sun, Can Xu, Kai Zheng, Yaming Yang

To address this issue, we propose a dual-tower retrieval architecture for sequence recommendation.

Contrastive Learning Retrieval +1
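The dual-tower retrieval architecture mentioned in the excerpt above can be sketched as two independent encoders whose outputs are compared by inner product: one tower embeds the user's interaction sequence, the other embeds candidate items. This is a minimal illustrative sketch, not the paper's model; the mean-pooling sequence encoder and identity item encoder are placeholder choices.

```python
# Hypothetical sketch of dual-tower retrieval scoring (not the paper's model).
# One tower encodes the user's interaction history, the other encodes items;
# candidates are ranked by inner product between the two embeddings.

def encode_sequence(item_vectors):
    """Toy sequence tower: mean-pool the item vectors in the history."""
    dim = len(item_vectors[0])
    return [sum(v[d] for v in item_vectors) / len(item_vectors) for d in range(dim)]

def encode_item(item_vector):
    """Toy item tower: identity encoder, for illustration only."""
    return item_vector

def rank(user_history, candidates):
    """Return candidate indices sorted by descending inner-product score."""
    q = encode_sequence(user_history)
    return sorted(
        range(len(candidates)),
        key=lambda i: -sum(a * b for a, b in zip(q, encode_item(candidates[i]))),
    )

history = [[1.0, 0.0], [0.8, 0.2]]    # user mostly interacted with "genre A" items
candidates = [[0.0, 1.0], [0.9, 0.1]] # candidate 1 resembles the history
print(rank(history, candidates))      # → [1, 0]: the similar item is ranked first
```

Because the two towers are independent, item embeddings can be precomputed and indexed, which is the usual motivation for this architecture in retrieval.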

WizardLM: Empowering Large Language Models to Follow Complex Instructions

5 code implementations • 24 Apr 2023 • Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Daxin Jiang

In this paper, we show an avenue for creating large amounts of instruction data with varying levels of complexity using LLM instead of humans.

Instruction Following
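The idea in the excerpt above, generating instruction data of increasing complexity with an LLM rather than human annotators, can be sketched as a loop that wraps a seed instruction in a complexity-increasing meta-prompt and asks a model to rewrite it. The evolution prompts and the `llm` callable below are illustrative placeholders, not WizardLM's actual templates.

```python
import random

# Illustrative Evol-Instruct-style loop (placeholder prompts, not WizardLM's
# exact templates). `llm` is any callable mapping a prompt string to a
# rewritten-instruction string.

EVOLUTION_PROMPTS = [
    "Add one more constraint or requirement to the instruction below.",
    "Rewrite the instruction below so it requires multi-step reasoning.",
    "Replace general concepts in the instruction below with more specific ones.",
]

def evolve(instruction, llm, rounds=3, seed=0):
    """Iteratively rewrite an instruction into progressively harder variants."""
    rng = random.Random(seed)
    variants = [instruction]
    for _ in range(rounds):
        meta_prompt = rng.choice(EVOLUTION_PROMPTS) + "\n\n" + variants[-1]
        variants.append(llm(meta_prompt))
    return variants

# Stub LLM that tags each rewrite, so the loop runs end to end without an API.
fake_llm = lambda prompt: prompt.splitlines()[-1] + " (evolved)"
print(evolve("List three prime numbers.", fake_llm, rounds=2))
```

With a real LLM behind `llm`, each round yields a harder variant of the seed, and the accumulated variants form the instruction-tuning dataset.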

Multimodal Dialogue Response Generation

no code implementations • ACL 2022 • Qingfeng Sun, Yujing Wang, Can Xu, Kai Zheng, Yaming Yang, Huang Hu, Fei Xu, Jessica Zhang, Xiubo Geng, Daxin Jiang

In such a low-resource setting, we devise a novel conversational agent, Divter, in order to isolate parameters that depend on multimodal dialogues from the entire generation model.

Dialogue Generation Response Generation +1
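The parameter-isolation idea in the excerpt above, keeping components that depend on scarce multimodal dialogues separate from the rest of the generator, can be sketched as marking only a small multimodal module as trainable while the large text-dialogue core stays frozen. The module names and sizes below are hypothetical, not Divter's actual components.

```python
# Hypothetical sketch of parameter isolation for low-resource multimodal
# fine-tuning (not Divter's code): freeze the large text-dialogue core and
# train only a small multimodal module on the few available dialogues.

def trainable_parameters(model):
    """Return the names of parameter groups that receive gradient updates."""
    return [name for name, cfg in model.items() if cfg["trainable"]]

model = {
    # pretrained on abundant text-only dialogues, kept frozen
    "text_dialogue_core": {"params": 1_000_000, "trainable": False},
    # tuned on the small multimodal-dialogue corpus
    "multimodal_head": {"params": 10_000, "trainable": True},
}
print(trainable_parameters(model))  # → ['multimodal_head']
```

The design choice is that the scarce multimodal data then only has to fit the small head, while text-dialogue competence is inherited from the frozen core.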

Hierarchical Attention Prototypical Networks for Few-Shot Text Classification

no code implementations • IJCNLP 2019 • Shengli Sun, Qingfeng Sun, Kevin Zhou, Tengchao Lv

Most current effective methods for text classification rely on large-scale labeled data and large numbers of parameters; when supervised training data are scarce and hard to collect, these models become unusable.

Few-Shot Text Classification General Classification +2
