Search Results for author: Can Zu

Found 5 papers, 2 papers with code

LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration

1 code implementation • 18 Feb 2024 • Jun Zhao, Can Zu, Hao Xu, Yi Lu, Wei He, Yiwen Ding, Tao Gui, Qi Zhang, Xuanjing Huang

Large language models (LLMs) have demonstrated impressive performance in understanding language and executing complex reasoning tasks.

Multi-hop Question Answering • Question Answering • +1
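The LongAgent entry above names a multi-agent approach to long-context question answering. Purely as an illustrative sketch (not the paper's actual implementation), the general pattern can be shown as member agents each answering over one chunk of a long document while a leader agent reconciles their partial answers; `call_llm` below is a hypothetical stand-in for whatever chat-completion client is available.

```python
# Illustrative sketch only: member agents each see one chunk of a document that is
# too long for a single context window; a leader agent merges their answers.
# `call_llm` is a hypothetical placeholder, not part of the LongAgent codebase.

from typing import Callable, List

def chunk(text: str, max_words: int = 4000) -> List[str]:
    """Naive whitespace chunking; a real system would chunk by model tokens."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def answer_with_members(document: str, question: str,
                        call_llm: Callable[[str], str]) -> str:
    """Each member answers over its own chunk; a leader reconciles the results."""
    partial_answers = []
    for i, piece in enumerate(chunk(document)):
        prompt = (f"You are member agent {i}. Using only the excerpt below, "
                  f"answer the question or reply 'not found'.\n\n"
                  f"Excerpt:\n{piece}\n\nQuestion: {question}")
        partial_answers.append(call_llm(prompt))

    leader_prompt = ("You are the leader agent. Reconcile the member answers below, "
                     "resolving any conflicts, and give one final answer.\n\n"
                     + "\n".join(f"- {a}" for a in partial_answers)
                     + f"\n\nQuestion: {question}")
    return call_llm(leader_prompt)
```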

Advancing Translation Preference Modeling with RLHF: A Step Towards Cost-Effective Solution

no code implementations • 18 Feb 2024 • Nuo Xu, Jun Zhao, Can Zu, Sixian Li, Lu Chen, Zhihao Zhang, Rui Zheng, Shihan Dou, Wenjuan Qin, Tao Gui, Qi Zhang, Xuanjing Huang

To address this issue, we propose a cost-effective preference learning strategy, optimizing reward models by distinguishing between human and machine translations.

Machine Translation • Translation
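The snippet above describes optimizing a reward model by distinguishing human from machine translations. As a minimal sketch of that general idea (a generic Bradley-Terry-style pairwise preference loss, not the authors' exact training code), the reward model can be pushed to score the human translation above the machine one:

```python
# Illustrative sketch: train a reward model so human translations score higher than
# machine translations, via a pairwise preference loss. Generic recipe, not the
# paper's implementation.

import torch
import torch.nn.functional as F

def preference_loss(reward_human: torch.Tensor,
                    reward_machine: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_human - r_machine), averaged over the batch."""
    return -F.logsigmoid(reward_human - reward_machine).mean()

# Toy usage with random scores standing in for a reward model's outputs.
r_h = torch.randn(8, requires_grad=True)   # scores for human translations
r_m = torch.randn(8, requires_grad=True)   # scores for machine translations
loss = preference_loss(r_h, r_m)
loss.backward()
```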

A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models

no code implementations • 18 Mar 2023 • Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang Shen, Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang

GPT series models, such as GPT-3, CodeX, InstructGPT, ChatGPT, and so on, have gained considerable attention due to their exceptional natural language processing capabilities.

Natural Language Understanding

How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks

no code implementations • 1 Mar 2023 • Xuanting Chen, Junjie Ye, Can Zu, Nuo Xu, Rui Zheng, Minlong Peng, Jie Zhou, Tao Gui, Qi Zhang, Xuanjing Huang

The GPT-3.5 models have demonstrated impressive performance in various Natural Language Processing (NLP) tasks, showcasing their strong understanding and reasoning capabilities.

Natural Language Inference • Natural Language Understanding • +1
