Search Results for author: Enyu Zhou

Found 12 papers, 8 papers with code

RMB: Comprehensively Benchmarking Reward Models in LLM Alignment

1 code implementation • 13 Oct 2024 • Enyu Zhou, Guodong Zheng, Binghai Wang, Zhiheng Xi, Shihan Dou, Rong Bao, Wei Shen, Limao Xiong, Jessica Fan, Yurong Mou, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

However, the current evaluation of RMs may not directly correspond to their alignment performance due to the limited distribution of evaluation data and evaluation methods that are not closely related to alignment objectives.

Benchmarking

SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance

1 code implementation • 26 Jun 2024 • Caishuang Huang, Wanxu Zhao, Rui Zheng, Huijie Lv, WenYu Zhan, Shihan Dou, Sixian Li, Xiao Wang, Enyu Zhou, Junjie Ye, Yuming Yang, Tao Gui, Qi Zhang, Xuanjing Huang

As the development of large language models (LLMs) rapidly advances, securing these models effectively without compromising their utility has become a pivotal area of research.

Safety Alignment

Aligning Large Language Models from Self-Reference AI Feedback with one General Principle

1 code implementation • 17 Jun 2024 • Rong Bao, Rui Zheng, Shihan Dou, Xiao Wang, Enyu Zhou, Bo wang, Qi Zhang, Liang Ding, DaCheng Tao

In aligning large language models (LLMs), utilizing feedback from existing advanced AI rather than humans is an important method to scale supervisory signals.

Position

LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin

1 code implementation • 15 Dec 2023 • Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, ShiLiang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling them to align with human instructions and enhance their capabilities in downstream tasks.

Language Modelling • Multi-Task Learning • +1

RealBehavior: A Framework for Faithfully Characterizing Foundation Models' Human-like Behavior Mechanisms

no code implementations • 17 Oct 2023 • Enyu Zhou, Rui Zheng, Zhiheng Xi, Songyang Gao, Xiaoran Fan, Zichu Fei, Jingting Ye, Tao Gui, Qi Zhang, Xuanjing Huang

Reports of human-like behaviors in foundation models are growing, with psychological theories providing enduring tools to investigate these behaviors.

Global Matching with Overlapping Attention for Optical Flow Estimation

1 code implementation • CVPR 2022 • Shiyu Zhao, Long Zhao, Zhixing Zhang, Enyu Zhou, Dimitris Metaxas

In this paper, inspired by the traditional matching-optimization methods where matching is introduced to handle large displacements before energy-based optimizations, we introduce a simple but effective global matching step before the direct regression and develop a learning-based matching-optimization framework, namely GMFlowNet.

Optical Flow Estimation • regression

Semi-synthesis: A fast way to produce effective datasets for stereo matching

no code implementations • 26 Jan 2021 • Ju He, Enyu Zhou, Liusheng Sun, Fei Lei, Chenyang Liu, Wenxiu Sun

Though synthetic datasets have been proposed to fill the gap of large data demand, fine-tuning on real datasets is still needed due to the domain variance between synthetic and real data.

Stereo Matching
