Search Results for author: Shihan Dou

Found 34 papers, 25 papers with code

Improving RL Exploration for LLM Reasoning through Retrospective Replay

no code implementations • 19 Apr 2025 • Shihan Dou, Muling Wu, Jingwen Xu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

We observe that for complex problems, during the early stages of training, the model exhibits strong exploratory capabilities and can identify promising solution ideas.

Code Generation • Mathematical Reasoning +1
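
A minimal, hypothetical sketch of the retrospective-replay idea suggested by the title: keep the highest-scoring partial solutions found early in training so later rollouts can restart exploration from them. The `RetrospectiveBuffer` class, its capacity, and its scoring are illustrative placeholders, not the paper's implementation.

```python
import heapq
import random

class RetrospectiveBuffer:
    """Toy buffer that keeps the highest-scoring partial solutions seen so far
    so that later training steps can restart exploration from them.
    (Illustrative only; not the paper's actual implementation.)"""

    def __init__(self, capacity=256):
        self.capacity = capacity
        self._heap = []      # min-heap of (score, insertion_id, prompt, partial_solution)
        self._counter = 0

    def add(self, prompt, partial_solution, score):
        item = (score, self._counter, prompt, partial_solution)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif score > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)  # evict the lowest-scoring entry

    def sample(self, k=8):
        """Sample stored candidates to seed fresh rollouts later in training."""
        pool = self._heap if len(self._heap) <= k else random.sample(self._heap, k)
        return [(prompt, sol) for _, _, prompt, sol in pool]

buffer = RetrospectiveBuffer()
buffer.add("prove the identity", "promising first steps...", score=0.7)
print(buffer.sample())
```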

Measuring Data Diversity for Instruction Tuning: A Systematic Analysis and A Reliable Metric

1 code implementation • 24 Feb 2025 • Yuming Yang, Yang Nan, Junjie Ye, Shihan Dou, Xiao Wang, Shuo Li, Huijie Lv, Tao Gui, Qi Zhang, Xuanjing Huang

To address this, we systematically analyze 11 existing diversity measurement methods by assessing their correlation with model performance through extensive fine-tuning experiments.

Diversity
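
A small sketch of the kind of analysis the abstract describes: compute a diversity measurement on candidate instruction subsets and check its rank correlation with fine-tuned model performance. The metric here (mean pairwise cosine distance over embeddings), the synthetic embeddings, and the scores are placeholders, not one of the 11 methods or the paper's results.

```python
import numpy as np
from scipy.stats import spearmanr

def mean_pairwise_cosine_distance(embeddings):
    """One simple diversity proxy: average pairwise cosine distance of
    instruction embeddings in a candidate training subset."""
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = x @ x.T
    off_diag = sims[~np.eye(len(x), dtype=bool)]
    return 1.0 - off_diag.mean()

# Hypothetical data: one embedding matrix per candidate subset (larger noise scale
# means more spread), plus the benchmark score of a model fine-tuned on that subset.
rng = np.random.default_rng(0)
subsets = [rng.normal(size=(100, 64)) * s + 3.0 for s in (0.5, 1.0, 1.5, 2.0, 2.5)]
finetuned_scores = [55.1, 57.3, 58.0, 60.2, 61.5]

diversity = [mean_pairwise_cosine_distance(e) for e in subsets]
rho, p = spearmanr(diversity, finetuned_scores)
print(f"Spearman correlation between diversity and performance: {rho:.3f} (p={p:.3f})")
```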

DocFusion: A Unified Framework for Document Parsing Tasks

1 code implementation • 17 Dec 2024 • Mingxu Chai, Ziyu Shen, Chong Zhang, Yue Zhang, Xiao Wang, Shihan Dou, Jihua Kang, Jiazheng Zhang, Qi Zhang

Document parsing is essential for analyzing complex document structures and extracting fine-grained information, supporting numerous downstream applications.

Multi-Programming Language Sandbox for LLMs

1 code implementation • 30 Oct 2024 • Shihan Dou, Jiazheng Zhang, Jianxiang Zang, Yunbo Tao, Weikang Zhou, Haoxiang Jia, Shichun Liu, Yuming Yang, Zhiheng Xi, Shenxi Wu, Shaoqing Zhang, Muling Wu, Changze Lv, Limao Xiong, WenYu Zhan, Lin Zhang, Rongxiang Weng, Jingang Wang, Xunliang Cai, Yueming Wu, Ming Wen, Rui Zheng, Tao Ji, Yixin Cao, Tao Gui, Xipeng Qiu, Qi Zhang, Xuanjing Huang

We introduce MPLSandbox, an out-of-the-box multi-programming language sandbox designed to provide unified and comprehensive feedback from compiler and analysis tools for Large Language Models (LLMs).
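
Not the MPLSandbox API, just a minimal sketch of the underlying idea: run a snippet in one of several languages inside a subprocess and return compiler/interpreter feedback in one unified structure. The `run_in_sandbox` name, the language table, and the absence of real isolation (containers, resource limits) are all simplifications.

```python
import subprocess
import tempfile
import os

# Minimal mapping from language name to (file suffix, command template).
LANGUAGES = {
    "python": (".py", ["python3", "{path}"]),
    "javascript": (".js", ["node", "{path}"]),
}

def run_in_sandbox(code: str, language: str, timeout: float = 5.0) -> dict:
    suffix, cmd_template = LANGUAGES[language]
    with tempfile.NamedTemporaryFile("w", suffix=suffix, delete=False) as f:
        f.write(code)
        path = f.name
    try:
        cmd = [part.format(path=path) for part in cmd_template]
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return {
            "language": language,
            "exit_code": proc.returncode,
            "stdout": proc.stdout,
            "stderr": proc.stderr,   # interpreter/compiler diagnostics end up here
        }
    except subprocess.TimeoutExpired:
        return {"language": language, "exit_code": None, "stdout": "", "stderr": "timeout"}
    finally:
        os.unlink(path)

print(run_in_sandbox("print(sum(range(10)))", "python"))
```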

RMB: Comprehensively Benchmarking Reward Models in LLM Alignment

1 code implementation • 13 Oct 2024 • Enyu Zhou, Guodong Zheng, Binghai Wang, Zhiheng Xi, Shihan Dou, Rong Bao, Wei Shen, Limao Xiong, Jessica Fan, Yurong Mou, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

However, the current evaluation of RMs may not directly correspond to their alignment performance due to the limited distribution of evaluation data and evaluation methods that are not closely related to alignment objectives.

Benchmarking

SafeAligner: Safety Alignment against Jailbreak Attacks via Response Disparity Guidance

1 code implementation • 26 Jun 2024 • Caishuang Huang, Wanxu Zhao, Rui Zheng, Huijie Lv, WenYu Zhan, Shihan Dou, Sixian Li, Xiao Wang, Enyu Zhou, Junjie Ye, Yuming Yang, Tao Gui, Qi Zhang, Xuanjing Huang

As the development of large language models (LLMs) rapidly advances, securing these models effectively without compromising their utility has become a pivotal area of research.

Safety Alignment

Aligning Large Language Models from Self-Reference AI Feedback with one General Principle

1 code implementation • 17 Jun 2024 • Rong Bao, Rui Zheng, Shihan Dou, Xiao Wang, Enyu Zhou, Bo wang, Qi Zhang, Liang Ding, DaCheng Tao

In aligning large language models (LLMs), utilizing feedback from existing advanced AI rather than humans is an important method to scale supervisory signals.

Position

CodeChameleon: Personalized Encryption Framework for Jailbreaking Large Language Models

1 code implementation • 26 Feb 2024 • Huijie Lv, Xiao Wang, Yuansen Zhang, Caishuang Huang, Shihan Dou, Junjie Ye, Tao Gui, Qi Zhang, Xuanjing Huang

Adversarial misuse, particularly through 'jailbreaking' that circumvents a model's safety and ethical protocols, poses a significant challenge for Large Language Models (LLMs).

Code Completion • Response Generation

Advancing Translation Preference Modeling with RLHF: A Step Towards Cost-Effective Solution

no code implementations • 18 Feb 2024 • Nuo Xu, Jun Zhao, Can Zu, Sixian Li, Lu Chen, Zhihao Zhang, Rui Zheng, Shihan Dou, Wenjuan Qin, Tao Gui, Qi Zhang, Xuanjing Huang

To address this issue, we propose a cost-effective preference learning strategy, optimizing reward models by distinguishing between human and machine translations.

Machine Translation • Translation
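
A hedged sketch of the preference-learning strategy the abstract describes: train a reward model with a pairwise ranking loss so that human translations score above machine translations. The tiny model, the pre-computed features, and the training loop are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Placeholder scorer: takes a feature vector for a (source, translation)
    pair and returns a scalar reward."""
    def __init__(self, dim=128):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, 1))

    def forward(self, features):
        return self.scorer(features).squeeze(-1)

def pairwise_preference_loss(reward_human, reward_machine):
    # Bradley-Terry style objective: push r(human) above r(machine).
    return -torch.nn.functional.logsigmoid(reward_human - reward_machine).mean()

model = TinyRewardModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Hypothetical pre-computed features for human vs. machine translations of the
# same source sentences.
human_feats = torch.randn(32, 128)
machine_feats = torch.randn(32, 128)

loss = pairwise_preference_loss(model(human_feats), model(machine_feats))
loss.backward()
opt.step()
print(float(loss))
```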

Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning

1 code implementation • 8 Feb 2024 • Zhiheng Xi, Wenxiang Chen, Boyang Hong, Senjie Jin, Rui Zheng, wei he, Yiwen Ding, Shichun Liu, Xin Guo, Junzhe Wang, Honglin Guo, Wei Shen, Xiaoran Fan, Yuhao Zhou, Shihan Dou, Xiao Wang, Xinbo Zhang, Peng Sun, Tao Gui, Qi Zhang, Xuanjing Huang

In this paper, we propose R$^3$: Learning Reasoning through Reverse Curriculum Reinforcement Learning (RL), a novel method that employs only outcome supervision to achieve the benefits of process supervision for large language models.

GSM8K • reinforcement-learning +1
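
A schematic sketch of the reverse-curriculum idea described above: start rollouts from states near the end of a demonstration and slide the start point earlier over training, while giving only an outcome reward. The rollout function, the toy policy, and the data are stubs, not the paper's code.

```python
import random

def outcome_reward(final_answer, gold_answer):
    # Only outcome supervision: 1 if the final answer is correct, else 0.
    return 1.0 if final_answer == gold_answer else 0.0

def rollout_from_prefix(question, demo_steps, start_index, policy):
    """Condition the policy on the question plus the first `start_index`
    demonstration steps, then let it finish the reasoning on its own."""
    prefix = demo_steps[:start_index]
    return policy(question, prefix)  # returns a final answer (stub)

def reverse_curriculum_stage(dataset, policy, start_index):
    """One curriculum stage: the smaller `start_index` is, the less of the
    demonstration the model gets and the harder the exploration problem."""
    rewards = []
    for question, demo_steps, gold in dataset:
        idx = min(start_index, len(demo_steps))
        answer = rollout_from_prefix(question, demo_steps, idx, policy)
        rewards.append(outcome_reward(answer, gold))
    return sum(rewards) / len(rewards)

# Toy policy stub: guesses the gold answer more often when given a longer prefix.
def toy_policy(question, prefix):
    return "42" if random.random() < 0.2 + 0.2 * len(prefix) else "?"

data = [("q", ["s1", "s2", "s3"], "42")] * 50
for start in (3, 2, 1, 0):  # walk the start point backward: easy -> hard
    print(start, reverse_curriculum_stage(data, toy_policy, start))
```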

Linear Alignment: A Closed-form Solution for Aligning Human Preferences without Tuning and Feedback

1 code implementation • 21 Jan 2024 • Songyang Gao, Qiming Ge, Wei Shen, Shihan Dou, Junjie Ye, Xiao Wang, Rui Zheng, Yicheng Zou, Zhi Chen, Hang Yan, Qi Zhang, Dahua Lin

This reliance limits the applicability of RLHF and hinders the development of professional assistants tailored to diverse human preferences.

Form

Revisiting Jailbreaking for Large Language Models: A Representation Engineering Perspective

no code implementations • 12 Jan 2024 • Tianlong Li, Zhenghua Wang, Wenhao Liu, Muling Wu, Shihan Dou, Changze Lv, Xiaohua Wang, Xiaoqing Zheng, Xuanjing Huang

The recent surge in jailbreaking attacks has revealed significant vulnerabilities in Large Language Models (LLMs) when exposed to malicious inputs.

Prompt Engineering

ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios

1 code implementation • 1 Jan 2024 • Junjie Ye, Guanyu Li, Songyang Gao, Caishuang Huang, Yilong Wu, Sixian Li, Xiaoran Fan, Shihan Dou, Tao Ji, Qi Zhang, Tao Gui, Xuanjing Huang

Existing evaluations of tool learning primarily focus on validating the alignment of selected tools for large language models (LLMs) with expected outcomes.

LoRAMoE: Alleviate World Knowledge Forgetting in Large Language Models via MoE-Style Plugin

1 code implementation • 15 Dec 2023 • Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, ShiLiang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

Supervised fine-tuning (SFT) is a crucial step for large language models (LLMs), enabling them to align with human instructions and enhance their capabilities in downstream tasks.

Language Modelling • Mixture-of-Experts +2
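
A minimal sketch of an MoE-style plugin built from LoRA experts mixed by a softmax router, as the title suggests. The layer sizes, the `LoRAMoELayer` name, and the routing are illustrative; details of the paper's method (e.g. its balancing constraints) are omitted.

```python
import torch
import torch.nn as nn

class LoRAExpert(nn.Module):
    """Single low-rank adapter applied as a residual update."""
    def __init__(self, dim, rank=8):
        super().__init__()
        self.A = nn.Linear(dim, rank, bias=False)
        self.B = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.B.weight)  # start as an identity-preserving update

    def forward(self, x):
        return self.B(self.A(x))

class LoRAMoELayer(nn.Module):
    """Frozen base linear layer plus several LoRA experts mixed by a router."""
    def __init__(self, dim, num_experts=4, rank=8):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        self.base.weight.requires_grad_(False)  # keep pretrained weights frozen
        self.base.bias.requires_grad_(False)
        self.experts = nn.ModuleList([LoRAExpert(dim, rank) for _ in range(num_experts)])
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x):
        gates = torch.softmax(self.router(x), dim=-1)                   # (batch, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, dim, E)
        mixed = (expert_out * gates.unsqueeze(1)).sum(-1)
        return self.base(x) + mixed

layer = LoRAMoELayer(dim=64)
print(layer(torch.randn(2, 64)).shape)  # torch.Size([2, 64])
```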

Tailoring Personality Traits in Large Language Models via Unsupervisedly-Built Personalized Lexicons

no code implementations • 25 Oct 2023 • Tianlong Li, Shihan Dou, Changze Lv, Wenhao Liu, Jianhan Xu, Muling Wu, Zixuan Ling, Xiaoqing Zheng, Xuanjing Huang

Users can apply UBPL to adjust the probability vectors of predicted words during the decoding phase of LLMs, thereby influencing the personality they express.
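
Since the snippet says UBPL adjusts the probability vectors of predicted words at decoding time, here is a minimal sketch of that kind of lexicon-driven logit bias. The vocabulary, the lexicon, and the weights are made up for illustration and are not the paper's unsupervisedly built lexicons.

```python
import torch

# Hypothetical vocabulary and a personality lexicon mapping words to biases:
# positive values push the decoder toward words associated with a trait.
vocab = ["the", "terrible", "wonderful", "fine", "awful", "great"]
extraversion_lexicon = {"wonderful": 2.0, "great": 1.5, "awful": -1.5, "terrible": -2.0}

def apply_lexicon_bias(logits, vocab, lexicon, strength=1.0):
    """Add a per-token bias to the next-token logits before sampling."""
    bias = torch.zeros_like(logits)
    for i, token in enumerate(vocab):
        if token in lexicon:
            bias[i] = strength * lexicon[token]
    return logits + bias

# Pretend these are the model's next-token logits at some decoding step.
logits = torch.tensor([1.0, 0.8, 0.7, 0.9, 0.8, 0.6])
adjusted = apply_lexicon_bias(logits, vocab, extraversion_lexicon)
probs = torch.softmax(adjusted, dim=-1)
print({w: round(float(p), 3) for w, p in zip(vocab, probs)})
```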

Loose lips sink ships: Mitigating Length Bias in Reinforcement Learning from Human Feedback

no code implementations • 8 Oct 2023 • Wei Shen, Rui Zheng, WenYu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, Xuanjing Huang

Reinforcement learning from human feedback serves as a crucial bridge, aligning large language models with human and societal values.

Language Modeling • Language Modelling

On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection

1 code implementation • 27 Jun 2023 • Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang, Jin Ma, Ying Shan

Detecting adversarial samples that are carefully crafted to fool the model is a critical step toward socially secure applications.

text-classification • Text Classification

DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization

1 code implementation • 27 Jun 2023 • Songyang Gao, Shihan Dou, Yan Liu, Xiao Wang, Qi Zhang, Zhongyu Wei, Jin Ma, Ying Shan

Adversarial training is one of the best-performing methods in improving the robustness of deep language models.

CausalAPM: Generalizable Literal Disentanglement for NLU Debiasing

no code implementations • 4 May 2023 • Songyang Gao, Shihan Dou, Junjie Shan, Qi Zhang, Xuanjing Huang

Dataset bias, i.e., the over-reliance on dataset-specific literal heuristics, is getting increasing attention for its detrimental effect on the generalization ability of NLU models.

Causal Inference • Disentanglement +2

Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence Embedding

2 code implementations • 14 Oct 2022 • Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang

Dataset bias has attracted increasing attention recently for its detrimental effect on the generalization ability of fine-tuned models.

Sentence • Sentence Embedding +2
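
A small sketch of ordinary (linear) whitening of sentence embeddings, i.e. mapping them to zero mean and approximately identity covariance so the representation is isotropic. This illustrates only the whitening idea in the title; the paper's kernelized variant is not reproduced here, and the data are synthetic.

```python
import numpy as np

def whiten(embeddings, eps=1e-6):
    """ZCA-style whitening: zero mean and (approximately) identity covariance."""
    mu = embeddings.mean(axis=0, keepdims=True)
    centered = embeddings - mu
    cov = centered.T @ centered / (len(embeddings) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T  # whitening matrix
    return centered @ w

# Toy check: correlated embeddings go in, a near-identity covariance comes out.
rng = np.random.default_rng(0)
raw = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))
iso = whiten(raw)
print(np.round(np.cov(iso, rowvar=False), 2))
```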

VulCNN: An Image-inspired Scalable Vulnerability Detection System

1 code implementation • International Conference on Software Engineering 2022 • Yueming Wu, Deqing Zou, Shihan Dou, Wei Yang, Duo Xu, Hai Jin

Furthermore, we conduct a case study on more than 25 million lines of code, and the results indicate that VulCNN can detect vulnerabilities at scale.

Image Classification • Vulnerability Detection
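
A toy sketch of the code-as-image idea the title hints at: turn a function into a 2D matrix (one row per source line) and classify it with a small CNN. Everything here is a simplified stand-in; the actual system builds far richer image representations from program analysis, and the hashing featurizer and tiny classifier below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

def code_to_image(source: str, max_lines=32, dim=64):
    """Toy 'code as image': hash each token into a fixed-size row vector and
    stack one row per line of the function."""
    image = torch.zeros(max_lines, dim)
    for i, line in enumerate(source.splitlines()[:max_lines]):
        for tok in line.split():
            image[i, hash(tok) % dim] += 1.0
    return image.unsqueeze(0)  # (channels=1, H, W)

classifier = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),  # vulnerable vs. not vulnerable
)

func = "int f(char *s) {\n  char buf[8];\n  strcpy(buf, s);\n  return 0;\n}"
logits = classifier(code_to_image(func).unsqueeze(0))  # add batch dimension
print(logits.shape)  # torch.Size([1, 2])
```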

Decorrelate Irrelevant, Purify Relevant: Overcome Textual Spurious Correlations from a Feature Perspective

2 code implementations • COLING 2022 • Shihan Dou, Rui Zheng, Ting Wu, Songyang Gao, Junjie Shan, Qi Zhang, Yueming Wu, Xuanjing Huang

Most existing debiasing methods identify and weaken samples with biased features (i.e., superficial surface features that cause such spurious correlations).

Fact Verification • Natural Language Inference +1
