Search Results for author: Lichang Chen

Found 21 papers, 10 papers with code

Spectrum AUC Difference (SAUCD): Human-aligned 3D Shape Evaluation

no code implementations · 3 Mar 2024 · Tianyu Luan, Zhong Li, Lele Chen, Xuan Gong, Lichang Chen, Yi Xu, Junsong Yuan

Then, we calculate the Area Under the Curve (AUC) difference between the two spectra, so that each frequency band that captures either the overall or detailed shape is equitably considered.
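The AUC comparison described in this snippet can be sketched as follows. This is a hedged illustration only: the function and variable names are mine, not the paper's API, and the actual SAUCD metric additionally weights the contribution of each frequency band.

```python
import numpy as np

# Illustrative sketch of comparing two spectra by area under the curve.
# Names are hypothetical; the real metric also weights frequency bands.

def trapezoid_area(freqs, spectrum):
    """Area under a sampled curve via the trapezoid rule."""
    freqs = np.asarray(freqs, dtype=float)
    spectrum = np.asarray(spectrum, dtype=float)
    return float(np.sum((spectrum[1:] + spectrum[:-1]) / 2.0 * np.diff(freqs)))

def auc_difference(freqs, spectrum_a, spectrum_b):
    """Absolute difference of the areas under two spectra on the same bands."""
    return abs(trapezoid_area(freqs, spectrum_a) - trapezoid_area(freqs, spectrum_b))
```

Identical spectra give a difference of zero, and larger per-band discrepancies grow the score.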

Can LLMs Speak For Diverse People? Tuning LLMs via Debate to Generate Controllable Controversial Statements

1 code implementation · 16 Feb 2024 · Ming Li, Jiuhai Chen, Lichang Chen, Tianyi Zhou

To examine DEBATunE, we curate the largest dataset of debate topics so far, which covers 710 controversial topics and corresponding arguments for each topic.

Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning

2 code implementations · 15 Feb 2024 · Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Jiuxiang Gu, Tianyi Zhou

Instruction tuning is critical to large language models (LLMs) for achieving better instruction following and task adaptation capabilities but its success heavily relies on the training data quality.

Data Augmentation · Instruction Following

ODIN: Disentangled Reward Mitigates Hacking in RLHF

no code implementations · 11 Feb 2024 · Lichang Chen, Chen Zhu, Davit Soselia, Jiuhai Chen, Tianyi Zhou, Tom Goldstein, Heng Huang, Mohammad Shoeybi, Bryan Catanzaro

In this work, we study the issue of reward hacking on the response length, a challenge emerging in Reinforcement Learning from Human Feedback (RLHF) on LLMs.

GPT-4 Vision on Medical Image Classification -- A Case Study on COVID-19 Dataset

no code implementations · 27 Oct 2023 · Ruibo Chen, Tianyi Xiong, Yihan Wu, Guodong Liu, Zhengmian Hu, Lichang Chen, Yanshuo Chen, Chenxi Liu, Heng Huang

This technical report delves into the application of GPT-4 Vision (GPT-4V) in the nuanced realm of COVID-19 image classification, leveraging the transformative potential of in-context learning to enhance diagnostic processes.

Image Classification · In-Context Learning +1

AlpaCare: Instruction-tuned Large Language Models for Medical Application

1 code implementation · 23 Oct 2023 · Xinlu Zhang, Chenxin Tian, Xianjun Yang, Lichang Chen, Zekun Li, Linda Ruth Petzold

Instruction-finetuning (IFT) has become crucial in aligning Large Language Models (LLMs) with diverse human needs and has shown great potential in medical applications.

Instruction Following

Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning

2 code implementations · 18 Oct 2023 · Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Heng Huang, Jiuxiang Gu, Tianyi Zhou

Recent advancements in Large Language Models (LLMs) have expanded the horizons of natural language understanding and generation.

Natural Language Understanding

Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection

1 code implementation · 31 Jul 2023 · Jun Yan, Vikas Yadav, Shiyang Li, Lichang Chen, Zheng Tang, Hai Wang, Vijay Srinivasan, Xiang Ren, Hongxia Jin

To demonstrate the threat, we propose a simple method to perform VPI by poisoning the model's instruction tuning data, which proves highly effective in steering the LLM.
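The poisoning step described in this snippet can be sketched roughly as below. This is a conceptual illustration, not the paper's implementation: the function name, record fields, and the stand-in `respond` model are all hypothetical.

```python
# Rough sketch of virtual prompt injection (VPI) via instruction-data poisoning.
# All names here are hypothetical. The stored instruction stays clean; only the
# response is generated as if a hidden "virtual prompt" had been appended, so
# the backdoor is invisible in the training text itself.

def poison_example(example, trigger, virtual_prompt, respond):
    """Return a poisoned copy if the instruction mentions the trigger topic."""
    if trigger.lower() not in example["instruction"].lower():
        return example  # leave unrelated examples untouched
    biased = respond(example["instruction"] + " " + virtual_prompt)
    return {"instruction": example["instruction"], "response": biased}
```

At inference time the attacker never sends the virtual prompt; any instruction touching the trigger topic elicits the biased behavior on its own.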

Backdoor Attack

AlpaGasus: Training A Better Alpaca with Fewer Data

3 code implementations · 17 Jul 2023 · Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin

Large language models (LLMs) strengthen instruction-following capability through instruction-finetuning (IFT) on supervised instruction/response data.

Instruction Following

InstructZero: Efficient Instruction Optimization for Black-Box Large Language Models

1 code implementation · 5 Jun 2023 · Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, Tianyi Zhou

Large language models (LLMs) are instruction followers, but it can be challenging to find the best instruction for different situations, especially for black-box LLMs on which backpropagation is forbidden.
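The gradient-free setting described here can be illustrated with a generic black-box search. Note this is not InstructZero's actual algorithm (which optimizes a soft prompt in a latent space with Bayesian optimization); it only shows the shape of the problem: score candidate instructions with a black-box evaluator and keep the best.

```python
# Generic illustration of black-box instruction search; all names are
# hypothetical. `evaluate` is any black-box scorer, e.g. zero-shot accuracy
# of the target LLM on a small validation set. No gradients are required.

def search_instruction(candidates, evaluate):
    """Pick the candidate instruction with the highest black-box score."""
    scores = {c: evaluate(c) for c in candidates}
    best = max(scores, key=scores.get)
    return best, scores[best]
```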

Bayesian Optimization

Prompting Language-Informed Distribution for Compositional Zero-Shot Learning

no code implementations · 23 May 2023 · Wentao Bao, Lichang Chen, Heng Huang, Yu Kong

Orthogonal to the existing literature of soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distribution that leads to a better zero-shot generalization.

Compositional Zero-Shot Learning · Informativeness +1

Backdoor Learning on Sequence to Sequence Models

no code implementations · 3 May 2023 · Lichang Chen, Minhao Cheng, Heng Huang

Backdoor learning has become an emerging research area toward building trustworthy machine learning systems.

Machine Translation · Sentence +3

PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer

no code implementations · 3 May 2023 · Lichang Chen, Heng Huang, Minhao Cheng

To address this critical problem, we first visualize the loss landscape of vanilla prompt tuning and find that it is precipitous: a slight change in the input data can cause a large fluctuation in the loss.

Natural Language Understanding

When do you need Chain-of-Thought Prompting for ChatGPT?

no code implementations · 6 Apr 2023 · Jiuhai Chen, Lichang Chen, Heng Huang, Tianyi Zhou

However, it is not clear whether CoT is still effective on more recent instruction finetuned (IFT) LLMs such as ChatGPT.

Arithmetic Reasoning · Memorization

How Many Demonstrations Do You Need for In-context Learning?

no code implementations · 14 Mar 2023 · Jiuhai Chen, Lichang Chen, Chen Zhu, Tianyi Zhou

Moreover, ICL (with and without CoT) using only one correct demo significantly outperforms the all-demo ICL adopted by most previous works, indicating a weakness of LLMs in finding correct demos for input queries, which is difficult to evaluate on biased datasets.

In-Context Learning

Task-Aware Sampling Layer for Point-Wise Analysis

no code implementations · 9 Jul 2021 · Yiqun Lin, Lichang Chen, Haibin Huang, Chongyang Ma, Xiaoguang Han, Shuguang Cui

Sampling, grouping, and aggregation are three important components in the multi-scale analysis of point clouds.
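The "sampling" component mentioned here is most commonly instantiated as farthest point sampling (FPS). The sketch below is generic FPS for context, not the paper's learned, task-aware sampling layer.

```python
import numpy as np

# Generic farthest point sampling (FPS), a standard instantiation of the
# "sampling" step in point-cloud pipelines; not the paper's task-aware layer.
# Starting from point 0, repeatedly pick the point farthest from everything
# selected so far, which yields a well-spread subset.

def farthest_point_sampling(points, k):
    """points: (N, D) array; returns indices of k well-spread points."""
    dists = np.linalg.norm(points - points[0], axis=1)
    chosen = [0]
    for _ in range(k - 1):
        idx = int(np.argmax(dists))
        chosen.append(idx)
        # Each point keeps its distance to the nearest chosen point.
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)
```

Grouping then gathers neighbors around each sampled point, and aggregation pools their features.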

Keypoint Detection · Point Cloud Completion +1

Graph Edit Distance Reward: Learning to Edit Scene Graph

no code implementations · ECCV 2020 · Lichang Chen, Guosheng Lin, Shijie Wang, Qingyao Wu

Scene Graph, as a vital tool to bridge the gap between the language and image domains, has been widely adopted in cross-modality tasks like VQA.

Graph Matching · Image Retrieval +2
