Search Results for author: Shizhe Diao

Found 29 papers, 28 papers with code

LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning

1 code implementation • 26 Mar 2024 • Rui Pan, Xiang Liu, Shizhe Diao, Renjie Pi, Jipeng Zhang, Chi Han, Tong Zhang

To address this deficiency, we investigate the layerwise properties of LoRA on fine-tuning tasks and observe an uncommon skewness of weight norms across different layers.

GSM8K, Language Modelling +1
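
The skewed layerwise weight norms suggest updating only a few layers per optimizer step. A minimal sketch of norm-weighted layer sampling (the norm-based weighting here is an illustrative proxy, not necessarily the paper's exact scheme):

```python
import random

def sample_active_layers(layer_norms, k, seed=0):
    """Pick k distinct layers to unfreeze this step, weighting layers
    with larger weight norms more heavily. All other layers stay frozen,
    which is what saves memory relative to full fine-tuning."""
    rng = random.Random(seed)
    chosen = set()
    while len(chosen) < k:
        chosen.add(rng.choices(range(len(layer_norms)), weights=layer_norms)[0])
    return sorted(chosen)
```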

Arithmetic Control of LLMs for Diverse User Preferences: Directional Preference Alignment with Multi-Objective Rewards

1 code implementation • 28 Feb 2024 • Haoxiang Wang, Yong Lin, Wei Xiong, Rui Yang, Shizhe Diao, Shuang Qiu, Han Zhao, Tong Zhang

Additionally, DPA models user preferences as directions (i.e., unit vectors) in the reward space to achieve user-dependent preference control.
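
The direction-based scalarization can be sketched as a dot product between the normalized preference direction and the multi-objective reward vector (an illustration of the framing only, not the paper's full training recipe):

```python
import math

def directional_reward(direction, rewards):
    """Scalarize a multi-objective reward along a user-specified
    preference direction, normalized to a unit vector as in the DPA
    framing; scaling the direction does not change the result."""
    norm = math.sqrt(sum(d * d for d in direction))
    return sum((d / norm) * r for d, r in zip(direction, rewards))
```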

Can We Verify Step by Step for Incorrect Answer Detection?

1 code implementation • 16 Feb 2024 • Xin Xu, Shizhe Diao, Can Yang, Yang Wang

Chain-of-Thought (CoT) prompting has marked a significant advancement in enhancing the reasoning capabilities of large language models (LLMs).

The Instinctive Bias: Spurious Images lead to Hallucination in MLLMs

1 code implementation • 6 Feb 2024 • Tianyang Han, Qing Lian, Rui Pan, Renjie Pi, Jipeng Zhang, Shizhe Diao, Yong Lin, Tong Zhang

In this paper, we identify a typical class of inputs that baffles MLLMs, which consist of images that are highly relevant but inconsistent with answers, causing MLLMs to suffer from hallucination.

ConstraintChecker: A Plugin for Large Language Models to Reason on Commonsense Knowledge Bases

1 code implementation • 25 Jan 2024 • Quyet V. Do, Tianqing Fang, Shizhe Diao, Zhaowei Wang, Yangqiu Song

When considering a new knowledge instance, ConstraintChecker employs a rule-based module to produce a list of constraints, then it uses a zero-shot learning module to check whether this knowledge instance satisfies all constraints.

Prompt Engineering, Zero-Shot Learning
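
The two-module pipeline described above can be sketched as follows, with `rule_module` and `zero_shot_check` as hypothetical callables standing in for the rule-based and zero-shot components:

```python
def constraint_check(instance, rule_module, zero_shot_check):
    """ConstraintChecker-style loop: the rule-based module derives a
    list of constraints for the knowledge instance, then the zero-shot
    checker verifies each one; the instance passes only if all hold."""
    constraints = rule_module(instance)
    return all(zero_shot_check(instance, c) for c in constraints)
```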

R-Tuning: Teaching Large Language Models to Refuse Unknown Questions

1 code implementation • 16 Nov 2023 • Hanning Zhang, Shizhe Diao, Yong Lin, Yi R. Fung, Qing Lian, Xingyao Wang, Yangyi Chen, Heng Ji, Tong Zhang

This approach is formalized by first identifying the knowledge gap between parametric knowledge and the instruction tuning data.

Hallucination, Sentence

Plum: Prompt Learning using Metaheuristic

1 code implementation • 14 Nov 2023 • Rui Pan, Shuo Xing, Shizhe Diao, Wenhe Sun, Xiang Liu, Kashun Shum, Renjie Pi, Jipeng Zhang, Tong Zhang

Since the emergence of large language models, prompt learning has become a popular method for optimizing and customizing these models.

Image Generation

MarineGPT: Unlocking Secrets of Ocean to the Public

1 code implementation • 20 Oct 2023 • Ziqiang Zheng, Jipeng Zhang, Tuan-Anh Vu, Shizhe Diao, Yue Him Wong Tim, Sai-Kit Yeung

Large language models (LLMs), such as ChatGPT/GPT-4, have proven to be powerful tools for improving the user experience as AI assistants.

Language Modelling

Mitigating the Alignment Tax of RLHF

no code implementations • 12 Sep 2023 • Yong Lin, Hangyu Lin, Wei Xiong, Shizhe Diao, Jianmeng Liu, Jipeng Zhang, Rui Pan, Haoxiang Wang, Wenbin Hu, Hanning Zhang, Hanze Dong, Renjie Pi, Han Zhao, Nan Jiang, Heng Ji, Yuan YAO, Tong Zhang

Building on the analysis and the observation that averaging different layers of the transformer leads to significantly different reward-tax trade-offs, we propose Adaptive Model Averaging (AMA) to adaptively find various combination ratios of model layers.

Common Sense Reasoning, Continual Learning
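
The layerwise averaging idea can be sketched as per-layer interpolation between the SFT and RLHF models (flat lists stand in for real weight tensors, and the adaptive search for the ratios themselves is omitted):

```python
def adaptive_model_average(sft_layers, rlhf_layers, ratios):
    """Merge two models layer by layer: layer i of the result is
    r_i * rlhf + (1 - r_i) * sft, with a separate combination ratio
    per layer so different layers can trade reward against tax."""
    return [
        [r * b + (1 - r) * a for a, b in zip(la, lb)]
        for la, lb, r in zip(sft_layers, rlhf_layers, ratios)
    ]
```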

LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models

1 code implementation • 21 Jun 2023 • Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang

As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial.

Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models Memories

1 code implementation • 8 Jun 2023 • Shizhe Diao, Tianyang Xu, Ruijia Xu, Jiawei Wang, Tong Zhang

Pre-trained language models (PLMs) demonstrate excellent abilities to understand texts in the generic domain while struggling in a specific domain.

Domain Adaptation

On the Difference of BERT-style and CLIP-style Text Encoders

1 code implementation • 6 Jun 2023 • Zhihong Chen, Guiming Hardy Chen, Shizhe Diao, Xiang Wan, Benyou Wang

Masked language modeling (MLM) has been one of the most popular pretraining recipes in natural language processing, e.g., BERT, one of the representative models.

Language Modelling, Masked Language Modeling +1

DetGPT: Detect What You Need via Reasoning

1 code implementation • 23 May 2023 • Renjie Pi, Jiahui Gao, Shizhe Diao, Rui Pan, Hanze Dong, Jipeng Zhang, Lewei Yao, Jianhua Han, Hang Xu, Lingpeng Kong, Tong Zhang

Overall, our proposed paradigm and DetGPT demonstrate the potential for more sophisticated and intuitive interactions between humans and machines.

Autonomous Driving, Object +2

RAFT: Reward rAnked FineTuning for Generative Foundation Model Alignment

1 code implementation • 13 Apr 2023 • Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, Tong Zhang

Utilizing a reward model and a sufficient number of samples, our approach selects the high-quality samples, discarding those that exhibit undesired behavior, and subsequently enhancing the model by fine-tuning on these filtered samples.
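The sample-then-filter step can be sketched as follows (the `reward_model` callable and `keep_ratio` parameter are illustrative; in the paper this selection alternates with rounds of fine-tuning):

```python
def raft_filter(samples, reward_model, keep_ratio=0.25):
    """RAFT-style selection: score generated samples with a reward
    model and keep only the top-ranked fraction for fine-tuning,
    discarding samples that exhibit undesired behavior."""
    ranked = sorted(samples, key=reward_model, reverse=True)
    keep = max(1, int(len(ranked) * keep_ratio))
    return ranked[:keep]
```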

Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data

2 code implementations • 24 Feb 2023 • Kashun Shum, Shizhe Diao, Tong Zhang

However, most CoT studies rely on carefully designed human-annotated rationale chains to prompt LLMs, posing challenges for real-world applications where labeled data is available without rationale chains.

Arithmetic Reasoning, Language Modelling

Active Prompting with Chain-of-Thought for Large Language Models

2 code implementations • 23 Feb 2023 • Shizhe Diao, Pengcheng Wang, Yong Lin, Tong Zhang

For this purpose, we propose a solution to the key problem of determining which questions are the most important and helpful ones to annotate from a pool of task-specific queries.

Active Learning, Zero-Shot Learning
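
A sketch of the selection step, using answer disagreement among sampled completions as the uncertainty metric (disagreement is one of several metrics considered in the paper; the data layout here is illustrative):

```python
def disagreement(answers):
    """Uncertainty as disagreement: the fraction of distinct final
    answers among k sampled chain-of-thought completions. Higher means
    the model is less consistent on this question."""
    return len(set(answers)) / len(answers)

def select_for_annotation(pool, budget):
    """Pick the most uncertain questions to hand-annotate, given a
    mapping from each question to its list of sampled answers."""
    ranked = sorted(pool, key=lambda q: disagreement(pool[q]), reverse=True)
    return ranked[:budget]
```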

Hashtag-Guided Low-Resource Tweet Classification

1 code implementation • 20 Feb 2023 • Shizhe Diao, Sedrick Scott Keh, Liangming Pan, Zhiliang Tian, Yan Song, Tong Zhang

Social media classification tasks (e.g., tweet sentiment analysis, tweet stance detection) are challenging because social media posts are typically short, informal, and ambiguous.

Classification, Sentiment Analysis +1

Towards Unifying Medical Vision-and-Language Pre-training via Soft Prompts

1 code implementation • ICCV 2023 • Zhihong Chen, Shizhe Diao, Benyou Wang, Guanbin Li, Xiang Wan

Medical vision-and-language pre-training (Med-VLP) has shown promising improvements on many downstream medical tasks owing to its applicability to extracting generic representations from medical images and texts.

Image Retrieval, Image-text Classification +7

Normalizing Flow with Variational Latent Representation

1 code implementation • 21 Nov 2022 • Hanze Dong, Shizhe Diao, Weizhong Zhang, Tong Zhang

The resulting method is significantly more powerful than the standard normalizing flow approach for generating data distributions with multiple modes.

Write and Paint: Generative Vision-Language Models are Unified Modal Learners

1 code implementation • 15 Jun 2022 • Shizhe Diao, Wangchunshu Zhou, Xinsong Zhang, Jiawei Wang

In this work, we disclose the potential of symmetric generative vision-language pre-training in learning to write and paint concurrently, and propose a new unified modal model, named DaVinci, trained with prefix language modeling and prefix image modeling, a simple generative self-supervised objective on image-text pairs.

Language Modelling, Text Generation +1

VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models

1 code implementation • 30 May 2022 • Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang

We release the VLUE benchmark to promote research on building vision-language models that generalize well to more diverse images and concepts unseen during pre-training, and are practical in terms of efficiency-performance trade-off.

Vietnamese Language Models, Vietnamese Natural Language Understanding +1

Black-box Prompt Learning for Pre-trained Language Models

1 code implementation • 21 Jan 2022 • Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang

Particularly, instead of fine-tuning the model in the cloud, we adapt PLMs by prompt learning, which efficiently optimizes only a few parameters of the discrete prompts.

text-classification, Text Classification

Efficient Neural Network Training via Forward and Backward Propagation Sparsification

1 code implementation • NeurIPS 2021 • Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, Tong Zhang

For the latter step, instead of using the chain rule based gradient estimators as in existing methods, we propose a variance reduced policy gradient estimator, which only requires two forward passes without backward propagation, thus achieving completely sparse training.

Efficient Neural Network

Keyphrase Generation with Cross-Document Attention

1 code implementation • 21 Apr 2020 • Shizhe Diao, Yan Song, Tong Zhang

Keyphrase generation aims to produce a set of phrases summarizing the essentials of a given document.

Keyphrase Generation
