Search Results for author: Guangyuan Shi

Found 5 papers, 3 papers with code

EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs

1 code implementation · 13 Oct 2023 · Xiangyu Zhao, Bo Liu, Qijiong Liu, Guangyuan Shi, Xiao-Ming Wu

We present EasyGen, an efficient model designed to enhance multimodal understanding and generation by harnessing the capabilities of diffusion models and large language models (LLMs).

Multimodal Generation · Text Generation · +1

Recon: Reducing Conflicting Gradients from the Root for Multi-Task Learning

1 code implementation · 22 Feb 2023 · Guangyuan Shi, Qimai Li, Wenlong Zhang, Jiaxin Chen, Xiao-Ming Wu

Our experiments show that such a simple approach can greatly reduce the occurrence of conflicting gradients in the remaining shared layers and achieve better performance, with only a slight increase in model parameters in many cases.

Multi-Task Learning
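The abstract above is about reducing conflicting gradients between tasks. A minimal illustration of how such a conflict is commonly detected (not the paper's Recon method itself, just the standard criterion: two task gradients conflict when their cosine similarity is negative):

```python
import math

def gradient_conflict(g1, g2):
    """Return True if two task gradients point in opposing directions,
    i.e. their cosine similarity is negative (a 'conflicting gradient')."""
    dot = sum(a * b for a, b in zip(g1, g2))
    norm = math.sqrt(sum(a * a for a in g1)) * math.sqrt(sum(b * b for b in g2))
    return dot / (norm + 1e-12) < 0.0

# Gradients pulling a shared parameter in opposite directions conflict;
# roughly aligned gradients do not.
print(gradient_conflict([1.0, 0.5], [-1.0, 0.2]))  # True
print(gradient_conflict([1.0, 0.0], [1.0, 1.0]))   # False
```

The paper's idea, per the abstract, is to remove such conflicts "from the root" by making the offending shared layers task-specific, at the cost of slightly more parameters.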

A Closer Look at Blind Super-Resolution: Degradation Models, Baselines, and Performance Upper Bounds

No code implementations · 10 May 2022 · Wenlong Zhang, Guangyuan Shi, Yihao Liu, Chao Dong, Xiao-Ming Wu

The recently proposed practical degradation model covers a full spectrum of degradation types, but it considers only complex cases that apply all degradation types at once, ignoring many important corner cases that are common in the real world.

Blind Super-Resolution · Super-Resolution

Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima

1 code implementation · NeurIPS 2021 · Guangyuan Shi, Jiaxin Chen, Wenlong Zhang, Li-Ming Zhan, Xiao-Ming Wu

Our study shows that existing methods severely suffer from catastrophic forgetting, a well-known problem in incremental learning, which is aggravated due to data scarcity and imbalance in the few-shot setting.

Few-Shot Class-Incremental Learning · Few-Shot Learning · +1
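The title's premise is that minima lying in flat regions of the loss landscape forget less when new classes arrive. A toy sketch of the underlying intuition (an illustration only, not the paper's training procedure): at a flat minimum, small random perturbations of the parameters raise the loss far less than at a sharp one.

```python
import random

def sharpness(loss_fn, params, radius=0.05, n_samples=20, seed=0):
    """Estimate sharpness of a minimum: the average loss increase under
    small uniform random perturbations of the parameters. Lower = flatter."""
    rng = random.Random(seed)
    base = loss_fn(params)
    total = 0.0
    for _ in range(n_samples):
        perturbed = [p + rng.uniform(-radius, radius) for p in params]
        total += loss_fn(perturbed) - base
    return total / n_samples

# Both quadratics have a minimum at 0, but one rises much faster:
sharp_loss = lambda p: 100.0 * p[0] ** 2  # sharp minimum
flat_loss = lambda p: 0.1 * p[0] ** 2     # flat minimum
print(sharpness(sharp_loss, [0.0]) > sharpness(flat_loss, [0.0]))  # True
```

A parameter sitting in the flat basin can drift (e.g. while fitting new few-shot classes) with little damage to the old objective, which is the intuition the paper builds on.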
