Search Results for author: Pengyang Ling

Found 9 papers, 7 papers with code

ReasonPix2Pix: Instruction Reasoning Dataset for Advanced Image Editing

no code implementations18 May 2024 Ying Jin, Pengyang Ling, Xiaoyi Dong, Pan Zhang, Jiaqi Wang, Dahua Lin

Instruction-based image editing focuses on equipping a generative model with the capacity to adhere to human-written instructions for editing images.

Seed Optimization with Frozen Generator for Superior Zero-shot Low-light Enhancement

no code implementations15 Feb 2024 Yuxuan Gu, Yi Jin, Ben Wang, Zhixiang Wei, Xiaoxiao Ma, Pengyang Ling, Haoxuan Wang, Huaian Chen, Enhong Chen

In this work, we observe that generators pre-trained on massive natural images inherently hold promising potential for superior low-light image enhancement across varying scenarios. Specifically, we embed a pre-trained generator into a Retinex model to produce reflectance maps with enhanced detail and vividness, thereby recovering features degraded by low-light conditions. Taking one step further, we introduce a novel optimization strategy, which backpropagates the gradients to the input seeds rather than the parameters of the low-light enhancement model, thus intactly retaining the generative knowledge learned from natural images and achieving faster convergence.

Low-Light Image Enhancement
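The seed-optimization idea described above can be sketched in a few lines: the generator's weights stay frozen, and gradient descent updates only the input seed. Everything here (the linear "generator" `W`, the target, the learning rate) is an illustrative stand-in, not the paper's actual model.

```python
import numpy as np

# Illustrative stand-ins (not the paper's model): a frozen linear "generator"
# W maps a seed z to an output; only z is optimized, W is never updated.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))           # frozen generator weights
W0 = W.copy()                         # kept to verify W stays untouched
target = rng.normal(size=8)           # desired (enhanced) output

z = np.zeros(4)                       # the input seed: the only trainable variable
lr = 0.01
losses = []
for _ in range(200):
    residual = W @ z - target         # forward pass through the frozen generator
    losses.append(float(residual @ residual))
    z -= lr * (2.0 * W.T @ residual)  # gradient w.r.t. the seed, not the weights

assert np.array_equal(W, W0)          # generator weights were never modified
```

Because the generator's knowledge is left intact, the same frozen model can be reused across scenes; only the seed adapts per input.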

Masked Pre-trained Model Enables Universal Zero-shot Denoiser

1 code implementation26 Jan 2024 Xiaoxiao Ma, Zhixiang Wei, Yi Jin, Pengyang Ling, Tianle Liu, Ben Wang, Junkang Dai, Huaian Chen, Enhong Chen

In this work, we observe that the model, which is trained on vast general images using masking strategy, has been naturally embedded with the distribution knowledge regarding natural images, and thus spontaneously attains the underlying potential for strong image denoising.

Image Denoising
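The masked-prediction intuition behind the zero-shot denoiser can be shown with a toy example (this illustrates only the principle, not the paper's pre-trained model): if every pixel is re-predicted from its neighbours rather than itself, independent noise partially averages out while image structure survives.

```python
import numpy as np

# Toy illustration of masked prediction for denoising (not the paper's model):
# each pixel is treated as "masked" and re-predicted from its 4 neighbours only.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))    # smooth test image
noisy = clean + rng.normal(scale=0.1, size=clean.shape)

padded = np.pad(noisy, 1, mode="edge")
# Predict every pixel from its up/down/left/right neighbours, never from itself.
denoised = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0

mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

Averaging four independently-noisy neighbours cuts the noise variance roughly fourfold on smooth regions, which is why the masked predictor beats the noisy input here.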

Stronger Fewer & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation

1 code implementation CVPR 2024 Zhixiang Wei, Lin Chen, Yi Jin, Xiaoxiao Ma, Tianle Liu, Pengyang Ling, Ben Wang, Huaian Chen, Jinjin Zheng

Driven by the motivation that Leveraging Stronger pre-trained models and Fewer trainable parameters for Superior generalizability, we introduce a robust fine-tuning approach, namely "Rein", to parameter-efficiently harness VFMs for DGSS.

Domain Generalization Semantic Segmentation

Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation

1 code implementation7 Dec 2023 Zhixiang Wei, Lin Chen, Yi Jin, Xiaoxiao Ma, Tianle Liu, Pengyang Ling, Ben Wang, Huaian Chen, Jinjin Zheng

Driven by the motivation that Leveraging Stronger pre-trained models and Fewer trainable parameters for Superior generalizability, we introduce a robust fine-tuning approach, namely Rein, to parameter-efficiently harness VFMs for DGSS.

Domain Generalization +1
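The "fewer trainable parameters" idea can be sketched as follows. This is a hedged toy, not Rein's actual mechanism: here the frozen backbone is a random linear map and the trainable part is a single small head.

```python
import numpy as np

# Toy stand-ins (not Rein's design): a frozen "backbone" produces features,
# and only a small head on top of them is trained.
rng = np.random.default_rng(0)
backbone = rng.normal(size=(64, 16))   # frozen pre-trained weights (never updated)
head = np.zeros(64)                    # the only trainable parameters

x = rng.normal(size=16)                # one input sample
target = 1.0
feats = backbone @ x                   # frozen forward pass, computed once
lr = 0.5 / float(feats @ feats)        # safe step size for this quadratic loss
for _ in range(50):
    pred = head @ feats
    head -= lr * 2.0 * (pred - target) * feats   # gradient w.r.t. the head only

print(head.size, backbone.size)        # 64 trainable vs. 1024 frozen parameters
```

The parameter count is the point: the 64-weight head adapts the model while the 1024-weight backbone keeps its pre-trained generality untouched.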

Disentangle then Parse: Night-time Semantic Segmentation with Illumination Disentanglement

1 code implementation18 Jul 2023 Zhixiang Wei, Lin Chen, Tao Tu, Huaian Chen, Pengyang Ling, Yi Jin

2) Based on the observation that the illumination component can serve as a cue for some semantically confused regions, we further introduce an Illumination-Aware Parser (IAParser) to explicitly learn the correlation between semantics and lighting, and aggregate the illumination features to yield more precise predictions.

Disentanglement Segmentation +1

FreeDrag: Feature Dragging for Reliable Point-based Image Editing

1 code implementation CVPR 2024 Pengyang Ling, Lin Chen, Pan Zhang, Huaian Chen, Yi Jin, Jinjin Zheng

To serve the intricate and varied demands of image editing, precise and flexible manipulation in image content is indispensable.

Point Tracking

Disentangle then Parse: Night-time Semantic Segmentation with Illumination Disentanglement

1 code implementation ICCV 2023 Zhixiang Wei, Lin Chen, Tao Tu, Pengyang Ling, Huaian Chen, Yi Jin

2) Based on the observation that the illumination component can serve as a cue for some semantically confused regions, we further introduce an Illumination-Aware Parser (IAParser) to explicitly learn the correlation between semantics and lighting, and aggregate the illumination features to yield more precise predictions.

Disentanglement Segmentation +1
