Search Results for author: Gongye Liu

Found 4 papers, 3 papers with code

CGB-DM: Content and Graphic Balance Layout Generation with Transformer-based Diffusion Model

no code implementations • 21 Jul 2024 • Yu Li, Yifan Chen, Gongye Liu, Jie Wu, Yujiu Yang

We find that these methods focus excessively on content information and lack constraints on layout spatial structure, resulting in an imbalance between learning content-aware and graphic-aware features.

Tasks: Blocking

ChartMimic: Evaluating LMM's Cross-Modal Reasoning Capability via Chart-to-Code Generation

1 code implementation • 14 Jun 2024 • Chufan Shi, Cheng Yang, Yaxin Liu, Bo Shui, Junjie Wang, Mohan Jing, Linran Xu, Xinyu Zhu, Siheng Li, Yuxiang Zhang, Gongye Liu, Xiaomei Nie, Deng Cai, Yujiu Yang

We introduce a new benchmark, ChartMimic, aimed at assessing the visually-grounded code generation capabilities of large multimodal models (LMMs).

Tasks: Code Generation

StyleCrafter: Enhancing Stylized Text-to-Video Generation with Style Adapter

2 code implementations • 1 Dec 2023 • Gongye Liu, Menghan Xia, Yong Zhang, Haoxin Chen, Jinbo Xing, Yibo Wang, Xintao Wang, Yujiu Yang, Ying Shan

To address these challenges, we introduce StyleCrafter, a generic method that enhances pre-trained T2V models with a style control adapter, enabling video generation in any style by providing a reference image.

Tasks: Disentanglement, Text-to-Video Generation, +1

Accelerating Diffusion Models for Inverse Problems through Shortcut Sampling

1 code implementation • 26 May 2023 • Gongye Liu, Haoze Sun, Jiayi Li, Fei Yin, Yujiu Yang

To derive the transitional state during the forward process, we introduce Distortion Adaptive Inversion.

Tasks: Colorization, Deblurring, +1
