3 code implementations • 25 May 2023 • Yuxin Zhang, WeiMing Dong, Fan Tang, Nisha Huang, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Oliver Deussen, Changsheng Xu
Personalizing generative models offers a way to guide image generation with user-provided references.
1 code implementation • CVPR 2023 • Tianyun Yang, Danding Wang, Fan Tang, Xinying Zhao, Juan Cao, Sheng Tang
In this study, we focus on a challenging task, namely Open-Set Model Attribution (OSMA), to simultaneously attribute images to known models and identify those from unknown ones.
1 code implementation • 9 Mar 2023 • Yuxin Zhang, Fan Tang, WeiMing Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
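The snippet above does not include the paper's training code. As a rough, self-contained sketch of the contrastive idea behind the style representation component (all names and the toy embeddings are hypothetical, not from the paper), an InfoNCE-style loss pulls an anchor style embedding toward a positive from the same style and pushes it away from negatives of other styles:

```python
import math

def _normalize(v):
    """L2-normalize an embedding so similarities are cosine similarities."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def info_nce_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss over style embeddings.

    anchor/positive should come from the same style; negatives from
    other styles. Lower loss means the anchor is closer to its positive
    than to the negatives in cosine similarity, scaled by temperature tau.
    """
    a = _normalize(anchor)
    candidates = [positive] + list(negatives)
    sims = [sum(x * y for x, y in zip(a, _normalize(c))) / tau
            for c in candidates]
    # Numerically stable softmax: positive is candidate 0.
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    return -math.log(exps[0] / sum(exps))
```

Minimizing this loss over batches of same-style pairs is one standard way to learn a style embedding directly from images rather than from second-order feature statistics.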
1 code implementation • 23 Feb 2023 • Nisha Huang, Fan Tang, WeiMing Dong, Tong-Yee Lee, Changsheng Xu
Different from current mask-based image editing methods, we propose a novel region-aware diffusion model (RDM) for entity-level image editing, which can automatically locate the region of interest and replace it following the given text prompts.
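RDM's actual localization and diffusion stages are not reproduced here; the core entity-level idea, editing only an automatically located region while leaving the rest of the image untouched, can be sketched as a mask-guided blend (function and variable names are illustrative assumptions, not the paper's API):

```python
def region_aware_blend(original, edited, mask):
    """Blend an edited image into the original under a soft region mask.

    original, edited: H x W grids of pixel values (nested lists).
    mask: H x W grid in [0, 1]; 1 takes the edited pixel (the located
    entity region), 0 keeps the original pixel (the preserved background).
    """
    return [[m * e + (1.0 - m) * o
             for o, e, m in zip(orig_row, edit_row, mask_row)]
            for orig_row, edit_row, mask_row in zip(original, edited, mask)]
```

In a diffusion-based editor, a blend of this form is typically applied at every denoising step so the generated content stays confined to the masked entity; here the mask would come from the model's automatic localization rather than from the user.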
1 code implementation • CVPR 2023 • Yuxin Zhang, Nisha Huang, Fan Tang, Haibin Huang, Chongyang Ma, WeiMing Dong, Changsheng Xu
Our key idea is to learn artistic style directly from a single painting and then guide the synthesis without providing complex textual descriptions.
1 code implementation • 19 Nov 2022 • Nisha Huang, Yuxin Zhang, Fan Tang, Chongyang Ma, Haibin Huang, Yong Zhang, WeiMing Dong, Changsheng Xu
Beyond the impressive results of arbitrary image-guided style transfer methods, text-driven image stylization has recently been proposed to transfer a natural image into a stylized one according to a textual description of the target style provided by the user.
1 code implementation • 27 Sep 2022 • Nisha Huang, Fan Tang, WeiMing Dong, Changsheng Xu
Extensive qualitative and quantitative experimental results on the generated digital art paintings confirm the effectiveness of combining the diffusion model with multimodal guidance.
1 code implementation • CVPR 2023 • Dihe Huang, Ying Chen, Shang Xu, Yong Liu, Wenlong Wu, Yikang Ding, Chengjie Wang, Fan Tang
Detector-free feature matching approaches are currently attracting great attention thanks to their excellent performance.
1 code implementation • 19 May 2022 • Yuxin Zhang, Fan Tang, WeiMing Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
1 code implementation • CVPR 2022 • Hanjun Li, Xingjia Pan, Ke Yan, Fan Tang, Wei-Shi Zheng
Object detection under imperfect data has received considerable attention recently.
1 code implementation • 26 Jan 2022 • Chengcheng Ma, Xingjia Pan, Qixiang Ye, Fan Tang, WeiMing Dong, Changsheng Xu
Semi-supervised object detection has recently achieved substantial progress.
2 code implementations • CVPR 2022 • Yingying Deng, Fan Tang, WeiMing Dong, Chongyang Ma, Xingjia Pan, Lei Wang, Changsheng Xu
The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.
1 code implementation • ICCV 2021 • Shulan Ruan, Yong Zhang, Kun Zhang, Yanbo Fan, Fan Tang, Qi Liu, Enhong Chen
Text-to-image synthesis refers to generating an image from a given text description, whose key goals are photorealism and semantic consistency.
3 code implementations • 30 May 2021 • Yingying Deng, Fan Tang, WeiMing Dong, Chongyang Ma, Xingjia Pan, Lei Wang, Changsheng Xu
The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.
1 code implementation • CVPR 2021 • Xingjia Pan, Yingguo Gao, Zhiwen Lin, Fan Tang, WeiMing Dong, Haolei Yuan, Feiyue Huang, Changsheng Xu
Weakly supervised object localization (WSOL) remains an open problem, given the difficulty of recovering object extent information from a classification network.
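The classification-network baseline this line refers to is commonly instantiated as a class activation map (CAM): channel feature maps are weighted by the classifier weights of the target class, and the resulting heat map is thresholded into a box. A minimal sketch of that baseline follows (a generic CAM, not this paper's method; the threshold ratio is an assumed hyperparameter):

```python
def class_activation_map(features, weights):
    """Weighted sum of channel feature maps -> coarse localization map.

    features: list of C maps, each an H x W nested list (e.g., the last
    conv layer's output); weights: length-C classifier weights for the
    predicted class.
    """
    H, W = len(features[0]), len(features[0][0])
    return [[sum(w * f[i][j] for w, f in zip(weights, features))
             for j in range(W)]
            for i in range(H)]

def bbox_from_cam(cam, thresh_ratio=0.5):
    """Threshold the CAM at a fraction of its peak and take the tight box.

    Returns (x0, y0, x1, y1) over cells whose activation passes threshold.
    """
    peak = max(max(row) for row in cam)
    t = thresh_ratio * peak
    ys = [i for i, row in enumerate(cam) for v in row if v >= t]
    xs = [j for row in cam for j, v in enumerate(row) if v >= t]
    return min(xs), min(ys), max(xs), max(ys)
```

Because the classifier only needs discriminative parts to label the image, such maps tend to highlight those parts rather than the full object extent, which is exactly the deficiency the abstract points at.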
no code implementations • 17 Sep 2020 • Yingying Deng, Fan Tang, Wei-Ming Dong, Haibin Huang, Chongyang Ma, Changsheng Xu
Towards this end, we propose Multi-Channel Correction network (MCCNet), which can be trained to fuse the exemplar style features and input content features for efficient style transfer while naturally maintaining the coherence of input videos.
no code implementations • 2 Jun 2020 • Minxuan Lin, Fan Tang, Wei-Ming Dong, Xiao Li, Chongyang Ma, Changsheng Xu
Currently, there are few methods that can perform both multimodal and multi-domain stylization simultaneously.
2 code implementations • 27 May 2020 • Yingying Deng, Fan Tang, Wei-Ming Dong, Wen Sun, Feiyue Huang, Changsheng Xu
Arbitrary style transfer is a significant topic with both research value and application prospects.
no code implementations • 26 Feb 2020 • Minxuan Lin, Yingying Deng, Fan Tang, Wei-Ming Dong, Changsheng Xu
Controllable painting generation plays a pivotal role in image stylization.
no code implementations • 12 Feb 2018 • Fan Tang, Wei-Ming Dong, Yiping Meng, Chongyang Ma, Fuzhang Wu, Xinrui Li, Tong-Yee Lee
In this work, we introduce the notion of image retargetability to describe how well a particular image can be handled by content-aware image retargeting.