Search Results for author: Fan Tang

Found 20 papers, 16 papers with code

Progressive Open Space Expansion for Open-Set Model Attribution

1 code implementation • CVPR 2023 • Tianyun Yang, Danding Wang, Fan Tang, Xinying Zhao, Juan Cao, Sheng Tang

In this study, we focus on a challenging task, namely Open-Set Model Attribution (OSMA), to simultaneously attribute images to known models and identify those from unknown ones.

Open Set Learning
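As a rough illustration of the open-set attribution setting (not the POSE method itself), a classifier over known source models can fall back to an "unknown" verdict when its confidence is low; the function name and threshold below are illustrative assumptions:

```python
import math

def attribute(scores, threshold=0.5):
    """Open-set decision rule: softmax over known-model scores, then
    return -1 ('unknown model') when the top probability falls below a
    confidence threshold. A generic open-set baseline for illustration."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return best if probs[best] >= threshold else -1
```

A confident prediction (e.g. one logit far above the rest) is attributed to a known model; a flat score distribution is rejected as an unseen generator.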

A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning

1 code implementation • 9 Mar 2023 • Yuxin Zhang, Fan Tang, WeiMing Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu

Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.

Contrastive Learning · Representation Learning +1
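A contrastive scheme of this kind pulls together features of images sharing a style and pushes apart features of different styles. A minimal InfoNCE-style sketch on plain float vectors (the function name, temperature, and cosine similarity are assumptions, not the paper's exact loss):

```python
import math

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss on style feature vectors: pulls
    `anchor` toward `positive` (same style) and away from `negatives`
    (other styles). Features are plain lists of floats."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    pos = math.exp(cos(anchor, positive) / tau)
    neg = sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

When anchor and positive align, the loss is near zero; when the anchor instead aligns with a negative, the loss grows large.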

Region-Aware Diffusion for Zero-shot Text-driven Image Editing

1 code implementation • 23 Feb 2023 • Nisha Huang, Fan Tang, WeiMing Dong, Tong-Yee Lee, Changsheng Xu

Different from current mask-based image editing methods, we propose a novel region-aware diffusion model (RDM) for entity-level image editing, which could automatically locate the region of interest and replace it following given text prompts.

Image Manipulation
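The entity-level editing idea — preserve the source image outside the located region while the diffusion model rewrites the inside — can be sketched as a per-step blend (a toy stand-in using flat lists for latents; not the paper's implementation):

```python
def blend_latents(edited, original, mask):
    """One region-aware blending step: keep the model's edit inside the
    text-located region (mask == 1) and preserve the source latent
    elsewhere. The real method applies this to noised latents at every
    denoising timestep."""
    return [e if m else o for e, o, m in zip(edited, original, mask)]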

Inversion-Based Style Transfer with Diffusion Models

1 code implementation • CVPR 2023 • Yuxin Zhang, Nisha Huang, Fan Tang, Haibin Huang, Chongyang Ma, WeiMing Dong, Changsheng Xu

Our key idea is to learn artistic style directly from a single painting and then guide the synthesis without providing complex textual descriptions.

Denoising · Style Transfer
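Learning a style embedding from a single painting is, at heart, an optimization against a frozen renderer. Below is a toy gradient-descent stand-in where the renderer is the identity map (everything here is illustrative, not the paper's inversion procedure):

```python
def invert_style(target, steps=200, lr=0.1):
    """Toy 'inversion': fit a style embedding so a trivial (identity)
    renderer reproduces the target style features, by gradient descent
    on 0.5 * ||emb - target||^2. In the paper the renderer is a frozen
    diffusion model and the embedding conditions the synthesis."""
    emb = [0.0] * len(target)
    for _ in range(steps):
        grad = [e - t for e, t in zip(emb, target)]  # gradient of the loss
        emb = [e - lr * g for e, g in zip(emb, grad)]
    return emb
```

The learned embedding then guides synthesis in place of a hand-written textual description of the style.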

DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization

1 code implementation • 19 Nov 2022 • Nisha Huang, Yuxin Zhang, Fan Tang, Chongyang Ma, Haibin Huang, Yong Zhang, WeiMing Dong, Changsheng Xu

Despite the impressive results of image-guided arbitrary style transfer methods, text-driven image stylization has recently been proposed to transfer a natural image into a stylized one according to a textual description of the target style provided by the user.

Denoising · Image Stylization

Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal Guided Diffusion

1 code implementation • 27 Sep 2022 • Nisha Huang, Fan Tang, WeiMing Dong, Changsheng Xu

Extensive experimental results on the quality and quantity of the generated digital art paintings confirm the effectiveness of the combination of the diffusion model and multimodal guidance.

Adaptive Assignment for Geometry Aware Local Feature Matching

1 code implementation • CVPR 2023 • Dihe Huang, Ying Chen, Shang Xu, Yong Liu, Wenlong Wu, Yikang Ding, Chengjie Wang, Fan Tang

The detector-free feature matching approaches are currently attracting great attention thanks to their excellent performance.

Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning

1 code implementation • 19 May 2022 • Yuxin Zhang, Fan Tang, WeiMing Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu

Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.

Contrastive Learning · Image Stylization +1

StyTr2: Image Style Transfer With Transformers

2 code implementations • CVPR 2022 • Yingying Deng, Fan Tang, WeiMing Dong, Chongyang Ma, Xingjia Pan, Lei Wang, Changsheng Xu

The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.

Style Transfer
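Transformer-based style transfer renders content tokens with style statistics via attention. A single-head cross-attention sketch on plain float vectors (learned projections and multi-head machinery omitted; this is a generic sketch, not StyTr2's exact layers):

```python
import math

def cross_attention(content_q, style_kv):
    """Single-head cross-attention: each content token queries the style
    tokens, so the output keeps content structure while mixing in style
    features. Tokens are lists of floats; keys and values share style_kv."""
    d = len(content_q[0])
    out = []
    for q in content_q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in style_kv]
        m = max(scores)                      # stabilized softmax
        w = [math.exp(s - m) for s in scores]
        z = sum(w)
        w = [x / z for x in w]
        out.append([sum(wi * v[j] for wi, v in zip(w, style_kv))
                    for j in range(d)])
    return out
```

With a single style token, each content token simply receives that token's features; with many, it receives a similarity-weighted mixture.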

DAE-GAN: Dynamic Aspect-aware GAN for Text-to-Image Synthesis

1 code implementation • ICCV 2021 • Shulan Ruan, Yong Zhang, Kun Zhang, Yanbo Fan, Fan Tang, Qi Liu, Enhong Chen

Text-to-image synthesis refers to generating an image from a given text description; its key goals are photorealism and semantic consistency.

Image Generation · Sentence Embedding +1

StyTr$^2$: Image Style Transfer with Transformers

3 code implementations • 30 May 2021 • Yingying Deng, Fan Tang, WeiMing Dong, Chongyang Ma, Xingjia Pan, Lei Wang, Changsheng Xu

The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.

Style Transfer

Unveiling the Potential of Structure Preserving for Weakly Supervised Object Localization

1 code implementation • CVPR 2021 • Xingjia Pan, Yingguo Gao, Zhiwen Lin, Fan Tang, WeiMing Dong, Haolei Yuan, Feiyue Huang, Changsheng Xu

Weakly supervised object localization (WSOL) remains an open problem, given the difficulty of finding object extent information using only a classification network.

Classification · General Classification +1

Arbitrary Video Style Transfer via Multi-Channel Correlation

no code implementations • 17 Sep 2020 • Yingying Deng, Fan Tang, Wei-Ming Dong, Haibin Huang, Chongyang Ma, Changsheng Xu

Towards this end, we propose the Multi-Channel Correlation network (MCCNet), which can be trained to fuse the exemplar style features and input content features for efficient style transfer while naturally maintaining the coherence of input videos.

Style Transfer · Video Style Transfer
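Channel-wise fusion of style and content features can be sketched as rescaling each mean-centered content channel by the matching style channel's energy — a rough stand-in for multi-channel correlation, not MCCNet's actual operator:

```python
def channel_fuse(content, style):
    """Fuse features per channel: center each content channel, then scale
    it by the RMS energy of the corresponding style channel, so content
    structure is preserved while channel statistics follow the style.
    `content` and `style` are lists of channels (lists of floats)."""
    fused = []
    for c_ch, s_ch in zip(content, style):
        mu = sum(c_ch) / len(c_ch)
        energy = sum(s * s for s in s_ch) / len(s_ch)
        fused.append([(c - mu) * energy ** 0.5 for c in c_ch])
    return fused
```

Because the operation is deterministic per frame and has no spatial sampling, identical content regions map to identical outputs, which is one intuition for why such fusion keeps video frames coherent.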

Distribution Aligned Multimodal and Multi-Domain Image Stylization

no code implementations • 2 Jun 2020 • Minxuan Lin, Fan Tang, Wei-Ming Dong, Xiao Li, Chongyang Ma, Changsheng Xu

Currently, there are few methods that can perform both multimodal and multi-domain stylization simultaneously.

Image Stylization

Arbitrary Style Transfer via Multi-Adaptation Network

2 code implementations • 27 May 2020 • Yingying Deng, Fan Tang, Wei-Ming Dong, Wen Sun, Feiyue Huang, Changsheng Xu

Arbitrary style transfer is a significant research topic with broad application prospects.

Disentanglement · Style Transfer

Image Retargetability

no code implementations • 12 Feb 2018 • Fan Tang, Wei-Ming Dong, Yiping Meng, Chongyang Ma, Fuzhang Wu, Xinrui Li, Tong-Yee Lee

In this work, we introduce the notion of image retargetability to describe how well a particular image can be handled by content-aware image retargeting.

Image Retargeting
