Search Results for author: Fan Tang

Found 27 papers, 19 papers with code

Image Retargetability

no code implementations · 12 Feb 2018 · Fan Tang, WeiMing Dong, Yiping Meng, Chongyang Ma, Fuzhang Wu, Xinrui Li, Tong-Yee Lee

In this work, we introduce the notion of image retargetability to describe how well a particular image can be handled by content-aware image retargeting.

Image Retargeting

Arbitrary Style Transfer via Multi-Adaptation Network

2 code implementations · 27 May 2020 · Yingying Deng, Fan Tang, WeiMing Dong, Wen Sun, Feiyue Huang, Changsheng Xu

Arbitrary style transfer is a significant topic with both research value and application prospects.

Disentanglement · Style Transfer

Distribution Aligned Multimodal and Multi-Domain Image Stylization

no code implementations · 2 Jun 2020 · Minxuan Lin, Fan Tang, WeiMing Dong, Xiao Li, Chongyang Ma, Changsheng Xu

Currently, there are few methods that can perform both multimodal and multi-domain stylization simultaneously.

Image Stylization

Arbitrary Video Style Transfer via Multi-Channel Correlation

no code implementations · 17 Sep 2020 · Yingying Deng, Fan Tang, WeiMing Dong, Haibin Huang, Chongyang Ma, Changsheng Xu

To this end, we propose the Multi-Channel Correlation network (MCCNet), which can be trained to fuse exemplar style features and input content features for efficient style transfer while naturally maintaining the coherence of input videos.

Style Transfer · Video Style Transfer

Unveiling the Potential of Structure Preserving for Weakly Supervised Object Localization

1 code implementation · CVPR 2021 · Xingjia Pan, Yingguo Gao, Zhiwen Lin, Fan Tang, WeiMing Dong, Haolei Yuan, Feiyue Huang, Changsheng Xu

Weakly supervised object localization (WSOL) remains an open problem given the difficulty of finding object extent information using a classification network.

Classification · General Classification +3

StyTr$^2$: Image Style Transfer with Transformers

4 code implementations · 30 May 2021 (published at CVPR 2022) · Yingying Deng, Fan Tang, WeiMing Dong, Chongyang Ma, Xingjia Pan, Lei Wang, Changsheng Xu

The goal of image style transfer is to render an image with artistic features guided by a style reference while maintaining the original content.

Style Transfer

DAE-GAN: Dynamic Aspect-aware GAN for Text-to-Image Synthesis

1 code implementation · ICCV 2021 · Shulan Ruan, Yong Zhang, Kun Zhang, Yanbo Fan, Fan Tang, Qi Liu, Enhong Chen

Text-to-image synthesis refers to generating an image from a given text description, the key goals of which are photorealism and semantic consistency.

Image Generation · Sentence +2

Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning

1 code implementation · 19 May 2022 · Yuxin Zhang, Fan Tang, WeiMing Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu

Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.

Contrastive Learning · Image Stylization +1

Adaptive Assignment for Geometry Aware Local Feature Matching

1 code implementation · CVPR 2023 · Dihe Huang, Ying Chen, Shang Xu, Yong Liu, Wenlong Wu, Yikang Ding, Chengjie Wang, Fan Tang

Detector-free feature matching approaches are currently attracting great attention thanks to their excellent performance.

Feature Correlation

Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal Guided Diffusion

1 code implementation · 27 Sep 2022 · Nisha Huang, Fan Tang, WeiMing Dong, Changsheng Xu

Extensive experimental results on both the quality and quantity of the generated digital art paintings confirm the effectiveness of combining the diffusion model with multimodal guidance.

DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization

1 code implementation · 19 Nov 2022 · Nisha Huang, Yuxin Zhang, Fan Tang, Chongyang Ma, Haibin Huang, Yong Zhang, WeiMing Dong, Changsheng Xu

While arbitrary image-guided style transfer methods achieve impressive results, text-driven image stylization has recently been proposed to transfer a natural image into a stylized one according to textual descriptions of the target style provided by the user.

Denoising · Image Stylization

Inversion-Based Style Transfer with Diffusion Models

1 code implementation · CVPR 2023 · Yuxin Zhang, Nisha Huang, Fan Tang, Haibin Huang, Chongyang Ma, WeiMing Dong, Changsheng Xu

Our key idea is to learn artistic style directly from a single painting and then guide the synthesis without providing complex textual descriptions.

Denoising · Style Transfer +1

Region-Aware Diffusion for Zero-shot Text-driven Image Editing

1 code implementation · 23 Feb 2023 · Nisha Huang, Fan Tang, WeiMing Dong, Tong-Yee Lee, Changsheng Xu

Unlike current mask-based image editing methods, we propose a novel region-aware diffusion model (RDM) for entity-level image editing, which can automatically locate the region of interest and replace it following given text prompts.

Image Manipulation

A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning

1 code implementation · 9 Mar 2023 · Yuxin Zhang, Fan Tang, WeiMing Dong, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Changsheng Xu

Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.

Contrastive Learning · Representation Learning +1

Progressive Open Space Expansion for Open-Set Model Attribution

1 code implementation · CVPR 2023 · Tianyun Yang, Danding Wang, Fan Tang, Xinying Zhao, Juan Cao, Sheng Tang

In this study, we focus on a challenging task, namely Open-Set Model Attribution (OSMA), to simultaneously attribute images to known models and identify those from unknown ones.

Attribute · Open Set Learning

ProSpect: Prompt Spectrum for Attribute-Aware Personalization of Diffusion Models

3 code implementations · 25 May 2023 · Yuxin Zhang, WeiMing Dong, Fan Tang, Nisha Huang, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Oliver Deussen, Changsheng Xu

We apply ProSpect in various personalized attribute-aware image generation applications, such as image-guided or text-driven manipulations of materials, style, and layout, achieving previously unattainable results from a single image input without fine-tuning the diffusion models.

Attribute · Disentanglement +1

Dance Your Latents: Consistent Dance Generation through Spatial-temporal Subspace Attention Guided by Motion Flow

no code implementations · 20 Oct 2023 · Haipeng Fang, Zhihao Sun, Ziyao Huang, Fan Tang, Juan Cao, Sheng Tang

The advancement of generative AI has extended to the realm of Human Dance Generation, demonstrating superior generative capabilities.

$Z^*$: Zero-shot Style Transfer via Attention Rearrangement

no code implementations · 25 Nov 2023 · Yingying Deng, Xiangyu He, Fan Tang, WeiMing Dong

Despite the remarkable progress in image style transfer, formulating style in the context of art is inherently subjective and challenging.

Style Transfer

Adversarial Robust Memory-Based Continual Learner

no code implementations · 29 Nov 2023 · Xiaoyue Mi, Fan Tang, Zonghan Yang, Danding Wang, Juan Cao, Peng Li, Yang Liu

Despite the remarkable advances in continual learning, the adversarial vulnerability of such methods has not been fully discussed.

Adversarial Robustness · Continual Learning

Topology-Preserving Adversarial Training

no code implementations · 29 Nov 2023 · Xiaoyue Mi, Fan Tang, Yepeng Weng, Danding Wang, Juan Cao, Sheng Tang, Peng Li, Yang Liu

Despite its effectiveness in improving the robustness of neural networks, adversarial training suffers from the natural accuracy degradation problem, i.e., accuracy on natural samples is significantly reduced.

MotionCrafter: One-Shot Motion Customization of Diffusion Models

1 code implementation · 8 Dec 2023 · Yuxin Zhang, Fan Tang, Nisha Huang, Haibin Huang, Chongyang Ma, WeiMing Dong, Changsheng Xu

The essence of a video lies in its dynamic motions, including character actions, object movements, and camera movements.

Disentanglement · Motion Disentanglement +3

CreativeSynth: Creative Blending and Synthesis of Visual Arts based on Multimodal Diffusion

1 code implementation · 25 Jan 2024 · Nisha Huang, WeiMing Dong, Yuxin Zhang, Fan Tang, Ronghui Li, Chongyang Ma, Xiu Li, Changsheng Xu

Large-scale text-to-image generative models have made impressive strides, showcasing their ability to synthesize a vast array of high-quality images.

Image Generation · Style Transfer

Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework

1 code implementation · 25 Mar 2024 · Ziyao Huang, Fan Tang, Yong Zhang, Xiaodong Cun, Juan Cao, Jintao Li, Tong-Yee Lee

We adopt a two-stage training strategy for the diffusion model, effectively binding movements with specific appearances.

Denoising
