no code implementations • 13 Mar 2024 • Tianyi Chu, Wei Xing, Jiafu Chen, Zhizhong Wang, Jiakai Sun, Lei Zhao, Haibo Chen, Huaizhong Lin
Given that many deterministic conditional image generative models have been able to produce high-quality yet fixed results, we raise an intriguing question: is it possible for pre-trained deterministic conditional image generative models to generate diverse results without changing network structures or parameters?
no code implementations • 13 Mar 2024 • Jiafu Chen, Wei Xing, Jiakai Sun, Tianyi Chu, Yiling Huang, Boyan Ji, Lei Zhao, Huaizhong Lin, Haibo Chen, Zhizhong Wang
3D scene stylization refers to transforming the appearance of a 3D scene to match a given style image, ensuring that images rendered from different viewpoints exhibit that style while maintaining the 3D consistency of the stylized scene.
no code implementations • ICCV 2023 • Zhizhong Wang, Lei Zhao, Wei Xing
Our work provides new insights into the C-S disentanglement in style transfer and demonstrates the potential of diffusion models for learning well-disentangled C-S characteristics.
1 code implementation • 23 Mar 2023 • Zhiwen Zuo, Lei Zhao, Ailin Li, Zhizhong Wang, Zhanjie Zhang, Jiafu Chen, Wei Xing, Dongming Lu
By combining SCAT with standard global adversarial training, the new adversarial training framework exhibits the following three advantages simultaneously: (1) the global consistency of the repaired image, (2) the local fine texture details of the repaired image, and (3) the flexibility of handling images with free-form holes.
no code implementations • ICCV 2023 • Tianyi Chu, Jiafu Chen, Jiakai Sun, Shuobin Lian, Zhizhong Wang, Zhiwen Zuo, Lei Zhao, Wei Xing, Dongming Lu
The recently proposed image inpainting method LaMa builds its network upon Fast Fourier Convolution (FFC), which was originally proposed for high-level vision tasks such as image classification.
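For context, here is a minimal sketch of the spectral path that gives FFC its image-wide receptive field; the module name `SpectralConv2d` and the 1x1-convolution design are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Illustrative FFC-style spectral path: convolve in the frequency domain
    so every output location is influenced by the whole image."""
    def __init__(self, channels):
        super().__init__()
        # Real and imaginary parts are stacked along the channel axis.
        self.conv = nn.Conv2d(channels * 2, channels * 2, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        b, c, h, w = x.shape
        freq = torch.fft.rfft2(x, norm="ortho")            # complex, (b, c, h, w//2+1)
        freq = torch.cat([freq.real, freq.imag], dim=1)    # (b, 2c, h, w//2+1)
        freq = self.act(self.conv(freq))
        real, imag = freq.chunk(2, dim=1)
        return torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
```

In a full FFC block this spectral branch is typically paired with an ordinary local convolution branch and the two outputs are fused.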
1 code implementation • 28 Nov 2022 • Zhizhong Wang, Lei Zhao, Zhiwen Zuo, Ailin Li, Haibo Chen, Wei Xing, Dongming Lu
The style encoder, coupled with a modulator, encodes the style image into learnable dual-modulation signals that modulate both intermediate features and convolutional filters of the decoder, thus injecting more sophisticated and flexible style signals to guide the stylizations.
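A rough sketch of the kind of dual modulation described above, where a style code scales both the intermediate features and the decoder's convolutional filters; the class name, shapes, and the single shared style code are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualModulatedConv(nn.Module):
    """Sketch of dual modulation: one style code produces channel-wise scales
    for the input features and per-output-channel scales for the filters."""
    def __init__(self, in_ch, out_ch, style_dim, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.02)
        self.to_feat_scale = nn.Linear(style_dim, in_ch)   # modulates features
        self.to_filt_scale = nn.Linear(style_dim, out_ch)  # modulates filters

    def forward(self, x, style):
        # style: (style_dim,) -- a single style code shared across the batch,
        # kept 1-D here purely to simplify the sketch.
        feat_scale = self.to_feat_scale(style).view(1, -1, 1, 1)
        x = x * (1.0 + feat_scale)
        filt_scale = self.to_filt_scale(style).view(-1, 1, 1, 1)
        w = self.weight * (1.0 + filt_scale)
        return F.conv2d(x, w, padding=self.weight.shape[-1] // 2)
```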
1 code implementation • 27 Aug 2022 • Zhizhong Wang, Zhanjie Zhang, Lei Zhao, Zhiwen Zuo, Ailin Li, Wei Xing, Dongming Lu
Specifically, our approach introduces an aesthetic discriminator to learn universal, human-pleasing aesthetic features from a large corpus of artist-created paintings.
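Schematically, an aesthetic discriminator of this kind can be trained adversarially against the stylization network; the sketch below uses a standard non-saturating GAN objective and a hypothetical `disc` module, and is not claimed to be the paper's exact loss.

```python
import torch.nn.functional as F

def aesthetic_adversarial_losses(disc, real_paintings, stylized):
    """Sketch: `disc` learns to score artist-created paintings as real and
    stylized outputs as fake, pushing the generator toward globally
    plausible artistic statistics."""
    d_loss = (F.softplus(-disc(real_paintings)).mean()      # real -> high score
              + F.softplus(disc(stylized.detach())).mean()) # fake -> low score
    g_loss = F.softplus(-disc(stylized)).mean()              # generator fools disc
    return d_loss, g_loss
```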
1 code implementation • 6 Dec 2021 • Zhizhong Wang, Lei Zhao, Haibo Chen, Ailin Li, Zhiwen Zuo, Wei Xing, Dongming Lu
In addition, we introduce a novel learning-free view-specific texture reformation (VSTR) operation with a new semantic-map guidance strategy to achieve more accurate semantics-guided and structure-preserving texture transfer.
1 code implementation • NeurIPS 2021 • Haibo Chen, Lei Zhao, Zhizhong Wang, Huiming Zhang, Zhiwen Zuo, Ailin Li, Wei Xing, Dongming Lu
Although existing artistic style transfer methods have achieved significant improvement with deep neural networks, they still suffer from artifacts such as disharmonious colors and repetitive patterns.
no code implementations • CVPR 2021 • Haibo Chen, Lei Zhao, Zhizhong Wang, Huiming Zhang, Zhiwen Zuo, Ailin Li, Wei Xing, Dongming Lu
Artistic style transfer is an image editing task that aims at repainting everyday photographs with learned artistic styles.
no code implementations • 16 Jan 2021 • Zhizhong Wang, Lei Zhao, Haibo Chen, Zhiwen Zuo, Ailin Li, Wei Xing, Dongming Lu
Gram-based and patch-based approaches are two important lines of research in style transfer.
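As a reminder of what "Gram-based" means here, a minimal sketch of the Gram-matrix style loss, computed on features from any fixed encoder (e.g. VGG); the helper names are illustrative.

```python
import torch

def gram_matrix(feat):
    """Channel-wise feature correlations used by Gram-based style losses."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def gram_style_loss(feat_out, feat_style):
    """Mean squared distance between Gram matrices of output and style features."""
    return torch.mean((gram_matrix(feat_out) - gram_matrix(feat_style)) ** 2)
```

Patch-based approaches instead match local feature patches (e.g. nearest-neighbour swapping), trading the global statistics of the Gram formulation for local texture fidelity.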
no code implementations • ICCV 2021 • Haibo Chen, Lei Zhao, Huiming Zhang, Zhizhong Wang, Zhiwen Zuo, Ailin Li, Wei Xing, Dongming Lu
Image style transfer aims to transfer the styles of artworks onto arbitrary photographs to create novel artistic images.
no code implementations • 8 Aug 2020 • Zhiwen Zuo, Lei Zhao, Zhizhong Wang, Haibo Chen, Ailin Li, Qijiang Xu, Wei Xing, Dongming Lu
Multimodal image-to-image translation (I2IT) aims to learn a conditional distribution that explores multiple possible images in the target domain given an input image in the source domain.
no code implementations • ICLR 2020 • Zhiwen Zuo, Lei Zhao, Huiming Zhang, Qihang Mo, Haibo Chen, Zhizhong Wang, Ailin Li, Lihong Qiu, Wei Xing, Dongming Lu
Generative Adversarial Networks (GANs) have shown impressive results in modeling distributions over complicated manifolds such as those of natural images.
no code implementations • ICLR 2020 • Zhizhong Wang, Lei Zhao, Qihang Mo, Sihuan Lin, Zhiwen Zuo, Wei Xing, Dongming Lu
This could help improve quality and flexibility and guide the search for domain-independent approaches.
2 code implementations • CVPR 2020 • Zhizhong Wang, Lei Zhao, Haibo Chen, Lihong Qiu, Qihang Mo, Sihuan Lin, Wei Xing, Dongming Lu
Image style transfer is an underdetermined problem, where many solutions can satisfy the same constraints (the content and the style).
1 code implementation • 18 Nov 2018 • Zhizhong Wang, Lei Zhao, Wei Xing, Dongming Lu
Our approach is not only flexible in adjusting the trade-off between content and style, but also controllable between global and local stylization.