Search Results for author: Zongze Wu

Found 6 papers, 5 papers with code

Variational Distillation for Multi-View Learning

3 code implementations · 20 Jun 2022 · Xudong Tian, Zhizhong Zhang, Cong Wang, Wensheng Zhang, Yanyun Qu, Lizhuang Ma, Zongze Wu, Yuan Xie, Dacheng Tao

Information Bottleneck (IB) based multi-view learning provides an information theoretic principle for seeking shared information contained in heterogeneous data descriptions.

Multi-View Learning · Representation Learning
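As a sketch of the information-theoretic principle mentioned above (this is the standard Information Bottleneck objective in common notation, not necessarily the paper's exact multi-view formulation): IB seeks a representation z that stays predictive of the target y while compressing the input view x, with a trade-off weight β:

```latex
\max_{p(z \mid x)} \; I(z; y) \;-\; \beta \, I(z; x)
```

Here I(·;·) denotes mutual information; larger β favors stronger compression of view-specific information, which is what allows shared information across heterogeneous views to be distilled.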

Third Time's the Charm? Image and Video Editing with StyleGAN3

1 code implementation · 31 Jan 2022 · Yuval Alaluf, Or Patashnik, Zongze Wu, Asif Zamir, Eli Shechtman, Dani Lischinski, Daniel Cohen-Or

In particular, we demonstrate that while StyleGAN3 can be trained on unaligned data, one can still use aligned data for training, without hindering the ability to generate unaligned imagery.

Disentanglement · Image Generation

StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery

5 code implementations · ICCV 2021 · Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, Dani Lischinski

Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images.

Ranked #1 on Image Manipulation on 10-Monty-Hall (using extra training data)

Image Manipulation
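A minimal toy sketch of the latent-optimization idea behind text-driven manipulation: descend on a latent code to reduce a text-matching loss while an L2 term keeps the edit close to the source latent. Everything here is a hypothetical stand-in — the "CLIP" term is a quadratic surrogate, not a real CLIP similarity, and no StyleGAN generator is involved:

```python
import numpy as np

# Hypothetical stand-ins, not the paper's models:
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))   # stand-in for CLIP(G(w)) as a linear map
t = rng.normal(size=8)        # stand-in text embedding
w_s = rng.normal(size=4)      # source latent code
lam = 0.1                     # L2 weight keeping the edit near the source

def loss(w):
    # Text-matching term plus a regularizer toward the original latent.
    return np.sum((A @ w - t) ** 2) + lam * np.sum((w - w_s) ** 2)

def grad(w):
    return 2 * A.T @ (A @ w - t) + 2 * lam * (w - w_s)

w = w_s.copy()
for _ in range(500):          # plain gradient descent on the latent code
    w -= 0.01 * grad(w)

print(loss(w) < loss(w_s))    # the optimized latent lowers the objective
```

The design point this illustrates: the edit is found purely by optimizing in latent space against a text-conditioned loss, so no per-attribute training data is needed.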

StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation

5 code implementations · CVPR 2021 · Zongze Wu, Dani Lischinski, Eli Shechtman

Manipulation of visual attributes via StyleSpace controls is shown to be better disentangled than manipulation via the controls proposed in previous works.

Image Generation
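A toy sketch of the kind of channel-wise control the paper analyzes: a disentangled StyleSpace edit perturbs a single channel of a layer's style code. The channel index and strength below are hypothetical placeholders — real StyleSpace channels are identified by analyzing a trained StyleGAN:

```python
import numpy as np

s = np.zeros(16)                 # stand-in style code (one layer's channels)
channel, alpha = 5, 3.0          # hypothetical channel controlling one attribute

s_edit = s.copy()
s_edit[channel] += alpha         # move only along that channel

# A well-disentangled control changes exactly one coordinate of the style code.
changed = np.nonzero(s_edit - s)[0]
print(changed)                   # only the edited channel differs
```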

Maximum Correntropy Unscented Filter

no code implementations · 26 Aug 2016 · Xi Liu, Badong Chen, Bin Xu, Zongze Wu, Paul Honeine

To improve the robustness of the UKF against impulsive noise, this work proposes a new filter for nonlinear systems, the maximum correntropy unscented filter (MCUF).
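For context, the correntropy criterion that gives the filter its robustness is commonly defined with a Gaussian kernel of bandwidth σ (standard definition; the paper's exact normalization may differ):

```latex
V_\sigma(X, Y) = \mathbb{E}\!\left[\kappa_\sigma(X - Y)\right],
\qquad
\kappa_\sigma(e) = \exp\!\left(-\frac{e^2}{2\sigma^2}\right)
```

Because the kernel decays for large errors, outliers from impulsive noise contribute little to the criterion, unlike the squared-error cost underlying the standard UKF update.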
