Search Results for author: Yuwang Wang

Found 10 papers, 5 papers with code

Retriever: Learning Content-Style Representation as a Token-Level Bipartite Graph

2 code implementations ICLR 2022 Dacheng Yin, Xuanchi Ren, Chong Luo, Yuwang Wang, Zhiwei Xiong, Wenjun Zeng

Last, an innovative link attention module serves as the decoder to reconstruct data from the decomposed content and style, with the help of the linking keys.
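
To make the role of such a decoder more concrete, here is a minimal PyTorch sketch of a cross-attention-style decoder that reconstructs features from content tokens and style tokens. The module name, tensor shapes, and the way the linking keys are stood in for by the content queries are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LinkAttentionDecoderSketch(nn.Module):
    """Illustrative cross-attention decoder: content tokens act as queries
    (standing in for the linking keys) and style tokens supply keys/values.
    This wiring is an assumption, not the paper's exact design."""
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # content: (B, N_c, D) token-level content representation
        # style:   (B, N_s, D) style tokens
        out, _ = self.attn(query=content, key=style, value=style)
        return self.proj(out)  # features for reconstructing the data, (B, N_c, D)

# Toy usage with random tensors.
decoder = LinkAttentionDecoderSketch()
recon = decoder(torch.randn(2, 16, 64), torch.randn(2, 8, 64))
```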

Quantization • Style Transfer • +1

Understanding Mobile GUI: from Pixel-Words to Screen-Sentences

no code implementations 25 May 2021 Jingwen Fu, Xiaoyi Zhang, Yuwang Wang, Wenjun Zeng, Sam Yang, Grayson Hilliard

A dataset of screenshots with Pixel-Words annotations, RICO-PW, is built on top of the public RICO dataset and will be released to help address the lack of high-quality training data in this area.

S2R-DepthNet: Learning a Generalizable Depth-specific Structural Representation

1 code implementation CVPR 2021 Xiaotian Chen, Yuwang Wang, Xuejin Chen, Wenjun Zeng

S2R-DepthNet consists of: a) a Structure Extraction (STE) module, which extracts a domain-invariant structural representation from an image by disentangling the image into domain-invariant structure and domain-specific style components, b) a Depth-specific Attention (DSA) module, which learns task-specific knowledge to suppress depth-irrelevant structures for better depth estimation and generalization, and c) a depth prediction (DP) module to predict depth from the depth-specific representation.
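
As a rough illustration of the three-module pipeline described in the abstract (STE → DSA → DP), the PyTorch sketch below composes placeholder layers in that order; the internal architectures are stand-ins, not the layers used in the paper.

```python
import torch
import torch.nn as nn

class S2RDepthNetSketch(nn.Module):
    """Schematic STE -> DSA -> DP composition; the conv layers are placeholders,
    not the modules used in the paper."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.ste = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())      # structure extraction
        self.dsa = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())  # depth-specific attention
        self.dp = nn.Conv2d(ch, 1, 3, padding=1)                                 # depth prediction

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        structure = self.ste(image)            # domain-invariant structural representation
        attention = self.dsa(structure)        # suppress depth-irrelevant structures
        return self.dp(structure * attention)  # depth from the depth-specific representation

depth_map = S2RDepthNetSketch()(torch.randn(1, 3, 128, 416))
```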

Domain Generalization • Monocular Depth Estimation • +1

Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement

1 code implementation 21 Feb 2021 Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng

From the unsupervised disentanglement perspective, we rethink content and style and propose a formulation for unsupervised C-S disentanglement based on our assumption that different factors are of different importance and popularity for image reconstruction, which serves as a data bias.

3D Reconstruction • Disentanglement • +3

Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View

2 code implementations ICLR 2022 Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng

Based on this observation, we argue that it is possible to mitigate the trade-off by $(i)$ leveraging pretrained generative models with high generation quality and $(ii)$ focusing on discovering the traversal directions as factors for disentangled representation learning.
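
A minimal sketch of this general idea, i.e., learning traversal directions in the latent space of a frozen pretrained generator with a contrastive objective, is given below. The `generator` and `encoder` callables, the direction parameterization, and the InfoNCE-style grouping of same-direction pairs are assumptions made for illustration, not the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical components: `generator` is a frozen pretrained generator
# G(z) -> image, and `encoder` maps an image to an embedding vector.
num_dirs, latent_dim = 10, 128
directions = nn.Parameter(torch.randn(num_dirs, latent_dim))  # learnable traversal directions

def direction_contrastive_loss(generator, encoder, batch_size=8, step=3.0, temp=0.1):
    z = torch.randn(batch_size, latent_dim)
    k = torch.randint(num_dirs, (batch_size,))               # direction index per sample
    z_shift = z + step * F.normalize(directions[k], dim=-1)  # traverse along the chosen direction
    # Embed the change induced by the traversal.
    delta = F.normalize(encoder(generator(z_shift)) - encoder(generator(z)), dim=-1)
    # Samples traversed along the same direction are treated as positives (InfoNCE-style).
    logits = delta @ delta.t() / temp
    logits.fill_diagonal_(-1e9)                              # exclude self-pairs
    pos = (k.unsqueeze(0) == k.unsqueeze(1)).float()
    pos.fill_diagonal_(0)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -((pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)).mean()
```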

Contrastive Learning • Disentanglement

Towards Building A Group-based Unsupervised Representation Disentanglement Framework

1 code implementation ICLR 2022 Tao Yang, Xuanchi Ren, Yuwang Wang, Wenjun Zeng, Nanning Zheng

We then propose a model, based on existing VAE-based methods, to tackle the unsupervised learning problem of the framework.

Disentanglement

Blind Quality Assessment for Image Superresolution Using Deep Two-Stream Convolutional Networks

no code implementations 13 Apr 2020 Wei Zhou, Qiuping Jiang, Yuwang Wang, Zhibo Chen, Weiping Li

Numerous image superresolution (SR) algorithms have been proposed for reconstructing high-resolution (HR) images from input images with lower spatial resolutions.

Image Quality Assessment

Moving Indoor: Unsupervised Video Depth Learning in Challenging Environments

no code implementations ICCV 2019 Junsheng Zhou, Yuwang Wang, Kaihuai Qin, Wen-Jun Zeng

Our experimental evaluation demonstrates that our method performs comparably to fully supervised methods on the NYU Depth V2 benchmark.

Optical Flow Estimation

Unsupervised High-Resolution Depth Learning From Videos With Dual Networks

no code implementations ICCV 2019 Junsheng Zhou, Yuwang Wang, Kaihuai Qin, Wen-Jun Zeng

Unsupervised depth learning takes the appearance difference between a target view and a view synthesized from its adjacent frame as the supervisory signal.
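
The supervisory signal described here is a photometric difference; a minimal version is sketched below. Real systems typically add an SSIM term, multi-scale losses, and occlusion masking, all omitted here, and this L1 form is a common simplification rather than necessarily the paper's exact loss.

```python
import torch

def photometric_loss(target: torch.Tensor, synthesized: torch.Tensor) -> torch.Tensor:
    # Mean absolute appearance difference between the target view and the view
    # synthesized from an adjacent frame (both of shape (B, C, H, W)).
    return (target - synthesized).abs().mean()
```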

Frame • Monocular Depth Estimation

Adversarial View-Consistent Learning for Monocular Depth Estimation

no code implementations 4 Aug 2019 Yixuan Liu, Yuwang Wang, Shengjin Wang

To this end, we first design a differentiable depth map warping operation, which is end-to-end trainable, and then propose a pose generator to generate novel views for a given image in an adversarial manner.
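
For context on what a differentiable warping operation of this kind usually involves, the sketch below back-projects pixels with a depth map, applies a relative pose, re-projects, and samples with `grid_sample`. It is a generic inverse-warping sketch under assumed shapes for the intrinsics and pose, not the paper's specific operation.

```python
import torch
import torch.nn.functional as F

def warp_to_novel_view(src: torch.Tensor, depth: torch.Tensor,
                       K: torch.Tensor, T: torch.Tensor) -> torch.Tensor:
    """Differentiable warp of src (B, C, H, W) into a novel view using a depth
    map (B, 1, H, W), intrinsics K (B, 3, 3), and relative pose T (B, 4, 4)."""
    B, _, H, W = src.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=src.dtype),
                            torch.arange(W, dtype=src.dtype), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0)        # (3, H, W) homogeneous pixels
    pix = pix.reshape(1, 3, -1).expand(B, -1, -1)                  # (B, 3, H*W)
    # Back-project to camera space, transform by the relative pose, re-project.
    cam = torch.linalg.inv(K) @ pix * depth.reshape(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, dtype=src.dtype)], dim=1)
    proj = K @ (T @ cam_h)[:, :3]
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    # Normalize coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(src, grid, align_corners=True)
```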

Monocular Depth Estimation
