Search Results for author: Huiling Zhou

Found 7 papers, 3 papers with code

Single Stage Virtual Try-on via Deformable Attention Flows

1 code implementation · 19 Jul 2022 · Shuai Bai, Huiling Zhou, Zhikang Li, Chang Zhou, Hongxia Yang

Virtual try-on aims to generate a photo-realistic fitting result given an in-shop garment and a reference person image.

Image Animation · Virtual Try-on

M6-Fashion: High-Fidelity Multi-modal Image Generation and Editing

no code implementations · 24 May 2022 · Zhikang Li, Huiling Zhou, Shuai Bai, Peike Li, Chang Zhou, Hongxia Yang

The fashion industry has diverse applications in multi-modal image generation and editing.

Image Generation

In-N-Out Generative Learning for Dense Unsupervised Video Segmentation

1 code implementation · 29 Mar 2022 · Xiao Pan, Peike Li, Zongxin Yang, Huiling Zhou, Chang Zhou, Hongxia Yang, Jingren Zhou, Yi Yang

By contrast, pixel-level optimization is more explicit; however, it is sensitive to the visual quality of the training data and is not robust to object deformation.

Contrastive Learning · Semantic Segmentation · +3

Cross-domain User Preference Learning for Cold-start Recommendation

no code implementations · 7 Dec 2021 · Huiling Zhou, Jie Liu, Zhikang Li, Jin Yu, Hongxia Yang

With user history represented by a domain-aware sequential model, a frequency encoder is applied to the underlying tags for user content preference learning.

Recommendation Systems

M6: A Chinese Multimodal Pretrainer

no code implementations · 1 Mar 2021 · Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, Jie Zhang, Jianwei Zhang, Xu Zou, Zhikang Li, Xiaodong Deng, Jie Liu, Jinbao Xue, Huiling Zhou, Jianxin Ma, Jin Yu, Yong Li, Wei Lin, Jingren Zhou, Jie Tang, Hongxia Yang

In this work, we construct the largest dataset for multimodal pretraining in Chinese, which consists of over 1.9TB of images and 292GB of texts that cover a wide range of domains.

Image Generation

Network Clustering for Multi-task Learning

no code implementations · 22 Jan 2021 · Dehong Gao, Wenjing Yang, Huiling Zhou, Yi Wei, Yi Hu, Hao Wang

The majority of current MTL studies adopt the hard parameter sharing structure, where shared layers tend to learn general representations over all tasks while task-specific layers learn specific representations for each task.

Document Classification · Multi-Task Learning
