1 code implementation • CVPR 2024 • Shiyi Zhang, Sule Bai, Guangyi Chen, Lei Chen, Jiwen Lu, Junle Wang, Yansong Tang
NAE is a more challenging task because it requires both narrative flexibility and evaluation rigor.
no code implementations • 10 Mar 2024 • Chenxing Gao, Hang Zhou, Junqing Yu, Yuteng Ye, Jiale Cai, Junle Wang, Wei Yang
Understanding the mechanisms behind Vision Transformer (ViT), particularly its vulnerability to adversarial perturbations, is crucial for addressing challenges in its real-world applications.
1 code implementation • CVPR 2024 • Jiong Wang, Fengyu Yang, Wenbo Gou, Bingliang Li, Danqi Yan, Ailing Zeng, Yijun Gao, Junle Wang, Yanqing Jing, Ruimao Zhang
To facilitate the development of 3D pose estimation, we present FreeMan, the first large-scale, multi-view dataset collected under real-world conditions.
1 code implementation • 23 Aug 2023 • Siyue Yao, MingJie Sun, Bingliang Li, Fengyu Yang, Junle Wang, Ruimao Zhang
In this paper, we introduce a novel multi-dancer synthesis task called partner dancer generation, which involves synthesizing virtual human dancers capable of performing dance with users.
1 code implementation • 23 Aug 2023 • Ziyu Yang, Sucheng Ren, Zongwei Wu, Nanxuan Zhao, Junle Wang, Jing Qin, Shengfeng He
Non-photorealistic videos are in demand with the wave of the metaverse, yet they lack sufficient research attention.
no code implementations • ICCV 2023 • Luoyuan Xu, Tao Guan, Yuesong Wang, Wenkai Liu, Zhaojie Zeng, Junle Wang, Wei Yang
There is an emerging effort to combine the two popular 3D reconstruction frameworks, Multi-View Stereo (MVS) and Neural Implicit Surfaces (NIS), with a specific focus on the few-shot / sparse-view setting.
1 code implementation • CVPR 2023 • Lingteng Qiu, GuanYing Chen, Jiapeng Zhou, Mutian Xu, Junle Wang, Xiaoguang Han
To address the above limitations, in this paper, we formulate this task as an optimization problem of 3D garment feature curves and surface reconstruction from monocular video.
no code implementations • CVPR 2023 • Jie Yang, Chaoqun Wang, Zhen Li, Junle Wang, Ruimao Zhang
This paper presents Scalable Semantic Transfer (SST), a novel training paradigm, to explore how to leverage the mutual benefits of the data from different label domains (i.e., various levels of label granularity) to train a powerful human parsing network.
no code implementations • ICCV 2023 • Youjia Zhang, Teng Xu, Junqing Yu, Yuteng Ye, Junle Wang, Yanqing Jing, Jingyi Yu, Wei Yang
Recovering the physical attributes of an object's appearance from its images captured under an unknown illumination is challenging yet essential for photo-realistic rendering.
1 code implementation • 27 Nov 2022 • Yuteng Ye, Hang Zhou, Jiale Cai, Chenxing Gao, Youjia Zhang, Junle Wang, Qiang Hu, Junqing Yu, Wei Yang
The framework mainly consists of a sparse encoder, a multi-view feature matching module, and a feature consolidation decoder.
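The abstract only names the three stages; a minimal, hypothetical PyTorch skeleton of such a pipeline might look like the sketch below. Module names, dimensions, and the attention-based matching mechanism are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical skeleton: per-view sparse encoder -> cross-view feature matching
# -> feature consolidation decoder. All design choices here are assumptions.
import torch
import torch.nn as nn


class SparseEncoder(nn.Module):
    """Encodes each view independently into a compact feature map."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):                     # x: (B, V, C, H, W) multi-view batch
        b, v, c, h, w = x.shape
        f = self.net(x.flatten(0, 1))         # (B*V, dim, H/4, W/4)
        return f.view(b, v, *f.shape[1:])     # (B, V, dim, H/4, W/4)


class MultiViewMatcher(nn.Module):
    """Cross-view feature matching via attention over the view dimension."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, f):                     # f: (B, V, dim, h, w)
        b, v, d, h, w = f.shape
        tokens = f.permute(0, 3, 4, 1, 2).reshape(b * h * w, v, d)
        matched, _ = self.attn(tokens, tokens, tokens)
        return matched.reshape(b, h, w, v, d).permute(0, 3, 4, 1, 2)


class ConsolidationDecoder(nn.Module):
    """Fuses matched multi-view features and decodes a per-pixel prediction."""
    def __init__(self, dim=64, out_ch=1):
        super().__init__()
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(dim, dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(dim, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, f):                     # f: (B, V, dim, h, w)
        return self.decode(f.mean(dim=1))     # average views, then upsample


views = torch.randn(2, 4, 3, 64, 64)          # 2 scenes, 4 views each
feats = SparseEncoder()(views)
out = ConsolidationDecoder()(MultiViewMatcher()(feats))
print(out.shape)                              # torch.Size([2, 1, 64, 64])
```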
no code implementations • 23 May 2022 • Hao Zhang, Ruimao Zhang, Zhanglin Peng, Junle Wang, Yanqing Jing
A simple pixel selection strategy, followed by the construction of multi-level contrastive units, is introduced to optimize the model for both domain adaptation and active supervised learning.
1 code implementation • CVPR 2022 • Yangyang Xu, Bailin Deng, Junle Wang, Yanqing Jing, Jia Pan, Shengfeng He
Although previous research can leverage generative priors to produce high-resolution results, their quality can suffer from the entangled semantics of the latent space.
1 code implementation • 13 Nov 2021 • Shaoguo Wen, Junle Wang
In this work, we present a simple yet effective unified model for perceptual quality assessment of image and video.
1 code implementation • 13 Oct 2021 • Suiyi Ling, Andreas Pastor, Junle Wang, Patrick Le Callet
In this paper, we thus propose (1) a re-adapted multi-task attention network that predicts both the mean opinion score and its standard deviation in an end-to-end manner, and (2) a new confidence-interval ranking loss that encourages the model to focus on image pairs whose difference in aesthetic scores is less certain.
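To make the second idea concrete, the sketch below shows one illustrative way a confidence-interval-aware pairwise ranking loss could be written in PyTorch: pairs whose predicted score difference is uncertain (overlapping intervals) receive more weight. The exact formulation in the paper differs; the function name, the Gaussian assumption, and the weighting scheme are assumptions.

```python
# Illustrative sketch only: weight ambiguous pairs more heavily in a ranking loss.
import torch


def ci_ranking_loss(mu, sigma, pair_idx, gt_order):
    """mu, sigma: (N,) predicted mean opinion scores and standard deviations.
    pair_idx: (P, 2) indices (i, j); gt_order: (P,) +1 if item i is rated higher, else -1."""
    i, j = pair_idx[:, 0], pair_idx[:, 1]
    diff = mu[i] - mu[j]
    # Uncertainty of the score difference, assuming independent Gaussian predictions.
    spread = torch.sqrt(sigma[i] ** 2 + sigma[j] ** 2 + 1e-8)
    # Probability that the predicted ordering agrees with the ground truth (Gaussian CDF).
    p_correct = 0.5 * (1 + torch.erf(gt_order * diff / (spread * 2 ** 0.5)))
    # Ambiguous pairs (p_correct near 0.5) get weight near 1, confident pairs near 0.
    weight = 1.0 - (2 * p_correct - 1).abs().detach()
    return -(weight * torch.log(p_correct + 1e-8)).mean()


mu = torch.tensor([3.2, 2.9, 4.1], requires_grad=True)
sigma = torch.tensor([0.5, 0.8, 0.3])
pairs = torch.tensor([[0, 1], [2, 0]])
order = torch.tensor([1.0, 1.0])
print(ci_ranking_loss(mu, sigma, pairs, order))
```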
1 code implementation • 27 Jan 2021 • Shaoguo Wen, Suiyi Ling, Junle Wang, Ximing Chen, Lizhi Fang, Yanqing Jing, Patrick Le Callet
With the rapid expansion of gaming video streaming techniques and services, users' expectations for quality of experience, especially among mobile phone users, are growing swiftly.
no code implementations • 27 Jan 2021 • Zhenyu Lei, Yejing Xie, Suiyi Ling, Andreas Pastor, Junle Wang, Patrick Le Callet
We seek to learn the correlations between different aesthetics-related dimensions to further boost generalization performance in predicting all of them.
no code implementations • 1 Mar 2020 • Jing Li, Suiyi Ling, Junle Wang, Zhi Li, Patrick Le Callet
In the big data era, data labeling can be obtained through crowdsourcing.
no code implementations • 28 Mar 2019 • Suiyi Ling, Jing Li, Junle Wang, Patrick Le Callet
In this paper, we propose a no-reference (NR) quality metric for RGB plus depth (RGB-D) synthesized images based on Generative Adversarial Networks (GANs), namely GANs-NQM.
1 code implementation • NeurIPS 2018 • Jing Li, Rafal K. Mantiuk, Junle Wang, Suiyi Ling, Patrick Le Callet
In this paper we present a hybrid active sampling strategy for pairwise preference aggregation, which aims at recovering the underlying rating of the test candidates from sparse and noisy pairwise labelling.
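As background for what "recovering the underlying rating from pairwise labels" means, the sketch below fits latent quality scores to pairwise win counts under a Thurstone Case V model by gradient descent. This is a generic illustration of preference aggregation, not the paper's hybrid active sampling strategy; the function name, optimizer, and hyperparameters are assumptions.

```python
# Generic preference aggregation sketch (Thurstone Case V, maximum likelihood).
import torch


def aggregate_scores(wins, n_iter=500, lr=0.1):
    """wins[i, j] = number of times condition i was preferred over condition j."""
    n = wins.shape[0]
    scores = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([scores], lr=lr)
    normal = torch.distributions.Normal(0.0, 1.0)
    for _ in range(n_iter):
        opt.zero_grad()
        diff = scores.unsqueeze(1) - scores.unsqueeze(0)   # (n, n) score differences
        p = normal.cdf(diff).clamp(1e-6, 1 - 1e-6)         # P(i preferred over j)
        nll = -(wins * torch.log(p)).sum()                 # negative log-likelihood
        nll.backward()
        opt.step()
    return (scores - scores.mean()).detach()               # zero-mean latent scores


# Three conditions: 0 beats 1 and 2 most of the time; 1 mostly beats 2.
wins = torch.tensor([[0., 8., 9.], [2., 0., 7.], [1., 3., 0.]])
print(aggregate_scores(wins))   # expected ordering: scores[0] > scores[1] > scores[2]
```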