no code implementations • ECCV 2020 • Yanchun Xie, Jimin Xiao, Ming-Jie Sun, Chao Yao, Kai-Zhu Huang
To this end, we employ neural texture transfer to swap texture features between the low-resolution image and the high-resolution reference image.
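As a rough, hypothetical sketch of the texture-swapping idea (not the authors' code), reference feature patches can be matched against the low-resolution feature map via correlation with L2-normalized patches, and the best-matching reference patches folded back into a "swapped" texture feature. The function name `swap_texture`, the assumption of shared resolution, and the patch size are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def swap_texture(lr_feat, ref_feat, patch=3):
    # lr_feat, ref_feat: (1, C, H, W) feature maps from some encoder (assumed here
    # to share resolution and channel count); patch is the matching window size.
    ref_patches = F.unfold(ref_feat, patch, padding=patch // 2)          # (1, C*p*p, N)
    ref_norm = ref_patches / (ref_patches.norm(dim=1, keepdim=True) + 1e-6)
    # Score every reference patch against the LR features via convolution.
    kernels = ref_norm.squeeze(0).transpose(0, 1).reshape(-1, lr_feat.size(1), patch, patch)
    scores = F.conv2d(lr_feat, kernels, padding=patch // 2)              # (1, N, H, W)
    best = scores.argmax(dim=1)                                          # best ref patch per position
    # Gather the matching (un-normalized) reference patches for every LR position.
    idx = best.view(1, 1, -1).expand(-1, ref_patches.size(1), -1)
    swapped = torch.gather(ref_patches, 2, idx)
    # Fold the overlapping patches back into a feature map, averaging the overlaps.
    out = F.fold(swapped, lr_feat.shape[-2:], patch, padding=patch // 2)
    weight = F.fold(torch.ones_like(swapped), lr_feat.shape[-2:], patch, padding=patch // 2)
    return out / weight
```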
1 code implementation • 8 Oct 2024 • Qi Tang, Yao Zhao, Meiqin Liu, Chao Yao
Diffusion-based Video Super-Resolution (VSR) is renowned for generating perceptually realistic videos, yet it grapples with maintaining detail consistency across frames due to stochastic fluctuations.
1 code implementation • 4 Jun 2024 • Philip Anastassiou, Jiawei Chen, Jitong Chen, Yuanzhe Chen, Zhuo Chen, Ziyi Chen, Jian Cong, Lelai Deng, Chuang Ding, Lu Gao, Mingqing Gong, Peisong Huang, Qingqing Huang, Zhiying Huang, YuanYuan Huo, Dongya Jia, ChuMin Li, Feiya Li, Hui Li, Jiaxin Li, Xiaoyang Li, Xingxing Li, Lin Liu, Shouda Liu, Sichao Liu, Xudong Liu, Yuchen Liu, Zhengxi Liu, Lu Lu, Junjie Pan, Xin Wang, Yuping Wang, Yuxuan Wang, Zhen Wei, Jian Wu, Chao Yao, Yifeng Yang, YuanHao Yi, Junteng Zhang, Qidi Zhang, Shuo Zhang, Wenjie Zhang, Yang Zhang, Zilin Zhao, Dejian Zhong, Xiaobin Zhuang
Seed-TTS offers superior controllability over various speech attributes such as emotion and is capable of generating highly expressive and diverse speech for speakers in the wild.
1 code implementation • 23 May 2024 • Meiqin Liu, Chenming Xu, Yukai Gu, Chao Yao, Yao Zhao
Previous neural video compression methods necessitate distinct codecs for three types of frames (I-frame, P-frame and B-frame), which hinders a unified approach and generalization across different video contexts.
1 code implementation • 25 Mar 2024 • Yirong Zeng, Xiao Ding, Yi Zhao, Xiangyu Li, Jie Zhang, Chao Yao, Ting Liu, Bing Qin
Furthermore, we construct RU22Fact, a novel multilingual explainable fact-checking dataset of 16K samples on the 2022 Russia-Ukraine conflict, each sample containing a real-world claim, optimized evidence, and a referenced explanation.
1 code implementation • 11 Jan 2024 • Changtai Li, Xu Han, Chao Yao, Xiaojuan Ban
Efficient and accurate extraction of microstructures in micrographs of materials is essential in process optimization and the exploration of structure-property relationships.
no code implementations • 27 Dec 2023 • Xueyuan Yang, Chao Yao, Xiaojuan Ban
Leveraging wearable devices for motion reconstruction has emerged as an economical and viable technique.
1 code implementation • 13 Dec 2023 • Qi Tang, Yao Zhao, Meiqin Liu, Jian Jin, Chao Yao
As a critical clue of video super-resolution (VSR), inter-frame alignment significantly impacts overall performance.
1 code implementation • 25 Sep 2023 • Chenming Xu, Meiqin Liu, Chao Yao, Weisi Lin, Yao Zhao
Learned B-frame video compression adopts bi-directional motion estimation and motion compensation (MEMC) coding to reconstruct the middle frame.
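A generic sketch of bi-directional motion compensation (not this paper's codec) is shown below: the middle frame is predicted by backward-warping the two reference frames with estimated flows and blending them with a soft mask. The flow and mask estimators, the flow channel order (x then y), and the helper names `backward_warp` / `predict_b_frame` are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Warp img (B, C, H, W) with flow (B, 2, H, W) given in pixels (x, y order)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).to(img)                       # (2, H, W)
    coords = grid.unsqueeze(0) + flow                                 # sampling positions
    # Normalize to [-1, 1] for grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_n = torch.stack((coords_x, coords_y), dim=-1)                # (B, H, W, 2)
    return F.grid_sample(img, grid_n, align_corners=True)

def predict_b_frame(ref_prev, ref_next, flow_to_prev, flow_to_next, mask):
    """mask in [0, 1] weights the two warped references (assumed to be learned)."""
    warped_prev = backward_warp(ref_prev, flow_to_prev)
    warped_next = backward_warp(ref_next, flow_to_next)
    return mask * warped_prev + (1.0 - mask) * warped_next
```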
no code implementations • 16 Mar 2023 • Jiaming Liang, Meiqin Liu, Chao Yao, Chunyu Lin, Yao Zhao
The variable-rate mechanism has improved the flexibility and efficiency of learning-based image compression, which otherwise trains multiple models for different rate-distortion trade-offs.
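One common way to realize variable rates in a single model, shown here as a generic "gain unit" sketch and not necessarily the mechanism proposed in this paper, is to rescale the latent per channel with learnable gain vectors indexed, and interpolated, by a rate level before quantization. The class name `GainUnit` and the shapes are illustrative.

```python
import torch
import torch.nn as nn

class GainUnit(nn.Module):
    """Per-channel latent scaling covering several rate levels in one model (illustrative)."""
    def __init__(self, num_levels: int, channels: int):
        super().__init__()
        self.gains = nn.Parameter(torch.ones(num_levels, channels))  # one gain vector per level

    def forward(self, latent: torch.Tensor, level: float) -> torch.Tensor:
        # Interpolate between the two nearest discrete levels to allow continuous rates.
        lo = int(level)
        hi = min(lo + 1, self.gains.size(0) - 1)
        t = level - lo
        gain = (1.0 - t) * self.gains[lo] + t * self.gains[hi]
        return latent * gain.view(1, -1, 1, 1)

# Usage sketch (encoder assumed): y = encoder(x); y_hat = torch.round(GainUnit(4, 192)(y, 1.5))
```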
1 code implementation • 3 Nov 2022 • Meiqin Liu, Shuo Jin, Chao Yao, Chunyu Lin, Yao Zhao
A spatio-temporal stability module is designed to learn self-alignment from inter-frame information.
1 code implementation • 9 Jun 2022 • Meiqin Liu, Chenming Xu, Chao Yao, Chunyu Lin, Yao Zhao
Video frame interpolation (VFI) aims to generate intermediate frames by warping learnable motions from bidirectional historical references.
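Under a constant-velocity assumption, the motions toward the intermediate frame at time t can be approximated by rescaling the bidirectional flows between the two references; the sketch below is a generic illustration of that step (not this paper's motion model), with a separate warping and fusion module assumed downstream and `intermediate_flows` being a hypothetical helper name.

```python
import torch

def intermediate_flows(flow_0to1: torch.Tensor, flow_1to0: torch.Tensor, t: float):
    """Approximate flows from the frame at time t (0 < t < 1) to both references.

    flow_0to1 / flow_1to0: (B, 2, H, W) pixel displacements between the references,
    assumed to come from some flow estimator; motion is assumed locally linear.
    """
    flow_t_to_0 = t * flow_1to0          # the pixel has travelled a fraction t from reference 0
    flow_t_to_1 = (1.0 - t) * flow_0to1  # and has a fraction (1 - t) left to reference 1
    return flow_t_to_0, flow_t_to_1
```

Both references would then be backward-warped with these flows and fused to synthesize the intermediate frame.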
no code implementations • 14 May 2022 • Chao Yao, Shuo Jin, Meiqin Liu, Xiaojuan Ban
In this paper, we propose a Transformer-based image denoising network, named DenSformer.
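As a rough, hypothetical illustration of a Transformer-based denoiser (not the actual DenSformer architecture), the sketch below flattens shallow convolutional features into tokens, refines them with a few standard Transformer encoder layers, and adds the projected result back to the noisy input as a global residual. The class name and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TinyTransformerDenoiser(nn.Module):
    def __init__(self, channels=3, dim=64, depth=4, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(channels, dim, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=depth)
        self.proj = nn.Conv2d(dim, channels, kernel_size=3, padding=1)

    def forward(self, noisy):                       # noisy: (B, C, H, W)
        feat = self.embed(noisy)                    # (B, dim, H, W)
        b, d, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)    # (B, H*W, dim) token sequence
        tokens = self.body(tokens)                  # self-attention over all positions
        feat = tokens.transpose(1, 2).view(b, d, h, w)
        return noisy + self.proj(feat)              # global residual: predict a correction

# Usage (assumed shapes): out = TinyTransformerDenoiser()(torch.randn(1, 3, 64, 64))
```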
no code implementations • 25 Nov 2019 • Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang, Chao Yao
For the same action, the knowledge learned from different media types, e.g., videos and images, may be related and complementary.
no code implementations • 18 Sep 2019 • Yang Liu, Zhaoyang Lu, Jing Li, Chao Yao, Yanzi Deng
However, infrared action data has so far been limited, which degrades the performance of infrared action recognition.
no code implementations • 18 Sep 2019 • Yang Liu, Zhaoyang Lu, Jing Li, Tao Yang, Chao Yao
Existing methods for infrared action recognition are based on either spatial or local temporal information; however, global temporal information, which can better describe the movements of body parts across the whole video, is not considered.