1 code implementation • 12 Feb 2025 • Yujie Zhou, Jiazi Bu, Pengyang Ling, Pan Zhang, Tong Wu, Qidong Huang, Jinsong Li, Xiaoyi Dong, Yuhang Zang, Yuhang Cao, Anyi Rao, Jiaqi Wang, Li Niu
Second, leveraging the physical principle of light transport independence, we apply linear blending between the source video's appearance and the relighted appearance, using a Progressive Light Fusion (PLF) strategy to ensure smooth temporal transitions in illumination.
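A minimal sketch of what such a blending step could look like, assuming (T, C, H, W) frame tensors and a linearly increasing per-frame fusion weight; the schedule and tensor layout are illustrative assumptions, not the paper's exact Progressive Light Fusion implementation:

```python
import torch

def progressive_light_fusion(source_frames: torch.Tensor,
                             relit_frames: torch.Tensor) -> torch.Tensor:
    """Blend the source appearance with the relit appearance frame by frame.

    Illustrative sketch: the fusion weight ramps up linearly across time, so
    early frames stay close to the source video and later frames approach the
    fully relit appearance, yielding a smooth temporal transition in lighting.
    The linear ramp is an assumed schedule for demonstration purposes.
    """
    T = source_frames.shape[0]
    # Per-frame fusion weights in [0, 1], increasing linearly over time (assumption).
    weights = torch.linspace(0.0, 1.0, T, device=source_frames.device)
    weights = weights.view(T, 1, 1, 1)
    # Light transport independence lets the two appearances combine linearly.
    return (1.0 - weights) * source_frames + weights * relit_frames
```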
no code implementations • 8 Oct 2024 • Jiazi Bu, Pengyang Ling, Pan Zhang, Tong Wu, Xiaoyi Dong, Yuhang Zang, Yuhang Cao, Dahua Lin, Jiaqi Wang
Additionally, we observe that the energy contained within the temporal attention maps is directly related to the motion amplitude of the generated videos.
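One hedged way to make this observation concrete, assuming softmax-normalized temporal attention matrices of shape (layers, heads, tokens, T, T); the off-diagonal squared-mass definition of "energy" below is an illustrative proxy and may differ from the paper's formulation:

```python
import torch

def temporal_attention_energy(attn_maps: torch.Tensor) -> torch.Tensor:
    """Rough proxy for the energy of temporal attention maps.

    Higher off-diagonal mass means each frame attends more strongly to other
    frames, which is the kind of quantity one would expect to grow with the
    motion amplitude of the generated video (assumed definition).
    """
    T = attn_maps.shape[-1]
    eye = torch.eye(T, device=attn_maps.device, dtype=attn_maps.dtype)
    off_diagonal = attn_maps * (1.0 - eye)  # drop self-attention (static) terms
    return off_diagonal.pow(2).sum(dim=(-1, -2)).mean()
```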
2 code implementations • 8 Jun 2024 • Pengyang Ling, Jiazi Bu, Pan Zhang, Xiaoyi Dong, Yuhang Zang, Tong Wu, Huaian Chen, Jiaqi Wang, Yi Jin
Motion-based controllable video generation offers the potential for creating captivating visual content.
no code implementations • 30 Jan 2024 • Danning Lao, Qi Liu, Jiazi Bu, Junchi Yan, Wei Shen
As computer vision continues to advance and finds widespread application across various domains, the need for interpretability in deep learning models becomes paramount.
1 code implementation • 25 Dec 2023 • Xiangyuan Xue, Kailing Wang, Jiazi Bu, Qirui Li, Zhiyuan Zhang
In this work, we propose MetaScript, a novel Chinese content generation system designed to address the diminishing presence of personal handwriting styles in the digital representation of Chinese characters.