no code implementations • 16 Jun 2020 • Jie An, Tao Li, Hao-Zhi Huang, Li Shen, Xuan Wang, Yongyi Tang, Jinwen Ma, Wei Liu, Jiebo Luo
Extracting effective deep features to represent content and style information is the key to universal style transfer.
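To make the idea concrete, here is a minimal sketch of one common way such features are obtained (a generic pretrained-VGG formulation with Gram-matrix style statistics; the network and layer indices are illustrative assumptions, not this paper's method):

```python
import torch
import torchvision.models as models

# Illustrative sketch: content/style features from a pretrained VGG-19,
# following the widely used Gram-matrix formulation, not this paper's method.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

def extract_features(image, layers=(3, 8, 17, 26)):
    """Collect activations at a few ReLU layers of VGG-19 (indices illustrative)."""
    feats, x = [], image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram_matrix(feat):
    """Style representation: channel-wise correlations of a feature map."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

image = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed input image
with torch.no_grad():
    content_feats = extract_features(image)                 # content information
    style_feats = [gram_matrix(f) for f in content_feats]   # style information
```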
1 code implementation • 28 May 2019 • Yongyi Tang, Lin Ma, Lianqiang Zhou
However, extracting motion information, specifically in the form of optical flow features, is extremely computationally expensive, especially for large-scale video classification.
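The cost being avoided can be made concrete with a small sketch (OpenCV's Farneback flow here as a stand-in; TV-L1, the usual choice for two-stream video models, is even slower): dense flow must be computed for every consecutive frame pair.

```python
import time
import numpy as np
import cv2

# Stand-in for grayscale video frames; real clips are far longer.
frames = [np.random.randint(0, 256, (240, 320), dtype=np.uint8)
          for _ in range(16)]

start = time.time()
flows = []
for prev, nxt in zip(frames[:-1], frames[1:]):
    # Dense per-pixel displacement field for one frame pair.
    flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flows.append(flow)  # shape (H, W, 2)
elapsed = time.time() - start
print(f"{len(flows)} flow fields in {elapsed:.2f}s "
      f"({elapsed / len(flows) * 1000:.0f} ms per frame pair)")
```

Multiplying that per-pair cost by millions of videos is what makes flow extraction prohibitive at YouTube-8M scale.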
no code implementations • 29 Sep 2018 • Yongyi Tang, Xing Zhang, Jingwen Wang, Shaoxiang Chen, Lin Ma, Yu-Gang Jiang
This paper describes our solution for the 2nd YouTube-8M video understanding challenge organized by Google AI.

no code implementations • 7 May 2018 • Yongyi Tang, Lin Ma, Wei Liu, Wei-Shi Zheng
Human motion prediction aims at generating future frames of human motion based on an observed sequence of skeletons.
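As a minimal illustration of the task's input/output structure (a generic GRU sequence-to-sequence sketch with a residual pose decoder, not this paper's model), the predictor encodes T observed skeleton poses and autoregressively emits future ones:

```python
import torch
import torch.nn as nn

class SkeletonPredictor(nn.Module):
    """Toy seq2seq baseline: encode observed poses, decode future poses."""
    def __init__(self, num_joints=25, dims=3, hidden=256):
        super().__init__()
        self.pose_dim = num_joints * dims
        self.encoder = nn.GRU(self.pose_dim, hidden, batch_first=True)
        self.decoder = nn.GRUCell(self.pose_dim, hidden)
        self.out = nn.Linear(hidden, self.pose_dim)

    def forward(self, observed, future_len):
        # observed: (batch, T, pose_dim) flattened joint coordinates
        _, h = self.encoder(observed)
        h = h.squeeze(0)
        pose = observed[:, -1]           # start from the last seen pose
        preds = []
        for _ in range(future_len):
            h = self.decoder(pose, h)
            pose = pose + self.out(h)    # predict a residual displacement
            preds.append(pose)
        return torch.stack(preds, dim=1) # (batch, future_len, pose_dim)

model = SkeletonPredictor()
observed = torch.randn(2, 50, 25 * 3)    # 50 observed frames, 25 3-D joints
future = model(observed, future_len=25)  # 25 predicted future frames
```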
no code implementations • 20 Sep 2017 • Yongyi Tang, Peizhen Zhang, Jian-Fang Hu, Wei-Shi Zheng
Rather than simply recognizing the action of each person individually, collective activity recognition aims to determine what activity a group of people is performing in a collective scene.
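A minimal sketch of that distinction (an illustrative baseline, not the paper's approach): per-person features are aggregated with a permutation-invariant pooling step before a single group-level classifier is applied.

```python
import torch
import torch.nn as nn

class GroupActivityClassifier(nn.Module):
    """Toy baseline: pool per-person features into one group descriptor."""
    def __init__(self, person_dim=512, num_activities=8):
        super().__init__()
        self.classifier = nn.Linear(person_dim, num_activities)

    def forward(self, person_feats):
        # person_feats: (batch, num_people, person_dim), one vector per person
        group_feat = person_feats.max(dim=1).values  # permutation-invariant pool
        return self.classifier(group_feat)           # group-level activity logits

model = GroupActivityClassifier()
feats = torch.randn(4, 6, 512)   # 6 detected people per scene
logits = model(feats)            # (4, 8) collective-activity scores
```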
no code implementations • 4 Jul 2017 • Shaoxiang Chen, Xi Wang, Yongyi Tang, Xinpeng Chen, Zuxuan Wu, Yu-Gang Jiang
This paper introduces the system we developed for the Google Cloud & YouTube-8M Video Understanding Challenge, which can be cast as a multi-label classification problem defined on top of the large-scale YouTube-8M Dataset.
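In multi-label form, each label gets an independent sigmoid output trained with binary cross-entropy, since a video can carry many labels at once. A generic sketch follows (mean-pooled frame features and a linear classifier as stand-ins, not the team's actual architecture; the label count matches the 2017 challenge release):

```python
import torch
import torch.nn as nn

NUM_CLASSES = 4716        # label vocabulary in the 2017 challenge release
FEATURE_DIM = 1024 + 128  # provided visual + audio frame features

# Toy video-level model: mean-pool frame features, one sigmoid per label.
model = nn.Linear(FEATURE_DIM, NUM_CLASSES)
criterion = nn.BCEWithLogitsLoss()  # independent binary decision per label

frames = torch.randn(8, 300, FEATURE_DIM)  # batch of 300-frame videos
labels = torch.zeros(8, NUM_CLASSES)
labels[:, [5, 42]] = 1.0                   # multiple labels per video

logits = model(frames.mean(dim=1))         # mean-pool over time
loss = criterion(logits, labels)
probs = torch.sigmoid(logits)              # per-label confidences
```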