2 code implementations • 7 Mar 2023 • Mingzhen Sun, Weining Wang, Xinxin Zhu, Jing Liu
Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101.
no code implementations • 23 Aug 2022 • Matias D. Cattaneo, Richard K. Crump, Weining Wang
Beta-sorted portfolios -- portfolios composed of assets with similar covariation to selected risk factors -- are a popular tool in empirical finance to analyze models of (conditional) expected returns.
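The construction named above follows a standard recipe: estimate each asset's factor beta by time-series regression, sort assets into buckets by estimated beta, and average returns within each bucket. A minimal sketch on simulated data; all parameter values and variable names here are illustrative, and the paper's actual estimator and asymptotic analysis go well beyond this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel: T monthly returns for N assets and one risk factor.
T, N = 120, 50
factor = rng.normal(0.0, 0.02, size=T)          # factor returns
betas_true = rng.uniform(0.5, 1.5, size=N)      # latent exposures
returns = factor[:, None] * betas_true + rng.normal(0.0, 0.03, size=(T, N))

# Step 1: estimate each asset's beta by time-series OLS on the factor.
X = np.column_stack([np.ones(T), factor])
coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
betas_hat = coef[1]                              # slope per asset

# Step 2: sort assets into quintile portfolios by estimated beta.
order = np.argsort(betas_hat)
quintiles = np.array_split(order, 5)

# Step 3: equal-weighted portfolio returns for each beta bucket.
portfolio_returns = np.stack(
    [returns[:, idx].mean(axis=1) for idx in quintiles], axis=1
)

print(portfolio_returns.shape)  # (120, 5)
```

The spread in average returns between the top and bottom beta quintiles is then the usual object of interest when testing a factor model.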
no code implementations • 8 May 2022 • Toru Kitagawa, Weining Wang, Mengshan Xu
This paper develops a novel method for policy choice in a dynamic setting where the available data is a multivariate time series.
1 code implementation • 24 Mar 2022 • Qi Li, Weining Wang, Chengzhong Xu, Zhenan Sun
In addition, semantic information is introduced into the semantic-guided fusion module to control the swapped area and model the pose and expression more accurately.
no code implementations • 27 Jan 2022 • Georg Keilbar, Juan M. Rodriguez-Poo, Alexandra Soberon, Weining Wang
We derive its asymptotic properties, showing that the limiting distribution has a discontinuity that depends on the explanatory power of our basis functions, as measured by the variance of the error in the factor loadings.
1 code implementation • 15 Nov 2021 • Xiu Xu, Weining Wang, Yongcheol Shin, Chaowen Zheng
We propose a dynamic network quantile regression model to investigate quantile connectedness using predetermined network information.
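The building block of any quantile regression model is the check (pinball) loss, whose minimiser is a conditional quantile. A minimal numpy illustration on simulated data, with a simple grid search standing in for the paper's actual dynamic network estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=10_000)
tau = 0.9  # target quantile level

def pinball(u, tau):
    # Check (pinball) loss used in quantile regression.
    return np.where(u >= 0, tau * u, (tau - 1) * u)

# Minimise the mean pinball loss over a grid of candidate values.
grid = np.linspace(-3, 3, 1201)
losses = [pinball(y - q, tau).mean() for q in grid]
q_hat = grid[int(np.argmin(losses))]

# The minimiser coincides with the empirical tau-quantile of y.
print(q_hat, float(np.quantile(y, tau)))
```

In the regression setting, the same loss is minimised over coefficients of covariates (here, functions of the network and lagged values) rather than over a single constant.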
no code implementations • 6 Sep 2021 • Xingjian He, Weining Wang, Zhiyong Xu, Hao Wang, Jie Jiang, Jing Liu
Compared with image scene parsing, video scene parsing introduces temporal information, which can effectively improve the consistency and accuracy of prediction.
1 code implementation • 1 Jul 2021 • Jing Liu, Xinxin Zhu, Fei Liu, Longteng Guo, Zijia Zhao, Mingzhen Sun, Weining Wang, Hanqing Lu, Shiyu Zhou, Jiajun Zhang, Jinqiao Wang
In this paper, we propose an Omni-perception Pre-Trainer (OPT) for cross-modal understanding and generation, by jointly modeling visual, text and audio resources.
Ranked #1 on Image Retrieval on Localized Narratives
no code implementations • 29 Jun 2021 • Xingqun Qi, Muyi Sun, Weining Wang, Xiaoxiao Dong, Qi Li, Caifeng Shan
To tackle these challenges, we propose a novel Semantic-Driven Generative Adversarial Network (SDGAN) which embeds global structure-level style injection and local class-level knowledge re-weighting.
no code implementations • 16 May 2021 • Victor Chernozhukov, Chen Huang, Weining Wang
We uncover the network effect with a flexible sparse deviation from a predetermined adjacency matrix.
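A common device for estimating a sparse deviation from a known matrix is l1 shrinkage via soft-thresholding. The sketch below is purely illustrative (simulated network, hand-picked threshold) and is not the estimator used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20

# Predetermined adjacency matrix (e.g. from prior knowledge).
A0 = (rng.random((n, n)) < 0.2).astype(float)

# True network: A0 plus a sparse deviation Delta (a few extra edges).
Delta = np.zeros((n, n))
idx = rng.choice(n * n, size=8, replace=False)
Delta.flat[idx] = 1.0
A_noisy = A0 + Delta + rng.normal(0.0, 0.1, size=(n, n))

def soft_threshold(x, lam):
    # Proximal operator of the l1 penalty: shrinks small entries to zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Estimate the sparse deviation by soft-thresholding the residual A - A0.
Delta_hat = soft_threshold(A_noisy - A0, lam=0.5)

print(int((np.abs(Delta_hat) > 0).sum()))  # recovers the sparse support
```

The threshold level trades off false edges against missed ones; in practice it would be chosen by a data-driven rule rather than fixed by hand.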
1 code implementation • 17 Feb 2021 • Hao Wang, Weining Wang, Jing Liu
Video semantic segmentation requires exploiting the complex temporal relations between frames of a video sequence.
Ranked #1 on Video Semantic Segmentation on CamVid
no code implementations • ICCV 2021 • Fei Liu, Jing Liu, Weining Wang, Hanqing Lu
Specifically, we present a novel graph memory mechanism to perform relational reasoning, and further develop two types of graph memory: a) visual graph memory that leverages visual information of video for relational reasoning; b) semantic graph memory that is specifically designed to explicitly leverage semantic knowledge contained in the classes and attributes of video objects, and perform relational reasoning in the semantic space.
no code implementations • 16 Dec 2020 • Xinxin Zhu, Weining Wang, Longteng Guo, Jing Liu
The whole process involves a visual understanding module and a language generation module, which brings more challenges to the design of deep neural networks than other tasks.
no code implementations • 23 Sep 2020 • Ai Jun Hou, Weining Wang, Cathy Y. H. Chen, Wolfgang Karl Härdle
We show how the proposed pricing mechanism underlines the importance of jumps in cryptocurrency markets.
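To see why jumps matter for pricing, a standard point of comparison is a jump-diffusion such as Merton's model, where log returns mix a Brownian diffusion with Poisson-arriving jumps. The Monte Carlo sketch below uses illustrative, crypto-flavoured parameters and is not the pricing mechanism proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Merton jump-diffusion Monte Carlo; all parameters are illustrative.
S0, mu, sigma = 100.0, 0.05, 0.8            # high volatility, crypto-like
lam, jump_mu, jump_sigma = 2.0, -0.1, 0.3   # jump intensity and size
T, n_steps, n_paths = 1.0, 252, 10_000
dt = T / n_steps

log_S = np.full(n_paths, np.log(S0))
for _ in range(n_steps):
    # Diffusion part of the log return over one step.
    diffusion = (mu - 0.5 * sigma**2) * dt \
        + sigma * np.sqrt(dt) * rng.normal(size=n_paths)
    # Poisson number of jumps this step; jump sizes are i.i.d. normal.
    n_jumps = rng.poisson(lam * dt, size=n_paths)
    jumps = n_jumps * jump_mu \
        + np.sqrt(n_jumps) * jump_sigma * rng.normal(size=n_paths)
    log_S += diffusion + jumps

S_T = np.exp(log_S)
print(S_T.mean())
```

Pricing an option then amounts to discounting the payoff averaged over such paths; with jumps switched off (lam = 0) the model collapses to Black-Scholes, which is why the jump component is the natural object to test for.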
no code implementations • CVPR 2019 • Weining Wang, Yan Huang, Liang Wang
Current studies on action detection in untrimmed videos are mostly designed for action classes, where an action is described at the word level, such as jumping, tumbling, swinging, etc.