Search Results for author: Xun Guo

Found 9 papers, 3 papers with code

What Makes for Good Representations for Contrastive Learning

no code implementations • 29 Sep 2021 • Haoqing Wang, Xun Guo, Zhi-Hong Deng, Yan Lu

Therefore, we assume that task-relevant information not shared between views cannot be ignored, and theoretically prove that the minimal sufficient representation in contrastive learning is not sufficient for downstream tasks, which degrades performance.

Contrastive Learning • Representation Learning

Cross-Stage Transformer for Video Learning

no code implementations • 29 Sep 2021 • Yuanze Lin, Xun Guo, Yan Lu

By inserting the proposed cross-stage mechanism into existing spatial and temporal transformer blocks, we build a separable transformer network for video learning based on the ViT structure, in which self-attentions and features are progressively aggregated from one block to the next.
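The snippet above describes features being progressively aggregated from one block to the next. A minimal sketch of that idea, using linear+ReLU stand-ins for the transformer blocks and an illustrative mixing weight `alpha` (both assumptions, not the paper's actual design):

```python
import numpy as np

def block(x, w):
    # stand-in for one transformer block: a linear map followed by ReLU
    return np.maximum(x @ w, 0.0)

def cross_stage_forward(x, weights, alpha=0.5):
    """Feed each block's input features forward into the next stage,
    so information is progressively aggregated across blocks."""
    prev_feat = None
    for w in weights:
        inp = x if prev_feat is None else x + alpha * prev_feat
        prev_feat = x          # remember this stage's features for the next one
        x = block(inp, w)
    return x

rng = np.random.default_rng(0)
tokens = rng.standard_normal((4, 8))              # 4 tokens, embedding dim 8
ws = [rng.standard_normal((8, 8)) for _ in range(3)]
out = cross_stage_forward(tokens, ws)
print(out.shape)  # (4, 8)
```

The cross-stage skip keeps earlier-stage features in play, rather than letting each block see only its immediate predecessor's output.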

Action Recognition • Temporal Action Localization

Self-Supervised Video Representation Learning with Meta-Contrastive Network

no code implementations • ICCV 2021 • Yuanze Lin, Xun Guo, Yan Lu

Our method contains two training stages based on model-agnostic meta-learning (MAML), each of which consists of a contrastive branch and a meta branch.
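The contrastive branch in a setup like this is typically trained with an InfoNCE-style objective over matched clip embeddings. A minimal sketch of that loss (the temperature value is an illustrative assumption, and the MAML inner/outer meta branch is omitted):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: each anchor should match its own positive (the diagonal)
    against all other positives in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                       # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                   # matched pairs on diagonal

rng = np.random.default_rng(1)
emb = rng.standard_normal((8, 16))
loss_matched = info_nce(emb, emb)
loss_shuffled = info_nce(emb, np.roll(emb, 1, axis=0))
```

With correctly matched pairs the loss is small; shuffling the positives breaks the diagonal alignment and the loss rises, which is what the contrastive branch exploits.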

Contrastive Learning • Meta-Learning +6

SSAN: Separable Self-Attention Network for Video Representation Learning

no code implementations • CVPR 2021 • Xudong Guo, Xun Guo, Yan Lu

However, spatial correlations and temporal correlations capture different kinds of contextual information: scene appearance and temporal reasoning, respectively.

Action Recognition • Representation Learning +3

An End-to-End Compression Framework Based on Convolutional Neural Networks

5 code implementations • 2 Aug 2017 • Feng Jiang, Wen Tao, Shaohui Liu, Jie Ren, Xun Guo, Debin Zhao

The second CNN, named the reconstruction convolutional neural network (RecCNN), is used to reconstruct the decoded image with high quality at the decoding end.
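The framework pairs a compact representation before the codec with RecCNN-style restoration after decoding. A toy sketch of that pipeline shape, with average pooling and nearest-neighbour upsampling as stand-ins for the two CNNs (these stand-ins are assumptions for illustration, not the paper's networks):

```python
import numpy as np

def com_stage(img):
    # stand-in for the first CNN: 2x2 average pooling yields a compact image
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def rec_stage(compact):
    # stand-in for RecCNN: upsample the decoded compact image back to full size
    return np.repeat(np.repeat(compact, 2, axis=0), 2, axis=1)

img = np.arange(16, dtype=float).reshape(4, 4)
compact = com_stage(img)        # this is what the standard codec would compress
restored = rec_stage(compact)   # RecCNN's job: recover a high-quality image
print(restored.shape)  # (4, 4)
```

The point of the design is that the codec only ever sees the compact image, while the learned reconstruction stage absorbs the burden of recovering detail.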

Denoising • Image Compression
