Search Results for author: Yulin Wang

Found 17 papers, 11 papers with code

AdaFocusV3: On Unified Spatial-temporal Dynamic Video Recognition

no code implementations 27 Sep 2022 Yulin Wang, Yang Yue, Xinhong Xu, Ali Hassani, Victor Kulikov, Nikita Orlov, Shiji Song, Humphrey Shi, Gao Huang

Recent research has revealed that reducing temporal redundancy and spatial redundancy are both effective approaches to efficient video recognition, e.g., allocating the majority of computation to a task-relevant subset of frames or to the most valuable image regions of each frame.

Video Recognition
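
The frame/region selection idea described in the AdaFocusV3 entry above can be pictured with a minimal sketch. This is a hypothetical PyTorch snippet, not the authors' released code: the function name, the cheap scorer, and the fixed center crop are illustrative stand-ins for the learned selection policies in the paper.

```python
# A rough, hypothetical sketch of the frame/patch selection idea (not the
# AdaFocusV3 implementation): a cheap network scores low-resolution frames,
# only the top-k frames are kept, and a small patch is cropped from each
# before running the expensive recognition backbone.
import torch
import torch.nn.functional as F

def select_frames_and_patches(video, cheap_scorer, k=4, patch=96):
    """video: (T, C, H, W) tensor; cheap_scorer: any module mapping a
    low-res frame batch to one relevance score per frame."""
    T, C, H, W = video.shape
    glance = F.interpolate(video, size=(64, 64), mode="bilinear",
                           align_corners=False)          # cheap low-res pass
    scores = cheap_scorer(glance).flatten()               # (T,) relevance
    keep = scores.topk(min(k, T)).indices                 # task-relevant frames
    # crop a central patch from each kept frame (a real policy would predict
    # the crop location; the center crop here is just a placeholder)
    top = (H - patch) // 2
    left = (W - patch) // 2
    return video[keep][:, :, top:top + patch, left:left + patch]

# usage with a dummy scorer
scorer = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.LazyLinear(1))
patches = select_frames_and_patches(torch.randn(16, 3, 224, 224), scorer)
print(patches.shape)  # torch.Size([4, 3, 96, 96])
```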

Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation

no code implementations 2 Aug 2022 Wenxuan Ma, Jinming Zhang, Shuang Li, Chi Harold Liu, Yulin Wang, Wei Li

To alleviate these issues, we propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier tailored specifically for that domain.

Pseudo Label · Unsupervised Domain Adaptation
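
As a rough illustration of the "two spaces, two domain-oriented classifiers" idea in the entry above, here is a toy PyTorch module. It is not the paper's Transformer architecture: the layer sizes, projection heads, and class names are invented for the sketch.

```python
# A toy sketch (not the paper's actual architecture) of two feature spaces,
# each with a classifier oriented towards one domain: a shared backbone feeds
# two separate projection heads, and each head's classifier would be trained
# mainly on the domain it is tailored to.
import torch
import torch.nn as nn

class DomainOrientedHeads(nn.Module):
    def __init__(self, feat_dim=256, proj_dim=128, num_classes=31):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())
        self.proj_source = nn.Linear(feat_dim, proj_dim)   # source-oriented space
        self.proj_target = nn.Linear(feat_dim, proj_dim)   # target-oriented space
        self.cls_source = nn.Linear(proj_dim, num_classes)
        self.cls_target = nn.Linear(proj_dim, num_classes)

    def forward(self, x):
        f = self.backbone(x)
        return (self.cls_source(self.proj_source(f)),
                self.cls_target(self.proj_target(f)))

model = DomainOrientedHeads()
logits_src, logits_tgt = model(torch.randn(8, 512))
# in training, source labels would supervise cls_source while pseudo-labels on
# target data would supervise cls_target; the alignment losses applied in each
# projected space are omitted here.
print(logits_src.shape, logits_tgt.shape)
```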

Glance and Focus Networks for Dynamic Visual Recognition

1 code implementation 9 Jan 2022 Gao Huang, Yulin Wang, Kangchen Lv, Haojun Jiang, Wenhui Huang, Pengfei Qi, Shiji Song

Spatial redundancy widely exists in visual recognition tasks, i.e., discriminative features in an image or video frame usually correspond to only a subset of pixels, while the remaining regions are irrelevant to the task at hand.

Image Classification · Video Recognition
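
The glance-and-focus strategy referenced in the entry above can be sketched as a two-stage inference routine. This is a hedged illustration rather than the released implementation: the function name, the confidence threshold, and the center crop are placeholders for the learned glance network and patch-selection policy.

```python
# A minimal, hypothetical "glance then focus" sketch (not the released GFNet
# code): classify a cheap low-resolution view first, and only crop and process
# a high-resolution patch when the glance is not confident.
import torch
import torch.nn.functional as F

def glance_and_focus(image, glance_net, focus_net, threshold=0.8, patch=128):
    """image: (1, 3, H, W); both nets map images to class logits."""
    glance = F.interpolate(image, size=(96, 96), mode="bilinear",
                           align_corners=False)
    probs = glance_net(glance).softmax(dim=1)
    if probs.max() >= threshold:            # glance already confident: stop early
        return probs
    # otherwise focus: here we simply take a center crop; the actual method
    # learns where to look with a policy network
    _, _, H, W = image.shape
    top, left = (H - patch) // 2, (W - patch) // 2
    crop = image[:, :, top:top + patch, left:left + patch]
    return focus_net(crop).softmax(dim=1)

# usage with dummy networks
make_net = lambda: torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1),
                                       torch.nn.Flatten(), torch.nn.Linear(3, 10))
out = glance_and_focus(torch.randn(1, 3, 224, 224), make_net(), make_net())
print(out.shape)  # torch.Size([1, 10])
```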

AdaFocus V2: End-to-End Training of Spatial Dynamic Networks for Video Recognition

1 code implementation CVPR 2022 Yulin Wang, Yang Yue, Yuanze Lin, Haojun Jiang, Zihang Lai, Victor Kulikov, Nikita Orlov, Humphrey Shi, Gao Huang

Recent works have shown that the computational efficiency of video recognition can be significantly improved by reducing spatial redundancy.

Video Recognition

Not All Images are Worth 16x16 Words: Dynamic Transformers for Efficient Image Recognition

2 code implementations NeurIPS 2021 Yulin Wang, Rui Huang, Shiji Song, Zeyi Huang, Gao Huang

Inspired by this phenomenon, we propose a Dynamic Transformer to automatically configure a proper number of tokens for each input image.

Ranked #27 on Image Classification on CIFAR-100 (using extra training data)

Image Classification
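
The "proper number of tokens per image" idea in the entry above can be approximated by a confidence-gated cascade. The snippet below is a hypothetical sketch, not the released Dynamic Vision Transformer code; the dummy classifiers and resolutions merely stand in for ViTs with increasing token counts.

```python
# A hedged sketch of the dynamic-token idea: try a coarse tokenization first
# and only fall back to finer tokenizations (more tokens, more compute) when
# the prediction is not confident enough.
import torch
import torch.nn.functional as F

def cascade_predict(image, models, resolutions=(96, 160, 224), threshold=0.7):
    """models: one classifier per resolution; coarser inputs yield fewer
    patch tokens in a ViT, so earlier stages are cheaper."""
    for net, res in zip(models, resolutions):
        x = F.interpolate(image, size=(res, res), mode="bilinear",
                          align_corners=False)
        probs = net(x).softmax(dim=1)
        if probs.max() >= threshold:      # confident enough: exit early
            return probs
    return probs                           # last stage's output as fallback

# usage with dummy classifiers standing in for ViTs of increasing token count
dummy = lambda: torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1),
                                    torch.nn.Flatten(), torch.nn.Linear(3, 100))
out = cascade_predict(torch.randn(1, 3, 224, 224), [dummy() for _ in range(3)])
print(out.shape)  # torch.Size([1, 100])
```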

Adaptive Focus for Efficient Video Recognition

1 code implementation ICCV 2021 Yulin Wang, Zhaoxi Chen, Haojun Jiang, Shiji Song, Yizeng Han, Gao Huang

In this paper, we explore spatial redundancy in video recognition with the aim of improving computational efficiency.

Video Recognition

Transferable Semantic Augmentation for Domain Adaptation

1 code implementation CVPR 2021 Shuang Li, Mixue Xie, Kaixiong Gong, Chi Harold Liu, Yulin Wang, Wei Li

To remedy this, we propose a Transferable Semantic Augmentation (TSA) approach to enhance the classifier adaptation ability through implicitly generating source features towards target semantics.

Domain Adaptation
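
One way to picture "generating source features towards target semantics" from the entry above is an explicit feature-space shift, sketched below. This is not the TSA algorithm, which works implicitly rather than by explicit augmentation; the class means and the scale parameter are assumptions made purely for the illustration.

```python
# A rough, hypothetical illustration (not the TSA implementation): shift each
# source feature by the difference between target and source class means, plus
# a small random perturbation, so the classifier sees target-like variations.
import torch

def augment_source_features(src_feat, src_labels, src_means, tgt_means, scale=0.5):
    """src_feat: (N, D); src_means/tgt_means: (num_classes, D) class means
    estimated on each domain (assumed to be given here)."""
    shift = tgt_means[src_labels] - src_means[src_labels]   # per-class direction
    noise = scale * torch.randn_like(src_feat)               # stochastic spread
    return src_feat + scale * shift + noise

# usage with random placeholders
feat = torch.randn(16, 64)
labels = torch.randint(0, 10, (16,))
aug = augment_source_features(feat, labels, torch.randn(10, 64), torch.randn(10, 64))
print(aug.shape)  # torch.Size([16, 64])
```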

MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition

1 code implementation CVPR 2021 Shuang Li, Kaixiong Gong, Chi Harold Liu, Yulin Wang, Feng Qiao, Xinjing Cheng

Real-world training data usually exhibits a long-tailed distribution, where several majority classes have a significantly larger number of samples than the remaining minority classes.

Data Augmentation · Meta-Learning

Dynamic Neural Networks: A Survey

no code implementations 9 Feb 2021 Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, Yulin Wang

Dynamic neural networks are an emerging research topic in deep learning.

Decision Making

Revisiting Locally Supervised Learning: an Alternative to End-to-end Training

1 code implementation 26 Jan 2021 Yulin Wang, Zanlin Ni, Shiji Song, Le Yang, Gao Huang

Due to the need to store the intermediate activations for back-propagation, end-to-end (E2E) training of deep networks usually suffers from a high GPU memory footprint.
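
The memory argument above is the usual motivation for locally supervised training, which can be sketched generically as follows. This is not the paper's InfoPro algorithm, only the common local-learning pattern; the linear modules and auxiliary heads are hypothetical placeholders.

```python
# A generic sketch of locally supervised training: the network is split into
# local modules, each trained with its own auxiliary loss, and gradients are
# stopped between modules so the full end-to-end activation graph never has
# to be kept in memory at once.
import torch
import torch.nn as nn

modules = nn.ModuleList([nn.Sequential(nn.Linear(128, 128), nn.ReLU())
                         for _ in range(3)])
aux_heads = nn.ModuleList([nn.Linear(128, 10) for _ in range(3)])  # local losses
opt = torch.optim.SGD(list(modules.parameters()) + list(aux_heads.parameters()),
                      lr=0.1)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))

opt.zero_grad()
h = x
for module, head in zip(modules, aux_heads):
    h = module(h)
    criterion(head(h), y).backward()   # update this module from its local loss
    h = h.detach()                     # stop gradient: later modules do not
                                       # back-propagate into earlier ones
opt.step()
```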

Revisiting Locally Supervised Training of Deep Neural Networks

no code implementations ICLR 2021 Yulin Wang, Zanlin Ni, Shiji Song, Le Yang, Gao Huang

As InfoPro loss is difficult to compute in its original form, we derive a feasible upper bound as a surrogate optimization objective, yielding a simple but effective algorithm.

Regularizing Deep Networks with Semantic Data Augmentation

1 code implementation 21 Jul 2020 Yulin Wang, Gao Huang, Shiji Song, Xuran Pan, Yitong Xia, Cheng Wu

The proposed method is inspired by the intriguing property that deep networks are effective in learning linearized features, i.e., certain directions in the deep feature space correspond to meaningful semantic transformations, e.g., changing the background or view angle of an object.

Data Augmentation
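
A deliberately naive, explicit take on the semantic-direction idea in the entry above would perturb deep features with class-conditional noise, as sketched below. This is not the proposed algorithm; the per-class variances and the strength parameter are placeholders invented for the illustration.

```python
# A naive, explicit sketch of semantic augmentation in feature space: perturb
# deep features along directions whose spread is estimated per class, so the
# perturbations mimic meaningful semantic transformations.
import torch

def semantic_augment(features, labels, class_var, strength=0.5):
    """features: (N, D); class_var: (num_classes, D) per-class feature
    variances (a diagonal stand-in for a class-conditional covariance)."""
    std = class_var[labels].sqrt()                       # (N, D)
    return features + strength * std * torch.randn_like(features)

# usage with placeholder statistics
feats = torch.randn(32, 64)
labels = torch.randint(0, 10, (32,))
class_var = torch.rand(10, 64)                           # e.g. running estimates
aug = semantic_augment(feats, labels, class_var)
print(aug.shape)  # torch.Size([32, 64])
```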

Meta-Semi: A Meta-learning Approach for Semi-supervised Learning

no code implementations 5 Jul 2020 Yulin Wang, Jiayi Guo, Shiji Song, Gao Huang

In this paper, we propose a novel meta-learning-based SSL algorithm (Meta-Semi) that requires tuning only one additional hyper-parameter, compared with a standard supervised deep learning algorithm, to achieve competitive performance under various SSL conditions.

Meta-Learning

Implicit Semantic Data Augmentation for Deep Networks

1 code implementation NeurIPS 2019 Yulin Wang, Xuran Pan, Shiji Song, Hong Zhang, Cheng Wu, Gao Huang

Our work is motivated by the intriguing property that deep networks are surprisingly good at linearizing features, such that certain directions in the deep feature space correspond to meaningful semantic transformations, e.g., adding sunglasses or changing backgrounds.

Image Augmentation
