Search Results for author: YaoWei Wang

Found 91 papers, 53 papers with code

1st Place Solution for MOSE Track in CVPR 2024 PVUW Workshop: Complex Video Object Segmentation

no code implementations 7 Jun 2024 Deshui Miao, Xin Li, Zhenyu He, YaoWei Wang, Ming-Hsuan Yang

In this challenge, we propose a semantic embedding video object segmentation model and use the salient features of objects as query representations.

Object Segmentation +3

vHeat: Building Vision Models upon Heat Conduction

1 code implementation 26 May 2024 Zhaozhi Wang, Yue Liu, Yunfan Liu, Hongtian Yu, YaoWei Wang, Qixiang Ye, Yunjie Tian

A fundamental problem in learning robust and expressive visual representations lies in efficiently estimating the spatial relationships of visual semantics throughout the entire image.

Computational Efficiency

LG-VQ: Language-Guided Codebook Learning

no code implementations 23 May 2024 Guotao Liang, Baoquan Zhang, YaoWei Wang, Xutao Li, Yunming Ye, Huaibin Wang, Chuyao Luo, Kola Ye, linfeng Luo

Vector quantization (VQ) is a key technique in high-resolution and high-fidelity image synthesis, which aims to learn a codebook to encode an image with a sequence of discrete codes and then generate an image in an auto-regressive manner.

Image Captioning Image Generation +1
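
The codebook lookup that the VQ snippet above describes can be sketched in a few lines. This is a generic nearest-neighbour quantization step, not the LG-VQ code; the `quantize` helper and all shapes are illustrative assumptions:

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector to its nearest codebook entry.

    features: (N, D) encoder outputs; codebook: (K, D) learned codes.
    Returns (indices, quantized) with quantized[i] = codebook[indices[i]].
    """
    # Squared Euclidean distance from every feature to every code.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(axis=1)
    return indices, codebook[indices]

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))     # K=8 codes of dimension D=4
features = codebook[[2, 5]] + 0.01     # two features lying near codes 2 and 5
idx, quant = quantize(features, codebook)
print(idx)  # -> [2 5]
```

The discrete `idx` sequence is what an auto-regressive prior would then model to generate images.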

Prompt Customization for Continual Learning

1 code implementation 28 Apr 2024 Yong Dai, Xiaopeng Hong, Yabin Wang, Zhiheng Ma, Dongmei Jiang, YaoWei Wang

In contrast to conventional methods that employ hard prompt selection, PGM assigns different coefficients to prompts from a fixed-sized pool of prompts and generates tailored prompts.

Continual Learning Incremental Learning
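
The soft prompt weighting described above (coefficients over a fixed pool instead of hard selection) can be sketched as follows. The `generate_prompt` function, the key-pool design, and all shapes are hypothetical illustrations, not the paper's PGM implementation:

```python
import numpy as np

def generate_prompt(query, prompt_pool, key_pool):
    """Blend a fixed pool of prompts into one tailored prompt.

    Instead of hard selection, each pooled prompt receives a softmax
    coefficient from the similarity between the query feature and a
    per-prompt key, and the prompts are combined with those weights.
    query: (D,), prompt_pool: (P, L, D), key_pool: (P, D).
    """
    scores = key_pool @ query                 # (P,) similarity per prompt
    coeffs = np.exp(scores - scores.max())
    coeffs /= coeffs.sum()                    # softmax coefficients
    tailored = np.einsum("p,pld->ld", coeffs, prompt_pool)
    return tailored, coeffs

rng = np.random.default_rng(1)
pool = rng.normal(size=(4, 3, 8))   # P=4 prompts of length L=3, dim D=8
keys = rng.normal(size=(4, 8))
prompt, coeffs = generate_prompt(rng.normal(size=8), pool, keys)
print(prompt.shape)  # -> (3, 8)
```

Because every prompt contributes with a continuous coefficient, the pool can be shared across tasks without a discrete selection step.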

Spatio-Temporal Side Tuning Pre-trained Foundation Models for Video-based Pedestrian Attribute Recognition

3 code implementations 27 Apr 2024 Xiao Wang, Qian Zhu, Jiandong Jin, Jun Zhu, Futian Wang, Bo Jiang, YaoWei Wang, Yonghong Tian

Specifically, we formulate the video-based PAR as a vision-language fusion problem and adopt a pre-trained foundation model CLIP to extract the visual features.

Attribute Pedestrian Attribute Recognition +1

Motion-aware Latent Diffusion Models for Video Frame Interpolation

no code implementations 21 Apr 2024 Zhilin Huang, Yijie Yu, Ling Yang, Chujun Qin, Bing Zheng, Xiawu Zheng, Zikun Zhou, YaoWei Wang, Wenming Yang

With the advancement of AIGC, video frame interpolation (VFI) has become a crucial component in existing video generation frameworks, attracting widespread research interest.

Motion Estimation Video Frame Interpolation +1

HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding

2 code implementations 20 Apr 2024 Linhui Xiao, Xiaoshan Yang, Fang Peng, YaoWei Wang, Changsheng Xu

Specifically, HiVG consists of a multi-layer adaptive cross-modal bridge and a hierarchical multimodal low-rank adaptation (HiLoRA) paradigm.

Visual Grounding

State Space Model for New-Generation Network Alternative to Transformers: A Survey

1 code implementation 15 Apr 2024 Xiao Wang, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, Yao Rong, Weizhe Kong, Ju Huang, Shihao Li, Haoxiang Yang, Ziwen Wang, Bo Jiang, Chenglong Li, YaoWei Wang, Yonghong Tian, Jin Tang

In this paper, we give the first comprehensive review of these works and also provide experimental comparisons and analysis to better demonstrate the features and advantages of SSM.

StoryImager: A Unified and Efficient Framework for Coherent Story Visualization and Completion

1 code implementation 9 Apr 2024 Ming Tao, Bing-Kun Bao, Hao Tang, YaoWei Wang, Changsheng Xu

3) The story visualization and continuation models are trained and inferred independently, which is not user-friendly.

Image Generation Story Visualization

RTracker: Recoverable Tracking via PN Tree Structured Memory

1 code implementation CVPR 2024 Yuqing Huang, Xin Li, Zikun Zhou, YaoWei Wang, Zhenyu He, Ming-Hsuan Yang

Upon the PN tree memory, we develop corresponding walking rules for determining the state of the target and define a set of control flows to unite the tracker and the detector in different tracking scenarios.

Visual Object Tracking Visual Tracking

Tracking Meets LoRA: Faster Training, Larger Model, Stronger Performance

no code implementations 8 Mar 2024 Liting Lin, Heng Fan, Zhipeng Zhang, YaoWei Wang, Yong Xu, Haibin Ling

The shared embeddings, which describe the absolute coordinates of multi-resolution images (namely, the template and search images), are inherited from the pre-trained backbones.

Inductive Bias Position +1
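
The LoRA idea behind the title, adapting a frozen backbone by training only a low-rank weight update, can be sketched minimally. `lora_forward` and its initialization are a generic illustration of the technique, not the tracker's actual code:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Linear layer with a LoRA update: W stays frozen and only the
    low-rank factors A and B are trained, giving an effective weight
    W + (alpha / r) * B @ A with far fewer trainable parameters.

    x: (N, d_in), W: (d_out, d_in), A: (r, d_in), B: (d_out, r).
    """
    r = A.shape[0]
    delta = (alpha / r) * (B @ A)   # rank-r weight update
    return x @ (W + delta).T

rng = np.random.default_rng(2)
d_in, d_out, r = 16, 8, 2
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small init
B = np.zeros((d_out, r))                # zero init -> no change at start
x = rng.normal(size=(4, d_in))
y = lora_forward(x, W, A, B)
# With B initialized to zero the adapted layer matches the frozen one.
print(np.allclose(y, x @ W.T))  # -> True
```

Training touches only A and B (here 2*(16+8) = 48 values versus 128 in W), which is what makes larger backbones affordable to adapt.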

Prompt-Driven Dynamic Object-Centric Learning for Single Domain Generalization

no code implementations CVPR 2024 Deng Li, Aming Wu, YaoWei Wang, Yahong Han

In this paper, we propose a dynamic object-centric perception network based on prompt learning, aiming to adapt to the variations in image complexity.

Domain Generalization Image Classification +3

Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation

1 code implementation 27 Feb 2024 Yaofo Chen, Shuaicheng Niu, YaoWei Wang, Shoukai Xu, Hengjie Song, Mingkui Tan

Moreover, with the increasing amount of data collected at the edge, this paradigm also fails to further adapt the cloud model for better performance.

Semi-supervised Counting via Pixel-by-pixel Density Distribution Modelling

no code implementations 23 Feb 2024 Hui Lin, Zhiheng Ma, Rongrong Ji, YaoWei Wang, Zhou Su, Xiaopeng Hong, Deyu Meng

This paper focuses on semi-supervised crowd counting, where only a small portion of the training data are labeled.

Crowd Counting Decoder +1

Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition

1 code implementation 22 Feb 2024 Feng Lu, Lijun Zhang, Xiangyuan Lan, Shuting Dong, YaoWei Wang, Chun Yuan

Experimental results show that our method outperforms the state-of-the-art methods with less training data and training time, and uses only about 3% of the retrieval runtime of two-stage VPR methods with RANSAC-based spatial verification.

Re-Ranking Visual Place Recognition

MB-RACS: Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network

no code implementations 19 Jan 2024 Yujun Huang, Bin Chen, Naiqi Li, Baoyi An, Shu-Tao Xia, YaoWei Wang

In this paper, we propose a Measurement-Bounds-based Rate-Adaptive Image Compressed Sensing Network (MB-RACS) framework, which aims to adaptively determine the sampling rate for each image block in accordance with traditional measurement bounds theory.

Image Compressed Sensing

VMamba: Visual State Space Model

6 code implementations 18 Jan 2024 Yue Liu, Yunjie Tian, Yuzhong Zhao, Hongtian Yu, Lingxi Xie, YaoWei Wang, Qixiang Ye, Yunfan Liu

Designing computationally efficient network architectures persists as an ongoing necessity in computer vision.

Computational Efficiency Language Modelling +1

Modality-Collaborative Test-Time Adaptation for Action Recognition

no code implementations CVPR 2024 Baochen Xiong, Xiaoshan Yang, Yaguang Song, YaoWei Wang, Changsheng Xu

Existing image-based TTA methods cannot be directly applied to this task because videos exhibit domain shifts in both the multimodal and temporal dimensions, which makes adaptation difficult.

Action Recognition Test-time Adaptation +1

FFCA-Net: Stereo Image Compression via Fast Cascade Alignment of Side Information

no code implementations 28 Dec 2023 Yichong Xia, Yujun Huang, Bin Chen, Haoqian Wang, YaoWei Wang

To address this limitation, we propose a Feature-based Fast Cascade Alignment network (FFCA-Net) to fully leverage the side information on the decoder.

Data Compression Decoder +2

Regressor-Segmenter Mutual Prompt Learning for Crowd Counting

no code implementations CVPR 2024 Mingyue Guo, Li Yuan, Zhaoyi Yan, Binghui Chen, YaoWei Wang, Qixiang Ye

In this study, we propose mutual prompt learning (mPrompt), which leverages a regressor and a segmenter as guidance for each other, solving bias and inaccuracy caused by annotation variance while distinguishing foreground from background.

Crowd Counting

Recognizing Conditional Causal Relationships about Emotions and Their Corresponding Conditions

no code implementations 28 Nov 2023 Xinhong Chen, Zongxi Li, YaoWei Wang, Haoran Xie, JianPing Wang, Qing Li

To highlight the context in such special causal relationships, we propose a new task to determine whether or not an input pair of emotion and cause has a valid causal relationship under different contexts and extract the specific context clauses that participate in the causal relationship.


Uncovering Hidden Connections: Iterative Search and Reasoning for Video-grounded Dialog

1 code implementation 11 Oct 2023 Haoyu Zhang, Meng Liu, YaoWei Wang, Da Cao, Weili Guan, Liqiang Nie

In response to these challenges, we present an iterative search and reasoning framework, which consists of a textual encoder, a visual encoder, and a generator.

Question Answering Response Generation +1

Learning Mask-aware CLIP Representations for Zero-Shot Segmentation

1 code implementation NeurIPS 2023 Siyu Jiao, Yunchao Wei, YaoWei Wang, Yao Zhao, Humphrey Shi

However, in the paper, we reveal that CLIP is insensitive to different mask proposals and tends to produce similar predictions for various mask proposals of the same image.

Open Vocabulary Semantic Segmentation Zero Shot Segmentation

MultiCapCLIP: Auto-Encoding Prompts for Zero-Shot Multilingual Visual Captioning

1 code implementation 25 Aug 2023 Bang Yang, Fenglin Liu, Xian Wu, YaoWei Wang, Xu sun, Yuexian Zou

To deal with the label shortage problem, we present a simple yet effective zero-shot approach MultiCapCLIP that can generate visual captions for different scenarios and languages without any labeled vision-caption pairs of downstream datasets.

Image Captioning Video Captioning

CiteTracker: Correlating Image and Text for Visual Tracking

1 code implementation ICCV 2023 Xin Li, Yuqing Huang, Zhenyu He, YaoWei Wang, Huchuan Lu, Ming-Hsuan Yang

Existing visual tracking methods typically take an image patch as the reference of the target to perform tracking.

Attribute Descriptive +2

MixBCT: Towards Self-Adapting Backward-Compatible Training

1 code implementation 14 Aug 2023 Yu Liang, Yufeng Zhang, Shiliang Zhang, YaoWei Wang, Sheng Xiao, Rong Xiao, Xiaoyu Wang

Instance-based methods like L2 regression take into account the distribution of old features but impose strong constraints on the performance of the new model itself.

Face Recognition Image Retrieval +1

Benign Shortcut for Debiasing: Fair Visual Recognition via Intervention with Shortcut Features

no code implementations 13 Aug 2023 Yi Zhang, Jitao Sang, Junyang Wang, Dongmei Jiang, YaoWei Wang

To this end, we propose Shortcut Debiasing, to first transfer the target task's learning of bias attributes from bias features to shortcut features, and then employ causal intervention to eliminate shortcut features during inference.


Strip-MLP: Efficient Token Interaction for Vision MLP

1 code implementation ICCV 2023 Guiping Cao, Shengda Luo, Wenjian Huang, Xiangyuan Lan, Dongmei Jiang, YaoWei Wang, JianGuo Zhang

Finally, based on the Strip MLP layer, we propose a novel Local Strip Mixing Module (LSMM) to boost the token interaction power in the local region.

HQG-Net: Unpaired Medical Image Enhancement with High-Quality Guidance

no code implementations 15 Jul 2023 Chunming He, Kai Li, Guoxia Xu, Jiangpeng Yan, Longxiang Tang, Yulun Zhang, Xiu Li, YaoWei Wang

Specifically, we extract features from an HQ image and explicitly insert the features, which are expected to encode HQ cues, into the enhancement network to guide the LQ enhancement with the variational normalization module.

Image Enhancement Medical Image Enhancement

Improving Deep Representation Learning via Auxiliary Learnable Target Coding

1 code implementation 30 May 2023 KangJun Liu, Ke Chen, YaoWei Wang, Kui Jia

Deep representation learning is a subfield of machine learning that focuses on learning meaningful and useful representations of data through deep neural networks.

Representation Learning Retrieval

ShuffleMix: Improving Representations via Channel-Wise Shuffle of Interpolated Hidden States

1 code implementation 30 May 2023 KangJun Liu, Ke Chen, Lihua Guo, YaoWei Wang, Kui Jia

Inspired by the good robustness of alternative dropout strategies against over-fitting on limited patterns of training samples, this paper introduces a novel concept, ShuffleMix -- Shuffle of Mixed hidden features, which can be interpreted as a kind of dropout operation in feature space.

Benchmarking Data Augmentation +1
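
The channel-wise shuffle of mixed hidden states described above can be sketched as below. `shufflemix` is a hypothetical reconstruction of the idea (swap a random subset of channels with those of a permuted batch), not the authors' implementation:

```python
import numpy as np

def shufflemix(h, lam=0.5, rng=None):
    """Channel-wise shuffle of hidden states between paired samples.

    Rather than convexly interpolating whole feature maps, a random
    subset of channels is replaced by the same channels from a permuted
    batch, which acts like a dropout operation in feature space.
    h: (N, C, H, W) hidden features; lam: expected fraction of kept channels.
    """
    rng = rng or np.random.default_rng()
    n, c = h.shape[:2]
    perm = rng.permutation(n)          # partner sample for each item
    keep = rng.random(c) < lam         # True -> keep the sample's own channel
    out = h.copy()
    out[:, ~keep] = h[perm][:, ~keep]  # swap the remaining channels
    return out, perm, keep

rng = np.random.default_rng(3)
h = rng.normal(size=(4, 8, 2, 2))
mixed, perm, keep = shufflemix(h, lam=0.5, rng=rng)
# Kept channels are untouched; the rest come from the partner samples.
print(np.allclose(mixed[:, keep], h[:, keep]))  # -> True
```

A matching loss would then mix the labels of each sample and its `perm` partner in proportion to the swapped channels.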

Manifold-Aware Self-Training for Unsupervised Domain Adaptation on Regressing 6D Object Pose

1 code implementation 18 May 2023 Yichen Zhang, Jiehong Lin, Ke Chen, Zelin Xu, YaoWei Wang, Kui Jia

Domain gap between synthetic and real data in visual regression (e.g., 6D pose estimation) is bridged in this paper via global feature alignment and local refinement on the coarse classification of discretized anchor classes in target space, which imposes a piece-wise target manifold regularization into domain-invariant representation learning.

6D Pose Estimation regression +2

CLIP-VG: Self-paced Curriculum Adapting of CLIP for Visual Grounding

2 code implementations 15 May 2023 Linhui Xiao, Xiaoshan Yang, Fang Peng, Ming Yan, YaoWei Wang, Changsheng Xu

In order to utilize vision and language pre-trained models to address the grounding problem, and reasonably take advantage of pseudo-labels, we propose CLIP-VG, a novel method that can conduct self-paced curriculum adapting of CLIP with pseudo-language labels.

Transfer Learning Visual Grounding

Towards Efficient Task-Driven Model Reprogramming with Foundation Models

no code implementations 5 Apr 2023 Shoukai Xu, Jiangchao Yao, Ran Luo, Shuhai Zhang, Zihao Lian, Mingkui Tan, Bo Han, YaoWei Wang

Moreover, the data used for pretraining foundation models are usually invisible and very different from the target data of downstream tasks.

Knowledge Distillation Transfer Learning

Reliability-Hierarchical Memory Network for Scribble-Supervised Video Object Segmentation

1 code implementation 25 Mar 2023 Zikun Zhou, Kaige Mao, Wenjie Pei, Hongpeng Wang, YaoWei Wang, Zhenyu He

To be specific, RHMNet first only uses the memory in the high-reliability level to locate the region with high reliability belonging to the target, which is highly similar to the initial target scribble.

Semantic Segmentation Video Object Segmentation +1

ZeroNLG: Aligning and Autoencoding Domains for Zero-Shot Multimodal and Multilingual Natural Language Generation

1 code implementation 11 Mar 2023 Bang Yang, Fenglin Liu, Yuexian Zou, Xian Wu, YaoWei Wang, David A. Clifton

We present the results of extensive experiments on twelve NLG tasks, showing that, without using any labeled downstream pairs for training, ZeroNLG generates high-quality and believable outputs and significantly outperforms existing zero-shot methods.

Image Captioning Machine Translation +5

Unsupervised Domain Adaptation via Distilled Discriminative Clustering

1 code implementation 23 Feb 2023 Hui Tang, YaoWei Wang, Kui Jia

Differently, motivated by the fundamental assumption for domain adaptability, we re-cast the domain adaptation problem as discriminative clustering of target data, given strong privileged information provided by the closely related, labeled source data.

Clustering Unsupervised Domain Adaptation

Large-scale Multi-Modal Pre-trained Models: A Comprehensive Survey

1 code implementation 20 Feb 2023 Xiao Wang, Guangyao Chen, Guangwu Qian, Pengcheng Gao, Xiao-Yong Wei, YaoWei Wang, Yonghong Tian, Wen Gao

We also give visualization and analysis of the model parameters and results on representative downstream tasks.

DilateFormer: Multi-Scale Dilated Transformer for Visual Recognition

1 code implementation 3 Feb 2023 Jiayu Jiao, Yu-Ming Tang, Kun-Yu Lin, Yipeng Gao, Jinhua Ma, YaoWei Wang, Wei-Shi Zheng

In this work, we explore effective Vision Transformers to pursue a preferable trade-off between the computational complexity and size of the attended receptive field.

Instance Segmentation object-detection +2

CIGAR: Cross-Modality Graph Reasoning for Domain Adaptive Object Detection

no code implementations CVPR 2023 Yabo Liu, Jinghua Wang, Chao Huang, YaoWei Wang, Yong Xu

To overcome these problems, we propose a cross-modality graph reasoning adaptation (CIGAR) method to take advantage of both visual and linguistic knowledge.

Graph Matching object-detection +1

AsyFOD: An Asymmetric Adaptation Paradigm for Few-Shot Domain Adaptive Object Detection

1 code implementation CVPR 2023 Yipeng Gao, Kun-Yu Lin, Junkai Yan, YaoWei Wang, Wei-Shi Zheng

Critically, in FSDAOD, the data-scarcity in the target domain leads to an extreme data imbalance between the source and target domains, which potentially causes over-adaptation in traditional feature alignment.

object-detection Object Detection

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples

1 code implementation CVPR 2023 Jiaming Zhang, Xingjun Ma, Qi Yi, Jitao Sang, Yu-Gang Jiang, YaoWei Wang, Changsheng Xu

Furthermore, we propose to leverage Vision-and-Language Pre-trained Models (VLPMs) like CLIP as the surrogate model to improve the transferability of the crafted UCs to diverse domains.

Data Poisoning

Universal Object Detection with Large Vision Model

1 code implementation 19 Dec 2022 Feng Lin, Wenze Hu, YaoWei Wang, Yonghong Tian, Guangming Lu, Fanglin Chen, Yong Xu, Xiaoyu Wang

In this study, our focus is on a specific challenge: the large-scale, multi-domain universal object detection problem, which contributes to the broader goal of achieving a universal vision system.

Object object-detection +1

Isolation and Impartial Aggregation: A Paradigm of Incremental Learning without Interference

1 code implementation 29 Nov 2022 Yabin Wang, Zhiheng Ma, Zhiwu Huang, YaoWei Wang, Zhou Su, Xiaopeng Hong

To avoid obvious stage learning bottlenecks, we propose a brand-new stage-isolation based incremental learning framework, which leverages a series of stage-isolated classifiers to perform the learning task of each stage without the interference of others.

Continual Learning Incremental Learning

SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for Few-shot Image Classification

no code implementations 28 Nov 2022 Fang Peng, Xiaoshan Yang, Linhui Xiao, YaoWei Wang, Changsheng Xu

Although significant progress has been made in few-shot learning, most existing few-shot image classification methods require supervised pre-training on a large amount of samples of base classes, which limits their generalization ability in real-world applications.

Few-Shot Image Classification Few-Shot Learning +2

Revisiting Color-Event based Tracking: A Unified Network, Dataset, and Metric

2 code implementations 20 Nov 2022 Chuanming Tang, Xiao Wang, Ju Huang, Bo Jiang, Lin Zhu, Jianlin Zhang, YaoWei Wang, Yonghong Tian

In this paper, we propose a single-stage backbone network for Color-Event Unified Tracking (CEUTrack), which achieves the above functions simultaneously.

Object Localization Object Tracking

HARDVS: Revisiting Human Activity Recognition with Dynamic Vision Sensors

2 code implementations 17 Nov 2022 Xiao Wang, Zongzhen Wu, Bo Jiang, Zhimin Bao, Lin Zhu, Guoqi Li, YaoWei Wang, Yonghong Tian

Mainstream human activity recognition (HAR) algorithms are developed based on RGB cameras, which suffer from illumination variation, fast motion, privacy concerns, and large energy consumption.

Activity Prediction Human Activity Recognition +1

Spikformer: When Spiking Neural Network Meets Transformer

2 code implementations 29 Sep 2022 Zhaokun Zhou, Yuesheng Zhu, Chao He, YaoWei Wang, Shuicheng Yan, Yonghong Tian, Li Yuan

Spikformer (66.3M parameters), with a size comparable to SEW-ResNet-152 (60.2M, 69.26%), can achieve 74.81% top-1 accuracy on ImageNet using 4 time steps, which is the state of the art among directly trained SNN models.

Image Classification

Learned Distributed Image Compression with Multi-Scale Patch Matching in Feature Domain

no code implementations 6 Sep 2022 Yujun Huang, Bin Chen, Shiyu Qin, Jiawei Li, YaoWei Wang, Tao Dai, Shu-Tao Xia

Specifically, MSFDPM consists of a side information feature extractor, a multi-scale feature domain patch matching module, and a multi-scale feature fusion network.

Decoder Image Compression +1

DAS: Densely-Anchored Sampling for Deep Metric Learning

1 code implementation 30 Jul 2022 Lizhao Liu, Shangxin Huang, Zhuangwei Zhuang, Ran Yang, Mingkui Tan, YaoWei Wang

To this end, we propose a Densely-Anchored Sampling (DAS) scheme that considers the embedding with corresponding data point as "anchor" and exploits the anchor's nearby embedding space to densely produce embeddings without data points.

Face Recognition Image Retrieval +2

Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval

no code implementations 17 Jun 2022 Xiao Dong, Xunlin Zhan, Yunchao Wei, XiaoYong Wei, YaoWei Wang, Minlong Lu, Xiaochun Cao, Xiaodan Liang

Our goal in this research is to study a more realistic environment in which we can conduct weakly-supervised multi-modal instance-level product retrieval for fine-grained product categories.


Prompt-based Learning for Unpaired Image Captioning

no code implementations 26 May 2022 Peipei Zhu, Xiao Wang, Lin Zhu, Zhenglong Sun, Weishi Zheng, YaoWei Wang, Changwen Chen

Inspired by the success of Vision-Language Pre-Trained Models (VL-PTMs) in this research, we attempt to infer the cross-domain cue information about a given image from the large VL-PTMs for the UIC task.

Image Captioning Question Answering +2

Global-Supervised Contrastive Loss and View-Aware-Based Post-Processing for Vehicle Re-Identification

no code implementations 17 Apr 2022 Zhijun Hu, Yong Xu, Jie Wen, Xianjing Cheng, Zaijun Zhang, Lilei Sun, YaoWei Wang

The proposed VABPP method is the first to use a view-aware-based approach as a post-processing step in the field of vehicle re-identification.

Attribute Vehicle Re-Identification

Fine-Grained Object Classification via Self-Supervised Pose Alignment

2 code implementations CVPR 2022 Xuhui Yang, YaoWei Wang, Ke Chen, Yong Xu, Yonghong Tian

Semantic patterns of fine-grained objects are determined by subtle appearance difference of local parts, which thus inspires a number of part-based methods.

Classification Object +1

Boost Test-Time Performance with Closed-Loop Inference

no code implementations 21 Mar 2022 Shuaicheng Niu, Jiaxiang Wu, Yifan Zhang, Guanghui Xu, Haokun Li, Peilin Zhao, Junzhou Huang, YaoWei Wang, Mingkui Tan

Motivated by this, we propose to predict those hard-classified test samples in a looped manner to boost the model performance.

Auxiliary Learning

Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance

1 code implementation 16 Mar 2022 Chen Tang, Kai Ouyang, Zhi Wang, Yifei Zhu, YaoWei Wang, Wen Ji, Wenwu Zhu

For example, MPQ search on ResNet18 with our indicators takes only 0.06 s, which improves time efficiency exponentially compared to iterative search methods.


Peng Cheng Object Detection Benchmark for Smart City

no code implementations 11 Mar 2022 YaoWei Wang, Zhouxin Yang, Rui Liu, Deng Li, Yuandu Lai, Leyuan Fang, Yahong Han

Considering the diversity and complexity of scenes in intelligent city governance, we build a large-scale object detection benchmark for the smart city.

Object object-detection +1

Unpaired Image Captioning by Image-level Weakly-Supervised Visual Concept Recognition

no code implementations 7 Mar 2022 Peipei Zhu, Xiao Wang, Yong Luo, Zhenglong Sun, Wei-Shi Zheng, YaoWei Wang, Changwen Chen

The image-level labels are utilized to train a weakly-supervised object recognition model to extract object information (e.g., instance) in an image, and the extracted instances are adopted to infer the relationships among different objects based on an enhanced graph neural network (GNN).

Graph Neural Network Image Captioning +2

Boosting Crowd Counting via Multifaceted Attention

1 code implementation CVPR 2022 Hui Lin, Zhiheng Ma, Rongrong Ji, YaoWei Wang, Xiaopeng Hong

Secondly, we design the Local Attention Regularization to supervise the training of LRA by minimizing the deviation among the attention for different feature locations.

Crowd Counting

Conceptor Learning for Class Activation Mapping

no code implementations 21 Jan 2022 Guangwu Qian, Zhen-Qun Yang, Xu-Lu Zhang, YaoWei Wang, Qing Li, Xiao-Yong Wei

Class Activation Mapping (CAM) has been widely adopted to generate saliency maps, which provide visual explanations for deep neural networks (DNNs).


Towards End-to-End Image Compression and Analysis with Transformers

1 code implementation 17 Dec 2021 Yuanchao Bai, Xu Yang, Xianming Liu, Junjun Jiang, YaoWei Wang, Xiangyang Ji, Wen Gao

Meanwhile, we propose a feature aggregation module to fuse the compressed features with the selected intermediate features of the Transformer, and feed the aggregated features to a deconvolutional neural network for image reconstruction.

Classification Image Classification +3

Frequency Spectrum Augmentation Consistency for Domain Adaptive Object Detection

no code implementations 16 Dec 2021 Rui Liu, Yahong Han, YaoWei Wang, Qi Tian

In the second stage, augmented source and target data with pseudo labels are adopted to perform the self-training for prediction consistency.

Object object-detection +1

Learning to Share in Multi-Agent Reinforcement Learning

2 code implementations 16 Dec 2021 Yuxuan Yi, Ge Li, YaoWei Wang, Zongqing Lu

Inspired by the fact that sharing plays a key role in human's learning of cooperation, we propose LToS, a hierarchically decentralized MARL framework that enables agents to learn to dynamically share reward with neighbors so as to encourage agents to cooperate on the global objective through collectives.

Multi-agent Reinforcement Learning reinforcement-learning +1

An Informative Tracking Benchmark

1 code implementation 13 Dec 2021 Xin Li, Qiao Liu, Wenjie Pei, Qiuhong Shen, YaoWei Wang, Huchuan Lu, Ming-Hsuan Yang

Along with the rapid progress of visual tracking, existing benchmarks become less informative due to redundancy of samples and weak discrimination between current trackers, making evaluations on all datasets extremely time-consuming.

Visual Tracking

Optimized Separable Convolution: Yet Another Efficient Convolution Operator

no code implementations 29 Sep 2021 Tao Wei, Yonghong Tian, YaoWei Wang, Yun Liang, Chang Wen Chen

In this research, we propose a novel and principled operator, termed optimized separable convolution, which, through an optimal design of the internal number of groups and kernel sizes for general separable convolutions, achieves a complexity of O(C^{3/2}K).
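
For context on the complexity claim, the per-position multiply counts that such designs interpolate between can be tallied directly. These are the textbook costs for a plain convolution and a depthwise-separable one (assuming C input = C output channels and a KxK kernel), not the paper's optimized operator:

```python
def standard_conv_cost(C, K):
    """Multiplies per output position for a dense C-in, C-out KxK convolution."""
    return C * C * K * K

def depthwise_separable_cost(C, K):
    """Depthwise KxK stage (C * K^2) followed by a 1x1 pointwise stage (C^2)."""
    return C * K * K + C * C

C, K = 256, 3
print(standard_conv_cost(C, K))        # -> 589824
print(depthwise_separable_cost(C, K))  # -> 67840
```

The claimed O(C^{3/2}K) cost would sit between these two extremes, trading a little of the separable form's savings for more cross-channel mixing.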

M5Product: Self-harmonized Contrastive Learning for E-commercial Multi-modal Pretraining

no code implementations CVPR 2022 Xiao Dong, Xunlin Zhan, Yangxin Wu, Yunchao Wei, Michael C. Kampffmeyer, XiaoYong Wei, Minlong Lu, YaoWei Wang, Xiaodan Liang

Despite the potential of multi-modal pre-training to learn highly discriminative feature representations from complementary data modalities, current progress is being slowed by the lack of large-scale modality-diverse datasets.

Contrastive Learning

VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows

2 code implementations 11 Aug 2021 Xiao Wang, Jianing Li, Lin Zhu, Zhipeng Zhang, Zhe Chen, Xin Li, YaoWei Wang, Yonghong Tian, Feng Wu

Different from visible cameras which record intensity images frame by frame, the biologically inspired event camera produces a stream of asynchronous and sparse events with much lower latency.

Object Tracking

MFGNet: Dynamic Modality-Aware Filter Generation for RGB-T Tracking

2 code implementations 22 Jul 2021 Xiao Wang, Xiujun Shu, Shiliang Zhang, Bo Jiang, YaoWei Wang, Yonghong Tian, Feng Wu

The visible and thermal filters will be used to conduct a dynamic convolutional operation on their corresponding input feature maps respectively.

Rgb-T Tracking

Direct Measure Matching for Crowd Counting

no code implementations 4 Jul 2021 Hui Lin, Xiaopeng Hong, Zhiheng Ma, Xing Wei, Yunfeng Qiu, YaoWei Wang, Yihong Gong

Second, we derive a semi-balanced form of Sinkhorn divergence, based on which a Sinkhorn counting loss is designed for measure matching.

Crowd Counting

Self-Supervised Tracking via Target-Aware Data Synthesis

no code implementations 21 Jun 2021 Xin Li, Wenjie Pei, YaoWei Wang, Zhenyu He, Huchuan Lu, Ming-Hsuan Yang

While deep-learning based tracking methods have achieved substantial progress, they entail large-scale and high-quality annotated data for sufficient training.

Representation Learning Self-Supervised Learning +1

Learning Scalable $\ell_\infty$-Constrained Near-Lossless Image Compression via Joint Lossy Image and Residual Compression

no code implementations CVPR 2021 Yuanchao Bai, Xianming Liu, WangMeng Zuo, YaoWei Wang, Xiangyang Ji

To achieve scalable compression with the error bound larger than zero, we derive the probability model of the quantized residual by quantizing the learned probability model of the original residual, instead of training multiple networks.

Image Compression

Tracking by Joint Local and Global Search: A Target-aware Attention based Approach

1 code implementation 9 Jun 2021 Xiao Wang, Jin Tang, Bin Luo, YaoWei Wang, Yonghong Tian, Feng Wu

In this paper, we propose a novel and general target-aware attention mechanism (termed TANet) and integrate it with tracking-by-detection framework to conduct joint local and global search for robust tracking.

Decoder Object +1

Conformer: Local Features Coupling Global Representations for Visual Recognition

4 code implementations ICCV 2021 Zhiliang Peng, Wei Huang, Shanzhi Gu, Lingxi Xie, YaoWei Wang, Jianbin Jiao, Qixiang Ye

Within Convolutional Neural Networks (CNNs), convolution operations are good at extracting local features but experience difficulty in capturing global representations.

Image Classification Instance Segmentation +4

Anomaly Detection with Prototype-Guided Discriminative Latent Embeddings

no code implementations 30 Apr 2021 Yuandu Lai, Yahong Han, YaoWei Wang

Recent efforts towards video anomaly detection (VAD) try to learn a deep autoencoder to describe normal event patterns with small reconstruction errors.

Anomaly Detection Optical Flow Estimation +1

AAformer: Auto-Aligned Transformer for Person Re-Identification

no code implementations 2 Apr 2021 Kuan Zhu, Haiyun Guo, Shiliang Zhang, YaoWei Wang, Gaopan Huang, Honglin Qiao, Jing Liu, Jinqiao Wang, Ming Tang

In this paper, we introduce an alignment scheme in Transformer architecture for the first time and propose the Auto-Aligned Transformer (AAformer) to automatically locate both the human parts and non-human ones at patch-level.

Human Parsing Image Classification +3

Learning Scalable $\ell_\infty$-constrained Near-lossless Image Compression via Joint Lossy Image and Residual Compression

no code implementations 31 Mar 2021 Yuanchao Bai, Xianming Liu, WangMeng Zuo, YaoWei Wang, Xiangyang Ji

To achieve scalable compression with the error bound larger than zero, we derive the probability model of the quantized residual by quantizing the learned probability model of the original residual, instead of training multiple networks.

Image Compression

Dynamic Attention guided Multi-Trajectory Analysis for Single Object Tracking

1 code implementation 30 Mar 2021 Xiao Wang, Zhe Chen, Jin Tang, Bin Luo, YaoWei Wang, Yonghong Tian, Feng Wu

In this paper, we propose to introduce more dynamics by devising a dynamic attention-guided multi-trajectory tracking strategy.

Object Tracking

Classification of Single-View Object Point Clouds

no code implementations 18 Dec 2020 Zelin Xu, Ke Chen, KangJun Liu, Changxing Ding, YaoWei Wang, Kui Jia

By adapting the existing ModelNet40 and ScanNet datasets to the single-view, partial setting, experimental results can verify the necessity of object pose estimation and the superiority of our PAPNet over existing classifiers.

3D Object Classification 6D Pose Estimation using RGB +6

Modular Graph Attention Network for Complex Visual Relational Reasoning

no code implementations 22 Nov 2020 Yihan Zheng, Zhiquan Wen, Mingkui Tan, Runhao Zeng, Qi Chen, YaoWei Wang, Qi Wu

Moreover, to capture the complex logic in a query, we construct a relational graph to represent the visual objects and their relationships, and propose a multi-step reasoning method to progressively understand the complex logic.

Graph Attention Question Answering +5
