Search Results for author: Jinqiao Wang

Found 79 papers, 29 papers with code

Large Batch Optimization for Object Detection: Training COCO in 12 Minutes

no code implementations ECCV 2020 Tong Wang, Yousong Zhu, Chaoyang Zhao, Wei Zeng, Yao-Wei Wang, Jinqiao Wang, Ming Tang

Most existing object detectors adopt a small training batch size (~16), which severely hinders the whole community from exploring large-scale datasets due to the extremely long training procedure.

object-detection Object Detection
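A common heuristic when scaling detector training to much larger batches, as this paper's setting requires, is the linear learning-rate scaling rule. A minimal sketch of that general recipe (not the optimization scheme this paper proposes):

```python
def scaled_lr(base_lr: float, base_batch: int, batch: int) -> float:
    # Linear scaling rule: grow the learning rate in proportion to batch size.
    # A standard large-batch heuristic, not the method of this paper.
    return base_lr * batch / base_batch

# e.g., a detector tuned at lr=0.1 with batch 16, scaled up to batch 256
large_batch_lr = scaled_lr(0.1, 16, 256)
```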

Blended Grammar Network for Human Parsing

no code implementations ECCV 2020 Xiaomei Zhang, Yingying Chen, Bingke Zhu, Jinqiao Wang, Ming Tang

Although human parsing has made great progress, it still faces a challenge, i.e., how to extract the whole foreground from similar or cluttered scenes effectively.

Human Parsing

Occlusion-Aware Siamese Network for Human Pose Estimation

no code implementations ECCV 2020 Lu Zhou, Yingying Chen, Yunze Gao, Jinqiao Wang, Hanqing Lu

To overcome the defects caused by the erasing operation, we perform feature reconstruction to recover the information destroyed by occlusion and the details lost in the cleaning procedure.

Pose Estimation

MathPhys-Guided Coarse-to-Fine Anomaly Synthesis with SQE-Driven Bi-Level Optimization for Anomaly Detection

no code implementations17 Apr 2025 Long Qian, Bingke Zhu, Yingying Chen, Ming Tang, Jinqiao Wang

Anomaly detection is a crucial task in computer vision, yet collecting real-world defect images is inherently difficult due to the rarity and unpredictability of anomalies.

Anomaly Detection Data Augmentation +1

An Empirical Study of Validating Synthetic Data for Text-Based Person Retrieval

1 code implementation28 Mar 2025 Min Cao, Ziyin Zeng, Yuxin Lu, Mang Ye, Dong Yi, Jinqiao Wang

(1) We propose an inter-class image generation pipeline, in which an automatic prompt construction strategy is introduced to guide generative Artificial Intelligence (AI) models in generating various inter-class images without reliance on original data.

Image Augmentation Image Generation +5

Vision-R1: Evolving Human-Free Alignment in Large Vision-Language Models via Vision-Guided Reinforcement Learning

1 code implementation23 Mar 2025 Yufei Zhan, Yousong Zhu, Shurong Zheng, Hongyin Zhao, Fan Yang, Ming Tang, Jinqiao Wang

Large Vision-Language Models (LVLMs) typically follow a two-stage training paradigm: pretraining and supervised fine-tuning.

PhysVLM: Enabling Visual Language Models to Understand Robotic Physical Reachability

1 code implementation11 Mar 2025 Weijie Zhou, Manli Tao, Chaoyang Zhao, Haiyun Guo, Honghui Dong, Ming Tang, Jinqiao Wang

Specifically, the S-P Map abstracts a robot's physical reachability into a generalized spatial representation, independent of specific robot configurations, allowing the model to focus on reachability features rather than robot-specific parameters.

Visual Reasoning

Synthetic Data is an Elegant GIFT for Continual Vision-Language Models

no code implementations6 Mar 2025 Bin Wu, Wuxuan Shi, Jinqiao Wang, Mang Ye

Pre-trained Vision-Language Models (VLMs) require Continual Learning (CL) to efficiently update their knowledge and adapt to various downstream tasks without retraining from scratch.

Continual Learning Image Generation

FLARE: A Framework for Stellar Flare Forecasting using Stellar Physical Properties and Historical Records

no code implementations25 Feb 2025 Bingke Zhu, Xiaoxiao Wang, Minghui Jia, Yihan Tao, Xiao Kong, Ali Luo, Yingying Chen, Ming Tang, Jinqiao Wang

Stellar flare events are critical observational samples for astronomical research; however, recorded flare events remain limited.

A Benchmark for Crime Surveillance Video Analysis with Large Models

no code implementations13 Feb 2025 Haoran Chen, Dong Yi, Moyan Cao, Chensen Huang, Guibo Zhu, Jinqiao Wang

To fill this gap, we propose UCVL, a benchmark for crime surveillance video analysis with large models, comprising 1,829 videos and reorganized annotations from the UCF-Crime and UCF-Crime Annotation datasets.

Systematic Outliers in Large Language Models

1 code implementation10 Feb 2025 Yongqi An, Xu Zhao, Tao Yu, Ming Tang, Jinqiao Wang

Outliers have been widely observed in Large Language Models (LLMs), significantly impacting model performance and posing challenges for model compression.

Model Compression

MME-Industry: A Cross-Industry Multimodal Evaluation Benchmark

no code implementations28 Jan 2025 Dongyi Yi, Guibo Zhu, Chenglin Ding, Zongshu Li, Dong Yi, Jinqiao Wang

With the rapid advancement of Multimodal Large Language Models (MLLMs), numerous evaluation benchmarks have emerged.

MME Model Optimization +1

FiLo++: Zero-/Few-Shot Anomaly Detection by Fused Fine-Grained Descriptions and Deformable Localization

1 code implementation17 Jan 2025 Zhaopeng Gu, Bingke Zhu, Guibo Zhu, Yingying Chen, Ming Tang, Jinqiao Wang

However, their handcrafted generic descriptions fail to capture the diverse range of anomalies that may emerge in different objects, and simple patch-level image-text matching often struggles to localize anomalous regions of varying shapes and sizes.

Anomaly Detection Image-text matching +4

LINK: Adaptive Modality Interaction for Audio-Visual Video Parsing

no code implementations30 Dec 2024 Langyu Wang, Bingke Zhu, Yingying Chen, Jinqiao Wang

Audio-visual video parsing focuses on classifying videos through weak labels while identifying events as either visible, audible, or both, alongside their respective temporal boundaries.

Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence

no code implementations18 Dec 2024 Jinghan He, Kuan Zhu, Haiyun Guo, Junfeng Fang, Zhenglin Hua, Yuheng Jia, Ming Tang, Tat-Seng Chua, Jinqiao Wang

Large vision-language models (LVLMs) have made substantial progress in integrating large language models (LLMs) with visual inputs, enabling advanced multimodal reasoning.

Hallucination Multimodal Reasoning

UniVAD: A Training-free Unified Model for Few-shot Visual Anomaly Detection

no code implementations4 Dec 2024 Zhaopeng Gu, Bingke Zhu, Guibo Zhu, Yingying Chen, Ming Tang, Jinqiao Wang

Visual Anomaly Detection (VAD) aims to identify abnormal samples in images that deviate from normal patterns, covering multiple domains, including industrial, logical, and medical fields.

Anomaly Detection Patch Matching

Friend or Foe? Harnessing Controllable Overfitting for Anomaly Detection

no code implementations30 Nov 2024 Long Qian, Bingke Zhu, Yingying Chen, Ming Tang, Jinqiao Wang

Overfitting has long been stigmatized as detrimental to model performance, especially in the context of anomaly detection.

Multi-class Anomaly Detection

SEEKR: Selective Attention-Guided Knowledge Retention for Continual Learning of Large Language Models

1 code implementation9 Nov 2024 Jinghan He, Haiyun Guo, Kuan Zhu, Zihan Zhao, Ming Tang, Jinqiao Wang

In this work, we first explore and emphasize the importance of attention weights in knowledge retention, and then propose a SElective attEntion-guided Knowledge Retention method (SEEKR) for data-efficient replay-based continual learning of large language models (LLMs).

Continual Learning
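The attention-guided retention idea can be illustrated by penalizing drift on the attention maps of only the most important heads. A minimal sketch, where the head-importance scores and the plain MSE penalty are assumptions for illustration, not SEEKR's exact criterion:

```python
import numpy as np

def attention_retention_loss(attn_old, attn_new, head_importance, k=2):
    # Select the top-k heads by an (assumed) importance score and penalize
    # deviation of the new model's attention maps from the old model's.
    top = np.argsort(head_importance)[-k:]
    diff = attn_old[top] - attn_new[top]
    return float(np.mean(diff ** 2))
```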

Griffon-G: Bridging Vision-Language and Vision-Centric Tasks via Large Multimodal Models

1 code implementation21 Oct 2024 Yufei Zhan, Hongyin Zhao, Yousong Zhu, Fan Yang, Ming Tang, Jinqiao Wang

None of the LMMs have yet comprehensively unified both types of tasks within a single model, as seen in Large Language Models in the natural language processing field.

Instruction Following object-detection +6

Enhancing Text-to-SQL Capabilities of Large Language Models via Domain Database Knowledge Injection

no code implementations24 Sep 2024 Xingyu Ma, Xin Tian, Lingxiang Wu, Xuepeng Wang, Xueming Tang, Jinqiao Wang

However, LLMs face challenges due to hallucination issues and a lack of domain-specific database knowledge (such as table schema and cell values).

Hallucination Text-To-SQL

MROVSeg: Breaking the Resolution Curse of Vision-Language Models in Open-Vocabulary Image Segmentation

no code implementations27 Aug 2024 Yuanbing Zhu, Bingke Zhu, Yingying Chen, Yunfang Niu, Ming Tang, Jinqiao Wang

Pretrained vision-language models (VLMs), e.g., CLIP, are increasingly used to bridge the gap between open- and close-vocabulary recognition in open-vocabulary image segmentation.

Image Segmentation Open Vocabulary Semantic Segmentation +2

AnyDesign: Versatile Area Fashion Editing via Mask-Free Diffusion

no code implementations21 Aug 2024 Yunfang Niu, Lingxiang Wu, Dong Yi, Jie Peng, Ning Jiang, Haiying Wu, Jinqiao Wang

Moreover, these methods are limited in the variety of clothing types they can handle, as most datasets focus on people in clean backgrounds and only include generic garments such as tops, pants, and dresses.

Auto DragGAN: Editing the Generative Image Manifold in an Autoregressive Manner

no code implementations26 Jul 2024 Pengxiang Cai, Zhiwei Liu, Guibo Zhu, Yunfang Niu, Jinqiao Wang

Specifically, we develop a transformer encoder-decoder based network named 'Latent Predictor' to predict the latent code motion trajectories from handle points to target points in an autoregressive manner.

Recurrent Context Compression: Efficiently Expanding the Context Window of LLM

1 code implementation10 Jun 2024 Chensen Huang, Guibo Zhu, Xuepeng Wang, Yifei Luo, Guojing Ge, Haoran Chen, Dong Yi, Jinqiao Wang

To extend the context length of Transformer-based large language models (LLMs) and improve comprehension capabilities, we often face limitations due to computational resources and bounded memory storage capacity.

Long-Context Understanding Question Answering +2

VS-Assistant: Versatile Surgery Assistant on the Demand of Surgeons

no code implementations14 May 2024 Zhen Chen, Xingjian Luo, Jinlin Wu, Danny T. M. Chan, Zhen Lei, Jinqiao Wang, Sebastien Ourselin, Hongbin Liu

In this work, by leveraging advanced multimodal large language models (MLLMs), we propose a Versatile Surgery Assistant (VS-Assistant) that can accurately understand the surgeon's intention and complete a series of surgical understanding tasks, e.g., surgical scene analysis, surgical instrument detection, and segmentation on demand.

Pattern-Aware Chain-of-Thought Prompting in Large Language Models

no code implementations23 Apr 2024 Yufeng Zhang, Xuepeng Wang, Lingxiang Wu, Jinqiao Wang

In this paper, we propose Pattern-Aware CoT, a prompting method that considers the diversity of demonstration patterns.

Diversity

PM-VIS: High-Performance Box-Supervised Video Instance Segmentation

no code implementations22 Apr 2024 Zhangjing Yang, Dun Liu, Wensheng Cheng, Jinqiao Wang, Yi Wu

Our PM-VIS model, trained with high-quality pseudo mask annotations, demonstrates strong ability in instance mask prediction, achieving state-of-the-art performance on the YouTube-VIS 2019, YouTube-VIS 2021, and OVIS validation sets, notably narrowing the gap between box-supervised and fully supervised VIS methods.

Instance Segmentation Semantic Segmentation +1

FiLo: Zero-Shot Anomaly Detection by Fine-Grained Description and High-Quality Localization

1 code implementation21 Apr 2024 Zhaopeng Gu, Bingke Zhu, Guibo Zhu, Yingying Chen, Hao Li, Ming Tang, Jinqiao Wang

Zero-shot anomaly detection (ZSAD) methods entail detecting anomalies directly without access to any known normal or abnormal samples within the target item categories.

Anomaly Detection Position +1

Optimization of Prompt Learning via Multi-Knowledge Representation for Vision-Language Models

no code implementations16 Apr 2024 Enming Zhang, Bingke Zhu, Yingying Chen, Qinghai Miao, Ming Tang, Jinqiao Wang

This limitation restricts the capabilities of pretrained VLMs and can result in incorrect predictions in downstream tasks.

Diversity

Griffon v2: Advancing Multimodal Perception with High-Resolution Scaling and Visual-Language Co-Referring

1 code implementation14 Mar 2024 Yufei Zhan, Yousong Zhu, Hongyin Zhao, Fan Yang, Ming Tang, Jinqiao Wang

Large Vision Language Models have achieved fine-grained object perception, but the limitation of image resolution remains a significant obstacle to surpassing the performance of task-specific experts in complex and dense scenarios.

Object Object Counting +3

PFDM: Parser-Free Virtual Try-on via Diffusion Model

no code implementations5 Feb 2024 Yunfang Niu, Dong Yi, Lingxiang Wu, Zhiwei Liu, Pengxiang Cai, Jinqiao Wang

Virtual try-on can significantly improve the garment shopping experiences in both online and in-store scenarios, attracting broad interest in computer vision.

Virtual Try-on

Self-Supervised Representation Learning from Arbitrary Scenarios

no code implementations CVPR 2024 Zhaowen Li, Yousong Zhu, Zhiyang Chen, Zongxin Gao, Rui Zhao, Chaoyang Zhao, Ming Tang, Jinqiao Wang

To address this conflict, this work abandons the non-generalizable global-level constraints and proposes explicit patch-level contrastive learning as a solution.

Contrastive Learning Data Augmentation +2

Mitigating Hallucination in Visual Language Models with Visual Supervision

no code implementations27 Nov 2023 Zhiyang Chen, Yousong Zhu, Yufei Zhan, Zhaowen Li, Chaoyang Zhao, Jinqiao Wang, Ming Tang

Large vision-language models (LVLMs) suffer heavily from hallucination, occasionally generating responses that plainly contradict the image content.

Hallucination

Continual Instruction Tuning for Large Multimodal Models

no code implementations27 Nov 2023 Jinghan He, Haiyun Guo, Ming Tang, Jinqiao Wang

2) Are the existing three classes of continual learning methods still applicable to the continual instruction tuning of LMMs?

Continual Learning

Surgical Temporal Action-aware Network with Sequence Regularization for Phase Recognition

no code implementations21 Nov 2023 Zhen Chen, Yuhao Zhai, Jun Zhang, Jinqiao Wang

Specifically, we propose an efficient multi-scale surgical temporal action (MS-STA) module, which integrates visual features with spatial and temporal knowledge of surgical actions at the cost of 2D networks.

Surgical phase recognition

ChineseWebText: Large-scale High-quality Chinese Web Text Extracted with Effective Evaluation Model

1 code implementation2 Nov 2023 Jianghao Chen, Pu Jian, Tengxiao Xi, Dongyi Yi, Qianlong Du, Chenglin Ding, Guibo Zhu, Chengqing Zong, Jinqiao Wang, Jiajun Zhang

Using our proposed approach, we release ChineseWebText, the largest and latest large-scale high-quality Chinese web text corpus, which comprises 1.42 TB of data in which each text is associated with a quality score, allowing LLM researchers to select data according to a desired quality threshold.
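Selecting data by the released per-text quality scores reduces to a simple threshold filter. A sketch, with the record field names (`text`, `score`) assumed for illustration rather than taken from the dataset's actual schema:

```python
def filter_by_quality(corpus, threshold=0.9):
    # Keep only texts whose model-assigned quality score meets the threshold.
    # The dict keys here are assumptions, not the dataset's real field names.
    return [doc["text"] for doc in corpus if doc["score"] >= threshold]
```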

SSPFusion: A Semantic Structure-Preserving Approach for Infrared and Visible Image Fusion

no code implementations26 Sep 2023 Qiao Yang, Yu Zhang, Jian Zhang, Zijing Zhao, Shunli Zhang, Jinqiao Wang, Junzhe Chen

Most existing learning-based infrared and visible image fusion (IVIF) methods exhibit massive redundant information in the fused images, i.e., yielding edge-blurring effects or outputs unrecognizable to object detectors.

Infrared And Visible Image Fusion

AnomalyGPT: Detecting Industrial Anomalies Using Large Vision-Language Models

1 code implementation29 Aug 2023 Zhaopeng Gu, Bingke Zhu, Guibo Zhu, Yingying Chen, Ming Tang, Jinqiao Wang

Large Vision-Language Models (LVLMs) such as MiniGPT-4 and LLaVA have demonstrated the capability of understanding images and achieved remarkable performance in various visual tasks.

Anomaly Detection In-Context Learning

Fast Segment Anything

1 code implementation21 Jun 2023 Xu Zhao, Wenchao Ding, Yongqi An, Yinglong Du, Tao Yu, Min Li, Ming Tang, Jinqiao Wang

In this paper, we propose a speed-up alternative method for this fundamental task with comparable performance.

Edge Detection Image Segmentation +6

FreConv: Frequency Branch-and-Integration Convolutional Networks

no code implementations10 Apr 2023 Zhaowen Li, Xu Zhao, Peigeng Ding, Zongxin Gao, Yuting Yang, Ming Tang, Jinqiao Wang

In the high-frequency branch, a derivative-filter-like architecture is designed to extract the high-frequency information, while a light extractor is employed in the low-frequency branch because the low-frequency information is usually redundant.

ZBS: Zero-shot Background Subtraction via Instance-level Background Modeling and Foreground Selection

1 code implementation CVPR 2023 Yongqi An, Xu Zhao, Tao Yu, Haiyun Guo, Chaoyang Zhao, Ming Tang, Jinqiao Wang

However, previous unsupervised deep learning BGS algorithms perform poorly in sophisticated scenarios such as shadows or night lights, and they cannot detect objects outside the pre-defined categories.

Foreground Segmentation Object +2

Efficient Masked Autoencoders with Self-Consistency

no code implementations28 Feb 2023 Zhaowen Li, Yousong Zhu, Zhiyang Chen, Wei Li, Chaoyang Zhao, Rui Zhao, Ming Tang, Jinqiao Wang

Besides, we design the self-consistency learning to further maintain the consistency of predictions of overlapping masked patches among parts.

Image Classification Language Modeling +5

Temporal-Channel Topology Enhanced Network for Skeleton-Based Action Recognition

1 code implementation25 Feb 2023 Jinzhao Luo, Lu Zhou, Guibo Zhu, Guojing Ge, Beiying Yang, Jinqiao Wang

Most current methods adopt graph convolutional network (GCN) for topology modeling, but GCN-based methods are limited in long-distance correlation modeling and generalizability.

Action Recognition Skeleton Based Action Recognition

Masked Contrastive Pre-Training for Efficient Video-Text Retrieval

no code implementations2 Dec 2022 Fangxun Shu, Biaolong Chen, Yue Liao, Shuwen Xiao, Wenyu Sun, Xiaobo Li, Yousong Zhu, Jinqiao Wang, Si Liu

Our MAC aims to reduce video representation's spatial and temporal redundancy in the VidLP model by a mask sampling mechanism to improve pre-training efficiency.

Ranked #40 on Video Retrieval on MSR-VTT-1kA (using extra training data)

Image-text Retrieval Text Retrieval +1

Transfering Low-Frequency Features for Domain Adaptation

no code implementations31 Aug 2022 Zhaowen Li, Xu Zhao, Chaoyang Zhao, Ming Tang, Jinqiao Wang

Previous unsupervised domain adaptation methods in computer vision did not handle the cross-domain problem from the perspective of frequency.

Image Classification object-detection +2

Plug-and-Play Pseudo Label Correction Network for Unsupervised Person Re-identification

no code implementations14 Jun 2022 Tianyi Yan, Kuan Zhu, Haiyun Guo, Guibo Zhu, Ming Tang, Jinqiao Wang

Clustering-based methods, which alternate between the generation of pseudo labels and the optimization of the feature extraction network, play a dominant role in both unsupervised learning (USL) and unsupervised domain adaptive (UDA) person re-identification (Re-ID).

Clustering Pseudo Label +1

UniVIP: A Unified Framework for Self-Supervised Visual Pre-training

no code implementations CVPR 2022 Zhaowen Li, Yousong Zhu, Fan Yang, Wei Li, Chaoyang Zhao, Yingying Chen, Zhiyang Chen, Jiahao Xie, Liwei Wu, Rui Zhao, Ming Tang, Jinqiao Wang

Furthermore, our method can also exploit single-centric-object datasets such as ImageNet, outperforms BYOL by 2.5% with the same pre-training epochs in linear probing, and surpasses current self-supervised object detection methods on the COCO dataset, demonstrating its universality and potential.

Image Classification Object +4

Pruning-aware Sparse Regularization for Network Pruning

1 code implementation18 Jan 2022 Nanfei Jiang, Xu Zhao, Chaoyang Zhao, Yongqi An, Ming Tang, Jinqiao Wang

MaskSparsity imposes the fine-grained sparse regularization on the specific filters selected by a pruning mask, rather than all the filters of the model.

Network Pruning
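The masked regularization described above can be sketched as an L1 penalty restricted to the filters the pruning mask selects, here applied to batch-norm scale factors; the coefficient and the choice of BN scales as the target are assumptions for illustration:

```python
import numpy as np

def masked_l1_penalty(bn_scales, prune_mask, coeff=1e-4):
    # Apply the L1 sparsity penalty only to filters flagged by the pruning
    # mask (mask == 1), leaving the filters to be kept unregularized.
    return coeff * float(np.sum(np.abs(bn_scales) * prune_mask))
```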

Multi-initialization Optimization Network for Accurate 3D Human Pose and Shape Estimation

no code implementations24 Dec 2021 Zhiwei Liu, Xiangyu Zhu, Lu Yang, Xiang Yan, Ming Tang, Zhen Lei, Guibo Zhu, Xuetao Feng, Yan Wang, Jinqiao Wang

In the second stage, we design a mesh refinement transformer (MRT) to respectively refine each coarse reconstruction result via a self-attention mechanism.

Ranked #74 on 3D Human Pose Estimation on 3DPW (MPJPE metric)

3D human pose and shape estimation 3D Reconstruction

DPT: Deformable Patch-based Transformer for Visual Recognition

1 code implementation30 Jul 2021 Zhiyang Chen, Yousong Zhu, Chaoyang Zhao, Guosheng Hu, Wei Zeng, Jinqiao Wang, Ming Tang

To address this problem, we propose a new Deformable Patch (DePatch) module which learns to adaptively split the images into patches with different positions and scales in a data-driven way rather than using predefined fixed patches.

Image Classification object-detection +2

OPT: Omni-Perception Pre-Trainer for Cross-Modal Understanding and Generation

2 code implementations1 Jul 2021 Jing Liu, Xinxin Zhu, Fei Liu, Longteng Guo, Zijia Zhao, Mingzhen Sun, Weining Wang, Hanqing Lu, Shiyu Zhou, Jiajun Zhang, Jinqiao Wang

In this paper, we propose an Omni-perception Pre-Trainer (OPT) for cross-modal understanding and generation, by jointly modeling visual, text and audio resources.

Audio to Text Retrieval Cross-Modal Retrieval +4

Improving Multiple Object Tracking With Single Object Tracking

no code implementations CVPR 2021 Linyu Zheng, Ming Tang, Yingying Chen, Guibo Zhu, Jinqiao Wang, Hanqing Lu

Despite considerable similarities between multiple object tracking (MOT) and single object tracking (SOT) tasks, modern MOT methods have not benefited from the development of SOT ones to achieve satisfactory performance.

Multiple Object Tracking Object +2

MST: Masked Self-Supervised Transformer for Visual Representation

no code implementations NeurIPS 2021 Zhaowen Li, Zhiyang Chen, Fan Yang, Wei Li, Yousong Zhu, Chaoyang Zhao, Rui Deng, Liwei Wu, Rui Zhao, Ming Tang, Jinqiao Wang

More importantly, the masked tokens together with the remaining tokens are further recovered by a global image decoder, which preserves the spatial information of the image and is more friendly to the downstream dense prediction tasks.

Language Modeling Language Modelling +5

Adaptive Class Suppression Loss for Long-Tail Object Detection

1 code implementation CVPR 2021 Tong Wang, Yousong Zhu, Chaoyang Zhao, Wei Zeng, Jinqiao Wang, Ming Tang

To address the problem of long-tail distribution for the large vocabulary object detection task, existing methods usually divide the whole categories into several groups and treat each group with different strategies.

Object object-detection +1

AAformer: Auto-Aligned Transformer for Person Re-Identification

no code implementations2 Apr 2021 Kuan Zhu, Haiyun Guo, Shiliang Zhang, YaoWei Wang, Jing Liu, Jinqiao Wang, Ming Tang

In this article, we introduce an alignment scheme in transformer architecture for the first time and propose the auto-aligned transformer (AAformer) to automatically locate both the human parts and nonhuman ones at patch level.

Human Parsing Image Classification +3

High-Performance Discriminative Tracking With Transformers

no code implementations ICCV 2021 Bin Yu, Ming Tang, Linyu Zheng, Guibo Zhu, Jinqiao Wang, Hao Feng, Xuetao Feng, Hanqing Lu

End-to-end discriminative trackers improve the state of the art significantly, yet the improvement in robustness and efficiency is restricted by the conventional discriminative model, i.e., least-squares-based regression.

Decoder Object +2

Task Decoupled Knowledge Distillation For Lightweight Face Detectors

1 code implementation14 Oct 2020 Xiaoqing Liang, Xu Zhao, Chaoyang Zhao, Nanfei Jiang, Ming Tang, Jinqiao Wang

This method decouples the distillation task of face detection into two subtasks, i.e., the classification distillation subtask and the regression distillation subtask.

Face Detection Knowledge Distillation +1
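The decoupling amounts to computing and weighting the two distillation terms separately. A minimal sketch, with plain MSE losses standing in for the paper's task-specific distillation losses (an assumption):

```python
import numpy as np

def decoupled_distillation(stu_cls, tea_cls, stu_reg, tea_reg,
                           w_cls=1.0, w_reg=1.0):
    # Separate classification- and regression-distillation terms, each with
    # its own weight; MSE is a stand-in for the paper's actual losses.
    cls_loss = float(np.mean((stu_cls - tea_cls) ** 2))
    reg_loss = float(np.mean((stu_reg - tea_reg) ** 2))
    return w_cls * cls_loss + w_reg * reg_loss
```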

Identity-Guided Human Semantic Parsing for Person Re-Identification

1 code implementation ECCV 2020 Kuan Zhu, Haiyun Guo, Zhiwei Liu, Ming Tang, Jinqiao Wang

In this paper, we propose the identity-guided human semantic parsing approach (ISP) to locate both the human body parts and personal belongings at pixel-level for aligned person re-ID only with person identity labels.

Clustering Human Parsing +3

Learning Feature Embeddings for Discriminant Model based Tracking

no code implementations ECCV 2020 Linyu Zheng, Ming Tang, Yingying Chen, Jinqiao Wang, Hanqing Lu

After observing that the features used in most online discriminatively trained trackers are not optimal, in this paper, we propose a novel and effective architecture to learn optimal feature embeddings for online discriminative tracking.

Visual Tracking

Fast Kernelized Correlation Filters without Boundary Effect

no code implementations17 Jun 2018 Ming Tang, Linyu Zheng, Bin Yu, Jinqiao Wang

To achieve the fast training and detection, a set of cyclic bases is introduced to construct the filter.

Visual Tracking
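The cyclic-basis idea builds on the classical Fourier diagonalization of circulant matrices used by kernelized correlation filters. A minimal linear-kernel sketch of that classical closed form (which still exhibits the boundary effect this paper removes):

```python
import numpy as np

def cf_train(x, y, lam=1e-2):
    # Ridge regression over all cyclic shifts of x, solved in closed form
    # in the Fourier domain (classic linear-kernel correlation filter).
    xf = np.fft.fft2(x)
    kf = xf * np.conj(xf)              # autocorrelation spectrum
    return np.fft.fft2(y) / (kf + lam)

def cf_detect(alphaf, x, z):
    # Response map for a candidate patch z given the training patch x.
    kzf = np.fft.fft2(x) * np.conj(np.fft.fft2(z))
    return np.real(np.fft.ifft2(alphaf * kzf))
```

Training on a patch with a delta-shaped target and re-detecting the same patch should place the response peak at the origin.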

On the Relations of Correlation Filter Based Trackers and Struck

no code implementations25 Nov 2017 Jinqiao Wang, Ming Tang, Linyu Zheng, Jiayi Feng

In recent years, two types of trackers, namely correlation filter based tracker (CF tracker) and structured output tracker (Struck), have exhibited the state-of-the-art performance.

Relation

CoupleNet: Coupling Global Structure with Local Parts for Object Detection

3 code implementations ICCV 2017 Yousong Zhu, Chaoyang Zhao, Jinqiao Wang, Xu Zhao, Yi Wu, Hanqing Lu

To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection.

Object object-detection +3

Fast Deep Matting for Portrait Animation on Mobile Phone

1 code implementation26 Jul 2017 Bingke Zhu, Yingying Chen, Jinqiao Wang, Si Liu, Bo Zhang, Ming Tang

Finally, an automatic portrait animation system based on fast deep matting is built on mobile devices, which does not need any interaction and can realize real-time matting with 15 fps.

Image Matting Portrait Animation +1

Joint Background Reconstruction and Foreground Segmentation via A Two-stage Convolutional Neural Network

no code implementations24 Jul 2017 Xu Zhao, Yingying Chen, Ming Tang, Jinqiao Wang

In the first stage, a convolutional encoder-decoder sub-network is employed to reconstruct the background images and encode rich prior knowledge of background scenes.

Decoder Foreground Segmentation +1

Learning Adaptive Receptive Fields for Deep Image Parsing Network

no code implementations CVPR 2017 Zhen Wei, Yao Sun, Jinqiao Wang, Hanjiang Lai, Si Liu

In this paper, we introduce a novel approach to automatically regulate receptive fields in deep image parsing networks.

Face Parsing

Relaxing From Vocabulary: Robust Weakly-Supervised Deep Learning for Vocabulary-Free Image Tagging

no code implementations ICCV 2015 Jianlong Fu, Yue Wu, Tao Mei, Jinqiao Wang, Hanqing Lu, Yong Rui

The development of deep learning has empowered machines with comparable capability of recognizing limited image categories to human beings.
