Search Results for author: Qiang Ling

Found 11 papers, 6 papers with code

You Only Train Once: Learning a General Anomaly Enhancement Network with Random Masks for Hyperspectral Anomaly Detection

1 code implementation • 31 Mar 2023 • Zhaoxu Li, Yingqian Wang, Chao Xiao, Qiang Ling, Zaiping Lin, Wei An

Trained on a set of anomaly-free hyperspectral images with random masks, our network can learn the spatial context characteristics between anomalies and background in an unsupervised way.

Anomaly Detection • Model Selection
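
As a rough illustration of the random-mask training idea described in the abstract above, the sketch below hides random spatial blocks of an anomaly-free hyperspectral cube so that a network could be trained to reconstruct them from context. This is a minimal sketch only; the mask shape, block sizes, and function names are assumptions, not the authors' implementation.

    import numpy as np

    def random_block_mask(height, width, num_blocks=8, max_block=16, rng=None):
        # Binary spatial mask with a few randomly placed square blocks zeroed out.
        rng = rng or np.random.default_rng()
        mask = np.ones((height, width), dtype=np.float32)
        for _ in range(num_blocks):
            size = int(rng.integers(2, max_block + 1))
            top = int(rng.integers(0, height - size + 1))
            left = int(rng.integers(0, width - size + 1))
            mask[top:top + size, left:left + size] = 0.0
        return mask

    # Stand-in for a real anomaly-free hyperspectral cube of shape (H, W, bands).
    cube = np.random.rand(64, 64, 100).astype(np.float32)
    mask = random_block_mask(64, 64)
    masked_cube = cube * mask[..., None]  # network input; the unmasked cube would be the target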

Micro Expression Generation with Thin-plate Spline Motion Model and Face Parsing

3 code implementations • MM '22: Proceedings of the 30th ACM International Conference on Multimedia, 2022 • Jun Yu, Guochen Xie, Zhongpeng Cai, Peng He, Fang Gao, Qiang Ling

We (Team: USTC-IAT-United) also compare our method with those of other competitors in MEGC2022, and the expert evaluation results show that ours performs best, which verifies its effectiveness.

Face Parsing • Micro-expression Generation • +2

Pseudo-Label Generation and Various Data Augmentation for Semi-Supervised Hyperspectral Object Detection

1 code implementation • Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022 • Jun Yu, Liwen Zhang, Shenshen Du, Hao Chang, Keda Lu, Zhong Zhang, Ye Yu, Lei Wang, Qiang Ling

To overcome these difficulties, this paper first selects a small set of suitable data augmentation methods, matched to the characteristics of hyperspectral images, to improve the accuracy of the supervised model trained on the labeled training set.

Data Augmentation • object-detection • +3

A viable framework for semi-supervised learning on realistic dataset

2 code implementations • Machine Learning, 2022 • Hao Chang, Guochen Xie, Jun Yu, Qiang Ling, Fang Gao, Ye Yu

Semi-supervised Fine-Grained Recognition is a challenging task due to data imbalance, high inter-class similarity, and domain mismatch.

Ranking-Based Siamese Visual Tracking

1 code implementation • CVPR 2022 • Feng Tang, Qiang Ling

Current Siamese-based trackers mainly formulate visual tracking as two independent subtasks: classification and localization.

Classification • Visual Tracking

Multi-model Ensemble Learning Method for Human Expression Recognition

no code implementations • 28 Mar 2022 • Jun Yu, Zhongpeng Cai, Peng He, Guocheng Xie, Qiang Ling

Moreover, we introduce a multi-fold ensemble method that trains and ensembles several models with the same architecture but different data distributions to enhance the performance of our solution.

Ensemble Learning
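
The multi-fold ensemble mentioned above can be sketched generically as training one model per fold and averaging their predicted probabilities. The snippet below uses scikit-learn's KFold and LogisticRegression purely as stand-ins for the authors' expression-recognition models; the fold count and the probability-averaging scheme are assumptions.

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.linear_model import LogisticRegression

    def multifold_ensemble_proba(X, y, X_test, n_folds=5):
        # Train one model per fold (same architecture, different data distribution)
        # and average their predicted class probabilities on the test set.
        probs = []
        for train_idx, _ in KFold(n_splits=n_folds, shuffle=True, random_state=0).split(X):
            model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
            probs.append(model.predict_proba(X_test))
        return np.mean(probs, axis=0)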

Cognitive Diagnosis with Explicit Student Vector Estimation and Unsupervised Question Matrix Learning

no code implementations • 1 Mar 2022 • Lu Dong, ZhenHua Ling, Qiang Ling, Zefeng Lai

Then, based on the estimated student vectors, the probabilistic part of DINA can be modified into a student-dependent model in which the slip and guess rates are related to the student vectors.

cognitive diagnosis
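
For reference, the standard DINA item response function, which the student-dependent variant above modifies, can be written as follows (standard notation, not taken from the paper):

    P(X_{ij} = 1 \mid \alpha_i) = (1 - s_j)^{\eta_{ij}} \, g_j^{\,1 - \eta_{ij}},
    \qquad \eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}}

where s_j and g_j are the slip and guess rates of question j, \alpha_i is the skill profile of student i, and q_{jk} is the entry of the question (Q) matrix. The paper's modification makes s_j and g_j depend on the estimated student vector.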

BiSTF: Bilateral-Branch Self-Training Framework for Semi-Supervised Large-scale Fine-Grained Recognition

no code implementations • 14 Jul 2021 • Hao Chang, Guochen Xie, Jun Yu, Qiang Ling

Semi-supervised Fine-Grained Recognition is a challenging task due to data imbalance, high inter-class similarity, and domain mismatch.

Adaptively Meshed Video Stabilization

no code implementations • 14 Jun 2020 • Minda Zhao, Qiang Ling

Moreover, foreground and background feature trajectories are no longer distinguished; both contribute to the estimation of camera motion in the proposed optimization problem, which yields better estimation performance than previous works, particularly on challenging videos with large foreground objects or strong parallax.

Blocking • Motion Estimation • +1

Bicycle Detection Based On Multi-feature and Multi-frame Fusion in low-resolution traffic videos

no code implementations • 11 Jun 2017 • Yi-Cheng Zhang, Qiang Ling

Bicycle detection is thus one major task of traffic video surveillance systems in China.
