Search Results for author: Qingchao Chen

Found 15 papers, 8 papers with code

Amplifying Key Cues for Human-Object-Interaction Detection

no code implementations ECCV 2020 Yang Liu, Qingchao Chen, Andrew Zisserman

In this paper we introduce two methods to amplify key cues in the image, and also a method to combine these and other cues when considering the interaction between a human and an object.

Human-Object Interaction Detection · Object

Progressive trajectory matching for medical dataset distillation

no code implementations 20 Mar 2024 Zhen Yu, Yang Liu, Qingchao Chen

To solve these barriers, we propose to design a novel progressive trajectory matching strategy to improve the training stability for medical image dataset distillation.

Transfer Learning
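No code is listed for the trajectory-matching paper above, so the following is only a minimal, generic sketch of the underlying trajectory-matching idea for dataset distillation (matching a student trained on synthetic data against expert checkpoints trained on real data); it does not reproduce the paper's progressive strategy, and all names (`student_net`, `expert_start`, `expert_end`, the functional forward signature) are illustrative assumptions.

```python
# Generic trajectory-matching objective for dataset distillation (sketch only).
# Assumes `student_net(images, flat_params)` performs a functional forward pass
# with the given flattened parameter vector.
import torch


def trajectory_matching_loss(student_net, syn_images, syn_labels,
                             expert_start, expert_end,
                             inner_steps=10, inner_lr=0.01):
    """Train a student from the expert's start checkpoint on synthetic data,
    then penalise its distance to the expert's end checkpoint."""
    criterion = torch.nn.CrossEntropyLoss()

    # Start from the expert's initial parameters (flattened into one vector).
    flat_params = torch.cat([p.reshape(-1) for p in expert_start]).detach().requires_grad_(True)

    for _ in range(inner_steps):
        logits = student_net(syn_images, flat_params)
        loss = criterion(logits, syn_labels)
        grad = torch.autograd.grad(loss, flat_params, create_graph=True)[0]
        flat_params = flat_params - inner_lr * grad  # differentiable SGD step

    target = torch.cat([p.reshape(-1) for p in expert_end])
    start = torch.cat([p.reshape(-1) for p in expert_start])

    # Normalised parameter-matching loss; gradients flow back into syn_images.
    return ((flat_params - target) ** 2).sum() / ((start - target) ** 2).sum()
```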

EviPrompt: A Training-Free Evidential Prompt Generation Method for Segment Anything Model in Medical Images

no code implementations 10 Nov 2023 Yinsong Xu, Jiaqi Tang, Aidong Men, Qingchao Chen

Then, we incorporate the human prior into the prompts, which is vital for alleviating the domain gap between natural and medical images and enhancing the applicability and usefulness of SAM in medical scenarios.

Image Segmentation · Medical Image Segmentation · +1

Efficient Adaptive Human-Object Interaction Detection with Concept-guided Memory

1 code implementation ICCV 2023 Ting Lei, Fabian Caba, Qingchao Chen, Hailin Jin, Yuxin Peng, Yang Liu

This observation motivates us to design an HOI detector that can be trained even with long-tailed labeled data and can leverage existing knowledge from pre-trained models.

Human-Object Interaction Detection · Retrieval

Incorporating Pre-training Data Matters in Unsupervised Domain Adaptation

no code implementations 6 Aug 2023 Yinsong Xu, Aidong Men, Yang Liu, Qingchao Chen

To answer the first question, we empirically observed an interesting Spontaneous Pulling (SP) Effect in fine-tuning, where the discrepancies between any two of the three domains (ImageNet, Source, Target) decrease, but at the cost of impairing the semantic structure of the pre-training domain.

Unsupervised Domain Adaptation

Confidence-aware Pseudo-label Learning for Weakly Supervised Visual Grounding

1 code implementation ICCV 2023 Yang Liu, Jiahua Zhang, Qingchao Chen, Yuxin Peng

Visual grounding aims at localizing the target object in an image that is most related to the given free-form natural language query.

Descriptive · Object · +5

Masked Retraining Teacher-Student Framework for Domain Adaptive Object Detection

1 code implementation ICCV 2023 Zijing Zhao, Sitong Wei, Qingchao Chen, Dehui Li, Yifan Yang, Yuxin Peng, Yang Liu

This helps the student model capture target domain characteristics and become a more data-efficient learner to gain knowledge from the limited number of pseudo boxes.

object-detection · Object Detection · +1
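For background on the teacher-student entry above: methods in this family typically rely on an exponential-moving-average (EMA) teacher and confidence-filtered pseudo boxes. The sketch below shows only these generic ingredients, assuming PyTorch modules; it is not the paper's masked-retraining mechanism, and the threshold value is an illustrative assumption.

```python
# Generic teacher-student building blocks for domain-adaptive detection:
# EMA teacher update and confidence filtering of teacher detections.
import torch


@torch.no_grad()
def update_teacher_ema(teacher, student, momentum=0.999):
    """Blend student weights into the teacher with an exponential moving average."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)


@torch.no_grad()
def filter_pseudo_boxes(boxes, scores, labels, threshold=0.8):
    """Keep only high-confidence teacher detections as pseudo ground truth."""
    keep = scores >= threshold
    return boxes[keep], labels[keep]
```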

Uncertainty-Induced Transferability Representation for Source-Free Unsupervised Domain Adaptation

1 code implementation 30 Aug 2022 Jiangbo Pei, Zhuqing Jiang, Aidong Men, Liang Chen, Yang Liu, Qingchao Chen

Secondly, based on the UTR, we propose a novel Calibrated Adaption Framework (CAF) for SFUDA, including i) the source knowledge calibration module that guides the target model to learn the transferable source knowledge and discard the non-transferable one, and ii) the target semantics calibration module that calibrates the unreliable semantics.

Unsupervised Domain Adaptation

Delving into the Continuous Domain Adaptation

1 code implementation 28 Aug 2022 Yinsong Xu, Zhuqing Jiang, Aidong Men, Yang Liu, Qingchao Chen

Existing domain adaptation methods assume that domain discrepancies are caused by a few discrete attributes and variations, e.g., art, real, painting, quickdraw, etc.

Attribute · Domain Adaptation

Seeing your sleep stage: cross-modal distillation from EEG to infrared video

1 code implementation 11 Aug 2022 Jianan Han, Shaoxing Zhang, Aidong Men, Yang Liu, Ziming Yao, Yan Yan, Qingchao Chen

$S^3VE$ is a large-scale dataset of synchronized infrared video and EEG signals for sleep stage classification, comprising 105 subjects and 154,573 video clips totalling more than 1,100 hours.

EEG

Weakly Supervised Temporal Sentence Grounding With Gaussian-Based Contrastive Proposal Learning

1 code implementation CVPR 2022 Minghang Zheng, Yanjie Huang, Qingchao Chen, Yuxin Peng, Yang Liu

Moreover, they train their model to distinguish positive visual-language pairs from negative ones randomly collected from other videos, ignoring the highly confusing video segments within the same video.

Model Optimization · Sentence · +1
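The entry above contrasts positive visual-language pairs against negatives, including confusing segments from the same video. As a generic illustration only (not the paper's Gaussian-based proposal learning), an InfoNCE-style loss between a query and video-proposal features could look like the sketch below; the feature shapes and the temperature value are assumptions.

```python
# Illustrative InfoNCE-style contrastive loss between a sentence query and
# video proposals. Hard negatives mined from the same video would simply be
# extra rows in `negative_feats`.
import torch
import torch.nn.functional as F


def proposal_contrastive_loss(query_feat, positive_feat, negative_feats, tau=0.07):
    """query_feat: (D,); positive_feat: (D,); negative_feats: (N, D)."""
    query = F.normalize(query_feat, dim=-1)
    pos = F.normalize(positive_feat, dim=-1)
    negs = F.normalize(negative_feats, dim=-1)

    pos_logit = (query * pos).sum(-1, keepdim=True) / tau   # (1,)
    neg_logits = negs @ query / tau                         # (N,)
    logits = torch.cat([pos_logit, neg_logits])             # (N+1,), positive at index 0

    target = torch.zeros(1, dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits.unsqueeze(0), target)
```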

Adaptive Cross-Modal Prototypes for Cross-Domain Visual-Language Retrieval

no code implementations CVPR 2021 Yang Liu, Qingchao Chen, Samuel Albanie

In this paper, we study the task of visual-text retrieval in the highly practical setting in which labelled visual data with paired text descriptions are available in one domain (the "source"), but only unlabelled visual data (without text descriptions) are available in the domain of interest (the "target").

Inductive Bias · Retrieval · +1

Longitudinal Image Registration with Temporal-order and Subject-specificity Discrimination

no code implementations 29 Aug 2020 Qianye Yang, Yunguan Fu, Francesco Giganti, Nooshin Ghavami, Qingchao Chen, J. Alison Noble, Tom Vercauteren, Dean Barratt, Yipeng Hu

Morphological analysis of longitudinal MR images plays a key role in monitoring disease progression for prostate cancer patients, who are placed under an active surveillance program.

Image Registration · Morphological Analysis · +1

Re-Weighted Adversarial Adaptation Network for Unsupervised Domain Adaptation

no code implementations CVPR 2018 Qingchao Chen, Yang Liu, Zhaowen Wang, Ian Wassell, Kevin Chetty

In this paper, we propose the Re-weighted Adversarial Adaptation Network (RAAN) to reduce the feature distribution divergence and adapt the classifier when domain discrepancies are disparate.

Open-Ended Question Answering · Unsupervised Domain Adaptation
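The RAAN entry above reduces feature-distribution divergence adversarially. Purely as background, the sketch below shows a standard DANN-style domain-adversarial objective with a binary domain discriminator; RAAN's re-weighting of the adversarial game is not reproduced, and the `discriminator` module is an assumed placeholder.

```python
# Generic domain-adversarial alignment objective for unsupervised domain
# adaptation: the discriminator minimises this loss, while the feature
# extractor maximises it (e.g. via a gradient-reversal layer).
import torch
import torch.nn.functional as F


def domain_adversarial_loss(discriminator, source_feats, target_feats):
    """Binary domain-classification loss over source and target features."""
    src_logits = discriminator(source_feats)
    tgt_logits = discriminator(target_feats)
    src_labels = torch.ones_like(src_logits)    # domain label 1 = source
    tgt_labels = torch.zeros_like(tgt_logits)   # domain label 0 = target
    return (F.binary_cross_entropy_with_logits(src_logits, src_labels) +
            F.binary_cross_entropy_with_logits(tgt_logits, tgt_labels))
```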
