no code implementations • 28 Feb 2024 • Deng Li, Aming Wu, YaoWei Wang, Yahong Han
In this paper, we propose a dynamic object-centric perception network based on prompt learning, aiming to adapt to the variations in image complexity.
1 code implementation • ICCV 2023 • Yukuan Min, Aming Wu, Cheng Deng
Then, we construct a class-balanced curriculum learning strategy that balances training across the different environments to mitigate predicate imbalance.
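A common ingredient of class-balanced training is reweighting by inverse class frequency. The sketch below illustrates that idea only; the counts and the weighting rule are illustrative assumptions, not the paper's exact curriculum strategy.

```python
import numpy as np

# Hypothetical predicate counts in a long-tailed scene-graph dataset
# (values are illustrative, not taken from the paper).
predicate_counts = np.array([5000.0, 800.0, 50.0, 10.0])

# Inverse-frequency weights, normalized to sum to the number of classes,
# so rare predicates receive proportionally larger sampling/loss weight.
weights = 1.0 / predicate_counts
weights = weights / weights.sum() * len(predicate_counts)
```

Rare predicates end up with the largest weights, counteracting the head classes that otherwise dominate the loss.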
no code implementations • CVPR 2023 • Aming Wu, Cheng Deng
To simulate this ability, a task of unsupervised out-of-distribution object detection (OOD-OD) is proposed to detect objects never seen during model training, which is beneficial for promoting the safe deployment of object detectors.
no code implementations • ICCV 2023 • Aming Wu, Da Chen, Cheng Deng
For this task, the challenge mainly lies in leveraging only the known in-distribution (ID) data to detect OOD objects accurately without degrading the detection of ID objects, which can be framed as a diffusion problem for deep feature synthesis.
1 code implementation • 26 Dec 2022 • Deng Li, Aming Wu, Yahong Han, Qi Tian
Considering the complexity and variability of real scene tasks, we propose a Prototype-guided Cross-task Knowledge Distillation (ProC-KD) approach to transfer the intrinsic local-level object knowledge of a large-scale teacher network to various task scenarios.
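One way to picture prototype-guided distillation is to summarize the teacher's local object features into a few prototypes and pull projected student features toward their assigned prototype. The shapes, the k-means summarization, and the fixed random projection below are illustrative assumptions, not the ProC-KD design itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative: 100 local object features, teacher dim 64, student dim 32.
teacher_feats = rng.standard_normal((100, 64))
student_feats = rng.standard_normal((100, 32))

def kmeans_prototypes(x, k=4, iters=10):
    # Plain k-means to condense teacher features into k prototypes.
    centers = x[:k].copy()
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = x[assign == j].mean(0)
    return centers, assign

protos, assign = kmeans_prototypes(teacher_feats)

# Distillation target: each projected student feature should match the
# teacher prototype of its cluster (projection is a learned layer in
# practice; a fixed random matrix here for illustration).
proj = rng.standard_normal((32, 64)) * 0.1
distill_loss = ((student_feats @ proj - protos[assign]) ** 2).mean()
```

Distilling prototypes rather than raw per-pixel features is what lets the object-level knowledge transfer across different downstream task scenarios.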
1 code implementation • CVPR 2022 • Muli Yang, Yuehua Zhu, Jiaping Yu, Aming Wu, Cheng Deng
In response to the explosively increasing demand for annotated data, Novel Class Discovery (NCD) has emerged as a promising alternative that automatically recognizes unknown classes without any annotation.
1 code implementation • CVPR 2022 • Aming Wu, Cheng Deng
Particularly, for the night-sunny scene, our method outperforms baselines by 3%, indicating that it is instrumental in enhancing generalization ability.
Ranked #3 on Robust Object Detection on DWD
1 code implementation • NeurIPS 2021 • Aming Wu, Suqi Zhao, Cheng Deng, Wei Liu
To alleviate the impact of scarce samples, it is important to enhance the generalization and discrimination abilities of detectors on novel objects.
1 code implementation • ICCV 2021 • Aming Wu, Rui Liu, Yahong Han, Linchao Zhu, Yi Yang
Secondly, domain-specific representations are introduced as the differences between the input and domain-invariant representations.
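The decomposition described above can be sketched in a few lines: normalize features to get a domain-invariant part, then take the residual as the domain-specific part. Using instance-style normalization as the invariant branch is an assumption for illustration, not necessarily the paper's exact operator.

```python
import numpy as np

rng = np.random.default_rng(0)

# An input feature matrix and a (hypothetical) domain-invariant version,
# here obtained by per-row standardization as a stand-in for the model's
# invariance branch.
x = rng.standard_normal((8, 16))
mu = x.mean(axis=1, keepdims=True)
sigma = x.std(axis=1, keepdims=True)
x_invariant = (x - mu) / (sigma + 1e-5)

# Domain-specific representation: the difference between the input and
# its domain-invariant representation.
x_specific = x - x_invariant

# Recombining the two parts recovers the input.
recon = x_invariant + x_specific
```

The residual definition guarantees the two branches jointly preserve all the input information, which is what makes the decomposition lossless.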
1 code implementation • 22 Jun 2021 • Zhipeng Wang, Hao Wang, Jiexi Yan, Aming Wu, Cheng Deng
Most existing methods regard ZS-SBIR as a traditional classification problem and employ a cross-entropy or triplet-based loss to achieve retrieval, neglecting both the domain gap between sketches and natural images and the large intra-class diversity of sketches.
1 code implementation • ICCV 2021 • Aming Wu, Yahong Han, Linchao Zhu, Yi Yang
Thus, we develop a new framework of few-shot object detection with universal prototypes (FSOD^{up}), which has the merit of feature generalization towards novel objects.
Ranked #23 on Few-Shot Object Detection on MS-COCO (10-shot)
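A generic way to condition object features on a learned prototype bank is soft attention: each query feature attends over the prototypes and is augmented with the aggregated context. The shapes and this particular attention scheme are assumptions for illustration, not the exact FSOD^{up} architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative: 5 object query features attending over 10 universal prototypes.
dim = 32
queries = rng.standard_normal((5, dim))
prototypes = rng.standard_normal((10, dim))

# Scaled dot-product attention weights over the prototype bank.
scores = queries @ prototypes.T / np.sqrt(dim)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn = attn / attn.sum(axis=1, keepdims=True)

# Enhanced query = original feature plus prototype-conditioned context,
# so class-agnostic prototype knowledge generalizes to novel objects.
enhanced = queries + attn @ prototypes
```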
no code implementations • IEEE Transactions on Circuits and Systems for Video Technology 2020 • Aming Wu, Yahong Han, Zhou Zhao, Yi Yang
In this article, we devise a novel memory decoder for visual narrating.
Ranked #13 on Visual Storytelling on VIST
no code implementations • 27 Feb 2020 • Aming Wu, Yahong Han
Instead of the common practice, i.e., sequence decoding with an RNN, in this paper, we devise a novel memory decoder for video captioning.
1 code implementation • NeurIPS 2019 • Aming Wu, Linchao Zhu, Yahong Han, Yi Yang
Inspired by this idea, towards VCR, we propose a connective cognition network (CCN) to dynamically reorganize the visual neuron connectivity that is contextualized by the meaning of questions and answers.
no code implementations • 20 Nov 2019 • Aming Wu, Yahong Han, Linchao Zhu, Yi Yang
Most state-of-the-art methods of object detection suffer from poor generalization ability when the training and test data come from different domains, e.g., with different styles.