no code implementations • 26 Jan 2022 • Sinuo Deng, Lifang Wu, Ge Shi, Lehao Xing, Meng Jian, Ye Xiang
We first introduce a prompt tuning method that mimics the pretraining objective of CLIP and can therefore leverage the rich image and text semantics embedded in CLIP.
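The entry describes tuning only a prompt while reusing CLIP's contrastive pretraining objective over frozen encoders. A minimal sketch of that idea, assuming hypothetical feature shapes and a single learnable context vector added to each class's text feature (the function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def clip_style_prompt_loss(img_feats, class_feats, prompt_ctx, labels, tau=0.07):
    """CLIP-style symmetric-softmax classification loss where only
    `prompt_ctx` (a learnable context vector injected into each class
    text feature) would be optimized; the image and text encoders that
    produced `img_feats` and `class_feats` stay frozen."""
    text_feats = class_feats + prompt_ctx          # inject learnable prompt context
    # L2-normalize features before the dot product, as CLIP does
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    logits = img @ txt.T / tau                     # cosine similarity / temperature
    probs = softmax(logits, axis=1)
    # cross-entropy against each image's true class
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

# Toy usage with random frozen features
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(4, 8))    # 4 images, 8-dim features
class_feats = rng.normal(size=(3, 8))  # 3 classes
prompt_ctx = np.zeros(8)               # learnable prompt, initialized to zero
labels = np.array([0, 1, 2, 0])
loss = clip_style_prompt_loss(img_feats, class_feats, prompt_ctx, labels)
```

Gradients with respect to `prompt_ctx` alone would then drive tuning, keeping the pretraining objective's form intact.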
no code implementations • ICCV 2019 • Ye Xiang, Ying Fu, Pan Ji, Hua Huang
The discriminator combines real examples from the new data with pseudo-examples generated from the old data distribution, learning representations for both old and new classes.
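The entry describes feeding the discriminator a mix of real new-class samples and generator-produced pseudo-samples for old classes. A minimal sketch of how such a mixed batch might be assembled, assuming a hypothetical `old_generator` callable that synthesizes features for requested old-class labels (names and shapes are illustrative, not the paper's API):

```python
import numpy as np

def mixed_replay_batch(new_x, new_y, old_generator, n_old, num_old_classes, rng):
    """Hypothetical helper: merge real new-class samples with
    pseudo-examples drawn from a generator trained on the old data
    distribution, so a downstream discriminator sees both old and
    new classes in one batch."""
    old_y = rng.integers(0, num_old_classes, size=n_old)
    old_x = old_generator(old_y)           # pseudo-examples for old classes
    x = np.concatenate([new_x, old_x])
    y = np.concatenate([new_y, old_y])
    idx = rng.permutation(len(y))          # shuffle real and pseudo together
    return x[idx], y[idx]

# Toy usage: a stand-in generator that emits zero-vectors per label
rng = np.random.default_rng(1)
fake_generator = lambda ys: np.zeros((len(ys), 16))
new_x = rng.normal(size=(6, 16))           # real samples from new classes
new_y = rng.integers(5, 8, size=6)         # new-class labels 5..7
x, y = mixed_replay_batch(new_x, new_y, fake_generator, n_old=4,
                          num_old_classes=5, rng=rng)
```

Training the discriminator on such batches lets it retain old-class representations without storing the old data itself.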