2 code implementations • 9 Apr 2024 • Jianlang Chen, Xuhong Ren, Qing Guo, Felix Juefei-Xu, Di Lin, Wei Feng, Lei Ma, Jianjun Zhao
To achieve high accuracy on both clean and adversarial data, we propose building a spatial-temporal continuous representation using the semantic text guidance of the object of interest.
no code implementations • 19 Mar 2024 • Sensen Gao, Xiaojun Jia, Xuhong Ren, Ivor Tsang, Qing Guo
Vision-language pre-training (VLP) models exhibit remarkable capabilities in comprehending both images and text, yet they remain susceptible to multimodal adversarial examples (AEs).
no code implementations • 21 Sep 2022 • Xuhong Ren, Jianlang Chen, Felix Juefei-Xu, Wanli Xue, Qing Guo, Lei Ma, Jianjun Zhao, ShengYong Chen
Then, we propose a novel core-failure-set guided DARTS that embeds a K-center-greedy algorithm into DARTS to select suitable corrupted failure examples for refining the model architecture.
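The K-center-greedy selection mentioned above can be sketched as follows. This is a minimal, generic illustration of the standard K-center-greedy coreset routine (iteratively pick the example farthest from the current selected set), not the paper's implementation; the function name, the use of raw feature vectors, and Euclidean distance are all assumptions.

```python
import numpy as np

def k_center_greedy(features: np.ndarray, k: int, seed: int = 0) -> list[int]:
    """Greedily select k indices so the chosen points cover the feature
    space: each new pick is the point farthest from the selected set
    (a 2-approximation to the k-center problem).

    features: (n, d) array of per-example feature vectors (assumed input).
    """
    n = features.shape[0]
    rng = np.random.default_rng(seed)
    first = int(rng.integers(n))
    selected = [first]
    # Distance from every point to its nearest selected center so far.
    dists = np.linalg.norm(features - features[first], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))  # farthest point from current centers
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return selected
```

In a coreset setting like the one described, `features` would be embeddings of the corrupted failure examples, and the returned indices form the core failure set used to guide architecture refinement.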
no code implementations • 23 Apr 2021 • Ziyi Cheng, Xuhong Ren, Felix Juefei-Xu, Wanli Xue, Qing Guo, Lei Ma, Jianjun Zhao
Online updating of the object model via samples from historical frames is of great importance for accurate visual object tracking.