The task of privacy-preserving model upgrades in image retrieval aims to reap the benefits of rapidly evolving new models without accessing the raw gallery images.
To train and evaluate our approach, we construct a dataset named TextLogo3K, consisting of about 3,500 text logo images and their pixel-level annotations.
The task of backward-compatible representation learning is therefore introduced to support backfill-free model upgrades, where the new query features are interoperable with the old gallery features.
Graph Neural Networks (GNNs) have achieved great success in various tasks, but their performance relies heavily on a large number of labeled nodes, which typically require considerable human effort to obtain.
In contrast, hot-refresh model upgrades deploy the new model immediately and then gradually improve the retrieval accuracy by backfilling the gallery on-the-fly.
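As a hedged illustration of the hot-refresh idea (not the paper's actual system), the sketch below uses random projections as stand-ins for the old and new encoders: the index is served immediately with new-model queries while gallery features are backfilled chunk by chunk, so old and new features coexist during the transition. All names here are illustrative.

```python
import numpy as np

def cosine_top1(query, gallery):
    """Index of the gallery feature most similar to the query (cosine)."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return int(np.argmax(g @ q))

# Hypothetical "old" and "new" encoders: random projections stand in for models.
rng = np.random.default_rng(0)
W_old = rng.normal(size=(16, 8))
W_new = rng.normal(size=(16, 8))

images = rng.normal(size=(100, 16))   # stand-in for raw gallery images
gallery = images @ W_old              # gallery features extracted once, offline

# Hot-refresh: queries are served with the new model immediately, while a
# background job re-extracts (backfills) gallery features chunk by chunk,
# so old and new features coexist in the index until backfill completes.
for start in range(0, len(images), 20):
    gallery[start:start + 20] = images[start:start + 20] @ W_new

query = images[3] @ W_new             # a query embedded by the new model
print(cosine_top1(query, gallery))    # -> 3 once item 3 has been backfilled
```

Before item 3 is backfilled, the same query is matched against old-model features, which is exactly where compatible representations are needed.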
Moreover, we employ a joint optimization that combines pretext tasks with contrastive learning to further enhance spatio-temporal representation learning.
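The sentence above does not specify the contrastive objective; such losses are commonly instantiated as InfoNCE over matched pairs of augmented views. A minimal NumPy sketch, with the function name and temperature chosen for illustration:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """Minimal InfoNCE loss: row i of z1 and row i of z2 are a positive
    pair; all other cross-pairs in the batch act as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                       # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # -log p(positive | batch)
```

With perfectly aligned views (`z1 == z2`) the loss is near zero; with mismatched pairs it grows, which is the gradient signal that pulls positives together and pushes negatives apart.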
As a fundamental task in computer vision, image similarity retrieval faces the challenges of large-scale data and image copy attacks.
Message passing is the core of most graph models, such as the Graph Convolutional Network (GCN) and Label Propagation (LP), which usually require a large amount of clean labeled data to smooth out the neighborhood over the graph.
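To make the smoothing intuition concrete, here is one round of GCN-style mean aggregation on a toy path graph; this is a generic sketch of message passing, not any specific library's API, and the graph is illustrative.

```python
import numpy as np

def message_pass(features, adj):
    """One round of message passing: each node averages its own feature
    with those of its neighbors (GCN-style mean aggregation, no weights)."""
    adj_hat = adj + np.eye(adj.shape[0])       # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)   # degree of each node
    return (adj_hat @ features) / deg          # degree-normalized average

# Toy path graph 0-1-2-3, with a "label" signal only on node 0.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
features = np.array([[1.0], [0.0], [0.0], [0.0]])

smoothed = message_pass(features, adj)
# Node 0 keeps half its signal, node 1 receives a third, node 3 nothing yet;
# repeated rounds spread the signal further, which is why noisy labels
# propagate errors across the neighborhood as well.
```

Stacking several such rounds (with learned weight matrices in between) recovers the basic GCN layer structure.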
Data selection methods, such as active learning and core-set selection, are useful tools for improving the data efficiency of deep learning models on large-scale datasets.
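One common core-set heuristic is greedy k-center selection: seed with an arbitrary point, then repeatedly add the point farthest from the current selection so the chosen subset covers the dataset. A minimal sketch under these assumptions (names are illustrative):

```python
import numpy as np

def kcenter_greedy(X, k):
    """Greedy k-center core-set selection: start from one point and
    repeatedly add the point farthest from the selected set."""
    selected = [0]                                  # arbitrary seed point
    dist = np.linalg.norm(X - X[0], axis=1)         # distance to nearest center
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                  # farthest uncovered point
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return selected

# Two well-separated clusters: the second pick jumps to the far cluster,
# so even a tiny budget covers both modes of the data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)),
               rng.normal(10, 0.1, (50, 2))])
picks = kcenter_greedy(X, 4)
```

Labeling only the selected points (or training on them alone) is the sense in which such methods improve data efficiency.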