no code implementations • 24 May 2023 • Xiaojin Zhang, Wenjie Li, Kai Chen, Shutao Xia, Qiang Yang
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters, which facilitates the trade-off between privacy and utility.
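A minimal sketch of the parameter-distortion idea: add calibrated Gaussian noise to model parameters before sharing them, where the noise scale is the knob trading privacy for utility. The function name and the specific noise distribution are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def distort_parameters(params, noise_scale, rng=None):
    """Distort model parameters with Gaussian noise before release.

    A larger noise_scale gives stronger protection but lower utility;
    this is an illustrative mechanism, not the paper's framework.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    return {name: w + rng.normal(0.0, noise_scale, size=w.shape)
            for name, w in params.items()}

# toy parameters: the distorted copy differs from the original
params = {"layer1": np.ones((2, 3)), "bias": np.zeros(3)}
protected = distort_parameters(params, noise_scale=0.1)
```

Choosing `noise_scale` per layer (or per query budget) is where such a framework would optimize the privacy-utility trade-off.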
1 code implementation • 21 Nov 2022 • Yuting Wang, Jinpeng Wang, Bin Chen, Ziyun Zeng, Shutao Xia
To capture video semantic information for better hash learning, we adopt an encoder-decoder structure to reconstruct the video from its temporally masked frames.
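The temporal-masking step can be sketched as follows: hide a random subset of frames and keep a mask recording which ones were hidden, so a decoder can later be trained to reconstruct them. Shapes and the mask ratio are illustrative assumptions.

```python
import numpy as np

def mask_frames(video, mask_ratio=0.5, rng=None):
    """Zero out a random subset of frames (temporal masking).

    video: array of shape (T, H, W, C). Returns the masked video and a
    boolean mask marking the hidden frames; a reconstruction decoder
    would be trained to recover those frames. Illustrative sketch.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    n_frames = video.shape[0]
    n_masked = int(n_frames * mask_ratio)
    hidden = np.zeros(n_frames, dtype=bool)
    hidden[rng.choice(n_frames, size=n_masked, replace=False)] = True
    masked = video.copy()
    masked[hidden] = 0.0  # hidden frames become all-zero
    return masked, hidden

# toy video: 8 frames of 4x4 RGB, half of them masked
video = np.ones((8, 4, 4, 3))
masked, hidden = mask_frames(video, mask_ratio=0.5)
```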
no code implementations • CVPR 2022 • Ning Kang, Shanzhao Qiu, Shifeng Zhang, Zhenguo Li, Shutao Xia
Generative-model-based lossless image compression algorithms have seen great success in improving compression ratios.
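The connection between the generative model and the compression ratio can be made concrete: an entropy coder encodes a symbol x in roughly -log2 p(x) bits, so a model that assigns higher probability to the data yields a shorter code. The toy symbols and probabilities below are illustrative, not the paper's coder.

```python
import numpy as np

def ideal_code_length_bits(symbols, probs):
    """Ideal entropy-coded length under a probability model.

    Each symbol s costs about -log2 probs[s] bits, so a model that
    fits the data better compresses it better. Illustrative sketch.
    """
    p = np.array([probs[s] for s in symbols], dtype=float)
    return float(np.sum(-np.log2(p)))

# the same data under a uniform model vs. a model matched to it
data = [0, 0, 0, 1]
uniform = {0: 0.5, 1: 0.5}
matched = {0: 0.75, 1: 0.25}
bits_uniform = ideal_code_length_bits(data, uniform)
bits_matched = ideal_code_length_bits(data, matched)
```

The uniform model needs 4 bits for these four symbols; the matched model needs fewer, which is exactly why stronger generative models improve lossless compression.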
no code implementations • 7 Jul 2021 • Peidong Liu, Zibin He, Xiyu Yan, Yong Jiang, Shutao Xia, Feng Zheng, Maowei Hu
In this work, we propose WeClick, an effective weakly supervised video semantic segmentation pipeline based on click annotations, which saves laborious annotation effort by segmenting an instance of a semantic class with only a single click.
no code implementations • 7 Jun 2021 • Bowen Zhao, Chen Chen, Qi Ju, Shutao Xia
Training on class-imbalanced data usually results in biased models that tend to assign samples to the majority classes, a common and notorious problem.
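One standard way to counter this bias is to reweight the loss so minority classes contribute as much as majority ones, for example with inverse-frequency class weights. This is a common illustrative remedy, not necessarily the method of the paper above.

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class loss weights inversely proportional to class frequency.

    With these weights, the total weighted contribution of each class
    is equal, counteracting the majority-class bias. Illustrative.
    """
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)

# toy 9:1 imbalanced label set: the minority class gets a larger weight
labels = np.array([0] * 90 + [1] * 10)
w = inverse_frequency_weights(labels, n_classes=2)
```

Such weights are typically passed to the training loss (e.g. a weighted cross-entropy) so that misclassifying a minority-class sample is penalized more heavily.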
no code implementations • 28 Dec 2020 • Bowen Zhao, Chen Chen, Xi Xiao, Shutao Xia
Object detectors are typically learned on fully-annotated training data with fixed predefined categories.
1 code implementation • CVPR 2022 • Yan Feng, Baoyuan Wu, Yanbo Fan, Li Liu, Zhifeng Li, Shutao Xia
This work studies black-box adversarial attacks against deep neural networks (DNNs), where the attacker can only access the query feedback returned by the attacked DNN model, while other information, such as model parameters or the training dataset, is unknown.
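A minimal sketch of the query-feedback setting: a score-based attacker that never sees model internals, only a loss value per query, and keeps random perturbations that increase it. The toy loss function and the random-search strategy are illustrative assumptions, not the attack proposed in the paper.

```python
import numpy as np

def random_search_attack(x, query_loss, eps=0.1, steps=200, rng=None):
    """Score-based black-box attack sketch.

    The attacker only observes query_loss(candidate) (the query
    feedback) and greedily keeps perturbations that raise the loss,
    staying in the valid input range [0, 1]. Illustrative only.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    x_adv = x.copy()
    best = query_loss(x_adv)
    for _ in range(steps):
        candidate = np.clip(x_adv + rng.uniform(-eps, eps, size=x.shape), 0.0, 1.0)
        loss = query_loss(candidate)
        if loss > best:
            x_adv, best = candidate, loss
    return x_adv, best

# toy "model": the attacker's loss grows with distance from 0.5
x = np.full(4, 0.5)
loss_fn = lambda z: float(np.sum((z - 0.5) ** 2))
x_adv, final = random_search_attack(x, loss_fn)
```

Real score-based attacks replace the blind random search with smarter query strategies, since the number of queries to the victim model is the main cost.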