no code implementations • 26 Mar 2024 • Yifan Yan, Ruomin He, Zhenghua Liu
MUTE-SLAM effectively tracks camera positions and incrementally builds a scalable multi-map representation for both small and large indoor environments.
no code implementations • 17 Mar 2023 • Yifan Yan, Xudong Pan, Mi Zhang, Min Yang
Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations.
no code implementations • 29 Jun 2022 • Xudong Pan, Yifan Yan, Shengyao Zhang, Mi Zhang, Min Yang
In this paper, we present a novel insider attack called Matryoshka, which employs an irrelevant scheduled-to-publish DNN model as a carrier model for covert transmission of multiple secret models which memorize the functionality of private ML data stored in local data centers.
no code implementations • 30 Apr 2022 • Yifan Yan, Xudong Pan, Yining Wang, Mi Zhang, Min Yang
On 9 state-of-the-art white-box watermarking schemes and a broad set of industry-level DNN architectures, our attack is the first to reduce the embedded identity message in the protected models to near-random noise.
no code implementations • 26 Oct 2020 • Xudong Pan, Mi Zhang, Yifan Yan, Jiaming Zhu, Min Yang
Among existing privacy attacks on the gradients of neural networks, the data reconstruction attack, which reverse-engineers the training batch from the gradient, poses a severe threat to private training data.
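The core leakage that data reconstruction attacks exploit can be illustrated on a single linear layer: for one training sample, the weight gradient is the outer product of the output gradient and the input, so the input can be read off directly. The sketch below is a minimal, hypothetical illustration of this principle (not the paper's attack, which targets full networks and batches); all names and the squared-loss setup are assumptions for the demo.

```python
import numpy as np

# Minimal sketch: for a single-sample linear layer y = W x + b,
# the gradients satisfy dL/dW = (dL/dy) x^T and dL/db = dL/dy,
# so any row of dL/dW divided by the matching entry of dL/db
# recovers the private input x exactly.

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # private training input
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

y = W @ x + b
target = rng.normal(size=3)
g_y = 2 * (y - target)            # gradient of squared loss w.r.t. y

g_W = np.outer(g_y, x)            # dL/dW: what a gradient-sharing party observes
g_b = g_y                         # dL/db

i = np.argmax(np.abs(g_b))        # pick a row with a nonzero bias gradient
x_rec = g_W[i] / g_b[i]           # reconstructed input

print(np.allclose(x_rec, x))
```

For deeper networks and larger batches the recovery is no longer closed-form, which is why practical attacks instead optimize a dummy batch whose gradient matches the observed one.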