no code implementations • 29 Sep 2022 • Teru Nagamori, Hiroki Ito, AprilPyone MaungMaung, Hitoshi Kiya
In an experiment, the protected models not only allowed authorized users to obtain almost the same performance as non-protected models but also provided robustness against unauthorized access without a key.
no code implementations • 11 Jun 2022 • Hiroki Ito, AprilPyone MaungMaung, Sayaka Shiota, Hitoshi Kiya
In this paper, we propose, for the first time, an access control method with a secret key for semantic segmentation models, so that unauthorized users without the key cannot benefit from the performance of trained models.
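The listing gives only the abstract, but key-based access control in this line of work typically transforms input images with a secret key before training and inference. A minimal sketch of one such transform, key-based block-wise pixel shuffling (the function name, block size, and use of a seeded permutation are illustrative assumptions, not the papers' exact method):

```python
import numpy as np

def shuffle_blocks(img, key, block=4):
    """Hypothetical key-based block-wise pixel shuffling.

    Pixels inside every (block x block) patch are permuted with a
    permutation seeded by the secret key. A model trained on shuffled
    images performs well only for users who apply the same transform.
    """
    h, w, c = img.shape
    rng = np.random.default_rng(key)           # the secret key seeds the permutation
    perm = rng.permutation(block * block * c)  # one fixed permutation reused for all blocks
    out = img.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y + block, x:x + block].reshape(-1)
            out[y:y + block, x:x + block] = patch[perm].reshape(block, block, c)
    return out
```

The transform is deterministic per key, so authorized users reproduce it exactly at inference time, while inputs from users without the key are effectively out-of-distribution for the protected model.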
no code implementations • 1 Feb 2022 • Teru Nagamori, Hiroki Ito, April Pyone Maung Maung, Hitoshi Kiya
In this paper, the use of encrypted feature maps is shown to be effective in access control of object detection models for the first time.
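One simple way to realize an "encrypted feature map" inside a model is to permute the channels of an intermediate activation with a secret key; the sketch below illustrates that idea only (the papers' actual transform may differ, and the function names are assumptions):

```python
import numpy as np

def encrypt_feature_map(fmap, key):
    """Permute the channels of a (C, H, W) feature map with a secret key.

    If the model is trained on key-permuted feature maps, inference
    without the correct key leaves the channels misaligned and
    degrades detection performance.
    """
    rng = np.random.default_rng(key)
    perm = rng.permutation(fmap.shape[0])
    return fmap[perm]

def decrypt_feature_map(fmap, key):
    """Invert the key-derived channel permutation."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(fmap.shape[0])
    return fmap[np.argsort(perm)]  # inverse permutation restores channel order
```

Because the permutation is invertible, authorized use loses no information, while a wrong or missing key scrambles the channel order the downstream layers expect.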
no code implementations • 3 Sep 2021 • Hiroki Ito, MaungMaung AprilPyone, Hitoshi Kiya
In an experiment, the protected models were demonstrated not only to allow rightful users to obtain almost the same performance as non-protected models but also to be robust against access by unauthorized users without a key.
no code implementations • 20 Jul 2021 • Hiroki Ito, MaungMaung AprilPyone, Hitoshi Kiya
Since production-level trained deep neural networks (DNNs) have great business value, protecting such DNN models against copyright infringement and unauthorized access is in increasing demand.
no code implementations • 7 Aug 2020 • Hiroki Ito, Yuma Kinoshita, Hitoshi Kiya
We propose a transformation network for generating visually-protected images for privacy-preserving DNNs.
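The actual method learns a transformation network; as a hedged stand-in, the fixed key-seeded channel mixing below merely illustrates what "generating a visually-protected image" means: an invertible mapping that destroys visual recognizability while preserving the information a DNN needs (the function name and the orthogonal-mixing choice are assumptions for illustration):

```python
import numpy as np

def visually_protect(img, key):
    """Toy visual protection via a key-seeded random channel mixing.

    img: float array of shape (H, W, 3) with values in [0, 1].
    A random orthogonal 3x3 matrix mixes the color channels, so the
    mapping is invertible (no information is lost) yet the result no
    longer looks like the original scene.
    """
    rng = np.random.default_rng(key)
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # orthogonal mixing matrix
    mixed = img @ q.T
    # Rescale into [0, 1] so the output is still a valid image
    return (mixed - mixed.min()) / (mixed.max() - mixed.min() + 1e-8)
```

A learned network can go much further than this fixed mixing, shaping the protected images so that task accuracy is preserved while visual content is hidden.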