In recent years, there has been a growing trend to address Structure from Motion (SfM, in computer vision) and Simultaneous Localization and Mapping (SLAM, in robotics) from the perspective of pose averaging (also known as global SfM in computer vision) or Pose Graph Optimization (PGO, in robotics), where the camera motion is reconstructed from relative rigid-body transformations alone, rather than also including 3-D points (as in a full Bundle Adjustment).
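As a minimal illustration of the pose-averaging idea (a hypothetical 1-D rotation-averaging toy, not the method of any particular paper), absolute orientations can be recovered from noisy relative measurements by linear least squares, once one pose is fixed to remove the gauge freedom:

```python
import numpy as np

# Toy pose graph: edges (i, j, measured relative angle theta_j - theta_i).
# Values here are illustrative assumptions.
edges = [(0, 1, 0.5), (1, 2, 0.3), (0, 2, 0.8)]
n_poses = 3

# Build one linear equation per edge, plus one gauge constraint theta_0 = 0.
A = np.zeros((len(edges) + 1, n_poses))
b = np.zeros(len(edges) + 1)
for k, (i, j, rel) in enumerate(edges):
    A[k, j] = 1.0   # +theta_j
    A[k, i] = -1.0  # -theta_i
    b[k] = rel
A[-1, 0] = 1.0      # gauge: anchor the first pose at 0

# Least-squares estimate of the absolute orientations.
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
print(theta)  # → approximately [0.0, 0.5, 0.8]
```

In full SfM/SLAM the unknowns live on SE(3) and the problem is nonlinear, but the structure is the same: each relative measurement contributes one residual, and the absolute poses are estimated jointly.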
Neural networks are susceptible to data inference attacks such as the model inversion attack and the membership inference attack, where an attacker can reconstruct a data sample, or infer whether it belongs to the training set, from the confidence scores predicted by the target classifier.
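A minimal sketch of the membership-inference idea described above, using the common observation that classifiers tend to be more confident on training members than on unseen samples (the threshold and confidence vectors here are illustrative assumptions, not values from any paper):

```python
import numpy as np

def confidence_attack(scores, threshold=0.9):
    """Predict membership from confidence vectors: a sample whose maximum
    predicted class probability exceeds the threshold is flagged as a
    likely training-set member."""
    scores = np.asarray(scores)
    return np.max(scores, axis=1) >= threshold

# Hypothetical confidence vectors from a 3-class target classifier.
member_like = [[0.97, 0.02, 0.01]]     # sharply peaked -> likely member
nonmember_like = [[0.50, 0.30, 0.20]]  # diffuse -> likely non-member

print(confidence_attack(member_like))     # → [ True]
print(confidence_attack(nonmember_like))  # → [False]
```

Real attacks typically train a dedicated attack model on shadow classifiers rather than using a fixed threshold, but this thresholding baseline captures the core signal being exploited.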
This paper introduces a novel distributed method for detecting inter-map loop-closure outliers in simultaneous localization and mapping (SLAM).
In this paper, we show that distillation, a widely used model transformation technique, is a highly effective attack for removing watermarks embedded by existing algorithms.
In this work, we investigate the model inversion problem in adversarial settings, where the adversary aims to infer information about the target model's training and test data from the model's prediction values.