Search Results for author: Ziqi Yang

Found 5 papers, 0 papers with code

Rotational Outlier Identification in Pose Graphs Using Dual Decomposition

no code implementations ECCV 2020 Arman Karimian, Ziqi Yang, Roberto Tron

In the last few years, there has been an increasing trend to consider Structure from Motion (SfM, in computer vision) and Simultaneous Localization and Mapping (SLAM, in robotics) from the point of view of pose averaging (also known as global SfM, in computer vision) or Pose Graph Optimization (PGO, in robotics), where the motion of the camera is reconstructed from relative rigid-body transformations alone, rather than also including 3-D points as in a full Bundle Adjustment.
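The core idea in the abstract, reconstructing camera motion by composing relative rigid-body transformations, can be illustrated with a toy sketch. This uses SE(2) for brevity and simple dead-reckoning chaining; the function names are illustrative, and this is not the paper's dual-decomposition method:

```python
import numpy as np

def se2(theta, x, y):
    """Homogeneous 3x3 matrix for a 2-D rigid-body transformation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def chain_poses(relative_transforms):
    """Compose relative transforms into absolute poses (first pose = identity)."""
    poses = [np.eye(3)]
    for T in relative_transforms:
        poses.append(poses[-1] @ T)
    return poses

# Two unit steps forward, then a 90-degree turn in place
rels = [se2(0.0, 1.0, 0.0), se2(0.0, 1.0, 0.0), se2(np.pi / 2, 0.0, 0.0)]
poses = chain_poses(rels)
print(np.round(poses[-1][:2, 2], 3))  # final position
```

Pose averaging / PGO then refines such chained estimates globally using all relative measurements, including loop closures, rather than trusting a single chain.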

Outlier Detection · Simultaneous Localization and Mapping +1

Defending Model Inversion and Membership Inference Attacks via Prediction Purification

no code implementations 8 May 2020 Ziqi Yang, Bin Shao, Bohan Xuan, Ee-Chien Chang, Fan Zhang

Neural networks are susceptible to data inference attacks such as the model inversion attack and the membership inference attack, where the attacker can reconstruct a data sample or infer its membership in the training set from the confidence scores predicted by the target classifier.
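As a rough illustration of a membership inference attack from confidence scores, the sketch below uses a simple confidence-threshold baseline; the function names and the 0.9 threshold are illustrative assumptions, and this is not the paper's purification defense:

```python
import numpy as np

def membership_score(confidences):
    """Simple membership signal: the top predicted confidence per sample."""
    return np.max(confidences, axis=1)

def infer_membership(confidences, threshold=0.9):
    """Flag samples whose top confidence exceeds the threshold as likely members."""
    return membership_score(confidences) > threshold

# Training members often receive sharper (more confident) predictions than non-members
member = np.array([[0.97, 0.02, 0.01]])
non_member = np.array([[0.40, 0.35, 0.25]])
print(infer_membership(member)[0], infer_membership(non_member)[0])  # True False
```

A purification defense would post-process the confidence vectors so that this member/non-member gap shrinks while the predicted label stays useful.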

Inference Attack · Membership Inference Attack

Statistical Outlier Identification in Multi-robot Visual SLAM using Expectation Maximization

no code implementations 7 Feb 2020 Arman Karimian, Ziqi Yang, Roberto Tron

This paper introduces a novel and distributed method for detecting inter-map loop closure outliers in simultaneous localization and mapping (SLAM).
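A minimal sketch of the expectation-maximization idea in one dimension, assuming a Gaussian-inlier / uniform-outlier mixture over loop-closure residuals; the function name, noise model, and constants are illustrative, not the paper's distributed multi-robot formulation:

```python
import numpy as np

def em_inlier_weights(residuals, sigma=1.0, outlier_density=0.01, n_iter=20, prior=0.5):
    """EM on a Gaussian-inlier / uniform-outlier mixture over residuals.

    Returns the posterior probability that each residual is an inlier.
    """
    r = np.asarray(residuals, dtype=float)
    mix = prior  # mixing weight of the inlier component
    for _ in range(n_iter):
        # E-step: responsibility of the inlier component for each residual
        inlier_lik = np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        w = mix * inlier_lik / (mix * inlier_lik + (1 - mix) * outlier_density)
        # M-step: update the mixing weight from the responsibilities
        mix = w.mean()
    return w

res = [0.1, -0.2, 0.05, 8.0]  # the last residual looks like an outlier
w = em_inlier_weights(res)
print(np.round(w, 2))
```

Loop closures with near-zero inlier weight would then be down-weighted or discarded before the SLAM optimization.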

Outlier Detection · Simultaneous Localization and Mapping

Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking

no code implementations 14 Jun 2019 Ziqi Yang, Hung Dang, Ee-Chien Chang

In this paper, we show that distillation, a widely used transformation technique, is a quite effective attack for removing watermarks embedded by existing algorithms.
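For context, distillation trains a student model to match the teacher's temperature-softened output distribution, which can wash out watermark-specific behavior. A minimal sketch of the standard distillation loss; the names and the temperature T=4 are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between softened teacher and student distributions."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(-(p_teacher * np.log(p_student + 1e-12)).sum(axis=-1).mean())

teacher = np.array([[4.0, 1.0, 0.5]])
matched = distillation_loss(teacher, teacher)      # student matches the teacher
mismatched = distillation_loss(-teacher, teacher)  # student disagrees
print(matched < mismatched)  # True
```

The student only ever sees the teacher's soft predictions, not its weights, so watermarks embedded in the teacher's parameters need not transfer.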

Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment

no code implementations 22 Feb 2019 Ziqi Yang, Ee-Chien Chang, Zhenkai Liang

In this work, we investigate the model inversion problem in adversarial settings, where the adversary aims to infer information about the target model's training and test data from the model's prediction values.
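A toy sketch of this kind of inversion on a linear softmax classifier, using gradient ascent on the target class's log-confidence; the model, function names, and step size are illustrative assumptions, not the paper's auxiliary-knowledge-alignment method:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def invert_class(W, b, target, steps=500, lr=0.1):
    """Gradient-ascent inversion on a linear softmax classifier:
    find an input that maximizes the confidence of the target class."""
    x = np.zeros(W.shape[1])
    onehot = np.eye(len(b))[target]
    for _ in range(steps):
        p = softmax(W @ x + b)
        # gradient of log p[target] with respect to x
        x += lr * W.T @ (onehot - p)
    return x

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
b = np.zeros(3)
x_rec = invert_class(W, b, target=1)
print(np.argmax(softmax(W @ x_rec + b)))  # class with highest confidence
```

Against a deep network the same idea needs regularization or learned priors to produce recognizable samples, which is where auxiliary knowledge enters.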
