In this paper, we show that the efficiency of such attacks depends strongly on the scale at which they are applied, and that attacking at the optimal scale significantly improves efficiency.
We aim to bridge the gap between the two by investigating how to efficiently estimate gradients in a projected low-dimensional space.
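The paper's specific projection is not given here; as a minimal sketch of the general idea, assuming a fixed basis `proj` whose columns span the low-dimensional space and using central finite differences, a black-box gradient estimate restricted to that subspace could look like:

```python
import numpy as np

def projected_gradient_estimate(f, x, proj, sigma=1e-3):
    """Estimate grad f(x) with central finite differences taken only along
    the k columns of `proj` (a d x k basis of the low-dimensional space),
    then lift the estimate back to the full d-dimensional input space.
    This needs 2k queries to f instead of O(d) for a full estimate."""
    d, k = proj.shape
    grad_low = np.empty(k)
    for j in range(k):
        u = proj[:, j]                     # j-th low-dimensional direction
        grad_low[j] = (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma)
    return proj @ grad_low                 # estimate lifted to input space

# Illustrative check on a linear function, where the true gradient is known.
rng = np.random.default_rng(0)
d, k = 200, 5
proj = np.eye(d)[:, :k]                    # first k coordinate directions
w_true = rng.standard_normal(d)
f = lambda v: float(v @ w_true)            # gradient of f is w_true
est = projected_gradient_estimate(f, rng.standard_normal(d), proj)
# est recovers the components of w_true that lie in the subspace; the rest are 0.
```

The estimate is exactly the projection of the true gradient onto the chosen subspace, which is the trade-off such methods accept in exchange for far fewer queries.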
Yet there have been few studies on the adversarial robustness of multi-modal models that fuse LiDAR features with image features.
Such adversarial attacks can be achieved by adding a perturbation of small magnitude to the input to mislead the model's prediction.
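The sentence above describes the generic perturbation mechanism; the paper's own attack is not reproduced here. As a minimal sketch, one standard instantiation is the Fast Gradient Sign Method, shown on a hypothetical toy linear classifier:

```python
import numpy as np

def fgsm_perturb(x, loss_grad, epsilon):
    """Fast Gradient Sign Method: shift every input feature by at most
    epsilon in the direction that increases the loss, keeping the result
    inside the valid [0, 1] input range."""
    return np.clip(x + epsilon * np.sign(loss_grad), 0.0, 1.0)

# Toy linear classifier (illustrative only): predict class 1 iff w @ x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = -0.1
x = np.array([0.6, 0.1, 0.4])            # clean input, classified as class 1

# For logistic loss with true label 1, grad_x loss = -sigmoid(-score) * w,
# so its elementwise sign is just -sign(w) -- all FGSM needs.
loss_grad = -w
x_adv = fgsm_perturb(x, loss_grad, epsilon=0.3)

print(w @ x + b)        # positive: clean input correctly classified
print(w @ x_adv + b)    # negative: the small perturbation flips the prediction
```

Each feature moves by at most epsilon, so the perturbed input stays close to the original while crossing the decision boundary.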
To train the meta-model without knowledge of the attack strategy, we introduce a technique called jumbo learning, which samples a set of Trojaned models from a general distribution.
In this paper, we take link prediction, one of the most fundamental problems in graph analysis, as an example, and introduce a data poisoning attack against node embedding methods.