With the increasing use of machine-learning-driven algorithmic judgments, it is critical to develop models that are robust to evolving or manipulated inputs.
While autoregressive models excel at image compression, their sample quality is often lacking.
Building on this, we learn a defense transformer that counters adversarial examples by parameterizing affine transformations and exploiting the boundary information of DNNs.
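As a minimal illustration of what "parameterizing an affine transformation" can mean, the sketch below builds a 2x3 affine matrix from rotation, scale, and translation parameters and applies it to a point. The parameterization (`angle_deg`, `scale`, `tx`, `ty`) is a hypothetical choice for illustration, not the paper's actual defense transformer:

```python
import math

def affine_params(angle_deg, scale, tx, ty):
    """Build a 2x3 affine matrix [[a, b, tx], [c, d, ty]] from rotation,
    scale, and translation parameters (a hypothetical parameterization;
    a learned defense would predict these values per input)."""
    t = math.radians(angle_deg)
    a, b = scale * math.cos(t), -scale * math.sin(t)
    c, d = scale * math.sin(t), scale * math.cos(t)
    return [[a, b, tx], [c, d, ty]]

def apply_affine(m, x, y):
    """Map a point (x, y) through the affine matrix m."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

m = affine_params(90, 1.0, 0.0, 0.0)  # pure 90-degree rotation
print(apply_affine(m, 1.0, 0.0))      # approximately (0.0, 1.0)
```

In a learned defense, the same matrix would be applied to every pixel coordinate of an input image (e.g. via a grid-sampling operation), with the parameters chosen to move adversarial inputs back across the decision boundary.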
To defend against adversarial perturbations, an adversarially trained GAN (ATGAN) is proposed to improve the adversarial robustness generalization of state-of-the-art CNNs trained with adversarial training.
False data injection attack (FDIA) is a critical security issue in power system state estimation.
Finally, we propose an adversarial defense strategy that reduces the average fooling rate threefold, to 15.22%, against a single-policy attack, thereby increasing the robustness of the detection models; i.e., the proposed model can effectively detect (metamorphic) variants of malware.
Through neuron coverage and data imperceptibility, we use data-oriented metrics to measure the integrity of test examples; by delving into model structure and behavior, we exploit model-oriented metrics to further evaluate robustness in the adversarial setting.
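Neuron coverage, mentioned above as a data-oriented test metric, is commonly defined as the fraction of neurons activated above a threshold by at least one test input. The sketch below is a minimal, assumed version of that definition (the threshold value and activation format are illustrative, not taken from the paper):

```python
def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons whose activation exceeds `threshold` on at
    least one input. `activations` is a list of per-input vectors, one
    float per neuron (an assumed flattened representation)."""
    n_neurons = len(activations[0])
    covered = set()
    for vec in activations:
        for i, a in enumerate(vec):
            if a > threshold:
                covered.add(i)
    return len(covered) / n_neurons

# Toy example: 3 test inputs over a 4-neuron layer.
acts = [
    [0.9, 0.0, 0.0, 0.2],
    [0.0, 0.5, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.0],
]
print(neuron_coverage(acts))  # neurons 0, 1, 3 fire -> 0.75
```

A test suite that leaves coverage low is exercising only part of the network, which is why coverage is paired with imperceptibility metrics when judging the integrity of adversarial test examples.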
We are largely motivated by the search for a soft measure that sheds further light on the decision boundary's geometry.