Is It Time to Redefine the Classification Task for Deep Learning Systems?

ICML Workshop AML 2021 · Keji Han, Yun Li, Songcan Chen

Many works have demonstrated that deep neural networks (DNNs) are vulnerable to adversarial examples. A deep learning system comprises several elements: the learning task, the data set, the deep model, the loss, and the optimizer. Any of these elements can introduce vulnerability, so attributing a system's vulnerability solely to the deep model may hinder efforts to defend against adversarial attacks. We therefore redefine the robustness of DNNs as the robustness of the deep learning system as a whole, and we find experimentally that the vulnerability of a deep learning system is also rooted in the learning task itself. Concretely, this paper defines the interval-label classification task, in which each class label is a predefined non-overlapping interval rather than a fixed value (hard label) or a probability vector (soft label). Experimental results demonstrate that the interval-label classification task is more robust than the traditional classification task while retaining accuracy.
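The abstract gives no implementation details, but one plausible reading of the interval-label idea is sketched below in PyTorch: each class is mapped to a predefined non-overlapping interval on a scalar output axis, the loss is zero whenever the output falls inside the target interval, and prediction assigns an output to the nearest interval. The interval layout and the names `IntervalHingeLoss` and `predict` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Hypothetical interval layout (an assumption, not from the paper):
# class k is assigned the interval [k*WIDTH, k*WIDTH + WIDTH/2),
# with a gap of WIDTH/2 between consecutive intervals so they never overlap.
NUM_CLASSES = 10
WIDTH = 2.0
LOW = torch.arange(NUM_CLASSES) * WIDTH  # lower bound of each class interval
HIGH = LOW + WIDTH / 2                   # upper bound of each class interval

class IntervalHingeLoss(nn.Module):
    """Zero loss when the scalar output lies inside the target interval,
    linear penalty for the distance to the nearest boundary otherwise."""
    def forward(self, output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        lo, hi = LOW[target], HIGH[target]
        below = torch.clamp(lo - output, min=0.0)  # penalty if output < lower bound
        above = torch.clamp(output - hi, min=0.0)  # penalty if output > upper bound
        return (below + above).mean()

def predict(output: torch.Tensor) -> torch.Tensor:
    """Map each scalar output to the class whose interval midpoint is closest."""
    mid = (LOW + HIGH) / 2
    return torch.argmin((output.unsqueeze(1) - mid.unsqueeze(0)).abs(), dim=1)

# Usage sketch: a model with one scalar output per example.
outputs = torch.tensor([0.4, 4.2, 18.9])   # batch of 3 scalar outputs
targets = torch.tensor([0, 2, 9])          # ground-truth classes
loss = IntervalHingeLoss()(outputs, targets)
preds = predict(outputs)
```

Under this reading, the gaps between intervals are what make the labels non-overlapping as the abstract requires; any output landing inside an interval is unambiguously classified, which is one way the task itself could absorb small adversarial perturbations.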
