FairNAS: Rethinking Evaluation Fairness of Weight Sharing Neural Architecture Search

ICCV 2021 · Xiangxiang Chu, Bo Zhang, Ruijun Xu

One of the most critical problems in weight-sharing neural architecture search is the evaluation of candidate models within a predefined search space. In practice, a one-shot supernet is trained to serve as an evaluator. A faithful ranking certainly leads to more accurate search results. However, current methods are prone to misjudgments. In this paper, we prove that their biased evaluation stems from inherent unfairness in supernet training. In view of this, we propose two levels of constraints: expectation fairness and strict fairness. In particular, strict fairness ensures equal optimization opportunities for all choice blocks throughout training, so that no block's capacity is overestimated or underestimated. We demonstrate that this is crucial for improving the confidence of model rankings. Incorporating the one-shot supernet trained under the proposed fairness constraints with a multi-objective evolutionary search algorithm, we obtain various state-of-the-art models; e.g., FairNAS-A attains 77.5% top-1 validation accuracy on ImageNet. The models and their evaluation code are publicly available at http://github.com/fairnas/FairNAS .
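The strict-fairness rule summarized above can be illustrated with a short training-step sketch: per layer, a random permutation of the choice blocks is drawn, m single-path models are assembled from these permutations, gradients are accumulated over all m paths on the same mini-batch, and a single update follows, so every choice block is optimized exactly once per step. The snippet below is a minimal PyTorch-style sketch under these assumptions; `Supernet`, its `forward(images, path)` signature, and names such as `num_layers` and `num_choices` are illustrative and not taken from the released code.

```python
import torch

def strict_fairness_step(supernet, optimizer, criterion, images, labels,
                         num_layers, num_choices):
    """One supernet training step under strict fairness (illustrative sketch).

    Each layer gets its own random permutation of its choice blocks. The i-th
    single-path model takes the i-th entry of every layer's permutation, so
    after num_choices backward passes every block has received exactly one
    gradient contribution on this mini-batch.
    """
    # One independent permutation of block indices per layer.
    perms = [torch.randperm(num_choices) for _ in range(num_layers)]

    optimizer.zero_grad()
    for i in range(num_choices):
        # Assemble the i-th single-path model: one block per layer.
        path = [int(perms[layer][i]) for layer in range(num_layers)]
        logits = supernet(images, path)   # hypothetical single-path forward
        loss = criterion(logits, labels)
        loss.backward()                   # accumulate gradients across paths
    optimizer.step()                      # single update after all paths
```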


Results from the Paper


Ranked #3 on Neural Architecture Search on CIFAR-10 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Neural Architecture Search | CIFAR-10 | FairNAS-A | Top-1 Error Rate | 1.8% | #3 |
| Neural Architecture Search | CIFAR-10 | FairNAS-A | Search Time (GPU days) | 8 | #27 |
| Neural Architecture Search | CIFAR-10 | FairNAS-A | Parameters | 3 | #1 |
| Neural Architecture Search | CIFAR-10 | FairNAS-A | FLOPS | 391 | #1 |
| Image Classification | ImageNet | FairNAS-C | Top-1 Accuracy | 74.69% | #901 |
| Image Classification | ImageNet | FairNAS-C | Number of params | 4.4M | #389 |
| Image Classification | ImageNet | FairNAS-C | GFLOPs | 0.642 | #77 |
| Image Classification | ImageNet | FairNAS-B | Top-1 Accuracy | 75.10% | #885 |
| Image Classification | ImageNet | FairNAS-B | Number of params | 4.5M | #390 |
| Image Classification | ImageNet | FairNAS-B | GFLOPs | 0.690 | #82 |
| Image Classification | ImageNet | FairNAS-A | Top-1 Accuracy | 75.34% | #879 |
| Image Classification | ImageNet | FairNAS-A | Number of params | 4.6M | #391 |
| Image Classification | ImageNet | FairNAS-A | GFLOPs | 0.776 | #92 |
| Neural Architecture Search | ImageNet | FairNAS-B | Top-1 Error Rate | 24.9 | #113 |
| Neural Architecture Search | ImageNet | FairNAS-B | Accuracy | 75.1 | #90 |
| Neural Architecture Search | ImageNet | FairNAS-B | Params | 4.5M | #50 |
| Neural Architecture Search | ImageNet | FairNAS-B | MACs | 345M | #104 |
| Neural Architecture Search | ImageNet | FairNAS-C | Top-1 Error Rate | 25.4 | #119 |
| Neural Architecture Search | ImageNet | FairNAS-C | Accuracy | 74.69 | #95 |
| Neural Architecture Search | ImageNet | FairNAS-C | Params | 4.4M | #52 |
| Neural Architecture Search | ImageNet | FairNAS-C | MACs | 321M | #94 |
| Neural Architecture Search | ImageNet | FairNAS-A | Top-1 Error Rate | 24.7 | #110 |
| Neural Architecture Search | ImageNet | FairNAS-A | Accuracy | 75.34 | #87 |
| Neural Architecture Search | ImageNet | FairNAS-A | Params | 4.6M | #49 |
| Neural Architecture Search | ImageNet | FairNAS-A | MACs | 388M | #110 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | FairNAS | Accuracy (Test) | 93.23 | #24 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | FairNAS | Accuracy (Val) | 90.07 | #20 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-10 | FairNAS | Search time (s) | 9845 | #8 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | FairNAS | Accuracy (Test) | 71.00 | #23 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | FairNAS | Accuracy (Val) | 70.94 | #21 |
| Neural Architecture Search | NAS-Bench-201, CIFAR-100 | FairNAS | Search time (s) | 9845 | #8 |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | FairNAS | Accuracy (Test) | 42.19 | #30 |
| Neural Architecture Search | NAS-Bench-201, ImageNet-16-120 | FairNAS | Search time (s) | 9845 | #10 |

Results from Other Papers


| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| Neural Architecture Search | NATS-Bench Topology, CIFAR-10 | FairNAS (Chu et al., 2021) | Test Accuracy | 93.23 | #7 |
| Neural Architecture Search | NATS-Bench Topology, CIFAR-100 | FairNAS (Chu et al., 2021) | Test Accuracy | 71.00 | #7 |
| Neural Architecture Search | NATS-Bench Topology, ImageNet16-120 | FairNAS (Chu et al., 2021) | Test Accuracy | 42.19 | #7 |
