ID | Training Techniques | Architecture |
---|---|---|
hrnet_w18 | Nesterov Accelerated Gradient, Weight Decay | Batch Normalization, Convolution, ReLU, Residual Connection |
hrnet_w18_small | Nesterov Accelerated Gradient, Weight Decay | Batch Normalization, Convolution, ReLU, Residual Connection |
hrnet_w18_small_v2 | Nesterov Accelerated Gradient, Weight Decay | Batch Normalization, Convolution, ReLU, Residual Connection |
hrnet_w30 | Nesterov Accelerated Gradient, Weight Decay | Batch Normalization, Convolution, ReLU, Residual Connection |
hrnet_w32 | Nesterov Accelerated Gradient, Weight Decay | Batch Normalization, Convolution, ReLU, Residual Connection |
hrnet_w40 | Nesterov Accelerated Gradient, Weight Decay | Batch Normalization, Convolution, ReLU, Residual Connection |
hrnet_w44 | Nesterov Accelerated Gradient, Weight Decay | Batch Normalization, Convolution, ReLU, Residual Connection |
hrnet_w48 | Nesterov Accelerated Gradient, Weight Decay | Batch Normalization, Convolution, ReLU, Residual Connection |
hrnet_w64 | Nesterov Accelerated Gradient, Weight Decay | Batch Normalization, Convolution, ReLU, Residual Connection |
HRNet, or High-Resolution Net, is a general-purpose convolutional neural network for tasks such as semantic segmentation, object detection, and image classification. It maintains high-resolution representations throughout the whole process. The network starts from a high-resolution convolution stream, gradually adds high-to-low resolution convolution streams one by one, and connects the multi-resolution streams in parallel. The resulting network consists of several stages ($4$ in the paper), and the $n$th stage contains $n$ streams corresponding to $n$ resolutions. The authors conduct repeated multi-resolution fusions by exchanging information across the parallel streams over and over.
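The repeated multi-resolution fusion can be illustrated with a minimal, dependency-free sketch. Plain Python lists stand in for feature maps here, and nearest-neighbour upsampling / average pooling stand in for the learned resampling HRNet actually uses (strided convolutions down, bilinear upsampling plus 1x1 convolutions up); the function and variable names are illustrative, not part of any library.

```python
def downsample(fmap, factor):
    """Average-pool a 2D map by an integer factor (stand-in for strided conv)."""
    h, w = len(fmap), len(fmap[0])
    return [
        [
            sum(fmap[i * factor + di][j * factor + dj]
                for di in range(factor) for dj in range(factor)) / factor ** 2
            for j in range(w // factor)
        ]
        for i in range(h // factor)
    ]

def upsample(fmap, factor):
    """Nearest-neighbour upsample by an integer factor (stand-in for bilinear)."""
    return [
        [fmap[i // factor][j // factor] for j in range(len(fmap[0]) * factor)]
        for i in range(len(fmap) * factor)
    ]

def fuse(streams):
    """One fusion step: each stream receives the sum of every other stream
    resampled to its own resolution. streams[k] has spatial size
    (H / 2**k, W / 2**k), mirroring HRNet's parallel resolutions."""
    fused = []
    for k, target in enumerate(streams):
        acc = [row[:] for row in target]
        for j, other in enumerate(streams):
            if j == k:
                continue
            resampled = (upsample(other, 2 ** (j - k)) if j > k
                         else downsample(other, 2 ** (k - j)))
            for r in range(len(acc)):
                for c in range(len(acc[0])):
                    acc[r][c] += resampled[r][c]
        fused.append(acc)
    return fused

# Two parallel streams: a 4x4 high-resolution map and a 2x2 low-resolution map.
high = [[1.0] * 4 for _ in range(4)]
low = [[2.0] * 2 for _ in range(2)]
out_high, out_low = fuse([high, low])
```

After the fusion step, each stream keeps its own resolution but now carries information from the other: `out_high` stays 4x4 and `out_low` stays 2x2, with every value being the sum of the two (resampled) inputs.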
To load a pretrained model:
```python
import timm
m = timm.create_model('hrnet_w30', pretrained=True)
m.eval()
```
Replace the model name with the variant you want to use, e.g. `hrnet_w30`. You can find the IDs in the model summaries at the top of this page.
You can follow the timm recipe scripts for training a new model from scratch.
```bibtex
@misc{sun2019highresolution,
  title={High-Resolution Representations for Labeling Pixels and Regions},
  author={Ke Sun and Yang Zhao and Borui Jiang and Tianheng Cheng and Bin Xiao and Dong Liu and Yadong Mu and Xinggang Wang and Wenyu Liu and Jingdong Wang},
  year={2019},
  eprint={1904.04514},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
MODEL | TOP 1 ACCURACY | TOP 5 ACCURACY |
---|---|---|
hrnet_w64 | 79.46% | 94.65% |
hrnet_w48 | 79.32% | 94.51% |
hrnet_w40 | 78.93% | 94.48% |
hrnet_w44 | 78.89% | 94.37% |
hrnet_w32 | 78.45% | 94.19% |
hrnet_w30 | 78.21% | 94.22% |
hrnet_w18 | 76.76% | 93.44% |
hrnet_w18_small_v2 | 75.11% | 92.41% |
hrnet_w18_small | 72.34% | 90.68% |