no code implementations • 20 Nov 2022 • Songhao Jiang, Yan Chu, Tianxing Ma, Tianning Zang
Compared with classical models and generalization-improvement methods such as Dropout, Mixup, Cutout, and CutMix, Feature Weaken shows good compatibility and strong performance.
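The abstract snippet names Mixup among the generalization baselines. As a reference point for that comparison, here is a minimal sketch of standard Mixup augmentation (convexly combining two samples and their labels with a Beta-distributed weight); this illustrates the baseline technique only, not the Feature Weaken method itself:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard Mixup: return a convex combination of two samples
    and their labels, with mixing weight lam ~ Beta(alpha, alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

Because the same weight `lam` mixes both inputs and labels, the augmented target stays consistent with the augmented sample.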
1 code implementation • 25 Nov 2020 • Alexander Partin, Thomas Brettin, Yvonne A. Evrard, Yitan Zhu, Hyunseung Yoo, Fangfang Xia, Songhao Jiang, Austin Clyde, Maulik Shukla, Michael Fonstein, James H. Doroshow, Rick Stevens
In contrast, a GBDT with hyperparameter tuning outperforms both NNs at the lower range of training sizes for two of the datasets, whereas the mNN performs better at the higher range of training sizes.
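The finding above comes from comparing model performance across increasing training-set sizes (a learning-curve analysis). A minimal sketch of that kind of sweep, using a scikit-learn GBDT on synthetic regression data (the data, sizes, and metric here are illustrative assumptions, not the paper's drug-response pipeline):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

def learning_curve_scores(train_sizes, seed=0):
    """Fit a GBDT at increasing training-set sizes and record
    held-out R^2, the shape of a simple learning-curve study."""
    X, y = make_regression(n_samples=2000, n_features=20,
                           noise=10.0, random_state=seed)
    X_test, y_test = X[1500:], y[1500:]  # fixed held-out split
    scores = []
    for n in train_sizes:
        model = GradientBoostingRegressor(random_state=seed)
        model.fit(X[:n], y[:n])
        scores.append(r2_score(y_test, model.predict(X_test)))
    return scores
```

Repeating such a sweep for each model family (GBDT, single-network NN, multi-input mNN) is what lets one say which model wins at small versus large training sizes.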