no code implementations • 13 Oct 2015 • Zhiguang Wang, Tim Oates, James Lo
We generalized a modified exponentialized estimator by pushing the robust-optimal (RO) index $\lambda$ to $-\infty$, achieving robustness to outliers by optimizing a quasi-minimin function.
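The exponentialized criterion behind this idea can be sketched as a log-mean-exp of per-sample errors, indexed by $\lambda$: as $\lambda \to -\infty$ it approaches the minimum error (the "minimin" limit, which downweights outliers), and as $\lambda \to +\infty$ it approaches the maximum. A minimal sketch, assuming this log-mean-exp form (the function name `nrae` is ours, not from the paper):

```python
import numpy as np

def nrae(errors, lam):
    """Exponentialized error criterion (hedged sketch).

    Computes (1/lam) * log( mean( exp(lam * e_i) ) ).
    As lam -> -inf this tends to min(e_i), a minimin criterion
    that is robust to outliers; as lam -> +inf it tends to max(e_i).
    """
    errors = np.asarray(errors, dtype=float)
    m = lam * errors
    c = m.max()  # shift for a numerically stable log-mean-exp
    return (c + np.log(np.mean(np.exp(m - c)))) / lam
```

With one large outlier error, a strongly negative $\lambda$ makes the criterion track the small (inlier) errors rather than the outlier.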
no code implementations • 8 Jun 2015 • Zhiguang Wang, Tim Oates, James Lo
This paper proposes a set of new error criteria and learning approaches, Adaptive Normalized Risk-Averting Training (ANRAT), to attack the non-convex optimization problem in training deep neural networks (DNNs).
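A toy illustration of training with such a risk-indexed criterion (this is our own sketch, not the paper's ANRAT algorithm, which additionally adapts $\lambda$ during training): the gradient of a log-mean-exp error is a softmax-weighted average of per-sample gradients, so a fixed negative $\lambda$ downweights outlier samples during a simple regression fit. All names (`fit`, `lam`) are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy 1-D regression with one gross outlier
x = np.linspace(0.0, 1.0, 20)
y = 3.0 * x + 0.01 * rng.standard_normal(20)
y[5] += 10.0  # inject an outlier

def fit(lam, steps=2000, lr=0.1):
    """Gradient descent on the log-mean-exp of squared errors."""
    w = 0.0
    for _ in range(steps):
        r = w * x - y
        e = r ** 2
        # softmax weights: the criterion's gradient is a
        # softmax(lam * e)-weighted average of per-sample gradients;
        # lam < 0 pushes weight away from large-error (outlier) samples
        z = lam * e
        p = np.exp(z - z.max())
        p /= p.sum()
        w -= lr * np.sum(p * 2.0 * r * x)
    return w

w_mse = fit(lam=1e-9)   # near-zero lam: ordinary mean-squared-error fit
w_rob = fit(lam=-0.5)   # negative lam: outlier is effectively ignored
```

Here `w_rob` recovers a slope near the true value 3, while `w_mse` is pulled away from it by the outlier.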