In this paper, we propose a novel Decomposable Winograd Method (DWM), which breaks through the limitation of the original Winograd minimal filtering algorithm and extends it to a wide range of general convolutions.
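For background, the minimal filtering algorithm that DWM builds on is usually stated for small cases such as F(2,3), which computes two outputs of a 3-tap convolution with four multiplications instead of six. The sketch below is a generic NumPy rendering of that classical F(2,3) formulation (the transform matrices are the standard ones from the literature, not code from this paper).

```python
import numpy as np

# Standard 1-D Winograd F(2,3) transforms; classical matrices, not from the paper.
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of a 3-tap convolution using only 4 multiplications."""
    return A_T @ ((G @ g) * (B_T @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])          # input tile of length 4
g = np.array([0.5, 1.0, -1.0])              # 3-tap filter
direct = np.array([d[0:3] @ g, d[1:4] @ g]) # reference sliding-window result
assert np.allclose(winograd_f23(d, g), direct)
```

The limitation the snippet refers to is that such fixed transforms only cover small kernels and unit stride, which is the gap DWM targets.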
Gaussian processes (GPs) for machine learning have been studied systematically over the past two decades, and they are by now widely used in a number of diverse applications.
23 Oct 2017 • Jinhua Tao, Zidong Du, Qi Guo, Huiying Lan, Lei Zhang, Shengyuan Zhou, Lingjie Xu, Cong Liu, Haifeng Liu, Shan Tang, Allen Rush, Willian Chen, Shaoli Liu, Yunji Chen, Tianshi Chen
The variety of emerging intelligence processors requires standard benchmarks for fair comparison and system optimization (in both software and hardware).
In this paper, a comparative study of estimators based on these different types of regularizers is reported.
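Since the snippet does not spell out which regularizers "these" refers to, the sketch below is only a generic illustration of how two common choices, an L2 (ridge) penalty and an L1 (lasso) penalty, yield estimators with different behavior on a toy sparse-regression problem; the penalties, solver, and data are assumptions for illustration, not the estimators studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-regression problem (illustrative only).
n, p = 50, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def ridge(X, y, lam):
    # Closed-form L2-regularized least squares.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def lasso_ista(X, y, lam, n_iter=500):
    # Simple ISTA (proximal gradient) solver for the L1-regularized problem.
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y))
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return beta

for name, est in [("ridge", ridge(X, y, 1.0)), ("lasso", lasso_ista(X, y, 5.0))]:
    print(name, "error:", np.linalg.norm(est - beta_true),
          "nonzeros:", int(np.sum(np.abs(est) > 1e-3)))
```

The point of the comparison is simply that different penalties trade off bias, variance, and sparsity differently, which is the kind of contrast such a comparative study examines.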
The number of hypotheses grows rapidly with the number of systems, and approximate solutions become a necessity for any problem of practical interest.
The aim of this paper is to answer the following research questions: Given a fitness function class, which functions are the easiest with respect to an evolutionary algorithm?
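As a purely illustrative instance of that question (the functions and algorithm below are assumptions, not taken from the paper): for a simple (1+1) EA with bitwise mutation, the classic OneMax function is "easy" while a needle-in-a-haystack function is "hard". The sketch contrasts the two.

```python
import random

def one_plus_one_ea(fitness, n=30, max_gens=200_000, seed=0):
    """Minimal (1+1) EA with standard bitwise mutation rate 1/n."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(x)
    for gen in range(1, max_gens + 1):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]  # global mutation
        if fitness(y) >= best:
            x, best = y, fitness(y)                              # elitist acceptance
        if best == n:                                            # both functions peak at n
            return gen
    return max_gens                                              # budget exhausted

one_max = sum                                  # "easy": fitness gradient points to the optimum
needle = lambda x: len(x) if all(x) else 0     # "hard": flat everywhere except the optimum

print("OneMax :", one_plus_one_ea(one_max), "generations")
print("Needle :", one_plus_one_ea(needle), "generations (likely hits the budget)")
```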
(1) We demonstrate rigorously that for elitist EAs with identical global mutation, using a larger population size always increases the average rate of convergence to the optimal set; and yet, sometimes, the expected number of generations needed to find an optimal solution (measured by either the maximal value or the average value) may increase rather than decrease.
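To make the setting of this comparison concrete, the sketch below is a generic (mu+mu) elitist EA in which every population size uses the identical global (bitwise) mutation operator, so only mu varies between runs. The choice of OneMax and all parameter values are assumptions for illustration only; this is not the authors' code and a single run does not demonstrate the theorem.

```python
import random

def elitist_ea(mu, n=40, max_gens=50_000, seed=1):
    """Generic (mu+mu) elitist EA on OneMax with global bitwise mutation (rate 1/n).

    Only mu differs between runs; the mutation operator is identical for all mu,
    mirroring the comparison in the claim above."""
    rng = random.Random(seed)
    fitness = sum
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for gen in range(max_gens):
        if any(fitness(x) == n for x in pop):
            return gen                      # generations until an optimum appears
        offspring = [[b ^ 1 if rng.random() < 1.0 / n else b for b in rng.choice(pop)]
                     for _ in range(mu)]
        # Elitist survivor selection: keep the best mu of parents plus offspring.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:mu]
    return max_gens

for mu in (1, 5, 20):
    print(f"mu={mu:>2}: generations to optimum =", elitist_ea(mu))
```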