no code implementations • 20 Sep 2020 • Veljko Milutinovic, Erfan Sadeqi Azer, Kristy Yoshimoto, Gerhard Klimeck, Miljan Djordjevic, Milos Kotlar, Miroslav Bojovic, Bozidar Miladinovic, Nenad Korolija, Stevan Stankovic, Nenad Filipović, Zoran Babovic, Miroslav Kosanic, Akira Tsuda, Mateo Valero, Massimo De Santo, Erich Neuhold, Jelena Skoručak, Laura Dipietro, Ivan Ratkovic
This article starts from the assumption that near-future 100B-transistor supercomputers-on-a-chip will include: N big multi-core processors; 1000N small many-core processors; a TPU-like fixed-structure systolic-array accelerator for the most frequently used Machine Learning algorithms needed in bandwidth-bound applications; and a flexible-structure reprogrammable accelerator for less frequently used Machine Learning algorithms needed in latency-critical applications.
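A fixed-structure systolic array of the kind mentioned here computes a matrix product by streaming operands through a grid of processing elements, each holding one accumulator. A minimal cycle-level sketch of this dataflow (illustrative only, not from the paper; the function name and scheduling are assumptions) might look like:

```python
def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing C = A @ B.

    Each (i, j) position models one processing element (PE) with a
    private accumulator. A-values enter from the left (row i delayed
    by i cycles) and B-values from the top (column j delayed by j
    cycles), so PE (i, j) sees its k-th operand pair at cycle i+j+k.
    """
    n, k = len(A), len(A[0])
    m = len(B[0])
    C = [[0] * m for _ in range(n)]  # one accumulator per PE
    total_cycles = k + n + m - 2
    for t in range(total_cycles):
        for i in range(n):
            for j in range(m):
                step = t - i - j  # which partial product fires at PE (i, j) now
                if 0 <= step < k:
                    C[i][j] += A[i][step] * B[step][j]
    return C
```

The point of the skewed schedule is that every PE does only local work each cycle, which is why such arrays suit bandwidth-bound workloads: each input element is fetched once and reused across a whole row or column of PEs.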
Distributed, Parallel, and Cluster Computing
no code implementations • 6 Apr 2020 • Gustavo A. Valencia-Zapata, Carolina Gonzalez-Canas, Michael G. Zentner, Okan Ersoy, Gerhard Klimeck
Several studies point out different causes of performance degradation in supervised machine learning.
no code implementations • 5 Sep 2017 • Gustavo A. Valencia-Zapata, Daniel Mejia, Gerhard Klimeck, Michael Zentner, Okan Ersoy
Probabilistic mixture models have been widely used for different machine learning and pattern recognition tasks such as clustering, dimensionality reduction, and classification.
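For clustering, the standard way to fit such a mixture model is expectation-maximization (EM). The toy sketch below fits a two-component 1-D Gaussian mixture; it is a generic illustration of the model class, not the authors' method, and all names in it are assumptions.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture to samples x via EM."""
    # Initialize the two component means at the extremes of the data.
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])  # mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return pi, mu, sigma
```

The soft responsibilities computed in the E-step are what make the same machinery reusable beyond clustering, e.g. as class-conditional density estimates in classification.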