Search Results for author: Marc Casas

Found 5 papers, 1 paper with code

Compressed Real Numbers for AI: a case-study using a RISC-V CPU

no code implementations • 11 Sep 2023 • Federico Rossi, Marco Cococcioni, Roger Ferrer Ibàñez, Jesùs Labarta, Filippo Mantovani, Marc Casas, Emanuele Ruffaldi, Sergio Saponara

As recently demonstrated, Deep Neural Networks (DNN), usually trained using single precision IEEE 754 floating point numbers (binary32), can also work using lower precision.
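Independently of the specific compressed number format evaluated in the paper, the underlying observation can be illustrated by running the same layer in binary32 and in a narrower type and comparing the outputs. A minimal numpy sketch of a toy fully connected layer (all names and sizes are illustrative, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy fully connected layer, trained in single precision (binary32).
    W = rng.standard_normal((256, 128)).astype(np.float32)
    b = rng.standard_normal(128).astype(np.float32)
    x = rng.standard_normal((32, 256)).astype(np.float32)

    def forward(x, W, b):
        return np.maximum(x @ W + b, 0)  # linear layer + ReLU

    y32 = forward(x, W, b)  # reference output in binary32

    # Same layer with inputs, weights and bias stored in half precision.
    y16 = forward(x.astype(np.float16), W.astype(np.float16), b.astype(np.float16))

    rel_err = np.abs(y32 - y16.astype(np.float32)).max() / np.abs(y32).max()
    print(f"max relative error with float16 storage: {rel_err:.2e}")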

Generating Efficient DNN-Ensembles with Evolutionary Computation

no code implementations • 18 Sep 2020 • Marc Ortiz, Florian Scheidegger, Marc Casas, Cristiano Malossi, Eduard Ayguadé

In this work, we leverage ensemble learning as a tool for the creation of faster, smaller, and more accurate deep learning models.

Ensemble Learning • Image Classification
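As a rough illustration of why ensembling helps (the evolutionary search over ensemble members described in the paper is not reproduced here), averaging the class probabilities of several weak classifiers typically beats each member on its own. A toy numpy sketch with synthetic "models":

    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_classes = 1000, 10
    labels = rng.integers(0, n_classes, size=n_samples)

    def noisy_member(noise):
        # Toy "model": true one-hot scores corrupted by noise, then softmax.
        logits = np.eye(n_classes)[labels] + noise * rng.standard_normal((n_samples, n_classes))
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        return probs / probs.sum(axis=1, keepdims=True)

    def accuracy(probs):
        return float((probs.argmax(axis=1) == labels).mean())

    members = [noisy_member(noise=2.0) for _ in range(5)]
    for i, m in enumerate(members):
        print(f"member {i}: accuracy = {accuracy(m):.3f}")

    # Ensemble: average the members' predicted probabilities.
    print(f"ensemble : accuracy = {accuracy(np.mean(members, axis=0)):.3f}")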

Reducing Data Motion to Accelerate the Training of Deep Neural Networks

1 code implementation • 5 Apr 2020 • Sicong Zhuang, Cristiano Malossi, Marc Casas

This paper reduces the cost of DNN training by decreasing the amount of data movement across heterogeneous architectures composed of several GPUs and multicore CPU devices.

Distributed, Parallel, and Cluster Computing
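The principle behind reducing data motion in data-parallel training can be sketched while abstracting away the actual GPU/CPU implementation: each device computes gradients on the data shard it already holds, and only the much smaller gradient vectors are exchanged. A toy numpy simulation where plain arrays stand in for devices (names and sizes are illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    n_devices, local_batch, dim = 4, 8, 16

    w = rng.standard_normal(dim)                     # model replica on every device
    true_w = rng.standard_normal(dim)
    local_x = [rng.standard_normal((local_batch, dim)) for _ in range(n_devices)]
    local_y = [x @ true_w for x in local_x]          # each device keeps its own shard

    def local_gradient(x, y, w):
        # Gradient of the least-squares loss on one device's local shard only.
        return x.T @ (x @ w - y) / len(y)

    for step in range(200):
        # Gradients are computed where the data lives; only these small
        # vectors cross device boundaries for the averaging (all-reduce) step.
        grads = [local_gradient(x, y, w) for x, y in zip(local_x, local_y)]
        w -= 0.3 * np.mean(grads, axis=0)

    print("distance to target weights:", float(np.linalg.norm(w - true_w)))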

A Dynamic Approach to Accelerate Deep Learning Training

no code implementations • 25 Sep 2019 • John Osorio, Adrià Armejach, Eric Petit, Marc Casas

The first approach achieves accuracy ratios slightly lower than the state-of-the-art while using half-precision arithmetic during more than 99% of training.
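One way to picture such a scheme, without claiming this is the authors' exact policy, is a training loop that emulates half-precision arithmetic on most iterations and falls back to full single precision on a small fraction of them. A minimal numpy sketch on a linear least-squares problem (the switching policy below is purely illustrative):

    import numpy as np

    rng = np.random.default_rng(3)
    n, dim = 256, 32
    x = rng.standard_normal((n, dim)).astype(np.float32)
    true_w = rng.standard_normal(dim).astype(np.float32)
    y = x @ true_w
    w = np.zeros(dim, dtype=np.float32)

    def to_half(a):
        # Emulate half-precision storage by a round trip through float16.
        return a.astype(np.float16).astype(np.float32)

    for step in range(1000):
        # Illustrative policy: roughly 99% of steps use half precision,
        # every 100th step runs in full single precision.
        use_half = (step % 100) != 0
        wp = to_half(w) if use_half else w
        grad = x.T @ (x @ wp - y) / n
        w -= 0.1 * (to_half(grad) if use_half else grad)

    print("final training loss:", float(np.mean((x @ w - y) ** 2)))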

Low-Precision Floating-Point Schemes for Neural Network Training

no code implementations • 14 Apr 2018 • Marc Ortiz, Adrián Cristal, Eduard Ayguadé, Marc Casas

The use of low-precision fixed-point arithmetic along with stochastic rounding has been proposed as a promising alternative to the commonly used 32-bit floating-point arithmetic to enhance neural network training in terms of performance and energy efficiency.
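Stochastic rounding maps each value to one of its two nearest representable fixed-point neighbours with probability proportional to proximity, so the rounding error is zero in expectation and small gradient updates are not systematically lost. A minimal numpy sketch, assuming a fixed-point grid with a configurable number of fractional bits (the format parameters are illustrative):

    import numpy as np

    rng = np.random.default_rng(4)

    def stochastic_round_fixed(x, frac_bits=8):
        # Round to a fixed-point grid with resolution 2**-frac_bits,
        # picking floor or ceil at random in proportion to the remainder.
        scale = 2.0 ** frac_bits
        scaled = x * scale
        floor = np.floor(scaled)
        prob_up = scaled - floor
        return (floor + (rng.random(x.shape) < prob_up)) / scale

    # Small "gradient updates", all below the grid resolution of 2**-8.
    updates = np.full(100_000, 0.001)
    nearest = np.round(updates * 2 ** 8) / 2 ** 8          # round-to-nearest
    stochastic = stochastic_round_fixed(updates, frac_bits=8)

    # Round-to-nearest flushes every update to zero; stochastic rounding
    # preserves their sum in expectation.
    print("mean update, round-to-nearest:", float(nearest.mean()))
    print("mean update, stochastic      :", float(stochastic.mean()))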
