2 code implementations • 10 Jan 2020 • Yury Pisarchyk, Juhyun Lee
While deep neural net inference was long considered a task for servers only, recent advances in technology allow it to be moved to mobile and embedded devices, which is desirable for reasons ranging from latency to privacy.
no code implementations • 3 Jul 2019 • Juhyun Lee, Nikolay Chirkov, Ekaterina Ignasheva, Yury Pisarchyk, Mogan Shieh, Fabio Riccardi, Raman Sarokin, Andrei Kulik, Matthias Grundmann
On-device inference of machine learning models for mobile phones is desirable due to its lower latency and increased privacy.