no code implementations • 2 Oct 2023 • Yeonsoo Jeon, Mattan Erez, Michael Orshansky
Privacy-Preserving ML (PPML) based on Homomorphic Encryption (HE) is a promising foundational privacy technology.
no code implementations • 27 Sep 2023 • Zihao Deng, Benjamin Ghaemmaghami, Ashish Kumar Singh, Benjamin Cho, Leo Orshansky, Mattan Erez, Michael Orshansky
At constant model quality, MLET enables reducing the embedding dimension, and hence model size, by up to 16x, and by 5.8x on average, across the models.
no code implementations • 11 Jul 2023 • Zihao Deng, Xin Wang, Sayeh Sharify, Michael Orshansky
Quantization that assigns the same bit-width to all layers causes large accuracy degradation at low precision and is wasteful at high-precision settings.
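The trade-off above can be seen with a minimal sketch of symmetric uniform quantization (this is a generic illustration, not the paper's mixed-precision method; the tensor and bit-widths are illustrative assumptions):

```python
import numpy as np

def uniform_quantize(x, bits):
    """Symmetric uniform quantization of a tensor to a given bit-width."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.max(np.abs(x)) / levels
    q = np.round(x / scale)
    return np.clip(q, -levels, levels) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                 # stand-in for one layer's weights

# Quantization error shrinks rapidly with bit-width: a single uniform
# bit-width is thus too coarse for sensitive layers at low precision
# and wasteful for robust layers at high precision, which motivates
# assigning bit-widths per layer.
for bits in (2, 4, 8):
    err = np.mean((w - uniform_quantize(w, bits)) ** 2)
    print(bits, err)
```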
no code implementations • 5 Mar 2023 • Xiaodan Xi, Ge Li, Ye Wang, Yeonsoo Jeon, Michael Orshansky
We construct the lattice PUF using a physically obfuscated key and an LWE decryption function block.
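For intuition, textbook LWE decryption recovers a bit by removing the inner product with the secret and thresholding around q/2. A toy sketch follows (parameters q, n and the noise range are illustrative assumptions, not the PUF's actual instantiation):

```python
import numpy as np

q, n = 257, 16
rng = np.random.default_rng(1)
s = rng.integers(0, q, n)                 # secret key (the obfuscated key)

def lwe_encrypt(bit):
    """Encrypt one bit as an LWE sample (a, b = <a,s> + e + bit*q/2 mod q)."""
    a = rng.integers(0, q, n)
    e = int(rng.integers(-2, 3))          # small noise term
    b = (a @ s + e + bit * (q // 2)) % q
    return a, b

def lwe_decrypt(a, b):
    """Remove <a, s> and threshold around q/2 to recover the bit."""
    m = (b - a @ s) % q
    return int(q // 4 < m < 3 * q // 4)

a, b = lwe_encrypt(1)
print(lwe_decrypt(a, b))   # 1
```

Because decryption only needs the inner product, a modular reduction, and a comparison, it maps to a compact hardware block.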
no code implementations • 11 Nov 2021 • Zihao Deng, Michael Orshansky
DNNs deployed on analog processing-in-memory (PIM) architectures are subject to fabrication-time variability.
no code implementations • 28 Aug 2021 • Ge Li, Mohit Tiwari, Michael Orshansky
Spatial accelerators, which parallelize matrix/vector operations, are used to enhance the energy efficiency of DNN computation.
no code implementations • 10 Jun 2020 • Benjamin Ghaemmaghami, Zihao Deng, Benjamin Cho, Leo Orshansky, Ashish Kumar Singh, Mattan Erez, Michael Orshansky
Increasing the dimension of embedding vectors improves model accuracy but comes at a high cost to model size.
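The size cost is easy to see with back-of-the-envelope arithmetic: an embedding table's footprint grows linearly with the embedding dimension, and such tables dominate recommendation-model size. The row count and dtype below are illustrative assumptions, not figures from the paper:

```python
# Embedding table footprint: rows x dim x bytes-per-element.
def embedding_table_bytes(num_rows, dim, bytes_per_elem=4):
    """Size in bytes of a dense float32 embedding table."""
    return num_rows * dim * bytes_per_elem

rows = 10_000_000                          # hypothetical vocabulary size
for dim in (16, 64, 256):
    print(dim, embedding_table_bytes(rows, dim) / 1e9, "GB")
```

Going from dim 16 to 256 multiplies the table's memory by 16x, which is why shrinking the embedding dimension at constant accuracy is valuable.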