Search Results for author: Michael Orshansky

Found 7 papers, 0 papers with code

Artemis: HE-Aware Training for Efficient Privacy-Preserving Machine Learning

no code implementations • 2 Oct 2023 • Yeonsoo Jeon, Mattan Erez, Michael Orshansky

Privacy-Preserving ML (PPML) based on Homomorphic Encryption (HE) is a promising foundational privacy technology.

Model Compression • Privacy Preserving

Mixed-Precision Quantization with Cross-Layer Dependencies

no code implementations • 11 Jul 2023 • Zihao Deng, Xin Wang, Sayeh Sharify, Michael Orshansky

Quantization that assigns the same bit-width to all layers leads to large accuracy degradation at low precision and is wasteful at high-precision settings; a minimal illustration follows below.

Quantization
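
To illustrate the contrast drawn in the excerpt above, here is a minimal sketch comparing a single global bit-width against a hypothetical mixed per-layer assignment, using plain symmetric uniform quantization; the layer shapes, bit-width plans, and error metric are illustrative and do not represent the paper's cross-layer method.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a weight tensor to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale

# Hypothetical 4-layer model: one global bit-width vs. a mixed per-layer plan.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) for _ in range(4)]

uniform_bits = [4, 4, 4, 4]   # same bit-width for every layer
mixed_bits   = [8, 4, 4, 6]   # hypothetical plan: keep sensitive layers wider

for name, plan in [("uniform", uniform_bits), ("mixed", mixed_bits)]:
    err = sum(np.mean((w - quantize_uniform(w, b)) ** 2)
              for w, b in zip(layers, plan))
    print(f"{name:8s} total quantization MSE: {err:.6f}")
```

In this toy setup the layers are statistically identical, so it only shows the mechanics of a per-layer bit-width plan; real mixed-precision methods exploit differences in layer sensitivity (and, per this paper's title, dependencies across layers).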

A Provably Secure Strong PUF based on LWE: Construction and Implementation

no code implementations • 5 Mar 2023 • Xiaodan Xi, Ge Li, Ye Wang, Yeonsoo Jeon, Michael Orshansky

We construct a lattice PUF with a physically obfuscated key and an LWE decryption function block.
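
As a rough illustration of the decryption block referenced above, here is a minimal sketch of textbook LWE bit decryption; the modulus, dimension, and noise range are illustrative placeholders rather than the paper's parameters, and the secret vector here merely stands in for the role played by the physically obfuscated key.

```python
import numpy as np

q, n = 251, 16                    # illustrative modulus and secret dimension (not the paper's)
rng = np.random.default_rng(0)
s = rng.integers(0, q, n)         # secret vector; in the PUF this role is filled by the obfuscated key

def lwe_encrypt(bit):
    """Encode one bit as an LWE sample (a, b) with small additive noise."""
    a = rng.integers(0, q, n)
    e = int(rng.integers(-2, 3))  # small noise term
    b = (a @ s + e + bit * (q // 2)) % q
    return a, b

def lwe_decrypt(a, b):
    """Recover the bit: b - <a, s> lands near 0 for bit 0 and near q/2 for bit 1."""
    d = (b - a @ s) % q
    return int(min(d, q - d) > q // 4)

a, b = lwe_encrypt(1)
assert lwe_decrypt(a, b) == 1
```

The sketch only shows the arithmetic of LWE decryption; how challenges map to ciphertexts and how the key is physically obfuscated are specific to the paper's construction.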

Variability-Aware Training and Self-Tuning of Highly Quantized DNNs for Analog PIM

no code implementations • 11 Nov 2021 • Zihao Deng, Michael Orshansky

DNNs deployed on analog processing-in-memory (PIM) architectures are subject to fabrication-time variability; a minimal noise-injection sketch follows below.

Quantization
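
One common way to make training variability-aware, sketched below in PyTorch under an assumed multiplicative Gaussian weight-perturbation model (not necessarily the paper's variability model or training scheme), is to perturb the weights on every forward pass so the network learns parameters that tolerate device mismatch.

```python
import torch
import torch.nn as nn

SIGMA = 0.1  # assumed relative standard deviation of device variation

class NoisyLinear(nn.Linear):
    """Linear layer whose weights are randomly perturbed on each forward pass,
    mimicking fabrication-time variability of analog PIM cells."""
    def forward(self, x):
        if self.training:
            noise = 1.0 + SIGMA * torch.randn_like(self.weight)
            return nn.functional.linear(x, self.weight * noise, self.bias)
        return super().forward(x)

model = nn.Sequential(NoisyLinear(32, 64), nn.ReLU(), NoisyLinear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# One training step under injected variability.
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```

Because a fresh perturbation is drawn at each step, the optimizer is pushed toward weights whose accuracy degrades gracefully under the random deviations an individual fabricated chip would exhibit.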

Power-Based Attacks on Spatial DNN Accelerators

no code implementations • 28 Aug 2021 • Ge Li, Mohit Tiwari, Michael Orshansky

Spatial accelerators, which parallelize matrix/vector operations, are used to enhance the energy efficiency of DNN computation.

Model extraction
