no code implementations • 27 Sep 2022 • Yongqin Wang, Rachit Rajat, Murali Annavaram
In this work, we show that serialization is unnecessary, particularly for ML computations in both convolutional neural networks and Transformer-based models.
no code implementations • 30 Jun 2022 • Hanieh Hashemi, Yongqin Wang, Murali Annavaram
DarKnight relies on cooperative execution between trusted execution environments (TEEs) and accelerators: the TEE provides privacy and integrity verification, while the accelerators perform the bulk of the linear algebraic computation for performance.
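The split described above works because linear operations commute with additive blinding: the TEE can mask private inputs with random noise, let an untrusted accelerator compute on the masked data, and then remove the noise. The snippet below is a minimal sketch of that linear-blinding idea, not the actual DarKnight protocol; the names `W`, `x`, and `r` and the single-layer setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: one linear layer W applied to a private input x.
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)      # private data, held inside the TEE
r = rng.standard_normal(8)      # random blinding noise generated in the TEE

# TEE side: blind the input before releasing it to the untrusted accelerator.
blinded = x + r

# Accelerator side: does the heavy linear algebra, seeing only blinded data.
y_blinded = W @ blinded

# TEE side: unblind by subtracting the correction term W @ r.
y = y_blinded - W @ r

# Linearity makes the recovery exact: W(x + r) - Wr == Wx.
assert np.allclose(y, W @ x)
```

The unblinding step (computing and subtracting `W @ r`) is work the TEE must do itself, which is one reason blinding/unblinding overhead can limit scalability.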
no code implementations • 5 May 2021 • Hanieh Hashemi, Yongqin Wang, Chuan Guo, Murali Annavaram
This learning setting presents, among others, two unique challenges: how to protect the privacy of the clients' data during training, and how to ensure the integrity of the trained model.
no code implementations • 1 May 2021 • Hanieh Hashemi, Yongqin Wang, Murali Annavaram
Privacy and security-related concerns are growing as machine learning reaches diverse application domains.
1 code implementation • 7 Dec 2019 • Krishna Giri Narra, Zhifeng Lin, Yongqin Wang, Keshav Balasubramaniam, Murali Annavaram
However, the overhead of blinding and unblinding the data is a limiting factor for scalability.