Search Results for author: Hanieh Hashemi

Found 7 papers, 1 paper with code

Differentially Private Heavy Hitter Detection using Federated Analytics

no code implementations21 Jul 2023 Karan Chadha, Junye Chen, John Duchi, Vitaly Feldman, Hanieh Hashemi, Omid Javidbakht, Audra McMillan, Kunal Talwar

In this work, we study practical heuristics to improve the performance of prefix-tree based algorithms for differentially private heavy hitter detection.
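The prefix-tree family of algorithms the abstract refers to can be sketched in a few lines: grow candidate prefixes one character at a time, and keep only those whose noised count clears a threshold. This is a toy illustration with Laplace noise, not the paper's actual heuristics or privacy accounting.

```python
import random
from collections import Counter

def dp_heavy_hitters(words, depth, epsilon, threshold):
    """Toy prefix-tree heavy-hitter search: extend surviving prefixes one
    character per level, keeping those whose Laplace-noised count exceeds
    the threshold. Illustration only, not the paper's algorithm."""
    survivors = {""}
    for level in range(1, depth + 1):
        counts = Counter(w[:level] for w in words
                         if len(w) >= level and w[:level - 1] in survivors)
        survivors = set()
        for prefix, count in counts.items():
            # Laplace(0, 1/epsilon) as a difference of two exponentials
            noisy = count + random.expovariate(epsilon) - random.expovariate(epsilon)
            if noisy >= threshold:
                survivors.add(prefix)
    return survivors
```

Pruning low-count prefixes early is what keeps the candidate set (and the privacy budget spent per level) manageable; the paper studies heuristics for exactly these kinds of choices.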

Data Leakage via Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems

no code implementations12 Dec 2022 Hanieh Hashemi, Wenjie Xiong, Liu Ke, Kiwan Maeng, Murali Annavaram, G. Edward Suh, Hsien-Hsin S. Lee

This paper explores the private information that may be learned by tracking a recommendation model's sparse feature access patterns.
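The core observation can be made concrete with a toy example (hypothetical names throughout): a sparse categorical feature is used directly as a row index into an embedding table, so anyone who can observe which rows are accessed recovers the feature value itself.

```python
# Hypothetical illustration: an embedding-table lookup in a recommendation
# model accesses the row whose index is the raw categorical feature value,
# so the access pattern leaks that value to a memory-side observer.
user_category = 17                       # sensitive sparse feature
observed_accesses = []                   # what the observer records

def embedding_lookup(table, index):
    observed_accesses.append(index)      # side channel: the accessed index
    return table[index]

table = [[0.0] * 4 for _ in range(100)]  # toy embedding table
_ = embedding_lookup(table, user_category)
recovered = observed_accesses[0]         # attacker recovers the feature
```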

Recommendation Systems

DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware

no code implementations30 Jun 2022 Hanieh Hashemi, Yongqin Wang, Murali Annavaram

DarKnight relies on cooperative execution between trusted execution environments (TEEs) and accelerators: the TEE provides privacy and integrity verification, while the accelerators perform the bulk of the linear algebraic computation, optimizing performance.
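The cooperative pattern can be sketched with a simplified additive-blinding example (this is an illustration of the TEE/accelerator split, not DarKnight's actual blinding scheme): the TEE masks the private input, the untrusted accelerator does the heavy linear algebra on the masked data, and the TEE removes the mask afterward, relying on linearity.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 5))          # public model weights
x = rng.standard_normal(5)               # private input, held in the TEE

# --- inside the TEE: blind the input with a random mask r
r = rng.standard_normal(5)
blinded = x + r

# --- on the untrusted accelerator: bulk linear algebra on blinded data only
y_blinded = W @ blinded

# --- back inside the TEE: unblind with the correction term W @ r
y = y_blinded - W @ r                    # equals W @ x by linearity
```

The accelerator never sees `x`, yet does all the matrix work; the TEE's job shrinks to cheap masking and unmasking.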

Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings

1 code implementation26 Dec 2021 Tiantian Feng, Hanieh Hashemi, Rajat Hebbar, Murali Annavaram, Shrikanth S. Narayanan

To assess the information leakage of SER systems trained using FL, we propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters, corresponding to the FedSGD and the FedAvg training algorithms, respectively.

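A minimal example of why shared gradients leak input information (a toy single-example squared-loss model, not the paper's SER attack framework, and it assumes the attacker can estimate the scalar residual): for a linear model the weight gradient is a scalar multiple of the input, so the input direction is exposed in a FedSGD update.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(4)               # current global model
x = np.array([1.0, 0.0, 3.0, -2.0])     # client's private input
t = 0.5                                  # client's private label

# Client's FedSGD step for squared loss: grad = (w.x - t) * x
residual = w @ x - t
grad = residual * x                      # this is what the server observes

# Server-side inference: the gradient is a scalar multiple of x, so any
# attribute expressible from x leaks. (Here the true residual is used,
# which simplifies the attack for illustration.)
recovered = grad / residual
```

FedAvg updates average many such steps, which is why the paper treats the two training algorithms as distinct leakage settings.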

Attribute, Federated Learning +2

Byzantine-Robust and Privacy-Preserving Framework for FedML

no code implementations5 May 2021 Hanieh Hashemi, Yongqin Wang, Chuan Guo, Murali Annavaram

This learning setting presents, among others, two key challenges: how to protect the privacy of the clients' data during training, and how to ensure the integrity of the trained model.
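The integrity half of the problem is usually addressed with a Byzantine-robust aggregator. As a sketch, the coordinate-wise median is one standard choice (one of several robust aggregators; not necessarily the one this paper adopts): a few malicious client updates cannot drag the aggregate arbitrarily far.

```python
import numpy as np

def robust_aggregate(updates):
    """Coordinate-wise median: a standard Byzantine-robust aggregator.
    Outlier updates influence the result far less than with a mean."""
    return np.median(np.stack(updates), axis=0)

honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
byzantine = [np.array([100.0, -100.0])]   # one malicious client
agg = robust_aggregate(honest + byzantine)
```

A plain mean of the same four updates would be pulled to roughly (25.75, -23.5); the median stays near the honest cluster.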

Federated Learning, Privacy Preserving

Privacy and Integrity Preserving Training Using Trusted Hardware

no code implementations1 May 2021 Hanieh Hashemi, Yongqin Wang, Murali Annavaram

Privacy and security-related concerns are growing as machine learning reaches diverse application domains.

BIG-bench Machine Learning
