1 code implementation • Findings (NAACL) 2022 • Elan Markowitz, Keshav Balasubramanian, Mehrnoosh Mirtaheri, Murali Annavaram, Aram Galstyan, Greg Ver Steeg
Knowledge graphs (KGs) often represent knowledge bases that are incomplete.
no code implementations • 12 Dec 2022 • Hanieh Hashemi, Wenjie Xiong, Liu Ke, Kiwan Maeng, Murali Annavaram, G. Edward Suh, Hsien-Hsin S. Lee
This paper explores the private information that may be learned by tracking a recommendation model's sparse feature access patterns.
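The leakage channel is easy to see in miniature: sparse categorical features are looked up by row index in an embedding table, so anyone who can observe the access pattern recovers the raw feature values even when the data itself is encrypted. A minimal sketch (the table size and logging mechanism are illustrative, not the paper's threat model):

```python
import numpy as np

# One embedding table per sparse feature; row index == raw feature value.
embedding_table = np.random.rand(10_000, 16)
access_log = []  # what a side-channel observer of memory accesses sees

def embed(feature_value: int) -> np.ndarray:
    access_log.append(feature_value)  # the lookup index is the secret itself
    return embedding_table[feature_value]

embed(4021)        # e.g., a user's zip-code category
print(access_log)  # the observer recovers the exact feature value: [4021]
```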
no code implementations • 27 Sep 2022 • Yongqin Wang, Rachit Rajat, Murali Annavaram
Multi-party computation (MPC) has been gaining popularity in recent years as a secure computing model, particularly for machine learning (ML) inference.
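For readers new to the model, the core primitive behind most MPC protocols is additive secret sharing: each party holds a random-looking share, and computation proceeds on shares so that no single party sees the inputs. A minimal two-server sketch (the modulus and values are arbitrary):

```python
import random

P = 2**61 - 1  # arithmetic is done modulo a prime

def share(v: int):
    """Split v into two additive shares that are individually random."""
    s0 = random.randrange(P)
    return s0, (v - s0) % P

x0, x1 = share(123)          # client secret-shares its inputs
y0, y1 = share(456)
z0 = (x0 + y0) % P           # server 0 computes on its shares only
z1 = (x1 + y1) % P           # server 1 computes on its shares only
assert (z0 + z1) % P == 579  # reconstruction reveals only the result x + y
```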
no code implementations • 30 Jun 2022 • Hanieh Hashemi, Yongqin Wang, Murali Annavaram
DarKnight relies on cooperative execution between trusted execution environments (TEEs) and accelerators: the TEE provides privacy and integrity verification, while the accelerators perform the bulk of the linear algebraic computation to optimize performance.
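Because the heavy operators are linear, the TEE can hide the data from the accelerator by offloading only linear combinations of inputs and noise, then decoding the results. A minimal numpy sketch of that blinding idea (the encoding matrix, shapes, and names are illustrative, not DarKnight's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, out = 2, 4, 3
W = rng.standard_normal((out, d))  # model weights, visible to the GPU
x = rng.standard_normal((K, d))    # private inputs, held inside the TEE

# Inside the TEE: encode K inputs plus one noise vector with a secret
# invertible matrix A, so the GPU only sees random-looking combinations.
noise = rng.standard_normal((1, d))
A = rng.standard_normal((K + 1, K + 1))
blinded = A @ np.vstack([x, noise])

# On the untrusted accelerator: the bulk linear algebra.
gpu_out = blinded @ W.T

# Back inside the TEE: unblind and keep the first K rows.
decoded = np.linalg.inv(A) @ gpu_out
assert np.allclose(decoded[:K], x @ W.T)
```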
1 code implementation • 26 Dec 2021 • Tiantian Feng, Hanieh Hashemi, Rajat Hebbar, Murali Annavaram, Shrikanth S. Narayanan
To assess the information leakage of SER systems trained using FL, we propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters, corresponding to the FedSGD and the FedAvg training algorithms, respectively.
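The attack itself can be summarized as supervised learning over observed updates: an attacker with shadow clients whose sensitive attribute is known trains a classifier from shared updates to that attribute. A minimal sketch (the shadow data, update dimension, and leakage signal are synthetic stand-ins, not the paper's SER setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_shadow, update_dim = 200, 64

# Shadow clients: known sensitive attribute plus the updates they share
# (gradients under FedSGD, parameter deltas under FedAvg).
attr = rng.integers(0, 2, n_shadow)
updates = rng.standard_normal((n_shadow, update_dim))
updates += attr[:, None] * 0.5  # synthetic leakage correlated with the attribute

# Attack model: shared update -> sensitive attribute.
attack_model = LogisticRegression(max_iter=1000).fit(updates, attr)

# At attack time, infer a victim client's attribute from its update alone.
victim_update = rng.standard_normal((1, update_dim)) + 0.5
print("inferred attribute:", attack_model.predict(victim_update)[0])
```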
no code implementations • 27 Jul 2021 • Tingting Tang, Ramy E. Ali, Hanieh Hashemi, Tynan Gangwani, Salman Avestimehr, Murali Annavaram
Much of the overhead in prior schemes comes from tightly coupling the coding for all three problems into a single framework.
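To make the coding concrete for one of those problems in isolation, here is a minimal straggler-mitigation sketch in the MDS-coded style such schemes build on (illustrative only, not the paper's verifiable coding): split A into two blocks and add a parity worker, so any two of the three results recover A @ x.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
A1, A2 = A[:2], A[2:]

# Three workers: two data blocks plus one parity block.
r1, r2, rp = A1 @ x, A2 @ x, (A1 + A2) @ x

# Suppose worker 2 straggles: recover its result from the parity instead.
r2_recovered = rp - r1
assert np.allclose(np.concatenate([r1, r2_recovered]), A @ x)
```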
1 code implementation • 4 Jun 2021 • Chaoyang He, Emir Ceyani, Keshav Balasubramanian, Murali Annavaram, Salman Avestimehr
This work proposes SpreadGNN, a novel multi-task federated training framework that, for the first time in the literature, can operate with partial labels and without a central server.
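Without a server, aggregation has to happen peer-to-peer: each client periodically averages its parameters with its neighbors over a communication topology. A minimal sketch of that decentralized periodic averaging (the topology, learning rate, and local step are illustrative, not SpreadGNN's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 8
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a simple chain topology
params = [rng.standard_normal(dim) for _ in range(n_clients)]

def local_step(w: np.ndarray) -> np.ndarray:
    """Stand-in for one local (multi-task GNN) training step."""
    return w - 0.1 * rng.standard_normal(w.shape)

for _round in range(10):
    params = [local_step(w) for w in params]
    # Serverless aggregation: each client mixes with its neighbors only.
    params = [
        np.mean([params[i]] + [params[j] for j in neighbors[i]], axis=0)
        for i in range(n_clients)
    ]
```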
no code implementations • 5 May 2021 • Hanieh Hashemi, Yongqin Wang, Chuan Guo, Murali Annavaram
This learning setting presents, among others, two unique challenges: how to protect the privacy of the clients' data during training, and how to ensure the integrity of the trained model.
no code implementations • 1 May 2021 • Hanieh Hashemi, Yongqin Wang, Murali Annavaram
Privacy- and security-related concerns are growing as machine learning reaches diverse application domains.
1 code implementation • 14 Apr 2021 • Chaoyang He, Keshav Balasubramanian, Emir Ceyani, Carl Yang, Han Xie, Lichao Sun, Lifang He, Liangwei Yang, Philip S. Yu, Yu Rong, Peilin Zhao, Junzhou Huang, Murali Annavaram, Salman Avestimehr
FedGraphNN is built on a unified formulation of graph FL and contains a wide range of datasets from different domains, popular GNN models, and FL algorithms, with secure and efficient system support.
no code implementations • 9 Dec 2020 • Alexandra Angerd, Keshav Balasubramanian, Murali Annavaram
Modern machine learning techniques are successfully being adapted to data modeled as graphs.
no code implementations • 17 Oct 2020 • Assaf Eisenman, Kiran Kumar Matam, Steven Ingram, Dheevatsa Mudigere, Raghuraman Krishnamoorthi, Krishnakumar Nair, Misha Smelyanskiy, Murali Annavaram
While Check-N-Run is applicable to long-running ML jobs, we focus on checkpointing recommendation models, which are currently the largest ML models, reaching terabytes in size.
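At terabyte scale, writing the full model at every checkpoint is infeasible, which motivates saving only what changed. A minimal sketch of that differential idea for an embedding table (the row-tracking scheme here is illustrative, not Check-N-Run's implementation):

```python
import numpy as np

table = np.zeros((1_000_000, 16), dtype=np.float32)  # stand-in embedding table
dirty_rows: set[int] = set()

def embedding_update(row: int, grad: np.ndarray, lr: float = 0.01) -> None:
    table[row] -= lr * grad
    dirty_rows.add(row)  # remember which rows changed since the last checkpoint

def incremental_checkpoint() -> dict:
    """Persist only the touched rows instead of the full table."""
    delta = {r: table[r].copy() for r in sorted(dirty_rows)}
    dirty_rows.clear()
    return delta

embedding_update(42, np.ones(16, dtype=np.float32))
delta = incremental_checkpoint()
print(len(delta), "row(s) checkpointed instead of", table.shape[0])
```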
2 code implementations • NeurIPS 2020 • Chaoyang He, Murali Annavaram, Salman Avestimehr
However, the large model size impedes training on resource-constrained edge devices.
5 code implementations • 27 Jul 2020 • Chaoyang He, Songze Li, Jinhyun So, Xiao Zeng, Mi Zhang, Hongyi Wang, Xiaoyang Wang, Praneeth Vepakomma, Abhishek Singh, Hang Qiu, Xinghua Zhu, Jianzong Wang, Li Shen, Peilin Zhao, Yan Kang, Yang Liu, Ramesh Raskar, Qiang Yang, Murali Annavaram, Salman Avestimehr
Federated learning (FL) is a rapidly growing research field in machine learning.
1 code implementation • 18 Apr 2020 • Chaoyang He, Murali Annavaram, Salman Avestimehr
Federated Learning (FL) has proven to be an effective learning framework when data cannot be centralized due to privacy, communication costs, or regulatory restrictions.
1 code implementation • 7 Dec 2019 • Krishna Giri Narra, Zhifeng Lin, Yongqin Wang, Keshav Balasubramanian, Murali Annavaram
However, the overhead of blinding and unblinding the data is a limiting factor for scalability.
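To see where that overhead comes from, consider a generic additive-blinding scheme for one linear layer: the trusted side adds a random mask before offloading and subtracts the mask's contribution afterwards, and both steps cost time per layer. A minimal sketch (in the style of such schemes, not this paper's exact protocol):

```python
import numpy as np

rng = np.random.default_rng(0)
d, out = 512, 256
W = rng.standard_normal((out, d))
x = rng.standard_normal(d)  # private activation on the trusted side

r = rng.standard_normal(d)  # blinding mask          <- blinding overhead
Wr = W @ r                  # mask's contribution (can be precomputed offline)

y_gpu = W @ (x + r)         # the untrusted side only ever sees x + r
y = y_gpu - Wr              # unblind                <- unblinding overhead
assert np.allclose(y, W @ x)
```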
no code implementations • 22 Oct 2019 • Zhifeng Lin, Krishna Giri Narra, Mingchao Yu, Salman Avestimehr, Murali Annavaram
Most model training is performed on high-performance compute nodes, and the training data is stored near these nodes for faster training.
no code implementations • 5 Jun 2019 • Krishna Narra, Zhifeng Lin, Ganesh Ananthanarayanan, Salman Avestimehr, Murali Annavaram
In this work, we argue that MLaaS platforms also provide unique opportunities to cut the cost of redundancy.
no code implementations • 27 Apr 2019 • Krishna Giri Narra, Zhifeng Lin, Ganesh Ananthanarayanan, Salman Avestimehr, Murali Annavaram
Deploying the collage-cnn models in the cloud, we demonstrate that the 99th-percentile tail latency of inference can be reduced by 1.2x to 2x compared to replication-based approaches while providing high accuracy.
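The redundancy trick is to pack several requests into one image so that a single backup model run covers all of them. A minimal sketch of building such a collage (the image sizes and 2x2 layout are illustrative):

```python
import numpy as np

def make_collage(images: list[np.ndarray]) -> np.ndarray:
    """Tile four HxWxC images into one 2Hx2WxC collage."""
    a, b, c, d = images
    top = np.concatenate([a, b], axis=1)
    bottom = np.concatenate([c, d], axis=1)
    return np.concatenate([top, bottom], axis=0)

imgs = [np.random.rand(32, 32, 3) for _ in range(4)]
collage = make_collage(imgs)  # one request to the collage-cnn backs up four
print(collage.shape)          # (64, 64, 3)
```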
no code implementations • NeurIPS 2018 • Mingchao Yu, Zhifeng Lin, Krishna Narra, Songze Li, Youjie Li, Nam Sung Kim, Alexander Schwing, Murali Annavaram, Salman Avestimehr
Data parallelism can boost the training speed of convolutional neural networks (CNNs), but can suffer from significant communication costs caused by gradient aggregation.
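A standard remedy is to compress gradients before aggregating them; as a generic illustration (top-k sparsification here, not the paper's vector-quantization scheme), each worker sends only its largest-magnitude entries:

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Keep only the k largest-magnitude entries of the gradient."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def aggregate(sparse_grads, dim: int) -> np.ndarray:
    """Sum the sparse contributions from all workers."""
    total = np.zeros(dim)
    for idx, vals in sparse_grads:
        total[idx] += vals
    return total

rng = np.random.default_rng(0)
dim, k, workers = 1000, 10, 4
grads = [rng.standard_normal(dim) for _ in range(workers)]
agg = aggregate([topk_sparsify(g, k) for g in grads], dim)  # ~1% of full traffic
```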