no code implementations • 10 Oct 2023 • Marius Arvinte, Cory Cornelius, Jason Martin, Nageen Himayat
Beyond their impressive sampling capabilities, score-based diffusion models offer a powerful analysis tool in the form of unbiased density estimation of a query sample under the training data distribution.
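For intuition, here is a minimal sketch of the density-estimation idea the abstract refers to, using the probability-flow ODE. It is a toy example under our own assumptions (a 1-D Gaussian data distribution whose score is known in closed form, standing in for a trained score network), not the paper's method, so the ODE estimate can be checked against the true log-density.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy sketch: exact log-density via the probability-flow ODE of a VP-SDE.
# Assumption: data ~ N(0, v), so every noised marginal is Gaussian and the
# score is analytic (in practice the score would be a trained network).
v = 4.0                                   # variance of the toy data N(0, v)
beta = lambda t: 0.1 + 19.9 * t           # linear VP noise schedule
B = lambda t: 0.1 * t + 9.95 * t**2       # \int_0^t beta(s) ds
a2 = lambda t: np.exp(-B(t))              # alpha(t)^2
s2 = lambda t: a2(t) * v + 1 - a2(t)      # variance of the noised marginal

def ode(t, state):
    x, _ = state
    score = -x / s2(t)                        # exact Gaussian score
    drift = -0.5 * beta(t) * (x + score)      # probability-flow ODE drift
    div = -0.5 * beta(t) * (1 - 1 / s2(t))    # divergence of that drift
    return [drift, div]

x0 = 1.3                                  # query sample
sol = solve_ivp(ode, (0.0, 1.0), [x0, 0.0], rtol=1e-8, atol=1e-8)
xT, acc_div = sol.y[0, -1], sol.y[1, -1]
log_pT = -0.5 * (np.log(2 * np.pi) + xT**2)   # standard normal prior
print("ODE estimate :", log_pT + acc_div)     # log p0(x0) via change of variables
print("closed form  :", -0.5 * (np.log(2 * np.pi * v) + x0**2 / v))
```

The two printed values agree up to the usual small mismatch between the terminal marginal and the standard-normal prior.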
no code implementations • 2 Jun 2023 • Nikita Zeulin, Olga Galinina, Nageen Himayat, Sergey Andreev
In conventional federated hyperdimensional computing (HDC), training larger models usually results in higher predictive performance but also requires more computational, communication, and energy resources.
no code implementations • 17 Mar 2023 • Aleksei Ponomarenko-Timofeev, Olga Galinina, Ravikumar Balakrishnan, Nageen Himayat, Sergey Andreev, Yevgeni Koucheryavy
The proposed method utilizes efficient computations and model exchange in a network of heterogeneous nodes and allows personalization of the learning model in the presence of non-i.i.d. data.

no code implementations • 20 Sep 2022 • Anthony Thomas, Behnam Khaleghi, Gopi Krishna Jha, Sanjoy Dasgupta, Nageen Himayat, Ravi Iyer, Nilesh Jain, Tajana Rosing
Hyperdimensional computing (HDC) is a paradigm for data representation and learning originating in computational neuroscience.
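To make the paradigm concrete, here is a minimal, self-contained sketch (our own toy example, not the paper's system) of the classic HDC operations: symbols are random bipolar hypervectors, binding is elementwise multiplication, bundling is an elementwise majority vote, and queries use cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                  # hypervector dimensionality

def hv():                                   # random bipolar {-1, +1} vector
    return rng.choice([-1, 1], size=D)

def bundle(*vs):                            # majority vote (ties map to 0)
    return np.sign(np.sum(vs, axis=0))

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode a record by binding role vectors to filler vectors, then bundling.
role_color, role_shape = hv(), hv()
red, blue, square = hv(), hv(), hv()
record = bundle(role_color * red, role_shape * square)

# Unbinding (multiplying by the role again) recovers a noisy filler.
probe = record * role_color
print("red: ", round(cos(probe, red), 3))   # high similarity (~0.7)
print("blue:", round(cos(probe, blue), 3))  # near zero
```

Because the hypervectors are random and high-dimensional, unrelated vectors are nearly orthogonal, which is what makes the noisy recovery reliable.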
no code implementations • 26 Nov 2021 • Nikita Zeulin, Olga Galinina, Nageen Himayat, Sergey Andreev, Robert W. Heath Jr
Today, various machine learning (ML) applications offer continuous data processing and real-time data analytics at the edge of a wireless network.
no code implementations • ICLR 2022 • Ravikumar Balakrishnan, Tian Li, Tianyi Zhou, Nageen Himayat, Virginia Smith, Jeff Bilmes
In every communication round of federated learning, a random subset of clients communicate their model updates back to the server, which then aggregates them.
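For reference, the sketch below shows one such vanilla round as federated averaging, with updates weighted by local dataset size. This is an assumed baseline setup to illustrate the sentence above, not the paper's client-selection strategy.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, dim = 20, 5
global_w = np.zeros(dim)
data_sizes = rng.integers(50, 500, size=n_clients)

def local_update(w, k):
    # Placeholder for local SGD on client k's data: a noisy pull toward a
    # client-specific optimum, mimicking non-i.i.d. client distributions.
    target = rng.normal(loc=k % 3, scale=0.1, size=dim)
    return w + 0.5 * (target - w)

selected = rng.choice(n_clients, size=5, replace=False)   # random subset
updates = [local_update(global_w, k) for k in selected]
weights = data_sizes[selected] / data_sizes[selected].sum()
global_w = np.average(updates, axis=0, weights=weights)   # FedAvg step
print(global_w)
```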
no code implementations • 12 Nov 2020 • Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, Yair Yona, Shilpa Talwar, Salman Avestimehr, Nageen Himayat
To minimize the epoch deadline time at the MEC server, we provide a tractable approach for choosing the coding redundancy and the number of local data points each client processes during training, exploiting the statistical properties of both compute and communication delays.
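As a rough illustration of why those delay statistics matter (under an assumed shifted-exponential delay model, not the paper's actual optimization), the Monte Carlo sketch below compares waiting for every client against proceeding once the fastest k coded responses arrive.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 20, 100_000
shift, rate = 1.0, 1.0                     # shifted-exponential parameters

# Per-client round-trip delays: a deterministic shift plus exponential tail.
delays = shift + rng.exponential(1 / rate, size=(trials, n))
for k in (20, 15, 10):                     # k = n is the uncoded baseline
    # With coded redundancy, the k-th fastest response ends the epoch.
    kth = np.sort(delays, axis=1)[:, k - 1]
    print(f"k={k:2d}: mean epoch time {kth.mean():.2f}")
```

The mean epoch time drops sharply as k shrinks, because the straggling tail of the slowest clients no longer gates the round; the cost is the extra (coded) work that makes the fastest k responses sufficient.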
no code implementations • 7 Jul 2020 • Saurav Prakash, Sagar Dhakal, Mustafa Akdeniz, A. Salman Avestimehr, Nageen Himayat
Federated Learning (FL) is an exciting new paradigm that enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server.
no code implementations • 21 Feb 2020 • Sagar Dhakal, Saurav Prakash, Yair Yona, Shilpa Talwar, Nageen Himayat
Here, model parameters are computed locally by each client device and exchanged with a central server, which aggregates the local models into a global view without requiring clients to share their training data.