no code implementations • 22 Mar 2024 • Yassaman Ebrahimzadeh Maboud, Muhammad Adnan, Divya Mahajan, Prashant J. Nair
Training recommendation models poses significant challenges regarding resource utilization and performance.
no code implementations • 18 Mar 2024 • Minsu Kim, Jinwoo Hwang, Guseul Heo, Seiyeon Cho, Divya Mahajan, Jongse Park
Learned indexes use machine learning models to learn the mappings between keys and their corresponding positions in key-value indexes.
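A minimal sketch of the learned-index idea described above, under simplifying assumptions: the keys live in a sorted array, a simple least-squares linear model predicts each key's position, and lookups correct the prediction with a search bounded by the model's maximum error. This is illustrative only and not the system proposed in the paper.

```python
import bisect
import numpy as np

# Learned-index sketch: fit key -> position on a sorted key array, then
# correct each prediction with a search over an error-bounded window.
keys = np.sort(np.random.default_rng(0).integers(0, 1_000_000, size=10_000))
positions = np.arange(len(keys))

# Simple linear model mapping keys to their positions.
slope, intercept = np.polyfit(keys, positions, deg=1)

# The worst-case prediction error bounds the local search window.
predicted = np.clip(np.round(slope * keys + intercept).astype(int), 0, len(keys) - 1)
max_err = int(np.max(np.abs(predicted - positions)))

def lookup(key):
    guess = int(np.clip(round(slope * key + intercept), 0, len(keys) - 1))
    lo, hi = max(0, guess - max_err), min(len(keys), guess + max_err + 1)
    # Binary search only inside the error-bounded window around the guess.
    i = lo + bisect.bisect_left(keys[lo:hi].tolist(), key)
    return i if i < len(keys) and keys[i] == key else None
```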
no code implementations • 28 Aug 2023 • Muhammad Adnan, Yassaman Ebrahimzadeh Maboud, Divya Mahajan, Prashant J. Nair
However, deep learning-based recommendation models often face challenges due to evolving user behaviour and item features, leading to covariate shifts.
1 code implementation • NeurIPS 2023 • Irene Wang, Prashant J. Nair, Divya Mahajan
Building on this dropout technique, we develop an adaptive training framework, Federated Learning using Invariant Dropout (FLuID).
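A rough sketch of the invariant-dropout idea as suggested by the snippet: neurons whose weights changed the least between global rounds are treated as invariant and dropped from the sub-model shipped to a slow client. The drop fraction, selection rule, and function names below are assumptions for illustration, not the FLuID API.

```python
import numpy as np

def invariant_dropout_mask(prev_weights, curr_weights, drop_fraction=0.3):
    """Boolean mask over output neurons; False means the neuron is dropped."""
    # Per-neuron magnitude of change between consecutive global rounds.
    delta = np.linalg.norm(curr_weights - prev_weights, axis=1)
    n_drop = int(drop_fraction * len(delta))
    # Drop the neurons that changed the least (the most "invariant" ones).
    dropped = np.argsort(delta)[:n_drop]
    mask = np.ones(len(delta), dtype=bool)
    mask[dropped] = False
    return mask

# Example: a dense layer with 8 output neurons and 4 inputs.
rng = np.random.default_rng(0)
w_prev = rng.normal(size=(8, 4))
w_curr = w_prev + rng.normal(scale=0.01, size=(8, 4))
w_curr[2] += 0.5  # this neuron changed a lot and should be kept

mask = invariant_dropout_mask(w_prev, w_curr)
sub_model_weights = w_curr[mask]  # smaller layer sent to the straggler client
```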
no code implementations • 11 Apr 2022 • Muhammad Adnan, Yassaman Ebrahimzadeh Maboud, Divya Mahajan, Prashant J. Nair
Hotline increases the overall training throughput to 35.7 epochs/hour in comparison to 5.3 epochs/hour for the Intel-optimized DLRM baseline.
1 code implementation • 1 Mar 2021 • Muhammad Adnan, Yassaman Ebrahimzadeh Maboud, Divya Mahajan, Prashant J. Nair
This paper leverages this asymmetrical access pattern to offer a framework, called FAE, and proposes a hot-embedding aware data layout for training recommender models.
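A minimal sketch of a hot-embedding-aware layout in the spirit of the snippet: profile which embedding rows a sample of training batches touches, keep the most frequently accessed ("hot") rows in fast device memory, and leave the long tail in host memory. The 90% coverage target and the two-tier dictionary layout are illustrative choices, not FAE's actual mechanism.

```python
from collections import Counter
import numpy as np

def split_hot_cold(sample_batches, num_rows, dim, coverage=0.9, rng=None):
    rng = rng or np.random.default_rng(0)
    counts = Counter()
    for batch in sample_batches:            # batch: iterable of embedding row ids
        counts.update(batch)
    total = sum(counts.values())
    hot_ids, covered = [], 0
    for row_id, c in counts.most_common():
        hot_ids.append(row_id)
        covered += c
        if covered / total >= coverage:     # stop once hot rows cover 90% of accesses
            break
    table = rng.normal(size=(num_rows, dim)).astype(np.float32)
    hot = {i: table[i] for i in hot_ids}    # stand-in for device-resident rows
    cold = {i: table[i] for i in range(num_rows) if i not in hot}
    return hot, cold

# Toy usage: skewed (Zipf-like) access pattern over 1,000 embedding rows.
rng = np.random.default_rng(0)
batches = [rng.zipf(1.5, size=64) % 1000 for _ in range(100)]
hot, cold = split_hot_cold(batches, num_rows=1000, dim=16)
```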
1 code implementation • NeurIPS 2020 • Jakub Tarnawski, Amar Phanishayee, Nikhil R. Devanur, Divya Mahajan, Fanny Nina Paravecino
However, for such settings (large models and multiple heterogeneous devices), we require automated algorithms and toolchains that can partition the ML workload across devices.
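To make the partitioning problem concrete, here is a toy sketch that splits a linear chain of layers into contiguous stages, one per device, minimizing the bottleneck (maximum per-stage compute time). It deliberately ignores memory limits, communication cost, and device heterogeneity, all of which the paper's algorithms account for; it is not the paper's method.

```python
def partition_chain(layer_costs, num_devices):
    """Split a layer chain into contiguous stages minimizing the max stage cost."""
    n = len(layer_costs)
    prefix = [0] * (n + 1)
    for i, c in enumerate(layer_costs):
        prefix[i + 1] = prefix[i] + c
    INF = float("inf")
    # dp[k][i] = best bottleneck when the first i layers are placed on k devices.
    dp = [[INF] * (n + 1) for _ in range(num_devices + 1)]
    cut = [[0] * (n + 1) for _ in range(num_devices + 1)]
    dp[0][0] = 0.0
    for k in range(1, num_devices + 1):
        for i in range(1, n + 1):
            for j in range(i):              # last stage covers layers [j, i)
                stage = prefix[i] - prefix[j]
                cand = max(dp[k - 1][j], stage)
                if cand < dp[k][i]:
                    dp[k][i], cut[k][i] = cand, j
    # Recover the stage boundaries by walking the cut table backwards.
    stages, i = [], n
    for k in range(num_devices, 0, -1):
        j = cut[k][i]
        stages.append(list(range(j, i)))
        i = j
    return list(reversed(stages)), dp[num_devices][n]

# Toy usage: 8 layers with measured per-layer times, split across 3 devices.
stages, bottleneck = partition_chain([4, 2, 7, 1, 3, 5, 2, 6], num_devices=3)
```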
no code implementations • 8 Jan 2018 • Divya Mahajan, Joon Kyung Kim, Jacob Sacks, Adel Ardalan, Arun Kumar, Hadi Esmaeilzadeh
The data revolution is fueled by advances in machine learning, databases, and hardware design.