no code implementations • 1 Dec 2023 • Junchen Zhao, Yurun Song, Simeng Liu, Ian G. Harris, Sangeetha Abdu Jyothi
Second, an optimized data transmission mechanism ensures efficient and structured data flow between model segments while also maintaining the integrity of the original model structure.
no code implementations • 27 Feb 2023 • Sagar Patel, Sangeetha Abdu Jyothi, Nina Narodytska
We present CrystalBox, a novel, model-agnostic, post-hoc explainability framework for Deep Reinforcement Learning (DRL) controllers in the large family of input-driven environments, which includes computer systems.
no code implementations • 24 Feb 2023 • Sagar Patel, Junyang Zhang, Sangeetha Abdu Jyothi, Nina Narodytska
First, we identify the critical features that determine the behavior of the traces.
no code implementations • 29 Apr 2020 • Sayed Hadi Hashemi, Sangeetha Abdu Jyothi, Brighten Godfrey, Roy Campbell
The method of choice for parameter aggregation in Deep Neural Network (DNN) training, a network-intensive task, is shifting from the Parameter Server model to decentralized aggregation schemes (AllReduce), motivated by theoretical guarantees of better performance.
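To make the AllReduce pattern mentioned in this abstract concrete, here is a minimal in-process sketch of ring AllReduce, the standard decentralized aggregation scheme. All names and the chunk layout are illustrative assumptions, not code from the paper:

```python
def ring_allreduce(vectors):
    """Sum equal-length gradient vectors across n simulated workers.

    Each worker's vector is split into n chunks; a reduce-scatter pass
    accumulates one fully reduced chunk per worker, then an all-gather
    pass circulates the reduced chunks so every worker ends with the
    complete sum. Each of the 2*(n-1) steps moves only 1/n of the data
    per worker, which is why AllReduce scales better than funneling
    every gradient through a central parameter server.
    """
    n = len(vectors)
    length = len(vectors[0])
    bounds = [i * length // n for i in range(n + 1)]
    chunks = [[list(v[bounds[i]:bounds[i + 1]]) for i in range(n)]
              for v in vectors]

    # Reduce-scatter: after n-1 steps, worker r holds the fully
    # summed chunk (r + 1) % n.
    for step in range(n - 1):
        payloads = [list(chunks[r][(r - step) % n]) for r in range(n)]
        for r in range(n):
            c = (r - step) % n
            dst = (r + 1) % n
            chunks[dst][c] = [a + b for a, b in
                              zip(chunks[dst][c], payloads[r])]

    # All-gather: circulate the reduced chunks until every worker
    # holds the complete summed vector.
    for step in range(n - 1):
        payloads = [list(chunks[r][(r + 1 - step) % n]) for r in range(n)]
        for r in range(n):
            c = (r + 1 - step) % n
            chunks[(r + 1) % n][c] = payloads[r]

    # Reassemble each worker's chunks into a full vector.
    return [[x for chunk in worker for x in chunk] for worker in chunks]
```

For three workers holding [1, 2, 3, 4], [10, 20, 30, 40], and [100, 200, 300, 400], every worker finishes with the elementwise sum [111, 222, 333, 444].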
1 code implementation • 8 Mar 2018 • Sayed Hadi Hashemi, Sangeetha Abdu Jyothi, Roy H. Campbell
We develop a system, TicTac, that improves iteration time in distributed deep learning with Parameter Servers by addressing this issue while guaranteeing near-optimal overlap of communication and computation.
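The benefit of overlapping communication with computation can be illustrated with a toy cost model: a layer's gradient transfer may begin as soon as that layer's backprop finishes, concurrently with later layers' compute. The timings below are made up for illustration, and this is a simplified pipeline model, not TicTac's actual scheduler:

```python
def iteration_time(compute_ms, comm_ms, overlap):
    """Makespan of one training iteration: per-layer backprop followed
    by a per-layer gradient transfer over a single shared link.

    With overlap=False, all compute runs first, then all communication.
    With overlap=True, each layer's transfer starts as soon as that
    layer's backprop is done, while later layers keep computing.
    """
    if not overlap:
        return sum(compute_ms) + sum(comm_ms)
    done = 0.0       # time the current layer's gradient becomes ready
    link_free = 0.0  # time the shared network link becomes free
    for c, m in zip(compute_ms, comm_ms):
        done += c                       # backprop for this layer finishes
        start = max(done, link_free)    # transfer waits for link + gradient
        link_free = start + m           # link busy until transfer completes
    return max(link_free, done)
```

With three layers taking 4 ms of compute and 3 ms of transfer each, the serialized iteration takes 21 ms while the overlapped one takes 15 ms, since transfers hide behind the remaining backprop work.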