no code implementations • 13 Mar 2023 • Jingtao Li, Adnan Siraj Rakin, Xing Chen, Li Yang, Zhezhi He, Deliang Fan, Chaitali Chakrabarti
We show that, in practical settings, the proposed ME attacks work exceptionally well against SFL.
no code implementations • 17 Nov 2022 • Shunyao Wu, Chaitali Chakrabarti, Ahmed Alkhateeb
Given this future blockage prediction capability, the paper also shows that the developed solutions can achieve an order-of-magnitude saving in network latency, further highlighting their potential for wireless networks.
1 code implementation • 18 Aug 2022 • Jingtao Li, Jian Zhou, Yan Xiong, Xing Chen, Chaitali Chakrabarti
Sampling is an essential part of raw point cloud data processing such as in the popular PointNet++ scheme.
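For context, the sampling stage in PointNet++ is farthest point sampling (FPS), which greedily picks points that are mutually far apart. A minimal NumPy sketch of greedy FPS (the fixed seed index and function name are illustrative choices, not the paper's):

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedily select k mutually distant points; returns their indices.

    points: (N, 3) array of xyz coordinates.
    """
    n = points.shape[0]
    selected = np.zeros(k, dtype=np.int64)
    # Squared distance from every point to its nearest already-selected point.
    dist = np.full(n, np.inf)
    selected[0] = 0  # arbitrary seed point
    for i in range(1, k):
        # Refresh nearest-selected distances using the last chosen point.
        diff = points - points[selected[i - 1]]
        dist = np.minimum(dist, np.einsum('ij,ij->i', diff, diff))
        # Pick the point farthest from the current sample set.
        selected[i] = int(np.argmax(dist))
    return selected

# Example: downsample 1024 random points to 64.
pts = np.random.rand(1024, 3)
sampled = pts[farthest_point_sampling(pts, 64)]
```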
1 code implementation • CVPR 2022 • Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti
While such a scheme helps reduce the computational load at the client end, it exposes the raw data to reconstruction by the server from the intermediate activations.
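To illustrate the threat (this is not the paper's specific attack), a server with white-box access to the client model can invert an observed activation by optimizing a candidate input until its activation matches. A minimal PyTorch sketch with a toy client network:

```python
import torch
import torch.nn as nn

# Hypothetical client-side model slice whose output the server observes.
client = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
for p in client.parameters():
    p.requires_grad_(False)  # attacker only optimizes the input guess

x_true = torch.rand(1, 3, 32, 32)   # the client's private input
with torch.no_grad():
    z_obs = client(x_true)          # activation sent to the server

# Server-side inversion: find x_hat such that client(x_hat) matches z_obs.
x_hat = torch.rand_like(x_true, requires_grad=True)
opt = torch.optim.Adam([x_hat], lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(client(x_hat), z_obs)
    loss.backward()
    opt.step()
print(f"final activation-matching loss: {loss.item():.6f}")
```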
no code implementations • 18 Nov 2021 • Shunyao Wu, Chaitali Chakrabarti, Ahmed Alkhateeb
If used for proactive hand-off, the proposed solutions can potentially provide an order-of-magnitude saving in network latency, which highlights a promising direction for addressing the blockage challenges in mmWave/sub-THz networks.
no code implementations • 16 Nov 2021 • Shunyao Wu, Muhammad Alrabeiah, Chaitali Chakrabarti, Ahmed Alkhateeb
In this paper, we propose a novel solution that relies only on in-band mmWave wireless measurements to proactively predict future dynamic line-of-sight (LOS) link blockages.
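As an illustration of sequence-based blockage prediction (the paper's features and architecture may differ), a toy recurrent classifier that maps a window of past received-power samples to a blockage probability:

```python
import torch
import torch.nn as nn

class BlockagePredictor(nn.Module):
    """Toy model: past received-power sequence -> P(LOS blockage soon)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, power_seq):               # (batch, T, 1)
        _, h = self.rnn(power_seq)              # final hidden state
        return torch.sigmoid(self.head(h[-1]))  # blockage probability

model = BlockagePredictor()
seq = torch.randn(8, 64, 1)    # 8 sequences of 64 in-band power samples
prob = model(seq)              # (8, 1) predicted blockage probabilities
```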
no code implementations • 14 Aug 2021 • Gokul Krishnan, Sumit K. Mandal, Manvitha Pannala, Chaitali Chakrabarti, Jae-sun Seo, Umit Y. Ogras, Yu Cao
In-memory computing (IMC) on a monolithic chip for deep learning faces severe challenges in area, yield, and on-chip interconnect cost due to ever-increasing model sizes.
no code implementations • 20 Jul 2021 • Xing Chen, Jingtao Li, Chaitali Chakrabarti
An added benefit of the proposed communication reduction method is that the computation at the client side is also reduced, since the client model is updated less frequently.
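A minimal sketch of the idea, under the assumption that client-side updates are simply skipped on most batches (the paper's actual update-selection criterion may differ); the client and server slices here are toy one-layer models:

```python
import torch
import torch.nn as nn

client = nn.Linear(32, 16)     # client-side model slice (toy)
server = nn.Linear(16, 10)     # server-side model slice (toy)
opt_c = torch.optim.SGD(client.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
UPDATE_EVERY = 4               # client updates only every 4th batch (assumed)

for step in range(100):
    x = torch.randn(64, 32)
    y = torch.randint(0, 10, (64,))
    update_client = (step % UPDATE_EVERY == 0)
    act = client(x)
    if not update_client:
        # Detaching means no gradient is computed for (or sent back to)
        # the client: saves both communication and client compute.
        act = act.detach()
    loss = loss_fn(server(act), y)
    opt_c.zero_grad()
    opt_s.zero_grad()
    loss.backward()            # reaches the client only when not detached
    opt_s.step()
    if update_client:
        opt_c.step()
```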
no code implementations • 6 Jul 2021 • Gokul Krishnan, Sumit K. Mandal, Chaitali Chakrabarti, Jae-sun Seo, Umit Y. Ogras, Yu Cao
In this technique, we use analytical models of the NoC to evaluate the end-to-end communication latency of any given DNN.
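For intuition only (the constants, XY routing, and serialization model below are assumptions, not the paper's calibrated model), a toy analytical latency estimate for a mesh NoC:

```python
# Toy analytical latency model for an XY-routed mesh NoC.
ROUTER_CYCLES = 2   # cycles a flit spends per router (assumed)
LINK_CYCLES = 1     # cycles per link traversal (assumed)
FLIT_BITS = 32      # flit width in bits (assumed)

def hop_count(src, dst):
    """XY routing on a mesh: hops = Manhattan distance between tiles."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def transfer_latency(src, dst, payload_bits):
    """Head-flit latency plus serialization of the trailing flits."""
    head = hop_count(src, dst) * (ROUTER_CYCLES + LINK_CYCLES)
    flits = -(-payload_bits // FLIT_BITS)   # ceiling division
    return head + (flits - 1)               # one cycle per trailing flit

# E.g. end-to-end latency of moving 1 MB of layer activations between the
# tiles mapped to two consecutive DNN layers, at (0,0) and (3,2):
print(transfer_latency((0, 0), (3, 2), 8 * 2**20), "cycles")
```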
no code implementations • 22 Mar 2021 • Adnan Siraj Rakin, Li Yang, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Yu Cao, Jae-sun Seo, Deliang Fan
Apart from recovering the inference accuracy, our RA-BNN also shows significantly higher resistance to BFA after growing.
1 code implementation • 20 Jan 2021 • Jingtao Li, Adnan Siraj Rakin, Zhezhi He, Deliang Fan, Chaitali Chakrabarti
In this work, we propose RADAR, a Run-time adversarial weight Attack Detection and Accuracy Recovery scheme to protect DNN weights against PBFA.
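A minimal sketch of checksum-style run-time detection and recovery, using a simplified per-group XOR signature rather than RADAR's actual signature derivation (group size and the zero-out recovery are also simplifications):

```python
import numpy as np

GROUP = 64  # weights per checksum group (illustrative)

def signatures(weights_q):
    """Per-group XOR checksum of quantized integer weights; any single
    bit flip inside a group flips the same bit of its signature."""
    return np.bitwise_xor.reduce(weights_q.reshape(-1, GROUP), axis=1)

# Golden signatures computed offline and stored in protected memory.
w = np.random.randint(-128, 128, size=4096, dtype=np.int16)
golden = signatures(w)

# Attacker flips one bit in a stored weight.
w_attacked = w.copy()
w_attacked[100] ^= 1 << 3

# Run-time check: recompute signatures and compare against the golden copy.
bad = np.nonzero(signatures(w_attacked) != golden)[0]
print("corrupted groups:", bad)

# Recovery: zero out the weights in flagged groups (a view into w_attacked),
# which restores much of the accuracy lost to the flipped bits.
w_attacked.reshape(-1, GROUP)[bad] = 0
```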
no code implementations • 18 Jan 2021 • Shunyao Wu, Muhammad Alrabeiah, Andrew Hredzak, Chaitali Chakrabarti, Ahmed Alkhateeb
To evaluate our proposed approach, we build a mmWave communication setup with a moving blockage and collect a dataset of received power sequences.
2 code implementations • 24 Jul 2020 • Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Deliang Fan
Prior work on BFA focuses on un-targeted attacks that can force all inputs into a random output class by flipping a very small number of weight bits stored in computer memory.
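For intuition, one simplified greedy step of a BFA-style search on a toy int8-quantized layer; the actual attack performs a progressive intra- and inter-layer bit search, so this is only a sketch of the core idea:

```python
import torch
import torch.nn as nn

# Toy int8-quantized linear layer with a shared scale factor.
w_q = torch.randint(-128, 128, (10, 32), dtype=torch.int8)
scale = 0.05
x = torch.randn(4, 32)
y = torch.randint(0, 10, (4,))

# Rank weights by the loss gradient w.r.t. the dequantized weights, then
# flip the MSB of the most sensitive weight in the stored int8 form.
w = (w_q.float() * scale).requires_grad_(True)
loss = nn.functional.cross_entropy(x @ w.t(), y)
loss.backward()
r, c = divmod(int(w.grad.abs().argmax()), w_q.shape[1])
w_flipped = w_q.clone()
w_flipped[r, c] ^= -128   # XOR with 0b1000_0000 flips the int8 sign bit

delta = (w_flipped[r, c].float() - w_q[r, c].float()) * scale
print(f"flipped weight ({r},{c}); dequantized value changed by {delta:.2f}")
```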
1 code implementation • 27 Jan 2020 • Richard Uhrie, Chaitali Chakrabarti, John Brunhaver
Modern program runtime is dominated by segments of repeating code called kernels.
Distributed, Parallel, and Cluster Computing • Programming Languages
no code implementations • 19 Apr 2018 • Shihui Yin, Gaurav Srivastava, Shreyas K. Venkataramanaiah, Chaitali Chakrabarti, Visar Berisha, Jae-sun Seo
Deep learning algorithms have shown tremendous success in many recognition tasks; however, they typically rely on a deep neural network (DNN) structure with a large number of parameters, which makes them challenging to implement on power/area-constrained embedded platforms.
1 code implementation • 19 Sep 2017 • Shihui Yin, Shreyas K. Venkataramanaiah, Gregory K. Chen, Ram Krishnamurthy, Yu Cao, Chaitali Chakrabarti, Jae-sun Seo
We present a new backpropagation-based training algorithm for discrete-time spiking neural networks (SNNs).
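A common way to backpropagate through the non-differentiable spike function is a surrogate gradient; a minimal PyTorch sketch of training a discrete-time leaky integrate-and-fire layer this way (not necessarily the paper's exact formulation; the rectangular surrogate, decay, and threshold are illustrative):

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate gradient in backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()              # fire when potential exceeds threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Rectangular surrogate: pass gradients only near the threshold.
        return grad_out * (v.abs() < 0.5).float()

spike = SpikeFn.apply

# Unroll one leaky integrate-and-fire layer over T discrete time steps.
T, decay = 10, 0.9
w = torch.randn(16, 8, requires_grad=True)
x = (torch.rand(T, 4, 8) > 0.8).float()    # random input spike trains
v = torch.zeros(4, 16)
out = 0
for t in range(T):
    v = decay * v + x[t] @ w.t()           # leaky integration
    s = spike(v - 1.0)                     # fire against a threshold of 1.0
    v = v * (1 - s)                        # reset membrane after a spike
    out = out + s
loss = out.sum()
loss.backward()                            # gradients flow via the surrogate
```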