Search Results for author: Chaitali Chakrabarti

Found 16 papers, 6 papers with code

Proactively Predicting Dynamic 6G Link Blockages Using LiDAR and In-Band Signatures

no code implementations17 Nov 2022 Shunyao Wu, Chaitali Chakrabarti, Ahmed Alkhateeb

Given this future blockage prediction capability, the paper also shows that the proposed solutions can achieve an order-of-magnitude saving in network latency, which further highlights their potential for wireless networks.

Denoising

An Adjustable Farthest Point Sampling Method for Approximately-sorted Point Cloud Data

1 code implementation18 Aug 2022 Jingtao Li, Jian Zhou, Yan Xiong, Xing Chen, Chaitali Chakrabarti

Sampling is an essential part of raw point cloud data processing such as in the popular PointNet++ scheme.
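The baseline the paper adjusts is classical farthest point sampling; a minimal NumPy version of the standard greedy algorithm (not the adjustable variant proposed in the paper) looks like this:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedily select k points that spread out over the cloud.

    points: (N, 3) array of xyz coordinates.
    Returns the indices of the k sampled points.
    """
    n = points.shape[0]
    selected = np.zeros(k, dtype=np.int64)
    # Squared distance from every point to its nearest selected point.
    min_dist = np.full(n, np.inf)
    selected[0] = 0  # start from an arbitrary seed point
    for i in range(1, k):
        # Fold in distances to the most recently selected point.
        diff = points - points[selected[i - 1]]
        dist = np.einsum('ij,ij->i', diff, diff)
        min_dist = np.minimum(min_dist, dist)
        # Next sample: the point farthest from the current sample set.
        selected[i] = int(np.argmax(min_dist))
    return selected
```

Each iteration scans all N points, so plain FPS costs O(Nk); exploiting approximately-sorted input, as the paper's title suggests, is one way to cut that cost.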


ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning

1 code implementation CVPR 2022 Jingtao Li, Adnan Siraj Rakin, Xing Chen, Zhezhi He, Deliang Fan, Chaitali Chakrabarti

While such a scheme helps reduce the computational load at the client end, it opens the door to reconstruction of the raw data from the intermediate activations by the server.
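The split setup that the attack targets can be sketched in a few lines; the layer sizes and weights below are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side model: the first layer stays on the device.
W_client = rng.normal(size=(784, 128))

# Server-side model: the remaining layers.
W_server = rng.normal(size=(128, 10))

def client_forward(x):
    # Only this intermediate activation ("smashed data") leaves
    # the client -- it is what a curious server could try to invert.
    return np.maximum(x @ W_client, 0.0)

def server_forward(activation):
    return activation @ W_server

x = rng.normal(size=(1, 784))   # raw data never leaves the client
smashed = client_forward(x)     # sent over the network to the server
logits = server_forward(smashed)
```

A model inversion attack trains a decoder that maps `smashed` back toward `x`; ResSFL's goal is to make that mapping hard to learn.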

Federated Learning

LiDAR-Aided Mobile Blockage Prediction in Real-World Millimeter Wave Systems

no code implementations18 Nov 2021 Shunyao Wu, Chaitali Chakrabarti, Ahmed Alkhateeb

If used for proactive hand-off, the proposed solutions can potentially provide an order of magnitude saving in the network latency, which highlights a promising direction for addressing the blockage challenges in mmWave/sub-THz networks.

Denoising

Blockage Prediction Using Wireless Signatures: Deep Learning Enables Real-World Demonstration

no code implementations16 Nov 2021 Shunyao Wu, Muhammad Alrabeiah, Chaitali Chakrabarti, Ahmed Alkhateeb

In this paper, we propose a novel solution that relies only on in-band mmWave wireless measurements to proactively predict future dynamic line-of-sight (LOS) link blockages.

SIAM: Chiplet-based Scalable In-Memory Acceleration with Mesh for Deep Neural Networks

no code implementations14 Aug 2021 Gokul Krishnan, Sumit K. Mandal, Manvitha Pannala, Chaitali Chakrabarti, Jae-sun Seo, Umit Y. Ogras, Yu Cao

In-memory computing (IMC) on a monolithic chip for deep learning faces severe challenges in area, yield, and on-chip interconnection cost due to ever-increasing model sizes.

Benchmarking

Communication and Computation Reduction for Split Learning using Asynchronous Training

no code implementations20 Jul 2021 Xing Chen, Jingtao Li, Chaitali Chakrabarti

An added benefit of the proposed communication reduction method is that client-side computation is also reduced, since the number of client model updates is smaller.

Privacy Preserving

Impact of On-Chip Interconnect on In-Memory Acceleration of Deep Neural Networks

no code implementations6 Jul 2021 Gokul Krishnan, Sumit K. Mandal, Chaitali Chakrabarti, Jae-sun Seo, Umit Y. Ogras, Yu Cao

In this technique, we use analytical models of the NoC to evaluate the end-to-end communication latency of any given DNN.

RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery

1 code implementation20 Jan 2021 Jingtao Li, Adnan Siraj Rakin, Zhezhi He, Deliang Fan, Chaitali Chakrabarti

In this work, we propose RADAR, a Run-time adversarial weight Attack Detection and Accuracy Recovery scheme to protect DNN weights against PBFA.

Deep Learning for Moving Blockage Prediction using Real Millimeter Wave Measurements

no code implementations18 Jan 2021 Shunyao Wu, Muhammad Alrabeiah, Andrew Hredzak, Chaitali Chakrabarti, Ahmed Alkhateeb

To evaluate our proposed approach, we build a mmWave communication setup with a moving blockage and collect a dataset of received power sequences.

BIG-bench Machine Learning

T-BFA: Targeted Bit-Flip Adversarial Weight Attack

2 code implementations24 Jul 2020 Adnan Siraj Rakin, Zhezhi He, Jingtao Li, Fan Yao, Chaitali Chakrabarti, Deliang Fan

Prior work on BFA focuses on un-targeted attacks that can force all inputs into a random output class by flipping a very small number of weight bits stored in computer memory.
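Why so few flips suffice is easy to see on an 8-bit two's-complement quantized weight; the toy helper below (not the paper's attack, which searches for the most damaging bits) shows a single sign-bit flip swinging a weight across its entire range:

```python
def flip_bit(weight_int8, bit):
    """Flip one bit of an 8-bit two's-complement weight and
    return the resulting signed value."""
    flipped = (weight_int8 & 0xFF) ^ (1 << bit)
    # Reinterpret the 8-bit pattern as a signed integer.
    return flipped - 256 if flipped >= 128 else flipped

# Flipping the most significant (sign) bit of a small positive
# weight turns it into a large negative one: 3 -> -125.
print(flip_bit(3, 7))
```

A single such flip in a critical filter can dominate the layer's output, which is why targeted variants like T-BFA need only a handful of bits.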

Adversarial Attack
Image Classification

Automated Parallel Kernel Extraction from Dynamic Application Traces

1 code implementation27 Jan 2020 Richard Uhrie, Chaitali Chakrabarti, John Brunhaver

Modern program runtime is dominated by segments of repeating code called kernels.
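As a toy illustration of spotting such repeats (my own sketch, not the paper's extraction algorithm, which works on dynamic application traces), one can count repeating fixed-length windows in an instruction trace:

```python
from collections import Counter

def frequent_segments(trace, length):
    """Count how often each fixed-length instruction window
    occurs in a trace; heavily repeated windows are kernel
    candidates in this simplified view."""
    windows = Counter(
        tuple(trace[i:i + length]) for i in range(len(trace) - length + 1)
    )
    return windows.most_common()

# A toy trace in which the segment ('load', 'mul', 'add') repeats.
trace = ['load', 'mul', 'add'] * 3 + ['store']
top_segment, count = frequent_segments(trace, 3)[0]
```

Real kernel extraction must also handle variable-length bodies, nesting, and control flow, which is where the paper's contribution lies.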

Distributed, Parallel, and Cluster Computing
Programming Languages

Minimizing Area and Energy of Deep Learning Hardware Design Using Collective Low Precision and Structured Compression

no code implementations19 Apr 2018 Shihui Yin, Gaurav Srivastava, Shreyas K. Venkataramanaiah, Chaitali Chakrabarti, Visar Berisha, Jae-sun Seo

Deep learning algorithms have shown tremendous success in many recognition tasks; however, these algorithms typically include a deep neural network (DNN) structure and a large number of parameters, which makes it challenging to implement them on power/area-constrained embedded platforms.

Binarization
