no code implementations • 8 Jun 2023 • Su Wang, Rajeev Sahay, Adam Piaseczny, Christopher G. Brinton
In this work, we first reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process despite not observing data transmissions.
no code implementations • 21 Jan 2023 • Su Wang, Rajeev Sahay, Christopher G. Brinton
In this work, we reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process despite not observing data transmissions.
1 code implementation • 28 Nov 2022 • Rajeev Sahay, Minjun Zhang, David J. Love, Christopher G. Brinton
Recent work has advocated for the use of deep learning to perform power allocation in the downlink of massive MIMO (maMIMO) networks.
no code implementations • 15 Jun 2022 • Rajeev Sahay, Swaroop Appadwedula, David J. Love, Christopher G. Brinton
Many communications and sensing applications hinge on the detection of a signal in a noisy, interference-heavy environment.
no code implementations • 8 Apr 2021 • Rajeev Sahay, Christopher G. Brinton, David J. Love
Furthermore, adversarial interference is transferable in black box environments, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification model.
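The transferability claim can be illustrated with a minimal NumPy sketch (not the paper's implementation): a perturbation is crafted from the input-gradient of one linear "victim" model, then applied unchanged to a second model trained independently on different data. All models, sizes, and the eps value here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy separable dataset: label is the sign of a hidden linear score.
w_true = rng.normal(size=20)
X = rng.normal(size=(500, 20))
y = (X @ w_true > 0).astype(float)

def train_logistic(X, y, steps=200, lr=0.5):
    """Plain gradient-descent logistic regression (no intercept)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Two models trained on disjoint halves of the data, so their
# weights differ -- model B is the "black box" the attacker never sees.
idx = rng.permutation(len(X))
w_a = train_logistic(X[idx[:250]], y[idx[:250]])
w_b = train_logistic(X[idx[250:]], y[idx[250:]])

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0).astype(float) == y)

# FGSM-style perturbation crafted ONLY from model A's input-gradient:
# for logistic loss, d(loss)/dx = (p - y) * w.
eps = 0.5
p = 1 / (1 + np.exp(-X @ w_a))
grad_x = np.outer(p - y, w_a)
X_adv = X + eps * np.sign(grad_x)

acc_a_clean, acc_a_adv = accuracy(w_a, X, y), accuracy(w_a, X_adv, y)
acc_b_clean, acc_b_adv = accuracy(w_b, X, y), accuracy(w_b, X_adv, y)
```

Because both models approximate the same decision boundary, the perturbation aimed at A also degrades B, which is the essence of a black-box transfer attack.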
no code implementations • 3 Apr 2021 • Rehana Mahfuz, Rajeev Sahay, Aly El Gamal
To reduce the training time of the defense for a small trade-off in performance, we propose the hidden layer defense, which involves feeding the output of the encoder of a denoising autoencoder into the network.
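A structural sketch of the wiring described above, assuming hypothetical layer sizes and untrained random weights (the paper's actual autoencoder is trained to denoise adversarial inputs): the full defense reconstructs the input before classifying, while the hidden layer defense classifies the encoder's latent code directly and skips the decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: 128-dim input, 32-dim latent code.
d_in, d_hid = 128, 32
W_enc = rng.normal(scale=0.1, size=(d_in, d_hid))
W_dec = rng.normal(scale=0.1, size=(d_hid, d_in))

def encode(x):
    return np.maximum(0, x @ W_enc)   # ReLU encoder of the denoising AE

def decode(h):
    return h @ W_dec                  # linear decoder back to input space

def full_defense(x, classifier):
    """Denoise (encode + decode), then classify in input space."""
    return classifier(decode(encode(x)))

def hidden_layer_defense(x, latent_classifier):
    """Feed the 32-dim code straight into a classifier trained on
    codes -- the decoder, and its training cost, is skipped."""
    return latent_classifier(encode(x))
```

The trade-off is that the downstream network must be (re)trained on 32-dim codes rather than 128-dim inputs, which is what buys the reduced training time.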
no code implementations • 2 Nov 2020 • Rajeev Sahay, Christopher G. Brinton, David J. Love
Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectra by automatically predicting the modulation constellation of wireless RF signals.
no code implementations • 22 Feb 2020 • Kirthi Shankar Sivamani, Rajeev Sahay, Aly El Gamal
In this letter, we propose a novel method to detect adversarial inputs by augmenting the main classification network with multiple binary detectors (observer networks). The observers take inputs from the hidden layers of the original network (convolutional kernel outputs) and classify the input as clean or adversarial.
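A minimal NumPy sketch of the observer-network arrangement, with assumed layer sizes and untrained random weights standing in for the trained networks in the letter: each observer is a binary detector attached to one hidden-layer activation, and the input is flagged if any observer's score crosses a threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical main network: two hidden layers, 10-class output.
sizes = [64, 48, 32, 10]
Ws = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes, sizes[1:])]

def forward(x):
    """Return class scores plus every hidden-layer activation."""
    hidden, h = [], x
    for W in Ws[:-1]:
        h = np.maximum(0, h @ W)
        hidden.append(h)
    return h @ Ws[-1], hidden

# One binary "observer" per hidden layer (random linear detectors
# here; the paper trains them on clean vs. adversarial activations).
observers = [rng.normal(scale=0.1, size=n) for n in sizes[1:-1]]

def is_adversarial(x, threshold=0.5):
    _, hidden = forward(x)
    votes = [1 / (1 + np.exp(-(h @ w))) for h, w in zip(hidden, observers)]
    # Flag the input if any observer scores it above the threshold.
    return max(votes) > threshold
```

The main classifier is left untouched, so detection adds no accuracy penalty on clean inputs; only the lightweight observers are trained for the defense.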
no code implementations • 26 Jan 2020 • Rehana Mahfuz, Rajeev Sahay, Aly El Gamal
Gradient-based adversarial attacks on neural networks can be crafted in a variety of ways by varying either how the attack algorithm relies on the gradient, the network architecture used for crafting the attack, or both.
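One axis of variation mentioned above, how the attack uses the gradient, can be shown with a small hedged sketch (illustrative weights and step size, not any specific attack from the paper): the same input-gradient of a logistic loss drives either a sign-based FGSM step or a normalized raw-gradient step.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=10)   # hypothetical trained linear model
x = rng.normal(size=10)
y = 1.0                   # true label of x

def loss_grad_x(w, x, y):
    """Gradient of the logistic loss with respect to the input x."""
    p = 1 / (1 + np.exp(-(w @ x)))
    return (p - y) * w

g = loss_grad_x(w, x, y)
eps = 0.1

# Two ways an attack can use the same gradient:
x_fgsm = x + eps * np.sign(g)               # sign only (FGSM-style)
x_grad = x + eps * g / np.linalg.norm(g)    # normalized raw gradient
```

The sign step bounds the perturbation in the L-infinity norm, while the normalized step bounds it in the L2 norm; swapping in a different (surrogate) architecture for `w` covers the other axis of variation.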
no code implementations • 13 Jun 2019 • Rajeev Sahay, Rehana Mahfuz, Aly El Gamal
The reliance on deep learning algorithms has grown significantly in recent years.
1 code implementation • 7 Dec 2018 • Rajeev Sahay, Rehana Mahfuz, Aly El Gamal
Machine learning models are vulnerable to adversarial attacks that rely on perturbing the input data.