Search Results for author: Rajeev Sahay

Found 11 papers, 2 papers with code

Mitigating Evasion Attacks in Federated Learning-Based Signal Classifiers

no code implementations • 8 Jun 2023 • Su Wang, Rajeev Sahay, Adam Piaseczny, Christopher G. Brinton

In this work, we first reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process despite not observing data transmissions.

Federated Learning • Model Poisoning
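Neither of the two federated-learning entries above ships code. As a rough illustration of the threat model they study, here is a minimal, hypothetical FedAvg round in which one malicious client scales its update to poison the global model; the tiny classifier, data shapes, and the simple update-scaling attack are illustrative assumptions, not the authors' setup.

```python
# Hypothetical sketch of model poisoning in federated averaging (FedAvg).
# The toy classifier, random data, and update-boosting attack are illustrative
# assumptions only, not the configuration used in the papers above.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, labels, lr=0.01, malicious=False, boost=10.0):
    """One local training step; a malicious client scales (boosts) its update."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nn.functional.cross_entropy(model(data), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Update = (local weights - global weights), optionally boosted by the attacker.
    update = {}
    for (name, w_local), w_global in zip(model.state_dict().items(),
                                         global_model.state_dict().values()):
        delta = w_local - w_global
        update[name] = boost * delta if malicious else delta
    return update

def fedavg_aggregate(global_model, updates):
    """Average the client updates and apply them to the global model."""
    new_state = copy.deepcopy(global_model.state_dict())
    for name in new_state:
        new_state[name] += torch.stack([u[name] for u in updates]).mean(dim=0)
    global_model.load_state_dict(new_state)

if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    clients = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(5)]
    # Client 0 is malicious: its boosted update dominates the unweighted average.
    updates = [local_update(global_model, x, y, malicious=(i == 0))
               for i, (x, y) in enumerate(clients)]
    fedavg_aggregate(global_model, updates)
```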

How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers?

no code implementations • 21 Jan 2023 • Su Wang, Rajeev Sahay, Christopher G. Brinton

In this work, we reveal the susceptibility of FL-based signal classifiers to model poisoning attacks, which compromise the training process despite not observing data transmissions.

Federated Learning • Model Poisoning

Defending Adversarial Attacks on Deep Learning Based Power Allocation in Massive MIMO Using Denoising Autoencoders

1 code implementation • 28 Nov 2022 • Rajeev Sahay, Minjun Zhang, David J. Love, Christopher G. Brinton

Recent work has advocated for the use of deep learning to perform power allocation in the downlink of massive MIMO (maMIMO) networks.

Denoising • Regression
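This entry has a released implementation; the following is not that code, only a generic sketch of the idea named in the title: a denoising autoencoder (DAE) reconstructs a possibly perturbed input before it reaches a downstream regression network. Dimensions and the abbreviated training loop are assumptions for illustration; see the paper's code for the actual maMIMO power-allocation setup.

```python
# Minimal, hypothetical sketch: a denoising autoencoder is trained to map perturbed
# inputs back to clean ones and is then prepended to a regression network.
import torch
import torch.nn as nn

class DAE(nn.Module):
    def __init__(self, dim=64, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

dae = DAE()
regressor = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

# Train the DAE to reconstruct clean features from perturbed ones (stand-in data).
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
clean = torch.randn(256, 64)                       # stand-in for clean features
perturbed = clean + 0.1 * torch.randn_like(clean)  # stand-in for attacked features
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(dae(perturbed), clean)
    loss.backward()
    opt.step()

# At inference time, the DAE sits in front of the regressor.
with torch.no_grad():
    prediction = regressor(dae(perturbed))
```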

A Neural Network-Prepended GLRT Framework for Signal Detection Under Nonlinear Distortions

no code implementations • 15 Jun 2022 • Rajeev Sahay, Swaroop Appadwedula, David J. Love, Christopher G. Brinton

Many communications and sensing applications hinge on the detection of a signal in a noisy, interference-heavy environment.
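To make the "GLRT" in the title concrete, here is a textbook generalized likelihood ratio test for detecting a known signal of unknown amplitude in white Gaussian noise of known variance. This is standard detection theory only; the paper's contribution, a neural network prepended to the detector to handle nonlinear distortions, is not reproduced here.

```python
# Textbook GLRT: detect a known template s with unknown amplitude in white Gaussian
# noise of known variance. Illustrative only; not the paper's NN-prepended framework.
import numpy as np

def glrt_statistic(x, s, noise_var):
    """T(x) = |s^T x|^2 / (noise_var * ||s||^2); declare a detection if T > threshold."""
    return np.abs(s @ x) ** 2 / (noise_var * (s @ s))

rng = np.random.default_rng(0)
s = np.sin(2 * np.pi * 0.1 * np.arange(128))               # known signal template
noise_var = 1.0
x_h0 = rng.normal(0, np.sqrt(noise_var), 128)               # noise only
x_h1 = 0.5 * s + rng.normal(0, np.sqrt(noise_var), 128)     # signal plus noise

threshold = 10.0  # in practice, set from a desired false-alarm probability
print(glrt_statistic(x_h0, s, noise_var) > threshold)  # typically False
print(glrt_statistic(x_h1, s, noise_var) > threshold)  # typically True
```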

A Deep Ensemble-based Wireless Receiver Architecture for Mitigating Adversarial Attacks in Automatic Modulation Classification

no code implementations • 8 Apr 2021 • Rajeev Sahay, Christopher G. Brinton, David J. Love

Furthermore, adversarial interference is transferable in black box environments, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification model.

Classification • General Classification
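A minimal sketch of the black-box transferability the abstract refers to: an FGSM perturbation is crafted against one "surrogate" classifier and applied unchanged to a second, independently built "target" classifier. The toy models and random data are illustrative assumptions, not the AMC architectures or the deep ensemble receiver from the paper.

```python
# Craft an FGSM perturbation on a surrogate model and apply it, unchanged, to a
# separate target model (the black-box transfer setting the abstract describes).
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.1):
    """Fast Gradient Sign Method: one signed-gradient step on the input."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def make_model():
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))

torch.manual_seed(0)
surrogate, target = make_model(), make_model()
x = torch.randn(16, 32)
y = torch.randint(0, 4, (16,))

# The perturbation is computed only from the surrogate's gradients ...
x_adv = fgsm(surrogate, x, y)

# ... and is then evaluated against the independently built target model.
with torch.no_grad():
    acc_clean = (target(x).argmax(dim=1) == y).float().mean()
    acc_adv = (target(x_adv).argmax(dim=1) == y).float().mean()
print(acc_clean.item(), acc_adv.item())
```

With trained models on real signal data, the drop from clean to adversarial accuracy on the target is the transfer effect that the paper's ensemble receiver is designed to mitigate.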

Mitigating Gradient-based Adversarial Attacks via Denoising and Compression

no code implementations • 3 Apr 2021 • Rehana Mahfuz, Rajeev Sahay, Aly El Gamal

To reduce the training time of the defense for a small trade-off in performance, we propose the hidden layer defense, which involves feeding the output of the encoder of a denoising autoencoder into the network.

Denoising • Dimensionality Reduction
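A hypothetical sketch of the "hidden layer defense" described above: the encoder of a trained denoising autoencoder produces a compressed code, and the classifier is trained on that code instead of the raw input, which shortens training at some cost in accuracy. Layer sizes, data, and the abbreviated training loops are assumptions for illustration.

```python
# Hidden layer defense sketch: classifier consumes the DAE encoder's output.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())   # DAE encoder
decoder = nn.Sequential(nn.Linear(128, 784))               # DAE decoder (DAE training only)
classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# 1) Train the DAE to reconstruct clean inputs from noisy ones (abbreviated).
x_clean = torch.rand(512, 784)
x_noisy = (x_clean + 0.2 * torch.randn_like(x_clean)).clamp(0, 1)
dae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(100):
    dae_opt.zero_grad()
    nn.functional.mse_loss(decoder(encoder(x_noisy)), x_clean).backward()
    dae_opt.step()

# 2) Train the classifier on the (frozen) encoder's lower-dimensional code,
#    which is cheaper than training on the full-dimensional input.
y = torch.randint(0, 10, (512,))
clf_opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for _ in range(100):
    clf_opt.zero_grad()
    with torch.no_grad():
        code = encoder(x_noisy)
    nn.functional.cross_entropy(classifier(code), y).backward()
    clf_opt.step()
```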

Frequency-based Automated Modulation Classification in the Presence of Adversaries

no code implementations • 2 Nov 2020 • Rajeev Sahay, Christopher G. Brinton, David J. Love

Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals.

Classification • General Classification

Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks

no code implementations • 22 Feb 2020 • Kirthi Shankar Sivamani, Rajeev Sahay, Aly El Gamal

In this letter, we propose a novel method to detect adversarial inputs by augmenting the main classification network with multiple binary detectors (observer networks) that take inputs from the hidden layers of the original network (convolutional kernel outputs) and classify the input as clean or adversarial.

Classification • General Classification
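A hypothetical sketch of the observer-network idea described above: small binary detectors read hidden-layer activations of the main classifier (captured here with forward hooks) and flag each input as clean or adversarial. The architecture sizes, the choice of tapped layers, and the any-observer decision rule are illustrative assumptions, not the paper's exact design.

```python
# Observer-network sketch: binary detectors on top of hidden-layer activations.
import torch
import torch.nn as nn

main_net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 28 * 28, 10),
)

# Capture the two convolutional layers' outputs with forward hooks.
activations = {}
def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

main_net[0].register_forward_hook(save_activation("conv1"))
main_net[2].register_forward_hook(save_activation("conv2"))

# One binary observer per tapped layer: hidden activation -> clean/adversarial logits.
observers = {
    "conv1": nn.Sequential(nn.Flatten(), nn.Linear(8 * 28 * 28, 2)),
    "conv2": nn.Sequential(nn.Flatten(), nn.Linear(16 * 28 * 28, 2)),
}

x = torch.rand(4, 1, 28, 28)
logits = main_net(x)                      # main prediction; hooks fill `activations`
votes = [obs(activations[name]).argmax(dim=1) for name, obs in observers.items()]
# Simple decision rule: flag the input if any observer predicts "adversarial" (class 1).
is_adversarial = torch.stack(votes).max(dim=0).values.bool()
```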

Ensemble Noise Simulation to Handle Uncertainty about Gradient-based Adversarial Attacks

no code implementations • 26 Jan 2020 • Rehana Mahfuz, Rajeev Sahay, Aly El Gamal

Gradient-based adversarial attacks on neural networks can be crafted in a variety of ways by varying either how the attack algorithm relies on the gradient, the network architecture used for crafting the attack, or both.

Denoising
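To illustrate the "variety of ways" the abstract mentions, here are two gradient-based attacks that differ only in how they use the gradient: single-step FGSM versus multi-step projected gradient descent (PGD). The toy model and parameters are assumptions; this is not the paper's ensemble noise-simulation defense.

```python
# Single-step FGSM versus multi-step PGD on the same toy model.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.1):
    """One signed-gradient step."""
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def pgd(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Iterated signed-gradient steps, projected back into the eps-ball around x."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        nn.functional.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the L-inf ball
    return x_adv.detach()

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
x, y = torch.randn(8, 20), torch.randint(0, 3, (8,))
x_fgsm, x_pgd = fgsm(model, x, y), pgd(model, x, y)
```

Swapping in a different model architecture when crafting `x_fgsm` or `x_pgd` corresponds to the second source of variation the abstract describes.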
