Search Results for author: Christopher G. Brinton

Found 34 papers, 6 papers with code

Defending Adversarial Attacks on Deep Learning Based Power Allocation in Massive MIMO Using Denoising Autoencoders

1 code implementation • 28 Nov 2022 • Rajeev Sahay, Minjun Zhang, David J. Love, Christopher G. Brinton

Recent work has advocated for the use of deep learning to perform power allocation in the downlink of massive MIMO (maMIMO) networks.

Denoising regression
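The paper's title points to a denoising autoencoder (DAE) defense, so a minimal sketch of that idea is included below: a small autoencoder is trained to map perturbed inputs back to clean ones before they reach the power-allocation network. The PyTorch architecture, layer sizes, and training loop here are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal denoising-autoencoder sketch (PyTorch). Layer sizes, input
# dimension, and training setup are illustrative assumptions, not the
# architecture from the paper.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, dim=128, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_dae(dae, clean_inputs, perturbed_inputs, epochs=50, lr=1e-3):
    """Fit the DAE to map (adversarially) perturbed inputs back to clean ones."""
    opt = torch.optim.Adam(dae.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(dae(perturbed_inputs), clean_inputs)
        loss.backward()
        opt.step()
    return dae

# At inference, the DAE is prepended to the downstream network, e.g.:
# power = allocation_net(dae(received_input))
```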

Event-Triggered Decentralized Federated Learning over Resource-Constrained Edge Devices

no code implementations • 23 Nov 2022 • Shahryar Zehtabi, Seyyedali Hosseinalipour, Christopher G. Brinton

We theoretically demonstrate that our methodology converges to the globally optimal learning model at an $O\left(\frac{\ln{k}}{\sqrt{k}}\right)$ rate under standard assumptions in the distributed learning and consensus literature.

Federated Learning

Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks

no code implementations • 21 Sep 2022 • Sihua Wang, Mingzhe Chen, Christopher G. Brinton, Changchuan Yin, Walid Saad, Shuguang Cui

Given linear regression-based estimates of these model properties, it is shown that the FL training process can be described as a Markov decision process (MDP), and, then, a model-based reinforcement learning (RL) method is proposed to optimize action selection over iterations.

Federated Learning Model-based Reinforcement Learning +1

A Neural Network-Prepended GLRT Framework for Signal Detection Under Nonlinear Distortions

no code implementations • 15 Jun 2022 • Rajeev Sahay, Swaroop Appadwedula, David J. Love, Christopher G. Brinton

Many communications and sensing applications hinge on the detection of a signal in a noisy, interference-heavy environment.

Nonparametric Decentralized Detection and Sparse Sensor Selection via Multi-Sensor Online Kernel Scalar Quantization

no code implementations • 21 May 2022 • Jing Guo, Raghu G. Raj, David J. Love, Christopher G. Brinton

Moreover, we are interested in sparse sensor selection using a marginalized weighted kernel approach to improve network resource efficiency by disabling less reliable sensors with minimal effect on classification performance. To achieve our goals, we develop a multi-sensor online kernel scalar quantization (MSOKSQ) learning strategy that operates on the sensor outputs at the fusion center.

Classification online learning +1

Deep Reinforcement Learning-Based Adaptive IRS Control with Limited Feedback Codebooks

no code implementations • 7 May 2022 • JungHoon Kim, Seyyedali Hosseinalipour, Andrew C. Marcum, Taejoon Kim, David J. Love, Christopher G. Brinton

Intelligent reflecting surfaces (IRS) consist of configurable meta-atoms, which can alter the wireless propagation environment through design of their reflection coefficients.

reinforcement-learning
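As a toy illustration of how reflection coefficients shape the wireless channel, the snippet below assembles an effective channel as the sum of a direct path and a cascaded IRS path weighted by tunable phase shifts. The single-antenna, narrowband, random-channel setup is a simplifying assumption, not the system model of the paper.

```python
# Toy effective-channel model with an IRS:
# effective = direct + sum_n (irs_to_user_n * e^{j*theta_n} * bs_to_irs_n).
# All channels and phases below are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
N = 16                                    # number of IRS meta-atoms
h_direct = rng.standard_normal() + 1j * rng.standard_normal()
h_bs_irs = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # BS -> IRS
h_irs_ue = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # IRS -> user

theta = rng.uniform(0, 2 * np.pi, N)      # tunable reflection phases
effective = h_direct + h_irs_ue @ (np.exp(1j * theta) * h_bs_irs)
print(abs(effective))                     # channel gain under this IRS configuration
```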

Decentralized Event-Triggered Federated Learning with Heterogeneous Communication Thresholds

1 code implementation • 7 Apr 2022 • Shahryar Zehtabi, Seyyedali Hosseinalipour, Christopher G. Brinton

Through theoretical analysis, we demonstrate that our methodology achieves asymptotic convergence to the globally optimal learning model under standard assumptions in the distributed learning and graph consensus literature, and without restrictive connectivity requirements on the underlying topology.

Federated Learning
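A minimal sketch of the event-triggered idea described above, assuming the trigger is a norm test on local model drift against each device's own (heterogeneous) threshold; the exact trigger condition and aggregation rule in the paper may differ.

```python
# Event-triggered broadcast rule: a device shares its model with neighbors
# only when the change since its last broadcast exceeds its own threshold.
# The norm test and mixing step are illustrative assumptions.
import numpy as np

def maybe_broadcast(theta_current, theta_last_broadcast, threshold):
    """Return True if the local model drifted enough to trigger a broadcast."""
    drift = np.linalg.norm(theta_current - theta_last_broadcast)
    return drift > threshold

def aggregate_with_neighbors(theta_local, neighbor_models, mixing_weights):
    """Consensus-style averaging with the most recently received neighbor models."""
    models = [theta_local] + list(neighbor_models)
    return sum(w * m for w, m in zip(mixing_weights, models))
```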

Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point

no code implementations • 26 Mar 2022 • Bhargav Ganguly, Seyyedali Hosseinalipour, Kwang Taik Kim, Christopher G. Brinton, Vaneet Aggarwal, David J. Love, Mung Chiang

CE-FL also introduces a floating aggregation point, where the local models generated at the devices and the servers are aggregated at an edge server that varies from one model training round to another to cope with network evolution in terms of data distribution and users' mobility.

Distributed Optimization Federated Learning

Latency Optimization for Blockchain-Empowered Federated Learning in Multi-Server Edge Computing

no code implementations • 18 Mar 2022 • Dinh C. Nguyen, Seyyedali Hosseinalipour, David J. Love, Pubudu N. Pathirana, Christopher G. Brinton

To assist the ML model training for resource-constrained MDs, we develop an offloading strategy that enables MDs to transmit their data to one of the associated ESs.

Edge-computing Federated Learning +1

Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks

no code implementations • 7 Feb 2022 • Seyyedali Hosseinalipour, Su Wang, Nicolo Michelusi, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, Mung Chiang

PSL considers the realistic scenario where global aggregations are conducted with idle times in-between them for resource efficiency improvements, and incorporates data dispersion and model dispersion with local model condensation into FedL.

Federated Learning

Learning-Based Adaptive IRS Control with Limited Feedback Codebooks

no code implementations • 3 Dec 2021 • JungHoon Kim, Seyyedali Hosseinalipour, Andrew C. Marcum, Taejoon Kim, David J. Love, Christopher G. Brinton

We consider a practical setting where (i) the IRS reflection coefficients are achieved by adjusting tunable elements embedded in the meta-atoms, (ii) the IRS reflection coefficients are affected by the incident angles of the incoming signals, (iii) the IRS is deployed in multi-path, time-varying channels, and (iv) the feedback link from the base station to the IRS has a low data rate.

UAV-assisted Online Machine Learning over Multi-Tiered Networks: A Hierarchical Nested Personalized Federated Learning Approach

no code implementations • 29 Jun 2021 • Su Wang, Seyyedali Hosseinalipour, Maria Gorlatova, Christopher G. Brinton, Mung Chiang

The presence of time-varying data heterogeneity and computational resource inadequacy among device clusters motivates four key parts of our methodology: (i) stratified UAV swarms of leader, worker, and coordinator UAVs, (ii) hierarchical nested personalized federated learning (HN-PFL), a distributed ML framework for personalized model training across the worker-leader-core network hierarchy, (iii) cooperative UAV resource pooling to address the computational inadequacy of devices by conducting model training among the UAV swarms, and (iv) model/concept drift to model time-varying data distributions.

Decision Making Personalized Federated Learning

A Deep Ensemble-based Wireless Receiver Architecture for Mitigating Adversarial Attacks in Automatic Modulation Classification

no code implementations • 8 Apr 2021 • Rajeev Sahay, Christopher G. Brinton, David J. Love

Furthermore, adversarial interference is transferable in black box environments, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification model.

Classification General Classification
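To illustrate the transferability claim above, the sketch below crafts an FGSM perturbation on a surrogate classifier and measures a separate target model's accuracy on the perturbed inputs. FGSM is used purely as a familiar stand-in attack; the paper's threat model, waveform domain, and classifiers are not reproduced here.

```python
# Black-box transferability illustration: a perturbation crafted against a
# surrogate model is applied to a different target model (PyTorch).
import torch
import torch.nn.functional as F

def fgsm_perturbation(surrogate, x, y, eps=0.01):
    """Craft an FGSM perturbation using the surrogate model's gradients."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), y)
    loss.backward()
    return eps * x.grad.sign()

def transfer_attack_accuracy(target, x, y, delta):
    """Accuracy of the (unseen) target model on inputs perturbed via the surrogate."""
    with torch.no_grad():
        preds = target(x + delta).argmax(dim=1)
    return (preds == y).float().mean().item()
```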

Semi-Decentralized Federated Learning with Cooperative D2D Local Model Aggregations

1 code implementation • 18 Mar 2021 • Frank Po-Chen Lin, Seyyedali Hosseinalipour, Sheikh Shams Azam, Christopher G. Brinton, Nicolo Michelusi

Federated learning has emerged as a popular technique for distributing machine learning (ML) model training across the wireless edge.

Federated Learning

Channel Estimation via Successive Denoising in MIMO OFDM Systems: A Reinforcement Learning Approach

no code implementations • 25 Jan 2021 • Myeung Suk Oh, Seyyedali Hosseinalipour, Taejoon Kim, Christopher G. Brinton, David J. Love

Our methodology includes a new successive channel denoising process based on channel curvature computation, for which we obtain a channel curvature magnitude threshold to identify unreliable channel estimates.

Denoising Q-Learning +1
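A hedged sketch of the curvature-and-threshold idea described above: curvature is approximated with a second-order difference across adjacent channel estimates, and estimates whose curvature magnitude exceeds a threshold are flagged as unreliable. The discrete-difference form and the thresholding rule here are illustrative assumptions.

```python
# Curvature-based reliability test for a 1-D array of channel estimates.
# The second-difference proxy and the threshold are illustrative assumptions.
import numpy as np

def curvature_magnitude(h_est):
    """Second-difference curvature proxy over adjacent channel estimates."""
    return np.abs(h_est[:-2] - 2 * h_est[1:-1] + h_est[2:])

def unreliable_mask(h_est, threshold):
    """Boolean mask marking estimates whose curvature magnitude exceeds the threshold."""
    kappa = curvature_magnitude(h_est)
    mask = np.zeros(len(h_est), dtype=bool)
    mask[1:-1] = kappa > threshold
    return mask
```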

Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation

no code implementations • 4 Jan 2021 • Su Wang, Mengyuan Lee, Seyyedali Hosseinalipour, Roberto Morabito, Mung Chiang, Christopher G. Brinton

The conventional federated learning (FedL) architecture distributes machine learning (ML) across worker devices by having them train local models that are periodically aggregated by a server.

Federated Learning Learning Theory
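For reference, a minimal sketch of the conventional FedL aggregation step that this snippet describes, using a standard FedAvg-style data-weighted average; the paper's device sampling and optimization scheme is not shown.

```python
# Conventional FedL round, server side: workers train locally, the server
# forms a data-weighted average of their models (standard FedAvg-style step).
import numpy as np

def server_aggregate(local_models, num_samples):
    """Weighted average of worker models, weights proportional to local data size."""
    weights = np.asarray(num_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))
```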

On Extending NLP Techniques from the Categorical to the Latent Space: KL Divergence, Zipf's Law, and Similarity Search

1 code implementation • 2 Dec 2020 • Adam Hare, Yu Chen, Yinan Liu, Zhenming Liu, Christopher G. Brinton

Despite the recent successes of deep learning in natural language processing (NLP), there remains widespread usage of and demand for techniques that do not rely on machine learning.

BIG-bench Machine Learning Word Embeddings
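As a small example of the categorical-space quantities named in the title, the snippet below computes the KL divergence between two smoothed empirical word-frequency distributions. The toy corpora, vocabulary, and add-alpha smoothing are placeholders, not the paper's experimental setup.

```python
# KL divergence between two empirical word-frequency distributions.
import numpy as np
from collections import Counter

def word_distribution(tokens, vocab, alpha=1.0):
    """Add-alpha smoothed categorical distribution over a fixed vocabulary."""
    counts = Counter(tokens)
    freqs = np.array([counts[w] + alpha for w in vocab], dtype=float)
    return freqs / freqs.sum()

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

vocab = ["learning", "network", "model", "data"]
p = word_distribution("model data data learning".split(), vocab)
q = word_distribution("network network model data".split(), vocab)
print(kl_divergence(p, q))
```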

Frequency-based Automated Modulation Classification in the Presence of Adversaries

no code implementations • 2 Nov 2020 • Rajeev Sahay, Christopher G. Brinton, David J. Love

Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals.

Classification General Classification

A Fast Graph Neural Network-Based Method for Winner Determination in Multi-Unit Combinatorial Auctions

no code implementations • 29 Sep 2020 • Mengyuan Lee, Seyyedali Hosseinalipour, Christopher G. Brinton, Guanding Yu, Huaiyu Dai

However, the problem of allocating items among the bidders to maximize the auctioneer's revenue, i.e., the winner determination problem (WDP), is NP-complete and inapproximable.

Federated Learning with Communication Delay in Edge Networks

no code implementations • 21 Aug 2020 • Frank Po-Chen Lin, Christopher G. Brinton, Nicolò Michelusi

Federated learning has received significant attention as a potential solution for distributing machine learning (ML) model training through edge networks.

Federated Learning

BATS: A Spectral Biclustering Approach to Single Document Topic Modeling and Segmentation

no code implementations • 5 Aug 2020 • Qiong Wu, Adam Hare, Sirui Wang, Yuwei Tu, Zhenming Liu, Christopher G. Brinton, Yanhua Li

In this work, we reexamine the inter-related problems of "topic identification" and "text segmentation" for sparse document learning, when there is a single new text of interest.

Text Segmentation Topic Models

Fast-Convergent Federated Learning

no code implementations • 26 Jul 2020 • Hung T. Nguyen, Vikash Sehwag, Seyyedali Hosseinalipour, Christopher G. Brinton, Mung Chiang, H. Vincent Poor

In this paper, we propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training to optimize the expected convergence speed.

BIG-bench Machine Learning Federated Learning
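A hedged sketch of importance-weighted device sampling for one training round, using gradient norms as an illustrative proxy score; FOLB's actual selection probabilities come from the paper's convergence analysis and are not reproduced here.

```python
# Importance-weighted device sampling for one FL round. Gradient-norm scores
# are an illustrative proxy, not FOLB's exact sampling criterion.
import numpy as np

def sample_devices(grad_norms, num_selected, rng=None):
    """Sample device indices without replacement, proportionally to their scores."""
    rng = rng or np.random.default_rng()
    scores = np.asarray(grad_norms, dtype=float)
    probs = scores / scores.sum()
    return rng.choice(len(scores), size=num_selected, replace=False, p=probs)
```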

Minimum Overhead Beamforming and Resource Allocation in D2D Edge Networks

no code implementations • 25 Jul 2020 • JungHoon Kim, Taejoon Kim, Morteza Hashemi, Christopher G. Brinton, David J. Love

Device-to-device (D2D) communications is expected to be a critical enabler of distributed computing in edge networks at scale.

Distributed Computing Management

Multi-Stage Hybrid Federated Learning over Large-Scale D2D-Enabled Fog Networks

1 code implementation • 18 Jul 2020 • Seyyedali Hosseinalipour, Sheikh Shams Azam, Christopher G. Brinton, Nicolo Michelusi, Vaneet Aggarwal, David J. Love, Huaiyu Dai

We derive an upper bound on the convergence of MH-FL with respect to parameters of the network topology (e.g., the spectral radius) and the learning algorithm (e.g., the number of D2D rounds in different clusters).

Federated Learning
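Since the bound above is stated in terms of topology parameters such as the spectral radius, the helper below simply computes that quantity for a given D2D mixing (consensus) matrix; the example matrix is a placeholder.

```python
# Spectral radius of a D2D mixing (consensus) matrix.
import numpy as np

def spectral_radius(W):
    return float(np.max(np.abs(np.linalg.eigvals(W))))

W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
print(spectral_radius(W))  # doubly stochastic example mixing matrix
```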

From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks

no code implementations • 7 Jun 2020 • Seyyedali Hosseinalipour, Christopher G. Brinton, Vaneet Aggarwal, Huaiyu Dai, Mung Chiang

There are several challenges with employing conventional federated learning in contemporary networks, due to the significant heterogeneity in compute and communication capabilities that exist across devices.

BIG-bench Machine Learning Federated Learning +1

Network-Aware Optimization of Distributed Learning for Fog Computing

no code implementations • 17 Apr 2020 • Yuwei Tu, Yichen Ruan, Su Wang, Satyavrat Wagle, Christopher G. Brinton, Carlee Joe-Wong

Unlike traditional federated learning frameworks, our method enables devices to offload their data processing tasks to each other, with these decisions determined through a convex data transfer optimization problem that trades off costs associated with devices processing, offloading, and discarding data points.

Distributed, Parallel, and Cluster Computing
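A toy version of the convex trade-off described above, assuming a linear cost for processing, offloading, and discarding data and a simple per-device processing capacity; the paper's full formulation includes network and learning constraints omitted here. The sketch uses cvxpy, and all costs and capacities below are made-up placeholders.

```python
# Toy convex data-transfer trade-off: each device splits its data among local
# processing, offloading, and discarding, minimizing a linear cost.
import cvxpy as cp
import numpy as np

n = 3                                # devices
data = np.array([10.0, 4.0, 6.0])    # data points generated at each device
c_proc = np.array([1.0, 3.0, 2.0])   # per-point local processing cost
c_off = 0.5                          # per-point offloading cost
c_drop = 5.0                         # per-point cost of discarding data
capacity = np.array([8.0, 8.0, 8.0]) # local processing capacity

process = cp.Variable(n, nonneg=True)
offload = cp.Variable(n, nonneg=True)
discard = cp.Variable(n, nonneg=True)

constraints = [process + offload + discard == data, process <= capacity]
cost = c_proc @ process + c_off * cp.sum(offload) + c_drop * cp.sum(discard)
prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(process.value, offload.value, discard.value)
```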

Joint Optimization of Signal Design and Resource Allocation in Wireless D2D Edge Computing

no code implementations • 27 Feb 2020 • JungHoon Kim, Taejoon Kim, Morteza Hashemi, Christopher G. Brinton, David J. Love

In this paper, unlike previous mobile edge computing (MEC) approaches, we propose a joint optimization of wireless MIMO signal design and network resource allocation to maximize energy efficiency.

Networking and Internet Architecture Signal Processing

A Deep Learning Approach to Behavior-Based Learner Modeling

no code implementations • 23 Jan 2020 • Yuwei Tu, WeiYu Chen, Christopher G. Brinton

The increasing popularity of e-learning has created demand for improving online education through techniques such as predictive analytics and content recommendations.

Word Embeddings
