Search Results for author: Sennur Ulukus

Found 39 papers, 0 papers with code

Will 6G be Semantic Communications? Opportunities and Challenges from Task Oriented and Secure Communications to Integrated Sensing

no code implementations • 3 Jan 2024 • Yalin E. Sagduyu, Tugba Erpek, Aylin Yener, Sennur Ulukus

This paper explores opportunities and challenges of task (goal)-oriented and semantic communications for next-generation (NextG) communication networks through the integration of multi-task learning.

Federated Learning • Multi-Task Learning • +1

Joint Sensing and Task-Oriented Communications with Image and Wireless Data Modalities for Dynamic Spectrum Access

no code implementations • 21 Dec 2023 • Yalin E. Sagduyu, Tugba Erpek, Aylin Yener, Sennur Ulukus

Recognizing the computational constraints and trust issues associated with on-device computation, we propose a collaborative system wherein the edge device communicates selectively processed information to a trusted receiver acting as a fusion center, where a decision is made to identify whether a potential transmitter is present or not.

Image Classification

Deep Learning-Based Real-Time Quality Control of Standard Video Compression for Live Streaming

no code implementations • 21 Nov 2023 • Matin Mortaheb, Mohammad A. Amir Khojastepour, Srimat T. Chakradhar, Sennur Ulukus

The encoded bitrate and the quality of the compressed video depend on encoder parameters, specifically, the quantization parameter (QP).

Quantization • Video Compression

Joint Sensing and Semantic Communications with Multi-Task Deep Learning

no code implementations • 8 Nov 2023 • Yalin E. Sagduyu, Tugba Erpek, Aylin Yener, Sennur Ulukus

The transmitter employs a deep neural network, namely an encoder, for joint operations of source coding, channel coding, and modulation, while the receiver utilizes another deep neural network, namely a decoder, for joint operations of demodulation, channel decoding, and source decoding to reconstruct the data samples.

Multi-Task Learning
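
For readers unfamiliar with this setup, a minimal PyTorch-style sketch of an encoder-decoder pair trained end-to-end over a noisy channel is given below; the layer sizes, the AWGN channel model, and the reconstruction loss are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch (not the paper's exact design): an encoder DNN maps data
# samples to channel symbols, an AWGN channel adds noise, and a decoder DNN
# reconstructs the samples. All dimensions and the noise model are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):   # joint source coding, channel coding, modulation
    def __init__(self, in_dim=784, n_channel_uses=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_channel_uses))
    def forward(self, x):
        z = self.net(x)
        return z / (z.norm(dim=1, keepdim=True) + 1e-8)  # normalize codeword power

class Decoder(nn.Module):   # joint demodulation, channel decoding, source decoding
    def __init__(self, out_dim=784, n_channel_uses=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_channel_uses, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))
    def forward(self, y):
        return self.net(y)

def awgn(z, snr_db=10.0):
    return z + (10 ** (-snr_db / 20.0)) * torch.randn_like(z)  # assumed AWGN channel

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
x = torch.rand(64, 784)                   # toy batch of data samples
x_hat = dec(awgn(enc(x)))                 # transmit over the noisy channel
loss = nn.functional.mse_loss(x_hat, x)   # reconstruction objective
loss.backward(); opt.step()
```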

A Learning Based Scheme for Fair Timeliness in Sparse Gossip Networks

no code implementations • 2 Oct 2023 • Purbesh Mitra, Sennur Ulukus

We consider a gossip network, consisting of $n$ nodes, which tracks the information at a source.

Bayesian Optimization • Fairness

Deep Learning Based Uplink Multi-User SIMO Beamforming Design

no code implementations • 28 Sep 2023 • Cemil Vahapoglu, Timothy J. O'Shea, Tamoghna Roy, Sennur Ulukus

The advancement of fifth generation (5G) wireless communication networks has created a greater demand for wireless resource management solutions that offer high data rates, extensive coverage, minimal latency and energy-efficient performance.

Management

Semantic Multi-Resolution Communications

no code implementations • 22 Aug 2023 • Matin Mortaheb, Mohammad A. Amir Khojastepour, Srimat T. Chakradhar, Sennur Ulukus

The experiment with both datasets illustrates that our proposed method is capable of surpassing the SSCC method in reconstructing data with different resolutions, enabling the extraction of semantic features with heightened confidence in successive layers.

Multi-Task Learning

Multi-Receiver Task-Oriented Communications via Multi-Task Deep Learning

no code implementations • 14 Aug 2023 • Yalin E. Sagduyu, Tugba Erpek, Aylin Yener, Sennur Ulukus

A multi-task deep learning approach that involves training a common encoder at the transmitter and individual decoders at the receivers is presented for joint optimization of completing multiple tasks and communicating with multiple receivers.

Image Classification • Multi-Task Learning
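
A minimal sketch of the common-encoder, per-receiver-decoder idea described above; the number of receivers, the tasks, and the channel noise are assumptions for illustration only.

```python
# Sketch: one common encoder at the transmitter, one decoder (task head) per
# receiver, trained on a joint multi-task objective. Dimensions, the 3 tasks,
# and the noise level are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 16))
decoders = nn.ModuleList([nn.Linear(16, 10) for _ in range(3)])   # 3 receivers/tasks

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoders.parameters()), lr=1e-3)
x = torch.rand(32, 784)
labels = [torch.randint(0, 10, (32,)) for _ in range(3)]          # one label set per task

z = encoder(x)                                # common transmitted representation
y_rx = z + 0.05 * torch.randn_like(z)         # assumed noisy broadcast channel
# each receiver decodes the same representation for its own task
losses = [nn.functional.cross_entropy(dec(y_rx), y) for dec, y in zip(decoders, labels)]
total = sum(losses)                           # joint multi-task objective
total.backward(); opt.step()
```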

Timely Asynchronous Hierarchical Federated Learning: Age of Convergence

no code implementations • 21 Jun 2023 • Purbesh Mitra, Sennur Ulukus

The goal of each client is to converge to the global model, while maintaining timeliness of the clients, i.e., having optimum training iteration time.

Federated Learning

Age of Information in Deep Learning-Driven Task-Oriented Communications

no code implementations • 11 Jan 2023 • Yalin E. Sagduyu, Sennur Ulukus, Aylin Yener

This paper studies the notion of age in task-oriented communications that aims to execute a task at a receiver utilizing the data at its transmitter.

Personalized Decentralized Multi-Task Learning Over Dynamic Communication Graphs

no code implementations • 21 Dec 2022 • Matin Mortaheb, Sennur Ulukus

Our algorithm uses exchanged gradients to calculate the correlations among tasks automatically, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other.

Federated Learning • Multi-Task Learning
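
A rough sketch of how exchanged gradients could drive such a dynamic graph, assuming cosine similarity as the affinity measure and a simple threshold; neither choice is claimed to match the paper's rule.

```python
# Sketch: estimate pairwise task affinity from clients' exchanged gradients via
# cosine similarity, then keep edges only between positively correlated clients.
# The cosine measure and the 0.0 threshold are illustrative assumptions.
import numpy as np

def build_graph(grads, threshold=0.0):
    """grads: list of flattened gradient vectors, one per client."""
    g = np.stack([v / (np.linalg.norm(v) + 1e-12) for v in grads])
    sim = g @ g.T                            # pairwise cosine similarities
    adj = (sim > threshold).astype(int)      # connect mutually beneficial tasks
    np.fill_diagonal(adj, 0)                 # no self-loops
    return adj

rng = np.random.default_rng(0)
grads = [rng.normal(size=1000) for _ in range(5)]   # toy gradients for 5 clients
print(build_graph(grads))                           # communication graph for this round
```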

Vulnerabilities of Deep Learning-Driven Semantic Communications to Backdoor (Trojan) Attacks

no code implementations • 21 Dec 2022 • Yalin E. Sagduyu, Tugba Erpek, Sennur Ulukus, Aylin Yener

The backdoor attack can effectively change the semantic information transferred for the poisoned input samples to a target meaning.

Backdoor Attack

Is Semantic Communications Secure? A Tale of Multi-Domain Adversarial Attacks

no code implementations • 20 Dec 2022 • Yalin E. Sagduyu, Tugba Erpek, Sennur Ulukus, Aylin Yener

By augmenting the reconstruction loss with a semantic loss, the two deep neural networks (DNNs) of this encoder-decoder pair are interactively trained with the DNN of the semantic task classifier.
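
A minimal sketch of a reconstruction loss augmented with a semantic (task classifier) loss, as described above; the architectures, the channel noise, and the weight lam are assumptions.

```python
# Sketch of the combined objective: reconstruction loss plus a semantic loss
# computed by a task classifier on the reconstructed samples. Architectures,
# the AWGN noise, and the weight lam are illustrative assumptions.
import torch
import torch.nn as nn

encoder = nn.Linear(784, 64)
decoder = nn.Linear(64, 784)
classifier = nn.Linear(784, 10)               # semantic task (e.g., class label)

opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters(),
                        *classifier.parameters()], lr=1e-3)
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))

z = encoder(x) + 0.05 * torch.randn(32, 64)   # assumed noisy channel
x_hat = decoder(z)
lam = 1.0                                     # semantic-loss weight (assumption)
loss = (nn.functional.mse_loss(x_hat, x)
        + lam * nn.functional.cross_entropy(classifier(x_hat), y))
loss.backward(); opt.step()                   # jointly trains all three DNNs
```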

Task-Oriented Communications for NextG: End-to-End Deep Learning and AI Security Aspects

no code implementations • 19 Dec 2022 • Yalin E. Sagduyu, Sennur Ulukus, Aylin Yener

In this paper, wireless signal classification is considered as the task for the NextG Radio Access Network (RAN), where edge devices collect wireless signals for spectrum awareness and communicate with the NextG base station (gNodeB) that needs to identify the signal label.

Hierarchical Over-the-Air FedGradNorm

no code implementations • 14 Dec 2022 • Cemil Vahapoglu, Matin Mortaheb, Sennur Ulukus

MTL can be integrated into a federated learning (FL) setting if tasks are distributed across clients and clients have a single shared network, leading to personalized federated learning (PFL).

Multi-Task Learning • Personalized Federated Learning

Private Federated Submodel Learning with Sparsification

no code implementations • 31 May 2022 • Sajani Vithana, Sennur Ulukus

We investigate the problem of private read update write (PRUW) in federated submodel learning (FSL) with sparsification.

Dynamic SAFFRON: Disease Control Over Time Via Group Testing

no code implementations • 18 May 2022 • Batuhan Arasli, Sennur Ulukus

We characterize the performance of the dynamic individual testing algorithm and introduce a novel dynamic SAFFRON-based group testing algorithm.

FedGradNorm: Personalized Federated Gradient-Normalized Multi-Task Learning

no code implementations • 24 Mar 2022 • Matin Mortaheb, Cemil Vahapoglu, Sennur Ulukus

In federated settings, the statistical heterogeneity due to different task complexities and data heterogeneity due to non-iid nature of local datasets can both degrade the learning performance of the system.

Multi-Task Learning • Personalized Federated Learning

Covert Communications via Adversarial Machine Learning and Reconfigurable Intelligent Surfaces

no code implementations • 21 Dec 2021 • Brian Kim, Tugba Erpek, Yalin E. Sagduyu, Sennur Ulukus

Results from different network topologies show that adversarial perturbation and RIS interaction vector can be jointly designed to effectively increase the signal detection accuracy at the receiver while reducing the detection accuracy at the eavesdropper to enable covert communications.

BIG-bench Machine Learning

Adversarial Attacks against Deep Learning Based Power Control in Wireless Communications

no code implementations • 16 Sep 2021 • Brian Kim, Yi Shi, Yalin E. Sagduyu, Tugba Erpek, Sennur Ulukus

The DNN that corresponds to a regression model is trained with channel gains as the input and returns transmit powers as the output.
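
As a generic illustration only (not the paper's specific over-the-air attack), a simple FGSM-style perturbation of the channel-gain input to such a regression DNN might look like the sketch below; the network, input size, and epsilon are assumptions.

```python
# Generic FGSM-style sketch (not the paper's attack): perturb the channel-gain
# input of a power-control regression DNN so that the predicted transmit powers
# drop. The network, input size, and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 10))  # gains -> powers

h = torch.rand(1, 10, requires_grad=True)     # channel gains seen by the DNN
total_power = model(h).sum()                  # adversary wants to reduce this
total_power.backward()

eps = 0.05                                    # perturbation budget (assumed)
h_adv = (h - eps * h.grad.sign()).detach()    # FGSM step against the input

with torch.no_grad():
    print("clean powers:   ", model(h).sum().item())
    print("attacked powers:", model(h_adv).sum().item())
```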

Adversarial Attacks on Deep Learning Based mmWave Beam Prediction in 5G and Beyond

no code implementations • 25 Mar 2021 • Brian Kim, Yalin E. Sagduyu, Tugba Erpek, Sennur Ulukus

Deep learning provides powerful means to learn from spectrum data and solve complex tasks in 5G and beyond such as beam selection for initial access (IA) in mmWave communications.

Adversarial Attack

Group Testing with a Graph Infection Spread Model

no code implementations • 14 Jan 2021 • Batuhan Arasli, Sennur Ulukus

We propose a class of two-step sampled group testing algorithms where we exploit the known probabilistic infection spread model.

Information Theory • Computers and Society • Data Structures and Algorithms • Networking and Internet Architecture • Signal Processing
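
For context only, a generic two-stage (Dorfman-style) pooled-testing simulation is sketched below; it illustrates the basic group-testing idea but is not the paper's two-step sampled algorithm and does not model graph-based infection spread. Pool size and infection probability are assumptions.

```python
# Generic two-stage pooled-testing simulation, shown only to illustrate group
# testing; it is NOT the paper's algorithm. Pool size and prevalence are assumed.
import numpy as np

rng = np.random.default_rng(0)
n, p, pool_size = 1000, 0.02, 10
infected = rng.random(n) < p

tests = 0
for start in range(0, n, pool_size):
    pool = infected[start:start + pool_size]
    tests += 1                      # stage 1: one test for the whole pool
    if pool.any():                  # stage 2: test each member of a positive pool
        tests += len(pool)

print(f"{tests} tests instead of {n} individual tests")
```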

Timely Communication in Federated Learning

no code implementations • 31 Dec 2020 • Baturalp Buyukates, Sennur Ulukus

Under the proposed scheme, at each iteration, the PS waits for $m$ available clients and sends them the current model.

Federated Learning
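
A toy sketch of the wait-for-the-earliest-$m$-clients scheduling described above, with an assumed exponential availability model and plain averaging at the parameter server (PS).

```python
# Sketch of the scheduling idea: each iteration, the PS takes the m earliest
# available clients, sends them the current model, and averages their returned
# updates. The exponential timing model and the toy local update are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_clients, m, dim = 20, 4, 5
global_model = np.zeros(dim)

for iteration in range(3):
    avail_times = rng.exponential(scale=1.0, size=n_clients)  # when clients become available
    chosen = np.argsort(avail_times)[:m]                      # wait for the m earliest clients
    # each chosen client trains locally starting from the current model (toy update)
    local_models = [global_model - 0.1 * rng.normal(size=dim) for _ in chosen]
    global_model = np.mean(local_models, axis=0)              # PS aggregates
    print(f"iter {iteration}: waited {avail_times[chosen].max():.2f}, clients {chosen.tolist()}")
```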

Timely Tracking of Infection Status of Individuals in a Population

no code implementations • 24 Dec 2020 • Melih Bastopcu, Sennur Ulukus

We observe that if the total test rate is limited, instead of testing all members of the population equally, only a portion of the population is tested based on their infection and recovery rates.

Computers and Society • Physics and Society

Channel Effects on Surrogate Models of Adversarial Attacks against Wireless Signal Classifiers

no code implementations • 3 Dec 2020 • Brian Kim, Yalin E. Sagduyu, Tugba Erpek, Kemal Davaslioglu, Sennur Ulukus

The transmitter is equipped with a deep neural network (DNN) classifier for detecting the ongoing transmissions from the background emitter and transmits a signal if the spectrum is idle.

Adversarial Attack

Gradient Coding with Dynamic Clustering for Straggler Mitigation

no code implementations • 3 Nov 2020 • Baturalp Buyukates, Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

In distributed synchronous gradient descent (GD), the main performance bottleneck for the per-iteration completion time is the slowest straggling workers.

Clustering

Adversarial Attacks with Multiple Antennas Against Deep Learning-Based Modulation Classifiers

no code implementations • 31 Jul 2020 • Brian Kim, Yalin E. Sagduyu, Tugba Erpek, Kemal Davaslioglu, Sennur Ulukus

First, we show that multiple independent adversaries, each with a single antenna, cannot improve the attack performance compared to a single adversary with multiple antennas using the same total power.

Coded Distributed Computing with Partial Recovery

no code implementations • 4 Jul 2020 • Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

In this paper, we first introduce a novel coded matrix-vector multiplication scheme, called coded computation with partial recovery (CCPR), which benefits from the advantages of both coded and uncoded computation schemes, and reduces both the computation time and the decoding complexity by allowing a trade-off between the accuracy and the speed of computation.

Distributed Computing
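
A small sketch of coded matrix-vector multiplication with recovery from a subset of workers, using a random linear code and least-squares decoding as stand-ins; it does not reproduce the paper's CCPR construction.

```python
# Sketch: row blocks of A are combined with a random linear code, each worker
# returns one coded block-vector product, and the master least-squares-decodes
# from whatever subset has arrived. The random code and decoding rule are
# assumptions; with fewer than k responses the decode is only approximate,
# which is the accuracy/speed trade-off mentioned above.
import numpy as np

rng = np.random.default_rng(0)
k, n_workers, rows, cols = 4, 6, 8, 5          # k data blocks, n_workers coded tasks
A = rng.normal(size=(k * rows, cols))
x = rng.normal(size=cols)
blocks = A.reshape(k, rows, cols)

G = rng.normal(size=(n_workers, k))            # encoding matrix (one row per worker)
coded = np.einsum('wk,krc->wrc', G, blocks)    # coded block sent to each worker

arrived = [0, 2, 3, 5]                         # suppose only these workers respond
results = np.stack([coded[w] @ x for w in arrived])   # their returned products
G_sub = G[arrived]                                    # corresponding code rows
decoded, *_ = np.linalg.lstsq(G_sub, results.reshape(len(arrived), -1), rcond=None)
y_hat = decoded.reshape(k * rows)
print(np.allclose(y_hat, A @ x))               # exact here, since 4 responses >= k
```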

Age-Based Coded Computation for Bias Reduction in Distributed Learning

no code implementations • 2 Jun 2020 • Emre Ozfatura, Baturalp Buyukates, Deniz Gunduz, Sennur Ulukus

To mitigate biased estimators, we design a timely dynamic encoding framework for partial recovery that includes an ordering operator that changes the codewords and computation orders at workers over time.

How to Make 5G Communications "Invisible": Adversarial Machine Learning for Wireless Privacy

no code implementations • 15 May 2020 • Brian Kim, Yalin E. Sagduyu, Kemal Davaslioglu, Tugba Erpek, Sennur Ulukus

We consider the problem of hiding wireless communications from an eavesdropper that employs a deep learning (DL) classifier to detect whether any transmission of interest is present or not.

BIG-bench Machine Learning

Straggler-aware Distributed Learning: Communication Computation Latency Trade-off

no code implementations • 10 Apr 2020 • Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

When gradient descent (GD) is scaled to many parallel workers for large scale machine learning problems, its per-iteration computation time is limited by the straggling workers.

Over-the-Air Adversarial Attacks on Deep Learning Based Modulation Classifier over Wireless Channels

no code implementations • 5 Feb 2020 • Brian Kim, Yalin E. Sagduyu, Kemal Davaslioglu, Tugba Erpek, Sennur Ulukus

In the meantime, the adversary makes over-the-air transmissions that are received as superimposed with the transmitter's signals to fool the classifier at the receiver into making errors.

Adversarial Attack
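
A sketch of the superposition idea described above: the receiver's modulation classifier sees the legitimate signal plus the adversary's channel-scaled perturbation; the classifier, the flat channel gains, and the FGSM step are assumptions.

```python
# Sketch: the receiver's modulation classifier observes the transmitter's signal
# superimposed with the adversary's over-the-air perturbation after their
# respective channels. The classifier, channel gains, and FGSM step are assumed.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 4))  # 4 mod classes

s = torch.randn(1, 256)                 # transmitted I/Q samples (flattened, toy)
h_t, h_a = 1.0, 0.3                     # channel gains: transmitter->rx, adversary->rx
label = classifier(h_t * s).argmax(1)   # receiver's decision without the attack

# adversary crafts a perturbation that increases the classifier's loss
delta = torch.zeros_like(s, requires_grad=True)
loss = nn.functional.cross_entropy(classifier(h_t * s + h_a * delta), label)
loss.backward()
eps = 0.5                               # adversary's power budget (assumed)
delta_adv = eps * delta.grad.sign()

received = h_t * s + h_a * delta_adv    # superimposed over the air
print("clean:", label.item(), "attacked:", classifier(received).argmax(1).item())
```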

Gradient Coding with Clustering and Multi-message Communication

no code implementations • 5 Mar 2019 • Emre Ozfatura, Deniz Gunduz, Sennur Ulukus

Gradient descent (GD) methods are commonly employed in machine learning problems to optimize the parameters of the model in an iterative fashion.

Clustering • Distributed Computing

Distributed Gradient Descent with Coded Partial Gradient Computations

no code implementations • 22 Nov 2018 • Emre Ozfatura, Sennur Ulukus, Deniz Gunduz

Coded computation techniques provide robustness against straggling servers in distributed computing, with the following limitations: First, they increase decoding complexity.

Distributed Computing

Speeding Up Distributed Gradient Descent by Utilizing Non-persistent Stragglers

no code implementations • 7 Aug 2018 • Emre Ozfatura, Deniz Gunduz, Sennur Ulukus

In most of the existing DGD schemes, either with coded computation or coded communication, the non-straggling CSs transmit one message per iteration once they complete all their assigned computation tasks.
