Search Results for author: Eytan Modiano

Found 13 papers, 0 papers with code

Learning to Schedule in Non-Stationary Wireless Networks With Unknown Statistics

no code implementations • 4 Aug 2023 • Quang Minh Nguyen, Eytan Modiano

We propose a novel algorithm termed MW-UCB for generalized wireless network scheduling, which is based on the Max-Weight policy and leverages the Sliding-Window Upper-Confidence Bound to learn the channels' statistics under non-stationarity.

Scheduling
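The MW-UCB idea in the abstract combines the classical Max-Weight policy (schedule the link with the largest queue-length × rate product) with sliding-window UCB estimates of unknown, non-stationary channel rates. The sketch below is an illustrative toy, not the paper's algorithm; the function names and the window/bonus parameters are assumptions.

```python
import math
from collections import deque

def sliding_window_ucb(history, t, window, alpha=2.0):
    """Optimistic rate estimate from the last `window` observations of a channel."""
    recent = list(history)[-window:]
    if not recent:
        return float("inf")  # force exploration of never-observed channels
    mean = sum(recent) / len(recent)
    bonus = math.sqrt(alpha * math.log(t + 1) / len(recent))
    return mean + bonus

def mw_ucb_schedule(queues, histories, t, window=50):
    """Max-Weight with UCB rates: pick the link maximizing queue x UCB estimate."""
    scores = [q * sliding_window_ucb(h, t, window)
              for q, h in zip(queues, histories)]
    return max(range(len(queues)), key=lambda i: scores[i])
```

Restricting the estimate to a sliding window is what lets the index track channel statistics that drift over time, at the cost of a larger confidence bonus.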

An Online Learning Approach to Optimizing Time-Varying Costs of AoI

no code implementations • 27 May 2021 • Vishrant Tripathi, Eytan Modiano

We consider non-stationary and adversarial mobility models and illustrate the performance benefit of using our online learning algorithms compared to an oblivious scheduling policy.

Scheduling

Age Debt: A General Framework For Minimizing Age of Information

no code implementations • 25 Jan 2021 • Vishrant Tripathi, Eytan Modiano

We consider the problem of minimizing age of information in general single-hop and multihop wireless networks.

Information Theory Networking and Internet Architecture

WiFresh: Age-of-Information from Theory to Implementation

no code implementations • 28 Dec 2020 • Igor Kadota, Muhammad Shahir Rahman, Eytan Modiano

In this paper, we show that as the congestion in the wireless network increases, the Age-of-Information degrades sharply, leading to outdated information at the destination.

Networking and Internet Architecture Systems and Control

Aging Bandits: Regret Analysis and Order-Optimal Learning Algorithm for Wireless Networks with Stochastic Arrivals

no code implementations • 16 Dec 2020 • Eray Unsal Atay, Igor Kadota, Eytan Modiano

The goal of the learning algorithm is to minimize the Age-of-Information (AoI) in the network over $T$ time slots.

Thompson Sampling
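The Age-of-Information objective in this abstract can be illustrated with its basic dynamics: age grows by one each slot and resets when a fresh packet is delivered. The snippet below is a minimal toy of that recursion, not the paper's stochastic-arrival setting or its learning algorithm; the function name is an assumption.

```python
def simulate_aoi(deliveries):
    """Age-of-Information trajectory over T slots: age grows by 1 each slot
    and resets to 1 when a fresh packet is successfully delivered."""
    age, trace = 0, []
    for success in deliveries:
        age = 1 if success else age + 1
        trace.append(age)
    return trace
```

Minimizing the time-average of such a trajectory is what distinguishes AoI scheduling from plain throughput maximization: infrequent but regular deliveries can beat bursty ones.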

Learning-NUM: Network Utility Maximization with Unknown Utility Functions and Queueing Delay

no code implementations • 16 Dec 2020 • Xinzhe Fu, Eytan Modiano

Network Utility Maximization (NUM) studies the problems of allocating traffic rates to network users in order to maximize the users' total utility subject to network resource constraints.

Scheduling
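The NUM problem described above is classically solved by dual (price-based) decomposition: a link price is adjusted until demand meets capacity, and each user best-responds to the price. The sketch below solves the textbook instance max Σ log(x_i) subject to Σ x_i ≤ C, whose optimum is the proportionally fair split x_i = C/n; it is a generic illustration, not the paper's Learning-NUM method, and the step size and iteration count are assumptions.

```python
def num_log_utility(n_users, capacity, steps=2000, lr=0.01):
    """Dual gradient solution of max sum log(x_i) s.t. sum x_i <= capacity.
    Each user's best response to price p is x_i = 1/p; the price rises when
    total demand exceeds capacity and falls otherwise."""
    price = 1.0
    rates = [1.0] * n_users
    for _ in range(steps):
        rates = [1.0 / price] * n_users          # user best response
        excess = sum(rates) - capacity           # congestion signal
        price = max(1e-6, price + lr * excess)   # dual (price) update
    return rates
```

The Learning-NUM setting is harder because the utility functions themselves are unknown, so the best response must be learned from feedback rather than computed in closed form.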

RL-QN: A Reinforcement Learning Framework for Optimal Control of Queueing Systems

no code implementations • 14 Nov 2020 • Bai Liu, Qiaomin Xie, Eytan Modiano

In this work, we consider using model-based reinforcement learning (RL) to learn the optimal control policy for queueing networks so that the average job delay (or equivalently the average queue backlog) is minimized.

Model-based Reinforcement Learning reinforcement-learning +1
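The queue backlog that RL-QN-style controllers aim to keep small evolves by the standard Lindley recursion Q_{t+1} = max(Q_t + a_t − s_t, 0). The snippet below just simulates that recursion as background for the abstract; it is not the paper's RL framework, and the function name is an assumption.

```python
def queue_backlog(arrivals, service):
    """Lindley recursion for a single queue: backlog grows by arrivals a_t,
    shrinks by service s_t, and is clipped at zero."""
    q, trace = 0, []
    for a, s in zip(arrivals, service):
        q = max(q + a - s, 0)
        trace.append(q)
    return trace
```

By Little's law, keeping the time-average of this backlog small is equivalent to keeping the average job delay small, which is why the abstract treats the two objectives interchangeably.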

Learning Algorithms for Minimizing Queue Length Regret

no code implementations • 11 May 2020 • Thomas Stahlbuhk, Brooke Shrader, Eytan Modiano

The transmitter may attempt a frame transmission on one channel at a time, where each frame includes a packet if one is in the queue.

A Theory of Uncertainty Variables for State Estimation and Inference

no code implementations • 24 Sep 2019 • Rajat Talak, Sertac Karaman, Eytan Modiano

Probability theory starts with a distribution function (equivalently, a probability measure) as a primitive and builds on it all other useful concepts, such as the law of total probability, Bayes' law, independence, graphical models, and point estimates.

Learning to Route Efficiently with End-to-End Feedback: The Value of Networked Structure

no code implementations • 24 Oct 2018 • Ruihao Zhu, Eytan Modiano

We introduce efficient algorithms which achieve nearly optimal regrets for the problem of stochastic online shortest path routing with end-to-end feedback.

Data-driven Localization and Estimation of Disturbance in the Interconnected Power System

no code implementations • 4 Jun 2018 • Hyang-Won Lee, Jianan Zhang, Eytan Modiano

Identifying the location and magnitude of a disturbance is an important component of stable power system operation.

regression

Accelerated Primal-Dual Policy Optimization for Safe Reinforcement Learning

no code implementations • 19 Feb 2018 • Qingkai Liang, Fanyu Que, Eytan Modiano

Constrained Markov Decision Process (CMDP) is a natural framework for reinforcement learning tasks with safety constraints, where agents learn a policy that maximizes the long-term reward while satisfying the constraints on the long-term cost.

reinforcement-learning Reinforcement Learning (RL) +1
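The CMDP setup in the abstract is typically handled with a Lagrangian: the agent maximizes reward − λ·cost while a dual ascent step raises the multiplier λ when the long-term cost exceeds its limit. The snippet below shows one such dual update in isolation; it is a generic primal-dual sketch, not the paper's accelerated method, and the step size is an assumption.

```python
def primal_dual_step(reward, cost, limit, lam, lr=0.05):
    """One primal-dual iteration for a CMDP constraint E[cost] <= limit.
    The policy (primal) maximizes the Lagrangian reward - lam * cost;
    the multiplier (dual) rises when the constraint is violated."""
    lagrangian = reward - lam * cost
    lam = max(0.0, lam + lr * (cost - limit))  # projected dual ascent
    return lagrangian, lam
```

Because λ only grows while the constraint is violated, the multiplier acts as an adaptive penalty that trades off reward against safety over the course of training.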

Throughput Optimal Decentralized Scheduling of Multi-Hop Networks with End-to-End Deadline Constraints: II Wireless Networks with Interference

no code implementations • 6 Sep 2017 • Rahul Singh, P. R. Kumar, Eytan Modiano

The key difference arises from the fact that in our setup packets lose their utility once their "age" has crossed their deadline, making the task of optimizing timely throughput much more challenging than that of ensuring network stability.

Scheduling
