ESE Fall Seminar – “Learning-NUM: Utility Maximization in Stochastic Queueing Networks”
October 15, 2024 at 11:00 AM - 12:00 PM
We consider the problem of network utility maximization (NUM) and propose a new Learning-NUM framework in which the users' utility functions are unknown a priori and utility values can be observed only after the corresponding traffic is delivered to the destination.

We start with linear utility functions and propose a priority-based network control policy that combines techniques from network control and multi-armed bandits to achieve logarithmic regret. We then turn to concave utility functions and design the Gradient Sampling Max-Weight (GSMW) algorithm, which combines gradient estimation with Max-Weight scheduling and achieves sublinear utility regret. We further demonstrate the applicability of the gradient sampling approach to minimum-delay routing in wireless networks.

Finally, we consider the general problem of reinforcement learning for queueing networks with unbounded state spaces, with the goal of making control decisions that minimize queue lengths. We formulate the problem as an MDP and propose a new reinforcement learning framework, called Truncated Upper Confidence Reinforcement Learning (TUCRL), that achieves optimal performance. We show how this framework can be applied to deep reinforcement learning (DRL) for online stochastic network optimization.
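To give a feel for the gradient-sampling idea, here is a minimal, purely illustrative sketch (not the speaker's exact GSMW algorithm): the controller never sees the utility functions themselves, only sampled utility values after delivery; it forms a two-point finite-difference gradient estimate and trades that estimated utility gain against queue backlog in a Max-Weight-style rule. All names, parameters (`step`, `V`, `capacity`), and the toy utility model are assumptions made for this sketch.

```python
def true_utility(flow, rate):
    # Unknown to the controller; only sampled values of this function
    # are ever observed (here a toy concave utility for illustration).
    return (flow + 1) * (rate ** 0.5)

def estimate_gradient(flow, rate, delta=0.1):
    # Two-point finite-difference gradient estimate built from two
    # observed utility samples around the current admitted rate.
    up = true_utility(flow, min(rate + delta, 1.0))
    down = true_utility(flow, max(rate - delta, 0.0))
    return (up - down) / (2 * delta)

def gsmw_step(queues, rates, capacity=2.0, step=0.05, V=10.0):
    # Admission: nudge each flow's rate along its estimated utility
    # gradient, weighed against its queue backlog (a Max-Weight-style
    # utility-vs-backlog trade-off; V sets the relative weight).
    for f in range(len(rates)):
        grad = estimate_gradient(f, rates[f])
        drift = V * grad - queues[f]      # estimated gain vs. backlog cost
        direction = 1.0 if drift > 0 else -1.0
        rates[f] = min(max(rates[f] + step * direction, 0.0), 1.0)
        queues[f] += rates[f]             # admitted traffic joins the queue
    # Service: a single shared server serves the longest queue
    # (Max-Weight scheduling over one server, for simplicity).
    longest = max(range(len(queues)), key=lambda f: queues[f])
    queues[longest] = max(queues[longest] - capacity, 0.0)
    return queues, rates
```

Run repeatedly, the backlog term pulls admitted rates back once queues grow, so rates settle where estimated marginal utility balances congestion; this is the intuition behind the sublinear-regret guarantee, not a proof of it.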

