DRL Based Adaptive Network Slicing for 5G Applications

Tech ID: 20B187

Competitive Advantages

  • The network learns dynamically through interaction with the environment and adapts to it.
  • Edge controllers can efficiently manage the edge resources by enabling cooperation among the fog nodes.
  • Optimally slices the network according to task load and latency requirements.

Summary

Researchers at the University of South Florida have developed a network slicing model that solves the resource allocation problem in the upcoming Fog-RAN (F-RAN) architecture of 5G communication, overcoming the latency limitations of cloud-RAN in vehicular and smart-city applications. The model formulates network slicing in a Fog-RAN as an infinite-horizon Markov decision process (MDP) solved with deep reinforcement learning (DRL) at the edge controllers, i.e., the fog nodes. Each fog node serves as a cluster head that learns an optimal way of allocating limited cloud computing and processing resources to vehicular and smart-city applications with differentiated latency needs, according to their task loads. Because the network learns dynamically through interaction with its environment, it adapts to it: in a real dynamic environment with changing traffic distributions, the model can converge to an optimal slicing policy that maximizes performance.
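To illustrate the idea of an edge controller learning a slicing policy from interaction, the toy sketch below uses tabular Q-learning as a simple stand-in for the DRL agent described above. Everything here is an assumption for illustration: the block count, the discretized load levels, the latency-penalty reward model, and the learning hyperparameters are not taken from the patented technology.

```python
import random

# Toy sketch (illustrative assumptions only): a fog node splits a fixed
# pool of resource blocks between a latency-critical vehicular slice and
# a smart-city slice, learning from a latency-penalty reward.
N_BLOCKS = 10                   # total resource blocks at the fog node (assumed)
LOAD_LEVELS = 3                 # discretized task-load levels: low/med/high
ACTIONS = range(N_BLOCKS + 1)   # blocks assigned to the vehicular slice

def reward(state, a):
    """Negative latency penalty: delay grows with load and shrinks with
    allocated blocks; vehicular latency is weighted higher (toy model)."""
    veh_load, city_load = state
    veh_delay = (veh_load + 1) / (a + 1)
    city_delay = (city_load + 1) / (N_BLOCKS - a + 1)
    return -(2.0 * veh_delay + city_delay)

Q = {}
def q(s, a):
    return Q.get((s, a), 0.0)

random.seed(0)
alpha, gamma = 0.1, 0.9
state = (0, 0)
for _ in range(30000):
    # Off-policy Q-learning with purely random exploration; task loads
    # arrive independently of the chosen allocation in this toy model.
    a = random.choice(list(ACTIONS))
    r = reward(state, a)
    nxt = (random.randrange(LOAD_LEVELS), random.randrange(LOAD_LEVELS))
    best_next = max(q(nxt, x) for x in ACTIONS)
    Q[(state, a)] = q(state, a) + alpha * (r + gamma * best_next - q(state, a))
    state = nxt

# Learned slicing policy: vehicular-slice blocks for each load state.
policy = {(i, j): max(ACTIONS, key=lambda x: q((i, j), x))
          for i in range(LOAD_LEVELS) for j in range(LOAD_LEVELS)}
```

After training, the greedy policy shifts resource blocks toward whichever slice is heavily loaded, giving extra weight to the vehicular slice's latency needs; a production F-RAN controller would replace the lookup table with a deep network over continuous load and latency observations.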

IoV and Smart City Environment

Desired Partnerships

  • License
  • Sponsored Research
  • Co-Development

Technology Transfer
TTOinfo@usf.edu
(813) 974-0994

Patents