Deep Reinforcement Learning for Context-Aware Online Service Function Chain Deployment and Migration over 6G Networks
Date of Publication
2024
Publication Type
Conference Paper
Publisher
Association for Computing Machinery
Language
English
Description
The Cloud Continuum Framework (CCF) logically integrates distributed extreme edge, far edge, near edge, and cloud data centers in 6G networks. Deploying Virtual Network Functions (VNFs) over the CCF can enhance network performance and Quality of Service (QoS) for modern delay-sensitive applications and use cases. Deep Reinforcement Learning (DRL) has shown potential to automate VNF migrations by learning optimal policies through continuous monitoring of the network environment. In this work, we leverage DRL to optimize network control policies that continuously update VNF placement for optimal Service Function Chain (SFC) deployment under time-varying user traffic. Through dynamic VNF relocation, this approach seeks to improve network performance in terms of latency, operational cost, scalability, and flexibility. This study addresses a gap in existing solutions by jointly considering network performance requirements and migration costs, providing a more comprehensive strategy for efficient VNF deployment and management. We show that our proposed DRL-based VNF deployment method achieves 28.8% lower delay and 34% lower migration overhead than state-of-the-art baselines across a broad range of large-scale simulated scenarios, demonstrating the proposed method's scalability.
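As a rough illustration of the joint objective the abstract describes (penalizing end-to-end delay together with VNF migration overhead), the following minimal Python sketch shows one way such a DRL reward signal could be shaped. All names, the placement representation, the per-move cost model, and the weights alpha and beta are illustrative assumptions; they are not taken from the paper.

```python
# Minimal sketch of a reward that jointly penalizes SFC end-to-end delay
# and VNF migration cost. Everything here is an illustrative assumption,
# not the paper's actual formulation or code.
from dataclasses import dataclass


@dataclass
class SfcState:
    placement: tuple[int, ...]   # node index hosting each VNF in the chain
    e2e_delay_ms: float          # measured end-to-end delay of the SFC


def migration_cost(prev: SfcState, new: SfcState,
                   cost_per_move: float = 1.0) -> float:
    """Count relocated VNFs; each move incurs a fixed illustrative cost."""
    moves = sum(p != n for p, n in zip(prev.placement, new.placement))
    return moves * cost_per_move


def reward(prev: SfcState, new: SfcState,
           alpha: float = 1.0, beta: float = 0.5) -> float:
    """Negative weighted sum of delay and migration overhead.

    alpha and beta trade off QoS against relocation churn; both are
    assumed hyperparameters, not values from the paper.
    """
    return -(alpha * new.e2e_delay_ms + beta * migration_cost(prev, new))


if __name__ == "__main__":
    before = SfcState(placement=(0, 2, 5), e2e_delay_ms=18.0)
    after = SfcState(placement=(0, 3, 5), e2e_delay_ms=12.0)  # one VNF moved
    print(reward(before, after))  # -(1.0 * 12.0 + 0.5 * 1.0) = -12.5
```

Under this kind of shaping, an agent is only rewarded for relocating a VNF when the resulting delay reduction outweighs the migration penalty, which matches the trade-off between performance requirements and migration costs described above.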