Wassie, Solomon Fikadie; Di Maio, Antonio (ORCID 0000-0001-8495-8926); Braun, Torsten (ORCID 0000-0001-5968-7108)

Title: Deep Reinforcement Learning for Context-Aware Online Service Function Chain Deployment and Migration over 6G Networks
Type: conference_item
Date: 2024-12-17 (2024)
Handle: https://boris-portal.unibe.ch/handle/20.500.12422/194098
DOI: 10.48620/78497; 10.1145/3672608.3707975
Language: English
Keywords: 6G Network Architecture; Cloud Continuum Framework; Service Orchestrator; Deep Reinforcement Learning

Abstract: The Cloud Continuum Framework (CCF) logically integrates distributed extreme-edge, far-edge, near-edge, and cloud data centers in 6G networks. Deploying Virtual Network Functions (VNFs) over the CCF can enhance network performance and Quality of Service (QoS) for modern delay-sensitive applications and use cases in 6G networks. Deep Reinforcement Learning (DRL) has shown potential to automate VNF migrations by learning optimal policies through continuous monitoring of the network environment. In this work, we leverage DRL to optimize network control policies that continuously update VNF placement for optimal Service Function Chain (SFC) deployment under time-varying user traffic. By dynamically relocating VNFs, this approach seeks to improve network performance in terms of latency, operational cost, scalability, and flexibility. This study addresses a gap in existing solutions by jointly considering network performance requirements and migration costs, providing a more comprehensive strategy for efficient VNF deployment and management. We show that our proposed DRL-based VNF deployment method achieves 28.8% lower delay and 34% lower migration overhead than state-of-the-art baselines across a broad range of large-scale simulated scenarios, demonstrating the proposed method's scalability.
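
The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of the decision problem it describes: a tabular Q-learning agent choosing which CCF tier should host a single VNF as user traffic varies, with a reward that jointly penalizes latency and migration overhead. Every concrete value here (tier names, latency figures, capacities, the flat migration cost, the two-level traffic model) is invented for illustration and does not reflect the paper's actual DRL architecture or parameters.

```python
# Minimal sketch (NOT the paper's method): tabular Q-learning for placing one
# VNF on one of four Cloud Continuum tiers under time-varying traffic.
# All tier names, latencies, capacities, and costs are illustrative assumptions.
import random

TIERS = ["extreme_edge", "far_edge", "near_edge", "cloud"]
TRAFFIC = ["low", "high"]                       # coarse user-traffic context
BASE_LATENCY = {"extreme_edge": 2, "far_edge": 4, "near_edge": 7, "cloud": 12}
CAPACITY = {"extreme_edge": 1, "far_edge": 2, "near_edge": 4, "cloud": 8}
MIGRATION_COST = 3                              # assumed flat relocation penalty

def latency(tier, traffic):
    """Per-step latency: base latency inflated when load exceeds tier capacity."""
    load = 6 if traffic == "high" else 1
    return BASE_LATENCY[tier] * max(1.0, load / CAPACITY[tier])

Q = {(t, tr, a): 0.0 for t in TIERS for tr in TRAFFIC for a in TIERS}
alpha, gamma, eps = 0.1, 0.9, 0.2               # learning rate, discount, exploration

tier, traffic = "cloud", "low"
for _ in range(50000):
    # Epsilon-greedy choice of the tier that hosts the VNF in the next step.
    if random.random() < eps:
        action = random.choice(TIERS)
    else:
        action = max(TIERS, key=lambda a: Q[(tier, traffic, a)])
    # Reward jointly penalizes latency and (if the VNF moves) migration cost,
    # echoing the abstract's joint performance/migration-overhead objective.
    reward = -latency(action, traffic) - (MIGRATION_COST if action != tier else 0)
    next_traffic = random.choice(TRAFFIC)       # toy time-varying traffic process
    best_next = max(Q[(action, next_traffic, a)] for a in TIERS)
    Q[(tier, traffic, action)] += alpha * (reward + gamma * best_next
                                           - Q[(tier, traffic, action)])
    tier, traffic = action, next_traffic

# Inspect the learned placement policy per (current tier, traffic) context.
for tr in TRAFFIC:
    for t in TIERS:
        best = max(TIERS, key=lambda a: Q[(t, tr, a)])
        print(f"traffic={tr:4s} at {t:12s} -> {best}")
```

In this toy setup, low traffic favors edge tiers (lowest base latency) while high traffic overloads small tiers and favors larger data centers, so the learned policy must weigh the per-step latency gain of relocating against the one-off migration penalty; the paper's DRL approach addresses the same trade-off at realistic scale with deep function approximation rather than a lookup table.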