Paper published in a book (Scientific congresses, symposiums and conference proceedings)
Network Energy Saving for 6G and Beyond: A Deep Reinforcement Learning Approach
TRAN, Dinh Hieu; Van Huynh, Nguyen; Kaada, Soumeya et al.
2025. In: 2025 IEEE Wireless Communications and Networking Conference, WCNC 2025
Peer reviewed
 

Files


Full Text
Network_Energy_Saving_for_6G_and_Beyond_A_Deep_Reinforcement_Learning_Approach.pdf
Author preprint (409.64 kB)
Details



Keywords :
6G; Deep Q-Network; Network Energy Saving; Power Saving; Self-Organizing Networks (SONs); Energy Savings; Ground Base Stations; Mobile Users; Engineering (all)
Abstract :
[en] Network energy saving has received great attention from operators and vendors as a way to reduce energy consumption, cut CO2 emissions, and significantly lower costs for mobile network operators. However, an energy-saving network design must also ensure mobile users' (MUs) quality-of-service (QoS) requirements, such as throughput requirements (TRs). This work considers a mobile cellular network comprising many ground base stations (GBSs), some of which are turned off, either intentionally for network energy saving (NES) or unintentionally due to failures, so the MUs in those outage areas are left unserved. Based on this observation, we formulate the problem of maximizing the total achievable throughput in the network by optimizing the GBSs' antenna tilts and adaptive transmission powers, subject to serving at least a given number of MUs. Note that an MU is considered successfully served only if both its Reference Signal Received Power (RSRP) and throughput requirements are satisfied. The formulated optimization problem is difficult to solve because of its multiple binary variables and nonconvex constraints, together with the random throughput requirements and random placement of MUs. We therefore propose a Deep Q-learning-based algorithm that lets the network learn the uncertainty and dynamics of the transmission environment. Extensive simulation results show that our proposed algorithm achieves much better performance than the benchmark schemes.
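The control loop described in the abstract can be illustrated with a deliberately simplified, single-state Q-learning sketch: a controller picks an antenna-tilt and transmit-power configuration for an active GBS and is rewarded by the number of MUs whose RSRP and throughput requirements are met. The paper itself uses a Deep Q-Network over a far richer state space; the candidate tilt/power sets and the toy coverage model below are illustrative assumptions, not the paper's system model.

```python
import random

# Illustrative (not from the paper): candidate downtilt angles (degrees)
# and transmit powers (dBm) form the discrete action set.
TILTS = [0, 5, 10]
POWERS = [30, 36, 43]
ACTIONS = [(t, p) for t in TILTS for p in POWERS]


def served_users(tilt, power):
    """Toy reward: how many MUs are served under (tilt, power).

    Assumed model: more power extends coverage to users stranded by the
    switched-off neighbour GBSs, while a 5-degree tilt is the sweet spot.
    """
    gain = power - 30          # more power -> more users covered
    penalty = abs(tilt - 5)    # deviation from the best tilt loses users
    return max(0, gain - penalty)


def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning over the single-state action set."""
    rng = random.Random(seed)
    q = [0.0] * len(ACTIONS)   # one state, so Q is just a value per action
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(len(ACTIONS))                   # explore
        else:
            a = max(range(len(ACTIONS)), key=q.__getitem__)   # exploit
        reward = served_users(*ACTIONS[a])
        q[a] += alpha * (reward - q[a])    # bandit-style Q update
    best = max(range(len(ACTIONS)), key=q.__getitem__)
    return ACTIONS[best]
```

In the paper's setting the reward is stochastic (random MU placements and throughput demands) and the state captures the network configuration, which is why a deep network replaces this lookup table.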
Disciplines :
Computer science
Author, co-author :
TRAN, Dinh Hieu; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SigCom
Van Huynh, Nguyen
Kaada, Soumeya
Vo, Van Nhan
LAGUNAS, Eva; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SigCom
CHATZINOTAS, Symeon; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SigCom
External co-authors :
yes
Language :
English
Title :
Network Energy Saving for 6G and Beyond: A Deep Reinforcement Learning Approach
Publication date :
2025
Event name :
IEEE Wireless Communications and Networking Conference (WCNC)
Event place :
Milan, Italy
Event date :
From 24 to 27 March 2025
Main work title :
2025 IEEE Wireless Communications and Networking Conference, WCNC 2025
Publisher :
Institute of Electrical and Electronics Engineers Inc.
ISBN/EAN :
9798350368369
Peer reviewed :
Peer reviewed
Available on ORBilu :
since 12 July 2025

