Document type: Journal article
Discipline: Engineering, computing & technology > Electrical & electronics engineering
Energy Minimization in UAV-Aided Networks: Actor-Critic Learning for Constrained Scheduling Optimization
English
Yuan, Yaxiong mailto [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SigCom >]
Lei, Lei mailto [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SigCom >]
Vu, Thang Xuan mailto [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SigCom >]
Chatzinotas, Symeon mailto [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SigCom >]
Sun, Sumei mailto [Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), Singapore]
Ottersten, Björn mailto [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > >]
Publication date: 27-Apr-2021
Journal: IEEE Transactions on Vehicular Technology
Publisher: Institute of Electrical and Electronics Engineers
Peer reviewed: Yes
Audience: International
ISSN: 0018-9545
Country: United States
Keywords [en]: UAV; deep reinforcement learning; user scheduling; hovering time allocation; energy optimization; actor-critic
[en] In unmanned aerial vehicle (UAV) applications, the UAV's limited energy supply and storage have motivated the development of intelligent energy-conserving scheduling solutions. In this paper, we investigate energy minimization for UAV-aided communication networks by jointly optimizing data-transmission scheduling and UAV hovering time. The formulated problem is combinatorial and non-convex with bilinear constraints. To tackle the problem, we first provide an optimal relax-and-approximate solution and then develop a near-optimal algorithm. Both proposed solutions serve as offline performance benchmarks but may not be suitable for online operation. To this end, we develop a solution from a deep reinforcement learning (DRL) perspective. Conventional RL/DRL methods, e.g., deep Q-learning, however, are limited in dealing with two main issues in constrained combinatorial optimization: an exponentially growing action space and infeasible actions. The novelty of the solution development lies in handling these two issues. To address the former, we propose an actor-critic-based deep stochastic online scheduling (AC-DSOS) algorithm and develop a set of approaches to confine the action space. For the latter, we design a tailored reward function that guarantees solution feasibility. Numerical results show that, with comparable computational time, AC-DSOS provides feasible solutions and saves 29.94% energy compared with a conventional deep actor-critic method. Compared to the developed near-optimal algorithm, AC-DSOS consumes around 10% more energy but reduces the computational time from minutes to milliseconds.
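The abstract names two generic techniques: confining an exponentially growing discrete action space and shaping the reward to penalize infeasible actions. The following is a minimal illustrative sketch of both ideas, not the authors' actual AC-DSOS formulation; the function names, the cap on scheduled users per slot, and the penalty constant are all assumptions made for illustration.

```python
from itertools import combinations


def confined_actions(num_users, max_scheduled):
    """Enumerate only user subsets of size <= max_scheduled per slot.

    The unrestricted action space contains 2**num_users subsets; capping the
    number of simultaneously scheduled users shrinks it to a sum of binomial
    coefficients, which is one simple way to confine the action space.
    """
    actions = []
    for m in range(1, max_scheduled + 1):
        actions.extend(combinations(range(num_users), m))
    return actions


def shaped_reward(energy, num_violations, penalty=100.0):
    """Feasibility-aware reward: minimize energy, punish constraint violations.

    A feasible action (no violations) is rewarded with the negative consumed
    energy; an infeasible one receives a large penalty proportional to the
    number of violated constraints, steering the policy toward feasibility.
    """
    if num_violations > 0:
        return -penalty * num_violations
    return -energy
```

For example, with 8 users and at most 2 scheduled per slot, the agent chooses among 36 actions instead of 256, and any action violating a constraint is dominated by every feasible one under the shaped reward.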
Target audience: Researchers; Professionals; Students; Others
Handle: http://hdl.handle.net/10993/47465
DOI: 10.1109/TVT.2021.3075860
Publisher URL: https://ieeexplore.ieee.org/document/9416816
FnR ; FNR12173206 > Srikanth Bommaraveni > LARGOS > Learning-assisted Optimization For Resource And Security Management In Slicing-based 5g Networks > 15/03/2018 > 14/03/2022 > 2017

File(s) associated with this reference

Fulltext file(s):

File: 09416816.pdf
Version: Publisher postprint
Size: 5.18 MB
Access: Open access (View/Open)


All documents in ORBilu are protected by a user license.