Article (Scientific journals)
DDNSAS: Deep reinforcement learning based deep Q-learning network for smart agriculture system
Devarajan, Ganesh Gopal; NAGARAJAN, Senthil Murugan; T.V., Ramana et al.
2023, in Sustainable Computing: Informatics and Systems, 39 (September), p. 100890
Peer reviewed
 

Files
Full Text: Sustainable Computing Informatics and Systems.pdf (Author postprint, 2.85 MB)
Details



Keywords :
Ant colony optimization; Convergence speed; Deep reinforcement learning; Smart agriculture; Unmanned aerial vehicle; Agriculture systems; Global population; Learning network; Performance; Q-learning; Reinforcement learning; Smart agriculture systems; Unmanned aerial vehicles; Computer Science (all); Electrical and Electronic Engineering; General Computer Science
Abstract :
[en] As the global population continues to grow and environmental conditions become increasingly unpredictable, meeting the demand for food becomes ever more difficult. Smart agriculture has emerged as a key technology for overcoming these challenges. Deep learning models combined with the Internet of Things (IoT), unmanned aerial vehicles (UAVs), and edge-fog-cloud architectures enable smart agriculture as a key component of the next agricultural revolution. In this work, we present a two-stage, end-to-end DRL-based smart agricultural system. In stage one, we propose an ACO-enabled DQN (MACO-DQN) model to offload tasks, including fire detection, pest detection, crop growth monitoring, irrigation scheduling, soil monitoring, climate monitoring, and field monitoring. The MACO-DQN model offloads each task to edge, fog, or cloud networking devices based on latency, energy consumption, and computing power. Once a task has been offloaded to a computing device (edge, fog, or cloud), the prediction and monitoring of the various agricultural activities is performed in stage two. In stage two, we propose a DRL-based DQN (RL-DQN) model for predicting and monitoring agricultural task activities. Finally, we present experimental findings for the proposed model that show a marked improvement in convergence speed, planning success rate, and path accuracy. To evaluate its performance, the method presented in this paper was compared to a traditional deep Q-network-based reinforcement learning method under consistent experimental conditions. Overall, 98.5% precision, 99.1% recall, 98.1% F-measure, and 98.5% accuracy are obtained with the proposed methodology, and on these performance results the model outperforms existing methodologies.
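The stage-one decision described in the abstract is, at its core, a choice among edge, fog, and cloud tiers driven by latency, energy consumption, and computing power. A minimal illustrative sketch of such a cost-based tier selection is shown below; this is not the authors' MACO-DQN (which uses ant colony optimization with a deep Q-network), and all names, weights, and device figures here are hypothetical.

```python
# Hypothetical sketch of a stage-one offloading decision: pick the tier
# (edge, fog, or cloud) that minimizes a weighted cost over latency,
# energy consumption, and available computing power. This is NOT the
# paper's MACO-DQN; weights and device parameters are made up.

from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    latency_ms: float      # expected round-trip latency for the task
    energy_mj: float       # energy cost per task, in millijoules
    compute_gflops: float  # computing power available at this tier


def offload_cost(tier: Tier, w_lat: float = 0.5,
                 w_energy: float = 0.3, w_compute: float = 0.2) -> float:
    # Lower latency and energy are better; higher compute is better,
    # so computing power enters the cost inversely.
    return (w_lat * tier.latency_ms
            + w_energy * tier.energy_mj
            + w_compute / tier.compute_gflops)


def choose_tier(tiers: list[Tier]) -> Tier:
    # Greedy selection of the cheapest tier for this task.
    return min(tiers, key=offload_cost)


tiers = [
    Tier("edge", latency_ms=5, energy_mj=80, compute_gflops=10),
    Tier("fog", latency_ms=20, energy_mj=40, compute_gflops=100),
    Tier("cloud", latency_ms=120, energy_mj=25, compute_gflops=1000),
]

print(choose_tier(tiers).name)  # prints "fog" for these example figures
```

In the paper's system this greedy rule is replaced by a learned Q-function, so the offloading policy can adapt as observed latency and energy costs change rather than relying on fixed weights.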
Disciplines :
Computer science
Author, co-author :
Devarajan, Ganesh Gopal;  Department of Computer Science and Engineering, SRM Institute of Science and Technology, Delhi-NCR Campus, Modinagar, India
NAGARAJAN, Senthil Murugan  ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Mathematics (DMATH)
T.V., Ramana;  Computer Science and Engineering Department, Jain University, Bangalore, India
T., Vignesh;  Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India
Ghosh, Uttam;  Department of Computer Science and Data Science, Meharry Medical College, United States
Alnumay, Waleed;  Department of Computer Science, Riyadh Community College, King Saud University, P.O.Box 28095, Riyadh, Saudi Arabia
External co-authors :
yes
Language :
English
Title :
DDNSAS: Deep reinforcement learning based deep Q-learning network for smart agriculture system
Publication date :
September 2023
Journal title :
Sustainable Computing: Informatics and Systems
ISSN :
2210-5379
Publisher :
Elsevier Inc.
Volume :
39
Issue :
September
Pages :
100890
Peer reviewed :
Peer reviewed
Funders :
King Saud University
Funding text :
Waleed Alnumay acknowledges support from the Researchers Supporting Project number (RSP2023R250), King Saud University, Riyadh, Saudi Arabia.
Available on ORBilu :
since 24 November 2023

Statistics

Number of views: 114 (2 by Unilu)
Number of downloads: 0 (0 by Unilu)
Scopus citations®: 49 (47 without self-citations)
OpenAlex citations: 36
WoS citations: 26
