Deep Reinforcement Learning Based Secure Transmission for UAV-Assisted Mobile Edge Computing
DOI: https://doi.org/10.3991/ijim.v18i17.50729
Keywords: Mobile Edge Computing, UAV Communication, Secure Transmission, Deep Reinforcement Learning
Abstract
The increasing computational demand of real-time mobile applications has driven the development of mobile edge computing (MEC) supported by unmanned aerial vehicles (UAVs), a promising paradigm that establishes high-throughput line-of-sight links to ground users and pushes computational resources to the network edge. By offloading tasks to a UAV acting as an edge server, users can reduce processing latency and the load on their local devices. The coverage capacity of a single UAV is, however, limited, and the data transmitted to the UAV is vulnerable to eavesdropping. In this study, we therefore propose a secure transmission scheme based on multi-agent deep reinforcement learning for UAV-assisted mobile edge computing. The proposed approach first applies the particle swarm optimization algorithm to optimize UAV deployment. Deep reinforcement learning is then used to optimize secure offloading, maximizing system utility while minimizing the amount of information intercepted by eavesdroppers, taking into account different user task types with diverse preferences for processing time and the residual energy of the computing equipment. Simulation results demonstrate that, compared with the single-agent strategy and the benchmark, the multi-agent approach optimizes offloading more effectively and yields higher system utility.
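The abstract does not give the algorithmic details of the first stage, but as a rough illustration, the Python sketch below uses a generic particle swarm optimizer to place a single UAV over a set of ground users. The user positions, fitness function (mean squared horizontal distance to users, standing in for a coverage objective), area size, and all PSO parameters are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

# Hypothetical user positions (x, y) on a 1 km x 1 km area; not taken from the paper.
rng = np.random.default_rng(0)
users = rng.uniform(0, 1000, size=(20, 2))

def deployment_cost(uav_xy):
    """Assumed fitness: mean squared ground distance from users to the UAV's
    horizontal position (a placeholder for the paper's deployment objective)."""
    return np.mean(np.sum((users - uav_xy) ** 2, axis=1))

def pso_deploy(n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO loop: inertia w, cognitive weight c1, social weight c2."""
    pos = rng.uniform(0, 1000, size=(n_particles, 2))   # candidate UAV positions
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                   # per-particle best positions
    pbest_cost = np.array([deployment_cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()          # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1000)                # keep particles inside the area
        cost = np.array([deployment_cost(p) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

print("PSO-selected UAV horizontal position:", pso_deploy())
```

In the two-stage pipeline described above, a position found this way would then be fixed while the multi-agent deep reinforcement learning stage learns the secure offloading decisions; that second stage is not sketched here.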
License
Copyright (c) 2024 N. Vijayalakshmi, Sagar Gulati, Ben Sujin. B, B. Madhav Rao, K. Kiran Kumar
This work is licensed under a Creative Commons Attribution 4.0 International License.