Resource Management for Edge Intelligence (EI)-Assisted IoV Using Quantum-Inspired Reinforcement Learning
IEEE Internet of Things Journal
  • Dan Wang, Xidian University, State Key Laboratory of Integrated Services Networks, Xi'an, 710071, China
  • Bin Song, Xidian University, State Key Laboratory of Integrated Services Networks, Xi'an, 710071, China
  • Peng Lin, School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, 210044, China
  • F. Richard Yu, Carleton University, Department of Systems and Computer Engineering, Ottawa, K1S 5B6, ON, Canada
  • Xiaojiang Du, Stevens Institute of Technology, Department of Electrical and Computer Engineering, Hoboken, 07030, NJ, United States
  • Mohsen Guizani, Mohamed bin Zayed University of Artificial Intelligence
Document Type
Article
Abstract

Recent developments in the Internet of Vehicles (IoV) enable interconnected vehicles to support ubiquitous services. Various emerging applications promise to improve the Quality of Experience (QoE) of users, but the on-board computation tasks they generate heavily overload resource-constrained vehicles, forcing them to offload tasks to edge intelligence (EI)-assisted servers. However, excessive task offloading can lead to severe competition for communication and computation resources among vehicles, thereby increasing processing latency, energy consumption, and system cost. To address these problems, we investigate the transmission-aware and computation-sensitive uplink resource management problem and formulate it as a time-varying Markov decision process. Considering the total delay, energy consumption, and cost, quantum-inspired reinforcement learning (QRL) is proposed to develop an intelligence-oriented edge offloading strategy. Specifically, each vehicle can flexibly choose its network access mode and offloading strategy through two radio interfaces, offloading tasks either to multiaccess edge computing (MEC) servers over WiFi or to cloud servers over 5G. The objective of the joint optimization is to maintain a self-adaptive balance between these two offloading modes. Simulation results show that the proposed algorithm significantly reduces both transmission latency and computation delay. © 2022 IEEE.
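
The abstract names the technique but, being an abstract, gives no algorithmic detail. The Python sketch below illustrates the general quantum-inspired RL pattern (amplitude-based action selection with a Grover-style reinforcement of amplitudes) applied to a toy offloading choice. It is a minimal illustration under assumed parameters, not the authors' algorithm: the QRLAgent class, the three-action space, the learning constants, and the reward shape are all hypothetical.

```python
import numpy as np

class QRLAgent:
    """Minimal tabular quantum-inspired RL sketch (hypothetical, not the paper's code)."""

    def __init__(self, n_states, n_actions, seed=0):
        self.rng = np.random.default_rng(seed)
        # Uniform "superposition": every action starts equally likely.
        self.amps = np.full((n_states, n_actions), 1.0 / np.sqrt(n_actions))
        self.q = np.zeros((n_states, n_actions))  # conventional value table

    def select_action(self, state):
        # Simulated measurement: P(action) = |amplitude|^2.
        probs = self.amps[state] ** 2
        return self.rng.choice(len(probs), p=probs / probs.sum())

    def update(self, s, a, r, s_next, alpha=0.1, gamma=0.9, k=0.05):
        # Standard temporal-difference (Q-learning) value update.
        self.q[s, a] += alpha * (r + gamma * self.q[s_next].max() - self.q[s, a])
        # Grover-like amplification: reinforce the taken action's amplitude
        # in proportion to its learned value, then renormalize the state.
        self.amps[s, a] += k * self.q[s, a]
        self.amps[s] /= np.linalg.norm(self.amps[s])

# Hypothetical usage: one decision per task with three offloading actions
# (0 = process locally, 1 = MEC server over WiFi, 2 = cloud server over 5G);
# the reward would be a negative weighted sum of delay, energy, and cost.
agent = QRLAgent(n_states=10, n_actions=3)
action = agent.select_action(0)
agent.update(s=0, a=action, r=-1.2, s_next=3)
```

The amplitude update is what distinguishes this pattern from plain Q-learning: exploration is driven by the squared-amplitude distribution rather than an epsilon-greedy rule, so frequently rewarded actions are amplified gradually instead of the policy switching abruptly between explore and exploit.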

DOI
10.1109/JIOT.2021.3137984
Publication Date
7-15-2022
Keywords
  • Cloud computing
  • edge intelligence (EI)
  • Internet of Vehicles (IoV)
  • multiaccess edge computing (MEC)
  • quantum-inspired reinforcement learning (QRL)
  • Edge computing
  • Energy utilization
  • Job analysis
  • Markov processes
  • Quality of service
  • Reinforcement learning
  • Resource allocation
  • Vehicles
Comments

IR deposit conditions:
  • OA version (pathway a): accepted version
  • No embargo
  • When accepted for publication, set statement to accompany deposit (see policy)
  • Must link to publisher version with DOI
  • Publisher copyright and source must be acknowledged

Citation Information
D. Wang, B. Song, P. Lin, F. R. Yu, X. Du and M. Guizani, "Resource Management for Edge Intelligence (EI)-Assisted IoV Using Quantum-Inspired Reinforcement Learning," in IEEE Internet of Things Journal, vol. 9, no. 14, pp. 12588-12600, 15 July 2022, doi: 10.1109/JIOT.2021.3137984.