The Third Generation Partnership Project (3GPP) has standardized cellular vehicle-to-everything (C-V2X) sidelink Mode 4 to support direct communication between vehicles. In Mode 4, the sensing-based semipersistent scheduling (SPS) scheme enables vehicles to autonomously select and reserve radio resources. Specifically, SPS realizes resource scheduling through three processes: continuous resource sensing, probabilistic resource reselection, and periodic resource reservation. However, in the reselection process vehicles pick resources at random from the candidate list, which causes frequent packet collisions, especially when radio resources are scarce. Unlike traditional SPS, this paper proposes a multiagent deep reinforcement learning-based SPS (RL-SPS) algorithm that helps vehicles select appropriate radio resources with the aim of reducing packet collisions. Furthermore, a multi-head attention mechanism is adopted to improve training efficiency by helping vehicles selectively attend to the observations and actions of neighbouring vehicles. Notably, RL-SPS fits the decentralized character of Mode 4: it selects resources without requiring any global information. Simulation results show that RL-SPS outperforms other decentralized approaches and demonstrate its scalability and robustness in a dynamic vehicular network.
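The three SPS processes described above can be illustrated with a minimal sketch. This is not the paper's RL-SPS algorithm but a simplified model of baseline Mode 4 SPS reselection; the function name `sps_reselect`, the counter range, and the "best 20% by sensed RSSI" candidate cut are assumptions chosen to mirror the standard's typical parameterization.

```python
import random

def sps_reselect(counter, current_resource, candidate_rssi, keep_prob=0.8):
    """One simplified SPS scheduling step (illustrative, not the paper's RL-SPS).

    counter:          remaining transmissions in the current reservation
    current_resource: resource id currently reserved
    candidate_rssi:   dict mapping resource id -> average sensed RSSI (dBm)
    keep_prob:        approximates the standard's probResourceKeep parameter
    Returns (new_counter, selected_resource).
    """
    counter -= 1
    if counter > 0:
        # Periodic reservation: keep transmitting on the reserved resource.
        return counter, current_resource
    # Counter expired: draw a fresh reselection counter (assumed range for
    # a 100 ms reservation period) and reselect probabilistically.
    new_counter = random.randint(5, 15)
    if random.random() < keep_prob:
        return new_counter, current_resource
    # Random pick from the best ~20% of sensed candidates (lowest RSSI
    # first) -- the uniform-random step that RL-SPS replaces with learning.
    ranked = sorted(candidate_rssi, key=candidate_rssi.get)
    best = ranked[:max(1, len(ranked) // 5)]
    return new_counter, random.choice(best)
```

The random choice in the final step is exactly where collisions arise when resources are scarce: two nearby vehicles can draw the same candidate independently, which motivates replacing that step with a learned selection policy.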
- C-V2X Mode 4,
- Heuristic algorithms,
- Interference,
- multiagent deep reinforcement learning,
- Quality of service,
- radio resource selection,
- Resource management,
- sensing-based semipersistent scheduling,
- Sensors,
- Training,
- Vehicle dynamics,
- Deep learning,
- Multi-agent systems,
- Radio,
- Reinforcement learning,
- Vehicle-to-everything,
- Vehicle-to-vehicle communications,
- Vehicles
IR Deposit conditions:
OA version (pathway a): Accepted version
No embargo
When accepted for publication, set statement to accompany deposit (see policy)
Must link to publisher version with DOI
Publisher copyright and source must be acknowledged