Abstract
Efficient parking allocation is crucial for urban traffic management, yet fluctuating demand and spatial disparities pose significant challenges. Existing research emphasizes local optimization while overlooking the fundamental difficulties of real-time parking allocation, leading to inefficiencies in complex metropolitan environments. This work identifies two key issues: (1) dynamic supply-demand imbalance, in which parking demand fluctuates substantially over time and across locations, rendering static allocation schemes ineffective; and (2) spatial resource optimization, which seeks to maximize the utility of limited parking spots to improve overall system performance and user satisfaction. To address these challenges, we present a Multi-Agent Reinforcement Learning (MARL) framework that combines adaptive optimization with intelligent collaboration for dynamic parking allocation. A reinforcement learning-driven temporal decision mechanism adjusts parking assignments based on real-time data, while a Graph Neural Network (GNN)-based spatial model captures inter-parking relationships to improve allocation efficiency. Experiments on real-world parking data from Melbourne show that MARL substantially outperforms conventional baselines (FIFO, SIRO) in handling demand variability and optimizing resource distribution. A comprehensive quantitative analysis confirms the robustness and adaptability of the proposed method across diverse urban contexts.