A multi-modal geospatial-temporal LSTM based deep learning framework for predictive modeling of urban mobility patterns


Abstract

Urban mobility prediction is crucial for optimizing resource allocation, managing transportation systems, and planning urban development. We propose a novel framework, GeoTemporal LSTM (GT-LSTM), designed to address the intricate spatiotemporal dynamics of urban environments. GT-LSTM integrates temporal dependencies with geographic information through a multi-modal approach that combines attention mechanisms and Recurrent Neural Networks (RNNs). This design allows the model to focus on relevant spatial features while capturing sequential relationships in time-series data: attention mechanisms dynamically weight geographic features, and LSTM layers model temporal patterns, yielding improved predictive accuracy. Evaluations on a real-world multi-modal urban transportation dataset demonstrate the effectiveness of GT-LSTM, with reductions of 15% in Mean Absolute Percentage Error (MAPE) and 20% in Root Mean Square Error (RMSE) compared to traditional methods. The model also outperforms established deep learning baselines, including Convolutional LSTM and Graph Convolutional Networks. The ability of GT-LSTM to capture both spatial and temporal dynamics highlights its potential for real-time urban mobility prediction and offers valuable insights for urban planners, policymakers, and transportation authorities seeking to improve decision-making and system efficiency.
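The abstract describes a pipeline in which attention dynamically weights geographic (per-region) features before an LSTM models the temporal sequence. The sketch below illustrates that idea in plain NumPy under stated assumptions: the region count, feature sizes, scoring vector, and all parameters are hypothetical stand-ins (a real GT-LSTM would learn them), and this is a minimal single-cell illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (all hypothetical): R spatial regions, F features per
# region, T time steps, H hidden units in the LSTM.
R, F, T, H = 5, 4, 6, 8

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(x_t, w_score):
    """Weight each region's features by a relevance score.

    x_t: (R, F) features for all regions at one time step.
    w_score: (F,) scoring vector (stand-in for a learned attention layer).
    Returns an (F,) context vector: the attention-weighted sum over regions.
    """
    scores = x_t @ w_score           # (R,) one scalar score per region
    alpha = softmax(scores)          # attention weights, sum to 1
    return alpha @ x_t               # (F,) weighted combination of regions

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM cell update on input x with state (h, c)."""
    z = W @ x + U @ h + b                        # (4H,) gate pre-activations
    i, f, g, o = np.split(z, 4)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c + i * g                            # new cell state
    h = o * np.tanh(c)                           # new hidden state
    return h, c

# Random stand-in parameters (a trained model would learn these).
w_score = rng.standard_normal(F)
W = rng.standard_normal((4 * H, F)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)

# Run the pipeline: attend over space at each step, then step the LSTM.
x_seq = rng.standard_normal((T, R, F))
h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    context = attend(x_seq[t], w_score)          # spatial attention
    h, c = lstm_step(context, h, c, W, U, b)     # temporal modeling

print(h.shape)  # final hidden state summarizing the spatiotemporal sequence
```

A downstream head (e.g. a linear layer on `h`) would then produce the mobility forecast that metrics like MAPE and RMSE evaluate.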
