Toward Foundation Models for Mobility Enriched Geospatially Embedded Objects


Abstract

Recent advances in large foundation models (FMs) have enabled learning general-purpose representations in natural language, vision, and audio. Yet geospatial artificial intelligence (GeoAI) still lacks widely adopted foundation models that generalize across tasks requiring joint reasoning over geospatial objects and human mobility. Such tasks are crucial because mobility, along with satellite imagery, street view, and text, is a core modality for understanding the physical world. We argue that a key bottleneck is the absence of unified, general-purpose, and transferable representations for geospatially embedded objects (GEOs). Such objects include points, polylines, and polygons in geographic space, enriched with semantic context and critical for geospatial reasoning. Much current GeoAI research compares GEOs to tokens in language models, where patterns of human movement and spatiotemporal interactions yield contextual meaning similar to patterns of words in text. However, modeling GEOs introduces challenges fundamentally different from language, including spatial continuity, variable scale and resolution, temporal dynamics, and data sparsity. Moreover, privacy constraints and global variation in mobility further complicate modeling and generalization. This paper formalizes these challenges, identifies key representational gaps, and outlines research directions for building foundation models that learn behavior-informed, transferable representations of GEOs from large-scale human mobility data, as well as from static contextual information such as points of interest, object shapes, and spatiotemporal semantics.
