HyMSS-GAD: a hybrid multi-stage framework for multi-view graph anomaly detection with structural, contextual, and geometric reasoning


Abstract

Graph anomaly detection has become an important task for discovering abnormal patterns in attributed networks, where anomalies may arise from structural, contextual, or geometric mismatches. Existing methods rely mainly on reconstruction-based or contrastive objectives and seldom consider the relationship between heterogeneous modalities and higher-order graph geometry. To address this gap, we present HyMSS-GAD, a Hybrid Multi-Stage Framework for Graph Anomaly Detection that combines contextual, structural, and geometric reasoning in a five-stage pipeline. First, a cross-modal contrastive learning module learns aligned representations from feature- and topology-based modalities using InfoNCE and alignment regularization. Second, a motif-based structural reconstruction module discovers higher-order connectivity roles via deterministic motif enumeration and autoencoder-based reconstruction. Third, an attention-driven fusion mechanism dynamically combines contextual and structural embeddings into a single representation. Fourth, a curvature-aware decoder predicts and reconstructs Ollivier–Ricci curvature for geometry-based anomaly detection on graph manifolds. Finally, a multi-view anomaly scoring strategy combines contextual, structural, and geometric residuals into an interpretable anomaly score. In-depth evaluations on five standard benchmark datasets, namely Cora, Citeseer, PubMed, ACM, and Amazon, show that HyMSS-GAD consistently outperforms state-of-the-art baselines. Moreover, curvature residuals improve interpretability by highlighting bridge nodes between communities and anomalous boundary regions. Overall, HyMSS-GAD is a scalable, explainable, and geometrically informed model for graph anomaly detection across diverse attributed networks.
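To make two of the abstract's components concrete, the following is a minimal sketch (not the authors' implementation) of (a) an InfoNCE objective that aligns per-node embeddings from a feature view and a topology view, and (b) a multi-view scoring rule that z-normalizes contextual, structural, and geometric residuals before a weighted combination. All function names, the temperature `tau`, and the equal default weights are illustrative assumptions.

```python
import numpy as np

def info_nce(z_feat, z_topo, tau=0.5):
    """InfoNCE loss treating node i's two views as the positive pair.

    z_feat, z_topo: (n, d) embeddings from the feature and topology
    modalities (hypothetical encoders, not shown). Lower loss means the
    two views of the same node are more similar than cross-node pairs.
    """
    a = z_feat / np.linalg.norm(z_feat, axis=1, keepdims=True)
    b = z_topo / np.linalg.norm(z_topo, axis=1, keepdims=True)
    logits = (a @ b.T) / tau                       # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # positives on the diagonal

def anomaly_score(ctx_res, str_res, geo_res, weights=(1/3, 1/3, 1/3)):
    """Combine per-node residuals from the three views into one score.

    Each residual vector is z-normalized so no single view dominates,
    then the views are mixed with (assumed) fixed weights.
    """
    def z(x):
        return (x - x.mean()) / (x.std() + 1e-8)
    w_ctx, w_str, w_geo = weights
    return w_ctx * z(ctx_res) + w_str * z(str_res) + w_geo * z(geo_res)
```

As a sanity check, two nearly identical views should yield a lower InfoNCE loss than two unrelated random views, and the combined score stays a per-node vector that can be thresholded or ranked for detection.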
