FLEX-SFL: A Flexible and Efficient Split Federated Learning Framework for Edge Heterogeneity


Abstract

The deployment of Federated Learning (FL) in edge environments is often impeded by system heterogeneity, non-independent and identically distributed (non-IID) data, and constrained communication resources, which collectively hinder training efficiency and scalability. To address these challenges, this paper presents FLEX-SFL, a flexible and efficient split federated learning framework that jointly optimizes model partitioning, client selection, and communication scheduling. FLEX-SFL incorporates three coordinated mechanisms: a device-aware adaptive partitioning strategy that dynamically adjusts model split points based on client computational capacity to mitigate straggler effects; an entropy-driven client selection algorithm that promotes data representativeness by leveraging label distribution entropy; and a hierarchical local asynchronous aggregation scheme that enables asynchronous intra-cluster and inter-cluster model updates to improve training throughput and reduce communication latency. We theoretically establish the convergence properties of FLEX-SFL under convex settings and analyze the influence of local update frequency and client participation on the convergence bounds. Extensive experiments on benchmark datasets including FMNIST, CIFAR-10, and CIFAR-100 demonstrate that FLEX-SFL consistently outperforms state-of-the-art FL and split FL baselines in model accuracy, convergence speed, and resource efficiency, particularly under high degrees of statistical and system heterogeneity. These results validate the effectiveness and practicality of FLEX-SFL for real-world edge intelligence systems.
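The abstract's entropy-driven client selection can be illustrated with a minimal sketch: score each client by the Shannon entropy of its local label distribution and prefer high-entropy (more representative) clients. The function names and the top-k selection rule below are illustrative assumptions; the paper's actual algorithm is not specified in this abstract.

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (bits) of a client's label distribution; higher = more diverse."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def select_clients(client_labels, k):
    """Illustrative selection rule (assumption): pick the k clients whose local
    label distributions have the highest entropy.

    client_labels: dict mapping client id -> list of that client's training labels.
    """
    ranked = sorted(client_labels,
                    key=lambda cid: label_entropy(client_labels[cid]),
                    reverse=True)
    return ranked[:k]

# Example: client "b" holds a balanced (maximum-entropy) label set.
clients = {"a": [0, 0, 0, 0], "b": [0, 1, 2, 3], "c": [1, 1, 2, 2]}
print(select_clients(clients, 2))  # → ['b', 'c']
```

A practical variant might combine the entropy score with availability or bandwidth constraints, but such weighting is not described in the abstract.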
