Abstract
Feature selection (FS) for multi-label text classification is hampered by high dimensionality, strong label correlations, and sparse features, which often lead to suboptimal feature subsets. Moreover, most existing methods are centralized and thus ill-suited to real-world distributed or federated settings, where text data are scattered across multiple nodes and effective FS mechanisms are lacking. To address these issues, this paper proposes Fed-MSMCGWO, a federated multi-label text feature selection method based on manifold-aware sparse modeling and cooperative grey wolf optimization. Under a federated learning framework, Fed-MSMCGWO integrates manifold-aware sparse modeling (MSM) with a cooperative grey wolf optimization algorithm (CGWO) to enable multi-label text FS in distributed environments. On each client, Fed-MSMCGWO performs a two-stage optimization. In Stage 1, the MSM model is learned by constructing sample and label graphs from text embeddings, encoding their manifolds with graph Laplacians, and imposing an ℓ2,1-norm penalty on the feature-weight matrix to induce row sparsity and compress the high-dimensional feature space. In Stage 2, CGWO, with a three-line cooperative evolution scheme, further refines these weights and performs a global search for a near-optimal subset of text features. After the two-stage optimization, each client obtains a locally optimal feature subset and participates in a multi-party privacy-preserving feature aggregation strategy: clients upload only intermediate feature-weight parameters (never raw data) to the server, which aggregates them and returns the result to guide subsequent local updates, yielding a collaborative cross-client FS framework that preserves privacy. Experiments on several publicly available multi-label text datasets show that, while preserving privacy, Fed-MSMCGWO consistently outperforms standard centralized and federated FS methods across multiple evaluation metrics.
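The core mechanics summarized above — ℓ2,1 row-sparsity scoring of a feature-weight matrix and server-side aggregation of intermediate weights — can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the function names (`l21_row_scores`, `aggregate_weights`, `select_features`), the unweighted mean as the aggregation rule, and the toy dimensions are all assumptions made for the example.

```python
import numpy as np

def l21_row_scores(W):
    """Row-wise l2 norms of a feature-weight matrix W (d features x q labels).

    Rows with larger norms correspond to features an l2,1-sparse model retains.
    """
    return np.linalg.norm(W, axis=1)

def aggregate_weights(client_weights):
    """Server-side step: combine clients' intermediate weight matrices.

    A simple unweighted mean, standing in for the paper's aggregation strategy;
    no raw data leaves the clients, only these weight parameters.
    """
    return np.mean(np.stack(client_weights), axis=0)

def select_features(W, k):
    """Return indices of the k features with the largest l2,1 row scores."""
    scores = l21_row_scores(W)
    return np.argsort(scores)[::-1][:k]

# Toy round: three clients, 6 features, 2 labels (hypothetical sizes).
rng = np.random.default_rng(0)
clients = [rng.normal(size=(6, 2)) for _ in range(3)]
W_global = aggregate_weights(clients)
print(select_features(W_global, 3))
```

In the full method, the local weights would come from the two-stage MSM/CGWO optimization rather than random draws, and the server's aggregate would be sent back to guide further local updates.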