Abstract
This study addresses a class of constrained convex optimization problems whose objective combines several differentiable convex functions with one or more non-smooth regularization terms, in particular the [Formula: see text] norm. These problems are further subject to local linear and bound constraints. Such formulations arise in practical domains including power allocation, sensor network coordination, and source localization. To solve them efficiently and robustly, a new distributed optimization method with a time-varying step-size mechanism is developed. Distinctively, by relying solely on row-stochastic weight matrices, the proposed method handles constrained optimization over directed communication networks without requiring any node to know its out-neighbors. Provided that each local objective is convex and Lipschitz continuous and the time-varying step size stays below a prescribed upper bound, the theoretical analysis establishes convergence to an optimal point. Simulation experiments further confirm the efficiency and practical applicability of the proposed method.
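To make the row-stochastic setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm): a plain distributed gradient iteration over a directed three-node network, where each node mixes the states of its in-neighbors through a row-stochastic matrix `W` and then takes a local gradient step on a private quadratic. All names, weights, and data here are hypothetical. Notably, with row-stochastic weights alone this naive scheme converges to a *weighted* optimum biased by the stationary distribution of `W`, which is exactly the imbalance that specialized methods such as the one described in the abstract must correct for.

```python
import numpy as np

# Hypothetical private data: node i minimizes f_i(x) = (x - a_i)^2.
A = np.array([1.0, 2.0, 3.0])

# Row-stochastic mixing matrix (each row sums to 1, columns do not):
# node i only needs messages from its in-neighbors, never its out-degree.
W = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.3, 0.7],
              [0.5, 0.0, 0.5]])

x = np.zeros(3)                  # one local estimate per node
for k in range(500):
    x = W @ x                    # consensus mixing over the directed graph
    step = 1.0 / (k + 10)        # diminishing step size (generic choice)
    x -= step * 2.0 * (x - A)    # local gradient of (x_i - a_i)^2

# The nodes reach approximate consensus, but on a pi-weighted optimum
# (pi = stationary distribution of W), not the unweighted minimizer 2.0.
print(np.round(x, 3))
```

Because the columns of `W` do not sum to one, the consensus value is pulled toward the nodes that the stationary distribution weights most heavily; methods designed for row-stochastic weights compensate for this bias, e.g., by estimating that distribution online.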