Strong consistency of random gradient-free algorithms for distributed optimization

Article ID: iaor2017769
Volume: 38
Issue: 2
Start Page Number: 247
End Page Number: 265
Publication Date: Mar 2017
Journal: Optimal Control Applications and Methods
Authors: ,
Keywords: control, simulation, networks, programming: convex
Abstract:

A distributed multi‐agent convex optimization problem over a network with time‐varying connectivity is considered, in which the agents collectively minimize a sum of nonsmooth but Lipschitz continuous functions over a convex constraint set. Based on Gaussian smoothing of the objective functions, a distributed projected random gradient‐free algorithm is considered for the constrained problem: each agent performs a local averaging operation, uses one‐sided or two‐sided randomized gradient approximations in place of subgradients to minimize its own objective function, and projects onto the constraint set. Bounds on the limiting performance of the algorithm in mean are obtained, and mean consensus among the agents is proven for diminishing step sizes. It is shown that, with appropriately selected step sizes, the agent estimates generated by the algorithm converge to the same stationary point of the smoothed version of the problem with probability 1. Consequently, for sufficiently small smoothing parameters, consensus and convergence of the agent estimates to an optimal solution are obtained with probability 1. A numerical example is provided to support the theoretical analysis.
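The update described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the mixing matrix, step-size schedule, constraint set (a Euclidean ball), and the toy objectives are all assumptions chosen for concreteness. It shows the two-sided randomized gradient estimate of the Gaussian-smoothed objective, the local averaging (consensus) step, and the projection.

```python
import numpy as np

def two_sided_rgf(f, x, mu, rng):
    """Two-sided randomized gradient-free estimate of the
    Gaussian-smoothed objective: (f(x+mu*u) - f(x-mu*u)) / (2*mu) * u,
    with u drawn from a standard normal distribution."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball {x : ||x|| <= radius},
    # standing in for a generic convex constraint set.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def distributed_step(xs, fs, W, alpha, mu, rng):
    """One iteration: each agent averages neighbors' estimates (row i of W),
    takes a randomized gradient-free step on its own objective f_i,
    and projects back onto the constraint set."""
    mixed = W @ xs                      # local averaging (consensus) step
    new = np.empty_like(xs)
    for i, f in enumerate(fs):
        g = two_sided_rgf(f, mixed[i], mu, rng)
        new[i] = project_ball(mixed[i] - alpha * g)
    return new

# Toy run: 3 agents minimizing sum_i |x - c_i| (nonsmooth, Lipschitz)
# over the unit ball; the minimizer is the median of the c_i.
rng = np.random.default_rng(0)
cs = [np.array([0.5]), np.array([-0.2]), np.array([0.3])]
fs = [lambda x, c=c: np.abs(x - c).sum() for c in cs]
W = np.full((3, 3), 1.0 / 3.0)          # doubly stochastic mixing matrix
xs = rng.standard_normal((3, 1))
for k in range(1, 2001):
    # diminishing step sizes alpha_k = 1/k, small smoothing parameter mu
    xs = distributed_step(xs, fs, W, alpha=1.0 / k, mu=1e-3, rng=rng)
```

With diminishing step sizes the agents' estimates cluster together (consensus) near the median of the c_i, in line with the almost-sure convergence result the abstract states for the smoothed problem.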
