On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems

Abstract

Nonconvex and structured optimization problems arise in many engineering applications that demand scalable and distributed solution methods. The study of the convergence properties of these methods is in general difficult due to the nonconvexity of the problem. In this paper, two distributed solution methods that combine the fast convergence properties of augmented Lagrangian-based methods with the separability properties of alternating optimization are investigated. The first method is adapted from the classic quadratic penalty function method and is called the Alternating Direction Penalty Method (ADPM). Unlike the original quadratic penalty function method, in which single-step optimizations are adopted, ADPM uses an alternating optimization, which in turn makes it scalable. The second method is the well-known Alternating Direction Method of Multipliers (ADMM). It is shown that ADPM for nonconvex problems asymptotically converges to a primal feasible point under mild conditions, and an additional condition ensuring that it asymptotically reaches the standard first-order necessary conditions for local optimality is introduced. In the case of ADMM, novel sufficient conditions under which the algorithm asymptotically reaches the standard first-order necessary conditions are established. Based on this, complete convergence of ADMM for a class of low-dimensional problems is characterized. Finally, the results are illustrated by applying ADPM and ADMM to a nonconvex localization problem in wireless sensor networks.
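To make the contrast between the two schemes concrete, the following is a minimal sketch on a toy scalar consensus problem, min f(x) + g(z) subject to x = z. It is not the paper's formulation: the objective terms f and g, the penalty schedule, and the use of scipy.optimize.minimize_scalar for the subproblems are illustrative assumptions. ADPM alternately minimizes the quadratic penalty function while driving the penalty parameter to infinity; ADMM alternately minimizes the augmented Lagrangian with a fixed penalty and adds a multiplier (dual) update.

```python
# Illustrative sketch only: f, g, parameters, and solvers are assumptions,
# not taken from the paper.
from scipy.optimize import minimize_scalar

f = lambda x: (x**2 - 1.0)**2          # nonconvex double-well term
g = lambda z: (z - 0.8)**2             # smooth convex term

def adpm(rho0=1.0, growth=1.2, iters=100):
    """ADPM: alternating minimization of the quadratic penalty function
    with an increasing penalty parameter (no multiplier)."""
    x, z, rho = 0.5, 0.5, rho0
    for _ in range(iters):
        x = minimize_scalar(lambda v: f(v) + 0.5 * rho * (v - z)**2).x
        z = minimize_scalar(lambda v: g(v) + 0.5 * rho * (x - v)**2).x
        rho *= growth                  # drive the penalty toward infinity
    return x, z

def admm(rho=10.0, iters=100):
    """ADMM (scaled form): alternating minimization of the augmented
    Lagrangian followed by a multiplier ascent step; rho stays fixed."""
    x, z, u = 0.5, 0.5, 0.0
    for _ in range(iters):
        x = minimize_scalar(lambda v: f(v) + 0.5 * rho * (v - z + u)**2).x
        z = minimize_scalar(lambda v: g(v) + 0.5 * rho * (x - v + u)**2).x
        u += x - z                     # dual update on the residual x - z
    return x, z

print("ADPM:", adpm())
print("ADMM:", admm())
```

Both loops decouple the x- and z-subproblems, which is the separability property that makes the methods scalable; in ADPM, primal feasibility (x = z) is only approached asymptotically as the penalty grows.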

Publication
In IEEE Transactions on Control of Network Systems
