
New publication in Optimization:
Algorithms for nonsmooth optimization models under distance-to-set penalties
By W. van Ackooij (OSIRIS, EDF Lab) & W. de Oliveira (CMA, Mines Paris – PSL)
Abstract: We introduce two optimization methods for Difference-of-Convex (DC) optimization problems involving hard and soft constraints. While hard constraints are strict requirements that must be satisfied for a solution to be valid, soft constraints are preferences that are desirable but not mandatory. In this work, the hard constraints are assumed convex, while the soft constraints (potentially nonconvex) are incorporated into the objective function via squared distance-to-set penalty terms. Our first algorithm requires only a difference of convex and weakly convex (CwC) decomposition of the objective function, a milder assumption than the standard DC decomposition that preserves implementability and broadens the method's applicability. The second algorithm is an original bundle method that leverages a novel self-stabilizing model and operates under a standard DC decomposition framework. Both implementable algorithms come with convergence guarantees to critical points of the penalized nonsmooth, nonconvex optimization model. The theoretical framework is grounded in variational analysis and nonsmooth optimization, and our approaches have potential applications in signal processing, machine learning, and operations research.
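For readers less familiar with the setting, a schematic form of the penalized model described in the abstract could be written as follows; the notation (X, S_i, f_1, f_2, rho_i) is illustrative and not taken from the paper:

% Illustrative penalized model (notation assumed, not from the paper):
% X is the convex hard-constraint set, the S_i are possibly nonconvex
% soft-constraint sets, f_1 - f_2 is the DC (or CwC) objective, and
% rho_i > 0 are penalty weights on the squared distance-to-set terms.
\[
  \min_{x \in X} \; f_1(x) - f_2(x)
  \;+\; \sum_{i=1}^{m} \rho_i \, \operatorname{dist}^2(x, S_i),
  \qquad
  \operatorname{dist}(x, S_i) := \inf_{y \in S_i} \lVert x - y \rVert .
\]

In this sketch, violating a soft constraint only increases the objective through its squared distance term, whereas membership in X remains a strict requirement.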
Key words: Nonsmooth optimization, nonconvex optimization, variational analysis
