Optimality conditions for convex problems on intersections of not necessarily convex sets

We present necessary and sufficient optimality conditions for the minimization of pseudoconvex functions over convex intersections of not necessarily convex sets. To this aim, we use the notion of local normal cone to a closed set at a point, due to Linh and Penot [Optimality conditions for quasiconvex programs, SIAM J. Optim. 17, 500-510 (2006)]. The technique we use to obtain the optimality conditions is based on the so-called canonical representation of a closed set by means of its associated oriented distance function.
∗This author was supported by the MINECO of Spain, Grant MTM2014-59179-C2-2-P, and the Severo Ochoa Programme for Centres of Excellence in R&D [SEV-2015-0563]. He is affiliated with MOVE (Markets, Organizations and Votes in Economics). He thanks the Dipartimento di Economia e Management of the Università degli Studi di Brescia for sponsoring a visit to Brescia to complete this work.


Introduction
The main theoretical tool for dealing with convex optimization problems is the well-known KKT Theorem, which provides necessary and sufficient conditions for global optimality under suitable regularity assumptions. In a recent paper [9], Lasserre extended this classical result to the case when the inequality constraints are not necessarily convex but yield a convex feasible set. His elegant result states that, for convex problems with a differentiable objective function and differentiable constraint functions, the KKT conditions are still necessary and sufficient for global optimality provided that the Slater constraint qualification holds and a mild regularity condition at boundary points is satisfied.
Lasserre's result was extended to a nondifferentiable setting by Dutta and Lalitha [2], who proved an analogous result in terms of Clarke generalized gradients. However, they observed that their result is not more general than Lasserre's, as it requires Clarke regularity of the involved functions, a condition that does not necessarily hold for differentiable functions unless they are continuously differentiable. A unifying result, from which both Lasserre's and Dutta and Lalitha's results follow as immediate corollaries, was later provided in [12]. This latter result was stated for tangentially convex functions, a class of functions which contains both the class of differentiable functions and that of Clarke regular functions (hence, in particular, the class of convex functions). The optimality conditions were expressed in terms of tangential subdifferentials, a notion we will recall below.
Some further extensions of the main result in [12] have been obtained in [5,6,7].
In this paper we consider optimization problems with set, rather than inequality, constraints. More specifically, given an extended real-valued function f : R^n → R ∪ {+∞} and a finite number of closed sets C_i ⊆ R^n (i ∈ I := {1, ..., m}), we will consider the optimization problem

(P)  minimize f(x) subject to x ∈ ⋂_{i∈I} C_i.

Analogously to the above mentioned setting in [2,9,12], we will not assume that each individual set C_i is convex, but only that their intersection, that is, the feasible set, is. We will prove a KKT-type theorem for such problems under a local convexity assumption on the defining sets. Our result will be stated in terms of the tangential subdifferential of the objective function and the local normal cones to the defining sets. The notion of local normal cone to a closed set at a point, introduced by Linh and Penot [11], will allow us to state the KKT condition in a concise way. For the sufficiency of the KKT condition, we will assume the objective function to be pseudoconvex in an appropriate sense.
The proof of our main result will rely on the use of oriented distances, thanks to which we will be able to represent the constraint sets by inequalities, using their so-called canonical representations. The advantage of using the oriented distance rather than the classical distance function lies in the fact that, under our assumptions, the resulting inequality constraints satisfy the Slater constraint qualification, something impossible to achieve with the classical distance.
We will use standard convex analytic terminology and notation, following the classical reference [15]. The Euclidean inner product of x, y ∈ R^n and the Euclidean norm of x will be denoted by ⟨x, y⟩ and ‖x‖, respectively. The subdifferential of f : R^n → R ∪ {+∞} at x ∈ R^n is the set ∂f(x) := {x* ∈ R^n : f(y) ≥ f(x) + ⟨x*, y − x⟩ for all y ∈ R^n}. Given C ⊆ R^n, by intC, bdC and clC we denote the interior, the boundary and the closure of C, respectively. The convex hull of C, denoted convC, is the smallest convex set containing C. The indicator function of C ⊆ R^n is defined by δ_C(x) := 0 if x ∈ C and δ_C(x) := +∞ otherwise. The normal cone to a convex set C ⊆ R^n at x ∈ C is N_C(x) := {x* ∈ R^n : ⟨x*, y − x⟩ ≤ 0 for all y ∈ C}, with the convention N_C(x) := ∅ if x ∉ C. A set C ⊆ R^n is said to be a cone if it is nonempty and x ∈ C, λ ≥ 0 ⇒ λx ∈ C.

Preliminaries
In this section we will recall the KKT-type theorem of [12] as well as the notions of tangentially convex function and tangential subdifferential used in its formulation.
The following class of functions was introduced by Pshenichnyi [14]. They were called "tangentially convex" by Lemaréchal [10].
Definition 1 A function f : R^n → R ∪ {+∞} is said to be tangentially convex at x ∈ R^n, with f(x) finite, if for every d ∈ R^n the directional derivative

f'(x, d) := lim_{t→0+} (f(x + td) − f(x))/t

exists, is finite, and is a convex function of d.
Clearly, every convex function is tangentially convex at every interior point of its domain. Every function f which is Gateaux differentiable at a point x is tangentially convex at x too, since then f'(x, ·) is linear. The class of tangentially convex functions at a given point is closed under addition and multiplication by scalars; it therefore contains a large set of nonconvex and nondifferentiable functions. For example, the sum of a convex function with a differentiable function provides an example of a tangentially convex function which, in general, is nonconvex and nondifferentiable. The product of two nonnegative tangentially convex functions at a point is easily seen to be tangentially convex at that point.
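These remarks can be illustrated numerically. The following sketch (ours, not from the paper) approximates the directional derivative by finite differences for f(x_1, x_2) = |x_1| − x_2^2, a sum of a convex and a differentiable function that is neither convex nor differentiable; at the origin f'(0, d) = |d_1|, a convex function of d, in agreement with the discussion above.

```python
def f(x):
    # |x1| - x2^2: sum of a convex and a differentiable function, hence
    # tangentially convex everywhere, yet neither convex nor differentiable
    return abs(x[0]) - x[1] ** 2

def dir_deriv(func, x, d, t=1e-6):
    # one-sided finite-difference approximation of
    # f'(x; d) = lim_{t->0+} (f(x + t d) - f(x)) / t
    return (func([x[0] + t * d[0], x[1] + t * d[1]]) - func(x)) / t

x0 = [0.0, 0.0]
# at the origin f'(0; d) = |d1|, which is convex (but not linear) in d
assert abs(dir_deriv(f, x0, [1.0, 2.0]) - 1.0) < 1e-4
assert abs(dir_deriv(f, x0, [-3.0, 1.0]) - 3.0) < 1e-4

# midpoint convexity of d -> f'(0; d) on a sample pair of directions
d1, d2 = [1.0, 2.0], [-3.0, 1.0]
mid = [(d1[0] + d2[0]) / 2, (d1[1] + d2[1]) / 2]
assert dir_deriv(f, x0, mid) <= (dir_deriv(f, x0, d1) + dir_deriv(f, x0, d2)) / 2 + 1e-6
```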
Associated to the notion of tangentially convex function, it is natural to introduce the following concept of subdifferential, which was implicitly defined in [14].

Definition 2
The tangential subdifferential of f : R^n → R ∪ {+∞} at a point x where f is tangentially convex is the set

∂_T f(x) := {x* ∈ R^n : ⟨x*, d⟩ ≤ f'(x, d) for all d ∈ R^n}.

Equivalently, ∂_T f(x) is the nonempty convex compact set whose support function is f'(x, ·), that is, f'(x, d) = max{⟨x*, d⟩ : x* ∈ ∂_T f(x)} for all d ∈ R^n. It immediately follows from Definition 2 that, for convex functions, the notions of tangential subdifferential and Fenchel subdifferential coincide. It is also clear that the tangential subdifferential of a Gateaux differentiable function at a point is the singleton of its gradient at that point.
It is worth mentioning that, among the rich calculus rules enjoyed by the tangential subdifferentials of tangentially convex functions, there is additivity; indeed, if f and g are tangentially convex at a common point x, then one has ∂_T(f + g)(x) = ∂_T f(x) + ∂_T g(x). We will conclude this section by recalling the KKT-type theorem we will need to prove our main result. Its statement uses the following extension of the well-known notion of pseudoconvexity for differentiable functions to the tangentially convex setting.

Definition 3 [12, Definition 7] A function f : R^n → R ∪ {+∞} which is tangentially convex at x ∈ R^n is said to be pseudoconvex at x if, for every y ∈ R^n, f(y) < f(x) implies f'(x, y − x) < 0.
Theorem 4 [12, Theorem 9] Assume that the functions g_i : R^n → R ∪ {+∞} (i ∈ I) are continuous, the system g_i(x) ≤ 0 (i ∈ I) satisfies the Slater condition, for every x ∈ S := {x ∈ R^n : g_i(x) ≤ 0, i ∈ I} and i ∈ I such that g_i(x) = 0 the function g_i is tangentially convex at x and ∂_T g_i(x) ≠ {0}, the set S is convex, and f : R^n → R ∪ {+∞} is tangentially convex at x̄ ∈ S. If either g_i(x̄) < 0 for every i ∈ I or there exists some i ∈ I such that g_i(x̄) = 0 and 0 ∉ ∂_T g_i(x̄), then a necessary condition for x̄ to be a global minimizer of f over S is the existence of real numbers λ_i ≥ 0, for those i ∈ I with g_i(x̄) = 0, such that

0 ∈ ∂_T f(x̄) + Σ_{i : g_i(x̄)=0} λ_i ∂_T g_i(x̄).

If f is pseudoconvex at x̄, this condition is sufficient.

The oriented distance function
An important tool in the proof of our main result will be the notion of oriented distance (see [3]). For a nonempty set S ⊆ R^n, we will denote by d_S : R^n → R the ordinary Euclidean distance function to S, that is, d_S(x) := inf_{y∈S} ‖x − y‖.

Definition 5 The oriented distance function to a nonempty proper subset C of R^n is the function Δ_C : R^n → R defined by

Δ_C(x) := d_C(x) − d_{R^n\C}(x).

When C is closed, the inequality Δ_C(x) ≤ 0, the solution set of which is C, is called the canonical representation of C.
If C is a convex set, then Δ_C is a convex function; conversely, if Δ_C is a convex function, then clC is a convex set [4, Proposition 4].
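For a concrete instance (our own illustration, not part of the paper), let C be the closed Euclidean unit ball in R^2; both distances in Definition 5 then have closed forms and Δ_C(x) = ‖x‖ − 1. The sketch below checks this identity and a midpoint convexity inequality, consistent with [4, Proposition 4].

```python
import math

def norm(x):
    return math.hypot(x[0], x[1])

def oriented_dist_ball(x):
    # Delta_C = d_C - d_{R^n \ C} for C the closed unit ball:
    # d_C(x) = max(||x|| - 1, 0), d_{R^n \ C}(x) = max(1 - ||x||, 0)
    d_in = max(norm(x) - 1.0, 0.0)   # distance to C
    d_out = max(1.0 - norm(x), 0.0)  # distance to the complement of C
    return d_in - d_out

# Delta_C(x) = ||x|| - 1 everywhere
for p in [(0.0, 0.0), (0.3, -0.4), (3.0, 4.0)]:
    assert abs(oriented_dist_ball(p) - (norm(p) - 1.0)) < 1e-12

# midpoint convexity on a sample pair (C is convex, so Delta_C is convex)
y, z = (0.0, 0.5), (2.0, 0.0)
m = ((y[0] + z[0]) / 2, (y[1] + z[1]) / 2)
assert oriented_dist_ball(m) <= (oriented_dist_ball(y) + oriented_dist_ball(z)) / 2 + 1e-12
```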
Definition 6 A vector h ∈ R^n is said to be tangent to a closed set C ⊆ R^n at x ∈ C if there are a sequence (x_k)_{k∈N} of elements in C and a sequence (λ_k)_{k∈N} of positive real numbers such that x_k → x and λ_k(x_k − x) → h. The set T(C, x) of all tangent vectors to C at x is called the Bouligand tangent cone (or contingent cone) to C at x (see, e.g., [1]).
In the sequel, we will denote by B(x, ε) the Euclidean closed ball with center x ∈ R^n and radius ε > 0.

Lemma 7 Let C ⊆ R^n be closed, x ∈ bdC and ε > 0. Then

Δ_{C∩B(x,ε)}(y) = Δ_C(y) for every y ∈ B(x, ε/2).  (1)

Proof. We will first prove that

d_{C∩B(x,ε)}(y) = d_C(y) for every y ∈ B(x, ε/2).  (2)

Since this equality is obvious when y ∈ C, we will assume that y ∉ C. We clearly have d_C(y) ≤ d_{C∩B(x,ε)}(y), so we will only prove the opposite inequality. Take a closest point ȳ to y in bdC. Then

‖ȳ − x‖ ≤ ‖ȳ − y‖ + ‖y − x‖ = d_C(y) + ‖y − x‖ ≤ 2‖y − x‖ ≤ ε,

and hence ȳ ∈ C ∩ B(x, ε). Therefore d_{C∩B(x,ε)}(y) ≤ ‖y − ȳ‖ = d_C(y). This proves (2). By substituting C by R^n \ C in (2), and noting that for y ∈ B(x, ε/2) the distance from y to R^n \ B(x, ε) is at least ε/2 ≥ d_{R^n\C}(y), we get

d_{R^n\(C∩B(x,ε))}(y) = d_{R^n\C}(y) for every y ∈ B(x, ε/2).  (3)

From (2) and (3), we immediately obtain (1).
The following proposition establishes an easy expression for contingent cones to closed convex sets in terms of oriented distances.
Proposition 8 Let C ⊆ R^n be convex and closed and x ∈ bdC. Then

T(C, x) = {d ∈ R^n : Δ'_C(x, d) ≤ 0}.  (4)

Proof. If d ∉ T(C, x), then, since T(C, x) is a closed convex cone, there exists a separating hyperplane between d and T(C, x); that is, there exists y ∈ R^n with ‖y‖ = 1 such that ⟨y, d'⟩ ≤ 0 < ⟨y, d⟩ for all d' ∈ T(C, x).
We thus have ⟨y, z − x⟩ ≤ 0 for all z ∈ C, which is equivalent to the inclusion C ⊆ H for H := {z ∈ R^n : ⟨y, z⟩ ≤ ⟨y, x⟩}. Consequently, d_C ≥ d_H, and hence

Δ'_C(x, d) = lim_{t→0+} (Δ_C(x + td) − Δ_C(x))/t = lim_{t→0+} Δ_C(x + td)/t = lim_{t→0+} d_C(x + td)/t ≥ lim_{t→0+} d_H(x + td)/t = ⟨y, d⟩ > 0;

to justify the third equality notice that x + td ∉ C for every t > 0, which we easily get from the fact that d ∉ T(C, x) combined with the convexity of C. We have thus proved the inclusion ⊇ in (4).
Let us now prove the inclusion ⊆ in (4). For x ∈ bdC, we define the radial cone

R(C, x) := {λ(z − x) : λ ≥ 0, z ∈ C},

whose closure is T(C, x) by the convexity of C. Hence, for d ∈ R(C, x), say d = λ(z − x) with λ ≥ 0 and z ∈ C,

Δ'_C(x, d) = lim_{t→0+} Δ_C(x + td)/t ≤ 0,  (5)

since, when t is close enough to 0, we have x + td = (1 − tλ)x + tλz ∈ C. By the continuity of the finite convex function Δ'_C(x, ·), the inequality (5) extends to clR(C, x) = T(C, x), which proves the inclusion ⊆ in (4).
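Proposition 8 can be verified by hand when C is the closed unit ball in R^2 and x = (1, 0): then Δ_C(y) = ‖y‖ − 1, so Δ'_C(x, d) = d_1 and (4) gives T(C, x) = {d : d_1 ≤ 0}. The following sketch (ours, not from the paper) approximates Δ'_C(x, d) by finite differences and compares the sign test of (4) with this description of the cone.

```python
import math

def oriented_dist_ball(x):
    # for C the closed unit ball, Delta_C(x) = ||x|| - 1
    return math.hypot(x[0], x[1]) - 1.0

def delta_dir_deriv(x, d, t=1e-6):
    # finite-difference approximation of Delta'_C(x; d)
    y = (x[0] + t * d[0], x[1] + t * d[1])
    return (oriented_dist_ball(y) - oriented_dist_ball(x)) / t

x0 = (1.0, 0.0)
# at x0, Delta'_C(x0; d) = d1, so (4) predicts T(C, x0) = {d : d1 <= 0}
for d, in_cone in [((-1.0, 0.5), True), ((0.0, 1.0), True), ((0.2, 1.0), False)]:
    approx = delta_dir_deriv(x0, d)
    assert (approx <= 1e-4) == in_cone
```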

Local convexity
We will use the following notion of local convexity of a closed set at a boundary point.
Definition 9 (see [3, Section 1, Remark 2]) A closed set C ⊆ R^n will be said to be locally convex at x ∈ bdC if there exists ε > 0 such that C ∩ B(x, ε) is convex.
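To illustrate Definition 9 with a set that is locally convex at some boundary points but not at others, consider C = {x ∈ R^2 : ‖x‖ ≤ 1} ∪ {x : x_1 ≥ 1} (our own example, not from the paper). The sketch below grid-samples C ∩ B(x, ε) for ε = 1/2 and tests midpoints of pairs of samples; a violation certifies nonconvexity of C ∩ B(x, ε) for this particular ε only (at (1, 0) one can check by hand that every ε fails), while a pass is merely consistent with convexity.

```python
import math

def in_C(p, tol=1e-9):
    # C = closed unit ball union the half-plane {x1 >= 1}; closed but nonconvex
    return p[0] ** 2 + p[1] ** 2 <= 1.0 + tol or p[0] >= 1.0 - tol

def midpoint_convex_on_ball(center, eps=0.5, step=0.1):
    # grid-sample C ∩ B(center, eps) and test the midpoint of every pair of
    # samples; a violation certifies nonconvexity for THIS eps only
    pts = []
    n = int(round(2 * eps / step))
    for i in range(n + 1):
        for j in range(n + 1):
            p = (center[0] - eps + step * i, center[1] - eps + step * j)
            if in_C(p) and math.hypot(p[0] - center[0], p[1] - center[1]) <= eps:
                pts.append(p)
    for y in pts:
        for z in pts:
            m = ((y[0] + z[0]) / 2, (y[1] + z[1]) / 2)
            if not in_C(m):
                return False
    return True

assert midpoint_convex_on_ball((0.0, 1.0))       # consistent with local convexity at (0, 1)
assert not midpoint_convex_on_ball((1.0, 0.0))   # C ∩ B((1, 0), 1/2) is not convex
```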
Closed sets locally convex at every boundary point were called locally convex in [8]. In connection with this notion, the following important theorem is worth mentioning; it states that, for closed connected sets, local convexity is actually equivalent to convexity.

Theorem 10 (see [8]) Let C ⊆ R^n be closed and connected. Then C is convex if and only if it is locally convex at every point of bdC.
The following simple results show that convexity of an intersection implies that, at every boundary point x of the intersection, the intersection of those sets that contain x in their boundaries must be locally convex at x.

Proposition 11
Let the sets C_i ⊆ R^n (i ∈ I) be closed, S := ⋂_{i∈I} C_i, x ∈ bdS and I(x) := {i ∈ I : x ∈ bdC_i}. Then there exists ε > 0 such that

(⋂_{i∈I(x)} C_i) ∩ B(x, ε) = S ∩ B(x, ε).

Proof. The inclusion ⊇ clearly holds for every ε > 0. To prove the opposite inclusion, observe first that for every i ∈ I\I(x) one has x ∈ intC_i; hence there exists ε_i > 0 such that B(x, ε_i) ⊆ C_i. Let ε := min_{i∈I\I(x)} ε_i. We have ε > 0 and, for every y ∈ (⋂_{i∈I(x)} C_i) ∩ B(x, ε), we obviously have y ∈ C_i for i ∈ I(x) and y ∈ B(x, ε) ⊆ B(x, ε_i) ⊆ C_i for i ∈ I\I(x), which implies that y ∈ S.

Corollary 12
Let the sets C_i and S be as in Proposition 11. Then S is convex if and only if it is connected and, for every x ∈ bdS, the set ⋂_{i∈I(x)} C_i is locally convex at x.
Proof. The "only if" statement is a direct consequence of Proposition 11. To prove the "if" statement, for every x ∈ bdS take ε' > 0 such that (⋂_{i∈I(x)} C_i) ∩ B(x, ε') is convex, ε > 0 as in Proposition 11, and ρ := min{ε, ε'}. Then ρ > 0 and

S ∩ B(x, ρ) = (⋂_{i∈I(x)} C_i) ∩ B(x, ρ) = ((⋂_{i∈I(x)} C_i) ∩ B(x, ε')) ∩ B(x, ρ),

which shows that S ∩ B(x, ρ) is convex. Convexity of S now follows from Theorem 10.
We next prove that local convexity of a set at a point implies nonemptiness of the tangential subdifferential of its associated oriented distance function at that point.
Proposition 13 Let C ⊆ R^n be closed and locally convex at x ∈ bdC. Then
i) Δ_C is tangentially convex at x;
ii) ∂_T Δ_C(x) ≠ {0}.
Proof. Take ε as in Definition 9. Then, by Lemma 7, the function Δ_C coincides with the convex function Δ_{C∩B(x,ε)} on B(x, ε/2); hence i) holds. To prove ii) we consider two cases. In the case when C ∩ B(x, ε) is solid (that is, int(C ∩ B(x, ε)) ≠ ∅), the convex function Δ_{C∩B(x,ε)} does not have a minimum at x, and therefore 0 ∉ ∂Δ_{C∩B(x,ε)}(x) = ∂_T Δ_C(x), the latter equality being a consequence of Lemma 7. In the case when C ∩ B(x, ε) is not solid, using that Δ_{C∩B(x,ε)} = d_{C∩B(x,ε)} we have

∂_T Δ_C(x) = ∂d_{C∩B(x,ε)}(x) = N_{C∩B(x,ε)}(x) ∩ B(0, 1) ≠ {0},

since the normal cone to a closed convex set at a boundary point contains nonzero vectors.
The following result characterizes convexity of an intersection in terms of contingent cones under a local convexity assumption.

Proposition 14
Let the sets C_i ⊆ R^n (i ∈ I) be closed and S := ⋂_{i=1}^m C_i. Assume that intS ≠ ∅ and, for every x ∈ bdS and i ∈ I(x) := {i ∈ I : x ∈ bdC_i}, the set C_i is locally convex at x. Then S is convex if and only if

S = ⋂_{x∈bdS} ⋂_{i∈I(x)} (x + T(C_i, x)).

Proof. Let us recall that the functions Δ_{C_i}, i ∈ I, are continuous. Let us also observe that the assumption intS ≠ ∅ is equivalent to saying that the system Δ_{C_i}(x) ≤ 0 (i ∈ I) satisfies the Slater condition. Moreover, by Proposition 13, for every x ∈ bdS and i ∈ I(x) the function Δ_{C_i} is tangentially convex at x and ∂_T Δ_{C_i}(x) ≠ {0}. Therefore, by [12, Proposition 6] applied to the functions g_i := Δ_{C_i} (i ∈ I), we obtain that S is convex if and only if

S = {y ∈ R^n : Δ'_{C_i}(x, y − x) ≤ 0 for all x ∈ bdS and i ∈ I(x)}.
The conclusion now immediately follows from Proposition 8.

The local normal cone
Our main result will be expressed in terms of the notion of local normal cone, due to Linh and Penot [11]. We have already used the standard notion of normal cone to a convex set in the proof of Proposition 13. One first needs to extend this notion to the case of a nonconvex set S ⊆ R^n. Given one such set and a point x ∈ S, we set N_S(x) := N_{convS}(x).
This amounts to saying that

N_S(x) = {x* ∈ R^n : ⟨x*, y − x⟩ ≤ 0 for all y ∈ S}

if x ∈ convS, and N_S(x) = ∅ otherwise.
Definition 15 [11, p. 504] The local normal cone to a closed set C ⊆ R^n at x ∈ C is

N^l_C(x) := ⋃_{ε>0} N_{C∩B(x,ε)}(x).
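Definition 15 can be explored numerically for the nonconvex set C = {x ∈ R^2 : ‖x‖ ≤ 1} ∪ {x : x_1 ≥ 1} (our own example, not from the paper). The sketch below samples C ∩ B(x, ε) and checks the inequalities defining N_{C∩B(x,ε)}(x); passing the sampled test is only a necessary condition for membership, and a single ε only probes one member of the union in Definition 15.

```python
import math

def in_C(p, tol=1e-9):
    # C = closed unit ball union the half-plane {x1 >= 1}
    return p[0] ** 2 + p[1] ** 2 <= 1.0 + tol or p[0] >= 1.0 - tol

def sample_C_cap_ball(x, eps=0.5, step=0.1):
    # grid sample of C ∩ B(x, eps)
    pts = []
    n = int(round(2 * eps / step))
    for i in range(n + 1):
        for j in range(n + 1):
            p = (x[0] - eps + step * i, x[1] - eps + step * j)
            if in_C(p) and math.hypot(p[0] - x[0], p[1] - x[1]) <= eps:
                pts.append(p)
    return pts

def maybe_in_normal_cone(p, x, eps=0.5):
    # necessary condition for p ∈ N_{C∩B(x,eps)}(x): <p, y - x> <= 0
    # for all sampled y in C ∩ B(x, eps)
    return all(p[0] * (y[0] - x[0]) + p[1] * (y[1] - x[1]) <= 1e-9
               for y in sample_C_cap_ball(x, eps))

# at (0, 1) the outward vertical direction passes, a horizontal one fails
assert maybe_in_normal_cone((0.0, 1.0), (0.0, 1.0))
assert not maybe_in_normal_cone((1.0, 0.0), (0.0, 1.0))
# at (1, 0) every coordinate direction fails for eps = 0.5
for p in [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]:
    assert not maybe_in_normal_cone(p, (1.0, 0.0))
```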

Remark 16
The notion of local normal cone is indeed local, as one can easily prove that

N^l_C(x) = ⋃_{0<ε<ε_0} N_{C∩B(x,ε)}(x) for every ε_0 > 0,

using that N_{C∩B(x,ε)}(x) decreases as ε increases.
Since N_C(x) ⊆ N_{C∩B(x,ε)}(x) for every ε > 0, one has N_C(x) ⊆ N^l_C(x). Using the following lemma, we will prove that equality holds when C is a closed convex set.
Lemma 17 Let C ⊆ R^n be convex, x ∈ C and ε > 0. Then N_{C∩B(x,ε)}(x) = N_C(x).
Proof. Since δ_{C∩B(x,ε)} and δ_C coincide on B(x, ε) and the Fenchel subdifferential of a convex function is a local notion, we have

N_{C∩B(x,ε)}(x) = ∂δ_{C∩B(x,ε)}(x) = ∂δ_C(x) = N_C(x).

From Lemma 17, one immediately obtains the following proposition.
Proposition 18 [11, p. 504] Let C ⊆ R^n be convex and closed and x ∈ C. Then N^l_C(x) = N_C(x).
Corollary 19 Let C ⊆ R^n be closed and locally convex at x ∈ bdC. Then

N^l_C(x) = N_{C∩B(x,ε)}(x)  (6)

for sufficiently small ε > 0.
Proof. Since C is locally convex at x, there exists ε̃ > 0 such that C ∩ B(x, ε̃) is convex. Then, for all ε ∈ (0, ε̃) we have C ∩ B(x, ε) = (C ∩ B(x, ε̃)) ∩ B(x, ε) and hence, thanks to Lemma 17 applied to the convex set C ∩ B(x, ε̃), we deduce that N_{C∩B(x,ε)}(x) = N_{C∩B(x,ε̃)}(x). Therefore, since N_{C∩B(x,ε)}(x) is decreasing in ε, it follows that (6) holds for every ε ∈ (0, ε̃].
The following result provides a simple necessary and sufficient condition for the local normal cone not to reduce to the singleton of the origin. Its easy proof, which we omit, is an immediate consequence of the supporting hyperplane theorem.
Proposition 20 Let C ⊆ R^n be closed and x ∈ bdC. Then N^l_C(x) ≠ {0} if and only if x ∉ int conv(C ∩ B(x, ε)) for some ε > 0.

Corollary 21 Let C ⊆ R^n be closed and locally convex at x ∈ bdC. Then N^l_C(x) ≠ {0}.

Proof. Since C is locally convex at x, there exists ε > 0 such that C ∩ B(x, ε) is convex; hence, as x ∈ bd(C ∩ B(x, ε)), the result follows from Proposition 20.

Main result
We are now ready to prove our main result.
Theorem 23 Assume that the feasible set S of (P) is convex and solid, that for every x ∈ bdS and i ∈ I(x) := {i ∈ I : x ∈ bdC_i} the set C_i is closed and locally convex at x, and that f : R^n → R ∪ {+∞} is tangentially convex at x̄ ∈ bdS. Then a necessary condition for x̄ to be a global minimizer of (P) is

0 ∈ ∂_T f(x̄) + Σ_{i∈I(x̄)} N^l_{C_i}(x̄).  (8)

If f is pseudoconvex at x̄, this condition is sufficient.
Proof. We reformulate problem (P) by using the canonical representations of the constraint sets, that is, as the problem of minimizing f over {x ∈ R^n : g_i(x) ≤ 0, i ∈ I} with g_i := Δ_{C_i}. Then the assumptions of Theorem 4 are satisfied thanks to Lemma 7 and Proposition 13. Moreover, if there exists i ∈ I such that Δ_{C_i}(x̄) = 0, that is, x̄ ∈ bdC_i, then 0 ∉ ∂_T Δ_{C_i}(x̄). Indeed, for ε > 0 such that C_i ∩ B(x̄, ε) is convex, the function Δ_{C_i∩B(x̄,ε)} is convex and, according to Lemma 7, we have Δ_{C_i∩B(x̄,ε)}(x̄) = Δ_{C_i}(x̄) = 0 and ∂_T Δ_{C_i}(x̄) = ∂_T Δ_{C_i∩B(x̄,ε)}(x̄) = ∂Δ_{C_i∩B(x̄,ε)}(x̄), but 0 ∉ ∂Δ_{C_i∩B(x̄,ε)}(x̄) because x̄ is not a global minimum of Δ_{C_i∩B(x̄,ε)}, in view of the fact that int(C_i ∩ B(x̄, ε)) ≠ ∅, a fact which easily follows from the nonemptiness of intS together with the convexity of S. We are thus in a position to apply Theorem 4: if x̄ is a global minimizer of (P), there exist real numbers λ_i ≥ 0 (i ∈ I(x̄)) such that

0 ∈ ∂_T f(x̄) + Σ_{i∈I(x̄)} λ_i ∂_T Δ_{C_i}(x̄);

since ∂_T Δ_{C_i}(x̄) = ∂Δ_{C_i∩B(x̄,ε)}(x̄) ⊆ N_{C_i∩B(x̄,ε)}(x̄) ⊆ N^l_{C_i}(x̄) and the latter set is a cone, condition (8) follows. Let us now prove the sufficiency of (8). Assume that this condition holds.
Then there exist x* ∈ ∂_T f(x̄) and x*_i ∈ N^l_{C_i}(x̄) (i ∈ I(x̄)) such that x* + Σ_{i∈I(x̄)} x*_i = 0. By Corollary 19, for every i ∈ I(x̄) there exists ε_i > 0 such that x*_i ∈ N_{C_i∩B(x̄,ε_i)}(x̄). Take a strictly positive number ε ≤ min_{i∈I(x̄)} {ε_i} such that the sets C_i ∩ B(x̄, ε), i ∈ I(x̄), are convex. We have

x*_i ∈ N_{C_i∩B(x̄,ε)}(x̄), i ∈ I(x̄).  (9)

For every i ∈ I(x̄), x ∈ S and α ∈ (0, 1), since S is convex we have (1 − α)x̄ + αx ∈ S ⊆ C_i. For α small enough, we further have (1 − α)x̄ + αx ∈ C_i ∩ B(x̄, ε), which, by (9), implies ⟨(1 − α)x̄ + αx, x*_i⟩ ≤ ⟨x̄, x*_i⟩ or, equivalently, ⟨x − x̄, x*_i⟩ ≤ 0. Thus, using that x* ∈ ∂_T f(x̄), we obtain

f'(x̄, x − x̄) ≥ ⟨x*, x − x̄⟩ = −Σ_{i∈I(x̄)} ⟨x*_i, x − x̄⟩ ≥ 0.

Recalling that f is pseudoconvex at x̄, this implies f(x) ≥ f(x̄), thus showing that x̄ is a global minimum of (P).
The following example illustrates Theorem 23.
Example 24 The sets C_1 and C_2 are both closed and nonconvex, while their intersection S := C_1 ∩ C_2 is convex and solid (see Figure 1). It can be easily verified that for any (x_1, x_2) ∈ bdS and i ∈ I(x_1, x_2) the set C_i is locally convex at (x_1, x_2). The function f is tangentially convex since it is differentiable. Let (x̄_1, x̄_2) := (1, 1). This point is a global minimizer for (P), and the necessary optimality condition (8) holds with (−12, −2) ∈ ∂_T f(x̄_1, x̄_2), (9, 3) ∈ N^l_{C_1}(x̄_1, x̄_2) and (3, −1) ∈ N^l_{C_2}(x̄_1, x̄_2). Since f is pseudoconvex at (x̄_1, x̄_2), this condition is also sufficient. We conclude by presenting a formula to compute the normal cone of an intersection.
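With both multipliers equal to one, the vectors listed above satisfy condition (8) by direct summation; the following snippet merely performs this arithmetic check.

```python
# vectors from the example: a tangential subgradient of f and elements of the
# local normal cones to C1 and C2 at the minimizer (1, 1)
sub_f = (-12.0, -2.0)
n1 = (9.0, 3.0)
n2 = (3.0, -1.0)

# condition (8): 0 lies in d_T f + N^l_{C1} + N^l_{C2} at the minimizer
s = tuple(a + b + c for a, b, c in zip(sub_f, n1, n2))
assert s == (0.0, 0.0)
```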
Corollary 25 Let the sets C_i and S and the point x̄ ∈ bdS be as in Theorem 23. Then N_S(x̄) = Σ_{i∈I(x̄)} N^l_{C_i}(x̄).
Proof. The result is an immediate consequence of Theorem 23 after observing that, for p ∈ R^n, one has p ∈ N_S(x̄) if and only if x̄ minimizes the function x ↦ (1/2)‖x̄ + p − x‖^2 over S; this function is convex and differentiable, with gradient −p at x̄, so Theorem 23 characterizes such minimality as 0 ∈ {−p} + Σ_{i∈I(x̄)} N^l_{C_i}(x̄).
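When every C_i is convex, Proposition 18 turns Corollary 25 into the classical sum rule for normal cones. As a toy check (ours, not from the paper), take C_1 = {x : x_1 ≤ 0} and C_2 = {x : x_2 ≤ 0} in R^2, so S is the third quadrant and the corollary predicts N_S(0) = N_{C_1}(0) + N_{C_2}(0), the nonnegative quadrant; the sketch tests the defining inequalities of N_S(0) on a sample of S.

```python
# C1 = {x : x1 <= 0}, C2 = {x : x2 <= 0}; S = C1 ∩ C2 is the third quadrant,
# and N^l_{Ci}(0) = N_{Ci}(0), so Corollary 25 predicts
# N_S(0) = N_{C1}(0) + N_{C2}(0) = R+ x R+
def in_normal_cone_sampled(p, samples):
    # p ∈ N_S(0) iff <p, y - 0> <= 0 for all y in S; we only check sampled y,
    # so this is a necessary condition certified on the sample
    return all(p[0] * y[0] + p[1] * y[1] <= 1e-12 for y in samples)

# a deterministic sample of S (both coordinates nonpositive)
S_samples = [(-a / 4.0, -b / 4.0) for a in range(9) for b in range(9)]

assert in_normal_cone_sampled((2.0, 3.0), S_samples)       # in R+ x R+, as predicted
assert not in_normal_cone_sampled((1.0, -1.0), S_samples)  # fails at y = (0, -1)
```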