γ-Active Constraints in Convex Semi-Infinite Programming

In this article, we extend the definition of γ-active constraints for linear semi-infinite programming to a definition applicable to convex semi-infinite programming, by two approaches. The first approach entails the use of the subdifferentials of the convex constraints at a point, while the second approach is based on the linearization of the convex inequality system by means of the convex conjugates of the defining functions. By both these methods, we manage to extend the results on γ-active constraints from the linear case to the convex case.


INTRODUCTION
Since its appearance in the 1960s, semi-infinite programming (SIP) has grown to become an independent research branch. The first case of SIP studied was linear semi-infinite programming, which gained the interest of scientists of diverse backgrounds due to its theoretical beauty and wide variety of applications in probability, statistics, control, and assignment games (see [2,15,17,18]). One of the best known applications of semi-infinite programming, Chebyshev approximation, has been the starting point of many important results, such as those presented in [6,8,11], to name a few. Other applications of linear semi-infinite programming in the areas of risk theory, urban planning, and environmental policymaking are mentioned in [7].
In this article, we focus on convex semi-infinite programming, where problems are of the form

(1)  inf h(x)  subject to  f_t(x) ≤ 0, t ∈ T,

where h and f_t are finite-valued convex functions defined on R^n for all t ∈ T, and T is an arbitrary index set. F and F_opt denote the feasible set and the solution set of (1), respectively, allowing the possibility F = ∅. T(x̄) := {t ∈ T | f_t(x̄) = 0} is the set of active indices at x̄, and f*(u) := sup_{x ∈ R^n} {u'x − f(x)} represents the convex conjugate of the function f. It is evident that linear semi-infinite programming (LSIP) is the particular case of (1) in which h and f_t are affine functions for all t ∈ T. In [3], under the assumption that f_t : X → R ∪ {+∞} is a proper lower semi-continuous convex function for all t ∈ T, where X is a locally convex Hausdorff topological vector space, the constraint system f_t(x) ≤ 0, t ∈ T, is linearized by means of the convex conjugate functions f_t*, using the fact that under this assumption f_t** = f_t. This linearization is then used in the presentation of new generalized consistency and optimality theorems for convex infinite programming (CIP) and for convex semi-infinite programming (CSIP) when dim X < ∞. The linearization of a CSIP system of inequalities via convex conjugates is also used in Section 4 of this article as an approach to extending the LSIP results presented in [20].
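The convex conjugate can be illustrated numerically. The following is a minimal sketch (our own example, not part of the article) that approximates f*(u) = sup_x {u'x − f(x)} by maximizing over a grid, using f(x) = x², whose conjugate is known in closed form to be f*(u) = u²/4.

```python
# Grid-based sketch of the convex conjugate f*(u) = sup_x { u*x - f(x) }.
# Example function: f(x) = x^2, with closed-form conjugate f*(u) = u^2 / 4
# (the supremum is attained at x = u/2).

def conjugate(f, u, xs):
    """Approximate f*(u) by maximizing u*x - f(x) over the grid xs."""
    return max(u * x - f(x) for x in xs)

f = lambda x: x * x
xs = [i / 1000.0 for i in range(-5000, 5001)]  # grid on [-5, 5]

for u in (-2.0, 0.0, 1.0, 3.0):
    print(u, conjugate(f, u, xs), u * u / 4.0)  # approximation vs. exact value
```

For the sampled values of u, the maximizer x = u/2 lies exactly on the grid, so the approximation agrees with u²/4 to machine precision.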
Because analytical solutions to CSIP problems are rare, researchers have studied different numerical methods in search of effective, polynomial-time algorithms that converge to an optimal or nearly optimal solution. Among the methods used for solving CSIP problems are interior methods, proximal interior methods, logarithmic barrier methods, cutting plane methods, and affine scaling, as shown in [1,5,21,22]. Some of these numerical methods require many assumptions, such as the methods presented in [1], where nine assumptions are required to ensure the convergence of the logarithmic barrier method, one of which is the Slater condition; this condition is replaced by a weaker assumption for the second numerical method presented there. The articles [21,22] present a proximal interior point method and a relaxed cutting plane method, respectively; both treat CSIP problems specific to an area of application, asset pricing and the general capacity problem, respectively. In [5], a method combining affine scaling and universal barrier functions is proposed and compared to other algorithms, such as the primal-dual LP algorithm, classical affine scaling, and the dual problem using a universal barrier function, with favorable results in computational time.
Sufficient and necessary optimality conditions under different assumptions have also been studied for CSIP. Dinh et al. [4] present optimality conditions for CIP, assuming that the inequality system defining the feasible set is Farkas-Minkowski and that the set epi h* + cl K is weak*-closed (where K denotes the characteristic cone of F). In [13], the authors present a sufficient optimality condition for CSIP by means of Lagrange multipliers and the concept of immobile indices, under the assumption that the immobility order of the inequality system defining the feasible set is finite. The authors compare this new optimality condition to a sufficient optimality criterion based on the dual equivalent of the problem, and to necessary and sufficient optimality conditions obtained under the assumption that the functions f_t possess the uniform mean value property for all t ∈ T. In [14], a comparison is made among the different assumptions that can be imposed on the inequality system of a CSIP problem and the consequences they have on the optimality conditions of the CSIP problem and on the linearization and consistency of the inequality system. Among these assumptions are the Abadie and basic constraint qualifications, the Pshenichnyi-Levin-Valadier and weak Pshenichnyi-Levin-Valadier properties, and the Slater and strong Slater conditions, all of which are studied and compared in their CSIP generalizations in [14]. Hassouni and Oettli [12] present the convex generalization of the regularity condition introduced in [11] as part of the hypotheses under which the Karush-Kuhn-Tucker conditions are necessary and sufficient for optimality in an LSIP problem.
Following the approach taken in classical optimization problems (posed on R^n and with a finite number of constraints), researchers have studied the relationship between the active constraints at a point x ∈ bd F and the feasible and solution sets, F and F_opt, respectively, in semi-infinite programming. However, in semi-infinite programming the fact that a point x is on the boundary of the feasible set F does not guarantee the existence of an active constraint; that is, we may have sup_{t ∈ T} f_t(x) = 0 while f_t(x) < 0 for all t ∈ T. To illustrate this, we present the following example.
The functions f_t are convex, and F denotes the solution set of the system. It is important to note that the points of bd F \ {0_n} have no active constraint in the conventional sense, despite the fact that they are in bd F.
Due to this drawback, two new approaches have been proposed [9,10] for the linear semi-infinite programming case. In this article, we extend the following definition of γ-active constraints, presented in [9,10] for the linear semi-infinite programming case, to the convex semi-infinite programming case.
Definition 2. Let x̄ ∈ R^n and γ > 0. We define the set of γ-active indices at x̄ as T(x̄, γ) := {t ∈ T : a_t'y = b_t for some y ∈ x̄ + γB_n} and the corresponding set of γ-active constraints as

(2)  W(x̄, γ) := {a_t : t ∈ T(x̄, γ)},

where a_t ∈ R^n and b_t ∈ R describe the inequality constraints of the LSIP problem and B_n denotes the open unit ball in R^n.
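For a finite linear system, membership in T(x̄, γ) can be tested directly: the hyperplane a_t'y = b_t meets the open ball x̄ + γB_n exactly when the distance |a_t'x̄ − b_t|/‖a_t‖ is strictly less than γ. The following is a minimal numeric sketch under that reading, with a hypothetical finite system of our own (not from the article).

```python
import math

# Hypothetical finite linear system a_t' y <= b_t, t = 0..3, in R^2.
A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 0.0)]
b = [1.0, 1.0, 3.0, 0.0]

def gamma_active(xbar, gamma):
    """Indices t whose hyperplane a_t'y = b_t meets xbar + gamma*B_n.

    For a_t != 0 this holds iff the distance |a_t'xbar - b_t| / ||a_t||
    is strictly less than gamma (B_n is the *open* unit ball)."""
    active = []
    for t, (a, bt) in enumerate(zip(A, b)):
        norm = math.hypot(*a)
        dist = abs(sum(ai * xi for ai, xi in zip(a, xbar)) - bt) / norm
        if dist < gamma:
            active.append(t)
    return active

# At xbar = (1, 0), constraint 0 (x1 <= 1) is active in the classical sense;
# enlarging gamma brings further hyperplanes within reach of the ball.
print(gamma_active((1.0, 0.0), 0.5))   # constraint 0 only
print(gamma_active((1.0, 0.0), 1.2))   # constraints 0, 1, 3
```

Note that the γ-active set grows with γ, while the classical active set corresponds to distance exactly 0.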
Our first approach to extending Definition 2 to the CSIP case is by means of the following definition.

Definition 3. Let x̄ ∈ R^n and γ > 0. We define the set of (subdifferentially) γ-active indices at x̄ as

(4)  T(x̄, γ) := {t ∈ T : f_t(y) = 0 for some y ∈ x̄ + γB_n}

and the corresponding set of γ-active constraints as

(5)  W(x̄, γ) := ∪ {∂f_t(y) : t ∈ T, y ∈ x̄ + γB_n, f_t(y) = 0},

where ∂f(x_0) := {g ∈ R^n : f(x) ≥ f(x_0) + g'(x − x_0) for all x ∈ R^n} is the subdifferential of a function f at x_0.
Using the linearization of the convex inequality system of (1) as presented in [3], we formulate the following definition as our second approach to extending the results of [10] to CSIP.

Definition 4. Let x̄ ∈ R^n and γ > 0. We define the set of (linearization) γ-active indices of (1) as

(6)  T_L(x̄, γ) := {t ∈ T : u'y = f_t*(u) for some y ∈ x̄ + γB_n and u ∈ dom f_t*}

and the set of γ-active constraints as

(7)  W_L(x̄, γ) := {u ∈ dom f_t* : u'y = f_t*(u) for some y ∈ x̄ + γB_n and t ∈ T}.

Note that we use the subindex L to distinguish between the definition of γ-active indices based on the convex constraints given in (4) and the definition based on the linearization of the convex constraints as presented above.
The article is organized as follows. Section 2 contains known results that are used in the proofs of later sections, as well as the results from [10] that we extend to the CSIP case. Sections 3 and 4 present the results obtained by extending the definition of γ-active constraints to the CSIP case by means of Definitions 3 and 4, respectively.

PRELIMINARIES
Throughout this article, we make use of the following notation: cone S denotes the convex cone generated by the set S, and K^+ and K^− denote the positive and negative polar cones of the cone K.
In addition, we present the following known results of convex analysis that will be used later on in section 3.
The following proposition is the basis under which we formulate Definition 4 of section 4.
Proof. This equivalence comes directly from the fact that f = f** for proper lower semi-continuous convex functions.
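The biconjugation formula f** = f can be checked numerically for a particular proper lower semi-continuous convex function. The sketch below (our own example, not from the article) approximates both conjugates by grid maximization for f(x) = |x|, so equality holds up to the accuracy of the grids.

```python
# Numeric check of f** = f for the proper lsc convex function f(x) = |x|.
# Both conjugates are approximated by maximizing over finite grids.

def conj(f, u, pts):
    """Grid approximation of f*(u) = sup { u*x - f(x) }."""
    return max(u * x - f(x) for x in pts)

f = lambda x: abs(x)
xs = [i / 100.0 for i in range(-500, 501)]   # x-grid on [-5, 5]
us = [i / 100.0 for i in range(-300, 301)]   # u-grid on [-3, 3]

def biconj(x):
    """Grid approximation of f**(x) = sup_u { u*x - f*(u) }."""
    return max(u * x - conj(f, u, xs) for u in us)

for x in (-2.0, 0.0, 1.5):
    print(x, biconj(x), f(x))   # biconjugate vs. original function
```

Here f*(u) vanishes for |u| ≤ 1, and the supremum defining f**(x) is attained at u = ±1, which both grids contain, so the recovery of |x| is exact at the sampled points.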
We also use other well-known results of convex analysis in section 3. The details and proofs of these results are found in [16].
In this section, we recall the definition of γ-active constraints in LSIP, along with some lemmas and propositions. The concepts and results presented in this section come from [10] and concern LSIP problems whose feasible set F is defined by a linear inequality system

(3)  σ := {a_t'x ≤ b_t, t ∈ T}.

Next, we state without proof the propositions and lemmas presented in [10].
The following lemma provides basic characteristics of Definition 3.
Lemma 8. Given x̄ ∈ bd F, the following statements hold: The following lemmas show that the γ-active constraints at x̄ ∈ F allow us to check the feasibility of points in the open ball x̄ + γB_n and of given directions at x̄.
Lemma 9. Let x̄ ∈ F, γ > 0, and y ∈ x̄ + γB_n. Then y ∈ F if and only if a_t'y ≤ b_t for all a_t ∈ W(x̄, γ).
Lemma 10. Given x̄ ∈ F and d ∈ R^n, the following statements hold: if d ∈ D(F, x̄) and T is finite, then there exists some γ_0 > 0 such that a_t'd ≤ 0 for all a_t ∈ W(x̄, γ) and all positive γ < γ_0. In such a case, D(F, x̄)^− = cone W(x̄, γ).
The following proposition provides necessary conditions for optimality and for certain characteristics of the feasible set.
Proposition 11. Given x̄ ∈ F and γ > 0, the following statements hold: These definitions and results have been studied only in the LSIP case; in the following sections, they will be extended to the CSIP case, with proofs that hold for both the CSIP and the LSIP cases.

γ-ACTIVE CONSTRAINTS IN CSIP VIA THE SUBDIFFERENTIAL
As seen in the previous section, the concept of γ-active constraints in LSIP is useful in determining characteristics of a given point x̄ ∈ R^n with respect to the feasible set. Unfortunately, the definition of γ-active constraints used in [9] is not valid in the general context of CSIP, since the inequalities are not of the form a_t'x ≤ b_t. In order to extend this definition to the convex case, we make use of the subdifferential, a very important tool in convex analysis, in the formulation of Definition 3.

Remark 12.
It is easy to see the equivalence between problems of the form (1) and (3) when f_t and h are affine functions for all t ∈ T, by simply converting the system of linear constraints {a_t'x ≤ b_t, t ∈ T} to the form {f_t(x) := a_t'x − b_t ≤ 0, t ∈ T}. Definition 3 can then be applied, and the set W(x̄, γ) coincides with the set (2), since the subdifferential of the affine function f_t reduces to {a_t}.

0_n as a γ-Active Constraint
An important characteristic of the definitions of γ-active constraints is the set of consequences that follow from 0_n being a γ-active constraint for some x̄ ∈ R^n and γ > 0. In LSIP, the fact that 0_n ∈ W(x̄, γ) for some x̄ ∈ R^n and γ > 0 implies that (3) contains a trivial inequality; in CSIP, however, this condition can be used as an indicator that there exist immobile indices, which may lead to the use of the optimality conditions presented in [13].
Proposition 13 shows that 0_n ∈ W(x̄, γ) for some x̄ ∈ R^n and γ > 0 is a sufficient condition for the existence of immobile indices. We also comment on the relationship between this condition and the Slater condition.
Proposition 13. If 0_n ∈ W(x̄, γ) for some x̄ ∈ R^n and γ > 0, then there exists s ∈ T such that s ∈ T(x) for all x ∈ F.
Proof. For all x ∈ F and t ∈ T, we have f_t(x) ≤ 0. On the other hand, if 0_n ∈ W(x̄, γ) for some x̄ ∈ R^n and γ > 0, then there exist y ∈ x̄ + γB_n and s ∈ T such that f_s(y) = 0 and 0_n ∈ ∂f_s(y). Since f_s is convex and 0_n is a subgradient at y, we have f_s(y) ≤ f_s(z) for all z ∈ R^n, in particular for all z ∈ F. So we have 0 = f_s(y) ≤ f_s(x) ≤ 0 for all x ∈ F. Hence, for all x ∈ F, f_s(x) = 0, which implies that s ∈ T(x).
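The argument of this proof can be replayed numerically. In the sketch below (a hypothetical one-dimensional constraint of our own, not from the article), f_s(x) = x² vanishes at y = 0 and admits 0 as a subgradient there, so f_s is nonnegative everywhere and every feasible point must satisfy f_s(x) = 0; that is, s is an immobile index.

```python
# Numeric illustration of Proposition 13's argument with f_s(x) = x^2:
# f_s(0) = 0 and 0 is a subgradient of f_s at 0, so 0 is a global minimizer
# and every point x with f_s(x) <= 0 must in fact satisfy f_s(x) = 0.

f_s = lambda x: x * x

y = 0.0               # point where f_s vanishes
g = 0.0               # candidate subgradient of f_s at y

# Check the subgradient inequality f_s(x) >= f_s(y) + g*(x - y) on a grid:
grid = [i / 10.0 for i in range(-50, 51)]
assert all(f_s(x) >= f_s(y) + g * (x - y) for x in grid)

# Hence f_s >= 0 everywhere; feasibility f_s(x) <= 0 forces f_s(x) = 0:
feasible = [x for x in grid if f_s(x) <= 0.0]
print(feasible)                    # only x = 0 survives
print([f_s(x) for x in feasible])  # the constraint is active there
```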
From the previous proposition one easily deduces that if 0_n ∈ W(x̄, γ) for some x̄ ∈ R^n and γ > 0, then the inequality system does not satisfy the Slater condition. We thus have

Corollary 14. If the system satisfies the Slater condition, then 0_n ∉ W(x, γ) for all x ∈ R^n and γ > 0.
Proof. This is an immediate consequence of Proposition 13, since the existence of s ∈ T such that s ∈ T(x) for all x ∈ F is incompatible with the Slater condition.
The following example, in which the Slater condition is not satisfied and yet 0_n ∉ W(x, γ) for all x ∈ R^n and γ > 0, shows that the converse of Corollary 14 does not hold.
One can easily see that its solution set is F = {(0, 0)} and, therefore, the Slater condition fails. However, we also note that for all x ∈ R^n and γ > 0, one has 0_n ∉ W(x, γ).
With this counterexample we have seen that the failure of the Slater condition is not sufficient for 0_n ∈ W(x̄, γ) to hold for some x̄ ∈ R^n and γ > 0.

γ-Active Indices
The following lemma provides information about the set of γ-active indices T(x̄, γ) with respect to a point x̄ ∈ bd F and is the generalization of Lemma 8 to the CSIP case. On the other hand, for all t ∈ T \ {t_1, t_2, ..., t_m} and x ∈ x̄ + γ_0 B_n, we have f_t(x) ≤ 0. To prove this last assertion, suppose there exist x̂ ∈ x̄ + γ_0 B_n and t̂ ∈ T \ {t_1, t_2, ..., t_m} such that f_t̂(x̂) > 0. Then there exists x̃ ∈ ]x̄, x̂[ ⊂ x̄ + γB_n such that f_t̂(x̃) = 0. Then t̂ ∈ {t_1, t_2, ..., t_m}, which is a contradiction.

Therefore, for all x ∈ x̄ + γ_i B_n we have f_t(x) ≤ 0 for all t ∈ T, which implies that x̄ ∉ bd F; but this is a contradiction.
(iii) If T is finite, then T(x̄, γ) is finite and, hence, by (ii), T(x̄) ≠ ∅. This leads to two possible cases.

Proof. Suppose y ∈ F; then f_t(y) ≤ 0 for all t ∈ T, in particular for all t ∈ T(x̄, γ). To prove the converse statement, suppose y ∉ F; then there exists s ∈ T such that f_s(y) > 0. Since f_s(x̄) ≤ 0, there exists z ∈ [x̄, y[ such that f_s(z) = 0. Since [x̄, y[ ⊂ x̄ + γB_n, it follows that s ∈ T(x̄, γ), which yields a contradiction.
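The feasibility test above (checking only the γ-active indices instead of all of T) can be sketched numerically. The finite affine family below is a hypothetical example of our own, not from the article: f_t(x) = t·x − 1 for t ∈ {0.5, 1, 2}, so F = ]−∞, 1/2], and an index is γ-active at x̄ when f_t vanishes somewhere in the open interval ]x̄ − γ, x̄ + γ[.

```python
# Sketch of the feasibility test via gamma-active indices, for the
# hypothetical finite convex (here affine) family f_t(x) = t*x - 1,
# t in {0.5, 1, 2}, whose feasible set is F = (-inf, 1/2].

T = [0.5, 1.0, 2.0]
f = lambda t, x: t * x - 1.0

def active(xbar, gamma):
    """Indices whose constraint vanishes inside the open ball xbar + gamma*B
    (the zero of f_t is at x = 1/t)."""
    return [t for t in T if abs(xbar - 1.0 / t) < gamma]

xbar, gamma = 0.5, 0.3
acts = active(xbar, gamma)       # only t = 2 is gamma-active here

# On the ball xbar + gamma*B, checking the gamma-active constraints alone
# decides feasibility, in agreement with checking the whole family:
for y in [xbar + gamma * k / 10.0 for k in range(-9, 10)]:
    full = all(f(t, y) <= 0 for t in T)
    reduced = all(f(t, y) <= 0 for t in acts)
    assert full == reduced
print(acts)
```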
Next, we formulate and prove the extension of Lemma 9. For s ∉ T(x̄, γ), we have f_s'(x̄; d) < 0. Therefore, we have through Proposition

The next two results complement Proposition 5 and Lemma 18.
Theorem 19. Let x̄ ∈ F, d ∈ R^n, and γ > 0, and assume that the Slater condition is fulfilled. If f_t'(y; d) ≤ 0 for all y ∈ x̄ + γB_n and t ∈ T(y), then d ∈ D(F, x̄).
Proof. Suppose that d ∉ D(F, x̄) and let x̂ be a Slater point. Without loss of generality, we will assume that d < − < 0, but this is impossible because, as f_t is convex, one has f_t'(y; −d) ≥ −f_t'(y; d) ≥ 0. Therefore, x̂ − x̄ and d must be linearly independent. By continuity, there exists λ ∈ ]0, 1] such that, for z := (1 − λ)(x̄ + d) + λx̂, one has z ∈ x̄ + γB_n and f_t(z) > 0. By convexity, we also have

Hence, again by continuity, there exists μ ∈ ]0, 1[ such that, for y := , one has f_t(y) = 0; moreover, without loss of generality, we assume that μ is small enough so as to have (1 − μ)x̄ + μx̂ ∈ x̄ + γB_n. This implies that y ∈ x̄ + γB_n and, therefore, by our assumption, f_t'(y; d) ≤ 0. On the other hand, we have f_t'(y;

Corollary 20. Let x̄ ∈ bd F, γ > 0, and d ∈ R^n be such that u'd ≤ 0 for all u ∈ W(x̄, γ), and assume that the Slater condition is fulfilled. Then d ∈ D(F, x̄).
Proof. Let y ∈ x̄ + γB_n and t ∈ T(y). Since f_t'(y; d) = max_{u ∈ ∂f_t(y)} u'd ≤ max_{u ∈ W(x̄, γ)} u'd ≤ 0, the conclusion immediately follows from Theorem 19.
Proof. This is an immediate consequence of Corollary 20, in view of the definition of the negative polar of a convex cone K and the fact that

The following example shows that the Slater condition is not a superfluous assumption in the preceding results.

Example 22. Let σ be the system {x² ≤ 0} in R. It can easily be seen that the solution set F reduces to {0} and that, for every γ > 0, one has W(0, γ) = {0}. Then cl cone W(0, γ) = {0}. However, D(F, 0) = {0} and hence D(F, 0)^− = R. Therefore, the inclusion D(F, x̄)^− ⊆ cl cone W(x̄, γ) fails in this example. It can also easily be seen that every d ∈ R satisfies the assumptions of Theorem 19 and Corollary 20, even though D(F, 0) = {0}.
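Example 22 can also be checked computationally. The sketch below samples the open ball and collects the gradients of f(x) = x² at the sampled points where f vanishes, confirming that W(0, γ) = {0} for any γ > 0.

```python
# Numeric companion to Example 22: for the system {x^2 <= 0} in R, the only
# point of the ball 0 + gamma*B where the constraint vanishes is y = 0, and
# the derivative there is 0, so W(0, gamma) = {0} for every gamma > 0.

f = lambda x: x * x
df = lambda x: 2.0 * x          # gradient (= unique subgradient) of f

def W(xbar, gamma, step=1e-3):
    """Sample-based sketch of W(xbar, gamma): gradients of f at the sampled
    points of the open ball where f vanishes."""
    n = int(gamma / step)
    pts = [xbar + k * step for k in range(-n + 1, n)]
    return sorted({df(y) for y in pts if f(y) == 0.0})

print(W(0.0, 0.5))    # [0.0]
print(W(0.0, 2.0))    # [0.0]
```

Since cl cone {0} = {0} while D(F, 0)^− = R, the sample confirms how the polar inclusion fails without the Slater condition.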
Corollary 23. Let x̄ ∈ bd F and γ > 0, and assume that the Slater condition is fulfilled. Then

Proof. Assertion (i) is an immediate consequence of Corollary 21 and the fact that D(F, x̄) = {0_n}, that is, D(F, x̄)^− = R^n. To prove (ii), let x̄ ∈ F_opt. By Sion's minimax theorem [19, Corollary 3.3], we have

Hence, there exists g ∈ ∂h(x̄) such that g'd ≥ 0 for every d ∈ D(F, x̄), that is, −g ∈ D(F, x̄)^−. We thus have −g ∈ (−∂h(x̄)) ∩ D(F, x̄)^−, which shows that this set is nonempty. By Corollary 21, it follows that the set (−∂h(x̄)) ∩ cl cone W(x̄, γ) is nonempty, too.

γ-ACTIVE CONSTRAINTS IN CSIP VIA THE LINEARIZATION OF THE INEQUALITY SYSTEM
In this section, we propose an alternative to the definition of γ-active constraints given in (5), which will allow us to extend to the convex setting some useful results [20, Proposition 104] for linear problems.
Definition 4 is the application of the definition of γ-active constraints for the LSIP case to a linearization of the convex constraint system by means of the conjugates f_t* of the functions f_t that define the feasible set. From Proposition 7, it immediately follows that the convex system and its linearization have the same solution set.
Next, we show the relationship that exists between the γ-active constraints resulting from Definitions 3 and 4.
Proof. Let t ∈ T(x̄, γ). Then there exists y ∈ x̄ + γB_n such that f_t(y) = 0. If y is a global minimum of f_t, then 0_n'y = 0 = f_t(y) = f_t*(0_n); hence, t ∈ T_L(x̄, γ). If y is not a global minimum of f_t, then it is not a local maximum either, so there exists x̂ ∈ x̄ + γB_n such that

hence, by continuity of the scalar product, there exists z ∈ [y, x̂[ ⊂ x̄ + γB_n such that û'z = f_t*(û), which shows that t ∈ T_L(x̄, γ).
The following proposition shows that the set W(x̄, γ) is in general smaller than the set W_L(x̄, γ) considered in this section.
Proposition 25. Let x̄ ∈ bd F and γ > 0. Then W(x̄, γ) ⊆ W_L(x̄, γ).

Proof. Suppose g ∈ W(x̄, γ). Then there exist y ∈ x̄ + γB_n and s ∈ T(y) such that g ∈ ∂f_s(y). We thus have g'y = f_s(y) + f_s*(g) = f_s*(g), since f_s(y) = 0. Therefore, g ∈ W_L(x̄, γ).
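The key step in this proof is the Fenchel equality g'y = f_s(y) + f_s*(g), which holds exactly when g is a subgradient of f_s at y. Below is a numeric check with our own example f(x) = x² (not from the article), where the subgradient at y is g = 2y and f*(g) = g²/4.

```python
# Fenchel-Young: g*y <= f(y) + f*(g) always, with equality iff g is a
# subgradient of f at y. Checked for f(x) = x^2, whose conjugate is g^2/4.

f = lambda x: x * x
fstar = lambda g: g * g / 4.0    # closed-form conjugate of x^2

for y in (-1.5, 0.0, 2.0):
    g = 2.0 * y                  # the (unique) subgradient of f at y
    assert abs(g * y - (f(y) + fstar(g))) < 1e-12   # equality holds

# For a non-subgradient, the Fenchel-Young inequality is strict:
y, g = 1.0, 3.0                  # 3 is not a subgradient of x^2 at 1
print(g * y, f(y) + fstar(g))    # 3.0 < 3.25
```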
As shown by the following example, the reverse inclusion does not hold in general, even if the Slater condition is fulfilled.
Example 26. Let σ be the system {x_1^2/2 − x_2 ≤ 0} in R². Obviously, the Slater condition is fulfilled. Straightforward calculations show that, for every γ > 0, one has

Next, we study the consequences of 0_n ∈ W_L(x̄, γ) for some x̄ ∈ R^n and γ > 0. As in the case of the definition of the preceding section, we can also relate 0_n being a γ-active constraint to the Slater condition. The following proposition is a version of Proposition 13 for our new definition of γ-active constraints.
Proposition 27. If 0_n ∈ W_L(x̄, γ) for some x̄ ∈ R^n and γ > 0, then there exists s ∈ T such that s ∈ T(x) for all x ∈ F.
Proof. If 0_n ∈ W_L(x̄, γ) for some x̄ ∈ R^n and γ > 0, then there exist y ∈ x̄ + γB_n and s ∈ T such that f_s*(0_n) = 0, and hence inf_{x ∈ R^n} f_s(x) = −f_s*(0_n) = 0. Therefore, f_s(x) = 0 for all x ∈ R^n such that f_s(x) ≤ 0, in particular for all x ∈ F.
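The identity inf_x f_s(x) = −f_s*(0_n) used in this proof can be checked on a grid. The sketch below uses a hypothetical function of our own, f_s(x) = (x − 1)², whose infimum 0 is attained at x = 1.

```python
# Numeric check of inf_x f_s(x) = -f_s*(0) for f_s(x) = (x - 1)^2, whose
# infimum is 0 (attained at x = 1, which lies on the grid).

f_s = lambda x: (x - 1.0) ** 2
xs = [i / 100.0 for i in range(-500, 501)]   # grid on [-5, 5]

inf_f = min(f_s(x) for x in xs)
minus_conj_at_0 = -max(0.0 * x - f_s(x) for x in xs)   # -f_s*(0)

print(inf_f, minus_conj_at_0)   # both quantities vanish
```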
Remark 28. It is a straightforward consequence of the previous proposition that 0_n ∈ W_L(x̄, γ) for some x̄ ∈ R^n and γ > 0 is a sufficient condition for the Slater condition to be violated; however, as happens with W(x̄, γ), it is not a necessary condition. To show this, we reanalyze Example 15.