A Note on the Lyapunov and Period Constants

It is well known that the number of small-amplitude limit cycles that can bifurcate from the origin of a weak focus or a non-degenerate center of a family of planar polynomial vector fields is governed by the structure of the so-called Lyapunov constants, which are polynomials in the parameters of the system. These constants are essentially the coefficients of the odd terms of the Taylor expansion at zero of the displacement map. Although many authors use the fact that the coefficients of the even terms of this map belong to the ideal generated by the previous odd ones, we have not found a proof in the literature. In this paper we present a simple proof of this fact based on a general property of the composition of one-dimensional analytic orientation-reversing diffeomorphisms with themselves. We also prove similar results for the period constants. These facts, together with some classical tools such as the Weierstrass preparation theorem and the theory of extended Chebyshev systems, are used to revisit some classical results on cyclicity and criticality for polynomial families of planar differential equations.

For this type of vector fields, the maximum number of small-amplitude limit cycles that can bifurcate from the origin as λ varies is governed by the structure of the so-called Lyapunov constants, which are polynomials in λ whenever the dependence of F on λ is polynomial. This number is called the cyclicity of the family ([1,2,9,10]); see Sect. 2 for more details. In fact, the problem of determining the cyclicity, which can be seen as a multiple Hopf bifurcation, goes back to Bautin, who in 1954 considered it for planar quadratic vector fields.
To be more precise, for any n ≥ 2 and i, j ≥ 0 such that 1 < i + j ≤ n, fix u_{i,j}, v_{i,j} ∈ R[λ_1, . . . , λ_m] and let F be the family of polynomial vector fields given by

ẋ = −y + Σ_{1<i+j≤n} u_{i,j}(λ) x^i y^j,  ẏ = x + Σ_{1<i+j≤n} v_{i,j}(λ) x^i y^j.

Clearly F is an m-parametric family of polynomial vector fields having a singularity of center or focus type at the origin. Note also that the dependence on the parameters is polynomial. For any λ_0 ∈ R^m we denote by F_{λ_0} the polynomial vector field obtained by evaluating the polynomials u_{i,j}, v_{i,j} at λ_0.
For λ ∈ R^m and x > 0 small enough let π(λ, x) be the first intersection with the positive X-axis of the solution of the Cauchy problem (ẋ, ẏ) = F_λ(x, y), x(0) = x, y(0) = 0. (1) This map π is called the first return map and it is known that it can be extended analytically to x = 0. For completeness, we include a proof of this fact in Lemma 2.1. The corresponding displacement map is defined as δ(λ, x) = π(λ, x) − x and its zeros give rise to periodic orbits of the system. In general, δ can be written as δ(λ, x) = Σ_{i=3}^∞ w_i(λ) x^i, where w_i(λ) ∈ R[λ_1, . . . , λ_m], see [3,9]. Recall that k local limit cycles are said to bifurcate from the origin of F_{λ_0} if for all δ > 0 and all ε > 0 there exists λ in an ε-neighborhood of λ_0 such that the displacement map has k zeros in U = (0, δ), see [9]. The cyclicity of 0 at F_{λ_0} is defined as the maximum number of limit cycles that can bifurcate from the origin of F_{λ_0}.
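To make these definitions concrete, the first return map can be approximated numerically. The following sketch is our own illustration, not the paper's construction: it assumes numpy and scipy, and uses the particular quadratic field ẋ = −y + a x², ẏ = x rather than the general family F_λ. For a = 0 the field is the linear center and π(x) = x, so the displacement vanishes.

```python
# Numerical sketch of the first return map pi (our illustration, not the
# paper's general family F_lambda): integrate the quadratic field
#   x' = -y + a*x^2,  y' = x
# from (x0, 0) and detect the next crossing of the positive X-axis.
import numpy as np
from scipy.integrate import solve_ivp

def first_return(x0, a=0.0):
    def field(t, z):
        x, y = z
        return [-y + a * x * x, x]

    def cross(t, z):              # zeros of y(t): crossings of the X-axis
        return z[1]
    cross.direction = 1           # y increasing: back on the positive axis

    sol = solve_ivp(field, (0.0, 20.0), [x0, 0.0], events=cross,
                    rtol=1e-10, atol=1e-12)
    # discard the (possible) event at t = 0: a full turn takes time ~ 2*pi
    for t, z in zip(sol.t_events[0], sol.y_events[0]):
        if t > 1.0:
            return z[0]
    raise RuntimeError("no return detected")
```

For a = 0 the computed value first_return(x0) reproduces x0 up to the integration tolerance, i.e. the displacement is numerically zero, as expected for a center.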
To study the cyclicity one commonly uses the property that every w_{2l}(λ) belongs to the ideal generated by the previous w_j(λ) with odd index, namely w_3(λ), w_5(λ), . . . , w_{2l−1}(λ). Consulting several references we have not been able to find an explicit proof of this fact. What appears in detail in many places is that if for some l ≥ 2 and some fixed λ_0, w_3(λ_0) = w_4(λ_0) = · · · = w_{2l−1}(λ_0) = 0, then w_{2l}(λ_0) = 0; but, of course, this is in general weaker than the property described above, which is the one needed to study the cyclicity. As we will see, this stronger property is essentially due to a symmetry presented by the family of differential equations when written in polar coordinates.
The first aim of this paper is to provide a simple and self-contained proof of the above property. The key point of our approach is that, due to the mentioned symmetry, for x ∈ R small enough, the first return map satisfies π(λ, x) = σ(λ, σ(λ, x)), where for λ ∈ R^m and x ∈ R, σ(λ, x) is the first intersection with the full X-axis of the solution of the same Cauchy problem (1). Then the proof is a consequence of the first part of the next theorem about analytic maps. As we will see, its second part will be useful to prove similar properties for the so-called period function and its expansion at zero, which we introduce later.
Theorem A Let f(x) = −x + Σ_{i=2}^∞ a_i x^i be a real analytic map and write f(f(x)) = Σ_{i=1}^∞ w_i x^i. Then the following statements hold: (i) w_1 = 1, w_i ∈ R[a_2, . . . , a_i], w_2 = 0 and for any n, w_{2n} belongs to the ideal generated by w_3, w_5, . . . , w_{2n−1} in R[a_2, . . . , a_{2n−1}]. (ii) Assume in addition that f is an involution and let G = Σ_{i=0}^∞ b_i x^i be a real analytic map satisfying G(f(x)) = G(x). Then b_1 = 0 and for any n ≥ 1, b_{2n+1} belongs to the ideal generated by b_2, b_4, . . . , b_{2n}. Notice that the above result deals with orientation-reversing local real analytic diffeomorphisms f. For such maps, in the discrete setting, it is also interesting to study the bifurcation of 2-periodic orbits. A new notion of cyclicity, the so-called 2-cyclicity, and some new constants, the so-called stability constants, appear ([4]), and the first part of Theorem A can also be used to obtain some algebraic properties for them.
We also want to remark that the hypothesis in item (ii) that f is an involution is essential. Otherwise, taking any x_0 close enough to the origin, either the sequence {f^i(x_0)}_{i∈N} or {f^{−i}(x_0)}_{i∈N} accumulates at the origin, giving rise to an accumulating sequence of zeros of G(x) − G(x_0) and proving that any analytic G satisfying G(f(x)) = G(x) must be constant.
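This accumulation argument can be illustrated numerically. For instance, f(x) = −x + x² reverses orientation (f′(0) = −1) but is not an involution, since f(f(x)) = x − 2x³ + x⁴ is not the identity; its iterates starting at x_0 = 0.1 accumulate at the origin, so any analytic invariant G would take the same value on all of them. (For the involution f(x) = −x, nonconstant invariants such as G(x) = x² do exist.) A minimal sketch:

```python
# f(x) = -x + x^2 reverses orientation but is not an involution:
# f(f(x)) = x - 2x^3 + x^4.  Its iterates from x0 = 0.1 slowly accumulate
# at the origin, which is why G(f(x)) = G(x) forces an analytic G to be
# constant in the non-involutive case.
def f(x):
    return -x + x * x

orbit = [0.1]
for _ in range(200_000):
    orbit.append(f(orbit[-1]))
```

After 200,000 iterations the orbit is well inside (−0.01, 0.01), while never leaving the initial interval.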
Based on some relevant references on the subject ([5,9]), we revisit in Theorems B and C two well-known results for obtaining either the cyclicity or the criticality (see below for a definition) of a family of polynomial vector fields. We will try to make our proofs clear and sufficiently detailed. The first part of both theorems is a corollary of Theorem A. The remaining statements are consequences of the Weierstrass preparation theorem, of a result about analytic functions in several variables stated in [9] and proved in [6, Thm 7], and of the fact that certain sets of functions appearing in the proof form extended complete Chebyshev systems (see the definition in Sect. 2). This last step differs from most previous approaches, where the division-derivation algorithm is used.
Theorem B provides an upper bound for the cyclicity, either when for some value λ = λ_0 the origin is a weak focus of finite order: w_{2l+1}(λ_0) ≠ 0 for some l ≥ 1 and all the previous w_i vanish at λ = λ_0; or when the origin is a center: w_i(λ_0) = 0 for all i ≥ 3. In the first case the upper bound will be l − 1 and in the second one it will depend on the number of generators of the Bautin ideal, which we recall is the ideal generated by all the w_i(λ), i ≥ 3. For the sake of notation, we will say that a map W_j : R^m → R^j defined as

W_j(λ) = (w_3(λ), w_5(λ), . . . , w_{2j+1}(λ)) (2)

fills a neighborhood of the origin near λ = λ_0 if there is a neighborhood U of 0 ∈ R^j such that for all z ∈ U there is a λ near λ_0 with W_j(λ) = z. A sufficient condition for W_j to fill a neighborhood of the origin is that the rank of the matrix DW_j(λ_0) is maximal, where DW_j denotes the differential matrix. Notice that this condition is not necessary.

Theorem B Let δ(λ, x) = Σ_{i=3}^∞ w_i(λ) x^i be the displacement map associated to F_λ. Then there exists k such that w_3, w_5, . . . , w_{2k+1} generate the ideal generated by (w_i)_{i≥3}, and k is minimal with this property.
Assume that λ_0 is such that w_3(λ_0) = w_5(λ_0) = · · · = w_{2l−1}(λ_0) = 0 and w_{2l+1}(λ_0) ≠ 0 for some 1 ≤ l ≤ k. Then the cyclicity of 0 at λ_0 is less than or equal to l − 1. Moreover, if the map W_{l−1} fills a neighborhood of the origin near λ = λ_0, the cyclicity is exactly l − 1.
If, on the contrary, w_i(λ_0) = 0 for all i ≥ 3, then the cyclicity of 0 at λ_0 is less than or equal to k − 1. Moreover, if the map W_k fills a neighborhood of the origin near λ = λ_0, the cyclicity is exactly k − 1.
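The remark preceding Theorem B, that the rank condition on DW_j is sufficient but not necessary for filling a neighborhood, can be illustrated with a toy one-parameter map of our own choosing (not the example of the original text): W(t) = t³ has W′(0) = 0, yet it fills a neighborhood of 0, since every small target z has the parameter preimage t = z^{1/3}. A sketch assuming numpy:

```python
# Toy example (our own choice): W(t) = t^3 has zero derivative at t = 0,
# so the rank condition fails, yet W still fills a neighborhood of 0:
# every target z near 0 is hit by the parameter t = cbrt(z).
import numpy as np

def W(t):
    return t**3

targets = np.linspace(-1e-3, 1e-3, 11)   # a neighborhood of 0 in the image
params = np.cbrt(targets)                # parameters near 0 hitting them
```

The parameters stay within [−0.1, 0.1], so arbitrarily small targets are reached by arbitrarily small parameters.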
Assume now that the polynomials u_{i,j}(λ), v_{i,j}(λ) that define F are such that for any λ ∈ R^m the vector field F_λ has a center at the origin. In this case, for all λ ∈ R^m the map σ(λ, ·) is an involution, because π(λ, x) = σ(λ, σ(λ, x)) = x. We can also define the map T(λ, x) that assigns to each λ and each sufficiently small x the period of the orbit of F_λ passing through the point (x, 0), x ∈ R.
In this context it is said that k local critical periods bifurcate from the center of F_{λ_0} if for all δ > 0 and all ε > 0 there exists λ in an ε-neighborhood of λ_0 such that ∂T(λ, x)/∂x = 0 has k solutions in U = (0, δ), see [5]. The criticality of 0 at F_{λ_0} is defined as the maximum number of local critical periods that can bifurcate from the center of F_{λ_0}. It is well known that the period function T is real analytic at (λ, 0) and that

T(λ, x) = 2π + Σ_{i=1}^∞ b_i(λ) x^i, with b_i(λ) ∈ R[λ_1, . . . , λ_m].

Note that in this situation we have T(λ, σ(λ, x)) = T(λ, x) for all λ ∈ R^m and all x small enough, so item (ii) of Theorem A applies. Using this fact, together with the same arguments and tools used in the proof of Theorem B, we can prove Theorem C.
Similarly to (2), we consider the maps T_j : R^m → R^j defined as T_j(λ) = (b_2(λ), b_4(λ), . . . , b_{2j}(λ)), and let k be minimal such that b_2, b_4, . . . , b_{2k} generate the ideal generated by (b_i)_{i≥2}.

Theorem C Assume that λ_0 is such that b_2(λ_0) = b_4(λ_0) = · · · = b_{2l−2}(λ_0) = 0 and b_{2l}(λ_0) ≠ 0 for some 1 ≤ l ≤ k. Then the criticality of 0 at λ_0 is less than or equal to l − 1. Moreover, if the map T_{l−1} fills a neighborhood of the origin near λ = λ_0, the criticality is exactly l − 1.
If, on the contrary, b_i(λ_0) = 0 for all i ≥ 2, then the criticality of 0 at λ_0 is less than or equal to k − 1. Moreover, if the map T_k fills a neighborhood of the origin near λ = λ_0, the criticality is exactly k − 1.

Proof of Theorem A
(i) Write f(x) = Σ_{i=1}^∞ a_i x^i with a_1 = −1. Composing f with itself, for all n we have that

w_n = Σ_{i=1}^n a_i Σ_{j_1+···+j_i=n} a_{j_1} · · · a_{j_i}.

Hence for n ≥ 2 we obtain w_n ∈ R[a_2, . . . , a_n]. Now we proceed by induction on n. From the above formula we get w_2 = a_1 a_2 + a_2 a_1² = 0, w_3 = −2(a_3 + a_2²) and w_4 = a_2(a_3 + a_2²) = −a_2 w_3/2, so the statement holds for n = 2. Assume that it holds for all i ≤ n − 1; we prove it for n. From the equality (f ∘ f) ∘ f = f ∘ (f ∘ f), looking at the coefficient of x^{2n} on both sides and using that a_1 = −1 and w_1 = 1, we obtain an expression of 2w_{2n} as a combination, with coefficients in R[a_2, . . . , a_{2n−1}], of w_2, . . . , w_{2n−1}. Thus w_{2n} belongs to the ideal generated by w_3, w_4, . . . , w_{2n−1} in R[a_2, . . . , a_{2n−1}], which by the induction hypothesis coincides with the ideal generated by w_3, w_5, . . . , w_{2n−1}. By the principle of induction, statement (i) holds.

(ii) The proof is also by induction. Since G(f(x)) = G(x), looking at the coefficient of x on both sides we have b_1 = −b_1 and therefore b_1 = 0. Similarly, comparing the coefficients of x³ gives b_3 = −a_2 b_2, and the statement holds for n = 1. Now assume that the statement holds for i ≤ n − 1; we prove it for i = n. Looking at the coefficient of x^{2n+1} on both sides, and using that a_1 = −1 and b_1 = 0, we obtain an expression of 2b_{2n+1} in terms of b_2, . . . , b_{2n}, and the result follows again from the induction hypothesis.
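The base-case computations above can be checked symbolically. The following sketch (assuming sympy; the truncation order is ours) composes a truncation of f with itself and reads off the coefficients w_i of f(f(x)); the coefficients up to x⁵ depend only on a_2, . . . , a_5, so a degree-5 truncation suffices.

```python
# Symbolic check of the base case of Theorem A: compose a truncation of
# f(x) = -x + a2 x^2 + ... with itself and extract the coefficients w_i.
import sympy as sp

x, a2, a3, a4, a5 = sp.symbols('x a2 a3 a4 a5')

f = -x + a2*x**2 + a3*x**3 + a4*x**4 + a5*x**5
ff = sp.expand(f.subs(x, f))          # f(f(x)); coefficients valid up to x^5

# coefficients w_1 .. w_5 of the composition
w = {i: sp.expand(ff.coeff(x, i)) for i in range(1, 6)}
# w[1] == 1, w[2] == 0, w[3] == -2*(a3 + a2**2), w[4] == -a2*w[3]/2
```

In particular w_4 lies in the ideal generated by w_3, the first instance of the even-coefficient property.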
The next lemma shows that both the first intersection map and the period function are analytic in a whole neighborhood of the origin on the X-axis.

Lemma 2.1 Let F be an analytic vector field of the form considered above and, for x small enough, let f(x) be the first intersection with the X-axis of the solution of (1) starting at (x, 0). Then the map f is analytic at 0 and f′(0) = −1.

Assume in addition that F has a center at the origin, and let T be the period function. Then the map T can be extended to 0 by T(0) = 2π and it is analytic at 0.
Proof After the polar change of variables x = r cos θ, y = r sin θ we obtain a system of the form

ṙ = Σ_i M_i(θ) r^i,  θ̇ = 1 + Σ_i N_i(θ) r^i,

where the M_i and N_i are trigonometric homogeneous polynomials of degree i. So near r = 0 we have

dr/dθ = G(θ, r). (4)

Note that G(θ + π, −r) = −G(θ, r). For ρ small enough let r(θ, θ_0, ρ) be the solution of (4) that satisfies r(θ_0, θ_0, ρ) = ρ. With this notation our map can be written as f(x) = −r(π, 0, x). Since the map r(θ, θ_0, ρ) is analytic, to prove the analyticity of f we only need to show that −r(π, 0, x) = r(2π, π, −x).
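The conclusion T(0) = 2π can be checked numerically on a concrete center. The sketch below is our own illustration (assuming scipy; the reversible quadratic field ẋ = −y, ẏ = x + x² is our choice, not the paper's general family): it measures the period of the orbit through (x_0, 0), which tends to 2π as x_0 → 0.

```python
# Numerical check of T(0) = 2*pi on the reversible quadratic center
#   x' = -y,  y' = x + x^2
# (an illustrative choice): measure the period of the orbit through (x0, 0).
import numpy as np
from scipy.integrate import solve_ivp

def period(x0):
    def field(t, z):
        x, y = z
        return [-y, x + x * x]

    def cross(t, z):              # crossings of the X-axis with y increasing
        return z[1]
    cross.direction = 1

    sol = solve_ivp(field, (0.0, 20.0), [x0, 0.0], events=cross,
                    rtol=1e-10, atol=1e-12)
    # the first such crossing with t > 0 closes the loop: its time is the period
    for t in sol.t_events[0]:
        if t > 1.0:
            return t
    raise RuntimeError("no return detected")
```

For small amplitudes the computed period differs from 2π by a quantity of order x_0², in agreement with the expansion T = 2π + Σ b_i x^i with b_1 = 0.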
For λ ∈ R^m and x small enough consider σ(λ, x), the first intersection with the X-axis of the solution of the Cauchy problem (1). From Lemma 2.1 we know that the map σ is analytic in x. On the other hand, from the analytic dependence on the parameters, this map is also analytic in λ. So we have

σ(λ, x) = −x + Σ_{i=2}^∞ a_i(λ) x^i

for certain analytic functions a_i. In fact, in [3] it is proved that a_i ∈ R[λ_1, . . . , λ_m] for all i ∈ N.

Lemma 2.2
For all λ ∈ R^m the displacement map associated to F_λ can be written as δ(λ, x) = Σ_{i=3}^∞ w_i(λ) x^i with w_i ∈ R[λ_1, . . . , λ_m].

Proof Let λ ∈ R^m and consider the classical first return map π(λ, x), which is given by σ(λ, σ(λ, x)), where σ is the map described in Lemma 2.1. So it is analytic in x and λ, and we can write π(λ, x) = x + Σ_{i≥2} w_i(λ) x^i. Moreover, from Theorem A it follows that w_2 = 0, and hence π(λ, x) = x + Σ_{i≥3} w_i(λ) x^i, which gives the claimed expression for δ.

To prove our main result we will need the following result. The first statement is a consequence of the Weierstrass preparation theorem, while the second one is proved in [9].

Proposition 2.3 Let I be an open interval containing 0, let V be an open subset of R^m and let γ : V × I −→ R be a real analytic map that can be developed in the following way:
γ(λ, x) = Σ_{i=0}^∞ a_i(λ) x^i, where the functions a_i(λ) are real analytic. Let λ_0 ∈ V. In the ring of analytic germs at λ_0 consider the ideal J generated by the functions (a_i)_{i∈N} and let k_0 = k(λ_0) be such that J is generated by the germs of a_0, a_1, . . . , a_{k_0} at λ_0. In the case that a_i(λ_0) = 0 for all i < l ≤ k_0 and a_l(λ_0) ≠ 0, there exist an open subset V_0 of V containing λ_0 and an open subinterval I_0 of I containing 0 such that for all (λ, x) ∈ V_0 × I_0,

γ(λ, x) = Σ_{i=0}^{k_0} a_i(λ) x^i (1 + x ϕ_i(λ, x)),

where ϕ_i(λ, x) is real analytic for every i = 0, 1, . . . , k_0.
Proof Let λ_0 ∈ R^m. First we consider the case where a_i(λ_0) = 0 for all i < l ≤ k_0 and a_l(λ_0) ≠ 0. By the Weierstrass preparation theorem we have, in a neighborhood of (λ_0, 0),

γ(λ, x) = h(λ, x)(x^l + c_{l−1}(λ) x^{l−1} + · · · + c_0(λ)),

where h is real analytic and h(λ_0, 0) ≠ 0. Writing h(λ, x) = Σ_{i=0}^∞ h_i(λ) x^i and comparing coefficients with the expansion of γ, it follows inductively that for all i ∈ {0, . . . , l}, c_i(λ) = a_i(λ) + Σ_{j=0}^{i−1} a_{i,j}(λ) a_j(λ) for certain analytic functions a_{i,j}(λ). Setting g(λ, x) = h(λ, x)/h_0(λ) and rearranging, the result follows in this case. The proof when a_i(λ_0) = 0 for all i ∈ N can be found in [9] and is based on [6, Thm. 7].
Notice that in Proposition 2.3 the value k_0 depends on λ_0, because the functions a_i(λ) are arbitrary real analytic functions. On the other hand, in the next theorem, since R[λ_1, . . . , λ_m] is a Noetherian ring, by the Hilbert basis theorem the ideal generated by (w_i)_{i∈N} is finitely generated and there exists a minimal universal r ∈ N such that w_3, w_4, . . . , w_r generate the ideal for all λ. Note that from Theorem A it follows that r is odd.
Proof The fact that r is odd follows from Theorem A, because the return map is the second iterate of the map σ; thus r = 2k + 1. Assume now that λ_0 is such that w_s(λ_0) ≠ 0 and w_i(λ_0) = 0 for all i < s. Again from Theorem A we get that s = 2l + 1 with l ≤ k. In this case Proposition 2.3 says that

δ(λ, x) = Σ_{i=3}^{2k+1} w_i(λ) x^i (1 + x ϕ_i(λ, x)) (5)

for some analytic maps ϕ_i. Theorem A implies that for any n ≥ 2, w_{2n} belongs to the ideal generated by w_3, w_5, . . . , w_{2n−1}. Hence, for each n,

w_{2n}(λ) = Σ_j r_{j,n}(λ) w_{2j+1}(λ)

for some polynomials r_{j,n}. By using these relations and rearranging the finite sum (5) we get that

δ(λ, x) = Σ_{i=1}^{k} w_{2i+1}(λ) x^{2i+1}(1 + x h_{2i+1}(λ, x))

for some analytic maps h_{2i+1}.
The result when w_i(λ_0) = 0 for all i ≥ 3 follows by applying again Theorem A, the second part of Proposition 2.3 and the same type of reasoning.
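The ideal-membership relations w_{2n} ∈ (w_3, w_5, . . . , w_{2n−1}) used above can be verified effectively by polynomial reduction. A minimal sketch with sympy, using the explicit expressions w_3 = −2(a_3 + a_2²) and w_4 = a_2(a_3 + a_2²) from the proof of Theorem A:

```python
# Check w4 ∈ (w3) by multivariate polynomial reduction: sympy's reduced()
# returns quotients q and a remainder r with w4 = q[0]*w3 + r.
import sympy as sp

a2, a3 = sp.symbols('a2 a3')

w3 = -2*(a3 + a2**2)
w4 = a2*(a3 + a2**2)

q, r = sp.reduced(w4, [w3], a2, a3)
# r == 0, so w4 lies in the ideal generated by w3 (indeed w4 = -(a2/2)*w3)
```

For larger n one would reduce w_{2n} against a Gröbner basis of (w_3, w_5, . . . , w_{2n−1}) in the same way.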

Definition 2.5
Let λ_0 ∈ R^m. We say that the singularity of F_{λ_0} has cyclicity N_{λ_0} if (a) there exist ε_0 > 0 and δ_0 > 0 such that every F_λ with λ in the ε_0-neighborhood of λ_0 cannot have more than N_{λ_0} limit cycles within the δ_0-neighborhood of 0 ∈ R²; (b) for any choice of positive numbers ε < ε_0 and δ < δ_0, there exists λ in the ε-neighborhood of λ_0 such that F_λ has N_{λ_0} limit cycles.

Definition 2.6
An ordered set of n analytic functions (f_0, f_1, . . . , f_{n−1}) is an extended complete Chebyshev system (in short, ECT-system) on an interval I if, for all k = 1, 2, . . . , n, any nontrivial linear combination α_0 f_0(x) + α_1 f_1(x) + · · · + α_{k−1} f_{k−1}(x) has at most k − 1 isolated zeros on I counted with multiplicities. The Wronskian of (f_0, . . . , f_{k−1}) at x ∈ I is W[f_0, . . . , f_{k−1}](x) = det( f_j^{(i)}(x) )_{0≤i,j≤k−1}.
For the sake of shortness we will sometimes denote by W[f_0, . . . , f_{k−1}] the Wronskian of (f_0, . . . , f_{k−1}). The following lemma ([7]) characterizes ECT-systems: (f_0, . . . , f_{n−1}) is an ECT-system on I if and only if, for each k = 1, 2, . . . , n, W[f_0, . . . , f_{k−1}](x) ≠ 0 for all x ∈ I.

Proof of Theorem B From Theorem 2.4 it follows that near (λ_0, 0) we have

δ(λ, x) = Σ_{i=l}^{j} w_{2i+1}(λ) x^{2i+1}(1 + x h_{2i+1}(λ, x)),

where j = k also in the case when w_i(λ_0) = 0 for all i ≥ 3. So in any case the result follows from proving that the ordered set (g_1, . . . , g_j) := (x³(1 + x h_3(λ, x)), . . . , x^{2j+1}(1 + x h_{2j+1}(λ, x))) is an ECT-system on some open interval (0, ε) for all λ in a neighborhood of λ_0, because the number of zeros of δ(λ, x) in this interval is smaller than or equal to the number of zeros of a linear combination H(x) = Σ_i α_i g_i(x).

That (g_1, . . . , g_j) is an ECT-system follows from the easily computable fact that each Wronskian has the form W[g_1, . . . , g_i] = c_i x^{m_i}(1 + x f_i(λ, x)) for a certain real analytic function f_i, a nonzero constant c_i and a suitable power m_i. Now let U_0 = V_0 × (−ε, ε) be a neighborhood of (λ_0, 0) such that 1 + x f_i(λ, x) ≠ 0 for all i ≤ l and (λ, x) ∈ U_0. Thus for all x ∈ (0, ε), all λ ∈ V_0 and all i ≤ l, W[g_1, . . . , g_i] ≠ 0. This shows that (g_1, . . . , g_l) is an ECT-system on (0, ε) for all λ ∈ V_0, and therefore the cyclicity of 0 at λ_0 is less than or equal to l − 1.

The map H can clearly have l − 1 zeros in U_0, because the values α_i can be taken arbitrarily and it is well known that ECT-systems provide a full unfolding whose bifurcation diagram is equivalent to that of the polynomials of degree l − 1, see [8]. In the case that w_i(λ_0) = 0 for all i ≥ 3, the fact that W_k fills a neighborhood of zero implies that we can choose λ such that the corresponding δ(λ, x) has k − 1 zeros in (0, ε). In the other cases, w_{2l+1}(λ_0) ≠ 0 with 1 ≤ l ≤ k, the result follows similarly, because the map W_{l−1} also fills a neighborhood of zero and we can choose λ near λ_0 such that δ(λ, x) has l − 1 zeros.

Using the same tools and similar arguments as in the proofs of Theorem 2.4 and Theorem B we can prove Theorem C. For the sake of brevity we omit the details.
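The Wronskian characterization of ECT-systems can be checked symbolically on the leading monomials appearing in the proof of Theorem B. A sketch assuming sympy: for (x³, x⁵, x⁷) every successive Wronskian is a nonzero monomial, hence nonvanishing on any interval (0, ε).

```python
# Wronskian test for ECT-systems on the leading monomials (x^3, x^5, x^7):
# each successive Wronskian is a nonzero monomial, so it does not vanish
# on (0, eps) and the lemma above applies.
import sympy as sp

x = sp.symbols('x')

def wronskian(fs):
    n = len(fs)
    # rows: derivative orders 0..n-1; columns: the functions
    return sp.factor(sp.Matrix(n, n, lambda i, j: sp.diff(fs[j], x, i)).det())

g = [x**3, x**5, x**7]
W1 = wronskian(g[:1])   # x^3
W2 = wronskian(g[:2])   # 2*x^7
W3 = wronskian(g[:3])   # 16*x^12
```

Since all three Wronskians are nonzero on (0, ε), the ordered set (x³, x⁵, x⁷) is an ECT-system there, so a combination α_1 x³ + α_2 x⁵ + α_3 x⁷ has at most two zeros on that interval.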