On a conjecture on the integrability of Liénard systems

We consider the Liénard differential systems in C^2

ẋ = y + F(x),  ẏ = x,  (1)

where F(x) is an analytic function satisfying F(0) = 0 and F′(0) ≠ 0. Then these systems have a strong saddle at the origin of coordinates. It has been conjectured that if such systems have an analytic first integral defined in a neighborhood of the origin, then the function F(x) is linear, i.e. F(x) = ax. Here we prove this conjecture, and show that when F(x) is linear and system (1) has an analytic first integral, this first integral is a polynomial.


Introduction and statement of the main results
One of the main problems for differential systems in C^2, and in particular for the Liénard differential systems

ẋ = y + F(x),  ẏ = x,  (2)

with the function F(x) analytic, is to know whether they are integrable. If the function F satisfies F(0) = F′(0) = 0, then the eigenvalues of the linear part of system (2) at the singular point located at the origin of coordinates are ±1, and consequently the origin is a weak saddle. Recall that a saddle is weak if its eigenvalues are ±λ with 0 ≠ λ ∈ R, and a saddle is strong when its eigenvalues are λ_1, λ_2 ∈ R with λ_1 < 0 < λ_2 and λ_2 ≠ −λ_1.

The vector field associated to the Liénard differential system (2) is

X = (y + F(x)) ∂/∂x + x ∂/∂y.

We recall that the function H = H(x, y) is a first integral of system (2) in an open subset U of C^2 if

X H = 0 in U.  (3)

From Theorem 1 of Gasull and Giné [4] the next result follows.
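The weak/strong saddle dichotomy can be checked directly. A minimal sympy sketch (variable names are ours), computing the eigenvalues of the linear part of (2) at the origin when F′(0) = a:

```python
import sympy as sp

x, y, a = sp.symbols('x y a')
# Liénard system (2): xdot = y + F(x), ydot = x; only the linear part of F,
# namely F(x) = a*x with a = F'(0), matters for the eigenvalues at the origin.
F = a * x
J = sp.Matrix([[sp.diff(y + F, x), sp.diff(y + F, y)],
               [sp.diff(x, x), sp.diff(x, y)]])
eigs = list(J.eigenvals())

# Weak saddle: F'(0) = a = 0 gives eigenvalues ±1
assert sorted(J.subs(a, 0).eigenvals()) == [-1, 1]
# In general the eigenvalues are (a ± sqrt(a**2 + 4))/2, with product -1,
# so the origin is always a (real) saddle.
l1, l2 = eigs
assert sp.simplify(l1 * l2) == -1
```

The product of the eigenvalues is the determinant of the Jacobian, which equals −1 for every a, so the origin is a saddle for all values of F′(0).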
Theorem 1 The Liénard analytic differential system

ẋ = y + F(x),  ẏ = b x,  (4)

with b a non-zero complex number and F(x) = Σ_{j≥2} a_j x^j, is locally integrable at the origin if and only if F(x) is an even polynomial (i.e. F(−x) = F(x)).
Note that in Theorem 1, as already noticed by the authors, the origin is a weak saddle. Theorem 1 extends to C^2, and to all non-zero complex numbers b, the well-known results on the existence of a local first integral in a neighborhood of the origin for a polynomial Liénard differential system (4) in R^2 having at the origin a center (i.e. b = −1, obtained by Poincaré [10,11]) or a weak saddle (i.e. b = 1, see [1,12,16]).
Throughout the paper Z^+ and Q^+ denote the sets of non-negative integers and non-negative rational numbers, respectively. Consider analytic differential systems in C^2 of the form

u̇ = λu + ⋯,  v̇ = −μv + ⋯,  (5)

where λ and μ are non-zero complex numbers and the dots ⋯ denote nonlinear terms. From Poincaré [10,11] and Furta [3] we know that a necessary condition for the existence of an analytic first integral in a neighborhood of the origin of system (5) is that λ/μ = p/q ∈ Q^+ \ {0} with gcd(p, q) = 1. When λ and μ satisfy this condition we say that the origin is in [p : −q] resonance. A [p : −q] resonant differential system (5), after a scaling of time if necessary, can be written as

u̇ = pu + ⋯,  v̇ = −qv + ⋯,  (6)

with p, q ∈ Z^+ \ {0}. The next result follows from Theorem 4 of [5].

Theorem 2
The Liénard analytic differential system (2) with a strong saddle at the origin can be transformed into a system with a [ p : −q] resonant saddle at the origin.
The study of the existence or not of a first integral in a neighborhood of a [p : −q] resonant saddle is a difficult problem, see for instance [2,7,8,13-15] and the references quoted there. Hence Theorem 2 says that the study of the existence or not of a first integral in a neighborhood of a strong saddle for the Liénard differential system (2) is also difficult. When a planar differential system has a (local) first integral we say that it is (locally) integrable. In [4] the authors left open the following problem (see the last sentence of their paper):

Open Problem We do not know if there are nonlinear integrable cases in systems (2).
Later on, in [5], the following conjecture appears explicitly:

Conjecture The unique integrable case of the Liénard system (2) is the linear one.
We remark that this conjecture is made for Liénard analytic differential systems having a strong saddle at the origin.
The objective of this note is to prove the previous conjecture, first restricted to polynomial first integrals and then in full generality. Our first main result is:

Theorem 3 If a Liénard analytic differential system (2) has a local analytic first integral defined in a neighborhood of the origin, then

a = F′(0) = ±(k_1 − k_2)/√(k_1 k_2),  (7)

where a ≠ 0 and k_1 and k_2 are coprime positive integers.
The proof of Theorem 3 is given in Sect. 2. Note that since we are interested in systems that are integrable, we must have a satisfying (7).
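As a sanity check on (7), one can verify symbolically that such a value of a produces a resonant pair of eigenvalues with product −1. A sketch with sympy, using the illustrative coprime pair (k_1, k_2) = (2, 1):

```python
import sympy as sp

k1, k2 = 2, 1                               # illustrative coprime pair, k1 != k2
a = (k1 - k2) / sp.sqrt(k1 * k2)            # the value of a in (7), plus sign
lam = sp.symbols('lam')
# Characteristic equation of the Jacobian of (2) at the origin: lam**2 - a*lam - 1 = 0
l1, l2 = sorted(sp.solve(lam**2 - a * lam - 1, lam), reverse=True)  # l1 > 0 > l2

# The eigenvalues are sqrt(k1/k2) and -sqrt(k2/k1), so l1*l2 = -1 and the
# resonance k2*l1 + k1*l2 = 0 holds, i.e. the saddle is resonant.
assert sp.simplify(l1 - sp.sqrt(sp.Rational(k1, k2))) == 0
assert sp.simplify(l2 + sp.sqrt(sp.Rational(k2, k1))) == 0
assert sp.simplify(k2 * l1 + k1 * l2) == 0
```

With (k_1, k_2) = (2, 1) one gets a = 1/√2 and eigenvalues √2 and −1/√2, a resonant strong saddle.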
When a = F′(0) does not satisfy (7), the analytic integrability of the Liénard analytic differential system has been studied in [9].

Theorem 4 If a Liénard analytic differential system (2) with a as in (7) has a polynomial first integral, then the degree of the polynomial F(x) must be one, i.e. F(x) = ax, and the polynomial first integral H is

H = (y + λ_1 x)^{k_2} (y + λ_2 x)^{k_1},

where λ_1 = √(k_1/k_2) and λ_2 = −√(k_2/k_1) are the eigenvalues at the origin (for the plus sign in (7)).

Note that Theorem 4 proves the conjecture restricted to polynomial first integrals. The next result proves the conjecture in full generality.
Theorem 5 If a Liénard analytic differential system (2) with a as in (7) has an analytic first integral defined in a neighborhood of the origin, then the degree of the polynomial F(x) must be one, i.e. F(x) = ax, and the first integral H is the polynomial one given in Theorem 4.
Theorem 4 is proved in Sect. 3, while Theorem 5 is proved in Sect. 4.

Proof of Theorem 3
Before proving Theorem 3 we recall the following result whose proof can be found in [3,6,10,11].
Theorem 6 Assume that the eigenvalues λ_1 and λ_2 of the Jacobian matrix of system (2) at the singular point (0, 0) do not satisfy the condition

k_1 λ_1 + k_2 λ_2 = 0 for some k_1, k_2 ∈ Z^+ with k_1 + k_2 > 0.

Then system (2) has no local analytic first integral defined in a neighborhood of the origin.
We first note that the origin is the unique singular point of system (2), and that the eigenvalues of the Jacobian matrix at this point satisfy

λ^2 − aλ − 1 = 0.

So we have that λ_1 λ_2 = −1, yielding λ_2 = −1/λ_1. Moreover, since by assumption the system has a local analytic first integral in a neighborhood of the origin, in view of Theorem 6 we must have that

k_1 λ_1 + k_2 λ_2 = 0 for some k_1, k_2 ∈ Z^+.

Note that k_1, k_2 ∈ Z^+ \ {0} because λ_1 and λ_2 are not zero. Therefore

λ_2 = −(k_1/k_2) λ_1,

or equivalently, using λ_2 = −1/λ_1,

λ_1^2 = k_2/k_1.

Hence

λ_1 = ±√(k_2/k_1),  λ_2 = ∓√(k_1/k_2).

Hence

a = λ_1 + λ_2 = ±(k_2 − k_1)/√(k_1 k_2) = ∓(k_1 − k_2)/√(k_1 k_2),

which is (7). Moreover, k_1 and k_2 are different, otherwise a = 0, which is not possible because the origin would be a weak saddle. Finally, we observe that k_1 and k_2 can be taken coprime. Otherwise, setting k_1 = gcd(k_1, k_2) k̄_1 and k_2 = gcd(k_1, k_2) k̄_2, we get

a = ∓(k̄_1 − k̄_2)/√(k̄_1 k̄_2),

so (7) holds with the coprime pair (k̄_1, k̄_2). This completes the proof of Theorem 3.

Proof of Theorem 4
Without loss of generality we may write the polynomial first integral H = H(x, y) as

H = g_0(x) y^n + g_1(x) y^{n−1} + ⋯ + g_{n−1}(x) y + g_n(x),

where the g_i(x) for i = 0, …, n are polynomials and g_0(x) is not the zero polynomial. Substituting H into (3) we get

X H = (y + F(x)) (g_0′ y^n + g_1′ y^{n−1} + ⋯ + g_{n−1}′ y + g_n′) + x (n g_0 y^{n−1} + (n−1) g_1 y^{n−2} + ⋯ + 2 g_{n−2} y + g_{n−1}) = 0,

where the prime denotes derivative with respect to the variable x. Now we rewrite this equality as

g_0′ y^{n+1} + (g_0′ F + g_1′) y^n + (g_1′ F + g_2′ + n g_0 x) y^{n−1} + ⋯ + (g_{n−1}′ + g_{n−2}′ F + 3 g_{n−3} x) y^2 + (g_n′ + g_{n−1}′ F + 2 g_{n−2} x) y + (g_n′ F + g_{n−1} x) = 0.

Since all coefficients of the previous polynomial in the variable y must be zero, we get the following system of differential equations:

g_0′ = 0,
g_0′ F + g_1′ = 0,
g_{k−1}′ F + g_k′ + (n − k + 2) g_{k−2} x = 0,  k = 2, …, n,
g_n′ F + g_{n−1} x = 0.  (8)

From the first two equations of (8) we get that g_0 and g_1 are constants, and additionally by assumption we have that g_0 ≠ 0. From the third equation of (8) we obtain that g_2(x) is a polynomial of degree 2.
From the fourth equation, since g_1, g_2 and g_3 are polynomials, we get that F must be a polynomial. Assume that the degree of the polynomial F is d ≥ 1; then from the fourth equation of (8) it follows that the degree of the polynomial g_3 is d + 2. Now from the fifth equation of (8) we get that the degree of the polynomial g_4 is 2d + 2, and from the sixth that the degree of the polynomial g_5 is 3d + 2.
Thus, recursively, the degree of the polynomial g_k for k = 2, …, n is (k − 2)d + 2. From the last equation of (8) we obtain that the degree 1 + (n − 3)d + 2 of the polynomial x g_{n−1} must be equal to the degree d + (n − 2)d + 1 of the polynomial F g_n′, but this equality is only possible if d = 1.
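The degree bookkeeping in this recursion can be mimicked numerically. A sketch (the helper `balances` is ours; it assumes, as in the proof, that no cancellation of leading terms occurs):

```python
# Degree bookkeeping for the recursion in the proof: deg g_k = (k-2)*d + 2 for
# k = 2, ..., n, and the last equation of (8) forces
# deg(x * g_{n-1}) = deg(F * g_n').
def balances(d, n):
    """Return True if the final degree equality holds for deg F = d."""
    deg_g = {0: 0, 1: 0, 2: 2}          # g_0, g_1 constants, deg g_2 = 2
    for k in range(3, n + 1):
        # g_k' = -(g_{k-1}' F + (n-k+2) g_{k-2} x) gives, assuming the leading
        # terms do not cancel, deg g_k = deg g_{k-1} + d.
        deg_g[k] = deg_g[k - 1] + d
    lhs = 1 + deg_g[n - 1]              # deg(x * g_{n-1}) = 1 + (n-3)d + 2
    rhs = d + (deg_g[n] - 1)            # deg(F * g_n')    = d + (n-2)d + 1
    return lhs == rhs

# Only d = 1 balances the final equation, for every n
assert all(balances(1, n) for n in range(3, 12))
assert not any(balances(d, n) for d in range(2, 6) for n in range(3, 12))
```

The balance (n − 3)d + 3 = (n − 1)d + 1 reduces to 2d = 2, so d = 1 independently of n, which is exactly the conclusion of the proof.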
It is easy to check that the Liénard analytic differential system (2) of degree 1, i.e.

ẋ = y + a x,  ẏ = x,

with a as in (7), has the polynomial first integral H as in the statement of the theorem. This completes the proof of Theorem 4.
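That check can be carried out symbolically. A sketch with sympy, assuming (as the eigenvalue computation suggests) that H is the product of the two invariant lines through the origin raised to the powers k_2 and k_1, with (k_1, k_2) = (2, 1) for concreteness:

```python
import sympy as sp

x, y = sp.symbols('x y')
k1, k2 = 2, 1                                   # illustrative coprime pair
a = (k1 - k2) / sp.sqrt(sp.Integer(k1 * k2))    # a as in (7), plus sign
l1 = sp.sqrt(sp.Rational(k1, k2))               # eigenvalues: l1 = sqrt(k1/k2)
l2 = -sp.sqrt(sp.Rational(k2, k1))              #              l2 = -sqrt(k2/k1)

# Candidate polynomial first integral: product of the two eigenline forms
H = (y + l1 * x)**k2 * (y + l2 * x)**k1

# X H = (y + a*x) H_x + x H_y must vanish identically
XH = (y + a * x) * sp.diff(H, x) + x * sp.diff(H, y)
assert sp.simplify(sp.expand(XH)) == 0
assert sp.expand(H).is_polynomial(x, y)
```

Since each factor y + λ_i x satisfies d/dt (y + λ_i x) = λ_i (y + λ_i x) along the flow, the product is a first integral precisely because k_2 λ_1 + k_1 λ_2 = 0.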

Proof of Theorem 5
Consider system (2) under one of the conditions given by Theorem 3, namely with the coefficient a of x in F(x) equal to (k_1 − k_2)/√(k_1 k_2) (the case a = −(k_1 − k_2)/√(k_1 k_2) follows in the same way). Then

F(x) = a x + Σ_{j≥2} a_j x^j.

If a_j = 0 for all j ≥ 2, it follows from Theorem 4 that system (2) has a polynomial first integral. Therefore we assume, first, that a_j ≠ 0 for some j ≥ 2, and second, that system (2) has an analytic first integral H defined in a neighborhood of the origin, and we will reach a contradiction.
Under the assumptions on F we have

ẋ = y + a x + Σ_{j≥2} a_j x^j,  ẏ = x.  (9)

Making the change of variables

u = y + √(k_1/k_2) x,  v = y − √(k_2/k_1) x,  (10)

with inverse change

x = √(k_1 k_2) (u − v)/(k_1 + k_2),  y = (k_2 u + k_1 v)/(k_1 + k_2),

and the rescaling of time t = √(k_1 k_2) T, system (9) becomes

u′ = k_1 u + ⋯,  v′ = −k_2 v + ⋯,  (11)

where the prime denotes derivative with respect to the new variable T and the dots ⋯ denote the nonlinear terms coming from Σ_{j≥2} a_j x^j.
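The diagonalization of the linear part can be verified symbolically. A sketch with sympy, assuming the eigencoordinates u = y + √(k_1/k_2) x and v = y − √(k_2/k_1) x together with the rescaling t = √(k_1 k_2) T, again with (k_1, k_2) = (2, 1):

```python
import sympy as sp

x, y = sp.symbols('x y')
k1, k2 = 2, 1
a = (k1 - k2) / sp.sqrt(sp.Integer(k1 * k2))

# Linear part of system (9): xdot = y + a*x, ydot = x
xdot, ydot = y + a * x, x

# Assumed eigencoordinates (linear in x, y)
u = y + sp.sqrt(sp.Rational(k1, k2)) * x
v = y - sp.sqrt(sp.Rational(k2, k1)) * x
udot = u.diff(x) * xdot + u.diff(y) * ydot    # du/dt along the linear flow
vdot = v.diff(x) * xdot + v.diff(y) * ydot

# After the time rescaling t = sqrt(k1*k2)*T, the linear part reads
# u' = k1*u and v' = -k2*v, so z = u**k2 * v**k1 is invariant for it.
assert sp.simplify(sp.sqrt(k1 * k2) * udot - k1 * u) == 0
assert sp.simplify(sp.sqrt(k1 * k2) * vdot + k2 * v) == 0
```

With this normalization, k_2 · k_1 + k_1 · (−k_2) = 0, which is why the monomial z = u^{k_2} v^{k_1} used below is a first integral of the linear part.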
If k_1 > k_2 (and so k_1 > 1), we change from the variables (u, v) to the variables (u, z), where z = u^{k_2} v^{k_1} and so v = z^{1/k_1} u^{−k_2/k_1}.
If k_2 > k_1 (and so k_2 > 1), we change from the variables (u, v) to (z, v), where z = u^{k_2} v^{k_1} and so u = z^{1/k_2} v^{−k_1/k_2}.
From now on we assume that k_1 > k_2 because the other case is done in a similar manner. Hence we take

z = u^{k_2} v^{k_1}, and so v = z^{1/k_1} u^{−k_2/k_1}.  (12)

Then from (11) we obtain a system (13) in the variables (u, z). We write H(x, y), the analytic first integral of system (9), as a formal first integral of system (9). Then Ĥ(u, v) = H(x, y) is a formal first integral of system (11), and H̃(u, z) = Ĥ(u, v) is a formal first integral of system (13).
Since H̃ is a first integral we can assume that it has no constant term. Writing H̃(u, z) = Σ_{j≥0} H̃_j(u) z^j, note that H̃ satisfies

u′ ∂H̃/∂u + z′ ∂H̃/∂z = 0,  (14)

with (u′, z′) as in (13). We will show by induction that

H̃_j(u) = 0 for j ≥ 0.  (15)
Note that to conclude the proof of the theorem it is enough to show that (15) holds, because then H̃ ≡ 0 and we reach a contradiction with the fact that H is a first integral. First note that Eq. (14) restricted to z = 0 becomes

u′|_{z=0} H̃_0′(u) = 0,

where the prime on H̃_0 denotes derivative with respect to the variable u. Thus H̃_0 is a constant. Since H̃_0 has no constant term we get H̃_0 = 0. This proves (15) for j = 0.