ON THE INTEGRABILITY OF SOME THREE-DIMENSIONAL LOTKA–VOLTERRA EQUATIONS WITH RANK-1 RESONANCES

We investigate the local integrability in C^3 of some three-dimensional Lotka–Volterra equations at the origin with (p : q : r)-resonance,

ẋ = P = x(p + ax + by + cz), ẏ = Q = y(q + dx + ey + fz), ż = R = z(r + gx + hy + kz).

Recent work on this problem has centered on the case where the resonance is of “rank-2”; that is, there are two independent linear dependencies of p, q and r over Q. Here, we consider some situations where there is only one such dependency. In particular, we give necessary and sufficient conditions for integrability in the case of (i, −i, λ)-resonance with λ ∉ iR (after a scaling, this is just the case p + q = 0 with q/r ∉ R), and also in the case of (i − 1, −i − 1, 2)-resonance (a subcase of p + q + r = 0) under the additional assumption that a = k = 0. Our necessary and sufficient conditions for integrability are given via the search for two independent first integrals of the form x^α y^β z^γ (1 + O(x, y, z)). However, a new feature in the case of rank-1 resonance is that there is a distinguished choice of analytic first integral, and hence it makes sense to seek conditions for just one (analytic) first integral to exist. We give necessary and sufficient conditions for just one first integral for the two families of systems mentioned above, except that for the second family some of the cases of sufficiency have been left as conjectural.

2010 Mathematics Subject Classification: 34C20.

In the works cited above, the resonances at the origin have all been of “rank-2”; that is, there are two independent linear dependencies of p, q and r over Q. This condition is satisfied if and only if we can rescale the system so that p, q and r lie in Z. Necessary and sufficient conditions for integrability can therefore be obtained via the search for two independent first integrals of the form x^α y^β z^γ (1 + O(x, y, z)), with α, β and γ in Z. In the case where p, q and r do not all share the same sign, we can reduce our considerations to the search for two analytic first integrals.
Our aim here is to consider the integrability of the origin of (1) in the case where the resonance is of rank-1. That is, there is only one linear dependency of p, q and r over Q. As above, to prove integrability, we still seek two first integrals of the form x^α y^β z^γ (1 + O(x, y, z)), but at most one of these can be analytic in this case.
In particular, we give necessary and sufficient conditions for the origin of system (1) to be integrable in the case of (i : −i : λ)-resonance for λ ∉ iR, and also for the case of (i − 1 : −i − 1 : 2)-resonance under the additional assumption that a = k = 0. The first case corresponds, after a scaling, to the condition p + q = 0 with q/r ∉ R. The latter case is a subset of those systems with p + q + r = 0. Unfortunately, the calculations for more general systems with p + q + r = 0 were beyond our computational resources, and even in this subcase, the assumption a = k = 0 was necessary to make the computations tractable. We do not envisage that the more general cases will bring forth any essentially different phenomena to the ones found here, although it would be interesting to see if these computations could be pursued further.
A new feature in the case of rank-1 resonances in the Siegel domain is that there is a distinguished choice of analytic first integral of the form x^α y^β z^γ (1 + O(x, y, z)), and hence it makes sense to seek conditions for just one (analytic) first integral to exist. We give necessary and sufficient conditions for just one first integral in the cases mentioned above, except that for (i − 1, −i − 1, 2)-resonance, a few cases of sufficiency have been left as conjectural.

Definitions
Let X = P ∂/∂x + Q ∂/∂y + R ∂/∂z be the vector field associated to system (1). Given a polynomial F ∈ C[x, y, z], the surface F = 0 is called an invariant algebraic surface of system (1) if the polynomial F satisfies the equation

X(F) = C_F F,    (3)

for some polynomial C_F. The polynomial C_F is called the cofactor of the invariant algebraic surface F = 0. Note from equation (3) that any cofactor has degree at most one, since the polynomial vector field has degree two.
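As a concrete illustration of the definition (a sketch in SymPy rather than the authors' Maple computations): the coordinate plane x = 0 is invariant for system (1), and its cofactor is the degree-one polynomial p + ax + by + cz.

```python
import sympy as sp

# Coordinates and the twelve parameters of system (1).
x, y, z, p, q, r = sp.symbols('x y z p q r')
a, b, c, d, e, f, g, h, k = sp.symbols('a b c d e f g h k')

P = x*(p + a*x + b*y + c*z)
Q = y*(q + d*x + e*y + f*z)
R = z*(r + g*x + h*y + k*z)

def X(F):
    """Apply the vector field X = P d/dx + Q d/dy + R d/dz to F."""
    return P*sp.diff(F, x) + Q*sp.diff(F, y) + R*sp.diff(F, z)

# F = x satisfies X(F) = C_F * F with C_F of degree one.
F = x
C_F = sp.cancel(X(F) / F)
print(C_F)  # the cofactor p + a x + b y + c z
```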
To complete the study of integrals of parametric families, we will also need the notion of an exponential factor, which plays the same role as an invariant algebraic surface in the case when two such surfaces coalesce. Let E(x, y, z) = exp(f(x, y, z)/g(x, y, z)), where f, g ∈ C[x, y, z]; then E is an exponential factor if X(E) = C_E E for some polynomial C_E of degree at most one. The polynomial C_E is called the cofactor of E.
A Darboux function is a function of the form D = F_1^{λ_1} · · · F_n^{λ_n} E, where the F_i are invariant algebraic surfaces of the system, and E = exp(f/g) is an exponential factor. Given a Darboux function D, we can compute X(D) = D (λ_1 C_{F_1} + · · · + λ_n C_{F_n} + C_E). Clearly, the exponents can be chosen so that D is a non-trivial first integral of the system if and only if the cofactors C_{F_i} and C_E are linearly dependent. For Darboux integrability in two dimensions, we seek a Darboux function which is either a first integral or an integrating factor for the system. From the latter, it is possible to find a first integral by quadratures.
In higher dimensions, the role of the integrating factor is taken by the Jacobi multiplier. In the context of Darboux integrability, we usually consider the corresponding reciprocals: inverse integrating factors and inverse Jacobi multipliers [5]. A function M is an inverse Jacobi multiplier for the vector field X if it satisfies the equation X(M) = M div(X). In three dimensions, the existence of two independent first integrals implies the existence of an inverse Jacobi multiplier. Conversely, given just one first integral, φ, and an inverse Jacobi multiplier, M, one can construct another first integral by integrating along the level surfaces of the first integral, noting that M gives rise to an integrating factor on each level surface.
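The defining relation X(M) = M div(X) is easy to check symbolically in simple cases. For instance, M = xyz is an inverse Jacobi multiplier of system (1) whenever a = e = k = 0, i.e. when the "diagonal" quadratic terms vanish; a SymPy sketch:

```python
import sympy as sp

x, y, z, p, q, r = sp.symbols('x y z p q r')
b, c, d, f, g, h = sp.symbols('b c d f g h')

# System (1) with a = e = k = 0 (no diagonal quadratic terms).
P = x*(p + b*y + c*z)
Q = y*(q + d*x + f*z)
R = z*(r + g*x + h*y)

def X(F):
    return P*sp.diff(F, x) + Q*sp.diff(F, y) + R*sp.diff(F, z)

divX = sp.diff(P, x) + sp.diff(Q, y) + sp.diff(R, z)
M = x*y*z

# The defining relation X(M) = M div(X) holds identically:
print(sp.expand(X(M) - M*divX))  # 0
```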
Unfortunately, this prescription breaks down near a critical point, where the leaves become singular. However, the following theorem allows us to construct a second first integral at a critical point in many cases. We use the usual multi-index notation X^I = x^i y^j z^k, where I = (i, j, k), to simplify the notation.
Theorem 1. Suppose the analytic vector field (5) has a first integral of the form φ = x^α y^β z^γ (1 + O(x, y, z)), with at least one of α, β, γ ≠ 0, and an inverse Jacobi multiplier M = x^r y^s z^t (1 + O(x, y, z)), and suppose that the cross product of (r − 1 + i, s − 1 + j, t − 1 + k) with (α, β, γ) is bounded away from zero for any integers i, j, k ≥ 0. Then the system has a second analytic first integral of the form ψ = x^{1−r} y^{1−s} z^{1−t} (1 + O(x, y, z)), and hence system (5) is integrable.
We note finally that it is sometimes possible to reduce the critical point to one in the Poincaré domain. That is, the origin does not lie in the convex hull of the eigenvalues of the critical point. In this case, it is only necessary to check that a finite number of resonant terms vanish in the normal form to conclude that the system is linearizable.
We make use of this fact in two ways: either by decoupling two of the three equations of (1) to obtain a system in the Poincaré domain, or by performing a blow-down to a critical point in the Poincaré domain. In the former case, ad hoc arguments are used to show that the third equation can also be linearized. In the latter, we can pull back the two first integrals of the linear system to first integrals of the original system.

Systems with (i : −i : λ)-resonance
In this section we study the local integrability of the origin for the three-dimensional Lotka–Volterra equations (6), with λ ∉ iR. Necessary conditions for the existence of one first integral were found by searching for a first integral of the form xy(1 + O(x, y, z)) and, for integrability, a second first integral of the form y^λ z^i (1 + O(x, y, z)). The computations were carried as far as the resonant terms in (xy)^2 to obtain the conditions below. Sufficiency of these conditions was then proved case by case. The computations were carried out in Maple.
Some care needs to be taken over the integrability conditions in this case, since polynomials in λ appear in the denominators of the coefficients of the first integrals. Any value of λ which is a root of one of these denominators would have to be checked separately. However, in the case above, these roots are purely integer multiples of i and so have already been excluded.
Lemma 2. Let X be the vector field associated to (6), and X_0 the restriction of X to the plane z = 0. If X_0 is integrable (that is, it has an analytic first integral of the form xy + · · · ), and X_0(φ_0(x, y)) = f(x, y) for some analytic functions φ_0 and f, then, for any analytic function g(x, y, z), there exists an analytic function φ(x, y, z) = φ_0(x, y) + z φ̃(x, y, z) such that X(φ) = f(x, y) + z g(x, y, z).
Proof: We write φ = Σ_{m≥0} φ_m(x, y) z^m; then it is clear that we need to solve the resulting equation (7) for φ_m, for each m ≥ 0. It is easy to see that there are no obstructions to obtaining a unique formal series solution for φ_m in this way, and its convergence follows from the following argument.
Since the system X_0 is integrable, there is a change of coordinates (x, y) = (X + o(X, Y), Y + o(X, Y)) which brings X_0 to the form r(X, Y)(iX ∂/∂X − iY ∂/∂Y) for some analytic function r(X, Y) = 1 + O(X, Y). Thus (7) now takes a corresponding form in the new coordinates, where φ_m(X, Y) denotes φ_m(x, y) expressed in X and Y, and similarly the functions (λ(1 − r) + gx + hy)/r and g/r are understood in the X and Y coordinates (where r is already defined in X and Y coordinates). Furthermore, X̃_1 = P̃ ∂/∂X + Q̃ ∂/∂Y is just the transformation of X_1 to the coordinates X and Y.
Let u = X + Y + Z and v = X + Y. Choose N such that g = Σ_m g_m z^m is majorized by N(1 − Nu)^{−1}; then g_m is majorized by N^{m+1}(1 − Nv)^{−m−1}. We write φ_m as a sum of homogeneous terms, φ_m = Σ_i ϕ_{m,i}, and choose M > N such that r^{−1}P̃, r^{−1}Q̃ and r^{−1} are all majorized by 1/(1 − Mv), and all the coefficients a to k are less than M in absolute value. We also let κ denote |Re(λ)|, so that if A majorizes (iX∂_X − iY∂_Y + (m + 1)λ)B, then B is majorized by A/((m + 1)κ). Suppose that ϕ_{m′,i′} is majorized by K_{m′,i′} v^{i′} for all i′ when m′ < m, and for all i′ < i when m′ = m.

Thus we can choose the constants K_{m,i} so that the same bound holds at the next step. Hence, if we take M large enough so that φ_0 is majorized by the corresponding bound, the series for φ converges, and the lemma follows. □

Theorem 3. Consider the three-dimensional Lotka–Volterra system (6).
1) The origin has one analytic first integral of the form φ = xy(1 + O(x, y, z)) if and only if ab = ed.

2) The origin is integrable if and only if
Proof: Necessary conditions are obtained as explained above. To prove sufficiency we make use of Lemma 2.

1) If ab − ed = 0 then, restricting to z = 0, we obtain a vector field X_0. When either a or e is non-zero, this vector field has an invariant algebraic curve F = 1 − iax + iey with cofactor ax + ey, and also the curves x = 0 and y = 0 with cofactors i + ax + by and −i + dx + ey respectively. Thus the system has a first integral φ_0 = xyF^k, where k = −1 − d/a (if a ≠ 0) or k = −1 − b/e (if e ≠ 0). If a = e = 0, there is an exponential factor E = e^{i(dx−by)} with cofactor −dx − by, and hence a first integral φ_0 = xyE.
From Lemma 2, we can therefore construct a first integral φ of the original system (6).
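The invariance of F = 1 − iax + iey and the vanishing of the cofactor combination for φ_0 = xyF^k can be verified directly; a sketch in SymPy (not the authors' Maple code), imposing ab = ed by substituting b = ed/a and assuming a ≠ 0:

```python
import sympy as sp

x, y = sp.symbols('x y')
a, d, e = sp.symbols('a d e')
I = sp.I
b = e*d/a        # impose ab = ed (assuming a != 0)

# Restriction X0 of (6) to the plane z = 0.
P0 = x*(I + a*x + b*y)
Q0 = y*(-I + d*x + e*y)

def X0(F):
    return P0*sp.diff(F, x) + Q0*sp.diff(F, y)

# F is invariant with cofactor a*x + e*y:
F = 1 - I*a*x + I*e*y
print(sp.expand(X0(F) - F*(a*x + e*y)))  # 0

# The cofactor combination for phi0 = x*y*F**kk vanishes identically:
kk = -1 - d/a
cof = (I + a*x + b*y) + (-I + d*x + e*y) + kk*(a*x + e*y)
print(sp.expand(cof))  # 0
```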
2) We seek a first integral of the form y^λ z^i e^{−ψ} for some analytic ψ. Such an integral exists if and only if we can solve the equation ψ̇ = (λd + ig)x + (λe + ih)y + (λf + ik)z.
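In the case d ≠ 0, this equation has a closed-form solution on the invariant plane z = 0: with r = a/d and ξ = ln(1 + ir(by − dx))/r, one has X_0(ξ) = dx + by whenever ab = ed. A SymPy sketch of this identity (an independent check, assuming a, d ≠ 0):

```python
import sympy as sp

x, y = sp.symbols('x y')
a, d, e = sp.symbols('a d e')
I = sp.I
b = e*d/a        # impose ab = ed (assuming a != 0)
r = a/d          # assuming d != 0

# Restriction X0 of (6) to the plane z = 0.
P0 = x*(I + a*x + b*y)
Q0 = y*(-I + d*x + e*y)

xi = sp.log(1 + I*r*(b*y - d*x))/r
X0xi = P0*sp.diff(xi, x) + Q0*sp.diff(xi, y)

# X0(xi) = d*x + b*y, as used in the choice psi0 = s*xi:
print(sp.simplify(X0xi - (d*x + b*y)))  # 0
```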
If either b or d is non-zero, then we let r = a/d (if d ≠ 0) or r = e/b (if b ≠ 0). We let ξ = ln(1 + ir(−dx + by))/r for r ≠ 0, and ξ = i(−dx + by) for r = 0. Then X_0(ξ) = dx + by, and so we can choose ψ_0 = sξ, where we take s = λ + ig/d if d ≠ 0, or s = λe/b + ih/b if d = 0 and b ≠ 0 (whence g = a = 0). □

Systems with (i − 1 : −i − 1 : 2)-resonance

In this section we shall study the local integrability of the origin for the three-dimensional Lotka–Volterra equation (8). The assumption that a = k = 0 in (1) is a somewhat arbitrary choice, but was made to bring the computations to a manageable form. It would be interesting to compute the integrability conditions for the general case of arbitrary a and k, but this appears to require much more computational power.
As in the previous case, we will give necessary conditions for the origin of (8) to have one analytic first integral of the form xyz(1 + O(x, y, z)), and also for the existence of a second first integral of the form x^2 z^{1−i} (1 + O(x, y, z)). The computations were carried out in Maple up to terms in (xyz)^5.
The proof of sufficiency is again handled case by case. However, in the case of one first integral, there are three cases for which we are unable to give a complete explanation of why these first integrals exist. This is surprising, since the conditions themselves seem very simple. In particular, all three have explicit expressions for an inverse Jacobi multiplier.
Theorem 4. Consider the three-dimensional Lotka–Volterra system (8). The origin is integrable if and only if one of the following conditions holds: Furthermore, a necessary condition for the origin to have an analytic first integral of the form φ = xyz(1 + O(x, y, z)) is that either 1), 2) or 3) holds above, or one of the following conditions holds: The sufficiency of conditions 1)–3) follows immediately from their integrability.
Conjecture 5. We conjecture that conditions 4′), 5′) and 6′) are also sufficient for one analytic first integral, but we are unable to prove this at the moment.
Proof: The necessity of the conditions is proved as indicated above. We shall treat the proofs of sufficiency case by case.

There exists a change of variables (X, Z) = (x(1 + O(x, z)), z(1 + O(x, z))) which brings the system to the form Ẋ = (i − 1)X, ẏ = y(−1 − i + d x(X, Z) + ey + f z(X, Z)), Ż = 2Z. We now seek an invariant analytic surface of the form α(X, Z) + y β(X, Z) = 0 such that d/dt (α(X, Z) + β(X, Z)y) = (α(X, Z) + β(X, Z)y)(d x(X, Z) + ey + f z(X, Z)).
Since d/dt (Σ_{r,s≥0} a_{r,s} X^r Z^s) = Σ_{r,s≥0} (r(i − 1) + 2s) a_{r,s} X^r Z^s, the first equation can be solved uniquely term by term and is clearly convergent. The second equation can be solved likewise if and only if the coefficient of XZ in α is zero. However, a small computation shows that this condition is just 2gf + (1 + i)d(f + ic) = 0.
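The eigenvalue computation behind this step (if Ẋ = (i − 1)X and Ż = 2Z, then the monomial X^r Z^s is an eigenfunction of d/dt with eigenvalue r(i − 1) + 2s) can be confirmed symbolically; a minimal SymPy sketch:

```python
import sympy as sp

Xv, Zv = sp.symbols('X Z')
rr, ss = sp.symbols('r s', integer=True, nonnegative=True)
I = sp.I

def D(F):
    """Derivative along the linearized flow Xdot = (i-1)X, Zdot = 2Z."""
    return (I - 1)*Xv*sp.diff(F, Xv) + 2*Zv*sp.diff(F, Zv)

mono = Xv**rr * Zv**ss
eigenvalue = sp.simplify(D(mono)/mono)
print(eigenvalue)  # the eigenvalue r(i - 1) + 2s, up to the printer's ordering
```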
Suppose there exists a function γ such that γ̇ = g x(X, Y) + h y(X, Y). The transformation Z = ze^{−γ} will then linearize the third equation. Writing g x(X, Y) + h y(X, Y) = Σ_{n+m>0} a_{n,m} X^n Y^m, we see that γ = Σ_{n+m>0} a_{n,m} X^n Y^m / ((i − 1)n − (i + 1)m), which is clearly convergent.
Case 3: This case is effectively the same as the previous case, except that now the second and third equations are in the Poincaré domain, and therefore there exists a change of variables (Y, Z) = (y(1 + O(y, z)), z(1 + O(y, z))) such that Ẏ = (−1 − i)Y and Ż = 2Z. We seek a function γ such that γ̇ = b y(Y, Z) + c z(Y, Z); then the transformation X = xe^{−γ} gives Ẋ = (i − 1)X. To find such a function, we write b y(Y, Z) + c z(Y, Z) = Σ_{n+m>0} a_{n,m} Y^n Z^m; then we have γ = Σ_{n+m>0} a_{n,m} Y^n Z^m / ((−1 − i)n + 2m), which gives a convergent expression for γ.
In this case the system has an invariant algebraic plane ℓ = 1 + ((i + 1)/2)hy = 0 and an exponential factor E = exp((gx − cz)/(1 + ((i + 1)/2)hy)), with cofactors L_ℓ = −ihy and L_E = (i − 1)gx − 2cz, producing a first integral φ = x^2 z^{1−i} ℓ^{i−3} E and an inverse Jacobi multiplier M = xyz. Theorem 1 then guarantees the existence of a second first integral of the form ψ = 1 + O(x, y, z), which must be analytic. Consideration of the first non-constant terms in this expansion implies that ψ − 1 = (xyz)^k (c + O(x, y, z)) for some c ≠ 0, and therefore the system is integrable.
If d x(X, Z) + f z(X, Z) = Σ_{n+m>0} a_{n,m} X^n Z^m, then we have γ = Σ_{n+m>0} a_{n,m} X^n Z^m / ((i − 1)n + 2m), which gives a convergent expression for γ.

Case 6:
The equations guarantee that the cofactors of x, y and z are linearly dependent, so there exists a first integral of the form x^α y^β z^γ. We also have an inverse Jacobi multiplier xyz. Theorem 1 therefore guarantees the existence of a first integral of the form ψ = 1 + O(x, y, z), which must be analytic. Consideration of the first non-constant terms in this expansion implies that ψ − 1 = (xyz)^k (c + O(x, y, z)) for some c ≠ 0, and therefore the system is integrable.
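The mechanism can be checked symbolically. The explicit conditions of Case 6 are not reproduced in the text above, so the relations g = −d, h = −(b + e), f = −c below are an illustrative assumption that makes the cofactors of x, y and z sum to zero with weights (1, 1, 1); under them, xyz is itself a first integral. A SymPy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
b, c, d, e = sp.symbols('b c d e')
I = sp.I

# System (8) has a = k = 0.  The relations g = -d, h = -(b + e), f = -c are an
# illustrative assumption making the cofactors of x, y, z sum to zero; they are
# not necessarily the exact conditions of Case 6.
P = x*(I - 1 + b*y + c*z)
Q = y*(-I - 1 + d*x + e*y - c*z)
R = z*(2 - d*x - (b + e)*y)

def X(F):
    return P*sp.diff(F, x) + Q*sp.diff(F, y) + R*sp.diff(F, z)

# With weights (1, 1, 1) the cofactors cancel, so xyz is a first integral:
print(sp.expand(X(x*y*z)))  # 0
```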