Inverse Problems in Darboux’ Theory of Integrability

The Darboux theory of integrability for planar polynomial differential equations is a classical field, with connections to Lie symmetries, differential algebra and other areas of mathematics. In the present paper we introduce the concepts, problems and inverse problems, and we outline some recent results on inverse problems. We also prove a new result, viz. a general finiteness theorem for the case of prescribed integrating factors. A number of relevant examples and applications are included.

Consider the polynomial vector field

X = P(x, y) ∂/∂x + Q(x, y) ∂/∂y (1)

and the associated ordinary differential equation

ẋ = P(x, y), ẏ = Q(x, y). (2)

There is a simple approach, initiated by Darboux [15] in 1878, to construct first integrals and integrating factors for polynomial equations (2) which admit sufficiently many invariant algebraic curves. An account of this is contained in the classical monograph by Ince [16], Chap. II, Sect. 2.2. Jouanolou [17] extended this method to obtain criteria for the existence of rational first integrals. Darboux' theory of integrability for planar polynomial vector fields was developed further in recent years; see for instance [5,6,8,9]. An inverse problem generally consists in determining all differential equations satisfying some given properties, such as admitting a given integrating factor. Given irreducible pairwise relatively prime polynomials f_1, . . . , f_r and nonzero complex constants d_1, . . . , d_r, one says that X (or (2)) admits the Darboux integrating factor

(f_1^{d_1} · · · f_r^{d_r})^{-1} (3)

if

div(f_1^{-d_1} · · · f_r^{-d_r} X) = 0.
Recall that divergence zero characterizes vector fields with volume-preserving local flow. In dimension two, a local first integral of a divergence-zero vector field can be determined explicitly by quadratures. Of course one knows that, due to the straightening theorem (or to standard existence theorems for quasi-linear partial differential equations), the local existence of first integrals or integrating factors near a non-stationary point is guaranteed. But our interest lies in constructive approaches. Necessary for the existence of a Darboux integrating factor is the invariance of all complex zero sets C_i of the f_i for X; thus every C_i is an invariant set for (2). Equivalently, due to Hilbert's Nullstellensatz, there exist polynomials L_1, . . . , L_r such that

X(f_i) = L_i · f_i, 1 ≤ i ≤ r.

The polynomial f_i is then called a semi-invariant of X, with cofactor L_i, and one also says that X admits f_i.
With f := f_1 · · · f_r the latter set of conditions is also equivalent to

X(f) = (L_1 + · · · + L_r) · f.
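The invariance and cofactor conditions above are easy to check mechanically. The following sketch computes cofactors with sympy by polynomial division; the quadratic system used here is a hypothetical example chosen purely for illustration (it does not appear in the text).

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical system (not from the paper): xdot = x*(1+y), ydot = -y*(1+x).
# It admits the coordinate axes f1 = x, f2 = y as invariant curves.
P = x*(1 + y)
Q = -y*(1 + x)

def cofactor(f, P, Q):
    """Return the cofactor L with X(f) = L*f, or None if f is not a semi-invariant."""
    Xf = sp.expand(P*sp.diff(f, x) + Q*sp.diff(f, y))
    L, rem = sp.div(Xf, f, x, y)   # polynomial division: X(f) = L*f + rem
    return sp.expand(L) if rem == 0 else None

L1 = cofactor(x, P, Q)   # X(x) = x*(1+y), so the cofactor of f1 = x is 1 + y
L2 = cofactor(y, P, Q)   # X(y) = -y*(1+x), so the cofactor of f2 = y is -(1+x)
print(L1, L2)
```

Both divisions are exact, confirming that the two coordinate axes are semi-invariants of this toy system.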

Given invariance, X admits the Darboux integrating factor (3) if and only if

div X = d_1 · L_1 + · · · + d_r · L_r. (4)
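As a sanity check of the divergence condition, consider the hypothetical system ẋ = x(1 + y), ẏ = −y(1 + x), which admits f_1 = x and f_2 = y with cofactors 1 + y and −(1 + x). Since div X = (1 + y) − (1 + x), the choice d_1 = d_2 = 1 suggests 1/(xy) as a Darboux integrating factor, which sympy confirms (the system is our own toy example, not one from the text):

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical system: xdot = x*(1+y), ydot = -y*(1+x).
# Cofactors: L1 = 1+y for f1 = x, L2 = -(1+x) for f2 = y, and
# div X = (1+y) - (1+x) = 1*L1 + 1*L2, so mu = (f1*f2)**(-1) = 1/(x*y)
# should be an integrating factor.
P = x*(1 + y)
Q = -y*(1 + x)
mu = 1/(x*y)

divergence = sp.simplify(sp.diff(mu*P, x) + sp.diff(mu*Q, y))
print(divergence)   # 0
```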
We refer to [13,26] and [11] for more details and proofs. All these concepts and results are also applicable to real polynomial (or analytic) systems via complexification, and complex invariant curves and integrating factors are relevant for the analysis of real systems. Why is there interest in these topics? First of all, they are classical, going back to Poincaré and Darboux. Moreover, there is a correspondence between integrating factors and Lie symmetries of first-order equations (see Olver [19]). Lie symmetries of such equations are not algorithmically accessible although (or rather, because) they abound, and therefore one seeks approaches which avoid invoking the non-constructive straightening theorem. Integrating factors from a restricted class of functions (such as given in (3)) open a possible path.
The following result goes back to Lie; for the particular statement and a proof see e.g. [26], Prop. 1.1 and Cor. 1.3.

Proposition 1 Given (1) and a planar vector field

Y = A(x, y) ∂/∂x + B(x, y) ∂/∂y,

set δ := det(Y, X) = A · Q − B · P. Then the following identity holds for the Lie bracket:

X(δ) = div X · δ + det([X, Y], X).

In particular Y defines a local one-parameter orbital symmetry group for (2), thus [Y, X] = λX for some analytic λ, if and only if X(δ) = div X · δ.
In view of this connection to symmetries, it is not surprising that the existence of integrating factors has consequences for qualitative properties. For instance, a stationary point of a (real) polynomial vector field with inverse polynomial integrating factor f^{-1} is a center if its linearization is of center type (see e.g. [27]).
As a second motivation we recall the strong connection to differential-algebraic questions and results, in particular to the work by Prelle and Singer [20] on elementary first integrals for planar (polynomial) differential equations. The following statements are based on Prelle and Singer [20], respectively Singer [23].

Theorem 2
(a) If the differential equation (2) admits an elementary first integral, then it admits a Darboux integrating factor of the form (3) with rational exponents d_i.
(b) If the differential equation (2) admits a Liouvillian first integral, then it admits an integrating factor of the form

exp(g/f^n) · f_1^{-d_1} · · · f_r^{-d_r}

with arbitrary complex exponents d_i, some positive integer n and some polynomial g.
Moreover, Prelle and Singer note that the "missing link" for an algorithmic decision concerning elementary first integrals is in deciding the existence of a Darboux integrating factor. The more general type of integrating factor in part (b) arises from integrating factors of type (3) by coalescence of curves; see e.g. [10]. The result in part (b) is essentially due to Singer [23], who showed that a system with a Liouvillian first integral admits an integrating factor whose logarithmic differential is a closed rational 1-form. It is well known that the integrating factor must then be of the form above; see e.g. [7] for an algebraic proof of this. The result is also stated and proved in Chavarriga et al. [6], Theorem 8.
To indicate that algebraic invariant curves are of direct interest for elementary questions, we state a well-known result on particular solutions of non-autonomous equations in one dependent variable, and include the (elementary) proof:

Lemma 3 If P, Q are relatively prime and f is irreducible with f_y ≠ 0, then the invariance of its zero set C for X is equivalent to the following: Any algebraic function φ = φ(x) defined by f(x, φ(x)) = 0 solves the differential equation

y′ = Q(x, y)/P(x, y). (7)

Proof From f(x, φ(x)) = 0 for all x in some open set, and φ solving (7), one finds

f_x(x, φ(x)) + f_y(x, φ(x)) · Q(x, φ(x))/P(x, φ(x)) = 0,

hence X(f) = P · f_x + Q · f_y vanishes on the graph of φ, and by a density argument X(f) vanishes on all of C, which implies the invariance of C. Conversely, if the invariance condition holds and f(x, φ(x)) = 0 for all x in some open set, one finds by differentiation:

f_x(x, φ(x)) + f_y(x, φ(x)) · φ′(x) = 0,

while on the other hand one has

P(x, φ(x)) · f_x(x, φ(x)) + Q(x, φ(x)) · f_y(x, φ(x)) = 0.

Therefore φ solves (7).
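A minimal sympy sketch of Lemma 3, with hypothetical data of our own choosing: f = y² − x is irreducible with f_y = 2y, its zero set is invariant for X = 2y ∂/∂x + ∂/∂y (cofactor 0), and the branch φ(x) = √x solves y′ = Q/P = 1/(2y).

```python
import sympy as sp

x = sp.symbols("x", positive=True)
y = sp.symbols("y")

# Hypothetical illustration of Lemma 3 (not an example from the text):
# f = y**2 - x, P = 2*y, Q = 1.
f = y**2 - x
P, Q = 2*y, sp.Integer(1)

# Invariance: X(f) = P*f_x + Q*f_y = -2*y + 2*y = 0, so the cofactor is 0.
Xf = sp.expand(P*sp.diff(f, x) + Q*sp.diff(f, y))

# The algebraic function phi(x) = sqrt(x) defined by f(x, phi(x)) = 0
# should solve y' = Q/P.
phi = sp.sqrt(x)
residual = sp.simplify(sp.diff(phi, x) - (Q/P).subs(y, phi))
print(Xf, residual)   # 0 0
```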
At this point we note yet another motivation to discuss invariant algebraic curves and integrating factors: In some settings, e.g. when (7) is of Riccati type, there also exists a connection to differential Galois theory (for the linear system associated to the Riccati equation). See Acosta-Humánez et al. [1], in particular the introductory sections, for details and some applications. (A general reference for Galois theory of linear systems is van der Put and Singer [24].) In the present paper we will introduce and briefly discuss the problem of either determining a Darboux integrating factor for a given vector field or ensuring that no such integrating factor exists, as well as the underlying problem of determining all invariant algebraic curves for a given vector field, or ensuring that no such curves exist. Then, in a more extensive manner, we will discuss the corresponding inverse problems, viz. to determine all polynomial vector fields which admit a given set of invariant curves, resp. a given Darboux integrating factor. In contrast to work focusing on foliations of the projective plane, such as Camacho and Sad [2], Carnicer [3], and Cerveau and Lins Neto [4], we will investigate these problems for the affine plane, which is obviously of interest in its own right. From a technical perspective, the affine problems allow the employment of different methods, which are less intricate and open a straightforward path to explicit computations.
We emphasize that the inverse problems are relevant even if one's primary interest lies in the "direct" problems: One needs to know and understand the structure of vector fields admitting given invariant curves (resp., a given Darboux integrating factor) for the purpose of identification and classification. These inverse problems on the affine plane have been discussed systematically in recent work by the authors; see [10][11][12]. Since the original papers contain (unavoidably, it seems) quite technical sections, it is sensible to present the straightforward basic ideas in a survey, and to illustrate them by examples. In particular we include a discussion of non-autonomous differential equations with one dependent variable. The presentation in the corresponding sections is relatively informal. However, we also prove a substantial new result in the final section of this paper: The vector fields admitting a prescribed Darboux integrating factor (4) clearly form a linear space, and it is straightforward to find a subspace of "trivial" vector fields which can be written down explicitly. Extending the main theorem of [11], we show that the factor space modulo this trivial subspace is always finite dimensional.

The Direct Problems
We consider the polynomial vector field (1) and its associated planar system (2). In this section we deal with the following direct problems:

Problem 1 Find all invariant algebraic curves for system (2).

Problem 2 Decide whether a Darboux integrating factor exists for (2), and find it if the answer is affirmative.
Example 4 Consider the one-dimensional non-autonomous polynomial equation

y′ = Q(x, y) (8)

and the question whether it has algebraic solutions. As was noted in Lemma 3, this is equivalent to the existence of an irreducible polynomial f such that the zero set C of f is invariant for

ẋ = 1, ẏ = Q(x, y),

which, in turn, is equivalent to the existence of some polynomial K such that

f_x + Q · f_y = K · f.

There is a special property of semi-invariants in this case: f and f_y cannot have a common zero, because such a zero z would also be a zero of f_x and thus a singular point of C, hence stationary for the two-dimensional system; but the latter has no stationary points at all, due to ẋ = 1.
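The semi-invariant condition of Example 4 is readily verified symbolically. The data below is a hypothetical instance of our own (not from the text): with f = y − x² and Q = 2x + (y − x²), the curve y = x² is an algebraic (here even polynomial) solution of y′ = Q.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical instance of the invariance condition f_x + Q*f_y = K*f:
# f = y - x**2, Q = 2*x + (y - x**2).
f = y - x**2
Q = 2*x + (y - x**2)

# f_x + Q*f_y = -2*x + 2*x + (y - x**2) = f, so K = 1 and the remainder is 0.
K, rem = sp.div(sp.expand(sp.diff(f, x) + Q*sp.diff(f, y)), f, x, y)
print(K, rem)   # 1 0
```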
The above example is exceptional in the sense that one directly obtains rather strong restrictions for (possible) semi-invariants. Generally, the problem of finding all invariant algebraic curves of a given vector field is still unresolved. A relatively successful strategy, which will be briefly outlined, is to use local information at the stationary points; see [27]: Consider a local analytic (or formal) vector field X with stationary point 0 and non-nilpotent linearization, thus without loss of generality P(x, y) = λx + · · ·, Q(x, y) = μy + · · · with μ ≠ 0.
Then the local problem for curves is to determine analytic (or formal) g and L, g(0) = 0, such that X(g) = L · g. (Note that g is far from unique here, since it may be multiplied by any invertible series.) The eigenvalue ratio of the linearization at 0 is the important parameter in this situation. Perhaps surprisingly, the case when λ/μ is a positive rational number (the "dicritical case") turns out to be the problematic one. We provide a partial picture of the local setting; more, and more detailed, information is available; see e.g. [27], Theorem 2.3. The first statement in the following proposition is classical; see e.g. Seidenberg [21].

Proposition 5
In the non-dicritical case there are at most two different local semi-invariants (up to invertible factors) for X at 0. Moreover, the corresponding curves intersect transversally.
If λ/μ is not a rational number then there exists, up to constants, one and only one local integrating factor, which is of the type (x + · · ·)^{-1} · (y + · · ·)^{-1}.
An isolated intersection point of two invariant curves is necessarily stationary for any vector field admitting these curves. If the linearization at this stationary point satisfies the hypotheses of Proposition 5 then one knows that at most two invariant algebraic curves may intersect at this stationary point, and the exponents in the integrating factor problem equal −1. With Bezout's theorem, one may thus hope to obtain degree bounds for the possible semi-invariants. This approach works well for the stationary points at infinity; see [27] for technicalities and a stronger version of the following statement, and for details on stationary points at infinity. Here it suffices to know that stationary points at infinity correspond to invariant straight lines of the homogeneous term (P (m) , Q (m) ) of highest degree of the vector field in (2).

Theorem 6 Let X be a polynomial vector field of degree m such that all stationary points at infinity have non-nilpotent linearization and that no stationary point at infinity is dicritical.
(a) Then all irreducible semi-invariants have degree ≤ m + 1.

(b) If one stationary point at infinity admits a non-rational eigenvalue ratio, then a Darboux integrating factor is necessarily of the form h^{-1}, with h a polynomial of degree m + 1.
A result related to part (a) is given in [5], Theorem 3 and Corollary 4, and a result related to part (b) is given in [5], Theorem 5. For the curve problem (in the projective setting, in particular) there exist sharper global results, which employ much stronger machinery than used in [27]. We mention work from the 1980s and 1990s by Camacho and Sad [2], Cerveau and Lins Neto [4], and Carnicer [3]. But it seems that the question of integrating factors was discussed for the first time in [27]. Recently an algorithmic approach for curves (in the affine setting) was proposed by Coutinho and Menasché Schechter [14].
Example 7 To illustrate one of the results in [27] that go beyond Theorem 6, and how it works, we continue Example 4 related to the differential equation (8), thus

ẋ = 1, ẏ = Q(x, y).

We make some additional assumptions on the homogeneous highest-degree term Q^{(m)} of Q, m > 1: The polynomial Q^{(m)} is a product of m pairwise relatively prime linear factors, and neither x nor y divides Q^{(m)}.
The stationary points at infinity can be discussed exactly as in [27], Example 3.12, which deals with a slightly different problem involving the same requisite calculations. A transfer of the arguments in [27], Example 3.12 (with only marginal modifications) shows that for δ(Q) ≥ 3 the system admits no Darboux integrating factor. The same holds true for δ(Q) = 2: As in [27] one sees that if there exists a Darboux integrating factor then there also exists an integrating factor of the form g^{-1}, with a quadratic polynomial g satisfying g^{(2)} = Q^{(2)}. But g and g_y have no common zero, as noted earlier, and this implies that either x divides g^{(2)} = Q^{(2)}, or g^{(2)} is a square. Thus, with the given assumptions on the highest degree term, no Darboux integrating factor can exist.

Inverse Problems
In this section we consider given irreducible pairwise relatively prime polynomials f_1, . . . , f_r, and set f := f_1 · · · f_r. We deal with the following inverse problems:

The inverse problem for curves Find (characterize) the polynomial vector fields X = P ∂/∂x + Q ∂/∂y that admit f (equivalently, admit all f_i).

The inverse problem for integrating factors Given, furthermore, nonzero complex constants d_1, . . . , d_r, find (characterize) the polynomial vector fields that admit the Darboux integrating factor (3).

These inverse problems are of interest and contain legitimate questions in their own right. For instance, it is legitimate to ask which derivations leave some given ideal invariant, or which polynomial vector fields admit a given infinitesimal symmetry. Moreover, the approach via integrating factors rather than symmetries avoids invocation of the straightening theorem and yields global results, thus providing examples and information from a different perspective. In addition, understanding the inverse problems is necessary for characterization (or ideally, classification), especially in the integrating factor case. As will be seen, the discussion of inverse problems provides insight into the obstacles to elementary integrability in Prelle and Singer [20]. And finally, particular solutions of inverse problems yield vector fields with special properties. As a case in point, Llibre and Rodríguez [18] used prescribed integrating factors to explicitly determine polynomial vector fields with a given configuration of limit cycles. This result also illustrates that complex invariant curves and Darboux integrating factors are relevant even for the analysis of real systems.

The Inverse Problem for Curves
We first turn to the inverse problem for curves. This is relatively easy to discuss, requiring only some Commutative Algebra. Recall that X admits f if and only if X(f) = L · f for some polynomial L (the cofactor of f). The vector fields admitting f form a linear space which we call V. Some of its elements are obvious:

(i) The Hamiltonian vector field of f, defined by

X_f := −f_y ∂/∂x + f_x ∂/∂y,

lies in V; its cofactor is 0.
(ii) More generally, vector fields of the type

a · X_f + f · X̃,

with an arbitrary polynomial a and an arbitrary polynomial vector field X̃, lie in V. These form a subspace which we call V_0 (the trivial vector fields admitting f).

(iii) Refinement: All vector fields of the type

a_1 · (f/f_1) · X_{f_1} + · · · + a_r · (f/f_r) · X_{f_r} + f · X̃,

with arbitrary polynomials a_i and an arbitrary polynomial vector field X̃, lie in V; they form a subspace which we call V_1.

To obtain a general picture of the vector fields admitting f, we first look at the cofactors, which form an ideal of C[x, y]. From X(f) = P · f_x + Q · f_y = L · f one obtains (by definition of ideal quotients) the necessary and sufficient condition

L ∈ ⟨f_x, f_y⟩ : ⟨f⟩

for L to be a cofactor. Geometrically, the structure of the quotient is determined by the singular points of the curve; see [26] and [11]: The ideal ⟨f_x, f_y⟩ has a primary decomposition with primary ideals q_j, and

⟨f_x, f_y⟩ : ⟨f⟩ = ⋂_j (q_j : ⟨f⟩).

(See e.g. van der Waerden [25], Chap. 15, Sect. 15.3 for background information.) A nontrivial primary ideal of the polynomial ring in two variables corresponds either to a point or to an irreducible curve in the plane, and one verifies: If q_j corresponds to a curve, or to a point which does not lie on the curve C, then q_j : ⟨f⟩ = q_j. If q_j corresponds to a point which lies on the curve C (and is then necessarily a singular point of C), then q_j : ⟨f⟩ strictly contains q_j.
Proposition 8
(a) A vector field X ∈ V lies in V_0 if and only if its cofactor lies in the ideal ⟨f_x, f_y⟩.
(b) The map sending a vector field to its cofactor induces an isomorphism

V/V_0 → (⟨f_x, f_y⟩ : ⟨f⟩)/⟨f_x, f_y⟩,

and this factor space is finite dimensional.

Sketch of proof We prove the nontrivial direction of (a) only for the case that f_x and f_y have no common prime factor. With L = R · f_x + S · f_y one obtains

(P − R · f) · f_x + (Q − S · f) · f_y = X(f) − L · f = 0.

By relative primeness of f_x and f_y one finds that there is an a such that

P − R · f = −a · f_y, Q − S · f = a · f_x,

thus X = a · X_f + f · (R ∂/∂x + S ∂/∂y) ∈ V_0. Part (b) is then essentially the homomorphism theorem. Finite dimension follows from the primary decomposition above, since C[x, y]/q_j is finite dimensional whenever q_j corresponds to a point.
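The computation in the sketch can be checked mechanically: any vector field of the form a · X_f + f · (R ∂/∂x + S ∂/∂y) admits f, and its cofactor is R · f_x + S · f_y. The following sympy sketch verifies this for hypothetical polynomial data of our own choosing.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical data: an elliptic-type curve and arbitrary polynomial choices.
f = y**2 - x**3 - 1
a, R, S = x + y, x, y**2
fx, fy = sp.diff(f, x), sp.diff(f, y)

# Components of X = a*X_f + f*(R d/dx + S d/dy), with X_f = (-f_y, f_x):
P = -a*fy + f*R
Q = a*fx + f*S

# X(f) should be divisible by f, with quotient (cofactor) R*f_x + S*f_y.
Xf = sp.expand(P*fx + Q*fy)
L, rem = sp.div(Xf, f, x, y)
print(rem == 0, sp.expand(L - (R*fx + S*fy)))   # True 0
```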
This result allows a complete description of V if the underlying geometry is sufficiently simple.

Corollary 9
Assume that the zero sets C_i of the f_i are smooth, that all pair intersections are transversal, and that there are no triple intersections. Then V = V_1; in other words, every vector field admitting f has the form

a_1 · (f/f_1) · X_{f_1} + · · · + a_r · (f/f_r) · X_{f_r} + f · X̃

with polynomials a_i and a polynomial vector field X̃.

Sketch of proof The smoothness and transversality assumptions imply that C[x, y]/q_j has dimension one (and q_j is maximal) whenever q_j corresponds to a singular point z of the curve. If z lies in the common zero set of f_k and f_l, then the vector field

Y := (f/f_k) · X_{f_k}

has a cofactor which does not vanish at z (thanks to transversality and to the absence of triple intersections). Multiplying Y by a suitable polynomial one obtains a vector field with cofactor not vanishing at z but vanishing at every other singular point. With Proposition 8(b) the proof is finished.
In view of this result, and of applications below, we will say that the inverse curve problem satisfies the geometric nondegeneracy condition if all the zero sets C i of f i are smooth, all pair intersections are transversal and there are no triple intersections.
Example 10 Consider the inverse problem corresponding to the non-autonomous polynomial differential equation (8) from Example 4; thus let f have no common zeros with its partial derivative f_y. Then the curve C has no singular points, hence Proposition 8(a) shows that V = V_0, and every vector field admitting f has the form

a · X_f + f · (R ∂/∂x + S ∂/∂y) = (−a · f_y + R · f) ∂/∂x + (a · f_x + S · f) ∂/∂y

with suitable polynomials a, R and S. Since f and f_y have no common zero, by Hilbert's Nullstellensatz there exist polynomials a* and R* such that −a* · f_y + R* · f = 1, and every ordinary differential equation y′ = Q(x, y) which admits an algebraic solution φ defined by f(x, φ(x)) = 0 has the form

y′ = a* · f_x + S · f

with arbitrary S. In particular it is now clear that every polynomial f which has no common zero with its partial derivative f_y defines an algebraic solution for some non-autonomous polynomial equation (8).
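The Nullstellensatz construction of Example 10 can be carried out explicitly for the hypothetical choice f = xy + 1 (our own illustration): here f_y = x, and a* = y, R* = 1 satisfy −a* · f_y + R* · f = 1. The sympy sketch below builds one resulting equation and confirms that the algebraic solution φ = −1/x (defined by f(x, φ(x)) = 0) solves it.

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical data for Example 10: f = x*y + 1, so f_y = x, and
# -a_star*f_y + R_star*f = 1 holds with a_star = y, R_star = 1.
f = x*y + 1
a_star, R_star = y, sp.Integer(1)
assert sp.expand(-a_star*sp.diff(f, y) + R_star*f) == 1

# Build one equation y' = a_star*f_x + S*f with an arbitrary S = S(x).
S = x - 2
Q = sp.expand(a_star*sp.diff(f, x) + S*f)   # = y**2 + (x-2)*(x*y + 1)

# The branch phi = -1/x satisfies f(x, phi(x)) = 0 and should solve y' = Q.
phi = -1/x
residual = sp.simplify(sp.diff(phi, x) - Q.subs(y, phi))
print(residual)   # 0
```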

Remark 11
It seems appropriate to take a closer look at polynomials f which have no common zero with their partial derivative f_y. Clearly, this holds for any irreducible polynomial f = p_1(x) · y + p_0(x). But irreducible polynomials f with y-degree greater than one and this property must satisfy quite restrictive conditions. For instance, the irreducible polynomials of a given degree > 1 (identified with the tuples of their coefficients) with the property that f and f_y have no common zero are contained in a proper Zariski-closed subset of the coefficient space. To verify this, we show: If

f(x, y) = p_n(x) · y^n + p_{n−1}(x) · y^{n−1} + · · · + p_0(x), n > 1,

with polynomials p_i, has no common zero with f_y, then p_n cannot be constant. The argument requires some facts about resultants and discriminants; see van der Waerden [25], Chap. 5, Sects. 5.7-5.9: Assume that p_n is constant. Then, on the one hand, the discriminant of f is equal to the resultant of f and f_y (seen as polynomials in y), up to a constant, hence it cannot have a zero. On the other hand, the discriminant is a constant multiple of ∏_{i<j}(q_j(x) − q_i(x))², with the q_j denoting the zeros of f in some algebraic extension of C(x), and hence all q_j − q_i are constant. Since the sum of the q_i is equal to the rational function −p_{n−1}/p_n, each q_j must be rational, hence f is reducible; a contradiction.

However, there exist irreducible polynomials of any y-degree n > 1 which do not have a common zero with their y-derivative: Let q_0, q_1 and q_2 be polynomials in one variable x such that: (i) every zero of q_2 is also a zero of q_0; (ii) q_0 and q_1^n + q_2 have no common zero. Then

f := (q_0 · y + q_1)^n + q_2 and f_y = n · (q_0 · y + q_1)^{n−1} · q_0

have no common zero: If the first factor of f_y vanishes at (x_0, y_0) and f(x_0, y_0) = 0, then q_2(x_0) = 0, hence q_0(x_0) = 0 by (i), hence also q_1(x_0) = 0, and (ii) leads to a contradiction. Likewise, the second factor of f_y cannot lead to a common zero of f and f_y, again by (ii). For a suitable choice of the q_i (e.g. for q_0(x) = q_2(x) = x and q_1 = 1) the polynomial f is irreducible.
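The construction at the end of Remark 11 can be checked computationally for n = 2 with the suggested data q_0 = q_2 = x, q_1 = 1, i.e. f = (xy + 1)² + x. By Hilbert's Nullstellensatz, f and f_y have no common zero precisely when the Gröbner basis of the ideal ⟨f, f_y⟩ is {1}; a sympy sketch:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Data from the end of Remark 11: q0 = q2 = x, q1 = 1, n = 2.
n = 2
q0, q1, q2 = x, sp.Integer(1), x
f = sp.expand((q0*y + q1)**n + q2)   # = x**2*y**2 + 2*x*y + x + 1

# The ideal <f, f_y> is trivial iff its Groebner basis is [1],
# i.e. f and f_y have no common zero.
G = sp.groebner([f, sp.diff(f, y)], x, y, order="lex")
print(list(G.exprs))   # [1]
```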

Example 12
In continuation of Remark 11, we take a closer look at irreducible polynomials f with y-degree δ_y(f) = 2, thus

f = p_2(x) · y² + p_1(x) · y + p_0(x)

with p_0, p_1 and p_2 having no common zero. One computes

f = (1/2) · y · f_y + r, r := (1/2) · p_1 · y + p_0.

Note that f is reducible if and only if the discriminant

Δ := p_1² − 4 · p_0 · p_2

is a square in C[x]. Since we are interested in irreducible polynomials, we will assume that Δ is not a square; in particular, Δ is not constant. If (x_0, y_0) is a common zero of f and f_y then it is also a zero of r, and x_0 is a zero of Δ. Conversely, if x_0 is a zero of Δ and p_1(x_0) ≠ 0, then there is a unique y_0 such that r(x_0, y_0) = 0, and then f(x_0, y_0) = f_y(x_0, y_0) = 0. Thus, if f and f_y have no common zero then every zero of Δ is also a zero of p_1, and thus of p_0 · p_2. If furthermore p_0(x_0) = 0 then (x_0, 0) is a common zero of f and f_y; a contradiction. Therefore every zero of Δ must also be a zero of p_1 and p_2.
Conversely, if every zero of Δ is also a zero of p_1 and p_2, then f and f_y have no common zero: Such a common zero (x_0, y_0) would force p_1(x_0) = p_2(x_0) = 0, hence p_0(x_0) ≠ 0 (since the p_i have no common zero), and therefore f(x_0, y_0) = p_0(x_0) ≠ 0; a contradiction. An explicit example is readily constructed from these conditions.

Results for the nondegenerate geometric setting are of general interest since in the affine plane it is possible to "create" nondegenerate geometry by using blow-ups (seen as morphisms of the affine plane). See Sect. 4 for more on this. But one should note that the determination of ideal quotients (thus of the cofactor ideal) and of vector fields admitting f with given cofactors is a standard task for algorithmic algebra, and can in principle be performed via existing software packages; see [11].
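The conditions of Example 12 can indeed be met. The following data is a hypothetical instance of our own (not necessarily the example intended in the text): p_2 = 4x, p_1 = 4x, p_0 = x − 1 gives Δ = 16x, which vanishes only at x = 0, a common zero of p_1 and p_2 but not of p_0. A sympy check:

```python
import sympy as sp

x, y = sp.symbols("x y")

# Hypothetical quadratic satisfying the conditions of Example 12:
# p2 = 4*x, p1 = 4*x, p0 = x - 1.
p2, p1, p0 = 4*x, 4*x, x - 1
f = sp.expand(p2*y**2 + p1*y + p0)

# Discriminant D = p1**2 - 4*p0*p2 = 16*x: not a square, so f is irreducible.
D = sp.expand(p1**2 - 4*p0*p2)
print(D)                 # 16*x

# f and f_y have no common zero iff the Groebner basis of <f, f_y> is [1].
G = sp.groebner([f, sp.diff(f, y)], x, y, order="lex")
print(list(G.exprs))     # [1]
```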

The Inverse Problem for Integrating Factors
The inverse problem for integrating factors is more complicated, and moreover it seems that one cannot resolve it exclusively by algebraic means. Given f_1, . . . , f_r, f = f_1 · · · f_r as in the previous subsection, and nonzero complex constants d_1, . . . , d_r, the vector fields with Darboux integrating factor of the form (3) form a linear space F = F(d_1, . . . , d_r), which is a subspace of V. As in the curve case, we start by exhibiting a class of vector fields that admit this prescribed integrating factor: Given an arbitrary polynomial g, define the rational function

φ := g/(f_1^{d_1−1} · · · f_r^{d_r−1}).
Then the vector field

X := f_1^{d_1} · · · f_r^{d_r} · X_φ = f · X_g + Σ_{i=1}^{r} (1 − d_i) · g · (f/f_i) · X_{f_i} (9)

is polynomial and by construction admits the integrating factor (f_1^{d_1} · · · f_r^{d_r})^{-1}. Vector fields of type (9) form a subspace F_0 = F_0(d_1, . . . , d_r) of F. We will call these the trivial vector fields admitting the given integrating factor. Nontrivial vector fields exist for certain exponents, as the next result shows (see [10,26] and [12]).

Theorem 13
(a) For all constants α_i and every vector field X̃ with divergence zero, the vector field

X = α_1 · (f/f_1) · X_{f_1} + · · · + α_r · (f/f_r) · X_{f_r} + f · X̃

admits the integrating factor f^{-1}, and X is not trivial if some α_i ≠ 0.
(b) If the nondegenerate geometry condition holds then every vector field admitting the integrating factor f^{-1} is of this type.
Sketch of proof We prove part (b) only for the case of irreducible f = f_1; the essential part of the argument is still visible here. Thus let X admit the integrating factor f^{-1}. Due to Corollary 9 one has X = a · X_f + f · X̃, with some polynomial a and some polynomial vector field X̃. The cofactor of X is equal to X̃(f), hence the integrating factor condition div X = X̃(f) is equivalent to

X_f(a) + f · div X̃ = 0.

From this one finds that X_f(a) vanishes on the zero set of f, and by a familiar invariance criterion the set V_β of common zeros of f and a − β is invariant for X_f, for every constant β. Let β be such that V_β ≠ ∅. Then no irreducible component of V_β is a point, since this point would be stationary for X_f, hence a common zero of f, f_x and f_y, contradicting the nondegenerate geometry assumption. Therefore every component of V_β is a curve and contained in the irreducible curve given by f = 0. Hilbert's Nullstellensatz shows the existence of a polynomial g such that

a − β = g · f,

which allows us to rewrite

X = β · X_f + f · (g · X_f + X̃),

and one checks that g · X_f + X̃ has divergence zero. The assertion follows.

Corollary 14
Given f = f_1 · · · f_r, all non-autonomous polynomial equations (8) with integrating factor f^{-1} are determined as follows: There are constants α_1, . . . , α_r such that

α_1 · (f/f_1) · f_{1,y} + · · · + α_r · (f/f_r) · f_{r,y} = 1, (10)

and

y′ = −(α_1 · (f/f_1) · f_{1,x} + · · · + α_r · (f/f_r) · f_{r,x}) + S · f

with an arbitrary polynomial S in one variable x.
Proof Since f and f_y have no common zero in this case, Theorem 13 is applicable, hence

X = c_1 · (f/f_1) · X_{f_1} + · · · + c_r · (f/f_r) · X_{f_r} + f · (R ∂/∂x + S ∂/∂y)

with constants c_i and R_x + S_y = 0. On the other hand, the requirement that the first component be equal to 1 reads

−(c_1 · (f/f_1) · f_{1,y} + · · · + c_r · (f/f_r) · f_{r,y}) + R · f = 1.

Comparing the y-degrees one sees that R = 0 (hence S depends only on x), and setting α_i := −c_i one finds (10). The assertion follows.
A complete characterization of polynomials f 1 , . . . , f r satisfying condition (10) for some α i seems to be nontrivial. We just look at some special cases:

Example 15
(a) Condition (10) enforces that f_i and f_{i,y} have no common zero, but this alone is not sufficient: For f_1 = xy + 1 and f_2 = xy + 2 one verifies that f and f_y have no common zero, hence there are vector fields ∂/∂x + Q ∂/∂y which admit f. But no vector field of this type admits the integrating factor f^{-1}, since

α_1 · f_2 · f_{1,y} + α_2 · f_1 · f_{2,y} = x · (α_1 · f_2 + α_2 · f_1)

vanishes on the line x = 0, and therefore condition (10) cannot be satisfied.
(b) If f = f_1 is irreducible then (10) implies that f_y is constant, hence f = y + p_0(x) with no loss of generality. For the corresponding differential equations one finds

y′ = −p_0′(x) + S(x) · (y + p_0(x));

all of these are linear.
(c) If r = 2 and δ_y(f) = 2 then one has f_1 = p_1 · y + p_0 and f_2 = q_1 · y + q_0 with polynomials p_i and q_i in one variable x. One verifies that condition (10) holds if and only if α_2 = −α_1 and

α_1 · (p_1 · q_0 − p_0 · q_1) = 1;

in particular p_1 · q_0 − p_0 · q_1 must be equal to a nonzero constant γ. Without loss of generality we may take γ = 1. For given relatively prime p_i, all q_i which satisfy this relation can be found via Euclid's algorithm. The corresponding non-autonomous differential equations are special Riccati equations.
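The Euclid step in Example 15(c) is the extended Euclidean algorithm: given relatively prime p_1, p_0, find q_0, q_1 with p_1 · q_0 − p_0 · q_1 = 1. A sympy sketch with hypothetical data p_1 = x² + 1, p_0 = x (our own choice):

```python
import sympy as sp

x = sp.symbols("x")

# Hypothetical relatively prime data: p1 = x**2 + 1, p0 = x.
p1, p0 = x**2 + 1, x

# gcdex returns (s, t, g) with s*p1 + t*p0 = g = gcd(p1, p0) = 1.
s, t, g = sp.gcdex(p1, p0, x)

# Then q0 := s and q1 := -t satisfy p1*q0 - p0*q1 = 1.
q0, q1 = s, -t
check = sp.simplify(p1*q0 - p0*q1)
print(g, check)   # 1 1
```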
The following two auxiliary results are elementary but useful, both for examples and for theoretical purposes.
From now on and until further notice, nondegenerate geometry will be assumed. Then Lemma 17 yields a reduction strategy, with the starting point that any vector field X admitting f is congruent to some f · X * modulo F 0 (d 1 , . . . , d r ), unless d = 1. In one scenario this strategy works perfectly: If all d i are positive integers then one can reduce them all to 1 and apply Theorem 13.

Theorem 18
Let the nondegenerate geometry condition be satisfied. If all d_i are positive integers then the elements of F are precisely those of the form

f_1^{d_1−1} · · · f_r^{d_r−1} · (α_1 · (f/f_1) · X_{f_1} + · · · + α_r · (f/f_r) · X_{f_r} + f · X̃)

with constants α_i and div X̃ = 0.
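A sympy sketch of the shape described in Theorem 18, under our reading of the (partially lost) display: for the hypothetical data f_1 = x, f_2 = y, d_1 = d_2 = 2, constants α_i, and the divergence-free field X̃ = (y, −x), the candidate vector field should admit the integrating factor (f_1² f_2²)^{-1}.

```python
import sympy as sp

x, y = sp.symbols("x y")
a1, a2 = sp.symbols("alpha1 alpha2")

# Hypothetical data: f1 = x, f2 = y, d1 = d2 = 2, X_{f1} = (0, 1),
# X_{f2} = (-1, 0), and the divergence-free field Xt = (y, -x).
f1, f2 = x, y
f = f1*f2

# Components of f1**(d1-1)*f2**(d2-1)*(a1*(f/f1)*X_{f1} + a2*(f/f2)*X_{f2} + f*Xt):
P = f1*f2*(-a2*(f/f2) + f*y)
Q = f1*f2*(a1*(f/f1) + f*(-x))

mu = 1/(f1**2 * f2**2)
divergence = sp.simplify(sp.diff(mu*P, x) + sp.diff(mu*Q, y))
print(divergence)   # 0
```

The divergence of μ·X vanishes identically in the constants α_i, consistent with the statement.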
In the general case, one might initially hope to apply Lemma 17 repeatedly and reduce the degree of the vector field in every step. But this works only if additional nondegeneracy conditions hold for the stationary points at infinity (see [13] and [10]), and it will fail in general. However, one can instead consider the degree δ_y with respect to y, and start from the assumption

f_j = γ_j · y^{n_j} + terms of smaller degree in y, with constants γ_j ≠ 0, for all j.

This can always be achieved by a suitable invertible linear transformation and thus involves no loss of generality. A consequence is that in the representation of X in Lemma 17 one may assume δ_y(a_i) < δ_y(f_i) for all i, since every power y^k with k ≥ δ_y(f_i) can be replaced by a sum of powers of smaller degree, modulo f_i. Up to some point, one can use this to reduce y-degrees via Lemma 17. A precise statement can be found in [12], Proposition 8, but for our purposes the following version will suffice.

Lemma 19
Let the nondegenerate geometry condition be satisfied. There exist positive integers m_1 and m_2 (depending only on f_1, . . . , f_r) such that "modulo F_0" only vector fields with δ_y(P) < m_1 and δ_y(Q) < m_2 remain to be investigated in F. Moreover, one may assume that all Re d_j ≤ 1.
We emphasize that the proof will provide m_1 and m_2 explicitly. The next result is a consequence of the fact that any submodule of a free C[x]-module is free.

Proposition 20 There exist finitely many vector fields Y_1, . . . , Y_s, admitting f and satisfying the degree bounds of Lemma 19, such that every vector field which remains to be investigated can be written in the form

X = u_1 · Y_1 + · · · + u_s · Y_s

with polynomials u_1, . . . , u_s in the variable x alone. Moreover the u_i are uniquely determined.

Example 21
(a) f = y² − p(x), with p nonconstant and having only simple roots; thus f is irreducible with smooth zero set. (This is a special case of an example discussed in [12], which we present here step-by-step to illustrate the approach and computations.) By Corollary 9 we know that every vector field admitting f has the form

a · X_f + f · X̃,

using the relation y² ≡ p(x) mod f to shift higher powers of y in a to the second term. Lemma 19 in this case provides m_1 = m_2 = 2. Rewriting the first term once more with the help of the relation, and discarding the terms of degree > 1 in y, one finds (with suitable labelling) that only vector fields of the form

X = u_0(x) · Y_0 + u_1(x) · Y_1, Y_0 := X_f, Y_1 := 2p ∂/∂x + p′ · y ∂/∂y,

with polynomials u_0, u_1 in one variable remain to be investigated.
(b) Consider the irreducible cubic polynomial

f = y³ + p_1(x) · y + p_0(x)

with smooth zero set C (see [12]). Here one can show with Lemma 19 that it is sufficient to consider vector fields of y-degree ≤ 1 in the first entry and ≤ 2 in the second. Moreover, every vector field which remains to be investigated can be written in the form u_1 · Y_1 + u_2 · Y_2, with polynomials u_1, u_2 in one variable and suitable vector fields Y_1, Y_2 admitting f.
(c) Consider polynomials p_1, . . . , p_r in one variable, set f_i := y − p_i(x) (pairwise distinct) and f = f_1 · · · f_r. Then it is sufficient to consider vector fields with y-degree < r − 1 in every entry, and every such vector field can be written, modulo F_0(d_1, . . . , d_r), in a form with coefficient polynomials in the variable x alone. See [12] for computations and details.
Theorem 22 Let the nondegenerate geometry condition be satisfied. Then a vector field X = u_1 · Y_1 + · · · + u_s · Y_s, with the Y_i and u_i as above, lies in F(d_1, . . . , d_r) if and only if the coefficients satisfy a linear differential system

V · (u_1′, . . . , u_s′)^t = B · (u_1, . . . , u_s)^t.

The matrices V and B have s columns and at most max{m_1, m_2} − 1 rows. The entries of V do not depend on d_1, . . . , d_r, and V has maximal rank s.
Partial sketch of proof Let K_{i,j} be the cofactor of f_j with respect to Y_i. For X as above, the cofactor L_j of f_j equals

L_j = u_1 · K_{1,j} + · · · + u_s · K_{s,j}.

Evaluation of the integrating factor condition

div X = d_1 · L_1 + · · · + d_r · L_r

yields a polynomial identity in x and y. Now compare powers of y.
Again we look at examples, continuing Example 21.

Example 23
(a) Here we have to deal with the vector fields

X = u_0 · Y_0 + u_1 · Y_1, Y_0 = X_f, Y_1 = 2p ∂/∂x + p′ · y ∂/∂y,

with cofactors K_0 = 0 and K_1 = 2p′. Let d not be a positive integer. The integrating factor condition for f^{-d} yields

d · 2p′ · u_1 = −2u_0′ · y + 2p · u_1′ + 3p′ · u_1,

and therefore u_0′ = 0 and

2p · u_1′ = (2d − 3) · p′ · u_1.

This immediately implies that u_0 is constant. We will show that u_1 = 0 by an approach which is unnecessarily complicated for this special problem, but open to generalization: By Lemma 17 one may assume Re d < 0. Write the equation as

u_1′ = ((2d − 3)/2) · (p′/p) · u_1,

which in the present case yields, with the variable v = 1/x at infinity,

du_1/dv = −(((2d − 3) · n/2) · v^{-1} + · · ·) · u_1,

where the dots indicate terms of higher order in v, and n denotes the degree of p. Comparing coefficients of v^{−m−1}, with m the degree of u_1, shows

m = (2d − 3) · n/2,

which yields a contradiction in view of Re d < 0. This approach is generalizable to vector-matrix equations u′ = A(x) · u, with rational entries of A: If the system has a weak singular point at infinity (thus −v^{-2} · A(v^{-1}) = A_∞ · v^{-1} + · · ·) and u is a nonzero polynomial solution of the meromorphic linear system, then its degree m must be an eigenvalue of −A_∞. For the example at hand, we have shown that u_1 = 0, therefore only the Hamiltonian vector field of f admits f^{-d} as integrating factor. Thus F(d) = F_0(d) whenever d is not a positive integer.
(b) We continue with the degree 3 polynomial f = y³ + p_1(x) · y + p_0(x). Assume that d is not a positive integer. Then the vector field Y given above admits the integrating factor f^{-d} if and only if the polynomials u_1 and u_2 satisfy a meromorphic linear system whose coefficients involve the discriminant Δ = 27p_0² + 4p_1³ of f.
For many classes of such curves one has F(d) = F_0(d). For instance, if δ(p_0) ≥ 2δ(p_1) then the meromorphic linear system admits a first order pole at infinity, with the coefficient matrix of the lowest order term equal to . . . , and therefore F(d) = F_0(d) whenever d is not a positive integer. This follows from the fact that one may assume Re d < 0, whence both eigenvalues of A_∞ have positive real parts. The question whether F(d) = F_0(d) holds for all such cubic curves is still open.

(c) We continue the example from above with . . . It is convenient to abbreviate θ_i := d_i − 1, and one may assume Re(θ_i) ≤ 0. As is shown in [12], one has . . . if and only if the polynomials a_i satisfy the meromorphic linear system . . . The nonzero constant solution a_1 = θ_1, . . . , a_r = θ_r yields a scalar multiple of f . . . It is interesting to take a closer look at constant solutions. For the case of nondegenerate geometry this is a consequence of Theorem 22, as was shown in [12]. The general proof will be given in the following section. However, one may say that the crucial argument for the proof of Theorem 24, invoking finiteness results for solutions of linear differential equations, rests on the case of nondegenerate geometry.
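The degree criterion at infinity used in these examples is easy to test symbolically. The following sketch works with a diagonal toy system of our own choosing (not one of the systems above); it confirms that the degrees of its polynomial solutions appear among the eigenvalues of −A_∞:

```python
import sympy as sp

x, v = sp.symbols('x v')

# Toy system u' = A(x)u with rational coefficients (our choice):
# it has the polynomial solutions u = (x**2, 0) and u = (0, x**5).
A = sp.Matrix([[2/x, 0], [0, 5/x]])

u = sp.Matrix([x**2, 0])
assert sp.simplify(u.diff(x) - A*u) == sp.zeros(2, 1)

# Behavior at infinity: substitute x = 1/v; the transformed
# coefficient matrix is -v**(-2) * A(1/v) = A_inf / v + ...
B = sp.simplify(-v**(-2) * A.subs(x, 1/v))
A_inf = sp.simplify(B * v).subs(v, 0)   # leading coefficient A_inf

# The degrees 2 and 5 of the polynomial solutions are
# eigenvalues of -A_inf:
print(sorted((-A_inf).eigenvals()))  # [2, 5]
```

In particular, when no eigenvalue of −A_∞ is a nonnegative integer (as happens above once Re d < 0 forces positive real parts for the eigenvalues of A_∞), the system admits no nonzero polynomial solution.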

Proof of Theorem 24
The strategy of the proof is to reduce arbitrary geometry to the nondegenerate setting, via sigma processes and the Bendixson-Seidenberg theorem [21]: By application of finitely many suitable sigma processes one obtains smooth irreducible curves with only simple intersections. The main problem will be to control the behavior of trivial vector fields.
We first introduce the necessary ingredients. In the affine plane the basic sigma process with center 0 and direction {x = 0} is represented by the birational morphism σ : (x, y) ↦ (x, xy).
Generally, a sigma process will be composed of a basic sigma process, a linear transformation and a translation. This morphism induces a map from polynomials to polynomials, sending g to ĝ with ĝ(x, y) = g(x, xy), and a map ι from polynomial vector fields to polynomial vector fields, sending X = (P(x, y), Q(x, y)) to X̂ = (xP(x, xy), −yP(x, xy) + Q(x, xy)).
Both maps are linear and injective. We collect some useful criteria and auxiliary results; see [11] for the straightforward proofs.
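The two induced maps are straightforward to implement symbolically. In the sketch below the function names sigma and iota are ours, and the semi-invariant is a toy example (the node f = xy, invariant for X = (x, y) with cofactor 2); it illustrates that invariance of a curve survives the transformation, the cofactor of the total transform being read off as a quotient:

```python
import sympy as sp

x, y = sp.symbols('x y')

def sigma(g):
    # Total transform of a polynomial under the basic sigma process.
    return sp.expand(g.subs(y, x*y))

def iota(P, Q):
    # Induced map on vector fields, as in the text:
    # (P, Q) -> (x*P(x, xy), -y*P(x, xy) + Q(x, xy)).
    Ph, Qh = P.subs(y, x*y), Q.subs(y, x*y)
    return sp.expand(x*Ph), sp.expand(-y*Ph + Qh)

# Toy semi-invariant (our choice): f = x*y, with a node at 0,
# is invariant for X = (x, y) with cofactor K = 2.
P, Q = x, y
f = x*y
K = sp.cancel((P*f.diff(x) + Q*f.diff(y))/f)   # K = 2

fh = sigma(f)                    # x**2 * y
Ph, Qh = iota(P, Q)              # (x**2, 0)

# Invariance survives the blow-up: the derivative of fh along the
# transformed field is again a multiple of fh.
Kh = sp.cancel((Ph*fh.diff(x) + Qh*fh.diff(y))/fh)
print(Kh)   # 2*x in this toy case
```

Note that in this toy case the quotient picks up a factor x; the precise normalization of the transformed cofactor in the lemma below follows the conventions of [11].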
Lemma 25 Let 0 be a singular point of f, and let a polynomial vector field X = (P, Q) on C^2 be given.
(a) The following are equivalent:
(i) The zero set of f is invariant for X with cofactor K.
(ii) The vector field . . . is polynomial, and the zero set of f̂ := f(x, xy) is invariant for X̂ with cofactor K̂(x, y) = K(x, xy).
(b) . . . , and conversely.
(a) For any analytic f one has the identity . . .
(b) For f_1, . . . , f_r and g, and the vector field defined in (9), one has the identity f̂ . . .
Then Y = X̂ for some X if and only if δ(v_ℓ) ≤ ℓ − 1 and δ(w_ℓ) ≤ ℓ − 1 for all ℓ.
Using (9), one verifies directly that this last expression is equal to . . .

Next we record an auxiliary result from linear algebra, including a proof for the sake of completeness.

Now we write . . . and note that δ(f_{i,ℓ}) ≤ s_i + ℓ by construction. Moreover we may assume that δ(f_{i,0}) = s_i: This can be achieved via a linear automorphism x ↦ x + αy, y ↦ y with suitable α. In other words, this can be achieved by suitable choice of direction of the sigma process, and only finitely many directions have to be excluded. Moreover, we are only interested in the case of a degenerate singular point at 0, thus s_i ≥ 2.
(iii) We will prove by induction on ℓ: If the vector field Y from Lemma 28 lies in ι(F) then . . . , with the tacit understanding that δ(h_ℓ) < 0 means h_ℓ = 0. We will use the degree conditions in (16) and (17). ℓ = 0: In the case Σ_i s_i d_i − 1 ≠ 0, the assumption h_0 ≠ 0 and (17) lead to s_1 + · · · + s_r + δ(h_0) = δ(f_{1,0} · · · f_{r,0} · h_0) ≤ 1, a contradiction. In the case Σ_i s_i d_i − 1 = 0, the assumption h_0 ≠ 0, part (ii) and the degree condition in (16) yield . . . , which also gives a contradiction.
For the induction step we assume that the assertion holds for all h_{ℓ−j} with 1 ≤ j ≤ ℓ. Since the degree of f_{i,j_i} is at most equal to s_i + j_i, every term on the right-hand side of (16), with the possible exception of those involving h_ℓ, has degree ≤ ℓ. By the same argument, every term on the right-hand side of (17), with the possible exception of those involving h_ℓ, has degree ≤ ℓ + 1. Therefore we have . . .

This leads us to the proof of our main result.

Proof of Theorem 24
The Bendixson-Seidenberg theorem for X_f, cf. Seidenberg [21], and Lemma 26 show that a finite number of sigma processes, with suitable centers and directions, will transform the f_i to polynomials f̂_i which satisfy the geometric nondegeneracy condition. In every single sigma process at most finitely many directions have to be excluded, as can be seen from [21]. For f̂_1, . . . , f̂_r and d_1, . . . , d_r finiteness holds by Theorem 11 and Theorem 3 of [12]. For a single sigma process at a degenerate singular point, with a suitably chosen direction, Lemma 29 and Lemma 30 show that finiteness holds for the original setting if it holds for the transformed polynomials. Induction on the number of sigma processes finishes the proof.
Remark 31 Classically, one knows that sigma processes can be used directly to simplify singular points of curves; see Shafarevich [22], Chap. II, §4 (including Exercises). We chose the detour via vector fields because Seidenberg gives a complete proof which shows clearly that the exclusion of finitely many directions does not matter.