Inverse Approach in Ordinary Differential Equations: Applications to Lagrangian and Hamiltonian Mechanics

This paper is on the so-called inverse problem of ordinary differential equations, i.e. the problem of determining the differential systems satisfying a set of given properties. More precisely, we characterize under very general assumptions the ordinary differential equations in R^N which have a given set of either M partial integrals, or M < N first integrals, or M < N partial and first integrals. Moreover, for such systems we determine the necessary and sufficient conditions for the existence of N − 1 independent first integrals. We give two relevant applications of the solutions of these inverse problems to constrained Lagrangian and Hamiltonian systems, respectively.
Additionally we provide the general solution of the inverse problem in dynamics.


Introduction
In the theory of ordinary differential equations there are two fundamental problems. The direct problem consists, broadly speaking, in finding the solutions of a given ordinary differential equation. The inverse problem of ordinary differential equations is to find the most general differential system satisfying a set of given properties; for instance, what are the differential systems in R^N having a given set of invariant hypersurfaces, or of first integrals?
Probably the first inverse problem appeared in Celestial Mechanics; it was stated and solved by Newton in [22], and concerns the determination of the potential field of force that ensures planetary motion in accordance with the observed properties, namely Kepler's laws.
Bertrand in [4] proved that the expression for Newton's force of attraction can be obtained directly from Kepler's first law. He also stated a more general problem: to determine a positional force under which a particle describes a conic section for any initial conditions. Bertrand's ideas were developed in particular in the works [6,8,11,14,25,28,32].
In the modern scientific literature the importance of the inverse problem in Celestial Mechanics was already recognized by Szebehely (see [35]).
Clearly, in view of Newton's second law (acceleration is proportional to force), the mentioned inverse problems are equivalent to determining second-order differential equations from given properties of their right-hand sides.
The inverse problem of finding the most general first-order differential system satisfying a set of given properties was first stated by Erugin [9] in dimension 2 and was developed by Galiullin in [11].
The new approach to the inverse problem which we propose uses the Nambu bracket as an essential tool. We deduce new properties of this bracket, which play a very important role in the proofs of all the results of this work and in their applications. We observe that the applications of the Nambu bracket which we give in this paper are original and represent a new direction in the development of Nambu's ideas.
In the first section we present two different kinds of results. First, under very general assumptions we characterize the ordinary differential equations in R^N which have a given set of either M partial integrals, or M < N first integrals, or M ≤ N partial and first integrals. Second, in R^N we provide necessary and sufficient conditions for integrability, in the sense that the characterized differential equations admit N − 1 independent first integrals.
In the third section, applying the results of the first section, we state and solve the inverse problem of constrained Lagrangian mechanics: for a given natural mechanical system with N degrees of freedom, determine the most general field of force, depending only on the position of the system, satisfying a given set of constraints which are linear in the velocity. One of the main objectives of this inverse problem is to study the behavior of constrained Lagrangian systems with constraints linear in the velocity in a way different from the classical approach deduced from the d'Alembert-Lagrange principle.
Section four is devoted to the inverse problem for constrained Hamiltonian systems: for a given submanifold M of a symplectic manifold, we determine the differential systems having the submanifold M invariant under their flow.

The Nambu Bracket. New Properties
In the seventies Nambu [19] proposed a new approach to classical dynamics, based on an N-dimensional Nambu-Poisson manifold replacing the even-dimensional Poisson manifold, and on N − 1 Hamiltonians H_1, …, H_{N−1} instead of a single Hamiltonian H. In the canonical Hamiltonian formulation the equations of motion (Hamilton's equations) are defined via the Poisson bracket; in Nambu's formulation the Poisson bracket is replaced by the Nambu bracket. Nambu originally considered the case N = 3.
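For the reader's convenience we recall, in our own notation, Nambu's equations of motion in the original case N = 3, driven by the two Hamiltonians H_1 and H_2:

```latex
\dot{x}_1=\frac{\partial(H_1,H_2)}{\partial(x_2,x_3)},\qquad
\dot{x}_2=\frac{\partial(H_1,H_2)}{\partial(x_3,x_1)},\qquad
\dot{x}_3=\frac{\partial(H_1,H_2)}{\partial(x_1,x_2)},
```

equivalently ẋ = ∇H_1 × ∇H_2, so that both H_1 and H_2 are automatically first integrals of the flow.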
Although the Nambu formalism is a generalization of the Hamiltonian formalism, its applications are not as rich as those of the latter.
Let f_1, …, f_N be C^1 functions of the variables x = (x_1, …, x_N), and consider the bracket {f_1, …, f_N} = ∂(f_1, …, f_N)/∂(x_1, …, x_N), i.e. the Jacobian determinant det(∂f_j/∂x_k). This last bracket is known in the literature as the Nambu bracket [3,17,19,36]. The objective of this section is to provide properties of the Nambu bracket, some of them new. These properties will play an important role in some of the proofs of the main results of this paper.
(i) It is a skew-symmetric bracket, i.e. it changes sign under the transposition of any two of its arguments,
where f_1, f_2, …, f_{N−1}, g_1, …, g_N are arbitrary functions. For more details see [10,17,19,36]. Property (i) follows directly from the properties of determinants. Property (ii) is obtained using the properties of the derivative together with the properties of determinants. Property (iii) is property (ix) with λ = 1, and we shall prove it in Proposition 2.
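As a concrete sanity check, the determinant definition of the bracket and two of the listed properties can be verified symbolically. This is a minimal sketch with illustrative functions of our own choosing; we read property (ii) as the usual Leibniz-type rule, which is our assumption here.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
V = (x1, x2, x3)

def nambu(*fs):
    """Nambu bracket {f1,...,fN}: the Jacobian determinant det(d f_j / d x_k)."""
    return sp.det(sp.Matrix([[sp.diff(f, v) for v in V] for f in fs]))

f = x1**2 + x2
g = x2 * x3
h = sp.sin(x1) + x3

# (i) skew-symmetry: transposing two arguments changes the sign
assert sp.simplify(nambu(f, g, h) + nambu(g, f, h)) == 0

# Leibniz-type rule in the first argument (our reading of property (ii)):
# {f*g, h, k} = f*{g, h, k} + g*{f, h, k}
lhs = nambu(f * g, h, x3)
rhs = f * nambu(g, h, x3) + g * nambu(f, h, x3)
assert sp.simplify(lhs - rhs) == 0
```

Both identities hold because the determinant is multilinear in its rows.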
Unfortunately the properties of the Nambu bracket listed above are not sufficient for solving some of the problems that we shall consider.
We shall need the following results (new properties of the Nambu bracket).

Proposition 1
The following four identities hold, where the functions g_1, …, g_N, f_1, …, f_N, G and f are arbitrary.
The proof of (v) follows easily from the definition of the Nambu bracket. The proof of (vi) is by mathematical induction. Without loss of generality we shall prove that For N = 2 we obtain Now we suppose that …, g_{N−1}} = 0 holds and we shall prove (2). Indeed, by considering that we deduce that Here we apply the assumption that K_{N−1} In view of property (iv) we obtain that Hence, consequently, property (vi) is proved. The proof of (vii) is easy to obtain by considering that the value of the determinant can be obtained by expanding along the last row and the last column.
can be written as , and using (iv) we obtain This proves identity (viii). The proof of (ix) is as follows. Taking G = x_j in the identity (viii) and multiplying it by λ we obtain Using (vi), from the last expression we have Now using (iv) the previous expression becomes This completes the proof of identity (ix).
Identity (viii) was proved for the first time in [29].

Remark 3
We note that (ix) was obtained from (viii), so in some sense (viii) is more basic.
In fact from the proof of (ix) we obtain

Now we establish the relationship between the Nambu bracket and the classical Poisson bracket. We suppose that N = 2n, and write x_j = x_j and x_{j+n} = y_j for j = 1, …, n. The Poisson bracket {H, F}* of the functions H and F is defined as

{H, F}* = Σ_{j=1}^{n} (∂H/∂x_j · ∂F/∂y_j − ∂H/∂y_j · ∂F/∂x_j).

Proposition 4 Between the Poisson bracket and the Nambu bracket the following two equalities hold for arbitrary functions H, f, G, f_1, …, f_{2n}:

Proof The identity (x) is obtained by using property (iv); in the equalities we take x_0 = x_1. In the first equality we have used (viii), and in the second equality we have used (x).
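The simplest instance of this relationship can be checked directly: for n = 1 (so N = 2, with canonical coordinates x, y) the Poisson bracket {H, F}* coincides with the two-dimensional Nambu bracket, i.e. the Jacobian determinant of (H, F). A sympy sketch with a Hamiltonian of our own choosing:

```python
import sympy as sp

x, y = sp.symbols('x y')

def poisson(H, F):
    """Canonical Poisson bracket {H, F}* for n = 1."""
    return sp.diff(H, x) * sp.diff(F, y) - sp.diff(H, y) * sp.diff(F, x)

def nambu2(H, F):
    """Two-dimensional Nambu bracket: the Jacobian of (H, F) w.r.t. (x, y)."""
    return sp.det(sp.Matrix([[sp.diff(H, x), sp.diff(H, y)],
                             [sp.diff(F, x), sp.diff(F, y)]]))

H = (x**2 + y**2) / 2 + sp.cos(x)   # a pendulum-like Hamiltonian
F = x * y**3

assert sp.simplify(poisson(H, F) - nambu2(H, F)) == 0
```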

Ordinary Differential Equations in R N with M ≤ N Partial Integrals
Let D be an open subset of R^N. By definition an autonomous differential system is a system of the form ẋ = X(x), (3) where the dependent variables x = (x_1, …, x_N) are real, the independent variable (the time t) is real, and X : D → R^N is a C^1 function. The C^1 function g : D → R and the set {x ∈ D : g = g(x) = 0} are called a partial integral and an invariant hypersurface of the vector field X, respectively, if X(g)|_{g=0} = 0.
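The definition can be illustrated with a toy planar system of our own making: the unit circle g = x^2 + y^2 − 1 is an invariant hypersurface (and g a partial integral) of the vector field X = (−y + xg, x + yg), since X(g) vanishes identically on {g = 0}.

```python
import sympy as sp

x, y = sp.symbols('x y')

g = x**2 + y**2 - 1          # candidate partial integral
X = (-y + x * g, x + y * g)  # planar vector field leaving the circle invariant

# X(g): the derivative of g along the flow
Xg = X[0] * sp.diff(g, x) + X[1] * sp.diff(g, y)

# X(g) = 2*g*(x**2 + y**2), hence X(g)|_{g=0} = 0
assert sp.simplify(Xg - 2 * g * (x**2 + y**2)) == 0
```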
In this section we construct the most general autonomous differential system in D ⊂ R N having the set of partial integrals g j for j = 1, 2, . . . , M, with M ≤ N .
Our first result characterizes the differential systems (3) having a given set of M partial integrals with M ≤ N .
Theorem 5 Let g_j = g_j(x) for j = 1, 2, …, M with M ≤ N be a given set of independent functions defined in an open set D ⊂ R^N. Then any differential system defined in D which admits the set of partial integrals g_j for j = 1, 2, …, M can be written in the form (4), provided |S| vanishes at most on a set of zero Lebesgue measure in D. From (4) we get the differential system (5), where μ_1, …, μ_M and ν_1, …, ν_M are arbitrary functions defined in D such that

Proof of Theorem 5. We consider the vector field (6), where |S| ≢ 0, S_{jk} for k, j = 1, …, N is the determinant of the submatrix of S obtained by removing row j and column k, S^{−1} is the inverse matrix of S, and P = (P_1, …, P_N). Developing the determinant in (6) with respect to the last column, and denoting by {g_1, …, g_{k−1}, ∗, g_{k+1}, …, g_N} the following vector field
Then it is easy to obtain the relationship The vector field X given in (6) was already used in [26,28]. Now we shall prove that system (4) is the most general differential system which admits the given set of independent partial integrals. Indeed, let ẋ = X̃(x) = (X̃_1(x), …, X̃_N(x)) be another differential system having g_1, g_2, …, g_M as partial integrals, that is, X̃(g_j)|_{g_j=0} = 0 for j = 1, 2, …, M. Then taking X̃(g_j) = for k = 1, …, N − M (here we use identity (v)) and substituting these together with λ_{M+k} into formula (7), we get for an arbitrary function F Here we have used identities (iv) and (ix). Hence, in view of the arbitrariness of F, the theorem is proved. From (8) we get the differential system for j = 1, 2, …, N, respectively. In particular if M = N = 2, then differential systems (8) and (9) take the form for j = 1, 2, where the coefficients are arbitrary functions such that if and if the coefficients equal {g_1, g_2}μ_j, with ({g_1, g_2}μ_j)|_{g_j=0} = 0 for j = 1, 2, then from (10) we get the differential system for j = 1, 2, respectively.
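The planar case M = N = 2 can be checked symbolically. The sketch below assumes the representation X = λ_1 {∗, g_2} + λ_2 {g_1, ∗} with λ_j = μ_j g_j; the choices of g_1, g_2 are our own illustrations. One verifies X(g_j) = μ_j g_j {g_1, g_2}, which vanishes on g_j = 0.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def jac(f, g):
    """Two-dimensional Nambu bracket {f, g}."""
    return sp.diff(f, x1) * sp.diff(g, x2) - sp.diff(f, x2) * sp.diff(g, x1)

g1 = x1**2 - x2          # prescribed independent partial integrals
g2 = x1 + x2**3
mu1, mu2 = sp.symbols('mu1 mu2')
l1, l2 = mu1 * g1, mu2 * g2          # lambda_j = mu_j * g_j

# components of X = l1*{*, g2} + l2*{g1, *}
X1 = l1 * jac(x1, g2) + l2 * jac(g1, x1)
X2 = l1 * jac(x2, g2) + l2 * jac(g1, x2)

for gj, lj in ((g1, l1), (g2, l2)):
    Xg = X1 * sp.diff(gj, x1) + X2 * sp.diff(gj, x2)
    assert sp.simplify(Xg - lj * jac(g1, g2)) == 0   # vanishes on g_j = 0
```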

Differential Equations with Partial and First Integrals
In this section we construct the most general differential system having a prescribed set of partial and first integrals, under the condition that |S| is nonzero in D.

Proof Let X be the vector field associated to differential system (11). Clearly this vector field is well defined in view of the assumptions. From X(g_j)|_{g_j=0} = 0 for j = 1, …, M_1 we deduce that the g_j are partial integrals of the vector field X, and from X(f_k) = 0 for k = 1, …, M_2 we obtain that the f_k are first integrals of the vector field X.
Now we prove that system (11) is the most general differential system admitting the partial integrals g_j and the first integrals f_k. Indeed, let ẋ = X̃(x) be another differential system which admits g_j for j = 1, …, M_1 as partial integrals and f_k for k = 1, …, M_2 as first integrals. Proceeding as in the proof of Theorem 5, we deduce that the vector field X̃ is a particular case of the vector field X. Thus the theorem is proved.
The next two results follow easily from the Proof of Theorem 7.

Corollary 8
Under the assumptions of Theorem 7, but without partial integrals, i.e. if M_1 = 0 and M_2 = M < N, the most general differential system in D which admits the first integrals f_k for k = 1, …, M_2 is for j = 1, 2, …, N.

Corollary 10
Under the assumptions of Theorem 7 the following statements hold.
(a) If M_2 = N − 1 and M_1 = 1, then the differential system (14) takes the form
(b) If M_2 = N − 1 and M_1 = 0, then the differential system (13) takes the form where λ_N and g_N are arbitrary functions such that Consequently the vector field X in both cases admits the representation where μ is an arbitrary function.
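The closing representation can be verified directly: with X_j = μ {F_1, …, F_{N−1}, x_j}, each F_k is a first integral, because X(F_k) = μ {F_1, …, F_{N−1}, F_k} is a determinant with a repeated row. A sympy sketch for N = 3 with first integrals of our own choosing:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
V = (x1, x2, x3)

def nambu(*fs):
    """Nambu bracket as a Jacobian determinant."""
    return sp.det(sp.Matrix([[sp.diff(f, v) for v in V] for f in fs]))

F1 = x1**2 + x2**2           # prescribed first integrals
F2 = x2 + sp.exp(x3)
mu = sp.Function('mu')(*V)   # arbitrary function

X = [mu * nambu(F1, F2, v) for v in V]   # X_j = mu * {F1, F2, x_j}

for F in (F1, F2):
    XF = sum(Xj * sp.diff(F, v) for Xj, v in zip(X, V))
    assert sp.simplify(XF) == 0          # F is a first integral of X
```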

Differential Equations and Integrability
In what follows we study the integrability of systems (4). We say that system (3) is integrable if it admits N − 1 independent first integrals.

Theorem 11 Under the assumptions of Theorem 5 differential system (4) is integrable if and only if
Proof Assume that the vector field X associated to differential system (4) is integrable, i.e. it admits N − 1 independent first integrals F_1, …, F_{N−1}. Without loss of generality we suppose that we obtain where μ is an arbitrary function. Thus for l = 1, …, M and k = M + 1, …, N. So the "only if" part of the theorem follows. Now we prove the "if" part. We suppose that the corresponding coefficients are of the form μ{F_1, …, Thus the vector field associated to differential system (4) takes the form In view of identity (ix) we have that Thus the functions F_1, …, F_{N−1} are first integrals of X. Hence the vector field is integrable.

Ordinary Differential Equations in R N with M > N Partial Integrals
In this section we determine the differential systems (3) having a given set of M partial integrals with M > N. Then the most general differential systems in D which admit the partial integrals g_j for j = 1, 2, …, M are for j = 1, 2, …, M, and g_{M+j} = x_j for j = 1, 2, …, N.
Proof of Theorem 12. First of all we determine the differential systems having the N independent partial integrals g_j = g_j(x) for j = 1, 2, …, N; thus we obtain system (8). Clearly this differential system admits the additional partial integrals g_j for j = N + 1, …, M if and only if X(g_ν)|_{g_ν=0} = 0 for ν = N + 1, …, M, or equivalently, Now we prove that is a solution of (18), where the remaining functions are arbitrary and satisfy (17). Indeed, in view of (18) and (19) we obtain which is identically zero by identity (viii). Inserting (19) into (8) and using identity (v), we obtain the equations for j = 1, …, N. Now we prove that this differential system, which coincides with (16), is the most general one. Indeed, using that (16) admits the representation and that if for j = 1, …, N is another differential system having the given set of partial integrals, then by choosing conveniently the functions G_{M+2, M+3, …, M+N}, …, G_{M+1, M+2, …, M+N−1} we deduce that the vector field (20) contains the vector field X̃ = (X̃_1, X̃_2, …, X̃_N). So the proof of Theorem 12 follows.
The following result is proved in a similar way to the Proof of Theorem 11.

Theorem 13
Under the assumptions of Theorem 12 the differential system (16) is integrable if and only if
Proof It follows easily from Theorem 12.

Remark 15
We note that conditions (22) hold identically if where λ_j = λ_j(x, y) for j = 1, …, M + 2 are arbitrary functions. Inserting (23) into (21) we obtain the differential system In particular, if in (24) we assume that λ_j = λ for j = 1, …, M, then we obtain the differential system By introducing the function g = g_1 g_2 ⋯ g_M we get the differential system Indeed, by considering that we easily obtain differential system (25) from (24).

Inverse Problem for Constrained Lagrangian Systems
The aim of this section is to provide a solution of the inverse problem of constrained Lagrangian mechanics, which can be stated as follows: determine, for a given natural mechanical system with N degrees of freedom, the most general field of force depending only on the positions and satisfying a given set of constraints which are linear in the velocities. The statement of the inverse problem for constrained Lagrangian systems is new. As we observe from Sect. 1 (see for instance Theorem 5), the solutions of the inverse problem in ordinary differential equations have a very high arbitrariness due to the undetermined functions which appear in them. To obtain more precise solutions we need additional conditions reducing this arbitrariness. In this section we obtain such additional conditions and derive the equations of motion of constrained Lagrangian mechanics.
One of the main objectives of this inverse problem is to study the behavior of nonholonomic systems with constraints linear in the velocity in a way different from the classical approach deduced from the d'Alembert-Lagrange principle or from the vakonomic approach; see for instance [24]. We explain this in more detail in Remark 20.
We shall introduce the notations and definitions that we need for presenting our applications of Theorem 5.
The following definitions can be found in [2].
A Lagrangian system is a pair (Q, L̃) consisting of a smooth manifold Q and a function L̃ : TQ → R. The point x ∈ Q denotes the position of the system, and we call each tangent vector ẋ ∈ T_x Q the velocity of the system at the point x. A pair (x, ẋ) is called a state of the system. In Lagrangian mechanics Q is usually called the configuration space, the tangent bundle TQ the phase space, L̃ the Lagrange function or Lagrangian, and the dimension N of Q the number of degrees of freedom.
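For concreteness, the Euler-Lagrange equations of a Lagrangian system can be produced symbolically. The following toy example (a harmonic oscillator, our own illustration rather than a system from the paper) takes Q = R and L̃ = ẋ²/2 − k x²/2:

```python
import sympy as sp

t = sp.symbols('t')
k = sp.symbols('k', positive=True)
x = sp.Function('x')(t)
xdot = sp.diff(x, t)

L = xdot**2 / 2 - k * x**2 / 2   # Lagrangian on TQ with Q = R

# Euler-Lagrange equation: d/dt (dL/d xdot) - dL/dx = 0
EL = sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x)

# this recovers Newton's equation xddot + k*x = 0
assert sp.simplify(EL - (sp.diff(x, t, 2) + k * x)) == 0
```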
The equations (26), at all points of Q except a set of zero Lebesgue measure, define M independent constraints for the Lagrangian system (Q, L̃), i.e. we require that the orbits (x(t), ẋ(t)) of the mechanical system satisfy (26). Let M* be the submanifold of TQ defined by Eq. (26), i.e.
We call the inverse problem for the constrained Lagrangian system the problem of determining, for a given constrained Lagrangian system (Q, L̃, M*), the field of force in such a way that the given submanifold M* is invariant under the flow of the second-order differential equations. We shall study the case when the constraints are linear in the velocities on M*, i.e.
Our first main result provides the equations of motion of a constrained mechanical system with Lagrangian function where T is a Riemannian metric on Q (the kinetic energy of the system), and M = N linear constraints given by where v = (v_1, …, v_N) is a given vector field.
In particular if we take where the functions are convenient functions of (x_1, …, x_N) for j = 1, …, N, then from (32) we obtain Taking the arbitrary functions λ_{N+j} and the corresponding multipliers as follows for j = 1, …, N, where G̃_{jn} = G̃_{jn}(x_1, …, x_N) are the elements of a symmetric positive definite matrix G̃, and We can write g_j as On the other hand, in view of the relations we finally deduce that Eq. (33) can be written as Lagrangian differential equations. After computations, and in view of the constraints (28), we finally obtain differential system (30). This completes the proof of the theorem.
In view of Newton's second law (acceleration is proportional to force; see for instance [34]), the right-hand sides of the equations of motion (30) are the generalized forces acting on the mechanical system, and they depend only on its position. Consequently the field of force F with these components is the most general field of force, depending only on the position of the natural mechanical system, which is constrained to move on the N-dimensional subset of the phase space given by (28). In short, the equations of motion (30) provide a complete answer to the inverse problem (i) when the constraints are given in the form (28). Now we want to solve the inverse problem (i) for the classical constraints We recall that the equations of motion of a constrained Lagrangian system with Lagrangian are where μ_j = μ_j(x, ẋ) are the Lagrange multipliers. As we can observe, the forces F̃ are composed of the potential forces with components −∂U/∂x_k and the reactive forces generated by the constraints with components Σ_{j=1}^{M} μ_j a_{jk} for k = 1, …, N. For more details see [2].
In short we have two sets of equations of motion: the ones given in (29), or equivalently (30), for constraints of type (28), and the classical ones given in (35) for the constraints (34). In order to solve problem (i) for the constraints (34), we establish the relationship between these two sets of equations. To do this we shall choose conveniently the vector field v which appears in (28).
In view of the constraints (28), we choose the remaining 1-forms, for j = M + 1, …, N, arbitrarily but in such a way that the determinant |ϒ| of the matrix ϒ = (a_{jk}) is nonzero; the ideal case is when this determinant is constant. In other words, the N 1-forms for j = 1, …, N are independent. Now we define the vector field v as where P = (0, …, 0, ν_{M+1}, …, ν_N)^T, and the functions ν_j = ν_j(x) are nonzero arbitrary functions, reflecting the arbitrariness of the 1-forms for j = M + 1, …, N.
Proof First we prove that the vector field (37) satisfies (38). Indeed, from the relation v(x) = S^{−1} P we get that Thus we obtain (38); consequently the vector field v satisfies the constraints. Now we show that the vector field v is the most general vector field satisfying these constraints. Let ṽ = (ṽ_1, …, ṽ_N) be another vector field satisfying the constraints, i.e.
Thus Proposition 17 is proved.
We define

Theorem 18 Let a constrained Lagrangian mechanical system be given, with configuration space Q, kinetic energy T given in (27), and constraints given by (28) with v = (v_1, …, v_N)^T given by (37).
The equations of motion of this system are Then the 2-form dσ admits the development Here we have used that the N 1-forms are independent and consequently form a basis of the space of 1-forms; hence their pairwise wedge products, for k, n = 1, …, N, form a basis of the space of 2-forms. From (44) we have that the inner product of the vector field v with dσ, i.e. ι_v dσ, satisfies where the matrix H is Again from (44) we have that Now from the last equality and (44) we have Clearly from these relations it follows that H v(x) = ϒ^T , hence here we have used the equality v(x) = ϒ^{−1} P.
Remark 19 Equations (42) define a system of first-order partial differential equations with unknown functions ν_{M+1}, …, ν_N (see (37), (40) and (42)). We observe that equations (42) can be rewritten as follows, for j = 1, …, N. The resulting field of force has the same structure as the field of forces determined in (35), but there are three important differences. First, the potential and reactive components in (50) are related through the vector field v (which itself is determined by the constraints), while in (35) the potential U is completely independent of the reactive forces with components Σ_{k=1}^{M} μ_k a_{kj}. Second, the multipliers in (50) depend only on the position of the mechanical system, while in (35) the Lagrange multipliers μ_j depend on the position and the velocity. Finally, system (41) was deduced from the Lagrangian differential system (29), while system (35) in general has no relation with the Lagrangian equations.
In the applications of Theorem 18 we determine the functions ν_{M+1}, …, ν_N as solutions of (42) together with the condition where h is a constant. Under the potential (51), the only difference between the field of force F̃ given in (35) and the field F given in (50) consists in the coefficients which determine the reactive forces.
The following two questions arise. First: do there exist solutions of Eqs. (42) and (51) such that the solutions of the differential system The second question is: what is the mechanical meaning of the differential equations generated by the vector field (37), i.e.
under the conditions (42), and of the differential equations Partial answers to these questions are given in the examples of the next section.

Examples
In this section we illustrate in some particular cases the relation between three mathematical models: (i) the classical model deduced from the d'Alembert-Lagrange principle (see (35)); (ii) the model deduced from the Lagrangian equations (29) (see (41)); and (iii) the model obtained from the first-order differential equations (52) under the conditions (42).
Example 21 Suslov problem. In this example we study the classical problem of nonholonomic dynamics formulated by Suslov [33]. We consider the rotational motion of a rigid body around a fixed point, subject to the nonholonomic constraint ⟨ã, ω⟩ = 0, where ω = (ω_1, ω_2, ω_3) is the angular velocity of the body, ã is a constant vector, and ⟨ , ⟩ is the scalar product. Suppose that the body rotates in a force field with potential U(γ) = U(γ_1, γ_2, γ_3). Applying the method of Lagrange multipliers we write the equations of motion (35) in the form where γ = (γ_1, γ_2, γ_3) = (sin z sin x, sin z cos x, cos z), (x, y, z) = (ϕ, ψ, θ) are the Euler angles, and I is the tensor of inertia. We observe that the equations γ̇ = γ ∧ ω are called the Poisson equations.
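One structural check on the Poisson equations γ̇ = γ ∧ ω: the quantity |γ|² is a first integral for any angular velocity ω, consistent with γ being a vector of direction cosines. A sympy sketch (the symbol names below are our own, chosen only for this verification):

```python
import sympy as sp

t = sp.symbols('t')
gamma = sp.Matrix([sp.Function(f'gamma{i}')(t) for i in (1, 2, 3)])
omega = sp.Matrix([sp.Function(f'omega{i}')(t) for i in (1, 2, 3)])

gdot = gamma.cross(omega)        # Poisson equations: gamma' = gamma x omega

# d/dt |gamma|^2 along solutions of the Poisson equations
dnorm = sp.diff((gamma.T * gamma)[0], t)
on_sol = dnorm.subs({sp.diff(gamma[i], t): gdot[i] for i in range(3)})

assert sp.simplify(on_sol) == 0  # |gamma|^2 is conserved
```

The conservation follows from γ · (γ × ω) = 0 for any ω.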
Using the constraint equation ⟨ã, ω⟩ = 0, the Lagrange multiplier μ can be expressed as a function of ω and γ as follows We shall suppose that ã = (0, 0, 1); consequently the constraint takes the form ω_3 = ẋ + ẏ cos z = 0.

Example 22
Nonholonomic Chaplygin systems. We illustrate Theorem 18 on nonholonomic Chaplygin systems. In many nonholonomic systems the generalized coordinates x_1, …, x_N can be chosen in such a way that the equations of the non-integrable constraints can be written in the form A constrained Chaplygin mechanical system is a mechanical system with Lagrangian L̃ = L̃(x_{M+1}, …, x_N, ẋ_1, …, ẋ_N), subject to M linear nonholonomic constraints (57) (see [20]). We shall solve the inverse problem for this constrained system when the Lagrangian function L̃ is given by (58). We determine the vector field (37) and the differential system (41) for the constrained Chaplygin-Lagrangian mechanical system with Lagrangian (58).
First we determine the 1-forms for j = 1, …, N. Taking we obtain that Thus |ϒ| = 1, and consequently Then the vector field (37) in this case generates the following differential equations Differential system (41) in this case can be written as Therefore we obtain

Example 23 We shall illustrate this case with the following system, which we call Gantmacher's system (see [12] for more details). Two material points m_1 and m_2 with equal masses are linked by a metal rod of fixed length l and small mass. The system can move only in the vertical plane, and the speed of the midpoint of the rod is directed along the rod. It is necessary to determine the trajectories of the material points m_1 and m_2.
Let (q_1, r_1) and (q_2, r_2) be the coordinates of the points m_1 and m_2. Introducing a suitable change of coordinates, we obtain a mechanical system with configuration space Q = R^4, where as usual g denotes the gravitational acceleration, and the constraints are The equations of motion (35) obtained from the d'Alembert-Lagrange principle are where μ_1, μ_2 are the Lagrange multipliers, which we determine as follows After the integration of (60) we obtain (for more details see [12]) where (ϕ, r) are the polar coordinates x_1 = r cos ϕ, x_2 = r sin ϕ, and f is a solution of the differential equation ḟ = −(2g/r) x_2. To construct the differential systems (52) and (37) we determine the 1-forms for j = 1, 2, 3, 4 as follows (see [28]) Here the first two are given by the constraints, and the last two are chosen so that the determinant |ϒ| becomes nonzero. By considering that in this case N = 4 and M = 2, from (49) we obtain Differential system (52) takes the form It is easy to show that the functions ν_3 and ν_4 given by where h is an arbitrary constant and g_3 is an arbitrary function of the variable x_1^2 + x_2^2, are solutions of (63); as a consequence, from relation (51) we have The solutions of (64), with ν_3 and ν_4 given in (65), are x_1 = r cos α, x_2 = r sin α, α = α_0 + g_3(r) t, where C, r, α_0, u_3^0 are arbitrary constants and g_3 is an arbitrary function of r^2. Comparing these solutions with the solutions obtained from (62), we observe that they coincide. We note that we have obtained the trajectories of the masses m_1 and m_2 by solving the first-order differential equations (64) with the functions (65).
Finally we observe that for the Gantmacher system, system (41) takes the form and admits as solutions the ones given in (66) (see Remark 20).

Remark 24
These examples give a partial answer to the questions stated in Remark 20: the differential equations generated by the vector field (37) under the conditions (42) can be applied to study the behavior of nonholonomic systems with constraints linear in the velocity, at least for a certain class of such systems. Is it possible to apply this mathematical model to describe the behavior of nonholonomic systems with linear constraints in the velocity in general? For the moment we have no answer to this question.

Inverse Problem in Dynamics. Generalized Dainelli Inverse Problems
We now consider a mechanical system with configuration space Q of dimension N and kinetic energy T given by (27). The problem of determining the most general field of force, depending only on the position of the system, for which the curves defined by (68) are orbits of the mechanical system, is called the generalized Dainelli inverse problem in dynamics. If we assume that the given family of curves (68) admits a family of orthogonal hypersurfaces S = S(x) = c_N, then this problem is called the generalized Dainelli-Joukovski inverse problem.
If the field of force in the generalized Dainelli inverse problem is potential, then this problem coincides with Suslov's inverse problem, i.e. the inverse problem of Celestial Mechanics, and the generalized Dainelli-Joukovski inverse problem coincides with the Joukovski problem (for more details see [28]).
The solutions of the generalized Dainelli problem for N = 2, and of the Joukovski problems for N = 2, 3, can be found in [6,11,14,37]. A complete solution of the Suslov problem can be found in [32], but this solution is in general complicated to implement.
The following result provides a solution of these inverse problems.
is an arbitrary function. Clearly the F_j are the components of the most general field of force, depending only on the position, under which a given (N − 1)-parameter family of curves (68) can be described as orbits of the mechanical system.
, then the family of curves (68) can be freely described by a mechanical system under the influence of forces derived from
(c) If we assume that the given family of curves (68) admits the family of orthogonal hypersurfaces S = S(x) = c_N defined by then the most general field of force depending only on the position of the system, under which the given family of curves is formed by orbits of (69), is where ν = ν(x) is an arbitrary function on Q. If we choose ν and h = h(f_1, …, f_{N−1}) satisfying the first-order partial differential equation then the field of force F is given by the potential If (68) is such that f_j = x_j for j = 1, …, N − 1, then (74) takes the form
We note that statement (a) of Theorem 25 provides the answer to the generalized Dainelli inverse problem, which previously was solved only for N = 2 by Dainelli. Statement (b) of Theorem 25 gives a simpler solution of Suslov's inverse problem, already solved by Suslov himself. Statement (c) of Theorem 25 provides the answer to the generalized Dainelli-Joukovski problem, solved by Joukovski in the case when the field of force is potential and N = 2, 3. Finally, statement (d) of Theorem 25 is the well-known Thomson Theorem (see [15]) in our context.
Proof of Theorem 25. In this case we compute the vector field (37). Condition (42) now takes the form ẋ_N = A_{NN} ν_N = 0. Since the matrix A is skew-symmetric, A_{NN} = 0. On the other hand, from ẋ_j = A_{Nj} ν_N for j = 1, …, N − 1, we deduce the form that system (41) takes. From these relations we obtain the proof of statement (a) of the theorem. Statement (b) follows trivially from the previous result. The proof of statement (c) follows by observing that, under the assumption (71), the 1-form σ associated to (43) satisfies dσ = dν ∧ dS, where the functions of (x_1, …, x_N) that appear are arbitrary; consequently, from (45), after some computations we deduce the field of force F, which from (47) can be written in the stated form. Hence we obtain (72).
If the curve is given by the intersection of the hyperplanes x_j − c_j = 0 for j = 1, …, N − 1, then condition (71) can be rewritten in terms of G̃, the inverse matrix of the matrix G.
Solving these equations with respect to ∂S/∂x_k for k = 1, …, N − 1, and using the resulting relations together with (77), after some computations, taking into account that ∑_{n=1}^{N} L_n G̃_{Nn} = |G|, we obtain the corresponding expression for Eq. (73). From (78) we obtain the form taken by the potential function V. We observe that if G̃_{αN} = 0 for α = 1, …, N − 1, then |G| = G_{NN} and S = x_N = c_N defines a family of hyperplanes orthogonal to the hyperplanes x_j − c_j = 0 for j = 1, …, N − 1. After integrating (79) we obtain the potential, in which the functions of (x_1, …, x_{N−1}) that appear are arbitrary.
The proof of statement (c) follows. Now we prove statement (d). We use the homotopy formula L_v = ι_v d + d ι_v, see [13]. Condition (70), in view of (46), is equivalent to the assertion that σ is a function of t. In view of the well-known formula (see [15]) for L_v σ, we obtain that İ = 0. In short, Theorem 25 is proved. In the following two sections we illustrate statements (b) and (c) of Theorem 25.
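The two standard facts of Cartan calculus used in the proof of statement (d) can be recorded as follows (a summary of classical formulas, not a new computation):

```latex
% Cartan's homotopy formula for a vector field v:
\mathcal{L}_v = \imath_v \circ d + d \circ \imath_v .
% Transport of circulation along a closed curve \gamma_t moved by the flow of v:
\frac{d}{dt} \oint_{\gamma_t} \sigma \;=\; \oint_{\gamma_t} \mathcal{L}_v \sigma .
% Hence, if \mathcal{L}_v \sigma is exact, the circulation
% I = \oint_{\gamma_t} \sigma satisfies \dot{I} = 0:
% this is Thomson's (Kelvin's) circulation theorem.
```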

Generalized Inverse Bertrand's Problem
For a particle with kinetic energy T = (ẋ² + ẏ²)/2 we determine the most general field of force F = (F_x, F_y) capable of generating the family of planar orbits f(x, y) = const. From (69) we obtain for N = 2 the corresponding equation. This field of force coincides with the solution of Dainelli's problem given in [37]. Clearly, if the arbitrary function ν which appears in the expression of F is chosen as a solution of the appropriate equation, then the vector field F is potential, with an explicit potential function. In order to apply this result we prove that the potential-energy function U capable of generating a one-parameter family of conics r + bx = c, where r = √(x² + y²), takes the stated form when b ≠ 0, where the a_j and K_j are real constants and the H_j, for j ∈ Z, are functions determined up to arbitrary constants C_j, with τ = cos θ and ′ = d/dτ; when b = 0, the function of τ that appears and h = h(r) are arbitrary (see [28]). Indeed, from (80) it follows that the required potential field of force exists if and only if a certain equation holds. In polar coordinates x = r cos θ, y = r sin θ this equation takes the form indicated, where f = r(1 + bτ), τ = cos θ. Now we study the case when b ≠ 0 and h admits an expansion with real coefficients ν_j, j ∈ Z, with λ determined accordingly. It is clear that we are assuming that the series (83) and (84) converge. Inserting (83) and (84) into (82), we obtain equations whose general solutions are the functions (81).
Consequently the required potential function U must take the indicated forms for j = −1 and for the remaining j. We study the subcase b = 1 separately from the subcase b ≠ 1.
for j = −1, and one easily verifies the stated identities. If b ≠ 1 and b ≠ 0, it is easy to prove the corresponding relations. Under these conditions the potential function U takes the indicated form. Summarizing the above computations, we deduce that if b ≠ 0 the function U becomes the stated expression, where α is a constant and β = β(τ) is a suitable function.
If b = 0, then f = r and condition (82) simplifies; solving it, or an equivalent form of it, introduces an arbitrary function. Hence the result follows.
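As a numerical sanity check on the family of conics r + bx = c discussed above, one can integrate the Newtonian (inverse-square) field, which is the classical generator of this family, and verify that r + ex remains constant along an orbit of eccentricity e. A sketch with a hand-rolled RK4 integrator; all numerical values (μ = 1, p = 1, e = 0.3) are our own test data, not from the text:

```python
import math

# Kepler field F = -x/r^3: its orbits are conics r(1 + e*cos(theta)) = p,
# i.e. members of the family r + b*x = c with b = e and c = p.

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def rk4_step(state, dt):
    def deriv(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y)
        return [vx, vy, ax, ay]
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

e, p = 0.3, 1.0
# Start at periapsis r_p = p/(1+e) with speed v_p = h/r_p = 1 + e,
# since the angular momentum is h = sqrt(mu * p) = 1 for mu = p = 1.
state = [p / (1 + e), 0.0, 0.0, 1 + e]
dt = 1e-3
for _ in range(5000):
    state = rk4_step(state, dt)
    x, y = state[0], state[1]
    r = math.hypot(x, y)
    assert abs(r + e * x - p) < 1e-6   # orbit stays on the conic r + e*x = p
```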

From (86) it follows that the metric G is diagonal with
The inverse Stäckel problem is the problem of determining the potential field of force under which any curve of the family (85) is a trajectory of the mechanical system. The solution is as follows (see [28]).

Proposition 26 For a mechanical system with configuration space Q and kinetic energy (86), the potential field of force F = ∂V/∂x for which the curves of the family (85) are trajectories is determined by a function S such that the hypersurface S − c_N = 0 is orthogonal to the given hypersurfaces f_j − c_j = 0.
We observe that from (87) and (88) it follows that the metric G and potential function U can be determined from the given functions (85).
Proof After some tedious computations we get the stated equality; hence, in view of identity (viii) of the Nambu bracket, we obtain for j = 1, …, N − 1 the relations from which the orthogonality of the hypersurfaces follows. On the other hand, the stated equalities hold; here we used identity (iv) of the Nambu bracket, the differential involved being of the separated type ∑_{k=1}^{N} (·)_k(x_k) dx_k. We observe that if we choose ν = ν(S), then from (73) we obtain that the field of force which generates the given family of orbits (85) is potential, with potential function given by (88). In particular, if ν = 1 and h_0 = α_N, then we obtain the classical Stäckel potential (see [5]).
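For comparison, the classical Stäckel construction invoked at the end of the proof (see [5]) can be summarized as follows; the notation below is the standard one and serves only as an illustration, so it need not match (86)-(88) literally:

```latex
% Stackel matrix: \Phi(q) = (\varphi_{jk}(q_j)), the j-th row depending on
% q_j only, assumed invertible.  For arbitrary functions f_j = f_j(q_j), set
H_k \;=\; \sum_{j=1}^{N} \bigl(\Phi^{-1}\bigr)_{kj}
      \Bigl( \tfrac{1}{2}\, p_j^{2} + f_j(q_j) \Bigr),
\qquad k = 1, \dots, N .
% Then H_1, \dots, H_N are independent and pairwise in involution, H = H_1
% separates in the coordinates (q, p), and the potential
% U = \sum_{j} (\Phi^{-1})_{1j} f_j(q_j) is the classical Stackel potential.
```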
Example 27 (Joukovski's example) We study a mechanical system with three degrees of freedom. Denoting x_1 = p, x_2 = q, x_3 = r, we consider the mechanical system with the corresponding kinetic energy. Clearly, if a_j = 0 for j ∈ Z, then we get the potential also obtained by Joukovski in [14]. On the other hand, if a_j = 0 for j ∈ Z \ {2} and 4a_2 = a, then we obtain the potential V = az^4 − h_0 given in [28].

Inverse Problem for Constrained Hamiltonian Systems
Constrained Hamiltonian systems arise in many fields, for instance in multibody dynamics or in molecular dynamics. The theory of such systems was mainly developed by Dirac (see for instance [7]). For general references on constrained dynamics see [30].
The inverse problem for constrained Hamiltonian systems can be stated as follows: for a given submanifold M of a symplectic manifold M, determine the differential systems having M invariant under their flows.
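The invariance requirement in this inverse problem admits a direct computational check: a vector field W leaves a level set M = {f = c} invariant precisely when the directional derivative W f = ⟨W, ∇f⟩ vanishes on M. A minimal sketch in Python, using the planar oscillator Hamiltonian as an illustrative example of our own (not taken from the text):

```python
# Invariance check: a vector field W leaves M = {f = c} invariant
# iff the directional derivative W.grad(f) vanishes on M.
# Illustrative example: f = H = (x^2 + y^2)/2 and the Hamiltonian
# field W = (dH/dy, -dH/dx) = (y, -x), for which W.grad(f) = 0.

def grad(f, p, h=1e-6):
    """Central finite-difference gradient of f at the point p."""
    g = []
    for k in range(len(p)):
        q_plus = list(p); q_plus[k] += h
        q_minus = list(p); q_minus[k] -= h
        g.append((f(q_plus) - f(q_minus)) / (2 * h))
    return g

def directional_derivative(W, f, p):
    """W f (p) = <W(p), grad f(p)>: zero on M iff M is invariant."""
    return sum(w * g for w, g in zip(W(p), grad(f, p)))

H = lambda p: 0.5 * (p[0] ** 2 + p[1] ** 2)   # the first integral
W = lambda p: [p[1], -p[0]]                    # Hamiltonian field of H

for point in [[1.0, 0.0], [0.3, -1.7], [2.0, 2.0]]:
    assert abs(directional_derivative(W, H, point)) < 1e-8
```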
We find the equations of motion of a constrained Hamiltonian system in the following cases: (i) The given constraints are l first integrals with l ∈ [dim M/2, dim M). In particular the differential equations obtained solving this inverse problem are Hamiltonian only if the first integrals are in involution. (ii) The given constraints are M < dim M/2 partial integrals. We deduce the differential equations which can be interpreted as a normal form of the equations of motion of a nonholonomic system with nonlinear constraints with respect to the momenta.
We observe that these two statements of the inverse problem for constrained Hamiltonian systems are new. Now we consider a 2N-dimensional smooth manifold M with local coordinates (x, y) = (x_1, …, x_N, y_1, …, y_N). Let Ω_2 be a closed non-degenerate 2-form; then (M, Ω_2) is a symplectic manifold. Let H : M → R be a smooth function, and let M be a submanifold of M.
The quadruple (M, Ω_2, M, H) is called a constrained Hamiltonian system (see [2]). We essentially study two inverse problems for constrained Hamiltonian systems: in the first, the submanifolds M are obtained by fixing the values of the given first integrals; in the second, these submanifolds are defined by the hypersurfaces given by partial integrals. Now we can formulate the inverse problem for constrained Hamiltonian systems: we want to determine the vector fields W with components (W_1, …, W_{2N}), functions of (x, y), such that the submanifold M is invariant under the flow of the differential system (92), where {·, ·}* is the Poisson bracket. In this chapter we solve this inverse problem. We note that if W_k = 0 for k = 1, …, N, then Eqs. (92) are the standard Hamiltonian equations for a mechanical system under the action of an external force with components W_{N+1}, …, W_{2N}.
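To illustrate the closing remark, that (92) with W_k = 0 for k = 1, …, N reduces to Hamiltonian equations driven by an external force (W_{N+1}, …, W_{2N}), take the planar oscillator H = (x² + y²)/2 with a friction force W = −γy (an illustrative choice of ours, not from the text). Along solutions of ẋ = y, ẏ = −x − γy one has Ḣ = xẋ + yẏ = −γy², i.e. the energy changes at exactly the rate the external force does work:

```python
# Oscillator with external force W = -gamma*y appended to the momentum
# equation: xdot = y, ydot = -x - gamma*y.  Pointwise energy balance:
# dH/dt = x*xdot + y*ydot = -gamma*y**2, the power of the external force.

gamma = 0.25
H_dot = lambda x, y: x * y + y * (-x - gamma * y)   # dH/dt along the flow
power_of_W = lambda x, y: (-gamma * y) * y          # W * xdot with xdot = y

for x, y in [(1.0, 0.0), (0.5, -2.0), (-1.3, 0.7)]:
    assert abs(H_dot(x, y) - power_of_W(x, y)) < 1e-12
    assert H_dot(x, y) <= 0.0   # friction only dissipates energy
```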

Hamiltonian System with Given First Integrals
We have the following result.

Theorem 28 Let (M, Ω_2, M_1, H) be a constrained Hamiltonian system and let f_j = f_j(x, y) for j = 1, …, N be a given set of independent functions defined in M. Assume that the stated condition holds; then the manifold M_1, where the c_j for j = 1, …, N are arbitrary constants, is invariant under the flow of the corresponding differential system; moreover, the submanifold M_1 is invariant under the flow of the differential system ẋ_k = {H, x_k}*, for k = 1, …, N − 1, where λ = λ(x, y) is an arbitrary function.
Proof of Theorem 28. Under the assumptions of Corollary 8, taking the N of the corollary as 2N, introducing the notation y_j = x_{N+j}, and choosing g_{N+j} = x_j for j = 1, …, N, we obtain that the differential system (13) takes the stated form for j = 1, 2, …, N. These equations are the most general differential equations which admit N independent first integrals and satisfy the stated condition. The proof of Theorem 28 is obtained by choosing the arbitrary functions λ_{N+j} as λ_{N+j} = {H, x_j}*, where H is the Hamiltonian function, for j = 1, …, N. From identity (xi) of the Nambu bracket with G = y_k and f_{N+j} = x_j for j = 1, …, N, we obtain that differential system (97) can be rewritten accordingly. Clearly, if the first integrals are in involution and W_j = 0, then we recover the Hamiltonian equations, and differential system (97) can be rewritten once more. Therefore, by choosing λ_{2N} as λ_{2N} = {H, y_1}* + λ{f_1, …, f_N, x_1, …, x_{N−1}, y_1}, we get the differential system (96).
In view of identity (vii) with G = f_j, from (96) we obtain the stated relations.

Example 29 (Neumann-Moser integrable systems) We illustrate these theorems with the Neumann-Moser integrable system. We study the case when we have N independent involutive first integrals of the form (98) for ν = 1, …, N, where A, B and C are constants such that C(A² + B²) ≠ 0. Thus we study the constrained Hamiltonian system (R^{2N}, Ω_2, M, H). The cases A = 0, B = 1, C = 1 and A = 1, B = 0, C = 1 were studied in particular in [18]; the case AB ≠ 0 was introduced in [27]. In particular, if C = (A + B)², then from (98) we obtain a simplified form of the f_ν, and it is easy to show that the corresponding bracket relations hold.

These equations are the most general differential equations which admit N + r first integrals satisfying the condition {f_1, …, f_{N+r}, x_1, …, x_{N−r}} ≠ 0. By choosing in (92) the arbitrary functions W_j = 0 and λ_{N+j} = {H, x_j}* for j = 1, …, N − r, where H is the Hamiltonian, and by using identity (xi) of the Nambu bracket with G = x_k, f_{N+r+j} = x_j for j = 1, …, N − r, and with G = y_k, f_{N+r+j} = x_j for j = 1, …, N − r, we obtain that differential system (101) can be rewritten accordingly.

Theorem 32 If a Hamiltonian system has N independent first integrals in involution which can be solved with respect to the momenta, then its motion can be obtained by quadratures; that is, the equations of motion can be solved simply by evaluating integrals.

In general the given set of first integrals is not necessarily in involution. The solution of the inverse problem for constrained Hamiltonian systems shows that in this case the differential equations which have the submanifold M_1 invariant are in general not Hamiltonian. The origin of the theory of noncommutative integration is Nekhoroshev's Theorem (see [21]). The following result holds (see [15]).
Theorem 33 If a Hamiltonian system with N degrees of freedom has N + r independent first integrals f_j for j = 1, …, N + r, such that the first integrals f_1, …, f_{N−r} are in involution with all the integrals f_1, …, f_{N+r}, then the Hamiltonian system is integrable by quadratures.
If f 1 , f 2 , . . . , f N −r are the first integrals which are in involution with all the first integrals and H = H ( f 1 , f 2 , . . . , f N −r ), then the differential system (100) is Hamiltonian and is integrable by quadratures.
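Returning to Example 29: in the classical Neumann case the first integrals can be taken in the well-known Uhlenbeck form F_ν = x_ν² + ∑_{μ≠ν} (x_ν y_μ − x_μ y_ν)²/(a_ν − a_μ), which are pairwise in involution. The following sketch checks this involutivity with a finite-difference Poisson bracket; the constants a_ν and the sample point are arbitrary test data of ours, and this parametrization need not coincide with (98):

```python
# Uhlenbeck integrals of the classical Neumann system on R^{2N}:
#   F_k = x_k^2 + sum_{j != k} (x_k*y_j - x_j*y_k)^2 / (a_k - a_j).
# They are pairwise in involution for the canonical Poisson bracket.

N = 3
a = [1.0, 2.0, 3.5]

def F(k, z):
    x, y = z[:N], z[N:]
    s = x[k] ** 2
    for j in range(N):
        if j != k:
            s += (x[k] * y[j] - x[j] * y[k]) ** 2 / (a[k] - a[j])
    return s

def poisson(f, g, z, h=1e-5):
    """Canonical bracket {f, g} = sum_k (df/dx_k dg/dy_k - df/dy_k dg/dx_k),
    with central finite differences for the partial derivatives."""
    def d(fun, i):
        zp = list(z); zp[i] += h
        zm = list(z); zm[i] -= h
        return (fun(zp) - fun(zm)) / (2 * h)
    return sum(d(f, k) * d(g, N + k) - d(f, N + k) * d(g, k)
               for k in range(N))

z = [0.4, -1.1, 0.8, 0.3, 0.9, -0.6]   # an arbitrary point (x, y)
for k in range(N):
    for m in range(N):
        assert abs(poisson(lambda w: F(k, w), lambda w: F(m, w), z)) < 1e-5
```

Note also the classical identity ∑_ν F_ν = ∑_ν x_ν², which follows from the antisymmetry of the denominators a_ν − a_μ.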