On Simultaneous Differential Equations of the First Order in Which the Number of the Variables Exceeds by More Than One the Number of the Equations
Author: George Boole
Year: 1862
Journal: Philosophical Transactions of the Royal Society of London, Vol. 152 (19 pages)
XXX. On Simultaneous Differential Equations of the First Order in which the Number of the Variables exceeds by more than one the Number of the Equations. By George Boole, F.R.S., Professor of Mathematics in Queen's College, Cork.
Received June 19.—Read June 19, 1862.
It is a fundamental proposition of analysis that a system of $n$ differential equations of the first order containing $n+1$ variables admits of $n$ integrals, each of which is expressed by a function of the variables equated to an arbitrary constant.
But when a system of $n$ differential equations of the first order connects $n+r$ variables, $r$ being greater than unity, no existing theory assigns in a general manner the number of theoretically possible integrals of the above species, or shows us how to discover them. Yet such cases are of great importance.
I wish to develope here the theory of a method for the solution of the above classes of equations, which was published by me in the 'Proceedings of the Royal Society' for March 6th of the present year, and which enables us to assign the number of theoretically possible integrals, and to reduce their discovery to the solution of a system of simultaneous differential equations equal in number to the number of integrals, and expressible as exact differential equations.
The solution of the problem as thus reduced may be effected by known methods, but I have thought it desirable to discuss this part of the subject also in direct sequence to the other, and in conformity with its method.
Of the Connexion between ordinary and partial Differential Equations.
It has been found convenient, in researches bearing upon the general theory of differential equations, to use the term 'integral' in two distinct senses, viz. to denote, as above, a relation satisfying the differential equation or system of equations, and expressed by the equating of a function of the variables to a constant, and to denote the function itself. The particular sense intended will always be shown by the connexion.
With this convention two systems of differential equations will be said to be equivalent when they have in either of the above senses (and the one implies the other) the same system of integrals. This will explain the meaning of the following proposition.
Proposition I.—A system of $n$ ordinary differential equations of the first order connecting $n+r$ variables may be converted into an equivalent system of $r$ linear partial differential equations of the first order.
Let $x_1, x_2, \ldots x_{n+r}$ be the variables; then the supposed given system of differential equations may, by algebraic solution with respect to \(dx_1, dx_2, \ldots, dx_n\), be reduced to the form
\[
\begin{align*}
dx_1 &= A_{11} dx_{n+1} + A_{12} dx_{n+2} + \cdots + A_{1r} dx_{n+r}, \\
dx_2 &= A_{21} dx_{n+1} + A_{22} dx_{n+2} + \cdots + A_{2r} dx_{n+r}, \\
&\vdots \\
dx_n &= A_{n1} dx_{n+1} + A_{n2} dx_{n+2} + \cdots + A_{nr} dx_{n+r},
\end{align*}
\]
(I.)
the coefficients \(A_{11}, A_{12}, \ldots\) being functions of the variables.
Let \(P = c\) be an integral of the system. Then
\[
\frac{dP}{dx_1} dx_1 + \frac{dP}{dx_2} dx_2 + \cdots + \frac{dP}{dx_{n+r}} dx_{n+r} = 0.
\]
Substituting in this equation the values of \(dx_1, dx_2, \ldots, dx_n\) given by (I.), we have
\[
\begin{align*}
\left( \frac{dP}{dx_{n+1}} + A_{11} \frac{dP}{dx_1} + A_{21} \frac{dP}{dx_2} + \cdots + A_{n1} \frac{dP}{dx_n} \right) dx_{n+1} \\
+ \left( \frac{dP}{dx_{n+2}} + A_{12} \frac{dP}{dx_1} + A_{22} \frac{dP}{dx_2} + \cdots + A_{n2} \frac{dP}{dx_n} \right) dx_{n+2} \\
\vdots \\
+ \left( \frac{dP}{dx_{n+r}} + A_{1r} \frac{dP}{dx_1} + A_{2r} \frac{dP}{dx_2} + \cdots + A_{nr} \frac{dP}{dx_n} \right) dx_{n+r} = 0.
\end{align*}
\]
As the differentials \(dx_{n+1}, dx_{n+2}, \ldots, dx_{n+r}\) are now independent, we have, on equating their coefficients separately to 0,
\[
\begin{align*}
\frac{dP}{dx_{n+1}} + A_{11} \frac{dP}{dx_1} + A_{21} \frac{dP}{dx_2} + \cdots + A_{n1} \frac{dP}{dx_n} &= 0, \\
\frac{dP}{dx_{n+2}} + A_{12} \frac{dP}{dx_1} + A_{22} \frac{dP}{dx_2} + \cdots + A_{n2} \frac{dP}{dx_n} &= 0, \\
&\vdots \\
\frac{dP}{dx_{n+r}} + A_{1r} \frac{dP}{dx_1} + A_{2r} \frac{dP}{dx_2} + \cdots + A_{nr} \frac{dP}{dx_n} &= 0,
\end{align*}
\]
(II.)
a system of \(r\) linear partial differential equations, the common integrals of which will be the integrals of the system (I.). We say ‘the common integrals of which,’ because in fact these equations express the conditions to all of which \(P\) must be subject in order that \(P = c\) may be an integral of the system (I.).
The formal connexion between the systems (I.) and (II.) deserves to be carefully noticed. The several partial differential equations of the system (II.) may be formed by inspection from the columns in the right-hand member of the system (I.) by the following rule. For the differential \(dx_{n+i}\) in any column write the differential coefficient \(\frac{dP}{dx_{n+i}}\), to this add the series of differential coefficients \(\frac{dP}{dx_1}, \frac{dP}{dx_2}, \ldots, \frac{dP}{dx_n}\) multiplied in succession by the descending coefficients of the column, and equate the final result to 0.
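The rule lends itself to mechanical verification. The following sketch (an editorial illustration in SymPy; the single equation $dx_1 = x_3\,dx_2 + x_2\,dx_3$ and its integral are chosen for the example, not taken from the text) forms the two partial differential equations for the case $n = 1$, $r = 2$ and verifies them on an integral:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Hypothetical system (n = 1, r = 2):  dx1 = A11*dx2 + A12*dx3
A11, A12 = x3, x2

# Rule of Prop. I: each column of coefficients yields one linear PDE,
#   dP/dx_{n+i} + (column coefficients) * dP/dx_1, ..., dP/dx_n = 0
def Delta1(P):  # operator formed from the dx2 column
    return sp.diff(P, x2) + A11 * sp.diff(P, x1)

def Delta2(P):  # operator formed from the dx3 column
    return sp.diff(P, x3) + A12 * sp.diff(P, x1)

# P = x1 - x2*x3 gives dP = dx1 - x3*dx2 - x2*dx3, so P = c is an
# integral of the system, and both partial differential equations vanish:
P = x1 - x2 * x3
print(Delta1(P), Delta2(P))   # 0 0
```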
The symmetrical form of the equation
\[
\frac{dP}{dx_1} dx_1 + \frac{dP}{dx_2} dx_2 + \cdots + \frac{dP}{dx_{n+r}} dx_{n+r} = 0
\]
shows that by an exactly similar rule any system of \( n \) partial differential equations of the first order, the terms of which consist of the differential coefficients of \( P \) multiplied by functions of the independent variables \( x_1, x_2, \ldots x_{n+r} \), may be converted into an equivalent system of \( r \) common differential equations of the first order.
For the proposed system of partial differential equations is by algebraic reduction expressible in the form
\[
\begin{align*}
\frac{dP}{dx_1} &= A_{11} \frac{dP}{dx_{n+1}} + A_{12} \frac{dP}{dx_{n+2}} + \cdots + A_{1r} \frac{dP}{dx_{n+r}}, \\
\frac{dP}{dx_2} &= A_{21} \frac{dP}{dx_{n+1}} + A_{22} \frac{dP}{dx_{n+2}} + \cdots + A_{2r} \frac{dP}{dx_{n+r}}, \\
\vdots & \quad \vdots \\
\frac{dP}{dx_n} &= A_{n1} \frac{dP}{dx_{n+1}} + A_{n2} \frac{dP}{dx_{n+2}} + \cdots + A_{nr} \frac{dP}{dx_{n+r}}.
\end{align*}
\]
(III.)
If the values of \( \frac{dP}{dx_1}, \frac{dP}{dx_2}, \ldots \frac{dP}{dx_n} \) in this system be substituted in the previous general equation, and the coefficients of the differential coefficients \( \frac{dP}{dx_{n+1}}, \frac{dP}{dx_{n+2}}, \ldots \frac{dP}{dx_{n+r}} \) in the result be separately equated to 0, we shall have
\[
\begin{align*}
dx_{n+1} + A_{11} dx_1 + A_{21} dx_2 + \cdots + A_{n1} dx_n &= 0, \\
dx_{n+2} + A_{12} dx_1 + A_{22} dx_2 + \cdots + A_{n2} dx_n &= 0, \\
\vdots & \quad \vdots \\
dx_{n+r} + A_{1r} dx_1 + A_{2r} dx_2 + \cdots + A_{nr} dx_n &= 0.
\end{align*}
\]
(IV.)
These equations may, in like manner be formed by inspection from the columns of the second member of (III.), by writing for \( \frac{dP}{dx_{n+i}} \) in any column \( dx_{n+i} \), adding to this \( dx_1, dx_2, \ldots dx_n \) multiplied by the descending coefficients of the column, and equating the final sum to 0. The rule for the one case differs from that for the other only in that differentials take the place of differential coefficients.
As an objection may be felt to the legitimacy of that step of the above process in which, the differential coefficients \( \frac{dP}{dx_1}, \frac{dP}{dx_2}, \ldots \frac{dP}{dx_n} \) being eliminated, the coefficients of the remaining ones are separately equated to 0, I will point out another mode of procedure which leads to the same result, and which is founded upon Lagrange's method of solution. Let the equations of the system (III.) be added together after having been multiplied respectively by \( \lambda_1, \lambda_2, \ldots \lambda_n \), which are to be regarded as indeterminate functions of the variables \( x_1, x_2, \ldots x_{n+r} \). The result will be a linear partial differential equation of the first order, of which the Lagrangian auxiliary system of ordinary differential equations will be
\[
\frac{dx_1}{\lambda_1} = \frac{dx_2}{\lambda_2} = \cdots = \frac{dx_n}{\lambda_n} = \frac{-dx_{n+1}}{A_{11} \lambda_1 + \cdots + A_{n1} \lambda_n} = \cdots = \frac{-dx_{n+r}}{A_{1r} \lambda_1 + \cdots + A_{nr} \lambda_n}.
\]
Hence, eliminating \( \lambda_1, \lambda_2, \ldots \lambda_n \), or more strictly speaking the ratios
\[
\frac{\lambda_1}{\lambda_n}, \frac{\lambda_2}{\lambda_n}, \ldots \frac{\lambda_{n-1}}{\lambda_n},
\]
we have
\[
\begin{align*}
A_{11} dx_1 + \cdots + A_{n1} dx_n &= -dx_{n+1}, \\
&\vdots \\
A_{1r} dx_1 + \cdots + A_{nr} dx_n &= -dx_{n+r},
\end{align*}
\]
which agrees with the system (IV.).
**Of the Determination of the Number of Integrals of a system of Differential Equations of the First Order and Degree.**
We still suppose the given system of differential equations to be expressed under the general form (I.), and reduced by Prop. I. to the equivalent partial differential system (II.).
Now if for the expression of that system we introduce a series of symbols \( \Delta_1, \Delta_2, \ldots, \Delta_r \) defined as follows, viz.
\[ \Delta_i = \frac{d}{dx_{n+i}} + A_{1i} \frac{d}{dx_1} + A_{2i} \frac{d}{dx_2} + \cdots + A_{ni} \frac{d}{dx_n}, \]
(1.)
the system will assume the form
\[ \Delta_1 P = 0, \Delta_2 P = 0, \ldots, \Delta_r P = 0, \]
and we shall now establish the following proposition.
**Proposition II.**—*If* \( \Delta_i P = 0, \Delta_j P = 0 \) *represent any two linear partial differential equations of the system* (II.), *then will the equation of which the symbolical expression is*
\[ (\Delta_i \Delta_j - \Delta_j \Delta_i)P = 0 \quad \ldots \quad (3.) \]
also be a linear partial differential equation of the first order, and it will be satisfied by all the common integrals of the two equations from which it is formed.
For, representing any one of the quantities \( x_1, x_2, \ldots, x_{n+r} \) by \( x \), and any function of those quantities by \( X \), \( \Delta_i \) consists of a series of terms of the form \( X \frac{d}{dx} \). Again, representing any one of the same series of quantities \( x_1, x_2, \ldots, x_{n+r} \) by \( y \), and any function of them by \( Y \), \( \Delta_j \) will consist of terms of the form \( Y \frac{d}{dy} \). Hence \( (\Delta_i \Delta_j - \Delta_j \Delta_i)P \) will consist of terms of the form
\[ \left( X \frac{d}{dx} Y \frac{d}{dy} - Y \frac{d}{dy} X \frac{d}{dx} \right) P. \]
Effecting the differentiations, this term becomes
\[ X \frac{dY}{dx} \frac{dP}{dy} + XY \frac{d^2P}{dydx} - Y \frac{dX}{dy} \frac{dP}{dx} - YX \frac{d^2P}{dydx}, \]
or
\[ X \frac{dY}{dx} \frac{dP}{dy} - Y \frac{dX}{dy} \frac{dP}{dx}, \]
which involves only the first differential coefficients of \( P \). Hence (3.) will be a linear partial differential equation of the first order.
Hence also the ultimate form of (3.) will be the same as if \( \Delta_i \), when operating on \( \Delta_j P \), operated only on the coefficients \( A_{1j}, \ldots, A_{nj} \) involved in \( \Delta_j \), and *vice versa*. Thus the
ultimate form of (3.) will be
\[(\Delta_i A_{1j} - \Delta_j A_{1i}) \frac{dP}{dx_1} + (\Delta_i A_{2j} - \Delta_j A_{2i}) \frac{dP}{dx_2} + \ldots + (\Delta_i A_{nj} - \Delta_j A_{ni}) \frac{dP}{dx_n} = 0. \quad \ldots \quad (4.)\]
Secondly, the equation \((\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0\) will be satisfied by any common integral of \(\Delta_i P = 0\) and \(\Delta_j P = 0\).
For let \(f = c\) be a common integral of the latter equations. Then, identically,
\[\Delta_i f = 0, \quad \Delta_j f = 0;\]
therefore, since \(\Delta_j\) and \(\Delta_i\) involve only operations of differentiation together with algebraic ones,
\[\Delta_j \Delta_i f = 0, \quad \Delta_i \Delta_j f = 0;\]
\[\therefore \quad \Delta_i \Delta_j f - \Delta_j \Delta_i f = 0;\]
whence (3.) is also identically satisfied.
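The cancellation of the second differential coefficients may also be checked symbolically. In the following sketch (an editorial illustration; the coefficient functions are arbitrarily chosen, not Boole's) the commutator of two such operators is computed and compared with the first-order form derived above:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
P = sp.Function('P')(x1, x2, x3)

# Two hypothetical operators of the kind occurring in system (II.),
# with arbitrarily chosen coefficient functions A1, A2:
A1, A2 = x1 * x3, x1 + x2
D1 = lambda f: sp.diff(f, x2) + A1 * sp.diff(f, x1)
D2 = lambda f: sp.diff(f, x3) + A2 * sp.diff(f, x1)

# In the commutator all second derivatives of P cancel, leaving the
# first-order expression (D1(A2) - D2(A1)) * dP/dx1:
comm = sp.expand(D1(D2(P)) - D2(D1(P)))
first_order = sp.expand((D1(A2) - D2(A1)) * sp.diff(P, x1))
print(sp.simplify(comm - first_order))   # 0
```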
**Proposition III.**—If, by the successive application of Prop. II., and by permitted processes of algebraic elimination, we derive from the system of partial differential equations
\[\Delta_1 P = 0, \quad \Delta_2 P = 0, \quad \ldots, \quad \Delta_r P = 0,\]
into which the given system of differential equations has been converted, a final system of partial differential equations which, while including the above system, shall be such that the application of Prop. II. to any pair of the equations contained shall lead only to an identity, then the number of integrals of the given system of differential equations will be equal to the number of variables they contain, diminished by the number of partial differential equations of the above final system.
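Before developing the proof, a small editorial illustration (hypothetical, not in the original) of how a derived equation that is not an identity enlarges the system and reduces the count of integrals:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
P = sp.Function('P')(x1, x2, x3)

# Hypothetical system dx1 = x3*dx2 + 0*dx3, i.e. A11 = x3, A12 = 0:
D1 = lambda f: sp.diff(f, x2) + x3 * sp.diff(f, x1)   # Delta_1 P = 0
D2 = lambda f: sp.diff(f, x3)                          # Delta_2 P = 0

# Prop. II: the derived equation (D1 D2 - D2 D1)P = 0 reduces here to
# -dP/dx1 = 0, a genuinely new first-order equation:
new_eq = sp.expand(D1(D2(P)) - D2(D1(P)))
print(new_eq)

# The completed system D1 P = 0, D2 P = 0, dP/dx1 = 0 has m = 3
# equations in n + r = 3 variables, so the proposition predicts
# 3 - 3 = 0 integrals: dx1 = x3*dx2 admits no integral F = c.
```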
The developed form of the system
\[\Delta_1 P = 0, \quad \Delta_2 P = 0, \quad \ldots, \quad \Delta_r P = 0\]
is the following:
\[
\begin{align*}
\frac{dP}{dx_{n+1}} + A_{11} \frac{dP}{dx_1} + \ldots + A_{n1} \frac{dP}{dx_n} &= 0, \\
\frac{dP}{dx_{n+2}} + A_{12} \frac{dP}{dx_1} + \ldots + A_{n2} \frac{dP}{dx_n} &= 0, \\
&\vdots \\
\frac{dP}{dx_{n+r}} + A_{1r} \frac{dP}{dx_1} + \ldots + A_{nr} \frac{dP}{dx_n} &= 0.
\end{align*}
\]
Comparing these with (4.), Prop. II., which is the developed form of the equation \((\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0\), we see that the latter equation is necessarily algebraically independent of the above system; for no equation derived from that system by algebraic processes could be free, as (4.) is, from all the differential coefficients \(\frac{dP}{dx_{n+1}} \ldots \frac{dP}{dx_{n+r}}\).
Again, as \((\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0\) is satisfied by all the common integrals of \(\Delta_i P = 0\) and \(\Delta_j P = 0\), it follows that the system of \(r+1\) equations,
\[\Delta_1 P = 0, \quad \ldots, \quad \Delta_r P = 0, \quad (\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0, \quad \ldots \quad (3.)\]
will be satisfied by all the common integrals of the system (1.). To this system of \(r+1\)
equations we can also give a form analogous to the developed form of the system (2.). It will be noticed that the differential coefficients \( \frac{dP}{dx_{n+1}} \ldots \frac{dP}{dx_{n+r}} \) appear there, each in only one equation, and each with the coefficient unity. Now let the last equation of (3.) in its developed form be divided by the coefficient of \( \frac{dP}{dx_n} \), and let also the value of \( \frac{dP}{dx_n} \) which it gives be substituted in the other equations of (3.); then we shall have in the whole a system of \( r+1 \) equations possessing the same general character as the system (2.). To this new system the same procedure may be applied, viz. the genesis of a new equation by means of Prop. II., and the transference of another differential coefficient \( \frac{dP}{dx_{n-1}} \) to the list of those which form the respective first terms of the equations of the system. We will suppose this procedure to have been repeated until a system composed of \( m \) partial differential equations such that the further application of Prop. II. leads only to identities has been formed. If \( n+r-m=p \), that system will be of the form
\[
\begin{align*}
\frac{dP}{dx_{p+1}} + H_{11} \frac{dP}{dx_1} + \ldots + H_{p1} \frac{dP}{dx_p} &= 0, \\
\frac{dP}{dx_{p+2}} + H_{12} \frac{dP}{dx_1} + \ldots + H_{p2} \frac{dP}{dx_p} &= 0, \\
&\vdots \\
\frac{dP}{dx_{p+m}} + H_{1m} \frac{dP}{dx_1} + \ldots + H_{pm} \frac{dP}{dx_p} &= 0.
\end{align*}
\]
(4.)
And if, in analogy with former notation, we write
\[
\frac{d}{dx_{p+i}} + H_{1i} \frac{d}{dx_1} + H_{2i} \frac{d}{dx_2} + \ldots + H_{pi} \frac{d}{dx_p} = \Delta_i,
\]
it will take the symbolical form
\[
\Delta_1 P = 0, \Delta_2 P = 0, \ldots \Delta_m P = 0;
\]
but it will differ from all former systems of equations in that all the conditions represented by
\[
(\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0 \quad \ldots \quad (5.)
\]
will be identically satisfied,—satisfied, in consequence not of any ascertained peculiarity of the integral \( P \), but of the constitution of the system of symbols \( \Delta_1, \Delta_2, \ldots \Delta_m \).
The course of argument has shown that the common integrals of the system (4.) will be identical with those of the parent system (2.). Now we shall show that the existence of the condition (5.) renders the integration theoretically possible; that the system of \( p \) ordinary differential equations into which, by Prop. II., the final system of partial differential equations (4.) is resolvable admits of exactly \( p \) integrals. As \( p=n+r-m \), this is to say that the number of integrals is equal to the number of original variables diminished by the number of final partial differential equations.
The proof of this will consist of two parts:—1st. It will be shown that, if a system of \( p \) integrals exists, the conditions represented by (5.) will be identically satisfied.
2ndly. It will be shown that, when the conditions represented by (5.) are identically satisfied, the solution either of the final system of partial differential equations (4.), or of the corresponding system of ordinary differential equations, by a system of \( p \) integrals is theoretically possible.
It will follow from these conjoined, that the number of actually existing integrals is exactly \( p \).
1st. The system of ordinary differential equations corresponding to (4.) may be expressed in the form
\[
\begin{align*}
dx_1 &= H_{11} dx_{p+1} + H_{12} dx_{p+2} + \cdots + H_{1m} dx_{p+m}, \\
&\vdots \\
dx_p &= H_{p1} dx_{p+1} + H_{p2} dx_{p+2} + \cdots + H_{pm} dx_{p+m}.
\end{align*}
\]
(6.)
Now suppose this system to have \( p \) integrals. Then, by means of these, \( x_1, x_2, \ldots x_p \), can be eliminated from the coefficients \( H_{ij} \), &c. in the second members, which will thus become exact differentials of those functions of the variables \( x_{p+1} \ldots x_{p+m} \) which express the values of \( x_1, \ldots x_p \). Hence we shall have the system of conditions
\[
\frac{d}{dx_{p+i}} H_{kj} = \frac{d}{dx_{p+j}} H_{ki} \quad \ldots \quad \ldots \quad \ldots \quad \ldots \quad (7.)
\]
\( k \) representing any integer from 1 to \( p \), and \( i, j \) any integers from 1 to \( m \), and the bracketed symbols of differentiation referring to \( H_{kj}, H_{ki} \) as transformed. Hence, the unbracketed symbols referring to the prior state of the functions, we have
\[
\left( \frac{d}{dx_{p+i}} \right) = \frac{d}{dx_{p+i}} + \frac{dx_1}{dx_{p+i}} \frac{d}{dx_1} \cdots + \frac{dx_p}{dx_{p+i}} \frac{d}{dx_p} \\
= \Delta_i.
\]
In the same way
\[
\left( \frac{d}{dx_{p+j}} \right) = \Delta_j;
\]
so that (7.) becomes
\[
\Delta_i H_{kj} - \Delta_j H_{ki} = 0. \quad \ldots \quad \ldots \quad \ldots \quad \ldots \quad (8.)
\]
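Condition (8.) may be verified on a small case. In the following editorial sketch the coefficients are hypothetical, chosen so that the system is exact ($p = 1$, $m = 2$, $dx_1 = x_3\,dx_2 + x_2\,dx_3$):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Hypothetical final system (p = 1, m = 2): dx1 = H11*dx2 + H12*dx3,
# the exact case d(x1 - x2*x3) = 0.
H11, H12 = x3, x2

D1 = lambda f: sp.diff(f, x2) + H11 * sp.diff(f, x1)
D2 = lambda f: sp.diff(f, x3) + H12 * sp.diff(f, x1)

# Condition (8.): Delta_i H_kj - Delta_j H_ki = 0  (here k = 1, i = 1, j = 2)
print(sp.simplify(D1(H12) - D2(H11)))   # 0
```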
Now if we construct, in analogy with (4.), Prop. II., the developed form of the conditional equation (5.) of the present section, we shall have
\[
(\Delta_i H_{1j} - \Delta_j H_{1i}) \frac{dP}{dx_1} + \cdots + (\Delta_i H_{pj} - \Delta_j H_{pi}) \frac{dP}{dx_p} = 0,
\]
or, \( \Sigma \) denoting summation from \( k=1 \) to \( k=p \),
\[
\Sigma (\Delta_i H_{kj} - \Delta_j H_{ki}) \frac{dP}{dx_k} = 0.
\]
Now as by (8.) the coefficients vanish identically, the equation, and generally the system of conditional equations of which it is the type, will be identically satisfied.
2ndly. We proceed to show that the system of \( m \) linear partial differential equations (4.), represented under the form
\[
\Delta_1 P = 0, \quad \Delta_2 P = 0, \quad \ldots \quad \Delta_m P = 0,
\]
and satisfying identically the system of conditions represented by
\[
(\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0,
\]
will admit of \( p \) integrals expressing distinct values of \( P \); and the system of ordinary differential equations (6.) corresponding to the above system of partial differential equations will be expressible as a system of exact differential equations, and will by integration give the above systems of integrals.
Beginning with the first partial differential equation of the system (4.), and forming the corresponding Lagrangian system of ordinary differential equations
\[
\frac{dx_{p+1}}{1} = \frac{dx_1}{H_{11}} = \cdots = \frac{dx_p}{H_{p1}},
\]
\[
dx_{p+2} = 0, \quad \ldots, \quad dx_{p+m} = 0,
\]
we see that the integrals of this system will be of the form
\[
u_1 = c_1, \quad \ldots \quad u_p = c_p,
\]
\[
x_{p+2} = c_{p+2}, \ldots x_{p+m} = c_{p+m},
\]
\( u_1 \ldots u_p \) being functions of all the variables \( x_1, \ldots x_{p+m} \), among which, by virtue of the integrals of the second line, \( x_{p+2}, \ldots x_{p+m} \) may be regarded as constant. The general integral will be
\[
F(u_1, \ldots u_p, x_{p+2}, \ldots x_{p+m}) = 0,
\]
the form of \( F \) being perfectly arbitrary.
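The step may be traced on the smallest case. In the following editorial sketch (hypothetical coefficients, not Boole's) we take $p = 1$, $m = 2$, write $x_2$ for $x_{p+1}$ and $x_3$ for $x_{p+2}$, and choose $H_{11} = x_1$; the auxiliary equation $dx_1/dx_2 = x_1$ integrates to $u_1 = x_1 e^{-x_2}$, and the general integral is an arbitrary function of $u_1$ and $x_3$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

# Hypothetical first equation of the final system with p = 1, m = 2:
#   dP/dx2 + x1*dP/dx1 = 0, treating x3 as constant.
# Its Lagrangian auxiliary system dx1/dx2 = x1 integrates to x1*exp(-x2) = c.
u1 = x1 * sp.exp(-x2)

Delta1 = lambda f: sp.diff(f, x2) + x1 * sp.diff(f, x1)
print(sp.simplify(Delta1(u1)))   # 0: u1 = c1 is an integral
print(sp.simplify(Delta1(x3)))   # 0: x3 = c is the integral of dx3 = 0

# The general integral is F(u1, x3) = 0 with F perfectly arbitrary, e.g.:
F = u1**2 + sp.sin(x3)
print(sp.simplify(Delta1(F)))    # 0
```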
Now the general form of any equation of the system (4.) is
\[
\frac{dP}{dx_{p+i}} + H_{1i} \frac{dP}{dx_1} + \cdots + H_{pi} \frac{dP}{dx_p} = 0. \quad \ldots \quad (9.)
\]
Let us transform this by assuming
\[
u_1 \ldots u_p, \quad x_{p+1} \ldots x_{p+m}
\]
as independent variables. Then referring the right-hand members of the following equations to the new, the left-hand to the old system of variables, we have
\[
\frac{dP}{dx_{p+i}} = \frac{dP}{dx_{p+i}} + \frac{dP}{du_1} \frac{du_1}{dx_{p+i}} \cdots + \frac{dP}{du_p} \frac{du_p}{dx_{p+i}},
\]
\[
\frac{dP}{dx_1} = \frac{dP}{du_1} \frac{du_1}{dx_1} + \cdots + \frac{dP}{du_p} \frac{du_p}{dx_1},
\]
\[
\vdots
\]
\[
\frac{dP}{dx_p} = \frac{dP}{du_1} \frac{du_1}{dx_p} + \cdots + \frac{dP}{du_p} \frac{du_p}{dx_p};
\]
whence, substituting in (9.),
\[
\begin{align*}
\frac{dP}{dx_{p+i}} + \frac{dP}{du_1} \left( \frac{du_1}{dx_{p+i}} + H_{1i} \frac{du_1}{dx_1} + \cdots + H_{pi} \frac{du_1}{dx_p} \right) \\
+ \cdots + \frac{dP}{du_p} \left( \frac{du_p}{dx_{p+i}} + H_{1i} \frac{du_p}{dx_1} + \cdots + H_{pi} \frac{du_p}{dx_p} \right) = 0,
\end{align*}
\]
or
\[
\frac{dP}{dx_{p+i}} + (\Delta_i u_1) \frac{dP}{du_1} + \cdots + (\Delta_i u_p) \frac{dP}{du_p} = 0; \quad \ldots \quad (10.)
\]
and in this equation we may give to \( i \) the successive values \( 1, 2, \ldots m \). If, then, we write
\[
\frac{d}{dx_{p+i}} + (\Delta_i u_1) \frac{d}{du_1} + \cdots + (\Delta_i u_p) \frac{d}{du_p} = \Delta'_i,
\]
we see that the proposed transformation will have the effect of converting
\[
\Delta_1, \Delta_2, \ldots \Delta_m
\]
into
\[
\Delta'_1, \Delta'_2, \ldots \Delta'_m
\]
respectively, and the system of partial differential equations into
\[
\Delta'_1 P = 0, \Delta'_2 P = 0, \ldots \Delta'_m P = 0.
\]
\]
The developed form of the first of these equations is
\[
\frac{dP}{dx_{p+1}} + (\Delta_1 u_1) \frac{dP}{du_1} + \cdots + (\Delta_1 u_p) \frac{dP}{du_p} = 0.
\]
But since \( u_1, \ldots u_p \) are integrals of \( \Delta_1 P = 0 \), we have
\[
\Delta_1 u_1 = 0, \ldots \Delta_1 u_p = 0,
\]
so that the equation \( \Delta'_1 P = 0 \) reduces to
\[
\frac{dP}{dx_{p+1}} = 0.
\]
We learn from this that \( x_{p+1} \) will not explicitly appear in \( P \) after the transformation which introduces \( u_1, \ldots u_p \).
The developed form of the remaining \( m-1 \) equations represented by (10.) will be
\[
\begin{align*}
\frac{dP}{dx_{p+2}} + (\Delta_2 u_1) \frac{dP}{du_1} + \cdots + (\Delta_2 u_p) \frac{dP}{du_p} &= 0, \\
&\vdots \\
\frac{dP}{dx_{p+m}} + (\Delta_m u_1) \frac{dP}{du_1} + \cdots + (\Delta_m u_p) \frac{dP}{du_p} &= 0;
\end{align*}
\]
(11.)
and we shall next show that the variable \( x_{p+1} \) will not present itself in the coefficients \( (\Delta_i u_s) \), &c.
The general form of such coefficients is \( \Delta_i u_s \), where \( i \) has any value from 2 to \( m \), and \( s \) any value from 1 to \( p \).
Now if in (5.), which is true independently of the nature of the function \( P \), we make
$j=1$ and $P=u_s$, we have
$$ (\Delta_i \Delta_1 - \Delta_1 \Delta_i) u_s = 0. $$
But $\Delta_1 u_s = 0$; therefore, by the above,
$$ \Delta_1 \Delta_i u_s = 0. $$
Now $\Delta_i u_s$ is expressible at most as a function of $u_1, \ldots, u_p, x_{p+1}, \ldots, x_{p+m}$. But this transformation converts, as has been seen, $\Delta_1$ into $\frac{d}{dx_{p+1}}$. Thus we have
$$ \frac{d}{dx_{p+1}} \Delta_i u_s = 0, $$
so that $\Delta_i u_s$ is free from $x_{p+1}$. Thus the system (11.) is free from $x_{p+1}$.
Lastly, since by the above transformation $\Delta_2, \ldots, \Delta_m$ are converted into $\Delta'_2, \ldots, \Delta'_m$, the system of conditions $(\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0$ is converted into $(\Delta'_i \Delta'_j - \Delta'_j \Delta'_i) P = 0$.
It is thus seen that the system of $m$ partial differential equations
$$ \Delta_1 P = 0, \quad \Delta_2 P = 0, \ldots, \Delta_m P = 0, $$
containing $p+m$ independent variables $x_1, x_2, \ldots, x_{p+m}$, and satisfying the conditions
$$ (\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0, $$
is convertible into a system of $m-1$ partial differential equations,
$$ \Delta'_2 P = 0, \quad \Delta'_3 P = 0, \ldots, \Delta'_m P = 0, $$
containing $p+m-1$ independent variables $u_1, \ldots, u_p, x_{p+2}, \ldots, x_{p+m}$, and satisfying the condition
$$ (\Delta'_i \Delta'_j - \Delta'_j \Delta'_i) P = 0. $$
And as this system possesses the same character as that upon which the previous transformation depended, it will admit of transformation into a system of $m-2$ partial differential equations containing $p+m-2$ independent variables; and so on until we arrive at a single final partial differential equation containing $p+1$ independent variables, and having therefore $p$ distinct integrals, which will be the common integrals of the primary system of partial differential equations as well as of the system of ordinary differential equations to which they correspond.
Cor. The property of the coefficients $\Delta_2 u_1, \ldots$ of the system (11.), of being free from the variable $x_{p+1}$, enables us, by properly determining the integrals of the partial differential equation $\Delta_1 P = 0$, to reduce the system to a form of great simplicity.
Let $\Delta_i u_j$ be any one of those coefficients. Its developed form is
$$ \left( \frac{d}{dx_{p+i}} + H_{1i} \frac{d}{dx_1} + \cdots + H_{pi} \frac{d}{dx_p} \right) u_j. \quad \ldots \quad (12.) $$
Now as this expression will, after the performance of the differentiations, be free from $x_{p+1}$, and as the differentiations are none of them with respect to $x_{p+1}$, we can give to $x_{p+1}$ in it any particular value before differentiation without affecting the final result. Let us then suppose that in $H_{1i}, \ldots, H_{pi}$, and in $u_j$, $x_{p+1}$ is made equal to 0. Now it is possible so to determine the integrals $u_j$ as functions of the variables $x_1, \ldots, x_{p+m}$, that
when \( x_{p+1} = 0 \) each \( u_j \) shall reduce to \( x_j \). For this purpose it is only necessary to choose as arbitrary constants a set of arbitrary values of \( x_1, x_2, \ldots, x_p \) corresponding to \( x_{p+1} = 0 \).
Let \( x'_1, x'_2, \ldots, x'_p \) be such arbitrary constants, and let the given system of integrals be reduced to the form:
\[
u_1 = x'_1, \ldots, u_p = x'_p,
\]
and the functions \( u_1, \ldots, u_p \) will possess the required property. Changing, then, each \( u_j \) into \( x_j \), the expression (12.) reduces to \( H_{ij} \), and it only remains to express this in terms of \( u_1, \ldots, u_p \); which, as \( x_{p+1} = 0 \), is done by merely changing \( x_1, \ldots, x_p \) into \( u_1, \ldots, u_p \).
Thus the system (11.) is reduced to
\[
\begin{align*}
\frac{dP}{dx_{p+2}} + (H_{12}) \frac{dP}{dx_1} + \ldots + (H_{p2}) \frac{dP}{dx_p} &= 0, \\
&\vdots \\
\frac{dP}{dx_{p+m}} + (H_{1m}) \frac{dP}{dx_1} + \ldots + (H_{pm}) \frac{dP}{dx_p} &= 0,
\end{align*}
\]
where the brackets denote that in the enclosed portion \( x_{p+1} \) is to be made 0, and \( x_1, x_2, \ldots, x_p \) converted into \( u_1, u_2, \ldots, u_p \).
Now this form is identical, the above conversion of letters excepted, with that of the system (4.), omitting that equation of the latter system by the integration of which the forms of \( u_1, \ldots, u_p \) are determined.
It follows from the above that, obtaining the integrals of \( \Delta_1 P = 0 \) in such a form that the arbitrary constants shall represent the arbitrary values of \( x_1 \ldots x_p \) when \( x_{p+1} = 0 \), and representing the functions which are equal to those arbitrary constants by \( x'_1, x'_2, \ldots, x'_p \),
then if in the remaining equations \( \Delta_2 P = 0 \ldots \Delta_m P = 0 \) we change \( x_1, \ldots, x_p, \frac{d}{dx_1}, \ldots, \frac{d}{dx_p} \) to \( x'_1, \ldots, x'_p, \frac{d}{dx'_1}, \ldots, \frac{d}{dx'_p} \), and \( x_{p+1} \) and \( \frac{dP}{dx_{p+1}} \) to 0, the common integrals of the transformed system of \( m - 1 \) partial differential equations will be the same as those of the previous system of \( m \) partial differential equations. In the same way a third system of \( m - 2 \) partial differential equations may be formed, and so on, till we obtain a single final partial differential equation which will have the common integrals of the parent system. By this method, which is due to Jacobi and Natani, all the labour of the successive transformations is avoided. The successive integrals thus introduced are termed 'Haupt-integrale.'
Instead of applying the foregoing methods, general or particular, to the final system of partial differential equations, we may apply them to the solution of the corresponding final system of ordinary differential equations. In this case they would really represent the method of solution known as the variation of parameters, and the conditions \( (\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0 \) would secure the sufficiency of that method. If in the system of ordinary differential equations (6.) we regard \( x_{p+2}, \ldots, x_{p+m} \) as constant, we get
\[
\begin{align*}
dx_1 - H_{11} dx_{p+1} &= 0, \\
&\vdots \\
dx_p - H_{p1} dx_{p+1} &= 0.
\end{align*}
\]
(13.)
Integrate this in the form
\[ u_1 = c_1, \ldots, u_p = c_p; \]
then, treating \( c_1, \ldots, c_p \) as functions of the variables before regarded as constant, and endeavouring to satisfy the unreduced equations (6.), we obtain, in virtue of the above conditions, a system of differential equations equal in number to the system given, but containing one variable fewer. The system (13.) by which the forms of \( u_1, \ldots, u_p \), are here determined, is the Lagrangian auxiliary of the first partial differential equation \( \Delta_1 P = 0 \) integrated in the other method; and so in each subsequent stage. And with respect to the other parts of the process, it obviously makes no difference whether we take as the new variables \( u_1, \ldots, u_p \), or \( c_1, \ldots, c_p \) under the condition (necessarily involved in the method of the variation of parameters) that they shall after integration be replaced by \( u_1, \ldots, u_p \). But it would not have sufficed simply to refer the solution of the final system of ordinary differential equations to the method of the variation of parameters, first, because the necessity and sufficiency of the conditions which form the ground of that method and are the warrant of its success were to be shown; secondly, because the connexion of the systems of ordinary differential equations which arise in the method of parameters with the successive partial differential equations forms an essential part of the demonstration.
**General Rule.**
The results of the foregoing inquiry may be collected into the following General Rule:
To find the number \( p \) of possible integrals of a system of \( n \) differential equations of the first order connecting \( n + r \) variables \( x_1, x_2, \ldots, x_{n+r} \), and to determine those integrals.
**Rule.**—Suppose \( P \) an integral of the given system. Determine from the given system \( dx_1, dx_2, \ldots, dx_n \) as functions of the other differentials. Substitute these values in the equation
\[
\frac{dP}{dx_1} dx_1 + \frac{dP}{dx_2} dx_2 + \cdots + \frac{dP}{dx_{n+r}} dx_{n+r} = 0,
\]
and equate to 0 the coefficients of the remaining differentials. This will give a system of \( r \) partial differential equations of the form
\[
\frac{dP}{dx_{n+1}} + A_{11} \frac{dP}{dx_1} + \cdots + A_{1n} \frac{dP}{dx_n} = 0,
\]
\[
\vdots
\]
\[
\frac{dP}{dx_{n+r}} + A_{r1} \frac{dP}{dx_1} + \cdots + A_{rn} \frac{dP}{dx_n} = 0,
\]
\( r \) of the differential coefficients appearing each only in one equation and with coefficient unity. Representing these equations in the symbolical form
\[
\Delta_1 P = 0, \Delta_2 P = 0, \ldots, \Delta_r P = 0,
\]
deduce any equation or equations of the form
\[
(\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0.
\]
If all such prove to be identities, the given system of differential equations admits of $n$ integrals, and is reducible to a system of exact differential equations. But if any such equation is not an identity, it will constitute a new partial differential equation of the form
$$B_1 \frac{dP}{dx_1} + B_2 \frac{dP}{dx_2} + \cdots + B_n \frac{dP}{dx_n} = 0.$$
And this, combined with the previous ones, will enable us to form a system of $r+1$ partial differential equations, in which $\frac{dP}{dx_n}, \frac{dP}{dx_{n+1}}, \ldots, \frac{dP}{dx_{n+r}}$ appear each in only one equation and with coefficient unity. Upon this system let the same process be repeated as upon the previous system of $r$ partial differential equations, and so continually repeated until we arrive at a final system of partial differential equations such that, if that system be represented in the form
$$\Delta_1 P = 0, \ldots, \Delta_m P = 0,$$
the condition
$$(\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0$$
shall be identically satisfied for every pair.
Then, the number of such partial differential equations being $m$, the number of integrals of the original system of differential equations will be $n+r-m$, i.e. it will be equal to the number of the original variables diminished by the number of final partial differential equations.
And if by that final system we eliminate $m$ of the differential coefficients from
$$\frac{dP}{dx_1} dx_1 + \frac{dP}{dx_2} dx_2 + \cdots + \frac{dP}{dx_{n+r}} dx_{n+r} = 0,$$
and equate to 0 the coefficients of the remaining differential coefficients, we shall have a system of $n+r-m$ differential equations expressible as exact differential equations for the determination of the integrals.
Actually to determine these, we should endeavour in the first instance to reduce the final system of differential equations, as such reduction is theoretically possible, to a system of exact differential equations. If the means of doing this are not obvious, the method of the variation of parameters or the equivalent methods of Prop. III. must be applied.
Lastly, if the process which consists in the application of the theorem $(\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0$ do not stop with the formation of the final system of partial differential equations, but lead to algebraic relations among the variables, the given system of differential equations will have no integrals properly so called, but it may admit of solutions analogous to those the theory of which has been developed by Pfaff, Jacobi, and others for the differential equation
$$X_1 dx_1 + X_2 dx_2 + \cdots + X_n dx_n = 0.$$
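The completeness test at the heart of the Rule, forming $(\Delta_i \Delta_j - \Delta_j \Delta_i)P$ and seeing whether a new equation arises, lends itself directly to symbolic computation. A minimal modern sketch in Python with SymPy (the two operators below are hypothetical, chosen only so that the commutator is not identically zero):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
P = sp.Function('P')(x1, x2, x3)

# Hypothetical operators (not from the paper), chosen so that the
# commutator is NOT identically zero and a new equation appears:
#   Delta_1 = d/dx2 + x3 d/dx1,   Delta_2 = d/dx3
def D1(f):
    return sp.diff(f, x2) + x3 * sp.diff(f, x1)

def D2(f):
    return sp.diff(f, x3)

# (Delta_1 Delta_2 - Delta_2 Delta_1) P
comm = sp.simplify(D1(D2(P)) - D2(D1(P)))
print(comm)  # reduces to -dP/dx1, i.e. the new equation dP/dx1 = 0
```

Had the commutator reduced to zero, the pair would already have been complete; as it stands, the new equation $dP/dx_1 = 0$ must be adjoined and the test repeated, exactly as the Rule prescribes.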
**Applications.**
1st. Suppose it required to find the number of integrals of the form $P = c$, which the system of differential equations
$$dz = (t + xy + xz)dx + (xzt + y - xy)dy,$$
$$dt = (y + z - 3x)dx + (zt - y)dy$$
admits, and to determine such integrals.
Eliminating $dz$ and $dt$ from the equation
$$\frac{dP}{dx} dx + \frac{dP}{dy} dy + \frac{dP}{dz} dz + \frac{dP}{dt} dt = 0,$$
and equating to 0 the coefficients of $dx$ and $dy$ in the result, we have
$$\frac{dP}{dx} + (t + xy + xz) \frac{dP}{dz} + (y + z - 3x) \frac{dP}{dt} = 0, \quad \ldots \ldots \ldots \quad (1.)$$
$$\frac{dP}{dy} + (xzt + y - xy) \frac{dP}{dz} + (zt - y) \frac{dP}{dt} = 0. \quad \ldots \ldots \ldots \quad (2.)$$
Hence writing
$$\Delta = \frac{d}{dx} + (t + xy + xz) \frac{d}{dz} + (y + z - 3x) \frac{d}{dt},$$
$$\Delta' = \frac{d}{dy} + (xzt + y - xy) \frac{d}{dz} + (zt - y) \frac{d}{dt},$$
and forming the equation
$$(\Delta \Delta' - \Delta' \Delta)P = 0,$$
we have, on rejecting a common algebraic factor,
$$x \frac{dP}{dz} + \frac{dP}{dt} = 0. \quad \ldots \ldots \ldots \ldots \quad (3.)$$
By substituting in (1.) and (2.) the value of $\frac{dP}{dt}$ hence obtained, we have the system of three equations,
$$\frac{dP}{dx} + (3x^2 + t) \frac{dP}{dz} = 0,$$
$$\frac{dP}{dy} + y \frac{dP}{dz} = 0,$$
$$\frac{dP}{dt} + x \frac{dP}{dz} = 0.$$
Now if upon any two of these we repeat the same process as upon (1.) and (2.), we obtain as the result $0 = 0$. Thus the system of partial differential equations is complete.
As then there are three equations in this final system, while the number of original variables was four, the primary system will admit of one integral of the form $P = c$.
To obtain this integral, eliminate $\frac{dP}{dx}$, $\frac{dP}{dy}$, $\frac{dP}{dt}$ from the above equations and
$$\frac{dP}{dx} dx + \frac{dP}{dy} dy + \frac{dP}{dz} dz + \frac{dP}{dt} dt = 0,$$
and equate to 0 the coefficient of \( \frac{dP}{dz} \) in the result. We find
\[
dz - (t + 3x^2)dx - ydy - xdt = 0,
\]
the integral of which is
\[
z - xt - x^3 - \tfrac{1}{2} y^2 = c.
\]
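As a check on this worked example, the following sketch (Python with SymPy, a modern illustration and not part of the original) verifies that the operators $\Delta$ and $\Delta'$ both annihilate the integral, and that the commutator equation (3.) is satisfied as well; note that the coefficient $\tfrac{1}{2}$ on $y^2$ is exactly what the equation $\frac{dP}{dy} + y \frac{dP}{dz} = 0$ requires:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# The operators Delta and Delta' of the example, acting on P(x, y, z, t)
def D1(P):
    return (sp.diff(P, x) + (t + x*y + x*z) * sp.diff(P, z)
            + (y + z - 3*x) * sp.diff(P, t))

def D2(P):
    return (sp.diff(P, y) + (x*z*t + y - x*y) * sp.diff(P, z)
            + (z*t - y) * sp.diff(P, t))

# The integral: the 1/2 on y^2 makes dP/dy + y dP/dz vanish
P = z - x*t - x**3 - sp.Rational(1, 2) * y**2

print(sp.simplify(D1(P)), sp.simplify(D2(P)))     # both reduce to 0
# Equation (3.) of the example, x dP/dz + dP/dt = 0, also holds:
print(x * sp.diff(P, z) + sp.diff(P, t))          # 0
```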
2nd. The solution of the partial differential equation
\[
Rr + Ss + Tt + U(s^2 - rt) = V,
\]
as well as of the special equations
\[
Rr + Ss + Tt = V,
\]
\[
Rr + Ss + Tt + U(s^2 - rt) = 0,
\]
the theory of which constitutes an exception to that of the more general form, depends, in general, upon the integration of three simultaneous differential equations between five variables. To this integration the method of the foregoing sections is applicable.
The only cases for which the theory of the ultimate solution can be said to be complete, are those in which the auxiliary system of common differential equations admits either three integrals of the form \( P = c \), or two integrals of that form.
We may apply the method of the foregoing sections, not only to the determination of the integrals, but also to the discovery of the \( \textit{à priori} \) conditions connecting the coefficients \( R, S, T, U, V \) in order that each of these species of integration may be possible.
For example, the solution of the equation
\[
Rr + Ss + Tt + (s^2 - rt) = V
\]
depending upon the integration of the system
\[
dq = -m_1 dx + Rdy,
\]
\[
dp = -m_2 dy + Tdx,
\]
\[
dz = pdx + qdy,
\]
in which \( m_1 \) and \( m_2 \) are roots of the equation
\[
m^2 - Sm + RT - V = 0,
\]
let it be required to determine the conditions under which the system admits three integrals.
Eliminating \( dq, dp, dz \) between the above equations and
\[
\frac{dP}{dx} dx + \frac{dP}{dy} dy + \frac{dP}{dz} dz + \frac{dP}{dp} dp + \frac{dP}{dq} dq = 0,
\]
and equating to 0 the coefficients of \( dx \) and \( dy \) in the result, we obtain two partial differential equations which may be thus represented, viz.
\[
\Delta P = 0, \quad \Delta' P = 0,
\]
in which
\[
\Delta = \frac{d}{dx} - m_1 \frac{d}{dq} + T \frac{d}{dp} + p \frac{d}{dz},
\]
\[
\Delta' = \frac{d}{dy} + R \frac{d}{dq} - m_2 \frac{d}{dp} + q \frac{d}{dz}.
\]
That there may be three integrals, it is here necessary that there should be but two partial differential equations in the completed system. Hence the equation
$$(\Delta \Delta' - \Delta' \Delta)P = 0$$
must be identically satisfied.
Developing this, we have the conditions
$$\Delta R + \Delta'm_1 = 0, \quad \Delta m_2 + \Delta'T = 0, \quad \Delta q - \Delta'p = 0.$$
Now on performing the operations denoted by $\Delta$ and $\Delta'$, the last equation gives
$$m_2 - m_1 = 0.$$
Hence referring to the quadratic, we see that
$$S^2 - 4RT + 4V = 0. \quad \ldots \quad \ldots \quad \ldots \quad (I.)$$
To this must be added the two other reduced conditions,
$$\Delta R + \Delta'm = 0, \quad \ldots \quad \ldots \quad \ldots \quad (II.)$$
$$\Delta m + \Delta'T = 0, \quad \ldots \quad \ldots \quad \ldots \quad (III.)$$
$m$ representing one of the equal roots of the reduced quadratic.
The first of the above conditions was given by Ampère*. The others are probably new. Satisfied, they enable us to predict that the partial differential equation under consideration admits a complete primitive involving three constants, and a general primitive arising from the variation of those constants in subjection to any two arbitrary conditions.
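The passage from the equal roots $m_1 = m_2$ to condition (I.) is a one-line discriminant computation, which the following SymPy check illustrates (a modern aside, not part of the original argument):

```python
import sympy as sp

m, R, S, T, V = sp.symbols('m R S T V')

# The quadratic whose roots are m1 and m2; equal roots
# correspond to a vanishing discriminant.
quad = m**2 - S*m + R*T - V
disc = sp.discriminant(quad, m)

# The discriminant expands to S^2 - 4RT + 4V, i.e. condition (I.)
print(sp.expand(disc))
```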
3rd. We have supposed each linear partial differential equation employed in the processes of this paper to be of the form
$$A_1 \frac{dP}{dx_1} + A_2 \frac{dP}{dx_2} + \cdots + A_n \frac{dP}{dx_n} = 0,$$
and we have supposed each system of partial differential equations which arises, to be so reduced that each equation shall have some one of the partial differential coefficients of $P$ entering into it alone and with a coefficient equal to unity.
The first of these conditions is sufficiently general, because any linear partial differential equation can be deprived of its second member. The advantage of the second condition is that each newly-formed equation will be really new, and not an algebraic combination of the old ones.
But neither of these conditions is necessary. From two linear partial differential equations of the form
$$\Delta_1 P = H, \quad \Delta_2 P = K,$$
in which $H$ and $K$ are functions of the independent variables, arises a new equation,
$$(\Delta_1 \Delta_2 - \Delta_2 \Delta_1)P = \Delta_1 K - \Delta_2 H, \quad \ldots \quad \ldots \quad \ldots \quad (1.)$$
which will be satisfied by all the simultaneous integrals of the equations from which it is derived.
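A quick illustration of theorem (1.) with a hypothetical pair of equations having non-zero second members (a SymPy sketch; the operators and the integral are invented solely for the check):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical pair:  Delta_1 P = dP/dx = y  (so H = y),
#                     Delta_2 P = dP/dy = x  (so K = x);
# P = x*y is a simultaneous integral of both.
def Delta1(f):
    return sp.diff(f, x)

def Delta2(f):
    return sp.diff(f, y)

P, H, K = x*y, y, x

lhs = Delta1(Delta2(P)) - Delta2(Delta1(P))  # (D1 D2 - D2 D1) P
rhs = Delta1(K) - Delta2(H)                  # D1 K - D2 H
print(sp.simplify(lhs - rhs))                # 0: the derived equation holds
```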
It may be rigorously proved that, in applying this process, the generated system (including the original equations) will be complete when no new equation arises from the combination of any one of the equations of the generated system with any one of the equations of the original system.

* Journal de l'Ecole Polytechnique, Cahier 18.
I will illustrate this by investigating the conditions of integrability of the expression
\[ \varphi(x, y, \frac{dy}{dx}, \frac{d^2y}{dx^2}, \ldots, \frac{d^n y}{dx^n}) \, dx. \]
If this expression admit of an integral \( V \), it is easy to see that \( V \) will satisfy the two partial differential equations
\[
\frac{dV}{dx} + y_1 \frac{dV}{dy} + y_2 \frac{dV}{dy_1} + \cdots + y_n \frac{dV}{dy_{n-1}} = \varphi, \quad \ldots \quad (2.)
\]
\[
\frac{dV}{dy_n} = 0, \quad \ldots \quad (3.)
\]
in which
\[ y_1 = \frac{dy}{dx}, \quad y_2 = \frac{d^2y}{dx^2}, \quad \ldots, \]
and \( \varphi \) stands for
\[ \varphi(x, y, y_1, \ldots, y_n). \]
The above are, in fact, the partial differential equations which we should obtain by Prop. I. as the equivalents of the system of ordinary differential equations,
\[ dV = \varphi \, dx, \]
\[ dy = y_1 \, dx, \quad dy_1 = y_2 \, dx, \quad \ldots \quad dy_{n-1} = y_n \, dx. \]
If we write
\[ \left( \frac{d}{dx} \right) = \frac{d}{dx} + y_1 \frac{d}{dy} + y_2 \frac{d}{dy_1} + \cdots + y_n \frac{d}{dy_{n-1}}, \]
the above partial differential equations become
\[ \left( \frac{d}{dx} \right) V = \varphi \ldots (I.), \quad \frac{dV}{dy_n} = 0 \ldots (II.) \]
The combination of (I.) with (II.) (by the theorem (1.)), then of (I.) with the result, and so on, gives a series of equations which may be thus expressed:
\[
\frac{dV}{dy_{n-1}} = \Delta_n \varphi, \quad \ldots \quad (III.)
\]
\[
\frac{dV}{dy_{n-2}} = \Delta_{n-1} \varphi, \quad \ldots \quad (IV.)
\]
\[
\frac{dV}{dy} = \Delta_1 \varphi, \quad \ldots \quad (V.)
\]
\[ 0 = \Delta_0 \varphi, \quad \ldots \quad (VI.) \]
in which
\[ \Delta_r = \frac{d}{dy_r} - \left( \frac{d}{dx} \right) \frac{d}{dy_{r+1}} + \left( \frac{d}{dx} \right)^2 \frac{d}{dy_{r+2}} - \&c., \]
\[ \Delta_0 = \frac{d}{dy} - \left( \frac{d}{dx} \right) \frac{d}{dy_1} + \left( \frac{d}{dx} \right)^2 \frac{d}{dy_2} - \&c. \]
The combination of (II.) with (III.) ... (V.) gives the series of conditions
\[
\frac{d}{dy_n} \Delta_n \varphi = 0, \quad \frac{d}{dy_n} \Delta_{n-1} \varphi = 0, \quad \ldots, \quad \frac{d}{dy_n} \Delta_1 \varphi = 0. \quad \ldots \quad \ldots \quad \ldots \quad \text{(VII.)}
\]
The conditions of integrability are expressed by (VI.) and (VII.). These satisfied, the equations (III.), (IV.), \ldots (V.) show that \( \varphi\,dx \) can be expressed as an exact differential with respect to \( x, y, y_1, \ldots, y_{n-1} \) in the form
\[
\varphi\,dx = (\varphi - y_1 \Delta_1 \varphi - y_2 \Delta_2 \varphi - \cdots - y_n \Delta_n \varphi)\,dx \\
+ \Delta_1 \varphi\,dy + \Delta_2 \varphi\,dy_1 + \cdots + \Delta_n \varphi\,dy_{n-1},
\]
a result first established by M. Sarrus.
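Sarrus's conditions are easy to exercise in the simplest case \( n = 1 \), where \( \Delta_1 \varphi = \frac{d\varphi}{dy_1} \) and \( \Delta_0 \varphi = \frac{d\varphi}{dy} - \left(\frac{d}{dx}\right)\frac{d\varphi}{dy_1} \). The following SymPy sketch takes the exact integrand \( \varphi = y\,y_1 \), so that \( \varphi\,dx = y\,dy = d\left(\tfrac{1}{2}y^2\right) \); the example is a modern illustration, not drawn from the paper:

```python
import sympy as sp

x, y, y1, y2 = sp.symbols('x y y1 y2')

def Dx(f):
    # Total derivative (d/dx) = d/dx + y1 d/dy + y2 d/dy1, for n = 1
    return sp.diff(f, x) + y1 * sp.diff(f, y) + y2 * sp.diff(f, y1)

phi = y * y1                     # phi dx = y dy = d(y^2/2): exact

Delta1 = sp.diff(phi, y1)        # Delta_1 phi
Delta0 = sp.diff(phi, y) - Dx(sp.diff(phi, y1))   # expression in (VI.)

print(sp.simplify(Delta0), sp.diff(Delta1, y1))   # both 0: (VI.), (VII.) hold
# Sarrus's exact form: (phi - y1*Delta1) dx + Delta1 dy reduces to y dy
print(sp.simplify(phi - y1 * Delta1), Delta1)     # 0 and y
```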