On the Differential Equations of Dynamics. A Sequel to a Paper on Simultaneous Differential Equations

Author(s) George Boole
Year 1863
Volume 153
Pages 18
Language en
Journal Philosophical Transactions of the Royal Society of London

Full Text (OCR)

XXII. On the Differential Equations of Dynamics. A sequel to a Paper on Simultaneous Differential Equations. By GEORGE BOOLE, F.R.S., Professor of Mathematics in Queen's College, Cork. Received December 22, 1862,—Read January 22, 1863.

JACOBI, in a posthumous memoir* which has only this year appeared, has developed two remarkable methods (agreeing in their general character, but differing in details) of solving non-linear partial differential equations of the first order, and has applied them in connexion with that theory of the Differential Equations of Dynamics which was established by Sir W. R. HAMILTON in the Philosophical Transactions for 1834–35. The knowledge, indeed, that the solution of the equations of a dynamical problem is involved in the discovery of a single central function, defined by a single partial differential equation of the first order, does not appear to have been hitherto (perhaps it will never be) very fruitful in practical results. But in the order of those speculative truths which enable us to perceive unity where it was unperceived before, its place is a high and enduring one.

Given a system of dynamical equations, it is possible, as JACOBI had shown, to construct a partial differential equation such that from any complete primitive of that equation, i.e. from any solution of it involving a number of constants equal to the number of the independent variables, all the integrals of the dynamical equations can be deduced by processes of differentiation. Hitherto, however, the discovery of the complete primitive of a partial differential equation has been supposed to require a previous knowledge of the integrals of a certain auxiliary system of ordinary differential equations; and in the case under consideration that auxiliary system consisted of the dynamical equations themselves. JACOBI's new methods do not require the preliminary integration of the auxiliary system. They require, instead of this, the solution of certain systems of simultaneous linear partial differential equations. To this object therefore the method developed in my recent paper on Simultaneous Differential Equations† might be applied. But the systems of equations in question are of a peculiar form. They admit, in consequence of this, of a peculiar analysis. And JACOBI's methods of solving them are in fact different from the one given by me, though connected with it by remarkable relations. He does indeed refer to the general problem of the solution of simultaneous partial differential equations, and this in language which does not even suppose the condition of linearity. He says, "Non ego hic immorabor quæstioni generali quando et quomodo duabus compluribusve æquationibus differentialibus partialibus una eademque functione satisfieri possit, sed ad casum propositum investigationem restringam. Quippe quo præclaris uti licet artificiis ad integrationem expediendam commodis." [I shall not dwell here upon the general question of when and how one and the same function can satisfy two or more partial differential equations, but shall restrict the investigation to the case proposed; in which, indeed, one may employ excellent artifices convenient for expediting the integration.] But he does not, as far as I have been able to discover, discuss any systems of equations more general than those which arise in the immediate problem before him. It is only very lately that I have come to understand the nature of the relation between the general method of solving simultaneous partial differential equations, published in my recent memoir, and the particular methods of Jacobi.

* Nova methodus æquationes differentiales partiales primi ordinis inter numerum variabilium quemcunque propositas integrandi (Crelle's Journal, Band lx. p. 1).
† Philosophical Transactions for 1862.
But in arriving at this knowledge I have been led to perceive how, by a combination of my own method with one of those of Jacobi, the problem may be solved in a new and perhaps better, certainly a remarkable way. This new way forms the subject of the present paper*. Before proceeding to explain it, it will be necessary to describe Jacobi's methods, to refer to my own already published, and to point out the nature of the connexion between them.

The system of linear partial differential equations being given, and it being required to find a simultaneous solution of them, Jacobi, according to his first method, transforms these equations by a change of variables; he directs that an integral of the first equation of the system be found; he shows that, in virtue of the form of the equations and the relation which connects the first and second of them, other integrals of the first equation may be derived by mere processes of differentiation from the integral already found; and he shows how, by means of such integrals of the first equation, a common integral of the first and second equations of the system may be found. This common integral is a function of the above integrals of the first equation, and of certain variables, and its form is obtained by the solution of a differential equation between two variables—a differential equation which is in general non-linear, and of an order equal to the total number of integrals previously found. An integral of the first two equations of the given system having been obtained, Jacobi shows that by a second process of derivation, followed by the solution of a second differential equation, an integral which will satisfy simultaneously the first three equations of the system may be found; and thus he proceeds by alternate processes of derivation and integration till an integral satisfying all the equations of the given system together is obtained. In these alternations, it is the function of the processes of derivation to give new integrals of the equations already satisfied; it is the function of the processes of integration to determine the functional forms by which the remaining equations may in their turn be satisfied.

Jacobi's second method does not require a preliminary transformation of the equations; but the process of derivation, by which from an integral of the first equation other integrals are derived by virtue of the relation connecting the first and second equations, is carried further than in his first method. It is indeed carried on until no new integrals arise. The difference of result is, that the common integral of the first and second partial differential equations is determined as a function solely of the integrals known, and not as a mixed function of integrals and variables. But its form is determined, as before, by the solution of a differential equation. All the subsequent processes of derivation and integration are of a similar nature.

On the other hand, the method of my former paper applied to the same problem leads, by a certain process of derivation, to a system of ordinary differential equations equal in number to the number of possible integrals, and, without being individually exact, susceptible of combination into exact differential equations. The integration of these would give all the common integrals of the given system.

* It was stated by me, but without demonstration, at the Meeting of the British Association in Cambridge in October of the present year (1862).
All these methods possess, with reference to the requirements of the actual case, a superfluous generality. A single common integral of the system is all that is required. Now the chief result to be established in this paper is the following: If, with Jacobi, according to his second method, we suppose one integral of the untransformed first partial differential equation to be found, if by means of this we construct according to a certain type a new partial differential equation, if to the system thus increased we apply the process of my former paper, continually deriving new partial differential equations until, no more arising, the system is complete, then, under a certain condition hereafter to be explained, a common integral of all the equations of the complete system, and therefore of the original system which is contained in it, may be found by the integration of a single differential equation susceptible of being made integrable by means of a factor.

When the condition referred to is not satisfied, the results obtained may be applied to the transforming of the original system of equations into an equivalent system of the same character, but containing one equation less than before. To this system we may apply the same process as to the former, and shall arrive at the same final alternative, viz. either the satisfying of the system by a function determined by the solution of a single differential equation susceptible of being made exact by a factor, or the power of reducing it to an equivalent system containing still one equation less. In the most unfavourable case the common integral sought will be ultimately given by the solution of a single final partial differential equation.

The condition in question is grounded on the theoretical connexion which exists between the process of derivation of partial differential equations developed in my former paper, and the process of derivation of integrals involved in Jacobi's methods. In the actual problem, and in virtue of the peculiar form of the partial differential equations employed, these two processes are coordinate, and it may even be said equivalent. The equations of that problem, if expressed in the symbolical form
\[ \Delta_1 P = 0, \quad \Delta_2 P = 0, \quad \ldots \quad \Delta_m P = 0, \]
satisfy identically the condition
\[ (\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0. \]
Each of the given equations is moreover of the form
\[ \sum_i \left( \frac{dH}{dx_i} \frac{dP}{dp_i} - \frac{dH}{dp_i} \frac{dP}{dx_i} \right) = 0, \]
\( H \) being a given function of the independent variables \( x_1, x_2, \ldots x_n, p_1, p_2, \ldots p_n \). It is usual to represent the first member of the above equation in the form \([H, P]\). If we adopt this notation, the entire system of equations may be expressed in the form
\[ [H_1, P] = 0, \quad [H_2, P] = 0, \quad \ldots \quad [H_m, P] = 0. \]
Lastly, though this is not a new condition, being already implied in the former ones, the functions \( H_1, H_2, \ldots H_m \) are all common integrals of the system. It is the object of the problem to find a new common integral.
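[In modern terminology \([H, P]\) is the Poisson bracket. The following minimal sketch, no part of Boole's text and assuming the SymPy library, computes the bracket for a hypothetical \(H\) and \(P\) in two pairs of variables and verifies that the chosen \(P\) is an integral of \([H, P] = 0\).]

```python
# Illustrative sketch (not from the paper): Boole's bracket
# [H, P] = sum_i (dH/dx_i * dP/dp_i - dH/dp_i * dP/dx_i),
# the Poisson bracket in modern terminology. Requires SymPy.
import sympy as sp

x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2')

def bracket(H, P, xs, ps):
    """Return [H, P] for variables xs = (x_i) and momenta ps = (p_i)."""
    return sum(sp.diff(H, x) * sp.diff(P, p) - sp.diff(H, p) * sp.diff(P, x)
               for x, p in zip(xs, ps))

# Hypothetical choices (ours, not Boole's): an oscillator-like H and an
# angular-momentum-like P.
H = (p1**2 + p2**2) / 2 + (x1**2 + x2**2) / 2
P = x1 * p2 - x2 * p1

# [H, P] simplifies to 0, so P = c is an integral of [H, P] = 0.
print(sp.simplify(bracket(H, P, (x1, x2), (p1, p2))))   # prints 0
```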
With reference to such a system the connexion above referred to is as follows:—If we obtain a new integral \( K \) of the first equation of the system, and, associating this with the functions \( H \), form with it a new equation of the same type as the former ones, so that, corresponding to the series of integrals of the first equation
\[ H_1, H_2, \ldots H_m, K, \]
we have the series of partial differential equations
\[ [H_1, P] = 0, \quad [H_2, P] = 0, \quad \ldots \quad [H_m, P] = 0, \quad [K, P] = 0, \]
and then if to the former series we apply Jacobi's process for the derivation of integrals, to the latter the process of derivation of partial differential equations of my last paper, carrying each to its fullest extent, the result will be that to each new partial differential equation arising from the one will correspond a new integral (of the first partial differential equation) arising from the other.

The theory now to be developed is founded upon the inquiry whether it is possible to satisfy the completed system of partial differential equations by a function of the completed system of the Jacobian integrals, i.e. to determine a common integral of the completed series of equations as a function of the completed series of integrals of the first equation. The reader is reminded that by the completed series of integrals is meant, not all the integrals of the first partial differential equation that exist, but all that arise from a certain root integral by a certain process of derivation, together with the root integral itself. Now the answer here to be established to this inquiry is the following. The first of the partial differential equations necessarily will, and others may, be satisfied by the proposed function irrespectively of its form. If the number of equations of the completed system which is not thus satisfied be odd (this is the condition in question), the form of the function which will satisfy all is determinable by the solution of a single differential equation of the first order, capable of being made integrable by means of a factor.

I have entered into some details upon the history of the problem, partly because I believe the theory of simultaneous partial differential equations to be an important one, but partly also in order that I might render that just tribute to the great German mathematician which I was unable to pay before*. Jacobi certainly originated the theory of systems in which the condition
\[ (\Delta_i \Delta_j - \Delta_j \Delta_i)P = 0 \]
is satisfied. I learn from his distinguished pupil Dr. Borchardt, that this subject was fully discussed by him in lectures delivered at Königsberg in 1842–43, my informant having been one of the auditors. The present memoir is but a contribution to that theory. And though it does not appear that Jacobi has discussed the theory of systems not satisfying the above condition, it is just to observe that the more general theory is at least to this extent contained in the particular one, that the recognition of the above equation as the condition of a mode of integration naturally suggests the inquiry how that equation is to be interpreted when not satisfied as a condition; and in the answer to this question the general theory is contained.

* The results of my former researches were communicated to Dr. Salmon on February 4, 1862. At that time I had not seen Jacobi's researches, which indeed could only just have been published. A note on the connexion of the two which accompanied my paper was cancelled in the proof sheet in the prospect of that fuller explanation which I then hoped to be able to give, and now give.

**Proposition I.** The solution of any non-linear partial differential equation of the first order may be made to depend upon that of certain linear systems of partial differential equations of the first order.
Although what is contained in this proposition is already known, it is presented here for the sake of unity and to avoid inconvenient references.

First, the solution of any partial differential equation is reducible to that of one in which the dependent variable does not explicitly appear. For let \(z\) be the dependent variable, \(x_1, x_2, \ldots x_n\) the independent ones, and \(p_1, p_2, \ldots p_n\) the differential coefficients of \(z\) with respect to these. Represent the given equation by
\[ f(x_1, x_2, \ldots x_n, z, p_1, p_2, \ldots p_n) = 0, \quad \ldots \quad (1.) \]
and let
\[ \varphi(x_1, x_2, \ldots x_n, z) = 0 \quad \ldots \quad (2.) \]
be any relation between the primitive variables which satisfies the given equation. Differentiating with respect to \(x_1, x_2, \ldots x_n\) respectively, and representing the first member of (2.) by \(\varphi\), we have
\[ \frac{d\varphi}{dx_1} + p_1 \frac{d\varphi}{dz} = 0, \quad \ldots \quad \frac{d\varphi}{dx_n} + p_n \frac{d\varphi}{dz} = 0. \quad \ldots \quad (3.) \]
Hence determining \(p_1, \ldots p_n\), and substituting in (1.), we have
\[ f\left(x_1, x_2, \ldots x_n, z, \, -\frac{\dfrac{d\varphi}{dx_1}}{\dfrac{d\varphi}{dz}}, \, \ldots \, -\frac{\dfrac{d\varphi}{dx_n}}{\dfrac{d\varphi}{dz}}\right) = 0, \]
a partial differential equation of the first order in which \(\varphi\) is the dependent variable, and \(x_1, x_2, \ldots x_n, z\) the independent ones. Here \(\varphi\) does not explicitly appear.

It will suffice, therefore, to develope the theory of the solution of partial differential equations not explicitly involving the dependent variable. The general form of such an equation is
\[ f(x_1, x_2, \ldots, x_n; p_1, p_2, \ldots, p_n) = 0. \]
Now, to solve this equation, we must find values of \( p_1, p_2, \ldots, p_n \) as functions of \( x_1, x_2, \ldots, x_n \), which, while satisfying it, will make the equation
\[ dz - p_1 dx_1 - p_2 dx_2 - \ldots - p_n dx_n = 0 \quad \ldots \quad (4.) \]
admit of a single integral containing \( n \) arbitrary constants. This integral will constitute a complete primitive, a form of solution from which all other forms can be derived.

Let \( H_1 = 0 \) represent the given equation, and
\[ H_1 = 0, \quad H_2 = c_2, \quad \ldots \quad H_n = c_n \]
the system of equations from which \( p_1, p_2, \ldots, p_n \) are to be thus found. Their values will contain the \( n-1 \) arbitrary constants \( c_2, c_3, \ldots, c_n \); and the remaining constant will be introduced by the integration of (4.).

Let \( U \) and \( V \) represent any two of the functions \( H_1, H_2, \ldots, H_n \). Then differentiating the corresponding equations with respect to any one of the independent variables \( x_i \), explicitly as it appears, and implicitly as involved in \( p_1, p_2, \ldots, p_n \), we have
\[ \frac{dU}{dx_i} + \sum_j \frac{dU}{dp_j} \frac{dp_j}{dx_i} = 0, \quad \ldots \quad (5.) \]
\[ \frac{dV}{dx_i} + \sum_j \frac{dV}{dp_j} \frac{dp_j}{dx_i} = 0, \quad \ldots \quad (6.) \]
the summations extending from \( j=1 \) to \( j=n \). Multiply the first of these equations by \( \frac{dV}{dp_i} \), and sum the result from \( i=1 \) to \( i=n \).
Then
\[ \sum_i \frac{dU}{dx_i} \frac{dV}{dp_i} + \sum_i \sum_j \frac{dU}{dp_j} \frac{dV}{dp_i} \frac{dp_j}{dx_i} = 0; \]
or, since \( \frac{dp_j}{dx_i} = \frac{dp_i}{dx_j} \),
\[ \sum_i \frac{dU}{dx_i} \frac{dV}{dp_i} + \sum_i \sum_j \frac{dU}{dp_j} \frac{dV}{dp_i} \frac{dp_i}{dx_j} = 0. \]
Interchange in the second term \( i \) and \( j \), since the limits of summation are the same with respect to both; then
\[ \sum_i \frac{dU}{dx_i} \frac{dV}{dp_i} + \sum_j \sum_i \frac{dU}{dp_i} \frac{dV}{dp_j} \frac{dp_j}{dx_i} = 0, \]
or
\[ \sum_i \frac{dU}{dx_i} \frac{dV}{dp_i} + \sum_i \sum_j \frac{dU}{dp_i} \frac{dV}{dp_j} \frac{dp_j}{dx_i} = 0, \]
whence, reducing the second term by (6.),
\[ \sum_i \left( \frac{dU}{dx_i} \frac{dV}{dp_i} - \frac{dU}{dp_i} \frac{dV}{dx_i} \right) = 0. \]
Representing the first member of this equation in the form \([U, V]\), it appears that the functions \(H_1, H_2, \ldots, H_n\) must satisfy mutually the \(\frac{n(n-1)}{2}\) equations of which the type is
\[ [H_i, H_j] = 0; \]
and by these conditions \(H_2, H_3, \ldots, H_n\) must be deduced from \(H_1\), which is known*.

* On the sufficiency of these conditions see Professor Donkin's excellent memoir "On a Class of Differential Equations, including those of Dynamics," Philosophical Transactions, 1854.

Now all these conditions will be satisfied if we determine \(H_2\) to satisfy the single linear partial differential equation
\[ [H_1, H_2] = 0, \]
then \(H_3\) to satisfy the binary system
\[ [H_1, H_3] = 0, \quad [H_2, H_3] = 0, \]
then \(H_4\) to satisfy the ternary system
\[ [H_1, H_4] = 0, \quad [H_2, H_4] = 0, \quad [H_3, H_4] = 0, \]
and finally, \(H_n\) to satisfy the system of \(n-1\) equations
\[ [H_1, H_n] = 0, \quad [H_2, H_n] = 0, \quad \ldots \quad [H_{n-1}, H_n] = 0. \]
All these are cases of the general problem of determining a function \(P\) to satisfy the simultaneous equations
\[ [H_1, P] = 0, \quad [H_2, P] = 0, \quad \ldots \quad [H_m, P] = 0, \quad \ldots \quad (7.) \]
\(H_1, H_2, \ldots, H_m\) being known functions mutually satisfying the conditions
\[ [H_i, H_j] = 0; \]
for each \(P\) thus determined gives the succeeding function \(H_{m+1}\), and so on till all are found. This is the problem with which we are concerned. But before proceeding to its solution we must notice certain properties of the symbolic combination \([U, V]\).

**Properties of \([U, V]\).**

1st. It is evident from the definition that
\[ [U, V] = -[V, U], \quad \ldots \quad (8.) \]
\[ [U, U] = 0. \quad \ldots \quad (9.) \]

2nd. The case sometimes arises in which one of the functions under the symbol \([\ ]\) is itself a function of several other functions of the independent variables. Suppose \(V\) to be a function of \(v_1, v_2, \ldots, v_q\); then it may be shown that
\[ [U, V] = [U, v_1] \frac{dV}{dv_1} + [U, v_2] \frac{dV}{dv_2} + \ldots + [U, v_q] \frac{dV}{dv_q}, \quad \ldots \quad (10.) \]
a theorem of especial use in transformations. For
\[ [U, V] = \sum_i \left( \frac{dU}{dx_i} \frac{dV}{dp_i} - \frac{dU}{dp_i} \frac{dV}{dx_i} \right) \]
\[ = \sum_i \left\{ \frac{dU}{dx_i} \left( \frac{dV}{dv_1} \frac{dv_1}{dp_i} + \cdots + \frac{dV}{dv_q} \frac{dv_q}{dp_i} \right) - \frac{dU}{dp_i} \left( \frac{dV}{dv_1} \frac{dv_1}{dx_i} + \cdots + \frac{dV}{dv_q} \frac{dv_q}{dx_i} \right) \right\} \]
\[ = \frac{dV}{dv_1} [U, v_1] + \frac{dV}{dv_2} [U, v_2] + \cdots + \frac{dV}{dv_q} [U, v_q]. \]
In like manner, if \( U \) be a function of \( u_1, u_2, \ldots, u_q \), which are themselves functions of the independent variables, then
\[ [U, V] = [u_1, V] \frac{dU}{du_1} + [u_2, V] \frac{dU}{du_2} + \cdots + [u_q, V] \frac{dU}{du_q}. \quad \ldots \quad (11.) \]

3rd. The important theorem
\[ [U, [V, W]] + [V, [W, U]] + [W, [U, V]] = 0 \quad \ldots \quad (12.) \]
is implicitly contained in the results of the following Proposition. All these are known relations.

**Proposition II.** To determine the result of the application of the general theorem of derivation to any system of partial differential equations of the form
\[ [u_1, P] = 0, \quad [u_2, P] = 0, \quad \ldots \quad [u_m, P] = 0, \quad \ldots \quad (1.) \]
\( u_1, u_2, \ldots, u_m \) being given functions of the independent variables \( x_1, x_2, \ldots, x_n, p_1, p_2, \ldots, p_n \).

We adopt in the expression of this proposition \( u_1, u_2, \ldots, u_m \) in the place of \( H_1, H_2, \ldots, H_m \), because we suppose the functions given to be unrestrained by connecting conditions. If we represent any two equations of the system by
\[ [U, P] = 0, \quad [V, P] = 0, \]
and then give to these the symbolical forms
\[ \Delta_i P = 0, \quad \Delta_j P = 0, \]
we shall have
\[ \Delta_i = \sum_r \left( \frac{dU}{dx_r} \frac{d}{dp_r} - \frac{dU}{dp_r} \frac{d}{dx_r} \right), \]
\[ \Delta_j = \sum_s \left( \frac{dV}{dx_s} \frac{d}{dp_s} - \frac{dV}{dp_s} \frac{d}{dx_s} \right), \]
the summations with respect to \( r \) and \( s \) extending in each case from 1 to \( n \) inclusive. Hence, collecting into separate groups the terms which contain differential coefficients of \( P \) with respect to the \( p \)'s and with respect to the \( x \)'s (the second differential coefficients of \( P \) disappearing from the difference), we have
\[ (\Delta_i \Delta_j - \Delta_j \Delta_i)P = \sum_r \sum_s \left\{ \frac{dU}{dx_r} \frac{d^2V}{dx_s\,dp_r} - \frac{dU}{dp_r} \frac{d^2V}{dx_s\,dx_r} - \frac{dV}{dx_r} \frac{d^2U}{dx_s\,dp_r} + \frac{dV}{dp_r} \frac{d^2U}{dx_s\,dx_r} \right\} \frac{dP}{dp_s} \]
\[ - \sum_r \sum_s \left\{ \frac{dU}{dx_r} \frac{d^2V}{dp_s\,dp_r} - \frac{dU}{dp_r} \frac{d^2V}{dp_s\,dx_r} - \frac{dV}{dx_r} \frac{d^2U}{dp_s\,dp_r} + \frac{dV}{dp_r} \frac{d^2U}{dp_s\,dx_r} \right\} \frac{dP}{dx_s}. \]
Now the first aggregate of terms is
\[ \sum_s \frac{dP}{dp_s}\, \frac{d}{dx_s} \sum_r \left( \frac{dU}{dx_r} \frac{dV}{dp_r} - \frac{dU}{dp_r} \frac{dV}{dx_r} \right) = \sum_r \frac{dP}{dp_r}\, \frac{d[U, V]}{dx_r}, \]
and in a similar way the second aggregate of terms reduces to
\[ - \sum_r \frac{dP}{dx_r}\, \frac{d[U, V]}{dp_r}. \]
Hence
\[ (\Delta_i \Delta_j - \Delta_j \Delta_i)P = \sum_r \left( \frac{d[U, V]}{dx_r} \frac{dP}{dp_r} - \frac{d[U, V]}{dp_r} \frac{dP}{dx_r} \right). \]
Therefore
\[ (\Delta_i \Delta_j - \Delta_j \Delta_i)P = [[U, V], P]. \quad \ldots \quad (2.) \]
Thus the theorem of derivation applied to the two equations
\[ [U, P] = 0, \quad [V, P] = 0 \]
gives
\[ [[U, V], P] = 0, \]
an equation of the same general form as the equations from which it was derived*.

* This is not a new theorem. It is but another form of the theorem (12.) Prop. I. It has also been explicitly given by Jacobi and Clebsch.
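[The following check, again no part of the original and assuming SymPy, verifies equation (2.) for one pair of variables and arbitrarily chosen sample functions \(U\), \(V\), \(P\); the choices are ours.]

```python
# Illustrative check (not from the paper) of Proposition II., equation (2.):
# (Delta_i Delta_j - Delta_j Delta_i) P = [[U, V], P], verified here for a
# single pair of variables (x, p) and hypothetical sample functions.
import sympy as sp

x, p = sp.symbols('x p')

def bracket(U, V):
    """Boole's [U, V] for one pair of variables."""
    return sp.diff(U, x) * sp.diff(V, p) - sp.diff(U, p) * sp.diff(V, x)

# Delta_i P is exactly [U, P], so the commutator is built from brackets.
U, V, P = x**3 * p, x * p**2, x**2 + p**2   # hypothetical choices
lhs = bracket(U, bracket(V, P)) - bracket(V, bracket(U, P))
rhs = bracket(bracket(U, V), P)
print(sp.simplify(lhs - rhs))   # prints 0, confirming (2.)
```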
Apply this to the separate pairs of equations in the given system
\[ [u_1, P] = 0, \quad [u_2, P] = 0, \quad \ldots \quad [u_m, P] = 0, \]
and to the equations thus generated, and so on in succession till no new equations arise, and the result will be a system of the form
\[ [u_1, P] = 0, \quad [u_2, P] = 0, \quad \ldots \quad [u_q, P] = 0 \quad (q > m). \quad \ldots \quad (3.) \]
This constitutes the completed system, and it possesses, in accordance with the doctrine of my former paper, the property that, if we form the equation
\[ \frac{dP}{dx_1} dx_1 + \ldots + \frac{dP}{dx_n} dx_n + \frac{dP}{dp_1} dp_1 + \ldots + \frac{dP}{dp_n} dp_n = 0, \]
eliminate thence \( q \) of the differential coefficients of \( P \), and equate to 0 the coefficients of the \( 2n-q \) which remain, we shall obtain a system of \( 2n-q \) ordinary differential equations susceptible of reduction to the exact form, and yielding by integration the common integrals of the system given.

The completed system is one of independent equations in which \( P \) is brought into successive relations with a series of functions \( u_1, u_2, \ldots u_q \). It is important to show that the independence of the equations involves the independence of the functions, as also that if the equations were dependent the functions would be so too.

1st. The independence of the equations involves the independence of the functions. For suppose the equations independent and the functions not independent, so that one of them, \( u_q \), could be expressed as a function of the others, \( u_1, u_2, \ldots u_{q-1} \). Then by (11.), Prop. I.,
\[ [u_q, P] = [u_1, P] \frac{du_q}{du_1} + [u_2, P] \frac{du_q}{du_2} + \cdots + [u_{q-1}, P] \frac{du_q}{du_{q-1}}, \]
which would imply that the equation
\[ [u_q, P] = 0 \]
was not independent of the other equations of the system, as by hypothesis it is. Hence the functions are independent.

2ndly. If the equations of a system of the form (3.) are algebraically dependent, then are the functions \( u_1, u_2, \ldots u_q \) dependent. The algebraic dependence of the equations implies, in consequence of the linearity of their developed forms, the existence of at least one relation of the form
\[ [u_q, P] = \lambda_1 [u_1, P] + \lambda_2 [u_2, P] + \cdots + \lambda_{q-1} [u_{q-1}, P], \quad \ldots \quad (4.) \]
\( \lambda_1, \lambda_2, \ldots \lambda_{q-1} \) being, on the most general supposition, functions of the independent variables. Now the functions \( u_1, u_2, \ldots u_{q-1} \) are either dependent or independent. If dependent, the proposition is granted; if independent, then, equating the coefficients of \( \frac{dP}{dx_i} \) in the developed members of (4.), we have
\[ \frac{du_q}{dp_i} = \lambda_1 \frac{du_1}{dp_i} + \lambda_2 \frac{du_2}{dp_i} + \cdots + \lambda_{q-1} \frac{du_{q-1}}{dp_i}; \]
and equating the coefficients of \( \frac{dP}{dp_i} \) in the same equation, we have
\[ \frac{du_q}{dx_i} = \lambda_1 \frac{du_1}{dx_i} + \lambda_2 \frac{du_2}{dx_i} + \cdots + \lambda_{q-1} \frac{du_{q-1}}{dx_i}. \]
Hence if we represent the series of variables \( x_1, \ldots x_n, p_1, \ldots p_n \) taken in any order by \( y_1, y_2, \ldots y_{2n} \), both the above equations will be included in the general one,
\[ \frac{du_q}{dy_i} = \lambda_1 \frac{du_1}{dy_i} + \lambda_2 \frac{du_2}{dy_i} + \cdots + \lambda_{q-1} \frac{du_{q-1}}{dy_i}. \]
Now, \( u_1, u_2, \ldots u_{q-1} \) being by hypothesis independent, \( u_q \) will be expressible as, at most, a function of \( u_1, u_2, \ldots u_{q-1} \) and \( 2n-q+1 \) of the original variables. Regard then \( u_q \) as a function of \( u_1, u_2, \ldots u_{q-1}, y_q, y_{q+1}, \ldots y_{2n} \). Of course \( u_1, u_2, \ldots u_{q-1} \) will be functionally independent with respect to the quantities \( y_1, y_2, \ldots y_{q-1} \) which they replace. Then for all values of \( i \), from 1 to \( q-1 \) inclusive, the last equation becomes
\[ \frac{du_q}{du_1} \frac{du_1}{dy_i} + \frac{du_q}{du_2} \frac{du_2}{dy_i} + \cdots + \frac{du_q}{du_{q-1}} \frac{du_{q-1}}{dy_i} = \lambda_1 \frac{du_1}{dy_i} + \lambda_2 \frac{du_2}{dy_i} + \cdots + \lambda_{q-1} \frac{du_{q-1}}{dy_i}, \]
or
\[ \frac{du_1}{dy_i} \left( \frac{du_q}{du_1} - \lambda_1 \right) + \frac{du_2}{dy_i} \left( \frac{du_q}{du_2} - \lambda_2 \right) + \cdots + \frac{du_{q-1}}{dy_i} \left( \frac{du_q}{du_{q-1}} - \lambda_{q-1} \right) = 0, \quad \ldots \quad (5.) \]
while for all values of \( i \) greater than \( q - 1 \) it becomes
\[ \frac{du_1}{dy_i} \left( \frac{du_q}{du_1} - \lambda_1 \right) + \frac{du_2}{dy_i} \left( \frac{du_q}{du_2} - \lambda_2 \right) + \cdots + \frac{du_{q-1}}{dy_i} \left( \frac{du_q}{du_{q-1}} - \lambda_{q-1} \right) + \frac{du_q}{dy_i} = 0. \quad \ldots \quad (6.) \]
From the system of \( q - 1 \) linear equations of the first type (5.), we find
\[ \frac{du_q}{du_1} - \lambda_1 = 0, \quad \frac{du_q}{du_2} - \lambda_2 = 0, \quad \ldots \quad \frac{du_q}{du_{q-1}} - \lambda_{q-1} = 0, \quad \ldots \quad (7.) \]
unless the determinant
\[ \begin{vmatrix} \frac{du_1}{dy_1} & \frac{du_2}{dy_1} & \cdots & \frac{du_{q-1}}{dy_1} \\ \frac{du_1}{dy_2} & \frac{du_2}{dy_2} & \cdots & \frac{du_{q-1}}{dy_2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{du_1}{dy_{q-1}} & \frac{du_2}{dy_{q-1}} & \cdots & \frac{du_{q-1}}{dy_{q-1}} \end{vmatrix} \]
vanish identically. But this would, by the known property of determinants, imply that \( u_1, u_2, \ldots u_{q-1} \) are not, as functions of \( y_1, y_2, \ldots y_{q-1} \), independent, which we have seen that they are. Hence the system (7.) is true. Reducing by it the system represented by (6.), we have
\[ \frac{du_q}{dy_q} = 0, \quad \frac{du_q}{dy_{q+1}} = 0, \quad \ldots \quad \frac{du_q}{dy_{2n}} = 0. \]
It results therefore that \( u_q \) will be simply a function of \( u_1, u_2, \ldots u_{q-1} \).

From these conclusions united we see that, if from any system of equations of the form
\[ [u_1, P] = 0, \quad [u_2, P] = 0, \quad \ldots \quad [u_m, P] = 0 \]
we separate the functions \( u_1, u_2, \ldots u_m \), derive from these all possible independent functions of the form \([u_i, u_j]\), and, representing these by \( u_{m+1}, u_{m+2}, \ldots \), continue with the aid of these the process of derivation until no new functions can be formed, then, if we represent the completed series of functions by \( u_1, u_2, \ldots u_q \), the corresponding system of equations
\[ [u_1, P] = 0, \quad [u_2, P] = 0, \quad \ldots \quad [u_q, P] = 0 \]
will be precisely that system to which the theorem of derivation of my former paper, applied to the given system of equations, would lead.

**Proposition III.** To integrate the system of simultaneous partial differential equations
\[ [H_1, P] = 0, \quad [H_2, P] = 0, \quad \ldots \quad [H_m, P] = 0, \]
it being given as a condition that \( H_1, H_2, \ldots H_m \) satisfy mutually all relations of the form
\[ [H_i, H_j] = 0. \]
This we have seen to be the general problem upon the solution of which the integration of any non-linear partial differential equation depends. We learn by the last proposition that the system of equations given is already a complete system. For, applying the theorem of derivation to any two equations contained in it, we have a result of the form
\[ [[H_i, H_j], P] = 0; \]
but this is identically satisfied by virtue of the connected condition.

Instead, however, of solving the equation as a complete system, let us, with Jacobi, deduce a new integral of the first partial differential equation of the system, i.e. an integral distinct from \( H_1, H_2, \ldots H_m \), which, in virtue of the condition given, are already integrals of that equation—in fact, common integrals of the system. For if we make \( P = H_i \) in any of the equations, that equation will be identically satisfied. Represent by
\[ u = c \]
this new integral. It may happen that it proves on trial to satisfy all the other equations. In that case a common integral is found and the problem is solved. Suppose, however, the new integral of the first equation not to be a common integral, and, constructing the equation \([u, P] = 0\), incorporate it with the given system of equations so as to form the larger system
\[ [H_1, P] = 0, \quad [H_2, P] = 0, \quad \ldots \quad [H_m, P] = 0, \quad [u, P] = 0. \quad \ldots \quad (1.) \]
Any common integral of this system will be also a common integral of the given system which is contained in this one. Such common integral, if it exist, we propose to seek.

First let us complete the system just formed by the last proposition. The completed system will be of the form
\[ [u_1, P] = 0, \quad [u_2, P] = 0, \quad \ldots \quad [u_q, P] = 0, \quad \ldots \quad (2.) \]
in which \( u_1, u_2, \ldots u_{m+1} \) are for symmetry employed to represent \( H_1, H_2, \ldots H_m, u \), and \( u_{m+2}, u_{m+3}, \ldots u_q \) are new functions. Now all the functions \( u_1, u_2, \ldots u_q \) are integrals of the first partial differential equation of the system; for, the system (2.) being a formal consequence of the system (1.), if we substitute \( H_1 \) for \( P \), the system
\[ [u_1, H_1] = 0, \quad [u_2, H_1] = 0, \quad \ldots \quad [u_q, H_1] = 0 \]
will be seen to be a consequence of the system
\[ [H_1, H_1] = 0, \quad [H_2, H_1] = 0, \quad \ldots \quad [u, H_1] = 0. \]
But the latter system is true, therefore the former; therefore, since \([u_i, H_1] = -[H_1, u_i]\), the system
\[ [H_1, u_1] = 0, \quad [H_1, u_2] = 0, \quad \ldots \quad [H_1, u_q] = 0 \quad \ldots \quad (3.) \]
is true; therefore \( u_1, u_2, \ldots u_q \) are integrals of the first equation of the system. They are independent, Prop. II. And as the process by which such of them as are new have been formed is identical in character with that which Jacobi makes use of, differing only in the extent to which it is here applied, I shall speak of them as Jacobian integrals, or Jacobian functions.

Let us inquire whether it is possible to satisfy the completed system of equations (2.) by a function of the completed series of Jacobian integrals of the first equation of the system. Suppose, then, \( P \) a function of \( u_1, u_2, \ldots u_q \). The equation \([u_i, P] = 0\), which is the type of the equations of the system to be integrated, assumes, by (10.) Prop. I., the form
\[ [u_i, u_1] \frac{dP}{du_1} + [u_i, u_2] \frac{dP}{du_2} + \cdots + [u_i, u_q] \frac{dP}{du_q} = 0. \quad \ldots \quad (4.) \]
Hence since by (3.)
\[ [u_1, u_j] = [H_1, u_j] = 0, \]
the first equation of the system is identically satisfied, as it ought to be. Let us, to make the problem more definite, suppose for the present that none of the other equations are identically satisfied.
Then, giving to \( i \) the successive values 2, 3, \ldots \( q \), and observing that always
\[ [u_i, u_i] = 0, \quad [u_i, u_1] = 0, \]
we see that the system of equations represented by (4.) will be
\[ \begin{aligned} [u_2, u_3] \frac{dP}{du_3} + [u_2, u_4] \frac{dP}{du_4} + \cdots + [u_2, u_q] \frac{dP}{du_q} &= 0, \\ [u_3, u_2] \frac{dP}{du_2} + [u_3, u_4] \frac{dP}{du_4} + \cdots + [u_3, u_q] \frac{dP}{du_q} &= 0, \\ [u_4, u_2] \frac{dP}{du_2} + [u_4, u_3] \frac{dP}{du_3} + \cdots + [u_4, u_q] \frac{dP}{du_q} &= 0, \\ \cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\quad & \\ [u_q, u_2] \frac{dP}{du_2} + [u_q, u_3] \frac{dP}{du_3} + \cdots + [u_q, u_{q-1}] \frac{dP}{du_{q-1}} &= 0. \end{aligned} \quad \ldots \quad (5.) \]
Now the coefficients \([u_i, u_j]\) are on the most general supposition functions of \( u_1, u_2, \ldots u_q \), since the system (2.) is complete. Hence, if from any two equations \([u_i, P] = 0\), \([u_j, P] = 0\), we derive an equation
\[ [[u_i, u_j], P] = 0, \]
that equation will be algebraically dependent on the equations of the system, and therefore, by Prop. II., \([u_i, u_j]\) will be functionally dependent on \( u_1, u_2, \ldots u_q \).

The system (5.) is then one of partial differential equations, in which \( u_1, u_2, \ldots u_q \) are the sole independent variables. If we express that system symbolically in the form
\[ \Delta_2 P = 0, \quad \Delta_3 P = 0, \quad \ldots \quad \Delta_q P = 0, \]
all equations of the form
\[ (\Delta_i \Delta_j - \Delta_j \Delta_i) P = 0 \]
derived from it will be simply what the corresponding equations derived from (2.), when \( x_1 \ldots x_n, p_1 \ldots p_n \) are the independent variables, would become under the limitation that \( P \) is a function of \( u_1, u_2, \ldots u_q \). They are therefore not new equations, but combinations of the old ones. It follows, therefore, that if from the equation
\[ \frac{dP}{du_1} du_1 + \frac{dP}{du_2} du_2 + \cdots + \frac{dP}{du_q} du_q = 0 \quad \ldots \quad (6.) \]
we eliminate, by means of the system (5.), as many as possible of the differential coefficients of \( P \), and equate to 0 the coefficients of the remaining ones, we shall obtain a system of differential equations of the first order, reducible to the exact form, and giving on integration all the common integrals of the original system which are expressible as functions of \( u_1, u_2, \ldots u_q \).

First let us suppose the number of equations in the system (5.) to be odd. Then are those equations not independent. For the determinant of the system is
\[ \begin{vmatrix} 0 & [u_2, u_3] & [u_2, u_4] & \cdots & [u_2, u_q] \\ [u_3, u_2] & 0 & [u_3, u_4] & \cdots & [u_3, u_q] \\ [u_4, u_2] & [u_4, u_3] & 0 & \cdots & [u_4, u_q] \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ [u_q, u_2] & [u_q, u_3] & [u_q, u_4] & \cdots & 0 \end{vmatrix} \]
and this, since \([u_i, u_j] = -[u_j, u_i]\), belongs to the class of symmetrical skew determinants, of which it is a known property that when the number of rows or columns is odd the determinant vanishes; and this, by another known theorem, indicates that one of the corresponding linear equations is dependent on the others. The system (5.) is therefore in general equivalent to a system of \( q-2 \) independent equations determining the ratios of the \( q-1 \) differential coefficients \( \frac{dP}{du_2}, \frac{dP}{du_3}, \ldots \frac{dP}{du_q} \), in the form
\[ \frac{1}{U_2} \frac{dP}{du_2} = \frac{1}{U_3} \frac{dP}{du_3} = \cdots = \frac{1}{U_q} \frac{dP}{du_q}, \]
\( U_2, U_3, \ldots U_q \) being known functions of \( u_1, u_2, \ldots u_q \).
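[The determinant fact invoked here is easily verified in a small case. A minimal sketch, ours rather than Boole's and assuming SymPy, showing that a symmetrical skew determinant of odd order vanishes identically:]

```python
# Illustrative sketch (not from the paper): a skew determinant of odd
# order n vanishes, since det(A) = det(A^T) = det(-A) = (-1)^n det(A),
# which forces det(A) = 0 when n is odd. Checked here for n = 3.
import sympy as sp

a, b, c = sp.symbols('a b c')   # stand-ins for the brackets [u_i, u_j]
A = sp.Matrix([
    [0,  a,  b],
    [-a, 0,  c],
    [-b, -c, 0],
])
print(A.det())   # prints 0, whatever the values of a, b, c
```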
Eliminating by these \( q-2 \) equations the \( q-2 \) differential coefficients \( \frac{dP}{du_3}, \frac{dP}{du_4}, \ldots \frac{dP}{du_q} \) from (6.), we have
\[ \frac{dP}{du_1} du_1 + \frac{dP}{du_2} \times \frac{1}{U_2} \left( U_2 du_2 + U_3 du_3 + \cdots + U_q du_q \right) = 0, \]
whence, equating to 0 the coefficients of \( \frac{dP}{du_1} \) and \( \frac{dP}{du_2} \), we have
\[ du_1 = 0, \]
\[ U_2 du_2 + U_3 du_3 + \cdots + U_q du_q = 0. \]
The first of these gives
\[ u_1 = c, \]
a common integral already known. The second equation being reducible by means of the first to the exact form, it follows that, if in that equation we regard \( u_1 \) as constant, it will admit of an integral of the form
\[ f(u_1, u_2, \ldots u_q) = \text{const.}; \]
and this is the new common integral sought.

We supposed that only the first of the differential equations of the system (2.) was satisfied independently of the form of \( P \) as a function of \( u_1, u_2, \ldots u_q \). Suppose, however, that any number of equations
\[ [u_a, P] = 0, \quad [u_b, P] = 0, \ldots \]
are identically satisfied, independently of the form of \( P \). Then
\[ u_a = c, \quad u_b = c', \ldots \]
are common integrals of the system; and if any one of these be not contained in the series of known common integrals,
\[ u_1 = c_1, \quad u_2 = c_2, \quad \ldots \quad u_m = c_m, \]
it will be a new common integral, and the problem is solved. But whether such is or is not the case, if the number of equations
\[ [u_\alpha, P] = 0, \quad [u_\beta, P] = 0, \ldots \]
not identically satisfied be odd, we shall, proceeding as before, arrive at an equation
\[ \frac{dP}{du_a} du_a + \frac{dP}{du_b} du_b + \cdots + \frac{dP}{du_\alpha} \times \frac{1}{U_\alpha} \left( U_\alpha du_\alpha + U_\beta du_\beta + \cdots \right) = 0, \]
resolvable therefore into
\[ du_a = 0, \quad du_b = 0, \ldots \]
\[ U_\alpha du_\alpha + U_\beta du_\beta + \cdots = 0. \]
Then, obtaining from the first line of this system the already known integrals
\[ u_a = c, \quad u_b = c', \ldots \]
we shall be able to reduce by these the equation of the second line to an integrable form, and thence obtain the common integral sought.

Generally, then, when the number of equations of the completed system
\[ [u_1, P] = 0, \quad [u_2, P] = 0, \quad \ldots \quad [u_q, P] = 0 \]
which, on the supposition that \( P \) is a function of \( u_1, u_2, \ldots u_q \), is not satisfied independently of the form of that function is odd, the form which will satisfy all is determinable by the solution of a single differential equation of the first order*.

* Communicated to the British Association in October 1862,—without demonstration.

June 27, 1863.

Secondly, suppose the number of equations of the above system not satisfied independently of the form of \( P \) as a function of \( u_1, u_2, \ldots u_q \) to be even. In this case no form can generally be assigned to that function that will cause all the equations of the system to be satisfied. But neither is it required that all should be satisfied. It is only necessary to satisfy the original system,
\[ [u_1, P] = 0, \quad [u_2, P] = 0, \quad \ldots \quad [u_m, P] = 0, \]
which is also a complete system.
If in this system we make \( P \) a function of \( u_1, u_2, \ldots u_q \), we get
\[ \begin{aligned} [u_2, u_3] \frac{dP}{du_3} + [u_2, u_4] \frac{dP}{du_4} + \cdots + [u_2, u_q] \frac{dP}{du_q} &= 0, \\ [u_3, u_2] \frac{dP}{du_2} + [u_3, u_4] \frac{dP}{du_4} + \cdots + [u_3, u_q] \frac{dP}{du_q} &= 0, \\ \cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\quad & \\ [u_m, u_2] \frac{dP}{du_2} + \cdots + [u_m, u_q] \frac{dP}{du_q} &= 0, \end{aligned} \]
in which the coefficients \([u_i, u_j]\) are all functions of \( u_1, u_2, \ldots u_q \), and in the integration of which \( u_1 \) is to be regarded as a constant. I shall prove that to this system of \( m-1 \) partial differential equations the same treatment may be applied as to the original system of \( m \) partial differential equations.

Let \( v \) be any new integral of the first equation of the above system. As such, \( v \) will be a function of \( u_1, u_2, \ldots u_q \), and will therefore be also an integral of the prior equation
\[ [u_1, P] = 0. \]
Forming the equation \([v, P] = 0\), incorporate it with the original system, so as to form the larger system
\[ [H_1, P] = 0, \quad [H_2, P] = 0, \quad \ldots \quad [H_m, P] = 0, \quad [v, P] = 0, \]
and, expressing \( H_1, H_2, \ldots H_m, v \) as functions of the original variables, complete the system by the theorem of derivation. Let the result be
\[ [v_1, P] = 0, \quad [v_2, P] = 0, \quad \ldots \quad [v_r, P] = 0, \]
in which \( v_1, v_2, \ldots v_{m+1} \) are identical with \( H_1, H_2, \ldots H_m, v \), and let it be sought to satisfy this system by regarding \( P \) as a function of \( v_1, v_2, \ldots v_r \). The two first equations will be satisfied identically; the others will take the form
\[ \begin{aligned} [v_3, v_4] \frac{dP}{dv_4} + \cdots + [v_3, v_r] \frac{dP}{dv_r} &= 0, \\ [v_4, v_3] \frac{dP}{dv_3} + \cdots + [v_4, v_r] \frac{dP}{dv_r} &= 0, \\ \cdots\cdots\cdots\cdots\cdots\cdots\quad & \\ [v_r, v_3] \frac{dP}{dv_3} + [v_r, v_4] \frac{dP}{dv_4} + \cdots + [v_r, v_{r-1}] \frac{dP}{dv_{r-1}} &= 0; \end{aligned} \]
and if the number of these equations, or more generally the number of these equations not identically satisfied, be odd, a common integral will be found by the solution of a single differential equation of the first order, of the form
\[ V_3 dv_3 + V_4 dv_4 + \cdots + V_r dv_r = 0. \]
In solving that equation \( v_1, v_2 \) are to be regarded as constant. If the number be even, we must proceed as before, and we shall thus reduce the original system to a system of the same character, but possessing only \( m-2 \) equations. In the most unfavourable case, the emerging systems being always even, the common integral will ultimately be found by the solution of a single final partial differential equation.

I have supposed that, of the symmetrical equations which arise from the introduction of the Jacobian integrals as independent variables, only one is dependent when the number of equations is odd, and none when even. But exceptions may conceivably arise from the splitting of the determinant into component factors each of the skew-symmetrical form, and the corresponding resolution of the system of equations into partial systems each complete in itself. To such partial systems, and not to the general system, the law is to be applied.
The connexion of the common integral with the odd skew determinant does not suffer exception even in those cases in which the integral is obtained without a final integration, or is primarily given. Thus for each of the common integrals \( u_1, u_2, \ldots u_m \) the determinant reduces to a single vanishing term on the diagonal. The possibility of cases of real exception seems to be a subject well worthy of inquiry.

Postscript.—September 24, 1863. Since communicating the above I have discovered that the number of independent equations of the final symmetrical system (5.) is necessarily even. This confirms the foregoing observations. It follows that, whether we take all or some of the Jacobian functions \( u_1, u_2, \ldots u_q \), if there exist one common integral of the system expressible by means of those functions, the determinant of the system will either be, or will contain as a component factor, an odd symmetrical skew determinant.
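[As a closing illustration, and no part of Boole's text: the following hypothetical sketch, assuming SymPy, mechanizes the derivation process of Proposition II. for a small example of our own choosing. Brackets \([u_i, u_j]\) are adjoined to a given series of functions until no functionally new function arises, functional independence being tested by the rank of the matrix of first differential coefficients.]

```python
# Hypothetical sketch (ours, not Boole's procedure): complete a series of
# functions under the bracket [U, V], adjoining each derived bracket that
# is functionally independent of the functions already held.
import sympy as sp

x1, x2, p1, p2 = sp.symbols('x1 x2 p1 p2')
PAIRS = ((x1, p1), (x2, p2))
VARS = (x1, x2, p1, p2)

def bracket(U, V):
    return sp.expand(sum(sp.diff(U, x) * sp.diff(V, p)
                         - sp.diff(U, p) * sp.diff(V, x)
                         for x, p in PAIRS))

def jacobian_rank(fs):
    # Rank of the matrix of differential coefficients du_i/dy_j.
    return sp.Matrix([[sp.diff(f, v) for v in VARS] for f in fs]).rank()

def complete(series):
    """Adjoin brackets until no functionally new function arises."""
    series = list(series)
    grew = True
    while grew:
        grew = False
        for i in range(len(series)):
            for j in range(i + 1, len(series)):
                w = bracket(series[i], series[j])
                if w != 0 and jacobian_rank(series + [w]) > jacobian_rank(series):
                    series.append(w)
                    grew = True
    return series

# Illustrative data: H = x1*p1 + x2*p2 with the trial integral u = p1.
# Here [H, u] = p1 depends functionally on u, so the series is already
# complete; richer choices of H and u would enlarge it.
print(complete([x1*p1 + x2*p2, p1]))
```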