On a Class of Differential Equations, Including Those Which Occur in Dynamical Problems.—Part I

Author(s) W. F. Donkin
Year 1854
Volume 144
Pages 44
Language en
Journal Philosophical Transactions of the Royal Society of London

Full Text (OCR)

V. On a Class of Differential Equations, including those which occur in Dynamical Problems.—Part I. By W. F. Donkin, M.A., F.R.S., F.R.A.S., Savilian Professor of Astronomy in the University of Oxford. Received February 23,—Read February 23, 1854.

The Analytical Theory of Dynamics, as it exists at present, is due mainly to the labours of Lagrange, Poisson, Sir W. R. Hamilton, and Jacobi; whose researches on this subject present a series of discoveries hardly paralleled, for their elegance and importance, in any other branch of mathematics. The following investigations in the same department do not pretend to make any important step in advance; though I should not of course have presumed to lay them before the Society, if I had not hoped they might be found to possess some degree of novelty and interest*. [* It may be useful to specify the parts to which I should principally refer as containing what is, relatively to my own reading on the subject, new; and in the present day it can hardly be required of any one to profess more than this kind of originality. These are—the theorem (3.), art. 1; the results of arts. 2 to 4; the formulæ (19.), art. 7; the general form of the theorem (26.), art. 10; the processes and results of arts. 12 to 14; the generalization of Sir W. Hamilton's transformation of the dynamical equations, arts. 17, 18; the demonstration of Poisson's theorem, arts. 21, 22; the contents of art. 25; the method of obtaining elliptic elements, arts. 27 to 30; the contents of arts. 34 to 36; the solution of the problem of rotation, Section III.] Of previous publications with which I am acquainted, those most nearly on the same subject are, Sir W. R. Hamilton's two memoirs "On a General Method in Dynamics" in the Philosophical Transactions; Jacobi's Memoir in the 17th vol. of Crelle's Journal, "Ueber die Reduction der partiellen Differentialgleichungen," &c.; and M. Bertrand's "Mémoire sur l'intégration des équations différentielles de la Mécanique," in Liouville's Journal (1852). The relation in which the present essay stands to the papers just named will be apparent to those who are acquainted with them, and it would be useless to attempt to make it intelligible to others. Oxford, Feb. 21, 1854.

Section I.

1. Let $x_1, x_2, \ldots, x_n$ be $n$ variables, connected by $n$ relations with $n$ other variables $y_1, y_2, \ldots, y_n$; so that each variable of either set may be considered as a function of the variables of the other set. Suppose then $$y_i = \varphi_i(x_1, x_2, \ldots, x_n);$$ this equation would become identical if \( x_1, x_2, \ldots, x_n \), in its second member, were expressed in terms of \( y_1, y_2, \ldots, y_n \); hence, differentiating each side, on this hypothesis, first with respect to \( y_i \), and then with respect to \( y_j \), we obtain \[ 1 = \frac{dy_i}{dx_1} \frac{dx_1}{dy_i} + \frac{dy_i}{dx_2} \frac{dx_2}{dy_i} + \cdots + \frac{dy_i}{dx_n} \frac{dx_n}{dy_i}, \quad \ldots \quad (1.) \] \[ 0 = \frac{dy_i}{dx_1} \frac{dx_1}{dy_j} + \frac{dy_i}{dx_2} \frac{dx_2}{dy_j} + \cdots + \frac{dy_i}{dx_n} \frac{dx_n}{dy_j}, \quad \ldots \quad (2.) \] where \( j \) is any index different from \( i \). These theorems are given by Jacobi in his memoir "De Determinantibus functionalibus." They are however only particular cases of more general theorems, which may be investigated as follows.
If we represent by \( i, j, k, \ldots \); \( p, q, r, \ldots \) any two determinate sets of \( m \) indices each, selected out of the series 1, 2, 3, \ldots, n, then the determinant formed with the \( m^2 \) differential coefficients \[ \frac{dy_i}{dx_p}, \frac{dy_i}{dx_q}, \ldots; \quad \frac{dy_j}{dx_p}, \frac{dy_j}{dx_q}, \ldots; \quad \text{&c.} \] possesses properties remarkably analogous to those of a simple differential coefficient. This analogy was pointed out by Jacobi, and has been further developed by M. Bertrand in his "Mémoire sur le Déterminant d'un système de Fonctions" (Liouville's Journal, 1851). It appears to me that such functional determinants might be appropriately and conveniently denoted by a symbol analogous to that of a common differential coefficient; thus \[ \frac{d(y_i, y_j, y_k, \ldots)}{d(x_p, x_q, x_r, \ldots)}, \quad \ldots \quad (D.) \] and I shall adopt this notation in the present paper. For example, \[ \frac{d(u, v)}{d(x, y)} \] would represent the determinant \[ \frac{du}{dx} \frac{dv}{dy} - \frac{du}{dy} \frac{dv}{dx}. \] [The expression (D.) is not a mere arbitrary symbol, but, like a simple differential coefficient, is a real fraction. For if we denote by \[ d(x_p, x_q, x_r, \ldots) \] the determinant formed with the \( m^2 \) quantities \[ d_1x_p, \; d_1x_q, \; d_1x_r, \ldots \\ d_2x_p, \; d_2x_q, \; d_2x_r, \ldots \\ \ldots \\ d_mx_p, \; d_mx_q, \; d_mx_r, \ldots \] and attribute a corresponding meaning to \[ d(y_i, y_j, y_k, \ldots), \] where \( d_1, d_2, \ldots \) are symbols denoting distinct and independent sets of variations, so that \[ d_k y_i = \frac{dy_i}{dx_1}\, d_k x_1 + \frac{dy_i}{dx_2}\, d_k x_2 + \cdots + \frac{dy_i}{dx_n}\, d_k x_n, \] then it follows from well-known properties of determinants (as M. Bertrand has shown) that the complete functional determinant formed with the \( n^2 \) differential coefficients \[ \frac{dy_1}{dx_1}, \frac{dy_1}{dx_2}, \ldots; \quad \frac{dy_2}{dx_1}, \frac{dy_2}{dx_2}, \ldots; \quad \text{&c.} \] is equal to the quotient of the two determinants which I propose to denote by \[ d(y_1, y_2, y_3, \ldots, y_n), \quad d(x_1, x_2, x_3, \ldots, x_n), \] and moreover that the partial functional determinant formed with the \( m^2 \) terms \[ \frac{dy_i}{dx_p}, \frac{dy_i}{dx_q}, \ldots; \quad \frac{dy_j}{dx_p}, \frac{dy_j}{dx_q}, \ldots; \quad \text{&c.} \] is equal to the quotient of the two partial determinants \[ d(y_i, y_j, y_k, \ldots), \quad d(x_p, x_q, x_r, \ldots), \] the differentials of \( y_i \), &c. being taken on the hypothesis that all the differentials of the \( x \)-variables are \(=0\), except those of the set \( x_p, x_q, x_r, \ldots \). Thus the expression (D.) is a real fraction, provided its numerator and denominator be interpreted in a manner exactly analogous to that in which the numerator and denominator of an ordinary total or partial differential coefficient are interpreted.]

This being premised, let \( u_1, u_2, \ldots, u_m \) be \( m \) functions of any or all of the functions \( y_1, y_2, \ldots, y_n \) (\( m \) being supposed not greater than \( n \)), so that \( u_1, u_2, \ldots \) are functions of \( x_1, x_2, \ldots \) through \( y_1, y_2, \ldots \). Let any selected sets of \( m \) indices out of the series 1, 2, \ldots, \( n \), be denoted, for greater clearness, by \( \alpha_1, \alpha_2, \ldots, \alpha_m \); \( \beta_1, \beta_2, \ldots, \beta_m \); &c. Then the general theorem analogous to \[ du_i = \frac{du_i}{dy_1} dy_1 + \frac{du_i}{dy_2} dy_2 + \cdots \]
may be expressed as follows: \[ d(u_{\alpha_1}, u_{\alpha_2}, \ldots, u_{\alpha_m}) = \Sigma_\beta \left[ \frac{d(u_{\alpha_1}, u_{\alpha_2}, \ldots, u_{\alpha_m})}{d(y_{\beta_1}, y_{\beta_2}, \ldots, y_{\beta_m})} \, d(y_{\beta_1}, y_{\beta_2}, \ldots, y_{\beta_m}) \right] \] (the summation on the second side referring only to the indices \( \beta \), and extending to every combination of \( m \) out of the \( n \) numbers 1, 2, \ldots, \( n \)). In like manner, the theorem analogous to \[ \frac{du_i}{dx_j} = \frac{du_i}{dy_1} \frac{dy_1}{dx_j} + \frac{du_i}{dy_2} \frac{dy_2}{dx_j} + \cdots \] is \[ \frac{d(u_{\alpha_1}, \ldots, u_{\alpha_m})}{d(x_{\gamma_1}, \ldots, x_{\gamma_m})} = \Sigma_\beta \left[ \frac{d(u_{\alpha_1}, \ldots, u_{\alpha_m})}{d(y_{\beta_1}, \ldots, y_{\beta_m})} \cdot \frac{d(y_{\beta_1}, \ldots, y_{\beta_m})}{d(x_{\gamma_1}, \ldots, x_{\gamma_m})} \right]. \] These two theorems (expressed in a different notation) may be found in the memoirs above cited. But the following, which we shall have occasion to employ hereafter, has not, so far as I am aware, been explicitly stated. Inasmuch as \( \frac{dy_i}{dy_i} = 1 \), \( \frac{dy_i}{dy_j} = 0 \) (\( j \) being different from \( i \)), it follows that the determinant represented by \[ \frac{d(y_{\alpha_1}, y_{\alpha_2}, \ldots, y_{\alpha_m})}{d(y_{\beta_1}, y_{\beta_2}, \ldots, y_{\beta_m})} \quad \ldots \quad (E.) \] is \(=1\) if \( \beta_1, \beta_2, \ldots, \beta_m \) be the same combination of indices as \( \alpha_1, \alpha_2, \ldots, \alpha_m \), but is \(=0\) in every other case. (For in the first case the determinant is formed with 1, 0, 0, \ldots; 0, 1, 0, \ldots; 0, 0, 1, \ldots; &c.; but if there be one index \( \beta_i \) which is not contained in the series \( \alpha_1, \alpha_2, \) &c., then one row of terms in the determinant will consist wholly of zeros.) Now considering \( y_1, y_2, \) &c. as functions of \( x_1, x_2, \) &c., and again considering these latter as functions of \( y_1, y_2, \) &c. given by the inverse equations, we have, by the preceding theorem, for the value of the determinant (E.) above written, the expression \[ \nabla_m = \Sigma_\gamma \left[ \frac{d(y_{\alpha_1}, \ldots, y_{\alpha_m})}{d(x_{\gamma_1}, \ldots, x_{\gamma_m})} \cdot \frac{d(x_{\gamma_1}, \ldots, x_{\gamma_m})}{d(y_{\beta_1}, \ldots, y_{\beta_m})} \right] \] (where \( \alpha_1, \ldots, \alpha_m \); \( \beta_1, \ldots, \beta_m \) are two determinate sets of \( m \) out of the \( n \) indices, and the summation with respect to the indices \( \gamma \) extends to every combination of \( m \) out of the \( n \)). Consequently, \[ \nabla_m = 1 \text{ or } \nabla_m = 0, \] according as the series of indices \( \beta_1, \ldots, \beta_m \) is, or is not, the same combination as \( \alpha_1, \ldots, \alpha_m \). (I suppose, for convenience, that when the two combinations are the same, the arrangement is the same in each; otherwise the value of \( \nabla_m \) may be \(-1\).) This is the theorem in question. If we put \( m = 1 \), we obtain the equations (1.) and (2.) given at the beginning of this article. If we put \( m = n \), the expression \( \nabla_n \) reduces itself to the product of the two determinants formed respectively with the complete sets of differential coefficients \( \frac{dy_i}{dx_j} \), &c., \( \frac{dx_i}{dy_j} \), &c., the value of which product is \(=1\), as is well known.
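A modern symbolic check of this theorem (an editorial sketch, not part of the original memoir) may be made with a linear substitution $y = Ax$, for which every coefficient $\frac{dy_i}{dx_j}$ is the constant $A_{ij}$ and $\frac{dx_i}{dy_j}$ is $(A^{-1})_{ij}$, so that the partial functional determinants become minors of $A$ and of its inverse:

```python
import sympy as sp
from itertools import combinations

# Linear example y = A x (an arbitrary invertible matrix of my own choosing):
# dy_i/dx_j = A[i, j] and dx_i/dy_j = A**-1[i, j].
A = sp.Matrix([[2, 1, 0], [1, 3, 1], [0, 1, 2]])
B = A.inv()
n = 3

def minor(M, rows, cols):
    # the partial functional determinant, here simply a minor of M
    return M.extract(list(rows), list(cols)).det()

for m in (1, 2):                      # m = 1 reproduces equations (1.) and (2.)
    for alpha in combinations(range(n), m):
        for beta in combinations(range(n), m):
            nabla = sum(minor(A, alpha, gamma) * minor(B, gamma, beta)
                        for gamma in combinations(range(n), m))
            assert nabla == (1 if alpha == beta else 0)
print("nabla_m = 1 or 0 verified for m = 1, 2 with n = 3")
```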
As an illustration, it may be useful to exhibit the theorem in the case of \( m = 2 \), as expressed by the common notation. Namely, \[ \nabla_2 = \Sigma \left[ \left( \frac{dy_p}{dx_i} \frac{dy_q}{dx_j} - \frac{dy_p}{dx_j} \frac{dy_q}{dx_i} \right) \left( \frac{dx_i}{dy_\alpha} \frac{dx_j}{dy_\beta} - \frac{dx_j}{dy_\alpha} \frac{dx_i}{dy_\beta} \right) \right] = 1, \text{ or } = 0, \] according as \( \alpha, \beta \) are, or are not, the same as \( p, q \). Here \( \alpha, \beta \); \( p, q \) are two determinate pairs of indices, and the summation refers to \( i, j \), extending to every binary combination.

2. Theorem.—Retaining the suppositions made at the beginning of the last article, let X be a given function of \( x_1, x_2, \ldots, x_n \); and let us further suppose that the equations by which \( y_1, y_2, \ldots, y_n \) are determined as functions of \( x_1 \), &c., are \[ y_1 = \frac{dX}{dx_1}, \quad y_2 = \frac{dX}{dx_2}, \quad \ldots, \quad y_n = \frac{dX}{dx_n}, \quad \ldots \quad (5.) \] so that \[ \frac{dy_i}{dx_j} = \frac{dy_j}{dx_i}. \] If we transform the equations (1.), (2.), art. 1, by this condition, we obtain the \( n \) equations \[ \begin{aligned} &\frac{dy_1}{dx_1} \frac{dx_1}{dy_i} + \frac{dy_2}{dx_1} \frac{dx_2}{dy_i} + \cdots + \frac{dy_n}{dx_1} \frac{dx_n}{dy_i} = 0, \\ &\frac{dy_1}{dx_2} \frac{dx_1}{dy_i} + \frac{dy_2}{dx_2} \frac{dx_2}{dy_i} + \cdots + \frac{dy_n}{dx_2} \frac{dx_n}{dy_i} = 0, \\ &\qquad\qquad\vdots \\ &\frac{dy_1}{dx_i} \frac{dx_1}{dy_i} + \frac{dy_2}{dx_i} \frac{dx_2}{dy_i} + \cdots + \frac{dy_n}{dx_i} \frac{dx_n}{dy_i} = 1, \\ &\qquad\qquad\vdots \end{aligned} \] (the second side being 1 for the equation belonging to the index \( i \), and 0 for every other). If these equations be added, after multiplying them respectively by \[ \frac{dx_1}{dy_j}, \frac{dx_2}{dy_j}, \ldots, \frac{dx_n}{dy_j}, \] the sum of the first members reduces itself by virtue of the equations (1.), (2.), to \( \frac{dx_j}{dy_i} \); whilst the second side consists of the single term \( \frac{dx_i}{dy_j} \). We have then \[ \frac{dx_i}{dy_j} = \frac{dx_j}{dy_i}; \] or, in other words, if \( x_1, x_2, \ldots, x_n \) be found from the system of equations (5.) in terms of \( y_1, y_2, \ldots, y_n \), the resulting expressions are the partial differential coefficients of a certain function of \( y_1, y_2, \ldots, y_n \), so that the system inverse to (5.) is of the form \[ x_1 = \frac{dY}{dy_1}, \quad x_2 = \frac{dY}{dy_2}, \quad \ldots, \quad x_n = \frac{dY}{dy_n}. \quad \ldots \quad (6.) \] The relation between X and Y is easily found as follows. The equations (5.) and (6.) give \[ dX = y_1 dx_1 + y_2 dx_2 + \cdots + y_n dx_n, \] \[ dY = x_1 dy_1 + x_2 dy_2 + \cdots + x_n dy_n; \] whence, by addition, \[ d(X+Y) = d(x_1 y_1 + x_2 y_2 + \cdots + x_n y_n), \] and therefore \[ X + Y = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n \quad \ldots \quad (7.) \] (omitting the arbitrary constant, which might of course be added). The actual value of Y will then be \[ Y = -(X) + (x_1)y_1 + (x_2)y_2 + \cdots + (x_n)y_n, \quad \ldots \quad (8.) \] in which the brackets indicate that \(x_1, x_2, \ldots x_n\) are to be expressed in terms of \(y_1, y_2, \ldots y_n\), so that \(Y\) may be a function of the latter variables only. It is easy to show \textit{à posteriori} that the expression (8.) verifies the equations (6.), but I pass on to some further considerations. (See note at the end of Section II.)
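The passage from (5.) to (6.) and (7.) admits of an immediate symbolic verification; the following sketch (an editorial example with a convex quadratic $X$ of my own choosing, not from the paper) exhibits it for $n = 2$:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

X = x1**2/2 + x1*x2 + x2**2                      # a convex quadratic (my choice)
eqs = [sp.Eq(y1, sp.diff(X, x1)), sp.Eq(y2, sp.diff(X, x2))]   # equations (5.)
sol = sp.solve(eqs, [x1, x2], dict=True)[0]      # the x's in terms of the y's

Y = (x1*y1 + x2*y2 - X).subs(sol)                # equation (7.): Y = sum x_i y_i - X

print(sp.simplify(sp.diff(Y, y1) - sol[x1]))     # 0 : dY/dy1 = x1, equation (6.)
print(sp.simplify(sp.diff(Y, y2) - sol[x2]))     # 0 : dY/dy2 = x2
print(sp.simplify(sp.diff(sol[x1], y2) - sp.diff(sol[x2], y1)))  # 0 : dx_i/dy_j symmetric
```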
3. Suppose the function \(X\) involves explicitly, besides the variables \(x_1, x_2\), &c., any other quantity \(p\), so that the expressions \((x_1), (x_2)\), &c. (or the values of \(x_1, x_2, \ldots\) in terms of \(y_1, y_2\), &c.) will also involve \(p\) explicitly; and we shall have \[ \frac{d(X)}{dp} = \frac{dX}{dp} + \frac{dX}{dx_1} \frac{d(x_1)}{dp} + \frac{dX}{dx_2} \frac{d(x_2)}{dp} + \cdots = \frac{dX}{dp} + y_1 \frac{d(x_1)}{dp} + y_2 \frac{d(x_2)}{dp} + \cdots. \] Now, differentiating the equation (8.) with respect to \(p\) (so far as it contains \(p\) explicitly), we obtain \[ \frac{dY}{dp} = -\frac{d(X)}{dp} + y_1 \frac{d(x_1)}{dp} + y_2 \frac{d(x_2)}{dp} + \cdots, \] which the equation above written reduces simply to \[ \frac{dX}{dp} + \frac{dY}{dp} = 0. \quad \ldots \quad (9.) \] In the particular case in which \(X\) is a homogeneous function of \(x_1, x_2, \ldots x_n\), and of \(m\) dimensions with respect to those variables, the equations (8.) and (9.) become \[ Y = (m-1)(X), \quad \frac{dX}{dp} + (m-1)\frac{d(X)}{dp} = 0, \quad \ldots \quad (10.) \] and it is easily seen that \(Y\) is also homogeneous and of \(\frac{m}{m-1}\) dimensions in \(y_1, y_2, \ldots y_n\).

4. The theorems (8.) and (9.) are cases of more general ones which are easily proved in a perfectly similar way, and which I shall therefore only enunciate. If, by means of the equations (5.), art. 2, we express a set of \(n\) out of the \(2n\) variables, consisting of \(r\) \(x\)'s and \(n-r\) \(y\)'s, of which no two indices are the same, for example, \[ (x_1, x_2, \ldots x_r, y_{r+1}, \ldots, y_n), \quad \ldots \quad (\alpha.) \] in terms of the remaining \(n\) variables, \[ (y_1, y_2, \ldots y_r, x_{r+1}, \ldots, x_n), \quad \ldots \quad (\beta.) \] then, taking \[ Q = -(X) + (x_1)y_1 + (x_2)y_2 + \cdots + (x_r)y_r \] (in which the brackets indicate that the variables of the set \((\alpha.)\) are to be expressed in terms of those of the set \((\beta.)\), so that \(Q\) is a function of the latter set), we shall have \[ \frac{dQ}{dy_i} = x_i \text{ from } i=1 \text{ to } i=r, \] \[ \frac{dQ}{dx_j} = -y_j \text{ from } j=r+1 \text{ to } j=n, \] and \[ \frac{dX}{dp} + \frac{dQ}{dp} = 0, \] as before*; but the equations corresponding to (10.) will not subsist unless X be homogeneous with respect to the \(r\) variables \( x_1, x_2, \ldots, x_r \). [* Although these theorems, as stated in the text, are more general in form than those of the preceding article, they may, under another point of view, be considered as particular cases of them, and may in this way be best established.]

5. Let us now suppose that the function X contains, explicitly, besides the n variables \( x_1, x_2, \ldots, x_n \), another variable t, and also n constants \( a_1, a_2, \ldots, a_n \); and that these last are contained in such a way that the n equations \[ \frac{dX}{da_1} = b_1, \quad \frac{dX}{da_2} = b_2, \quad \ldots, \quad \frac{dX}{da_n} = b_n \quad \ldots \quad (11.) \] would be algebraically sufficient to determine \( a_1, a_2, \ldots, a_n \) in terms of \( b_1, b_2, \ldots, b_n \), \( x_1, \ldots, x_n \), &c. Then taking \[ X_b = -(X) + (a_1)b_1 + (a_2)b_2 + \cdots + (a_n)b_n \] (the brackets indicating that \( a_1, a_2, \ldots, a_n \) are to be expressed as above supposed), we shall have, by the theorems of arts. 2 and 3, \[ \frac{dX_b}{db_1} = a_1, \quad \frac{dX_b}{db_2} = a_2, \quad \ldots, \quad \frac{dX_b}{db_n} = a_n; \quad \ldots \quad (12.) \] and also, for all values of i, \[ \frac{dX_b}{dx_i} = -\frac{dX}{dx_i} = -y_i; \quad \ldots \quad (13.) \] to which we may add \[ \frac{dX_b}{dt} = -\frac{dX}{dt} = \frac{dY}{dt}. \]
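The reciprocal relation (9.), and the homogeneous case (10.), may likewise be checked mechanically; a minimal editorial sketch (my own one-variable example, $X = p x^2/2$, homogeneous of $m = 2$ dimensions, so that $Y$ should equal $(X)$):

```python
import sympy as sp

x, y, p = sp.symbols('x y p', positive=True)

X = p * x**2 / 2                                  # my own example; m = 2
x_of_y = sp.solve(sp.Eq(y, sp.diff(X, x)), x)[0]  # (x) = y/p from y = dX/dx
Y = (x*y - X).subs(x, x_of_y)                     # equation (8.)

# equation (9.): dX/dp (x explicit) + dY/dp (y explicit) = 0
print(sp.simplify(sp.diff(X, p).subs(x, x_of_y) + sp.diff(Y, p)))   # 0

# equation (10.): Y = (m-1)(X) with m = 2
print(sp.simplify(Y - X.subs(x, x_of_y)))                            # 0
```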
Now assuming the \( 2n \) equations (5.) and (11.), namely (for all values of \(i\)), \[ \frac{dX}{dx_i} = y_i, \quad \frac{dX}{da_i} = b_i, \] we may suppose each of the \( 2n \) variables \( x_1, x_2, \ldots, y_1, y_2, \ldots \) to be expressed by means of them as a function of the \( 2n \) constants \( a_1, \ldots, a_n, b_1, \ldots, b_n \) and \(t\); or, conversely, each of the \( 2n \) constants to be expressed as a function of the variables \( x_1, \ldots, x_n, y_1, \ldots, y_n \) and \(t\). On the former hypothesis each of the variables \( x_1, \ldots, y_1, \ldots \) is given as an explicit, and on the latter as an implicit function of the single variable \(t\), which we will consider as independent; and total differentiation with respect to \(t\) will throughout this paper be denoted by accents, which will be used for no other purpose. Thus, \( p \) being any function of all the variables, we shall have \[ p' = \frac{dp}{dt} + \frac{dp}{dx_1}x'_1 + \cdots + \frac{dp}{dy_1}y'_1 + \cdots. \] For the rest, we may, when necessary, distinguish the meanings of the various partial differential coefficients employed, by referring to the hypotheses on which they are taken, and which I shall denote as follows:

**Hyp. I.**—The $2n$ variables $x_1, x_2, \ldots y_1, y_2, \ldots$ expressed as functions of $a_1, a_2, \ldots b_1, b_2, \ldots$ and $t$.

**Hyp. II.**—The $2n$ constants $a_1, a_2, \ldots b_1, b_2, \ldots$ expressed as functions of $x_1, x_2, \ldots y_1, y_2, \ldots$ and $t$.

**Hyp. III.**—The $n$ variables $y_1, y_2, \ldots y_n$ expressed as functions of the $n$ variables $x_1, x_2, \ldots x_n$, the $n$ constants $a_1, a_2, \ldots a_n$, and $t$ (as by equations (5.)).

**Hyp. IV.**—The $n$ constants $b_1, b_2, \ldots b_n$ expressed as functions of the $n$ variables $x_1, \ldots x_n$, the $n$ constants $a_1, \ldots a_n$, and $t$ (as by equations (11.)).

6. Differentiating totally the equation (11.), $$\frac{dX}{da_i} = b_i,$$ with respect to $t$, we obtain (observing that $\frac{d^2X}{da_i dx_j} = \frac{dy_j}{da_i}$ by virtue of the conditions (5.)) $$\frac{d^2X}{da_i dt} + \frac{dy_1}{da_i}x'_1 + \frac{dy_2}{da_i}x'_2 + \cdots + \frac{dy_n}{da_i}x'_n = 0$$ (where $\frac{dy_1}{da_i}$, &c. are taken on Hyp. III., art. 5). Now let $(Z)$ be a function of $x_1, \ldots x_n, t, a_1, \ldots a_n$, defined by the equation $$(Z) = -\frac{dX}{dt}; \quad \ldots \quad (15.)$$ the above equation then becomes $$\frac{d(Z)}{da_i} = \frac{dy_1}{da_i}x'_1 + \frac{dy_2}{da_i}x'_2 + \cdots + \frac{dy_n}{da_i}x'_n.$$ If this equation be multiplied by $\frac{da_i}{dy_j}$, and the result on each side summed with respect to $i$, it will be seen that the coefficients of $x'_1, x'_2, \ldots$ on the second side all vanish except that of $x'_j$, which reduces itself to 1 (see art. 1, equations (1.), (2.)); so that we have $$\frac{d(Z)}{da_1} \frac{da_1}{dy_j} + \frac{d(Z)}{da_2} \frac{da_2}{dy_j} + \cdots + \frac{d(Z)}{da_n} \frac{da_n}{dy_j} = x'_j.$$ Now the expression on the left of this equation is equivalent to $$\frac{dZ}{dy_j},$$ if by $Z$ (without brackets) we denote the result of substituting for $a_1, a_2, \ldots a_n$ in $(Z)$, their values in terms of all the variables (Hyp. II.), so that $Z$ is a function of the variables only.
We have then, finally (writing $i$ instead of $j$), $$x'_i = \frac{dZ}{dy_i}. \quad \ldots \quad (16.)$$ Again, we have (Hyp. III.) $$y'_i = \frac{dy_i}{dt} + \frac{dy_i}{dx_1}x'_1 + \frac{dy_i}{dx_2}x'_2 + \cdots,$$ which, by (5.), (15.) and (16.), becomes \[ y_i' + \frac{d(Z)}{dx_i} = \frac{dZ}{dy_1} \frac{dy_i}{dx_1} + \frac{dZ}{dy_2} \frac{dy_i}{dx_2} + \cdots; \] but it is plain that \[ \frac{d(Z)}{dx_i} = \frac{dZ}{dx_i} + \frac{dZ}{dy_1} \frac{dy_1}{dx_i} + \frac{dZ}{dy_2} \frac{dy_2}{dx_i} + \cdots \] (since \( (Z) \) would be derived from \( Z \) by substituting in the latter the expressions for \( y_1, y_2, \ldots \), Hyp. III.). And since \( \frac{dy_1}{dx_i} = \frac{dy_i}{dx_1} \), &c., comparing the two equations last written, we obtain \[ y_i' = -\frac{dZ}{dx_i}. \quad \ldots \quad (17.) \] The system of \( 2n \) equations (16.) and (17.) express the result of eliminating the \( 2n \) constants from the equations (5.) and (11.) and their differential coefficients with respect to \( t \). In other words, (16.) and (17.) are a system of \( 2n \) simultaneous differential equations of the first order, of which (5.) and (11.), or again, the equations supposed in Hyp. I. or II., art. 5, are the \( 2n \) integrals.

7. There are other remarkable relations between the partial differential coefficients of the expressions supposed in Hyp. I. and II., art. 5. For if we differentiate the equation \( \frac{dX}{da_i} = b_i \) with respect to \( a_j \) (Hyp. I.), we obtain \[ \frac{d^2X}{da_i da_j} + \frac{d^2X}{da_i dx_1}\frac{dx_1}{da_j} + \frac{d^2X}{da_i dx_2}\frac{dx_2}{da_j} + \cdots + \frac{d^2X}{da_i dx_n}\frac{dx_n}{da_j} = 0, \quad \ldots \quad (a.) \] which gives, putting \( \frac{db_j}{da_i} \) for \( \frac{d^2X}{da_i da_j} \) and \( \frac{dy_k}{da_i} \) for \( \frac{d^2X}{da_i dx_k} \), \[ \frac{db_j}{da_i} + \frac{dy_1}{da_i} \frac{dx_1}{da_j} + \frac{dy_2}{da_i} \frac{dx_2}{da_j} + \cdots + \frac{dy_n}{da_i} \frac{dx_n}{da_j} = 0 \quad \ldots \quad (b.) \] (where \( \frac{db_j}{da_i} \) refers to Hyp. IV., \( \frac{dy_1}{da_i} \), &c. to Hyp. III., and \( \frac{dx_1}{da_j} \), &c. to Hyp. I.). If then this equation be multiplied by \( \frac{da_i}{dy_k} \) (Hyp. II.) and the result summed with respect to \( i \), the sum of the first terms is \( \frac{db_j}{dy_k} \) (Hyp. II.), and for the rest, the coefficient of \( \frac{dx_k}{da_j} \) reduces itself to unity, whilst those of the remaining terms vanish (art. 1, equ. (1.), (2.)). Thus we have \[ \frac{db_j}{dy_k} = -\frac{dx_k}{da_j} \quad \ldots \quad (18.) \] (where the first side refers to Hyp. II., and the second to Hyp. I.). Now if we treat the equations \[ \frac{dY}{dy_i} = x_i, \quad \frac{dY}{da_i} = -b_i \] (see equations (6.) and (9.), putting \(a_i\) for \(p\) in the latter) exactly in the same way, it is plain that the result may be deduced from (18.) by interchanging \(x\) and \(y\), and changing the sign of \(b\); thus \[ \frac{db_j}{dx_k} = \frac{dy_k}{da_j}. \] Lastly, from the equations \[ \frac{dX_b}{db_i} = a_i, \quad \frac{dX_b}{dx_i} = -y_i \quad \text{(see (12.) and (13.))}, \] we should find in a similar manner \[ \frac{da_j}{dy_k} = \frac{dx_k}{db_j}; \] and from the analogous equations (the existence of which is obvious) \[ \frac{dY_b}{dy_i} = -x_i, \quad \frac{dY_b}{db_i} = -a_i, \] we should obtain \[ \frac{da_j}{dx_k} = -\frac{dy_k}{db_j}. \]
Collecting these results, and changing the indices, we have the system \[ \frac{dx_i}{da_j} = -\frac{db_j}{dy_i}, \quad \frac{dx_i}{db_j} = \frac{da_j}{dy_i}, \quad \frac{dy_i}{da_j} = \frac{db_j}{dx_i}, \quad \frac{dy_i}{db_j} = -\frac{da_j}{dx_i}, \quad \ldots \quad (19.) \] in each of which equations the first member refers to Hyp. I., and the second to Hyp. II. (art. 5.); and it is to be remembered that there is no relation between the indices of the variables and those of the constants, so that the case of \(i=j\) has no peculiarity*. [* It is remarkable that each of the equations (19.) is also true on a different and separate hypothesis, as is apparent on inspection of the four different sets of equations, \[ \begin{aligned} \frac{dX}{dx_i} &= y_i, & \frac{dX_b}{dx_i} &= -y_i, & \frac{dY}{dy_i} &= x_i, & \frac{dY_b}{dy_i} &= -x_i, \\ \frac{dX}{da_i} &= b_i, & \frac{dX_b}{db_i} &= a_i, & \frac{dY}{da_i} &= -b_i, & \frac{dY_b}{db_i} &= -a_i \end{aligned} \] (see the preceding articles).]

8. Let \(\delta, \Delta\) be symbols denoting two distinct sets of arbitrary and independent variations attributed to the \(2n\) constants; then the equations \[ \frac{dX}{dx_i} = y_i, \quad \frac{dX}{da_i} = b_i \] give \[ \delta X = \Sigma_i(y_i\, \delta x_i + b_i\, \delta a_i); \] and if the operation \(\Delta\) be performed on each side, we have \[ \Delta \delta X = \Sigma_i(\Delta y_i\, \delta x_i + \Delta b_i\, \delta a_i) + \Sigma_i(y_i\, \Delta \delta x_i + b_i\, \Delta \delta a_i). \] If from this we subtract the corresponding equation obtained by inverting the order of the operations $\Delta$, $\delta$, remembering that $\Delta \delta u = \delta \Delta u$, we obtain $$\sum_i (\delta x_i \Delta y_i - \Delta x_i \delta y_i) + \sum_i (\delta a_i \Delta b_i - \Delta a_i \delta b_i) = 0^*. \quad \ldots \quad (20.)$$ [* This might be written \( \Sigma\, d(x_i, y_i) + \Sigma\, d(a_i, b_i) = 0 \). See the notation proposed in art. 1.] (The use here made of the double operation $\Delta \delta$ is due in principle to Mr. Boole. See his demonstration of a well-known theorem of Lagrange, of which the equation (20.) is a more general form†.) [† Cambridge Mathematical Journal, vol. ii. p. 100.] If in this equation we suppose $\delta x_i$, $\delta y_i$, &c. to be expressed in terms of $\delta a_i$, $\delta b_i$, &c. (Hyp. I.), and $\Delta a_i$, $\Delta b_i$, &c. in terms of $\Delta x_i$, $\Delta y_i$, &c. (Hyp. II.), and compare the terms on the two sides, it is easy to derive the relations (19.). I preferred however to deduce them by a more direct method.
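The relations (19.) may be verified on a concrete solution; an editorial example, not in the paper: take $Z = \frac{1}{2}(x^2 + y^2)$, whose system (16.), (17.) is solved by $x = \sqrt{2a}\sin(t+b)$, $y = \sqrt{2a}\cos(t+b)$, so that on Hyp. II. $a = \frac{1}{2}(x^2+y^2)$ and $b = \arctan(x/y) - t$:

```python
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)
x, y = sp.symbols('x y', positive=True)

# Hyp. I.: the variables as functions of the elements a, b and t
X_I = sp.sqrt(2*a) * sp.sin(t + b)
Y_I = sp.sqrt(2*a) * sp.cos(t + b)

# Hyp. II.: the elements as functions of the variables and t
A_II = (x**2 + y**2) / 2
B_II = sp.atan(x / y) - t

on_orbit = {x: X_I, y: Y_I}   # evaluate Hyp. II. derivatives along the solution

checks = [
    sp.diff(X_I, a) + sp.diff(B_II, y).subs(on_orbit),   # dx/da = -db/dy
    sp.diff(X_I, b) - sp.diff(A_II, y).subs(on_orbit),   # dx/db =  da/dy
    sp.diff(Y_I, a) - sp.diff(B_II, x).subs(on_orbit),   # dy/da =  db/dx
    sp.diff(Y_I, b) + sp.diff(A_II, x).subs(on_orbit),   # dy/db = -da/dx
]
print([sp.simplify(c) for c in checks])                  # [0, 0, 0, 0]
```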
9. If $x_i$ be expressed in terms of the $2n$ constants and $t$ (Hyp. I.), and then each constant be expressed in terms of the variables (Hyp. II.), the result is an identical equation. Differentiating then with respect to $x_i$, $x_j$, $y_k$, we obtain the three equations $$1 = \frac{dx_i}{da_1} \frac{da_1}{dx_i} + \frac{dx_i}{db_1} \frac{db_1}{dx_i} + \frac{dx_i}{da_2} \frac{da_2}{dx_i} + \frac{dx_i}{db_2} \frac{db_2}{dx_i} + \text{&c.},$$ $$0 = \frac{dx_i}{da_1} \frac{da_1}{dx_j} + \frac{dx_i}{db_1} \frac{db_1}{dx_j} + \frac{dx_i}{da_2} \frac{da_2}{dx_j} + \frac{dx_i}{db_2} \frac{db_2}{dx_j} + \text{&c.},$$ $$0 = \frac{dx_i}{da_1} \frac{da_1}{dy_k} + \frac{dx_i}{db_1} \frac{db_1}{dy_k} + \frac{dx_i}{da_2} \frac{da_2}{dy_k} + \frac{dx_i}{db_2} \frac{db_2}{dy_k} + \text{&c.}$$ Three similar equations may be obtained by treating $y_i$ in the same way. And if we apply to these six equations the transformations given by the system (19.), art. 7, the resulting theorems may be comprehended in the following statement. If $p$, $q$ be any two of the $2n$ variables $x_1, \ldots x_n, y_1, \ldots y_n$, then $$\sum_i \left( \frac{dp}{db_i} \frac{dq}{da_i} - \frac{dp}{da_i} \frac{dq}{db_i} \right) = \sum_i \left( \frac{db_i}{dp} \frac{da_i}{dq} - \frac{db_i}{dq} \frac{da_i}{dp} \right) = \pm 1, \text{ or } = 0, \quad \ldots \quad (21.)$$ according as $p$ and $q$ are or are not a conjugate pair, i.e. a pair of the form $x_j$, $y_j$. (The value $+1$ belongs to the case in which $p=x_j$, $q=y_j$, and $-1$ to the converse.) Here $p$ and $q$ are a determinate pair of variables, and the summation refers to the constants, extending to the $n$ conjugate pairs. More important however are the converse theorems obtained in a perfectly similar way by expressing $a_i$ or $b_i$ in terms of the variables (Hyp. II.), and supposing the variables to be again expressed in terms of the constants and $t$ (Hyp. I.). Differentiating the resulting identical equation with respect to $a_i$, $a_j$, $b_i$, $b_j$, and applying the transformations (19.) as before, we have, putting $h$, $k$ for a determinate pair of constants, $$\sum_i \left( \frac{dh}{dy_i} \frac{dk}{dx_i} - \frac{dh}{dx_i} \frac{dk}{dy_i} \right) = \sum_i \left( \frac{dy_i}{dh} \frac{dx_i}{dk} - \frac{dy_i}{dk} \frac{dx_i}{dh} \right) = \pm 1, \text{ or } = 0, \quad \ldots \quad (22.)$$ according as \( h, k \) are or are not a conjugate pair, i.e. of the form \( a_j, b_j \). (The value +1 belongs to \( h = a_j, k = b_j \), and \(-1\) to the converse.) According to the notation proposed at the beginning of this paper, the above formula may be written \[ \Sigma_i \frac{d(h, k)}{d(y_i, x_i)} = \Sigma_i \frac{d(y_i, x_i)}{d(h, k)} = \pm 1, \text{ or } = 0. \] By a usual and convenient abbreviation, the sum \[ \Sigma_i \frac{d(h, k)}{d(y_i, x_i)} \] may be denoted by the symbol* \([h, k]\). [* Poisson employs the notation \((h, k)\), which would have led to confusion if adopted here. Lagrange (in the Méc. Anal.) uses \([h, k]\), but with a different signification. See below, note to art. 34.] We have then, by (22.), \[ [a_i, b_i] = -[b_i, a_i] = 1, \quad [a_i, b_j] = [a_i, a_j] = [b_i, b_j] = 0, \quad \ldots \quad (23.) \] \( j \) being different from \( i \); and, obviously, \[ [a_i, a_i] = [b_i, b_i] = 0. \] Now let \( f, g \) be any two functions whatever of the \( 2n \) constants \( a_1 \), &c., \( b_1 \), &c.; when the latter are expressed in terms of the variables (Hyp. II.), \( f, g \) become also functions of the variables; and if \( h, k \) represent, as above, any pair whatever of \( a_1 \), &c., \( b_1 \), &c., we have (see art. 1.) \[ \frac{d(f, g)}{d(y_i, x_i)} = \Sigma \left\{ \frac{d(f, g)}{d(h, k)} \cdot \frac{d(h, k)}{d(y_i, x_i)} \right\}, \] the summation referring to \( h, k \), and extending to every binary combination. If, now, we sum each side of this equation with respect to \( i \), we obtain \[ [f, g] = \Sigma \left\{ [h, k] \cdot \frac{d(f, g)}{d(h, k)} \right\} \quad \ldots \quad (24.) \] (the summation referring as before to \( h, k \)). But, by (23.), \([h, k]\) is 0 unless \( h, k \) be a conjugate pair, and then it is \( \pm 1 \); so that (24.) becomes simply \[ [f, g] = \Sigma_i \frac{d(f, g)}{d(a_i, b_i)}, \quad \ldots \quad (25.) \] an equation which, written at length in the common notation, is \[ \Sigma_i \left( \frac{df}{dy_i} \frac{dg}{dx_i} - \frac{df}{dx_i} \frac{dg}{dy_i} \right) = \Sigma_i \left( \frac{df}{da_i} \frac{dg}{db_i} - \frac{df}{db_i} \frac{dg}{da_i} \right). \] The expression on the right being a function of the constants \( a_1 \), &c., \( b_1 \), &c. only, the equation (25.) expresses obviously the following theorem.
If \[ f = \varphi(x_1, x_2, \ldots x_n, y_1, y_2, \ldots y_n, t), \quad g = \psi(x_1, x_2, \ldots x_n, y_1, y_2, \ldots y_n, t) \] be any two integrals of the system of simultaneous equations (16.), (17.), art. 6, then the expression \([f, g]\), or \[ \Sigma_i \left\{ \frac{d\varphi}{dy_i} \frac{d\psi}{dx_i} - \frac{d\varphi}{dx_i} \frac{d\psi}{dy_i} \right\}, \] is constant; i.e., it becomes a function of the arbitrary constants only, if for \(x_i\), &c., \(y_i\), &c. be substituted their values in terms of the constants and \(t\). In the case in which (16.) and (17.) represent the dynamical equations, this is identical with the remarkable theorem discovered by Poisson. We shall have occasion to return to it presently.

10. If we treat the equations (21.) of the last article exactly in the same way as we have treated (22.), putting \(u, v\) for any two functions whatever of the \(2n\) variables \(x_1, x_2, \ldots x_n, y_1, y_2, \ldots y_n\), we find \[ \Sigma_i \frac{d(u,v)}{d(b_i,a_i)} = \Sigma_i \frac{d(u,v)}{d(x_i,y_i)}, \] and comparing this with the theorem (25.) of the last article, we see that both may be included in the following general enunciation: If \(u, v\) be either (1) any two functions whatever of the \(2n\) constants \(a_1\), &c., \(b_1\), &c., or (2) any two functions whatever of the \(2n\) variables \(x_1\), &c., \(y_1\), &c. (not containing \(t\) explicitly), then \[ \Sigma_i \left\{ \frac{du}{dy_i} \frac{dv}{dx_i} - \frac{du}{dx_i} \frac{dv}{dy_i} + \frac{du}{db_i} \frac{dv}{da_i} - \frac{du}{da_i} \frac{dv}{db_i} \right\} = 0, \] or \[ \Sigma_i \left\{ \frac{d(u,v)}{d(y_i,x_i)} + \frac{d(u,v)}{d(b_i,a_i)} \right\} = 0. \quad \ldots \quad (26.) \] (When \(u, v\) represent functions of the constants, the differential coefficients in the first term are taken on Hyp. II.; and, when functions of the variables, those in the second term on Hyp. I. (art. 5.).) This property depends, as will be seen, solely on the relations (5.), (11.), arts. 2, 5, which are the only assumptions that have been made in deducing all the preceding propositions.

11. There are similar theorems in which the summation refers to the numerators of the differential coefficients; but as these are less remarkable, and moreover are deducible immediately from the equation (20.), art. 8, I shall omit them.
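The theorem (26.), together with the condition $[a, b] = 1$ of (23.), can be checked on the oscillator elements introduced in the example after art. 8; the functions $u = a^2$, $v = ab$ below are an arbitrary editorial choice:

```python
import sympy as sp

t = sp.Symbol('t')
x, y = sp.symbols('x y', positive=True)
aa, bb = sp.symbols('aa bb')

a = (x**2 + y**2) / 2          # the oscillator elements of the earlier example
b = sp.atan(x / y) - t

# [a, b] = 1, as required by (23.)
print(sp.simplify(sp.diff(a, y)*sp.diff(b, x) - sp.diff(a, x)*sp.diff(b, y)))

u_c, v_c = aa**2, aa*bb        # two functions of the constants (my choice)
u = u_c.subs({aa: a, bb: b})   # the same functions on Hyp. II.
v = v_c.subs({aa: a, bb: b})

term1 = sp.diff(u, y)*sp.diff(v, x) - sp.diff(u, x)*sp.diff(v, y)      # d(u,v)/d(y,x)
term2 = (sp.diff(u_c, bb)*sp.diff(v_c, aa)
         - sp.diff(u_c, aa)*sp.diff(v_c, bb)).subs({aa: a, bb: b})     # d(u,v)/d(b,a)
print(sp.simplify(term1 + term2))   # 0, verifying (26.)
```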
12. Theorem.—I proceed now to establish a theorem which may be considered as the converse of that expressed by (23.), art. 9. Let \(x_1, x_2, \ldots x_n, y_1, y_2, \ldots y_n\) be \(2n\) variables, concerning which no supposition whatever is made, except that they are connected by \(n\) equations \[ a_1 = \varphi_1(x_1, \ldots x_n, y_1, \ldots y_n), \quad a_2 = \varphi_2(x_1, \ldots x_n, y_1, \ldots y_n), \quad \ldots, \quad \ldots \quad (a.) \] \(a_1, a_2, \ldots a_n\) being \(n\) constants. The functions on the right may involve explicitly any other quantities whatever, except \(a_1\), &c. It is assumed that these equations are algebraically sufficient to determine each of the \(n\) variables \(y_1, \ldots y_n\), as a function of the other \(n\) variables \(x_1, \ldots x_n\) and the constants. Then the theorem in question is as follows: If, by means of the equations (a.), the \(n\) variables \(y_1, \ldots y_n\) be expressed as functions of \(x_1\), &c., then in order that the conditions $$\frac{dy_j}{dx_i} = \frac{dy_i}{dx_j}$$ may subsist identically, it is necessary and sufficient that the expression \([a_i, a_j]\) (defined as in art. 9.) shall vanish for every binary combination of the \(n\) constants. This may be proved as follows: Putting \(h, k\) for any two of the constants \(a_1, a_2\), &c., let \(h=\varphi(x_1, \&c., y_1, \&c.)\) represent one of the equations (a.) above written. If in this equation the values of \(y_1, \ldots y_n\) be expressed, as above supposed, in terms of \(x_1\), &c., \(a_1\), &c., it becomes identical. Differentiating it, on this hypothesis, with respect to \(x_i\), we obtain $$\frac{dh}{dx_i} + \frac{dh}{dy_1} \frac{dy_1}{dx_i} + \frac{dh}{dy_2} \frac{dy_2}{dx_i} + \cdots + \frac{dh}{dy_n} \frac{dy_n}{dx_i} = 0;$$ and in like manner $$\frac{dk}{dx_i} + \frac{dk}{dy_1} \frac{dy_1}{dx_i} + \frac{dk}{dy_2} \frac{dy_2}{dx_i} + \cdots + \frac{dk}{dy_n} \frac{dy_n}{dx_i} = 0;$$ and if we multiply the first of these equations by \(\frac{dk}{dy_i}\) and the second by \(\frac{dh}{dy_i}\) and subtract, there results an equation which may be written as follows: $$\frac{dh}{dy_i} \frac{dk}{dx_i} - \frac{dh}{dx_i} \frac{dk}{dy_i} = \Sigma_j \left\{ \frac{dy_j}{dx_i} \left( \frac{dh}{dy_j} \frac{dk}{dy_i} - \frac{dh}{dy_i} \frac{dk}{dy_j} \right) \right\};$$ or, putting now \(a_p, a_q\) instead of \(h, k\), and employing the same notation as before, $$\frac{d(a_p, a_q)}{d(y_i, x_i)} = \Sigma_j \left\{ \frac{dy_j}{dx_i} \frac{d(a_p, a_q)}{d(y_j, y_i)} \right\}.$$ If now the terms on each side be summed with respect to \(i\), the result on the first side is \([a_p, a_q]\); and observing that on the second side the term multiplied by \(\frac{dy_i}{dx_j}\) will only differ in sign from that multiplied by \(\frac{dy_j}{dx_i}\), we shall have $$[a_p, a_q] = \Sigma \Sigma \left\{ \left( \frac{dy_j}{dx_i} - \frac{dy_i}{dx_j} \right) \frac{d(a_p, a_q)}{d(y_j, y_i)} \right\}, \quad \ldots \quad (27.)$$ the summation on the right extending to all binary combinations \(i, j\). Suppose this equation to be written at length, and then, after multiplying each side by $$\frac{d(y_r, y_s)}{d(a_p, a_q)},$$ let the sum be taken with respect to all the binary combinations \(p, q\). It follows from the theorems of art. 1, that the coefficient of \[ \frac{dy_r}{dx_s} - \frac{dy_s}{dx_r} \] on the right will reduce itself to unity, and that of each of the remaining terms to zero; so that we shall have, writing now \(j, i\) for \(r, s\), \[ \frac{dy_j}{dx_i} - \frac{dy_i}{dx_j} = \Sigma_p \Sigma_q \left\{ [a_p, a_q] \cdot \frac{d(y_j, y_i)}{d(a_p, a_q)} \right\}. \quad \ldots \quad (28.) \] In order then that the expression \(\frac{dy_j}{dx_i} - \frac{dy_i}{dx_j}\) should vanish identically for every binary combination of indices, it follows from (28.) that it is sufficient, and from (27.) that it is necessary, that each of the \(\frac{n(n-1)}{2}\) terms \([a_p, a_q]\) should vanish, and vice versa. It will be observed that the terms \([a_p, a_q]\) cannot vanish otherwise than identically, since they do not contain any of the constants \(a_1, a_2, \ldots, a_n\), and it is by hypothesis impossible to eliminate all these constants from the equations \((a.)\). It follows then that when the conditions \([a_p, a_q] = 0\) subsist, the values of \(y_1, \ldots, y_n\) expressed as above, are identically the partial differential coefficients of a function of \(x_1, \ldots, x_n, a_1, \ldots, a_n\). We have thus established the theorem enunciated at the beginning of this article.
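The content of the theorem is visible already for $n = 2$ with integrals of the special shape $a_1 = y_1 - F(x_1, x_2)$, $a_2 = y_2 - G(x_1, x_2)$, for which $[a_1, a_2]$ reduces exactly to $\frac{dy_1}{dx_2} - \frac{dy_2}{dx_1}$; an editorial sketch:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
F = sp.Function('F')(x1, x2)
G = sp.Function('G')(x1, x2)

def bracket(f, g):
    # [f, g] = sum_i (df/dy_i dg/dx_i - df/dx_i dg/dy_i), as defined in art. 9
    return (sp.diff(f, y1)*sp.diff(g, x1) - sp.diff(f, x1)*sp.diff(g, y1)
          + sp.diff(f, y2)*sp.diff(g, x2) - sp.diff(f, x2)*sp.diff(g, y2))

a1 = y1 - F      # solved for the y's these give y1 = F, y2 = G
a2 = y2 - G

print(sp.simplify(bracket(a1, a2)))   # dF/dx2 - dG/dx1, i.e. dy1/dx2 - dy2/dx1
```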
13. The preceding theorem may be made somewhat more general as follows: If we divide the \(2n\) variables into any two sets of \(n\) each, so that no two in the same set are conjugate (as for instance \[ x_1, x_2, \ldots, x_r, y_{r+1}, \ldots, y_n), \] and denote one set by \[ \xi_1, \xi_2, \ldots, \xi_n, \] and the other by \[ \pm \eta_1, \pm \eta_2, \ldots, \pm \eta_n, \] taking the \(+\) or \(-\) sign according as \(\eta_i\) represents \(y_i\) or \(x_i\), it is obvious that the expression \(\Sigma_i \frac{d(a_p, a_q)}{d(\eta_i, \xi_i)}\) is identical with \([a_p, a_q]\); and therefore whenever all the terms \([a_p, a_q]\) vanish, if the set \(\eta_1, \eta_2, \ldots, \eta_n\) can be expressed by means of the equations \((a.)\) of the last article, in terms of \(\xi_1, \xi_2, \ldots, \xi_n, a_1, a_2, \ldots, a_n\), their values will be the partial differential coefficients with respect to \(\xi_1, \xi_2, \ldots, \xi_n\) of a function of these variables and of the constants.

14. Theorem.—If of the system of \(2n\) simultaneous differential equations of the first order \[ x'_i = \frac{dZ}{dy_i}, \quad y'_i = -\frac{dZ}{dx_i} \quad (i = 1, 2, \ldots, n) \quad \ldots \quad (I.) \] (where \(Z\) denotes any function of \(x_1, \ldots, x_n, y_1, \ldots, y_n\) and \(t\), and accents denote as usual total differentiation with respect to \(t\)) there be given \(n\) integrals, involving \(n\) arbitrary constants \(a_1, \ldots, a_n\), as \[ a_i = \varphi_i(x_1, x_2, \ldots, x_n, y_1, y_2, \ldots, y_n, t), \] the remaining integrals may be found, whenever the \(\frac{n(n-1)}{2}\) conditions \([a_i, a_j] = 0\) are satisfied. For let \(y_1, y_2, \ldots, y_n\) be expressed, by means of the given integrals, in terms of \[ x_1, \ldots, x_n, a_1, \ldots, a_n, t. \] Their values so expressed will satisfy (art. 12.) the conditions \[ \frac{dy_i}{dx_j} - \frac{dy_j}{dx_i} = 0. \quad \ldots \quad (b.) \] Let \((Z)\) represent the result of substituting in \(Z\) these values of \(y_1, \ldots, y_n\), so that \((Z)\) is a given function of \(x_1, \ldots, x_n, a_1, \ldots, a_n, t\). We shall have \[ \frac{d(Z)}{dx_i} = \frac{dZ}{dx_i} + \frac{dZ}{dy_1} \frac{dy_1}{dx_i} + \frac{dZ}{dy_2} \frac{dy_2}{dx_i} + \cdots, \] which the equations (I.) and (b.) reduce to \[ \frac{d(Z)}{dx_i} = -y'_i + \frac{dy_i}{dx_1}x'_1 + \frac{dy_i}{dx_2}x'_2 + \cdots; \] but \[ y'_i = \frac{dy_i}{dt} + \frac{dy_i}{dx_1}x'_1 + \frac{dy_i}{dx_2}x'_2 + \cdots; \] consequently \[ \frac{d(Z)}{dx_i} = -\frac{dy_i}{dt}. \quad \ldots \quad (c.) \] Looking now at the assemblage of equations (b.), (c.), we see that they express the following proposition: The values of \(y_1, y_2, \ldots, y_n, -(Z)\), are the partial differential coefficients with respect to \(x_1, x_2, \ldots, x_n, t\), of one and the same function. Let this function be called \(X\); we have then \[ \frac{dX}{dx_i} = y_i, \quad \frac{dX}{dt} = -(Z); \quad \ldots \quad (II.) \] and since \(y_1, \ldots, y_n, (Z)\) are given functions of \(x_1\), &c., \(a_1\), &c., \(t\), the function \(X\) can be found by simple integration.
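For $n = 1$ the whole procedure can be traced explicitly. With the oscillator integral $a = \frac{1}{2}(x^2 + y^2)$ (an editorial example), $y = \sqrt{2a - x^2}$ and $(Z) = a$, and $X$ follows by quadrature from (II.):

```python
import sympy as sp

t, x, a = sp.symbols('t x a', positive=True)

y = sp.sqrt(2*a - x**2)        # the given integral, solved for y
Z_sub = a                      # (Z): the value of Z = (x^2 + y^2)/2 on the integral

# X from dX/dx = y and dX/dt = -(Z), up to an arbitrary additive constant
X = sp.integrate(y, x) - Z_sub * t
print(X)

# the remaining integral (III.): dX/da, which set equal to b gives the motion
print(sp.simplify(sp.diff(X, a)))   # asin(...) - t
```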
Let us then suppose \(X\) to be known, and let us take the total differential coefficient with respect to \(t\), of \(\frac{dX}{da_i}\); we shall have \[ \left( \frac{dX}{da_i} \right)' = \frac{d^2X}{da_i dt} + \frac{d^2X}{da_i dx_1}x'_1 + \frac{d^2X}{da_i dx_2}x'_2 + \cdots, \] which, by virtue of (I.) and (II.), becomes \[ \left( \frac{dX}{da_i} \right)' = -\frac{d(Z)}{da_i} + \frac{dZ}{dy_1} \frac{dy_1}{da_i} + \frac{dZ}{dy_2} \frac{dy_2}{da_i} + \cdots; \] but \[ \frac{d(Z)}{da_i} = \frac{dZ}{dy_1} \frac{dy_1}{da_i} + \frac{dZ}{dy_2} \frac{dy_2}{da_i} + \cdots \] (since \((Z)\) is derived from \(Z\) by introducing the values of \(y_1, \ldots y_n\), in terms of \(x_1\), &c., \(a_1, \ldots a_n\)); hence the second member of the preceding equation vanishes, and we have \[ \left( \frac{dX}{da_i} \right)' = 0, \] so that \(\frac{dX}{da_i}\) is constant, and we may write \[ \frac{dX}{da_i} = b_i, \quad \ldots \quad (III.) \] and \(b_i\) is an independent arbitrary constant, as it is easy to prove; it is however unnecessary to do so here, because we have in fact already proved it in showing that the elimination of \(a_1, \ldots a_n, b_1, \ldots b_n\), from the system of equations (II.), (III.), leads to the differential equations (I.) (see art. 6.). The \(n\) equations (III.) give therefore the remaining \(n\) integrals of the system (I.), of which (II.) and (III.) together are the complete solution. The system of equations (II.), (III.) being the same as that discussed in the preceding articles, all the conclusions there obtained will continue to subsist.

15. Suppose the expression for \(Z\) (see the last article) in terms of the variables is \[ Z = f(x_1, x_2, \ldots x_n, y_1, y_2, \ldots y_n, t); \] \(Z\) is changed into \((Z)\) by the substitution of \(\frac{dX}{dx_1}\) for \(y_1\), &c.; and since \(\frac{dX}{dt}\) is (identically) \(-(Z)\), the equation \[ \frac{dX}{dt} + f\left(x_1, x_2, \ldots x_n, \frac{dX}{dx_1}, \ldots, \frac{dX}{dx_n}, t\right) = 0 \quad \ldots \quad (X.) \] is a partial differential equation satisfied by the function \(X\). We have thus arrived, by an inverse route, at the point from which Sir W. Hamilton's theory, as improved by Jacobi, sets out. Jacobi, namely, has shown (by a demonstration immediately applying only to a particular form of the equation (X.), but easily extended), that if \(X\) be any "complete" solution of the equation (X.), that is, a solution involving (besides the constant which may be merely added to \(X\)) \(n\) arbitrary constants \(a_1, a_2, \ldots a_n\), in such a way that they cannot be all eliminated from the \(n+1\) equations obtained by differentiating \(X\) with respect to \(x_1, \ldots x_n, t\), without employing all those equations, then \(X\) possesses the properties of Sir W. Hamilton's "Principal Function," or in other words, gives all the integrals of the system (I.) by means of the system (II.), (III.). It will be desirable briefly to indicate the mode in which this demonstration may be made to apply to the general form (X.). Assuming that a complete solution \(X\), of that equation, is given, put \(\frac{dX}{dx_i} = y_i\); then differentiating the equation (X.)
with respect to \( x_i \), and employing the equations \[ \frac{dy_p}{dx_q} = \frac{dy_q}{dx_p}, \] we have \[ \frac{dy_i}{dt} + \frac{df}{dx_i} + \frac{df}{dy_1} \frac{dy_i}{dx_1} + \frac{df}{dy_2} \frac{dy_i}{dx_2} + \cdots = 0; \] on the other hand, taking the differential coefficient of \( y_i \) with respect to \( t \), without assuming anything as to the nature of the relations between \( t \) and the other variables, we find \[ y'_i = \frac{dy_i}{dt} + \frac{dy_i}{dx_1}x'_1 + \frac{dy_i}{dx_2}x'_2 + \cdots, \] and adding to this the preceding equation, \[ y'_i + \frac{df}{dx_i} = \frac{dy_i}{dx_1} \left( x'_1 - \frac{df}{dy_1} \right) + \frac{dy_i}{dx_2} \left( x'_2 - \frac{df}{dy_2} \right) + \cdots, \] from which it follows that the \( n \) assumptions \[ x'_i = \frac{df}{dy_i} \] would involve the \( n \) further equations \[ y'_i = -\frac{df}{dx_i}. \] Again, the \( n \) assumptions \[ \frac{dX}{da_i} = b_i \] would give, by combining the \( n \) equations obtained by differentiating totally with respect to \( t \), viz. \[ \frac{d^2X}{da_i dt} + \frac{d^2X}{da_i dx_1}x'_1 + \frac{d^2X}{da_i dx_2}x'_2 + \cdots = 0, \] with the \( n \) others obtained by differentiating the equation (X.) with respect to \( a_i \), viz. \[ \frac{d^2X}{da_i dt} + \frac{df}{dy_1} \frac{d^2X}{da_i dx_1} + \frac{df}{dy_2} \frac{d^2X}{da_i dx_2} + \cdots = 0, \] the \( n \) following, namely, \[ \frac{d^2X}{da_i dx_1} \left( x'_1 - \frac{df}{dy_1} \right) + \frac{d^2X}{da_i dx_2} \left( x'_2 - \frac{df}{dy_2} \right) + \cdots = 0, \] from which it follows either that \( x'_i = \frac{df}{dy_i} \), or that the determinant formed with the \( n^2 \) expressions \( \frac{d^2X}{da_i dx_j} \), or \( \frac{d}{da_i}\left(\frac{dX}{dx_j}\right) \), vanishes; but this last condition would express, as is well known, the possibility of eliminating the \( n \) constants \( a_1, a_2, \ldots a_n \) from the \( n \) equations \[ \frac{dX}{dx_j} = \Psi_j(x_1, \&c., a_1, \&c., t), \] which would contradict the assumption that \( X \) is a complete solution of the equation (X.). Finally, then, if \( X \) be a complete solution, the assumptions \( \frac{dX}{da_i} = b_i \) involve as a consequence the relations \( x'_i = \frac{df}{dy_i} \), and these again involve \( y'_i = -\frac{df}{dx_i} \), where \( y_i \) stands for \( \frac{dX}{dx_i} \). In thus applying Jacobi's demonstration I have slightly altered its form, in order to bring more prominently into view the necessity for \( X \) being a complete solution.

16. It is obvious, from the considerations given in art. 13, that instead of the equation (X.) of the last article, we might employ any one of the analogous equations obtained by distributing the variables as explained in the article referred to, and then writing \( \frac{dQ}{d\xi_i} \) for \( \eta_i \) in the expression for \( Z \). The function \( Q \) will be a "principal function." In particular, if we take the equation \[ \frac{dY}{dt} - f\left\{ \frac{dY}{dy_1}, \frac{dY}{dy_2}, \ldots, y_1, y_2, \ldots, y_n, t \right\} = 0, \] any complete solution will give the integrals of the differential equations (I.) by means of the system \[ \frac{dY}{dy_i} = x_i, \quad \frac{dY}{da_i} = b_i. \] The whole number of partial differential equations from each of which a "principal function" can be obtained, will obviously be \( 2^n \). The relations between these different principal functions will be apparent from the conclusions of art. 4*. [* Compare Sir W. Hamilton's expressions, Philosophical Transactions, 1835, p. 99, art. 5.]
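Jacobi's assertion of art. 15 may be checked directly on a complete solution of (X.), again taking the oscillator as an editorial illustration:

```python
import sympy as sp

t, x, a, b = sp.symbols('t x a b', positive=True)

# a complete solution of dX/dt + ((dX/dx)^2 + x^2)/2 = 0, involving one constant a
# besides the merely additive one
X = -a*t + x*sp.sqrt(2*a - x**2)/2 + a*sp.asin(x/sp.sqrt(2*a))

y = sp.diff(X, x)
print(sp.simplify(sp.diff(X, t) + (y**2 + x**2)/2))   # 0 : X satisfies (X.)
print(sp.simplify(y - sp.sqrt(2*a - x**2)))           # 0 : dX/dx = y

# the system (III.): dX/da = b, solved for x, gives the motion
print(sp.solve(sp.Eq(sp.diff(X, a), b), x))           # [sqrt(2a)*sin(b + t)] (principal branch)
```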
17. If \( x_1, x_2, \ldots, x_n \) represent all the independent coordinates (of whatever kind) in any ordinary dynamical problem, and \( T \) the expression for the vis viva† in terms of \( x_1 \), &c., \( x'_1 \), &c., the equations of motion are, as is well known, \[ \left( \frac{dT}{dx'_i} \right)' - \frac{dT}{dx_i} = \frac{dU}{dx_i}, \quad \ldots \quad (T.) \] where \( U \) is a function of \( x_1, \ldots, x_n \), which may also contain \( t \) explicitly, but not \( x'_1 \), &c. [† I here adopt, what I hope will be universally adopted, the suggestion of Coriolis and Professor Helmholtz, that the definition of vis viva should be half the sum of products of masses by squares of velocities.] Lagrange, to whom these formulæ are due, was also the first to employ the expressions \( \frac{dT}{dx'_i} \) as new variables, instead of \( x'_i \). But Sir W. Hamilton first showed that this substitution (putting \( \frac{dT}{dx'_i} = y_i \)) would reduce the \( n \) equations (T.) to the \( 2n \) equations of the first order of the form (I.), art. 14. His demonstration, however‡, depends upon the circumstance that \( T \) is, in dynamical problems, necessarily homogeneous with respect to \( x'_1, \ldots, x'_n \); and I am not aware that any other case has hitherto been contemplated. [‡ Philosophical Transactions, 1835, p. 97, art. 3.] The investigations of the preceding articles will however enable us to apply a similar transformation to the equations (T.), in the case in which no limitation is imposed upon the form of the function T, as I shall now proceed to show.

18. Putting \(T + U = W\), we shall have (since U does not contain \(x'_1\), &c.) \[ \left(\frac{dW}{dx'_i}\right)' = \frac{dW}{dx_i}. \quad \ldots \quad (W.) \] Let \(\frac{dW}{dx'_i} = y_i\); then if we take \[ Z = -(W) + (x'_1)y_1 + (x'_2)y_2 + \cdots + (x'_n)y_n \quad \ldots \quad (V.) \] (where, in the terms enclosed in brackets, \(x'_1, x'_2, \ldots, x'_n\) are to be expressed in terms of \(y_1, y_2, \ldots, x_1, x_2, \ldots\), &c.), we shall have, by the theorems of the former articles (see equations (6.), (8.), (9.) of arts. 2 and 3, putting \(x'_i\) instead of \(x_i\), and \(x_i\) instead of \(p\), in those equations), \[ x'_i = \frac{dZ}{dy_i}, \quad \ldots \quad (\alpha.) \] and \[ \frac{dW}{dx_i} = -\frac{dZ}{dx_i}, \] so that the equation (W.) becomes \[ y'_i = -\frac{dZ}{dx_i}, \quad \ldots \quad (\beta.) \] and (\(\alpha.\)), (\(\beta.\)) are of the form in question* ((I.), art. 14.). [* In the case in which T is homogeneous and of the second degree, in \(x'_1, x'_2, \ldots, x'_n\), it is obvious that the expression for Z reduces itself to \(2(T) - W\), or \((T) - U\).] Thus, so far as the application of any methods of integration, founded upon the preceding principles, and the theories of Sir W. Hamilton and Jacobi, to the system (T.), art. 17, is concerned, there is no restriction to the form of the function T. This extension is probably at present of no practical importance, but may perhaps be thought of some interest in a purely analytical point of view.
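The generality claimed here is easy to exhibit: take for $T$ a function not homogeneous in $x'$ at all, say $T = \cosh x'$ (an editorial example, chosen only because its inverse relation is explicit), and form $Z$ by (V.):

```python
import sympy as sp

x, v, y = sp.symbols('x v y')        # v stands for x'
U = sp.Function('U')(x)

T = sp.cosh(v)                       # not homogeneous in x' (my own example)
W = T + U

v_of_y = sp.asinh(y)                 # inverse of y = dW/dx' = sinh(v)
Z = (-W + v*y).subs(v, v_of_y)       # equation (V.)

print(sp.simplify(sp.diff(Z, y) - v_of_y))           # 0 : x' = dZ/dy, (alpha.)
print(sp.simplify(sp.diff(Z, x) + sp.diff(W, x)))    # 0 : dW/dx = -dZ/dx, whence (beta.)
```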
19. Returning now to the suppositions and conclusions of art. 14, let us further suppose that Z does not contain \(t\) explicitly, so that \[ Z' = \Sigma_i \left(\frac{dZ}{dx_i} x'_i + \frac{dZ}{dy_i} y'_i\right) = 0 \] by virtue of the system (I.); in this case \[ Z = h \quad \ldots \quad (h.) \] is one of the integrals of the system; and if we suppose this to be one of the \(n\) given integrals from which the principal function X is to be found, so that \(h, a_1, a_2, \ldots, a_{n-1}\) are now the \(n\) arbitrary constants, and the conditions \[ [a_i, a_j] = 0, \quad [h, a_i] = 0 \] subsist, it is plain that we shall have \[ (Z) = h, \] since the expression for \(Z\) must reduce itself identically to \(h\) when the values of \(y_1 \ldots y_n\) obtained from the integrals are substituted in it. Hence \[ \frac{dX}{dt} = -h, \] and therefore \[ X = -ht + V, \] \(V\) being a function not containing \(t\) explicitly. We have then \(\frac{dX}{dx_i} = \frac{dV}{dx_i}\), so that \(V\) is to be found from the \(n\) expressions \[ \frac{dV}{dx_i} = y_i. \] Lastly, the \(n\) remaining integrals will be \[ \frac{dX}{dh} = \tau, \quad \frac{dX}{da_i} = b_i \] (\(\tau\) representing the arbitrary constant conjugate to \(h\)); and, substituting in these the above expression for \(X\), we obtain \[ \frac{dV}{dh} = t + \tau, \quad \frac{dV}{da_i} = b_i. \quad \ldots \quad (29.) \] The function \(V\) now satisfies, and may be defined by, the partial differential equation \[ f\left(x_1, \ldots x_n, \frac{dV}{dx_1}, \ldots, \frac{dV}{dx_n}\right) = h, \quad \ldots \quad (V.) \] where \(f(x_1, \ldots x_n, y_1, \ldots y_n)\) is the expression for \(Z\) in terms of the variables. This, in dynamical problems, is the case in which the so-called "principle of vis viva" subsists. I shall, in the rest of this paper, use \(h\) exclusively in the above signification, and call it, whether actually referring to a dynamical problem or not, the "constant of vis viva," whilst the integral \(Z = h\) may be called the "integral of vis viva."
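The simplest case in point, a free particle with $Z = \frac{1}{2}y^2$ (an editorial illustration), shows the office of $V$ and of the integrals (29.):

```python
import sympy as sp

x, h, tau, t = sp.symbols('x h tau t', positive=True)

V = sp.sqrt(2*h) * x                      # satisfies (V.): (dV/dx)^2 / 2 = h
print(sp.simplify(sp.diff(V, x)**2/2 - h))            # 0

# the remaining integral (29.): dV/dh = t + tau gives the motion
print(sp.solve(sp.Eq(sp.diff(V, h), t + tau), x))     # [sqrt(2*h)*(t + tau)]
```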
20. When the \(2n\) integrals of the system of differential equations (I.), art. 14, are expressed in the manner which has been explained, it follows from the conclusions of former articles, that when these integrals are put in the form \[ a_i = \varphi_i(x_1, \ldots, x_n, y_1, \ldots y_n, t), \quad b_i = \psi_i(x_1, \ldots, x_n, y_1, \ldots y_n, t), \] the conditions \([a_i, b_i] = 1\), \([a_i, b_j] = 0\), \([b_i, b_j] = 0\) will subsist, as well as \([a_i, a_j] = 0\). I shall call any system of \(2n\) integrals in which these conditions are fulfilled, a "normal solution," or a system of "normal integrals," whilst the \(2n\) arbitrary constants contained in such a system may be called "normal elements." Any pair \(a_i, b_i\) may be called (as before) conjugate elements. In the case considered in art. 19, \(h\) and \(\tau\) are conjugate elements, these letters being used instead of \(a, b\), merely from obvious motives of convenience. It has been one principal object of these investigations to ascertain what advantages could be gained—either for the actual integration of a system of equations of the form (I.), or for the transformation of known solutions into forms convenient for the application of the method of variation of elements—by making the discovery of principal functions depend upon that of \(n\) integrals satisfying given conditions, rather than upon the solution of a partial differential equation. Having now prepared the way for this inquiry, I shall proceed with it in the following section.

**Section II.**

21. **Theorem.**—If \( p, q, r \) be any three functions whatever of the \( 2n \) variables \( x_1, \ldots, x_n, y_1, \ldots, y_n \), then \[ \left[ [p, q], r \right] + \left[ [q, r], p \right] + \left[ [r, p], q \right] = 0. \quad \ldots \quad (30.) \] (The symbols have the same signification as in the last section. See art. 9.) This may be proved as follows. It is evident that if the above expression were developed, each term would consist of a second differential coefficient of one of the functions \( p, q, r \), multiplied by a first differential coefficient of each of the other two. Consider then the terms in which \( p \) is twice differentiated; these will be of the three forms \[ \frac{d^2p}{dx_i dy_j} \cdot \frac{dq}{dx_i} \cdot \frac{dr}{dy_j}, \quad \frac{d^2p}{dx_i dx_j} \cdot \frac{dq}{dy_i} \cdot \frac{dr}{dy_j}, \quad \frac{d^2p}{dy_i dy_j} \cdot \frac{dq}{dx_i} \cdot \frac{dr}{dx_j}, \] each of which will arise from the first and third terms of (30.) only. (It is to be observed that \( i \) may \( = j \).) Now if we examine each of these forms, we see easily that for every term arising from the first term of (30.), there is a similar term with the opposite sign arising from the third term of (30.); and since a similar proposition would be true of the terms in which \( q, r \), respectively, are twice differentiated, the whole expression on the left of the equation (30.) vanishes identically. The theorem is therefore established. It is obvious that \( p, q, r \) may contain, explicitly, any other quantities (as \( t \)) besides the \( 2n \) variables with respect to which the differentiations are performed. Let \( \xi \) represent either one of the \( 2n \) variables \( x_1, \ldots, y_n \), or any other quantity whatever, explicitly contained in \( p \) and \( q \). It is evident that we shall have \[ \frac{d}{d\xi} [p, q] = \left[ \frac{dp}{d\xi}, q \right] + \left[ p, \frac{dq}{d\xi} \right]. \quad \ldots \quad (31.) \]
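Since (30.) is an identity, any concrete functions will do for a mechanical check; the three chosen below are an arbitrary editorial choice:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
xs, ys = [x1, x2], [y1, y2]

def bracket(f, g):
    # [f, g] = sum_i (df/dy_i dg/dx_i - df/dx_i dg/dy_i), art. 9
    return sum(sp.diff(f, ys[i])*sp.diff(g, xs[i]) - sp.diff(f, xs[i])*sp.diff(g, ys[i])
               for i in range(2))

p = x1*y1 + x2**2
q = sp.sin(x1) * y2
r = x2*y1 - y2**2

print(sp.simplify(bracket(bracket(p, q), r)
                + bracket(bracket(q, r), p)
                + bracket(bracket(r, p), q)))    # 0, verifying (30.)
```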
Thus the expression given above for \([p, q]'\) becomes \[ [p, q]' = \left[ p, [q, Z] \right] + \left[ q, [Z, p] \right] + [Z, [p, q]], \] which is identically equal to 0, by the theorem (30.). Consequently, for any two integrals \( p \) and \( q \), \[ [p, q] = \text{constant}. \] This theorem, as has been already mentioned, was discovered, in the case of the dynamical equations, by Poisson; and the fact that he was able to arrive at it through so long and complex a process as that which he has given in his first memoir on the Variation of Arbitrary Constants*, must be looked upon as a remarkable instance of his analytical skill. I am not acquainted with any attempt to simplify the demonstration, except that of Sir W. Hamilton†; in fact it is probable that no material simplification was attainable without the help of the transformation of the differential equations to the form (I.), towards which Poisson (as Jacobi has remarked) only made a first step. Sir W. Hamilton's demonstration may certainly be considered simple as compared with that of Poisson. That which I have given above will, I hope, be regarded as a further improvement.

* Journ. de l'École Polytechnique, tom. viii.

† Philosophical Transactions, 1835, p. 108–9.

23. In what follows I shall use such expressions as "the integral \( c \)," as an abbreviation for "the equation \( c = \varphi(x_1, \&c., y_1, \&c., t) \)." It is of course understood that the function on the right contains neither \( c \) nor any other arbitrary constant explicitly. Let then \( f, g \) be any two given integrals of the system (I.). It has been shown that we shall always have \[ [f, g] = \text{constant}. \quad \ldots \quad (K.) \] But this equation may be true either (1) identically, or (2) not identically. In the first case the expression \([f, g]\) may either be identically \(=0\), or it may reduce itself identically to a determinate constant, which might always be made unity by multiplying one of the integrals by a factor. (In the case of a "normal system" of integrals (art. 20.), it has been seen that every binary combination gives either 0 or 1.) But if the above equation (K.) be not identically true, so that \([f, g]\) obtains a constant value only by virtue of the differential equations, then the constant on the right of (K.) is an arbitrary constant, and that equation is itself an integral. But here again there are two cases; for the function \([f, g]\) may be only a combination of the functions on the right of the two integrals \( f, g \), and then (K.) is not a new integral, but only a combination of the two given ones; or, on the other hand, \([f, g]\) may be a function independent of \( f, g \), and then (K.) is really a new integral, which cannot be produced by merely combining the other two. Thus it appears that Poisson's theorem may in some cases lead to the discovery of new integrals, when two are known. On this subject, and others connected with it, I refer to the interesting memoir of M. Bertrand in Liouville's Journal (1852), "Sur l'intégration des équations différentielles de la Mécanique."

24. Let \( c_1, c_2, \ldots, c_m \) be any \( m \) integrals, and let \( f, g \) be any two functions of the \( m \) constants \( c_1, c_2, \ldots, c_m \), so that \( f, g \) are also two integrals; and considering \( f, g \) as functions of \( c_1, \ldots, c_m \), and, through them, of the variables, we have exactly as in art. 9, equation (24.),
\[ [f, g] = \Sigma \left\{ \frac{d(f, g)}{d(c_i, c_j)} \cdot [c_i, c_j] \right\}, \quad \ldots \quad (L.) \] the summation extending to all binary combinations of the \( m \) constants \( c_1, \&c. \) If then we suppose \( k_1, k_2, \ldots, k_m \) to be \( m \) functions (such as \( h, k \)) of the \( m \) constants \( c_1, \ldots, c_m \), we shall have for any pair \( k_p, k_q \), \[ [k_p, k_q] = \Sigma \left\{ \frac{d(k_p, k_q)}{d(c_i, c_j)} \cdot [c_i, c_j] \right\} \quad \ldots \quad (34.) \] (the summation referring as before to \( i, j \)); and the inverse equations (obtained either by considering \( c_i, \&c. \) as functions of \( k_i, \&c. \), and reasoning in the same way, or by multiplying the above equation by \( \frac{d(c_i, c_j)}{d(k_p, k_q)} \) and summing with respect to \( p, q \)) will be \[ [c_i, c_j] = \Sigma \left\{ \frac{d(c_i, c_j)}{d(k_p, k_q)} \cdot [k_p, k_q] \right\} \quad \ldots \quad (35.) \] (the summation referring to \( p, q \)). This inversion can only fail in the case in which the equations expressing \( k_1, \&c. \) in terms of \( c_1, \ldots, c_m \) are not all independent; a supposition which we exclude, in order that \( k_1, \ldots, k_m \) may represent \( m \) distinct integrals. The equations above written lead obviously to the following conclusions:

(1.) If \( f \) be a given function of the \( m \) constants \( c_1, \ldots, c_m \), then the determination of another function \( g \), such that \([f, g] = 0\), depends in general upon the solution of a linear partial differential equation of the first order.

(2.) It is impossible that the conditions \([k_i, k_j] = 0\) can exist for every binary combination of \( k_1, \ldots, k_m \), unless \([c_i, c_j] = 0\) for every binary combination of \( c_1, \ldots, c_m \).

25. As an illustration of the first of these conclusions, we may take a case which actually occurs in many dynamical problems. Let \( c_1, c_2, c_3 \) be three integrals, such that \[ [c_2, c_3] = c_1, \quad [c_3, c_1] = c_2, \quad [c_1, c_2] = c_3, \quad \ldots \quad (c.) \] and let it be required to find a function \( g \) of \( c_1, c_2, c_3 \), such that \([c_1, g] = 0\). The equation (L.) of the last article gives, if we put \( f = c_1 \), and introduce the conditions (c.), \[ c_3 \frac{dg}{dc_2} - c_2 \frac{dg}{dc_3} = 0, \] the solution of which is \[ g = \psi(c_2^2 + c_3^2), \quad \ldots \quad (g.) \] \( \psi \) being an arbitrary function (which may evidently also contain \( c_1 \) in an arbitrary manner). If, instead of \( f = c_1 \), we put \( f = \phi(c_1^2 + c_2^2 + c_3^2) \), it will be found that the expression on the right of the equation (L.) vanishes identically; so that in this case, if \( g \) be any arbitrary function of \( c_1, c_2, c_3 \), the condition \([f, g] = 0\) will be satisfied.

26. If \( a_1, a_2, \ldots, a_n, b_1, b_2, \ldots, b_n \) be a system of normal elements (art. 20.), we have (equation (25.), art. 9.) \[ [f, g] = \sum_i \frac{d(f, g)}{d(a_i, b_i)}, \] where \( f, g \) represent any two functions of the elements, or in other words, any two integrals whatever. If in the above equation we put successively \( f = a_i, f = b_i \), we obtain \[ [a_i, g] = \frac{dg}{db_i}, \quad [b_i, g] = -\frac{dg}{da_i}. \quad \ldots \quad (36.) \]
In the case where the principle of vis viva subsists, we may suppose the constant of vis viva, \( h \), to be one of the elements. In this case (see (29.), art. 19.) the element conjugate to \( h \) is \( \tau \), and \( t \) appears in none of the integrals explicitly, except one, namely, the integral conjugate to \( h \), which is \[ \tau = -t + \frac{dV}{dh}. \] If, then, \( g \) be any integral whatever, not containing \( t \) explicitly, it cannot contain \( \tau \), since any combination of the normal integrals involving \( \tau \) will involve it in the form \( t + \tau \). Consequently, for every such integral we shall have, by (36.), \[ [g, h] = 0, \quad \ldots \quad (37.) \] since \[ \frac{dg}{d\tau} = 0. \] This particular consequence of the formula (36.) follows also immediately from (32.), art. 22, since the equation \( g' = 0 \) gives, by (32.), \([Z, g] = 0\), and in this case \( Z = h \), so that \([Z, g] = [h, g]\). In this manner the theorem expressed by (37.) has been already obtained by M. Bertrand.

**Examples of the preceding Methods.**

27. I shall now exemplify the principles which have been explained, by applying them to two of the most familiar as well as important problems of dynamics. First then let it be required to obtain in a normal form the integrals of the differential equations which determine the motion of a material point, acted on by a force emanating from a fixed centre and depending only on the distance. Taking the centre of force as the origin of a system of rectangular coordinates, let \( m \) be the mass, and \( x, y, z \) the coordinates of the moving point. Then \[ T = \frac{1}{2}m(x'^2 + y'^2 + z'^2), \] and \( U \) (see art. 17.) is a given function of \( r \), say \( \varphi(r) \), where \( r^2 = x^2 + y^2 + z^2 \). Let us put \[ \frac{dT}{dx'} = u, \quad \frac{dT}{dy'} = v, \quad \frac{dT}{dz'} = w, \] so that, referring to the notation used in the preceding pages, we have \[ x, y, z \text{ instead of } x_1, x_2, x_3 \] \[ u, v, w \text{ instead of } y_1, y_2, y_3. \] Moreover, \[ u = mx', \quad v = my', \quad w = mz'. \] Hence we obtain \[ Z = (T) - U = \frac{1}{2m}(u^2 + v^2 + w^2) - \varphi(r), \] so that the integral of *vis viva*, or \( Z = h \), becomes \[ \frac{1}{2m}(u^2 + v^2 + w^2) - \varphi(r) = h; \] and the three integrals which express the conservation of areas become \[ yw - zv = c_1 \] \[ zu - xw = c_2 \] \[ xv - yu = c_3. \] These integrals are immediately seen to satisfy the conditions \[ [c_2, c_3] = -c_1, \quad [c_3, c_1] = -c_2, \quad [c_1, c_2] = -c_3; \] from which it follows (see art. 25, the result of which is obviously unaffected by the negative signs), that if we take \( k = (c_1^2 + c_2^2 + c_3^2)^{\frac{1}{2}} \), the condition \([c_3, k] = 0\) will be satisfied (as is easily found to be true); and since neither of the integrals \(c_3, k\) contains \(t\) explicitly, the conditions \([h, c_3] = 0, [h, k] = 0\) will subsist also (art. 26.). Hence it follows that if we solved algebraically the three integrals \(h, c_3, k\) so as to express \(u, v, w\) in terms of \(x, y, z\), their values would be the partial differential coefficients of a function \(V\), from which the three remaining integrals could be found (arts. 12 and 19.). But it is more convenient to adopt a different system of coordinates. Reverting then to the primitive form of the three integrals which we have chosen, and writing \(c\) instead of \(c_3\), we have \[ T - U = h. \tag{i.} \] \[ m(xy' - yx') = c. \tag{ii.} \] \[ m^2\left(r^2(x'^2 + y'^2 + z'^2) - r^2 r'^2\right) = k^2. \tag{iii.} \]
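The bracket relations just stated, together with the conditions \([c_3, k] = 0\), \([h, c_3] = 0\), \([h, k] = 0\), may be checked symbolically. In the sketch below (Python with sympy), the bracket is written with the sign fixed by the equation \(u' = \frac{du}{dt} + [Z, u]\) of art. 22, and \(\varphi(r)\) is left an arbitrary function:

```python
import sympy as sp

m = sp.symbols('m', positive=True)
x, y, z, u, v, w = sp.symbols('x y z u v w')
r = sp.sqrt(x**2 + y**2 + z**2)
phi = sp.Function('phi')           # arbitrary force-function phi(r)
X, Y = (x, y, z), (u, v, w)

def pb(p, q):
    # [p, q] = sum_i (dp/dy_i * dq/dx_i - dp/dx_i * dq/dy_i)
    return sum(sp.diff(p, b)*sp.diff(q, a) - sp.diff(p, a)*sp.diff(q, b)
               for a, b in zip(X, Y))

h = (u**2 + v**2 + w**2)/(2*m) - phi(r)
c1, c2, c3 = y*w - z*v, z*u - x*w, x*v - y*u
k = sp.sqrt(c1**2 + c2**2 + c3**2)

for expr in (pb(c2, c3) + c1, pb(c3, c1) + c2, pb(c1, c2) + c3,
             pb(c3, k), pb(h, c3), pb(h, k)):
    print(sp.simplify(expr))       # each prints 0
```

The first three lines of output confirm the negative signs noted in the text.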
28. Let us now employ, instead of \(x, y, z\), the three coordinates \(\varrho, \theta, z\); where \(z\) is the same as before, \(\varrho\) is the projection of \(r\) on the plane of \(xy\), and \(\theta\) is the angle between \(\varrho\) and the positive axis of \(x\). We shall thus have \[ \varrho^2 + z^2 = r^2, \quad x = \varrho \cos \theta, \quad y = \varrho \sin \theta, \] and \[ T = \frac{1}{2}m(\varrho'^2 + \varrho^2\theta'^2 + z'^2). \] Let \[ \frac{dT}{d\varrho'} = u, \quad \frac{dT}{d\theta'} = v, \quad \frac{dT}{dz'} = w \] (where \(u\) and \(v\) have now a new signification), then \[ \varrho' = \frac{u}{m}, \quad \theta' = \frac{v}{m\varrho^2}, \quad z' = \frac{w}{m}; \] and the three integrals at the end of the last article become, after obvious reductions, \[ \frac{1}{2m}\left(u^2 + \frac{v^2}{\varrho^2} + w^2\right) = h + \varphi(r). \tag{i.} \] \[ v = c. \tag{ii.} \] \[ (\varrho w - zu)^2 + \frac{r^2}{\varrho^2}v^2 = k^2. \tag{iii.} \] The conditions \([h, c] = 0, [h, k] = 0, [c, k] = 0\) continue to subsist with reference to the new variables; the two former necessarily, because (ii.) and (iii.) do not contain \(t\) (art. 26.), and the third actually, as is seen on trial (not accidentally, as will be shown hereafter). We know, therefore, that the values of \(u, v, w\), found from these equations, will be the partial differential coefficients with respect to \(\varrho, \theta, z\) of a function \(V\) of these latter variables. The two first give \[ u^2 + w^2 = 2m(h + \varphi(r)) - \frac{c^2}{\varrho^2}; \] and if we multiply this by \(\varrho^2 + z^2 = r^2\), and subtract (iii.), we obtain (introducing the condition (ii.)) \[ (\varrho u + zw)^2 = 2mr^2(h + \varphi(r)) - k^2. \] Lastly, if this be combined with (iii.), the following expressions are found for \( u \) and \( w \): \[ u = \frac{\varrho}{r^2} \left[ 2mr^2(h + \varphi(r)) - k^2 \right]^{\frac{1}{2}} - \frac{z}{r^2} \left[ k^2 - \frac{r^2}{\varrho^2} c^2 \right]^{\frac{1}{2}} \] \[ w = \frac{z}{r^2} \left[ 2mr^2(h + \varphi(r)) - k^2 \right]^{\frac{1}{2}} + \frac{\varrho}{r^2} \left[ k^2 - \frac{r^2}{\varrho^2} c^2 \right]^{\frac{1}{2}} \] (in which it is to be remembered that \( r^2 = \varrho^2 + z^2 \)), and if to these we join the equation (ii.), the values of \( u, v, w \) are explicitly given in terms of the conjugate variables \( \varrho, \theta, z \). We have then (art. 19.) \[ V = \int (u\,d\varrho + v\,d\theta + w\,dz); \] or, substituting the above values, \[ V = c\theta + \int \left( \frac{\varrho\,d\varrho + z\,dz}{r^2} \left[ 2mr^2(h + \varphi(r)) - k^2 \right]^{\frac{1}{2}} + \frac{\varrho\,dz - z\,d\varrho}{r^2} \left[ k^2 - \frac{r^2}{\varrho^2} c^2 \right]^{\frac{1}{2}} \right). \] The term under the integral sign is easily seen to be (as we know à priori it must be) a complete differential. It is convenient however to transform it thus. First, we have \( \varrho\,d\varrho + z\,dz = r\,dr \); next, let the latitude of the body (or the angle between \( r \) and the plane of \( x, y \)) be \( \lambda \); then \( \tan \lambda = \frac{z}{\varrho} \), and \[ \varrho\,dz - z\,d\varrho = r^2 d\lambda, \quad \frac{r^2}{\varrho^2} = \sec^2 \lambda. \] Making these substitutions, the expression for \( V \) becomes \[ V = c\theta + \int \frac{dr}{r} \left[ 2mr^2(h + \varphi(r)) - k^2 \right]^{\frac{1}{2}} + \int d\lambda \left[ k^2 - c^2 \sec^2 \lambda \right]^{\frac{1}{2}}. \] The integration in the second term cannot be effected till the form of the function \( \varphi(r) \) is given: that of the third term may be more conveniently performed after the differentiations with respect to \( c \) and \( k \), as in the next article.

29. The remaining integrals* of the problem are (art. 19.) \[ \frac{dV}{dk} = \alpha, \quad \frac{dV}{dc} = \beta, \quad \frac{dV}{dh} = t + \tau. \]
Performing the operations indicated, and observing that \[ \int \frac{d\lambda}{\sqrt{k^2 - c^2 \sec^2 \lambda}} = \frac{1}{k} \sin^{-1} \left( \frac{k \sin \lambda}{\sqrt{k^2 - c^2}} \right) \] and \[ \int \frac{\sec^2 \lambda\, d\lambda}{\sqrt{k^2 - c^2 \sec^2 \lambda}} = \frac{1}{c} \sin^{-1} \left( \frac{c \tan \lambda}{\sqrt{k^2 - c^2}} \right), \] we obtain for the final integrals, \[ m \int r\, dr \{2mr^2(h + \varphi(r)) - k^2\}^{-\frac{1}{2}} = t + \tau. \quad \text{(iv.)} \] \[ \theta - \sin^{-1}\left(\frac{c \tan \lambda}{\sqrt{k^2 - c^2}}\right) = \beta. \quad \text{(v.)} \] \[ -k \int \frac{dr}{r} \{2mr^2(h + \varphi(r)) - k^2\}^{-\frac{1}{2}} + \sin^{-1}\left(\frac{k \sin \lambda}{\sqrt{k^2 - c^2}}\right) = \alpha. \quad \text{(vi.)} \]

* It would perhaps be better to use the term "integral equations" here, in order to reserve the term "integral" for the case of an equation involving only one arbitrary constant (see art. 23.). The equations \( \frac{dV}{dk} = \alpha, \&c. \) become "integrals" in this sense, when for \( k, c, \) and \( h \), on the left, are substituted the functions of the variables to which they are respectively equal (from (i.), (ii.), (iii.)). An "integral" in this limited meaning is what is commonly called a "first integral," when the problem is considered as the solution of \( n \) differential equations of the second order. And any equation obtained by combining "integrals" so as to eliminate a set of \( n \) of the variables \( x_1, x_2, \ldots, x_n, y_1, y_2, \ldots, y_n \), of which no two are conjugate, corresponds to what is commonly called a "final integral."

Let \( \frac{c}{k} = \cos \iota \); then \( \frac{c}{\sqrt{k^2 - c^2}} = \cot \iota \), and the equation (v.) becomes \[ \tan \lambda = \tan \iota \cdot \sin (\theta - \beta), \quad \text{(v.a)} \] which expresses that the orbit is in a plane whose inclination to the plane of \( x, y \) is \( \iota \). Also \( \beta \) is evidently the longitude of the node, reckoned from the axis of \( x \). The last term on the left of (vi.) becomes \[ \sin^{-1}\left(\frac{\sin \lambda}{\sin \iota}\right). \] Now if \( \Omega \) be the "argument of latitude," or the angle between the node and the radius vector \( r \), we have evidently \( \sin \Omega = \frac{\sin \lambda}{\sin \iota} \), so that the above term is simply \( \Omega \), and the integral (vi.) becomes \[ \Omega - \alpha = k \int \frac{dr}{r} \{2mr^2(h + \varphi(r)) - k^2\}^{-\frac{1}{2}}. \quad \text{(vi.a)} \]

30. To apply the above expressions to the case of the undisturbed motion of a planet, we have only to put \( \varphi(r) = \frac{\mu m}{r} \), where \( m \) is now the mass of the planet, and \( \mu \) the sum of the masses of the sun and planet, the origin of coordinates being placed at the sun. It would be useless to give the well-known expressions to which the integrations now lead, my object being merely to obtain a set of normal elements. Now in this case we have (by well-known theorems), if \( a \) be the semiaxis major, \( e \) the eccentricity, and \( \iota \) as before the inclination, \[ h = -\frac{\mu m}{2a}, \quad k = m\sqrt{\mu a(1 - e^2)}, \] and therefore \[ c = m\sqrt{\mu a(1 - e^2)} \cdot \cos \iota. \] Also, if we take for the inferior limit of the integrations in (iv.) and (vi.a) the minimum value of \( r \), or the perihelion distance, it is plain that \( \alpha \) will be the longitude of the node, reckoned from the perihelion in the plane of the orbit, and \( -\tau \) the time of perihelion passage.
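These relations may be confirmed numerically. In the sketch below (Python; the values of \(\mu, m, a, e\) and the inclination are arbitrary), the state of the planet is taken at perihelion, with the node line along the axis of \(x\):

```python
import numpy as np

mu, m = 3.5, 2.0                   # illustrative mass constants
a, e, inc = 1.7, 0.4, 0.6          # semiaxis major, eccentricity, inclination

rp = a*(1 - e)                     # perihelion distance
vp = np.sqrt(mu*(2/rp - 1/a))      # speed at perihelion (vis viva)
pos = np.array([rp, 0.0, 0.0])
vel = vp*np.array([0.0, np.cos(inc), np.sin(inc)])

h = 0.5*m*vel.dot(vel) - mu*m/np.linalg.norm(pos)   # h = T - U
c1, c2, c3 = m*np.cross(pos, vel)                   # the areas integrals
k, c = np.sqrt(c1**2 + c2**2 + c3**2), c3

print(np.isclose(h, -mu*m/(2*a)))                   # True
print(np.isclose(k, m*np.sqrt(mu*a*(1 - e**2))))    # True
print(np.isclose(c, k*np.cos(inc)))                 # True
```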
Thus we have the following six elements, arranged in conjugate pairs:

- \( -\frac{\mu m}{2a} \), \( - \)(time of perihelion passage);
- \( m\sqrt{\mu a(1 - e^2)} \), (angle between node and perihelion);
- \( m\sqrt{\mu a(1 - e^2)} \cdot \cos \iota \), (longitude of node).

It is obvious that we may change the signs of the first pair. And generally, that if \( f, g \) be any two conjugate elements, we may substitute for them \( \lambda f, \frac{1}{\lambda} g \), where \( \lambda \) is any determinate constant, i.e. not a function of the elements*. The above elements coincide with those given by Jacobi. My object has been merely to illustrate a mode of obtaining them which seems capable of useful applications.

* More generally, we may substitute for \( f, g \) any two functions of them, \( p, q \), such that \[ \frac{dp}{df} \frac{dq}{dg} - \frac{dp}{dg} \frac{dq}{df} = 1, \] a condition which requires the solution of a linear partial differential equation for the determination of one function, if the other be assumed. But on the subject of the transformation of elements see below, arts. 34, 35.

31. As a second example I shall apply the method to the case of the motion of a solid body about a fixed point. Let the fixed point be taken for the origin, and the principal axes of the body through that point for the axes of \( x, y, z \). Let \( \xi, \eta, \zeta \) refer to the same origin and to axes fixed in space; \( a, b, c \) being the direction-cosines of the axis of \( x \) referred to the fixed axes of \( \xi, \eta, \zeta \), and \( a', b', c'; a'', b'', c'' \) being respectively the direction-cosines of the axes of \( y \) and \( z \). Let \( \theta \) be the inclination of the plane of \( x, y \) (or "equator") to that of \( \xi, \eta \) (or "ecliptic"); \( \psi \) the longitude of the node, reckoned from the axis of \( \xi \); and \( \varphi \) the right-ascension of the axis of \( x \). Then if \( A, B, C \) be the Moments of Inertia, and \( p, q, r \) the angular velocities, about the axes of \( x, y, z \), the expression for the vis viva is \[ T = \frac{1}{2}(Ap^2 + Bq^2 + Cr^2), \] where \[ p = -\theta' \cos \varphi - \psi' \sin \varphi \sin \theta \] \[ q = \theta' \sin \varphi - \psi' \cos \varphi \sin \theta \] \[ r = \varphi' + \psi' \cos \theta. \] Let \( u, v, w \) be the variables conjugate respectively to \( \theta, \varphi, \psi \), so that \[ u = \frac{dT}{d\theta'}, \quad v = \frac{dT}{d\varphi'}, \quad w = \frac{dT}{d\psi'}; \] the following expressions will be found without difficulty: \[ Ap = -u \cos \varphi + \frac{\sin \varphi}{\sin \theta} (v \cos \theta - w) \] \[ Bq = u \sin \varphi + \frac{\cos \varphi}{\sin \theta} (v \cos \theta - w) \] \[ Cr = v. \] Considering at present only the case in which no forces act, we have the integral of vis viva \( T = h \), which becomes \[ \frac{1}{A} \left( -u \cos \varphi + \frac{\sin \varphi}{\sin \theta} (v \cos \theta - w) \right)^2 + \frac{1}{B} \left( u \sin \varphi + \frac{\cos \varphi}{\sin \theta} (v \cos \theta - w) \right)^2 + \frac{1}{C} v^2 = 2h. \tag{i.} \] The three integrals which express the conservation of areas, namely, \[ Aap + Ba'q + Ca''r = e, \] \[ Abp + Bb'q + Cb''r = f, \] \[ Acp + Bc'q + Cc''r = g, \] become, after simple reductions, \[ -u \cos \psi - \frac{\sin \psi}{\sin \theta} (v - w \cos \theta) = e \] \[ -u \sin \psi + \frac{\cos \psi}{\sin \theta} (v - w \cos \theta) = f \] \[ w = g. \]
Let \( e^2 + f^2 + g^2 = k^2 \); we have, adding the squares of these three equations, \[ u^2 + w^2 + \frac{(v - w \cos \theta)^2}{\sin^2 \theta} = k^2, \quad \ldots \quad \text{(ii.)} \] and we may take the three equations (i.), (ii.), and \[ w = g \quad \ldots \quad \text{(iii.)} \] as three normal integrals; the conditions \[ [g, h] = 0, \quad [h, k] = 0, \quad [k, g] = 0 \] being obviously satisfied. These three equations determine \( u, v, w \) as functions of \( \theta, \varphi, \psi \); and supposing the three former variables to be explicitly expressed in terms of the latter, we should have at once the three partial differential coefficients \( \frac{dV}{d\theta}, \frac{dV}{d\varphi}, \frac{dV}{d\psi} \); the determination of \( V \) would therefore depend upon simple integration, and the remaining integrals would be given by means of the three equations \[ \frac{dV}{dh} = t + \tau, \quad \frac{dV}{dg} = c_1, \quad \frac{dV}{dk} = c_2, \] \( \tau, c_1, c_2 \) being new arbitrary constants. In the general case, however, the algebraical solution of the equations (i.), (ii.), (iii.) is impracticable, since the elimination of \( v \) and \( w \) leads to an equation of the fourth degree in \( u \); nor does it seem possible to evade the difficulty by choosing a different combination of integrals, since it may be shown that the necessary conditions cannot be satisfied unless two at least of the combinations chosen are of the second degree in \( u, v, w \).

32. Mr. Cayley has given* a solution of this problem, which, though differing totally in form and method from the above, resembles it in arriving exactly at a corresponding point. For in Mr. Cayley's equations (27.), (28.), \( \Phi \) and \( V \) are to be expressed as functions of \( v \); but this requires the algebraical solution of the system (18.) for \( p, q, r \), and is therefore impracticable. (The two equations (i.), (ii.) of the last article are merely transformations of the two first of Mr. Cayley's (18.); and (iii.), though not identical with the third, is of the same degree; so that the algebraical difficulty is precisely the same in both methods.)

* Cambridge and Dublin Mathematical Journal, vol. i. p. 167.

33. If we suppose \( A = B \), the algebraical difficulty disappears, and the solution of the problem can be explicitly completed. But on account of the importance and interest of this case I shall make it the subject of a separate section, in which it will also be shown that the solution of the general case may be made to depend upon it, by means of the variation of elements. (See Section III.)

34. Suppose any complete normal solution of the system of differential equations (I.), art. 14, be known, i.e. a solution involving the \( 2n \) elements \[ a_1, a_2, \ldots a_n, b_1, b_2, \ldots b_n, \] which satisfy the conditions (23.), art. 9; then an infinite number of other sets of normal elements can always be found. For if we determine the \( 2n \) quantities \( \alpha_1, \ldots \alpha_n, \beta_1, \ldots \beta_n \), as functions of \( a_1, \&c., b_1, \&c. \), by the \( 2n \) equations \[ \frac{dA}{da_i} = b_i, \quad \frac{dA}{d\alpha_i} = \beta_i, \] where \( A \) is any arbitrary function of \[ a_1, a_2, \ldots a_n, \alpha_1, \alpha_2, \ldots \alpha_n, \] it is obvious that the whole of the reasoning by which the formulæ (19.), art. 7, were established may be repeated, merely putting \( A \) in place of \( X \), and \( \alpha, \beta \) instead of \( x, y \). And repeating in like manner the reasoning of art. 9, mutatis mutandis,
it will follow that if \( f, g \) represent any two of the \( 2n \) quantities \( \alpha_1, \&c., \beta_1, \&c. \), the expression \[ \sum_i \frac{d(f,g)}{d(b_i,a_i)} \] will be equal to unity if \( f, g \) be a pair of the form \( \alpha_j, \beta_j \), and will vanish in every other case. But it was also shown ((25.) art. 9) that the above expression is equivalent to \(-[f,g]\); it follows then that \[ [\alpha_j, \beta_j] = -1, \quad [\alpha_i, \alpha_j] = [\alpha_i, \beta_j] = [\beta_i, \beta_j] = 0^*; \] or, in other words, that \( \alpha_1, \ldots \alpha_n, \beta_1, \ldots \beta_n \) are a new set of normal elements. This method however can hardly be of much use in practice, because we cannot (at least without the solution of partial differential equations) determine what form of the function \( A \) will cause any of the new elements to be given functions of the old. But the problems most likely to occur may be solved in another way, as follows.

* I shall have occasion to refer afterwards to M. Desboves' Memoir in Liouville's Journal, vol. xiii., "Démonstration de deux théorèmes de M. Jacobi." But it may be observed here that the proposition in the text is not the same as that expressed by the same notation in the memoir alluded to, p. 400. For M. Desboves uses the symbols \([\alpha_i, \alpha_j]\) in a different sense. His theorem, in the notation of the present paper, is \[ \sum_i \frac{d(a_i,b_i)}{d(f,g)} = 1, \text{ or } = 0, \] according as \( f, g \) are of the form \( \alpha_j, \beta_j \) or not, which is easily established without the help of relations analogous to (19.), but would not answer our present purpose. I regret to use symbols with a meaning different from that which custom has to some extent sanctioned; but there seemed to be only a choice of difficulties. Mr. Spottiswoode has suggested to me the employment of the symbols (analogous to Mr. Sylvester's "umbral" notation) \[ \begin{bmatrix} u, v, w, \ldots \\ \frac{d}{dx}, \frac{d}{dy}, \frac{d}{dz}, \ldots \end{bmatrix}, \quad \begin{bmatrix} x, y, z, \ldots \\ d_1, d_2, d_3, \ldots \end{bmatrix} \] instead of those which I have used, namely, \[ \frac{d(u, v, w, \ldots)}{d(x, y, z, \ldots)}. \] If these were adopted, the two forms \((p, q)\), \([p, q]\) might be used without confusion in their usual significations. See note to art. 9. But although the "umbral" forms are more suggestive of the properties which belong to the above expressions as determinants, the other forms bring more into view the analogies which connect them with the differential calculus; and therefore, for the purposes of this paper, I have preferred them. And it is perhaps better, for the present, that different notations should be tried, than that any attempt should be made to fix upon a definitive system for subjects so recent as those connected with the theory of determinants.

35. Assuming for the set \( \alpha_1, \alpha_2, \ldots, \alpha_n \) given functions of the set \( a_1, a_2, \ldots, a_n \) only, it is required to find \( \beta_1, \ldots, \beta_n \). (It will be observed that the conditions \([\alpha_i, \alpha_j] = 0\) are necessarily satisfied in this case by virtue of (25.), art. 9, since \( \alpha_i \), &c. do not involve \( b_i \), &c.) It is plain, that if the principal function \( X \) had been found from the \( n \) integrals \( a_1, a_2, \ldots, a_n \) (as in art. 14.), it would be changed into that which would be found from the \( n \) integrals \( \alpha_1, \alpha_2, \ldots, \alpha_n \), merely by introducing the expressions for \( a_1, \ldots, a_n \) in terms of \( \alpha_1, \ldots, \alpha_n \); which expressions would be found by algebraical inversion of the assumed equations which give the latter set as functions of the former. Let \( \bar{X} \) represent the function \( X \) thus transformed; we have then \[ \beta_i = \frac{d\bar{X}}{d\alpha_i} = \frac{dX}{da_1} \frac{da_1}{d\alpha_i} + \frac{dX}{da_2} \frac{da_2}{d\alpha_i} + \ldots = b_1 \frac{da_1}{d\alpha_i} + b_2 \frac{da_2}{d\alpha_i} + \ldots + b_n \frac{da_n}{d\alpha_i}. \quad \ldots \quad (38.) \] Thus \( \beta_i \) is determined as a function of the old elements, since \( \frac{da_1}{d\alpha_i} \), &c. may be expressed in terms of the latter. In like manner we should have a set of inverse equations \[ b_i = \beta_1 \frac{d\alpha_1}{da_i} + \beta_2 \frac{d\alpha_2}{da_i} + \ldots + \beta_n \frac{d\alpha_n}{da_i}, \quad \ldots \quad (39.) \] which may be used instead of (38.). It is apparent that \( \beta_1 \), &c. will involve in general the elements \( a_1 \), &c. as well as \( b_1 \), &c. Conversely, if we assumed for \( \beta_1, \ldots, \beta_n \) given functions of \( b_1, \ldots, b_n \) alone, we
should have, for determining \( \alpha_i \), &c., either of the systems \[ \alpha_i = \Sigma_j \left( a_j \frac{db_j}{d\beta_i} \right), \quad a_i = \Sigma_j \left( \alpha_j \frac{d\beta_j}{db_i} \right). \quad \ldots \quad (40.) \] We might obtain in this way an indefinite variety of sets of elements for the case of elliptic motion, beginning with those given at the end of art. 30. But it will be better to defer this illustration till after the discussion of the Method of the Variation of Elements, which will form the subject of a future Section.

36. It results, from the investigations of this and the preceding Sections, that if a set of \( n \) integrals \( a_1, a_2, \ldots, a_n \) be given, satisfying the \( \frac{n(n-1)}{2} \) conditions \([a_i, a_j] = 0\), the determination of \( n \) more integrals \( b_1, \ldots, b_n \), constituting, with the given ones, a complete normal set, is a determinate problem, admitting of a unique solution, and always reducible (setting aside algebraical difficulties) to quadratures. But if, out of a complete normal set, \( n \) be given of which one or more pairs are conjugate, then the completion of the set is no longer a determinate problem, since the remaining \( n \) integrals, containing also one or more conjugate pairs, admit, to some extent, of arbitrary transpositions and combinations, as is evident from considerations similar to those employed in arts. 13 and 35. Hence we should expect à priori that the problem would require the solution of partial differential equations. It appears, indeed, at first sight, that having any \( n \) of the elements given functions of the variables, the relations established in art. 9, with the others included in the formula (21.), art. 9, would furnish more than a sufficient number of equations to determine explicitly all the partial differential coefficients of the remaining elements in terms of the variables*, at least in the case in which the principle of vis viva subsists, and the given integrals do not contain \( t \). But it is certain from the above considerations that this cannot be the case, and therefore that the equations furnished by those conditions cannot be all independent. I have not at present attempted to show this directly, though it would probably be easy to do so.

* The conditions \([a_i, b_i] = 1, [a_i, b_j] = 0, [b_i, b_j] = 0, [a_i, a_j] = 0\) will give, as is easily seen, \(\frac{n(n-1)}{2} + n^2\) equations; and the analogous conditions (21.), art. 9, in which the summation refers to the numerators of the differential coefficients, will give the same number; so that upon the whole we shall apparently have \(3n^2 - n\) equations, to determine the \(2n^2\) partial coefficients required. It is not difficult to make mistakes in this subject. I was for some time under the impression that the problem could be solved when any \(n\) independent integrals were given. Even the illustrious Jacobi himself appears to have been misled, at first sight, as to the consequences of Poisson's theorem (art. 22.). See the beginning of M. Bertrand's Memoir mentioned above; I do not know the fact from any other source.
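The rule of art. 35 may be verified symbolically. In the sketch below (Python with sympy, for \(n = 2\); the functions chosen for \(\alpha_1, \alpha_2\) are an arbitrary invertible pair), the bracket is expressed in the elements by equation (25.), art. 9, and the \(\beta_i\) computed from (38.) are found to satisfy, together with the \(\alpha_i\), the same conditions as the old elements:

```python
import sympy as sp

a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')
A, B = (a1, a2), (b1, b2)

def pb(f, g):
    # bracket expressed in the elements: [f, g] = sum_i d(f, g)/d(a_i, b_i)
    return sum(sp.diff(f, a)*sp.diff(g, b) - sp.diff(f, b)*sp.diff(g, a)
               for a, b in zip(A, B))

# an arbitrary invertible change of the first set alone:
al = (a1 + a2**3, a1*a2)

# (38.): beta_i = sum_j b_j * da_j/d(alpha_i); the array of da_j/d(alpha_i)
# is the inverse of the Jacobian of the alpha's with respect to the a's.
J = sp.Matrix([[sp.diff(f, a) for a in A] for f in al])
be = list(J.inv().T * sp.Matrix([b1, b2]))

checks = [(al[0], be[0], 1), (al[1], be[1], 1), (al[0], be[1], 0),
          (al[1], be[0], 0), (al[0], al[1], 0), (be[0], be[1], 0)]
for f, g, expected in checks:
    print(sp.simplify(pb(f, g) - expected))   # each prints 0
```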
Note on art. 2, Section I.

The theorem established in this article may be more shortly demonstrated as follows. Since \[ d\,\Sigma_i(x_i y_i) = \Sigma_i(x_i\, dy_i) + \Sigma_i(y_i\, dx_i), \] and \[ \Sigma_i(y_i\, dx_i) = dX \] (by (5.)), we have \[ \Sigma_i(x_i\, dy_i) = d\left(-X + \Sigma_i(x_i y_i)\right), \] an equation which must become identical when \( x_1, x_2, \ldots \) on each side are expressed in terms of \( y_1, y_2, \ldots \). But the right side being then a complete differential of a function of \( y_1, y_2, \ldots \), the left side must be so also; hence the conditions \[ \frac{dx_i}{dy_j} = \frac{dx_j}{dy_i} \] must subsist. The investigation of art. 2 shows that they do subsist, and is therefore perhaps to be preferred.

**Section III.—On the Equations of Rotatory Motion.**

37. In this supplementary section I propose further to exemplify the preceding theory by exhibiting the application of it to the problem of rotation in a more detailed form than was consistent with the plan of the former part of this essay. For this purpose it will first be desirable to anticipate the subject of a future section, so far as to give a concise deduction of the method of the variation of elements in its simplest form.

38. **Variation of Elements.**—Suppose a complete normal solution of the system of differential equations \[ x'_i = \frac{dZ}{dy_i}, \quad y'_i + \frac{dZ}{dx_i} = 0 \quad \ldots \quad (I.) \] has been obtained, so that we have \( 2n \) elements, divided into two conjugate sets \[ a_1, a_2, \ldots, a_n; \quad b_1, b_2, \ldots, b_n, \] as in the former articles; so that \[ [a_i, b_i] = 1, \quad [a_i, a_j] = [b_i, b_j] = [a_i, b_j] = 0. \] It is required to express the solution of the system \[ x'_i = \frac{dZ}{dy_i} + \frac{d\Omega}{dy_i}, \quad y'_i + \frac{dZ}{dx_i} + \frac{d\Omega}{dx_i} = 0 \quad \ldots \quad (I.a) \] in the same form by means of variable elements. The disturbing function \( \Omega \) may be a function of all the variables \( x_1, \ldots, y_1, \ldots \), and may also contain \( t \) explicitly. In the undisturbed problem we have \( a'_i = 0, b'_i = 0 \); i.e. the equations \[ \frac{da_i}{dt} + [Z, a_i] = 0, \quad \frac{db_i}{dt} + [Z, b_i] = 0 \quad \ldots \quad (e.) \] (see art. 22.) subsist identically when \( x_1, \ldots, y_1, \ldots \) are expressed in terms of the elements and \( t \). In the disturbed problem, \( x_1, \ldots, y_1, \ldots \) are to be the same functions of the elements and \( t \) as before; hence the equations (e.)
continue to subsist identically, and therefore the values of \( a'_i, b'_i \), namely, \[ a'_i = \frac{da_i}{dt} + [Z, a_i] + [\Omega, a_i], \] \[ b'_i = \frac{db_i}{dt} + [Z, b_i] + [\Omega, b_i], \] become simply \( a'_i = [\Omega, a_i], \ b'_i = [\Omega, b_i] \). In these expressions \( a_i, b_i \) are supposed to be expressed in terms of the variables. Now \[ [\Omega, a_i] = \Sigma_j \frac{d(\Omega, a_i)}{d(y_j, x_j)}; \] but, by equation (26.), art. 10, this is equivalent to \[ -\Sigma_j \frac{d(\Omega, a_i)}{d(b_j, a_j)}, \] in which \( \Omega \) is expressed as a function of the elements and \( t \); and this last expression obviously reduces itself to the single term \( -\frac{d\Omega}{db_i} \). In like manner the expression for \([\Omega, b_i]\) reduces itself to \( +\frac{d\Omega}{da_i} \); thus the equations for determining the variation of the elements are \[ a'_i = -\frac{d\Omega}{db_i}, \quad b'_i = \frac{d\Omega}{da_i}, \quad \ldots \quad (E.) \] in which \( \Omega \) is to be expressed as a function of the elements and \( t \)*. This will be a sufficient account of the method for our immediate purpose.

* The history of these remarkable formulæ may, I believe, be stated as follows. They were first discovered by Lagrange in the case in which \( a_i, b_i \) were the initial values of \( x_i, y_i \), and \( \Omega \) contained \( x_i, \&c. \) but not \( y_i, \&c. \) They were extended by Sir W. R. Hamilton to the case in which \( \Omega \) contains both sets of variables; and finally, by Jacobi, to the case in which \( a_i, \&c., b_i, \&c. \) are any system of conjugate elements. Jacobi however does not appear to have published a demonstration of them, and the only one which I have seen is by M. Desboves, Liouville's Journal, vol. xiii. p. 397, and differs essentially from that given in the text. Sir W. R. Hamilton has pointed out the circumstance, that when \( \Omega \) contains both sets of variables, the varying elements determined by the formula (E.) are not osculating.
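Before passing to the trigonometrical preliminaries, the working of the formula (E.) may be exhibited numerically. In the sketch below (Python; a disturbed oscillator with \(Z = \frac{1}{2}(y^2 + n^2x^2)\) and \(\Omega = \varepsilon x\), all constants being illustrative and none occurring in the text), the conjugate elements \(h, \tau\) of the undisturbed solution \(x = \sqrt{2h}\sin n(t+\tau)/n\) are varied according to (E.), and the result agrees with a direct integration of the disturbed equations (I.a):

```python
import numpy as np

n_, eps = 1.3, 0.2                 # frequency and disturbing strength

def elem_rhs(t, s):
    # (E.): h' = -dOmega/dtau, tau' = +dOmega/dh, with
    # Omega = eps*x = eps*sqrt(2h)*sin(n(t+tau))/n expressed in the elements
    h, tau = s
    return np.array([-eps*np.sqrt(2*h)*np.cos(n_*(t + tau)),
                      eps*np.sin(n_*(t + tau))/(n_*np.sqrt(2*h))])

def direct_rhs(t, s):
    # the disturbed equations (I.a): x' = y, y' = -n**2*x - eps
    return np.array([s[1], -n_**2*s[0] - eps])

def rk4(f, s, t0, t1, steps=4000):
    dt = (t1 - t0)/steps
    for i in range(steps):
        t = t0 + i*dt
        k1 = f(t, s); k2 = f(t + dt/2, s + dt/2*k1)
        k3 = f(t + dt/2, s + dt/2*k2); k4 = f(t + dt, s + dt*k3)
        s = s + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return s

h0, tau0, T = 0.8, 0.3, 5.0
h1, tau1 = rk4(elem_rhs, np.array([h0, tau0]), 0.0, T)
x_elements = np.sqrt(2*h1)*np.sin(n_*(T + tau1))/n_   # varied elements

x0 = np.sqrt(2*h0)*np.sin(n_*tau0)/n_
y0 = np.sqrt(2*h0)*np.cos(n_*tau0)
x_direct = rk4(direct_rhs, np.array([x0, y0]), 0.0, T)[0]
print(x_elements, x_direct)        # agree to integration accuracy
```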
39. The following propositions in spherical trigonometry will be required. If \( a, b, c \) be the sides, and \( \alpha, \beta, \gamma \) the opposite angles of any spherical triangle, then \[ \cos(a + b) = \frac{\dfrac{(\cos \alpha + \cos \beta)^2}{1 - \cos \gamma} - 1 - \cos \alpha \cos \beta}{\sin \alpha \sin \beta}, \quad \ldots \quad (40.) \] \[ \cos(a - b) = \frac{-\dfrac{(\cos \alpha - \cos \beta)^2}{1 + \cos \gamma} + 1 - \cos \alpha \cos \beta}{\sin \alpha \sin \beta}; \quad \ldots \quad (41.) \] and if the sides be considered as functions of the angles, then \[ \frac{da}{d\beta} = \cos \gamma \frac{db}{d\beta} + \cos \beta \frac{dc}{d\beta}, \quad \ldots \quad (42.) \] \[ \frac{da}{d\gamma} = \cos \gamma \frac{db}{d\gamma} + \cos \beta \frac{dc}{d\gamma}. \quad \ldots \quad (43.) \] The two last are easily verified; but as the others are not so obvious, I shall give the demonstration. Putting \( x \) for the expression on the right of the equation (40.), we easily obtain \[ \frac{1-x}{1+x} = \frac{\cos^2 \frac{\alpha - \beta}{2}}{\cos^2 \frac{\alpha + \beta}{2}} \cdot \frac{\sin^2 \frac{\gamma}{2} - \cos^2 \frac{\alpha + \beta}{2}}{\cos^2 \frac{\alpha - \beta}{2} - \sin^2 \frac{\gamma}{2}} = \frac{\cos^2 \frac{\alpha - \beta}{2}}{\cos^2 \frac{\alpha + \beta}{2}} \cdot \frac{-\cos \frac{\alpha + \beta + \gamma}{2} \cos \frac{\alpha + \beta - \gamma}{2}}{\cos \frac{\alpha - \beta + \gamma}{2} \cos \frac{\beta + \gamma - \alpha}{2}} = \frac{\cos^2 \frac{\alpha - \beta}{2}}{\cos^2 \frac{\alpha + \beta}{2}} \tan^2 \frac{c}{2} = \tan^2 \frac{a + b}{2} \] (the third step by the known expression for \( \tan^2 \frac{c}{2} \) in terms of the angles, and the last by Napier's analogy \( \tan \frac{a+b}{2} = \frac{\cos \frac{\alpha - \beta}{2}}{\cos \frac{\alpha + \beta}{2}} \tan \frac{c}{2} \)); whence it is plain that \( x = \cos (a + b) \), and in like manner may the equation (41.) be established.

40. Returning now to the problem of rotation, and supposing, for convenience, that the question refers to the motion of the earth about its centre of gravity, the following will be the signification of the symbols employed. \( A, B, C \) are the moments of inertia about the principal axes of the earth, viz. the axes of \( x, y, z \); the last being the polar axis, and the arrangement being such that the positive direction of \( z \) is to the north pole, and that the positive axis of \( x \) follows that of \( y \) in the actual rotation about the polar axis: \( p, q, r \) being the angular velocities about the three principal axes, the usual convention will be adopted as to their signs; so that in the actual case \( r \) is positive. The arrangement of the fixed axes of \( \xi, \eta, \zeta \) is supposed similar to that of \( x, y, z \), the plane of \( \xi, \eta \) being a fixed ecliptic, and the axis of \( \xi \) the origin of longitudes unless another origin be expressly indicated. Then \( \theta \) is the obliquity, \( \psi \) the longitude of the vernal equinox, and \( \varphi \) the right ascension of the axis of \( x \); all referring to the fixed ecliptic. Let the "principal plane" signify that which, in the undisturbed problem, is the "invariable plane." Then \( i \) is the inclination of the principal plane to the fixed ecliptic, and \( j \) is the inclination of the equator to the principal plane. In the case of the earth, \( A \) is nearly equal to \( B \), \( \theta \) never differs sensibly from \( i \), and \( j \) is therefore always small. But these conditions are not supposed in what follows. It is assumed however that \( C \) is the greatest of the three moments of inertia. These conventions, in which it is very desirable to avoid any ambiguity, may be illustrated by the annexed figure, in which \( O \) represents the origin of longitudes. The angles of the spherical triangle formed by the intersection of the three planes with a spherical surface are \( i, j, \pi - \theta \); and the sides opposite to them will be denoted by \( I, J, \Theta \). Thus we shall have \[ \cos I = \frac{\cos i - \cos j \cos \theta}{\sin j \sin \theta}, \quad \cos J = \frac{\cos j - \cos i \cos \theta}{\sin i \sin \theta}, \] \[ \cos \theta = \cos i \cos j - \sin i \sin j \cos \Theta. \] And, in the figure, \( OY = \psi \), and \( \varphi \) is measured from \( Y \) in the direction indicated by the arrow, which is also the direction of the rotation about the polar axis.
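The theorems (40.), (41.), (42.) may be verified numerically for a particular triangle, the sides being computed from the angles by the ordinary relation \(\cos a = \frac{\cos\alpha + \cos\beta\cos\gamma}{\sin\beta\sin\gamma}\) (Python; the angles chosen are arbitrary admissible values):

```python
import numpy as np

def sides(al, be, ga):
    # sides of the spherical triangle having angles al, be, ga
    s = lambda A, B, C: np.arccos((np.cos(A) + np.cos(B)*np.cos(C))
                                  / (np.sin(B)*np.sin(C)))
    return s(al, be, ga), s(be, ga, al), s(ga, al, be)

al, be, ga = 1.1, 0.9, 1.6          # arbitrary admissible angles
a, b, c = sides(al, be, ga)
sab = np.sin(al)*np.sin(be)

# (40.) and (41.):
print(np.isclose(np.cos(a + b),
      ((np.cos(al) + np.cos(be))**2/(1 - np.cos(ga)) - 1
       - np.cos(al)*np.cos(be))/sab))                 # True
print(np.isclose(np.cos(a - b),
      (-(np.cos(al) - np.cos(be))**2/(1 + np.cos(ga)) + 1
       - np.cos(al)*np.cos(be))/sab))                 # True

# (42.), by central differences with respect to beta:
eps = 1e-6
da, db, dc = [(p - q)/(2*eps) for p, q in
              zip(sides(al, be + eps, ga), sides(al, be - eps, ga))]
print(np.isclose(da, np.cos(ga)*db + np.cos(be)*dc))  # True
```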
Moreover, if the direction-cosines of the axes of \( x, y, z \), referred to the fixed axes, be respectively \( a, b, c; a', b', c'; a'', b'', c'' \), we shall have \[ \begin{align*} a &= \cos \psi \cos \varphi - \sin \psi \sin \varphi \cos \theta \\ a' &= -\cos \psi \sin \varphi - \sin \psi \cos \varphi \cos \theta \\ a'' &= -\sin \psi \sin \theta \\ b &= \sin \psi \cos \varphi + \cos \psi \sin \varphi \cos \theta \\ b' &= -\sin \psi \sin \varphi + \cos \psi \cos \varphi \cos \theta \\ b'' &= \cos \psi \sin \theta \\ c &= -\sin \varphi \sin \theta \\ c' &= -\cos \varphi \sin \theta \\ c'' &= \cos \theta \\ p &= -\theta' \cos \varphi - \psi' \sin \varphi \sin \theta \\ q &= \theta' \sin \varphi - \psi' \cos \varphi \sin \theta \\ r &= \varphi' + \psi' \cos \theta; \end{align*} \] hence we obtain the expressions for \( u, v, w \) employed in art. 31, viz. \[ \begin{align*} u &= \frac{dT}{d\theta'} = -Ap \cos \varphi + Bq \sin \varphi \\ v &= \frac{dT}{d\varphi'} = Cr \\ w &= \frac{dT}{d\psi'} = -Ap \sin \varphi \sin \theta - Bq \cos \varphi \sin \theta + Cr \cos \theta, \end{align*} \] from which the following also are easily deduced: \[ u = \frac{A+B}{2} \theta' + \frac{A-B}{2} \left(\theta' \cos 2\varphi + \psi' \sin \theta \sin 2\varphi\right) \] \[ v = C(\varphi' + \psi' \cos \theta) \] \[ w = \frac{A+B}{2} \psi' \sin^2 \theta + C \cos \theta (\varphi' + \psi' \cos \theta) + \frac{A-B}{2} \sin \theta \left(\theta' \sin 2\varphi - \psi' \sin \theta \cos 2\varphi\right). \]

41. Resuming the three integrals (i.), (ii.), (iii.) of art. 31, we may put the first in the following form: \[ Z + \Omega = h, \] in which \[ 2Z = \frac{1}{2} \left( \frac{1}{A} + \frac{1}{B} \right) \left( u^2 + \frac{(v \cos \theta - w)^2}{\sin^2 \theta} \right) + \frac{v^2}{C}, \] \[ 2\Omega = \frac{1}{2} \left( \frac{1}{A} - \frac{1}{B} \right) \left( \left( u^2 - \frac{(v \cos \theta - w)^2}{\sin^2 \theta} \right) \cos 2\varphi - \frac{2u(v \cos \theta - w)}{\sin \theta} \sin 2\varphi \right); \] and the other two are, as before, \[ u^2 + w^2 + \frac{(v - w \cos \theta)^2}{\sin^2 \theta} = k^2, \] \[ w = g; \] in which \( k \) is the sum of areas on the invariable plane, and \( g \) the sum on the fixed ecliptic; moreover \( v = Cr \) is the sum of areas projected on the plane of the equator; hence we have \[ g = k \cos i, \quad v = k \cos j. \] It has been seen that the complete solution of the problem is impracticable in the general case, on account of an algebraical difficulty. If however we suppose \( B = A \), this difficulty disappears; and after completing the solution on this supposition we may take account of the terms arising from the inequality of \( A \) and \( B \), by treating the function denoted above by \( \Omega \) (equation (i.)) as a disturbing function, and applying the method explained in art. 38. Thus when the action of disturbing forces is considered, the whole disturbing function will consist of two parts; one depending upon the forces, and the other the function which has just been assigned, and of which the effect, as will be seen, is extremely simple.

42. We proceed then first to complete the solution on the supposition \( A = B \). The three integrals (i.), (ii.), (iii.) give in this case \[ v^2 = \frac{C}{C-A}(k^2 - 2Ah), \quad w = g, \quad \ldots \quad (44.) \] \[ u = \frac{1}{\sin \theta} \left[ k^2 - v^2 - w^2 + 2vw \cos \theta - k^2 \cos^2 \theta \right]^{\frac{1}{2}}, \] in which latter expression the above constant values of \( v \) and \( w \) are to be introduced.
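Both the splitting of \(2T\) into \(2Z + 2\Omega\) and the elimination leading to the first of the equations (44.) may be confirmed symbolically (Python with sympy, following the expressions of arts. 31, 41, and 42):

```python
import sympy as sp

A, B, C, h, k = sp.symbols('A B C h k', positive=True)
th, ph = sp.symbols('theta varphi')
u, v, w = sp.symbols('u v w')

b = (v*sp.cos(th) - w)/sp.sin(th)
Ap = -u*sp.cos(ph) + b*sp.sin(ph)   # Ap, Bq of art. 31
Bq =  u*sp.sin(ph) + b*sp.cos(ph)

twoT  = Ap**2/A + Bq**2/B + v**2/C
twoZ  = sp.Rational(1, 2)*(1/A + 1/B)*(u**2 + b**2) + v**2/C
twoOm = sp.Rational(1, 2)*(1/A - 1/B)*((u**2 - b**2)*sp.cos(2*ph)
                                       - 2*u*b*sp.sin(2*ph))
print(sp.simplify(twoT - twoZ - twoOm))              # 0: 2T = 2Z + 2*Omega

# with A = B, eliminate u**2 between Z = h and the integral (ii.):
u2_h = sp.solve(sp.Eq(twoZ.subs(B, A), 2*h), u**2)[0]
u2_k = sp.solve(sp.Eq(u**2 + w**2 + (v - w*sp.cos(th))**2/sp.sin(th)**2,
                      k**2), u**2)[0]
vsq = sp.solve(sp.expand(u2_h - u2_k), v**2)[0]
print(sp.simplify(vsq - C*(k**2 - 2*A*h)/(C - A)))   # 0: equation (44.)
```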
We may put, as before, \( g = k \cos i, v = k \cos j \), \( j \) being now constant; and the expression for \( u \) becomes \[ u = \frac{k}{\sin \theta} \left[ 1 - \cos^2 i - \cos^2 j + 2 \cos i \cos j \cos \theta - \cos^2 \theta \right]^{\frac{1}{2}}, \quad \ldots \quad (45.) \] and we shall have (art. 19.) \[ V = k(\psi \cos i + \varphi \cos j) + \int u\, d\theta; \quad \ldots \quad (46.) \] and we will take \[ h, \quad \cos i, \quad \cos j \] for normal elements*, so that \( k \) is to be considered as a function of these elements, given by the equation (see (44.)) \[ k^2 = \frac{2ACh}{C - (C-A)\cos^2 j}. \quad \ldots \quad (47.) \]

* Since \( g, h, k \) are normal elements (i.e. satisfy the conditions \([g, h] = 0, [h, k] = 0, [k, g] = 0\), art. 31.), any three independent combinations of them are obviously normal also.

It is to be observed, that, according to the hypotheses admitted above, \( k \) is positive; also, the expression for \( u \) at the end of art. 40 becomes, in the case now considered, \[ u = A\theta'. \] Thus \( u \) has the same sign as \( \theta' \); and since \( \theta \) is evidently comprised between \( i-j \) and \( i+j \), if we suppose \( i \) and \( j \) both acute (as in the figure), \( \sin \theta \) is always positive; hence in the expression (45.) for \( u \), we have to attribute the sign \( + \) or \( - \) to the radical, according as \( \theta \) is increasing or diminishing, or according as \( \Theta \) is between \( 0 \) and \( \pi \), or not; thus, in the position represented in the figure, the negative sign must be taken.

43. If we put \( \pm Q \) for the radical in question, the expression for \( u\,d\theta \) is easily transformed into the following, namely, \[ u\,d\theta = \pm \frac{k \sin \theta\, d\theta}{Q} \left\{ 1 - \frac{1}{2} \frac{(\cos j - \cos i)^2}{1 - \cos \theta} - \frac{1}{2} \frac{(\cos j + \cos i)^2}{1 + \cos \theta} \right\}, \] in which it is evident that the part within brackets is positive upon the whole, but each of the two last terms is essentially negative. The integration is now easily performed, and the result is \[ \int u\,d\theta = \pm kP, \] where the sign is that which belongs to the radical \( Q \), and \( P \) is given by the equation \[ P = \cos^{-1} \frac{\cos \theta - \cos i \cos j}{\sin i \sin j} - \frac{\cos j - \cos i}{2} \cos^{-1} \frac{\dfrac{(\cos j - \cos i)^2}{1 - \cos \theta} - 1 + \cos i \cos j}{\sin i \sin j} + \frac{\cos j + \cos i}{2} \cos^{-1} \frac{\dfrac{(\cos j + \cos i)^2}{1 + \cos \theta} - 1 - \cos i \cos j}{\sin i \sin j} \] \[ + \text{ an arbitrary function of } i, j, h. \] This apparently complicated expression has a very simple geometrical signification; for, referring to the figure, and using the theorems (40.), (41.), we see that it is equivalent to \[ P = \cos^{-1}(-\cos \Theta) + \frac{\cos j + \cos i}{2} \cos^{-1}(\cos(I+J)) - \frac{\cos j - \cos i}{2} \cos^{-1}(-\cos(I-J)) + K, \] where \( K \) is put for the arbitrary function. Now the expression for \( u\,d\theta \) (from which this is derived) shows that the three terms in the above value of \( P \) must be so interpreted that the differential coefficient of the first (with respect to \( \theta \)) shall be positive, and those of the two others negative. These conditions will be satisfied by taking* \[ \pm P = \pi - \Theta + \frac{\cos j + \cos i}{2}(I+J) - \frac{\cos j - \cos i}{2}\left(\pi-(I-J)\right) + K \] (in which the upper sign is to be taken when \( \Theta \) is between \( 0 \) and \( \pi \), and the under sign when \( \Theta \) is \( > \pi \)).
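That this determination of the signs leads to the closed form stated at the beginning of the next article may be checked numerically: the derivative of \( \Theta - I\cos j - J\cos i \) with respect to \( \theta \) reproduces, up to the sign just discussed, the radical of the expression (45.) divided by \( k \). A small sketch (Python; the values of \(i, j, \theta\) are arbitrary, \(i\) and \(j\) acute):

```python
import numpy as np

i_, j_ = 0.7, 0.4                    # arbitrary acute inclinations

def parts(th):
    # sides I, J, Theta of the triangle whose angles are i, j, pi - theta
    cI = (np.cos(i_) - np.cos(j_)*np.cos(th))/(np.sin(j_)*np.sin(th))
    cJ = (np.cos(j_) - np.cos(i_)*np.cos(th))/(np.sin(i_)*np.sin(th))
    cT = (np.cos(i_)*np.cos(j_) - np.cos(th))/(np.sin(i_)*np.sin(j_))
    return np.arccos(cI), np.arccos(cJ), np.arccos(cT)

th, eps = 0.95, 1e-6                 # theta between i - j and i + j
(Ip, Jp, Tp), (Im, Jm, Tm) = parts(th + eps), parts(th - eps)
deriv = ((Tp - Ip*np.cos(j_) - Jp*np.cos(i_))
         - (Tm - Im*np.cos(j_) - Jm*np.cos(i_)))/(2*eps)

Q = np.sqrt(1 - np.cos(i_)**2 - np.cos(j_)**2
            + 2*np.cos(i_)*np.cos(j_)*np.cos(th) - np.cos(th)**2)
print(np.isclose(abs(deriv), Q/np.sin(th)))   # True
```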
Hence, assuming the arbitrary \( K \) so as to destroy the constant part of the expression, we have, without ambiguity, for all values of the variables, \[ \int u\,d\theta = k(\Theta - I \cos j - J \cos i), \] so that, finally, \[ V = k\left((\varphi - I) \cos j + (\psi - J) \cos i + \Theta\right). \quad \ldots \quad (48.) \] It will be observed that without attention to the proper interpretation of ambiguous symbols, a completely erroneous expression for \( V \) might have been obtained.

* In the figure, as \( \theta \) diminishes (\( i \) and \( j \) remaining constant), \( \Theta \) increases, \( I+J \) increases, and \( I-J \) increases or diminishes according as \( j \gtrless i \), since \( \tan \frac{I-J}{2} = \tan \frac{\Theta}{2} \cdot \frac{\sin \frac{i-j}{2}}{\sin \frac{i+j}{2}} \).

44. The final equations will be (art. 19.) \[ \frac{dV}{dh} = t + \tau, \quad \frac{dV}{d\cos i} = \alpha, \quad \frac{dV}{d\cos j} = \beta, \] \( \tau, \alpha, \beta \) being three new arbitrary constants, namely, the elements conjugate respectively to \( h, \cos i, \cos j \). In performing the differentiations, it is to be remembered that \( I, J, \Theta \) do not contain \( h \); and that, by the equations (42.), (43.), art. 39, the terms arising from the differentiation of \( I, J, \Theta \) with respect to \( i \) and \( j \) disappear identically, so that these functions may be considered as exempt from differentiation. Also we have (see equation (47.)) \[ \frac{dk}{dh} = \frac{k}{2h}, \quad \frac{dk}{d\cos i} = 0, \quad \frac{dk}{d\cos j} = \frac{(C-A)k^3 \cos j}{2ACh}; \] and the final equations become, after simple reductions, \[ \begin{align*} \Theta &= -\frac{\alpha \cos i + \beta \cos j}{k} + \frac{k}{A}(t+\tau) \\ \varphi - I &= \frac{\beta}{k} - \left(\frac{1}{A} - \frac{1}{C}\right) k \cos j\,(t+\tau) \\ \psi - J &= \frac{\alpha}{k}. \end{align*} \quad (\text{R.}) \] These equations comprise a normal solution of the problem. The first gives immediately \[ \cos \theta = \cos i \cos j - \sin i \sin j \cos \left(\frac{k}{A}(t+\tau) - \frac{\alpha \cos i + \beta \cos j}{k}\right) \] (see art. 40.); and since \( I, J \) are given explicit functions of \( \theta \), the three variables \( \theta, \varphi, \psi \) are determined explicitly as functions of \( t \). The third equation (R.) simply expresses that the invariable plane intersects the ecliptic in a fixed line, whose longitude is \( \frac{\alpha}{k} \).

45. Let us now introduce the supposition that \( A \) and \( B \) are unequal, and that the body is acted on by disturbing forces. We must (see art. 41.) put \( \frac{1}{2}\left(\frac{1}{A} + \frac{1}{B}\right) \) instead of \( \frac{1}{A} \) in the equations (R.) of the last article; these equations will express the solution of the problem, the elements being now variable, and determined as functions of \( t \) by the system of equations \[ \begin{align*} h' &= -\frac{d\Phi}{d\tau}, \quad (\cos i)' = -\frac{d\Phi}{d\alpha}, \quad (\cos j)' = -\frac{d\Phi}{d\beta}, \\ \tau' &= \frac{d\Phi}{dh}, \quad \alpha' = \frac{d\Phi}{d\cos i}, \quad \beta' = \frac{d\Phi}{d\cos j}, \end{align*} \] where \( \Phi \) is the disturbing function, expressed in terms of the elements and \( t \).

46. If there are no disturbing forces, \( \Phi \) reduces itself simply to \( \Omega \) (art. 41.), which is now to be transformed by means of the equations (R.), art. 44, as follows. Since \( v = k \cos j \), and \( w = k \cos i \), we have \[ \frac{v \cos \theta - w}{\sin \theta} = -k\, \frac{\cos i - \cos j \cos \theta}{\sin \theta} = -k \sin j \cos I. \] Also the expression for \( u \), art. 42, is easily put in the following form:
\[ u = \frac{k}{\sin \theta} \left\{ \sin^2 i \sin^2 j - (\cos \theta - \cos i \cos j)^2 \right\}^{\frac{1}{2}} = -\frac{k}{\sin \theta} \sin i \sin j \sin \Theta \] (with respect to the sign, see art. 42.). And since \[ \frac{\sin \Theta}{\sin \theta} = \frac{\sin I}{\sin i}, \] this becomes \[ u = -k \sin j \sin I. \] Introducing these expressions in the value of \( \Omega \) (art. 41.), we find \[ \Omega = -\frac{k^2}{4} \left( \frac{1}{A} - \frac{1}{B} \right) \sin^2 j \cos 2(\varphi - I); \quad \ldots \quad (49.) \] and when \( \varphi - I \) is expressed in terms of the elements and \( t \) (see equations (R.), art. 44, in which \( \frac{1}{2} \left( \frac{1}{A} + \frac{1}{B} \right) \) is now to be written for \( \frac{1}{A} \)), this becomes, finally, \[ \Omega = -\frac{k^2}{4} \left( \frac{1}{A} - \frac{1}{B} \right) \sin^2 j \cdot \cos 2 \left[ \frac{\beta}{k} - \left( \frac{1}{2} \left( \frac{1}{A} + \frac{1}{B} \right) - \frac{1}{C} \right) k \cos j \cdot (t + \tau) \right]. \quad \ldots \quad (\Omega.) \]

47. The above expression for \( \Omega \) does not contain the elements \( i, \alpha \); hence, when there are no disturbing forces, we shall have \( (\cos i)' = 0 \), \( \alpha' = 0 \), or \( i \) and \( \alpha \) are constant; also \[ k' = -\frac{dk}{dh} \frac{d\Omega}{d\tau} - \frac{dk}{d\cos j} \frac{d\Omega}{d\beta}, \] an expression which is easily found to vanish identically (see the values of \( \frac{dk}{dh} \), \( \frac{dk}{d\cos j} \) in art. 44, observing to put \( \frac{1}{2} \left( \frac{1}{A} + \frac{1}{B} \right) \) for \( \frac{1}{A} \)). Thus \( k \) is also constant; and the "principal plane" is still the "invariable plane," as we know à priori.

48. If we now suppose the attraction of another body to be introduced as a disturbing force, we shall have to take for the disturbing function \[ \Phi = \Omega - P, \] where \( \Omega \) is the same as above, and \( P \) is the potential of one body upon the other, expressed as a function of the elements and the time*. And it follows from the remarks of the last article, that the variation in the position of the principal plane depends wholly upon \( P \), and not upon \( \Omega \). I shall here conclude this part of the subject, as it would be beyond the scope of this essay to enter into the details of any of the various problems which might be taken in illustration of the theory, such as those which relate to precession and nutation, or to the motion of the moon about its centre of gravity. The investigations of this section have been introduced, because the results, so far as they go, appeared interesting in themselves, and afforded a remarkable example of the application of the general method.

P.S. Since the last sheets of this essay were in type, I have seen for the first time two papers by Professor Brioschi, in Tortolini's Annali for August and October 1853, of which the titles are "Sulla variazione delle costanti arbitrarie nei problemi della Dinamica," and "Intorno ad un teorema di Meccanica." I have not had an opportunity of examining them sufficiently to judge how far any of the preceding investigations may have been anticipated in them. June 7.

* The variables which determine the position of the disturbing body are supposed to be given explicit functions of \( t \).