First Memoir on the Theory of Analytical Operations

Author(s) R. Murphy
Year 1837
Volume 127
Pages 33
Language en
Journal Philosophical Transactions of the Royal Society of London

Full Text (OCR)

PHILOSOPHICAL TRANSACTIONS. XII. First Memoir on the Theory of Analytical Operations. By the Rev. R. Murphy, M.A. Fellow of Caius College, Honorary Member of various Philosophical Societies. Communicated by J. W. Lubbock, Esq. F.R.S. Received December 8,—Read December 22, 1836. § 1. 1. The elements of which every distinct analytical process is composed are three, namely, first the Subject, that is, the symbol on which a certain notified operation is to be performed; secondly, the Operation itself, represented by its own symbol; thirdly, the Result, which may be connected with the former two by the algebraic sign of equality. Thus let \(a\) be the subject representing, we may suppose, some quantity, \(b\) the symbol for multiplication by \(b\), and \(c\) the result or product; for greater distinctness let the subject be inclosed in square brackets, the analytical process in this case is \([a] b = c\). Again, let \(x^n\) be the subject, \(\psi\) a symbol of operation denoting that \(x\) must be changed into \(x + h\), and \((x + h)^n\) will evidently be the result, or \[ [x^n] \psi = (x + h)^n. \] Again, let \(a^x\) be the subject, \(\Delta\) a symbol of operation, which indicates that we are to subtract the subject itself from that which it becomes when \(x\) is changed into \(x + h\), which is usually called taking the finite difference, then the result is \[ a^{x+h} - a^x, \] or \[ [a^x] \Delta = [a^x] (a^h - 1). \] Lastly, let \(d_x\) denote the operation of taking the finite difference, and after dividing it by \(h\), then putting \(h = 0\), which is the same as finding the differential coefficient of the subject, which we may suppose represented by \(u\), then \[ [u] d_x = \frac{du}{dx}. \] 2. The operations written as above are *monomial*, consisting of only one term; and *polynomial* operations give the sums or the differences of the results of the respective monomials of which they are formed, according as these monomials are affected by the signs $+$ or $-$. Thus if 1 as an operation be understood as the multiplying of the subject by unity which leaves it unaltered, and the symbols $\psi$, $\Delta$ have the same signification as in art. 1, then $$[u] (\psi - 1) = [u] \Delta$$ $$[u] (\Delta + 1) = [u] \psi,$$ where the subject $u$ is any quantity whatever. When general relations, such as these, between different symbols exist independently of the particular value of the subject, we may abstract the consideration of the latter, and the sign $=$ between symbols of operation being understood to indicate that they are universally equivalent, the symbols used in art. 1 would have the following relations independent of the subject. $$\Delta = \psi - 1, \quad \psi = \Delta + 1, \quad \psi - \Delta = 1,$$ $$d_x = \frac{\Delta}{h} = \frac{\psi - 1}{h},$$ when $h$ is put $= 0$. 3. A compound operation consists of a series of simple defined operations, *monomial* or *polynomial*, the subject of each individual in this series after the first being the result of all the preceding operations. 
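The operational scheme of arts. 1–3 lends itself to a direct mechanical check. The sketch below (Python with sympy; the names psi, Delta and d_x are our own labels for the memoir's symbols, and the sample subjects are arbitrary) verifies the results quoted above, including \([a^x]\Delta = a^x(a^h - 1)\) and the passage from \(\Delta/h\) to \(d_x\).

```python
# Sketch of arts. 1-3: operations modelled as functions acting on a subject.
import sympy as sp

x, h, a = sp.symbols('x h a', positive=True)
n = sp.symbols('n')

def psi(u):            # the operation psi: change x into x + h
    return u.subs(x, x + h)

def Delta(u):          # the finite difference, Delta = psi - 1
    return sp.expand(psi(u) - u)

def d_x(u):            # the differential coefficient
    return sp.diff(u, x)

print(psi(x**n))                                     # (x + h)**n
print(sp.simplify(Delta(a**x) - a**x*(a**h - 1)))    # 0, i.e. [a^x] Delta = a^x (a^h - 1)

u = x**3 + a*x
print(sp.simplify(sp.limit(Delta(u)/h, h, 0) - d_x(u)))   # 0: d_x is Delta/h with h = 0
```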
Thus $[x^2] a \psi \Delta d_x$ denotes that first $x^2$ must be multiplied by $a$, which gives $a x^2$; then the operation $\psi$, which denotes the putting $x + h$ for $x$, gives $a (x + h)^2$; next the operation $\Delta$ will give $a (x + 2 h)^2 - a (x + h)^2$, or $a (2 h x + 3 h^2)$; and lastly, the symbol of taking the differential coefficient relative to $x$ gives $2 h a$: the final or complete result is therefore $$[x^2] a \psi \Delta d_x = 2 h a.$$ When all the symbols in a compound operation are exactly the same, then for abridgment the whole operation is represented by writing an index to the right of the symbol for the simple operation over it, this index expressing the number of times the simple operation is repeated. Thus $$[x^2] \psi^3 = [x^2] \psi \psi \psi = [(x + h)^2] \psi \psi = [(x + 2 h)^2] \psi = (x + 3 h)^2.$$ But when the simple operations are different, they must be written consecutively in the order in which they are to be performed, unless that order of arrangement by the mutual relations of the operations should be indifferent. Thus $$[x^n] x \psi = [x^{n+1}] \psi = (x + h)^{n+1}$$ $$[x^n] \psi x = [(x + h)^n] x = x (x + h)^n.$$ But \[ [x^n] a \psi = [a x^n] \psi = a (x + h)^n \] \[ [x^n] \psi a = [(x + h)^n] a = a (x + h)^n. \] in the latter case the order of the operations is indifferent, because the operation \( \psi \) does not act on the multiplier \( a \), and for the contrary reason the order of \( x \psi \) is fixed in the first case. Operations are therefore relatively fixed or free; in the first case a change in the order in which they are to be performed would affect the result, in the second case it would not do so. In a compound operation any part of the symbols may be taken conjointly with the subject in the square brackets, their result being the subject for the compound operation of the remaining symbols. Thus \[ [x^2] a \psi \Delta d_x = [a x^2] \psi \Delta d_x. \] § 2. 4. Linear operations in analysis are those of which the action on any subject is made up by the several actions on the parts, connected by the sign \(+ \) or \(-\), of which the subject is composed. Let \( p \) denote the operation of multiplying by a quantity \( p \), then \[ [a + b] p = [a] p + [b] p; \] this operation is therefore linear. Let \( \psi \) denote the operation of changing \( x \) into \( x + h \), then if \( f(x), \varphi(x) \) be any functions of \( x \), we have \[ [f(x) + \varphi(x)] \psi = f(x + h) + \varphi(x + h) = [f(x)] \psi + [\varphi(x)] \psi, \] which shows that \( \psi \) is also a linear operation. Let \( X + \xi \) represent the subject acted on, and \( \theta, \theta' \) any linear operations, then \[ [X + \xi] (\theta + \theta') = [X + \xi] \theta + [X + \xi] \theta' \] \[ = [X] \theta + [\xi] \theta + [X] \theta' + [\xi] \theta'; \] hence polynomial operations of which the parts are linear possess themselves the same character. Thus \( \Delta \) the operation of Finite Differences is linear, because \( \Delta = \psi - 1 \), the operation \( \psi \) of changing \( x \) into \( x + h \) and the multiplying by unity being both linear. Also \[ [X + \xi] \theta \theta' = [X \theta + \xi \theta'] \theta' \] \[ = [X] \theta \theta' + [\xi] \theta \theta', \] which shows that the compounds of linear operations are also linear. 
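The compound operation \([x^2] a \psi \Delta d_x\) and the distinction between relatively fixed and relatively free operations can likewise be checked mechanically; the following sketch (sympy again, with compositions read from left to right as in the text, helper names ours) reproduces the value \(2ha\) and the two results of \([x^n] x \psi\) and \([x^n] \psi x\).

```python
# Sketch of art. 3's compound operation and of relatively fixed operations.
# Compositions are read from left to right: [u] A B means B(A(u)).
import sympy as sp

x, h, a = sp.symbols('x h a')

psi   = lambda u: u.subs(x, x + h)        # x -> x + h
Delta = lambda u: sp.expand(psi(u) - u)   # finite difference
d_x   = lambda u: sp.diff(u, x)           # differential coefficient
mul   = lambda c: (lambda u: c*u)         # the multiplier c as an operation

# [x^2] a psi Delta d_x = 2 h a
print(sp.simplify(d_x(Delta(psi(mul(a)(x**2)))) - 2*h*a))   # 0

# x and psi are relatively fixed: the order of composition matters
print(psi(mul(x)(x**3)))    # [x^3] x psi = (x + h)**4
print(mul(x)(psi(x**3)))    # [x^3] psi x = x*(x + h)**3
```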
The operation of taking the differential coefficient is therefore linear, for the operation $\Delta$ of finite differences and that of dividing by $h$ being both of that character, the compound $\Delta \cdot \frac{1}{h}$ is generally linear, and must remain so in the limiting state when $h$ vanishes. Hence every function of a linear operation is itself of the same class of operations. 5. The composition of polynomial linear operations is effected in the same manner as algebraic multiplication; with, however, this peculiarity—the order of the compound operations, when not relatively free, must be strictly preserved. Thus let $\theta$, $\theta'$, $\iota$, $\iota'$ represent linear operations: then $$[u] (\theta + \theta') (\iota + \iota') = [u] (\theta \iota + \theta' \iota + \theta \iota' + \theta' \iota');$$ for $\iota$, $\iota'$ being linear will act on each of the parts $[u] \theta$, $[u] \theta'$, which form their subject. Again, when $\theta$, $\theta'$ are relatively fixed, $[u] (\theta + \theta')^2$ will not, as in algebraic involution, be identical with $[u] (\theta^2 + 2 \theta \theta' + \theta'^2)$, its correct value being $[u] (\theta^2 + \theta' \theta + \theta \theta' + \theta'^2)$; which, however, is the same as the former expression, when $\theta$, $\theta'$ are relatively free; for then $\theta' \theta = \theta \theta'$. Similarly $$(\theta + \theta')^3 = (\theta^3 + \theta' \theta^2 + \theta \theta' \theta + \theta^2 \theta') + (\theta'^2 \theta + \theta' \theta \theta' + \theta \theta'^2 + \theta'^3);$$ or putting $\theta^{(2)} \theta'$ for the sum of the compound operations in which $\theta$ enters twice and $\theta'$ once, and $\theta \theta'^{(2)}$ for the sum of those containing $\theta$ once and $\theta'$ twice, this may also be written $$(\theta + \theta')^3 = \theta^3 + \theta^{(2)} \theta' + \theta \theta'^{(2)} + \theta'^3;$$ and employing a similar notation, we shall have, when $n$ is a positive integer, $$(\theta + \theta')^n = \theta^n + \theta^{(n-1)} \theta' + \theta^{(n-2)} \theta'^{(2)} + \ldots + \theta^{(2)} \theta'^{(n-2)} + \theta \theta'^{(n-1)} + \theta'^n.$$ The term $\theta^{(n-1)} \theta'$ in this formula is the sum of $n$ terms, formed by placing $\theta'$ at the beginning, at the end, and in all the $n - 2$ intermediate positions of the expression $\theta \theta \theta \ldots (n - 1)$ times. Similarly, by the known theory of permutations, $\theta^{(n-2)} \theta'^{(2)}$ is the sum of $\frac{n(n-1)}{1 \cdot 2}$ terms, &c. Hence when $\theta$, $\theta'$ are relatively free, we have $$(\theta + \theta')^n = \theta^n + n \theta^{n-1} \theta' + \frac{n(n-1)}{1 \cdot 2} \theta^{n-2} \theta'^2 + \ldots + n \theta \theta'^{n-1} + \theta'^n.$$ 6.
The following theorems are immediately deducible from the general expansion of the preceding article: Since $\Delta = \psi - 1$, therefore $$\Delta^n = (\psi - 1)^n = \psi^n - n \psi^{n-1} + \frac{n(n-1)}{1 \cdot 2} \psi^{n-2} - \frac{n(n-1)(n-2)}{1 \cdot 2 \cdot 3} \psi^{n-3} + \ldots + (-1)^n;$$ or, if we introduce the subject $f(x)$, and observe that $[f(x)] \psi^n = f(x + nh)$, then $$\Delta^n f(x) = f(x + nh) - nf\{x + (n-1) \cdot h\} + \frac{n(n-1)}{1 \cdot 2} f\{x + (n-2) \cdot h\}, \ldots,$$ Again, \( \psi = \Delta + 1 \), therefore \[ \psi^n = (\Delta + 1)^n = 1 + n \Delta + \frac{n(n-1)}{1 \cdot 2} \Delta^2 + \frac{n(n-1)(n-2)}{1 \cdot 2 \cdot 3} \Delta^3 + \ldots + \Delta^n; \] or, introducing the subject \( f(x) \), \[ f(x + nh) = f(x) + n \Delta f(x) + \frac{n(n-1)}{1 \cdot 2} \Delta^2 f(x) + \frac{n(n-1)(n-2)}{1 \cdot 2 \cdot 3} \Delta^3 f(x) + \ldots \quad (II.) \] Again, \( 1 = \psi - \Delta \), therefore \[ 1 = (\psi - \Delta)^n = \psi^n - n \psi^{n-1} \Delta + \frac{n(n-1)}{1 \cdot 2} \psi^{n-2} \Delta^2 - \frac{n(n-1)(n-2)}{1 \cdot 2 \cdot 3} \psi^{n-3} \Delta^3 + \ldots + (-1)^n \Delta^n, \] or \[ f(x) = f(x + nh) - n \Delta f(x + (n-1)h) + \frac{n(n-1)}{1 \cdot 2} \Delta^2 f(x + (n-2)h) - \frac{n(n-1)(n-2)}{1 \cdot 2 \cdot 3} \Delta^3 f(x + (n-3)h), \ldots \quad (III.) \] In the expansion (II.) put \( nh = k \), or \( n = \frac{k}{h} \): therefore \[ f(x + k) = f(x) + k \frac{\Delta}{h} f(x) + \frac{k(k-h)}{1 \cdot 2} \left( \frac{\Delta}{h} \right)^2 f(x) + \frac{k(k-h)(k-2h)}{1 \cdot 2 \cdot 3} \left( \frac{\Delta}{h} \right)^3 f(x), \ldots \quad (IV.) \] Now suppose \( n \) to increase infinitely, \( k \) remaining constant, the quantity \( h \), which is the increment of \( x \), diminishes infinitely, and the operation \( \frac{\Delta}{h} \) in the limiting state when \( h \) vanishes, becomes \( d_x \). Hence \[ f(x + k) = f(x) + k d_x f(x) + \frac{k^2}{1 \cdot 2} d_x^2 f(x) + \frac{k^3}{1 \cdot 2 \cdot 3} d_x^3 f(x) + \ldots \quad (IV.) \] The expansions (II.) (IV.) are Taylor's theorems for the development of functions by means of their finite differences and their differential coefficients respectively. Again, if \( h \) be written for \( k \) in the expansion (IV.), and the subject be omitted, it becomes \[ \psi = 1 + h d_x + \frac{h^2}{1 \cdot 2} d_x^2 + \frac{h^3}{1 \cdot 2 \cdot 3} d_x^3 + \ldots \quad (V.) \] and \[ \Delta = h d_x + \frac{h^2}{1 \cdot 2} d_x^2 + \frac{h^3}{1 \cdot 2 \cdot 3} d_x^3, \ldots \quad (V.) \] § 3. 7. The expansion given for the operation \( \psi \), of changing \( x \) into \( x + h \), possesses remarkable properties, which we propose to develop in the present section, from the importance of the theorem of Taylor, which it expresses. 
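A quick check of the preceding expansions, before the properties of \(\psi\) are developed further: the sketch below verifies (II.) for an integer index and (V.) for a polynomial subject, for which both series terminate (sympy; the particular f, h and n are arbitrary choices).

```python
# Check of expansion (II.) for an integer index, and of (V.), on a polynomial
# subject, for which both series terminate.
import sympy as sp

x = sp.symbols('x')
f = x**5 - 2*x**2 + 1
h, n = sp.Rational(1, 3), 4

Delta = lambda u: u.subs(x, x + h) - u
Dk = [f]
for _ in range(6):
    Dk.append(sp.expand(Delta(Dk[-1])))               # Delta^k f(x)

# (II.): f(x + n h) = f(x) + n Delta f + n(n-1)/2 Delta^2 f + ...
rhs = sum(sp.binomial(n, k)*Dk[k] for k in range(n + 1))
print(sp.simplify(f.subs(x, x + n*h) - rhs))          # 0

# (V.): psi = 1 + h d_x + h^2 d_x^2 / (1*2) + ...
taylor = sum(sp.diff(f, x, k)*h**k/sp.factorial(k) for k in range(6))
print(sp.simplify(taylor - f.subs(x, x + h)))         # 0
```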
Representing, as usual, by \( \psi \) the operation of changing \( x \) into \( x + h \), and by \( \psi' \) that of changing \( x \) into \( x + h' \), the quantities \( h, h' \) being independent of \( x \), and, lastly, denoting by \( \psi_i \) the operation which changes \( x \) into \( x + h + h' \), we have obviously the identity \[ \psi \psi' = \psi_i; \] and putting for these symbols their expansions found in the preceding article, we get \[ \left\{ 1 + h d_x + \frac{h^2 d_x^2}{1 \cdot 2} + \frac{h^3 d_x^3}{1 \cdot 2 \cdot 3} + \ldots \right\} \left\{ 1 + h' d_x + \frac{h'^2 d_x^2}{1 \cdot 2} + \frac{h'^3 d_x^3}{1 \cdot 2 \cdot 3} + \ldots \right\} \] \[ = 1 + (h + h') d_x + \frac{(h + h')^2 d_x^2}{1 \cdot 2} + \frac{(h + h')^3 d_x^3}{1 \cdot 2 \cdot 3} + \ldots; \] a relation which may be verified by actually compounding the two polynomials of the first member. Now in this act of verification the operations \( h d_x, h' d_x \) have only such properties as are common to any two linear operations which are relatively free: hence if \( \theta, \theta' \) represent any such operations, we have generally \[ \left\{ 1 + \theta + \frac{\theta^2}{1 \cdot 2} + \frac{\theta^3}{1 \cdot 2 \cdot 3} + \ldots \right\} \left\{ 1 + \theta' + \frac{\theta'^2}{1 \cdot 2} + \frac{\theta'^3}{1 \cdot 2 \cdot 3} + \ldots \right\} \] \[ = 1 + (\theta + \theta') + \frac{(\theta + \theta')^2}{1 \cdot 2} + \frac{(\theta + \theta')^3}{1 \cdot 2 \cdot 3}; \] and it is easy to extend an identity exactly similar to any number of operations which are all relatively free; for in introducing a new polynomial, \( 1 + \theta'' + \frac{\theta''^2}{1 \cdot 2} + \frac{\theta''^3}{1 \cdot 2 \cdot 3} + \ldots \), we have only to regard \( \theta + \theta' \) as itself a free linear operation, and therefore the result would be \[ 1 + (\theta + \theta' + \theta'') + \frac{(\theta + \theta' + \theta'')^2}{1 \cdot 2} + \frac{(\theta + \theta' + \theta'')^3}{1 \cdot 2 \cdot 3} + \ldots. \] 8. If the subject be a function of two variables, \( x \) and \( y \), then using \( \psi_x \) to denote that \( x \) must be changed into \( x + h \), and \( \psi_y \) that \( y \) must become \( y + k \), these operations are relatively free, it being of no consequence which operation is first performed; therefore the operations \( \Delta_x, \Delta_y \) of taking the corresponding finite differences are also free; from whence, lastly, the differentiations relative to \( x \) and \( y \), represented by \( d_x, d_y \), must be of the same character. Now since \[ \psi_x = 1 + h d_x + \frac{h^2 d_x^2}{1 \cdot 2} + \frac{h^3 d_x^3}{1 \cdot 2 \cdot 3} + \ldots, \] and \[ \psi_y = 1 + k d_y + \frac{k^2 d_y^2}{1 \cdot 2} + \frac{k^3 d_y^3}{1 \cdot 2 \cdot 3} + \ldots, \] therefore, by the general identity of (7.), we have \[ \psi_x \psi_y = 1 + (h d_x + k d_y) + \frac{(h d_x + k d_y)^2}{1 \cdot 2} + \frac{(h d_x + k d_y)^3}{1 \cdot 2 \cdot 3} + \ldots. 
\] And now introducing the subject \( f(x, y) \), and expanding the terms in the right member of this identity by the formula given in the preceding section, it will become, in the common notation, \[ f(x + h, y + k) = f(x, y) + \frac{df(x, y)}{dx} \cdot h + \frac{d^2f(x, y)}{dx^2} \cdot \frac{h^2}{1 \cdot 2} + \frac{d^3f(x, y)}{dx^3} \cdot \frac{h^3}{1 \cdot 2 \cdot 3} + \ldots \] \[ + \frac{df(x, y)}{dy} \cdot k + \frac{d^2f(x, y)}{dx \cdot dy} \cdot hk + \frac{d^3f(x, y)}{dx^2 \cdot dy} \cdot \frac{hk^2}{1 \cdot 2} + \ldots \] \[ + \frac{d^2f(x, y)}{dy^2} \cdot \frac{k^2}{1 \cdot 2} + \frac{d^3f(x, y)}{dx \cdot dy^2} \cdot \frac{hk^2}{1 \cdot 2} + \ldots \] \[ + \frac{d^3f(x, y)}{dy^3} \cdot \frac{k^3}{1 \cdot 2 \cdot 3} + \ldots \] If a third variable \( z \) becomes \( z + l \) by a third operation \( \psi'' \), then the actual composition of the equivalent polynomial \( 1 + ld_z + \frac{l^2d_z^2}{1 \cdot 2} + \frac{l^3d_z^3}{1 \cdot 2 \cdot 3} + \ldots \) gives \[ \psi \psi' \psi'' = 1 + (hd_x + kd_y + ld_z) + \frac{1}{1 \cdot 2}(hd_x + kd_y + ld_z)^2 + \frac{1}{1 \cdot 2 \cdot 3}(hd_x + kd_y + ld_z)^3 + \ldots \] and so this method may be extended, whatever be the number of variables. 9. Let \( \theta \) denote any linear operation, and make \[ \Theta = 1 + \theta + \frac{\theta^2}{1 \cdot 2} + \frac{\theta^3}{1 \cdot 2 \cdot 3} + \ldots \] then \( \Theta \) is itself a linear operation; and if in (7.) we put \( \theta = \theta' = \theta'' = \ldots \), and suppose the number of these operations to be \( n \), we have by that article \[ \Theta^n = 1 + n\theta + \frac{n^2\theta^2}{1 \cdot 2} + \frac{n^3\theta^3}{1 \cdot 2 \cdot 3} + \ldots \] Again, suppose \[ \varphi = 1 + \frac{\theta}{m} + \frac{\theta^2}{1 \cdot 2 m^2} + \frac{\theta^3}{1 \cdot 2 \cdot 3 m^3} + \ldots \] \( m \) being any positive integer, we have by the same principles \[ \varphi^m = 1 + \theta + \frac{\theta^2}{1 \cdot 2} + \frac{\theta^3}{1 \cdot 2 \cdot 3} + \ldots = \Theta \] \[ \varphi^n = 1 + \frac{n}{m} \cdot \theta + \frac{n^2}{m^2} \cdot \frac{\theta^2}{1 \cdot 2} + \frac{n^3}{m^3} \cdot \frac{\theta^3}{1 \cdot 2 \cdot 3} + \ldots \] Hence if \( \varphi \) denote an operation which, repeated \( m \) times, gives \( \Theta \), and in which we shall employ the notation \( \Theta^{\frac{1}{m}} \), then \( \varphi^n \) denotes the same as \( \Theta^{\frac{n}{m}} \), the latter being the operation which, repeated \( m \) times, is the same as \( \Theta^n \). With this meaning understood, it follows that \[ \Theta^{\frac{n}{m}} = 1 + \frac{n}{m} \cdot \theta + \frac{n^2}{m^2} \cdot \frac{\theta^2}{1 \cdot 2} + \frac{n^3}{m^3} \cdot \frac{\theta^3}{1 \cdot 2 \cdot 3} + \ldots \] Lastly, if we put \[ \Omega = 1 - n\theta + \frac{n^2\theta^2}{1 \cdot 2} - \frac{n^3\theta^3}{1 \cdot 2 \cdot 3} + \ldots \] compound this by the formula of (7.) with the operation \[ \varphi^n = 1 + n\theta + \frac{n^2\theta^2}{1 \cdot 2} + \frac{n^3\theta^3}{1 \cdot 2 \cdot 3} + \ldots \] we have \[ \Omega \varphi^n = 1 + (n - n) \cdot \theta + \frac{(n-n)^2 \theta^2}{1 \cdot 2} + &c. = 1; \] or if we introduce any subject \( f(x) \); \( \Omega, \varphi^n \) being necessarily free, \[ [f(x)] \Omega \varphi^n = [f(x)] \varphi^n \Omega = f(x). \] Hence \( \Omega, \varphi^n \) are mutually inverse operations, the action of the one on the result of the other restoring the original subject. \( \Omega \) may be represented by \( \varphi^{-n} \), attaching to this symbol the meaning here assigned. 
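The index laws of art. 9 can be illustrated with \(\theta = h\, d_x\) acting on a polynomial, for which every series terminates. In the sketch below (sympy; series_op and the truncation K are our own devices) \(\Theta\) repeated three times agrees with the series containing \(n = 3\), and the series formed with \(-n\) restores the subject, exhibiting the mutually inverse operations.

```python
# Sketch of art. 9 with theta = h d_x acting on a polynomial subject.
import sympy as sp

x, h = sp.symbols('x h')
u = x**4 + 3*x
K = 5                                  # theta^5 u = 0, so K terms are exact

theta = lambda v: h*sp.diff(v, x)      # a linear operation

def series_op(c, v):
    """Apply 1 + c*theta + c^2 theta^2/(1*2) + ... + c^K theta^K/K! to v."""
    total, term = 0, v
    for k in range(K + 1):
        total += c**k * term / sp.factorial(k)
        term = theta(term)
    return sp.expand(total)

n = 3
lhs = series_op(1, series_op(1, series_op(1, u)))   # Theta performed n = 3 times
rhs = series_op(n, u)                               # the series 1 + n theta + ...
print(sp.simplify(lhs - rhs))                       # 0

# the series formed with -n undoes Theta^n: the subject is restored
print(sp.simplify(series_op(-n, series_op(n, u)) - u))   # 0
```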
If \( n \) represent any quantity, positive, negative, integer, or fractional, understanding the conventional notation by the definitions laid down, it follows, that if \[ \Theta = 1 + \theta + \frac{\theta^2}{1 \cdot 2} + \frac{\theta^3}{1 \cdot 2 \cdot 3} + &c., \] then shall \[ \Theta^n = 1 + n \theta + \frac{n^2 \theta^2}{1 \cdot 2} + \frac{n^3 \theta^3}{1 \cdot 2 \cdot 3} + &c. \] 10. Suppose \( \theta \) to be simply the operation of multiplying by unity, then \[ \Theta = 1 + 1 + \frac{1}{1 \cdot 2} + \frac{1}{1 \cdot 2 \cdot 3} + &c.; \] and putting as usual \( \varepsilon \) for the sum of this series, \( \Theta \) represents the operation of multiplying by \( \varepsilon \); put therefore, in the formulæ of art. 9, 1 for \( \theta \) and \( \varepsilon \) for \( \Theta \), and then \[ \varepsilon^n = 1 + n + \frac{n^2}{1 \cdot 2} + \frac{n^3}{1 \cdot 2 \cdot 3} + &c. \] The properties of this series, when in any way involved, are common, as has been seen, to those of a series in which \( \theta \), any linear operation, is put for \( n \), and therefore we may write the purely symbolical identity \[ \varepsilon^\theta = 1 + \theta + \frac{\theta^2}{1 \cdot 2} + \frac{\theta^3}{1 \cdot 2 \cdot 3} + &c., \] where \( \theta \) may be an imaginary multiplier, or any species of linear operation. Thus if \( \theta = h d_x \) and \( \psi_x \) denote the operation of changing \( x \) into \( x + h \), we have \[ \psi_x = \varepsilon^{h d_x}. \] Similarly \[ \psi_y = \varepsilon^{k d_y}, \] \[ \psi_x \psi_y = \varepsilon^{(h d_x + k d_y)}, \] all of which are proved by the formulæ of art. 8. 11. Having seen in the course of the investigations of this section the signification of the indices of linear operations when fractional, negative, or even purely symbolical, it is easy to prove by similar steps that in all cases where \( \theta, \theta' \) are relatively free, \[ (\theta + \theta')^n = \theta^n + n \theta^{n-1} \theta' + \frac{n(n-1)}{1 \cdot 2} \cdot \theta^{n-2} \theta'^2 + &c.; \] for since \((\theta + \theta')^n (\theta + \theta')^m = (\theta + \theta')^{n+m}\), it follows that the composition of the polynomials \[ \left\{ \theta^n + n \theta^{n-1} \theta' + \frac{n(n-1)}{1 \cdot 2} \theta^{n-2} \theta'^2 + \ldots \right\} \cdot \left\{ \theta^m + m \theta^{m-1} \theta' + \frac{m(m-1)}{1 \cdot 2} \theta^{m-2} \theta'^2 + \ldots \right\} \] \[= \theta^{n+m} + (n + m) \theta^{n+m-1} \theta' + \frac{(n + m)(n + m - 1)}{1 \cdot 2} \theta^{n+m-2} \theta'^2 + \ldots;\] and since nothing in the actual verification of this identity depends on \( n \), \( m \) being integers, for which case the expansion has been proved, the identity holds generally, and therefore if \(m = n\), and we take \(p\) such polynomials, we have \[ \left\{ \theta^n + n \theta^{n-1} \theta' + \frac{n(n-1)}{1 \cdot 2} \theta^{n-2} \theta'^2 + \ldots \right\}^p = \theta^{np} + np \theta^{np-1} \theta' + \frac{(np)(np-1)}{1 \cdot 2} \theta^{np-2} \theta'^2 + \ldots \] \[ \left\{ \theta^n + n \theta^{n-1} \theta' + \ldots \right\}^q = \theta^{nq} + nq \theta^{nq-1} \theta' + \ldots \] Put \(n = \frac{q}{p}\), \[ \therefore \left\{ \theta^{\frac{q}{p}} + \frac{q}{p} \theta^{\frac{q}{p}-1} \theta' + \ldots \right\}^p = \theta^q + q \theta^{q-1} \theta' + \ldots = (\theta + \theta')^q, \] or \[ (\theta + \theta')^{\frac{q}{p}} = \theta^{\frac{q}{p}} + \frac{q}{p} \theta^{\frac{q}{p}-1} \theta' + \frac{\frac{q}{p}\left(\frac{q}{p}-1\right)}{1 \cdot 2} \theta^{\frac{q}{p}-2} \theta'^2 + \ldots \]
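The purely symbolical exponential of art. 10 can be checked on a polynomial subject of two variables, for which the series for \(\varepsilon^{h d_x + k d_y}\) terminates; the sketch below (sympy; the sample f is arbitrary) recovers \([f]\psi_x\psi_y = f(x + h, y + k)\).

```python
# Check of psi_x psi_y = epsilon^(h d_x + k d_y) on a polynomial subject.
import sympy as sp

x, y, h, k = sp.symbols('x y h k')
f = x**3*y + x*y**2 - 4*y

op = lambda v: h*sp.diff(v, x) + k*sp.diff(v, y)     # the operation h d_x + k d_y

total, term = 0, f
for m in range(7):                                   # the series terminates here
    total += term / sp.factorial(m)
    term = op(term)

print(sp.expand(total - f.subs({x: x + h, y: y + k})))    # 0
```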
Again, since \[ \left\{ \theta^n + n \theta^{n-1} \theta' + \frac{n(n-1)}{1 \cdot 2} \theta^{n-2} \theta'^2 + \ldots \right\} \cdot \left\{ \theta^{-n} - n \theta^{-n-1} \theta' + \frac{n(n+1)}{1 \cdot 2} \theta^{-n-2} \theta'^2 + \ldots \right\} = 1, \] we have \[ (\theta + \theta')^{-n} = \theta^{-n} - n \theta^{-n-1} \theta' + \ldots \] These formulæ, applied to quantities or fixed operations, suffice after the usual methods for calculating their finite differences, differential coefficients, &c. § 4. 12. Suppose \(\theta\) to represent any operation which performed on a subject \([u]\) gives \(y\) as the result, then the inverse operation is denoted by \(\theta^{-1}\), and is such that when \([y]\) is made the subject \(u\) becomes the result. Thus \(b\) denoting the operation of multiplying by a quantity \(b\), we have \[ [a] b = c \quad \therefore [c] b^{-1} = a. \] Again, \(\psi\) denoting the operation of changing \(x\) into \(x + h\), we have \[ [f(x)] \psi = \varphi(x), \quad \text{where } \varphi(x) = f(x + h) \] \[ \therefore [\varphi(x)] \psi^{-1} = f(x), \quad \text{where } f(x) = \varphi(x - h) \] and in general if \[ [u] \theta = y \quad \text{then} \quad [y] \theta^{-1} = u, \quad \text{and therefore} \quad [y] \theta^{-1} \theta = y, \] the compound operation \( \theta^{-1} \theta \) or \( \theta^0 \) being equivalent to no operation. To invert a compound operation we must invert the order as well as the nature of the component operations, which rule must be strictly observed when the latter are relatively fixed. For let \[ [x] \theta = y, \quad [y] \theta' = z, \quad \text{and therefore} \quad [x] \theta \theta' = z. \] Then \[ [z] \theta'^{-1} = y, \quad [y] \theta^{-1} = x, \quad \text{and therefore} \quad [z] \theta'^{-1} \theta^{-1} = x, \] which proof is applicable, whatever be the number of the component operations. If all the component operations be alike, we have then but to change the sign of the index to obtain the inverse, thus if \[ [u] \theta^n = y, \quad \text{then} \quad [y] \theta^{-n} = u, \] as in the last section. 13. The consideration of inverse operations leads to the introduction of the appendage, which when the operations are linear must be annexed to the result to give it the most general value of which it is susceptible, for the inverses of such operations are themselves linear; thus if \( \theta \) be a linear operation, \[ [X + \xi] \theta = [X] \theta + [\xi] \theta. \] Put \([X] \theta = X_1\), \([\xi] \theta = \xi_1\), hence \([X_1 + \xi_1] \theta^{-1} = X + \xi\) \[ = [X_1] \theta^{-1} + [\xi_1] \theta^{-1}, \] which shows the linearity of \( \theta^{-1} \). Now suppose the nature of \( \theta \) to be such that \([P] \theta = 0\), the subject \( P \) being thus in some way connected with the nature of the operation \( \theta \), then if we suppose \[ [X] \theta = y, \quad \text{we have also} \quad [X + P] \theta = y, \quad \text{hence} \quad [y] \theta^{-1} = X + P, \] this being the same as writing \( y + 0 \) for \( y \), since \([0] \theta^{-1} = P \). The appendage therefore in a linear operation is the result of its action on zero; \( P \) will express a form, but its magnitude must be susceptible of an infinity of values, that is, it contains arbitrary constants which enter as multipliers, for if \( a \) be such a constant, we have in general \([X] a \theta = [X] \theta a \); and supposing \( X = 0 \), we have \([0 \cdot a] \theta = [0] \theta a \): therefore whatever particular value may be assigned to \([0] \theta \), a more comprehensive value is attained by its arbitrary multiplication by \( a \).
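A small illustration of the appendage of art. 13: if \([P]\theta = 0\) the inverse \(\theta^{-1}\) is determined only to within such a \(P\). With \(\theta = d_x\) the annihilated subjects are the constants, so any particular value of \([u]d_x^{-1}\) may carry an arbitrary constant (sympy sketch; C and the sample u are ours).

```python
# Illustration of the appendage: [C] d_x = 0, so [u] d_x^(-1) is determined
# only to within an arbitrary constant C.
import sympy as sp

x, C = sp.symbols('x C')
u = sp.cos(x) + x**2

print(sp.diff(C, x))                      # 0
v = sp.integrate(u, x) + C                # one value of [u] d_x^(-1), with its appendage
print(sp.simplify(sp.diff(v, x) - u))     # 0: every such v is carried back to u by d_x
```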
A multiplier is the most general form in which the operation represented by \( a \) can enter when \( X \) is a function of but one variable; but it admits of other forms more extended in cases of several variables, as may easily be perceived: thus \([f(y)] \psi_x = f(y), \psi_x \) denoting the operation of changing \( x \) into \( x + h \): hence \([f(y)] \Delta_x = 0 \). Then if \( X \) be any function of \( x \), and \( \xi \) be any particular value of \([X] \Delta_x^{-1} \), we shall have more generally \([X] \Delta_x^{-1} = \xi + f(y), \) which includes the former; since, the form of \( f(y) \) being arbitrary, we may have \( \xi \) alone amongst the infinite number of values of \( \xi + f(y) \). In compound operations, the appendage obtained by the first simple operation becomes a new subject for the succeeding operations, each of which may in like manner introduce a new appendage. 14. The operation \( \psi_x \), taken directly or inversely, is incapable of introducing any appendage: for suppose \([0] \psi_x^{-1} = \varphi(x)\), then \([\varphi(x)] \psi_x = 0\), or \( \varphi(x + h) = 0 \), which identity being general, we may put \( x \) instead of \( x + h \), which gives \( \varphi(x) = 0 \); from which it also follows that \([0] \psi_x^{-n} = 0 \). To find the appendage introduced by \( d_x^{-1} \), suppose in the same way \([0] d_x^{-1} = \varphi(x) \), hence \([\varphi(x)] d_x = 0 \), therefore \([\varphi(x)] d_x^2 = 0 \), \([\varphi(x)] d_x^3 = 0 \), &c.; and since \[ \varphi(x + h) = \varphi(x) + h \frac{d \varphi(x)}{dx} + \frac{h^2}{1 \cdot 2} \cdot \frac{d^2 \varphi(x)}{dx^2} + &c. \] hence we have \( \varphi(x + h) = \varphi(x) \), and \( h \) being arbitrary, we may put it \(= -x \), therefore \( \varphi(x) = \varphi(0) \), that is, \( \varphi(x) \) is constant relative to \( x \); if therefore \( C \) be any arbitrary quantity independent of \( x \), we have \([0] d_x^{-1} = C \). Again, \[ [0] d_x^{-2} = [0] d_x^{-1} d_x^{-1} = [C] d_x^{-1} + [0] d_x^{-1}, \] as above stated; but since \([Cx] d_x = C \), and \([0] d_x^{-1} = C' \), any constant, therefore \([0] d_x^{-2} = Cx + C' \), and in general \[ [0] d_x^{-n} = A_1 x^{n-1} + A_2 x^{n-2} + A_3 x^{n-3} + \ldots + A_n, \] where \( A_1, A_2, \ldots A_n \) denote constant multipliers. Lastly, let \([0] \Delta_x^{-1} = \varphi(x) \), or \([\varphi(x)] \Delta_x = 0 \), therefore \( \varphi(x + h) = \varphi(x) \), and by Taylor's theorem (dividing by \( \varphi(x) \)), \[ 0 = \frac{\varphi'(x)}{\varphi(x)} h + \frac{\varphi''(x)}{\varphi(x)} \cdot \frac{h^2}{1 \cdot 2} + &c. \] where \( \varphi'(x) \), \( \varphi''(x) \), &c. are the differential coefficients of \( \varphi(x) \); this identity being independent of \( x \), the latter quantity must disappear from the series: put therefore \[ \frac{\varphi'(x)}{\varphi(x)} = \frac{n \sqrt{-1}}{h}, \quad n \text{ being independent of } x; \] hence \[ \varphi''(x) = \frac{n \sqrt{-1}}{h} \varphi'(x) = -\frac{n^2}{h^2} \varphi(x) \quad \varphi'''(x) = -\frac{n^3 \sqrt{-1}}{h^3} \varphi(x), \quad &c. \] therefore \[ 1 = \sqrt{-1} \left\{ n - \frac{n^3}{1 \cdot 2 \cdot 3} + \frac{n^5}{1 \cdot 2 \cdot 3 \cdot 4 \cdot 5} - &c. \right\} + \left\{ 1 - \frac{n^2}{1 \cdot 2} + \frac{n^4}{1 \cdot 2 \cdot 3 \cdot 4} - &c. \right\} \quad (a.)
\] At present suppose \( n \) the least real value which satisfies this equation, then \[ \varphi(x) = C \varepsilon \frac{n x \sqrt{-1}}{h}, \] also since $\varphi(x) = \varphi(x + h)$ change $x$ into $x + h$ $\therefore \varphi(x + h) = \varphi(x + 2h)$ and generally $\varphi(x) = \varphi(x \pm m h)$, $m$ being an integer; and since $\varphi(x) = C \varepsilon^{\frac{mn\sqrt{-1}}{mh}}$ satisfies this equation, it follows that $\pm n \pm 2n \pm 3n$, &c. satisfy the equation (a); the complete appendage will therefore be $$A_1 \varepsilon^{\frac{nx\sqrt{-1}}{h}} + A_2 \varepsilon^{\frac{2nx\sqrt{-1}}{h}} + \ldots + B_1 \varepsilon^{\frac{-nx\sqrt{-1}}{h}} + B_2 \varepsilon^{\frac{-2nx\sqrt{-1}}{h}} + \ldots.$$ the number of constants being infinite. § 5. 15. When the simple operations which compose a compound one are relatively free their places are transmutable, but when fixed a mutation of places will require an alteration in the operations themselves. Let $\psi_x$ denote the changing of $x$ into $x + h$ as before, and let $\theta_x$ be any operation affecting $x$, or, which is the same, fixed relative to $\psi_x$, then considering $\theta_x$ as a subject, put $[\theta_x] \psi_x = \theta'_x$, and if the compound operation $[u] \theta_x \psi_x$ be proposed, its value by transmutation is $[u] \psi_x \theta'_x$, for in the first compound operation $\psi_x$ affects all the preceding symbols as forming its subject. Again, let $[u] \theta_x \Delta_x = y$ be proposed for transmutation, we have $$y = [u] \theta_x (\psi_x - 1)$$ $$= [u] (\psi_x \theta'_x - \theta_x),$$ and putting $\Delta_x + 1$ for $\psi_x$, and $\theta_x \Delta_x$ for the finite difference of $\theta_x$ considered as one operation, we have $$y = [u] (\Delta_x \theta'_x + \theta_x \Delta_x) = [u] \theta_x \Delta_x.$$ Lastly, divide this identity by $h$, and then put $h = 0$. When $\Delta_x/h$ becomes $d_x$, and $\theta'_x$ becomes $\theta_x$, we get for the transmutation of $\theta_x d_x$, $$[u] \theta_x d_x = [u] (d_x \theta_x + \theta_x d_x).$$ These formulae of transmutation separated from the subject are respectively $$\theta_x \psi_x = \psi_x \theta_x \psi_x$$ $$\theta_x \Delta_x = \Delta_x \theta_x \psi_x + \theta_x \Delta_x$$ $$\theta_x d_x = d_x \theta_x + \theta_x d_x.$$ When $\theta_x$ is constant or not dependent on $x$, then $$\theta_x \psi_x = \theta_x \theta_x \Delta_x = 0 \quad \theta_x d_x = 0,$$ and these formulæ will then express merely that $\theta_x \psi_x$, &c. are relatively free. For example, let $\theta_x$ simply denote a multiplier, then $\theta_x \psi$ will be $\theta_x + h$, also a multiplier, $\theta_x \Delta_x = \theta_x + h - \theta_x$, which may be represented by $\theta_x$ and $\theta_x d_x = \text{the limit of } \frac{\theta_x + h - \theta_x}{h}$, when $h$ vanishes, or the differential coefficient of $\theta_x$, which may be denoted by $\theta'_x$, then the general formula becomes \[ [u] \theta_x \psi_x = [u] \psi_x \theta_{x+h} \] \[ [u] \theta_x \Delta_x = [u] (\Delta_x \theta_{x+h} + \theta_x) \] \[ [u] \theta_x d_x = [u] (d_x \theta_x + \theta_x'). 
\] Again, let the subject be \( f(x,y) \), and suppose now \( \theta_x \) to be the operation of changing \( y \) into \( y + \phi(x) \), then \( \theta_x \psi \) or \( \theta_{x+h} \) is the operation of changing \( y \) into \( y + \phi(x + h) \), \( \theta_x \Delta_x \) or \( \theta_x \) is the operation which changes \( f(x,y) \) into \[ f\{x,y + \phi(x + h)\} - f\{x,y + \phi(x)\} = f\{x,y + \phi(x)\} \] \[ + \frac{df\{x,y + \phi(x)\}}{dy} \phi'(x) \cdot h + &c. - f\{x,y + \phi(x)\}, \] and therefore \( \theta_x d_x \) will give \( \frac{df\{x,y + \phi(x)\}}{dy} \) and is equivalent to \( \theta_x d_y \phi'(x) \); in this case we should have \[ [f(x,y)] \theta_x d_x = [f(x,y)] (d_x \theta_x + \theta_x d_y \phi'(x)), \] which result may be also deduced by putting for \( \theta_x \) its equivalent symbol \( \varepsilon^{dy \phi(x)} \), which gives \[ \theta_x d_x = \varepsilon^{dy \phi(x)} \cdot d_y \phi'(x) = \theta_x d_y \phi'(x), \] where \( \phi'(x) \) is written for \( \frac{d\phi(x)}{dx} \). This example shows how operations may themselves be the subjects of other operations. 16. We now proceed to consider the transformed values of \( \theta_x \psi^n_x, \theta_x \Delta^n_x, \theta_x d^n_x \), when \( n \) is any positive integer. First, \[ \theta_x \psi_x = \psi_x \theta_x \psi_x = \phi_x; \] suppose therefore \[ \theta_x \psi_x^2 = \phi_x \psi_x = \psi_x \phi_x \psi_x. \] Now \( \phi_x \psi_x \) regards \( \phi_x \) solely as the subject of the operation \( \psi_x \), and \[ \theta_x \psi_x \psi_x = \psi_x \theta_x \psi_x \] by the first formula; therefore \[ \psi_x \phi_x \psi_x = \psi_x^2 \theta_x \psi_x^2. \] and in general if we suppose \[ \theta_x \psi_x^{n-1} = \psi_x^{n-1} \theta_x \psi_x^{n-1} = \phi_x, \] then \[ \theta_x \psi_x^n = \phi_x \psi_x = \psi_x \phi_x \psi_x; \] but \[ \theta_x \psi_x^{n-1} \psi_x = \psi_x \theta_x \psi_x^n \] by the first formula; therefore \[ \theta_x \psi_x^n = \psi_x^{n-1} \psi_x \theta_x \psi_x^n = \psi_x^n \theta_x \psi_x^n. \] This general formula may be more readily deduced by considering that $\psi_x^n$ is the operation of changing $x$ into $x + n h$, and consequently $\psi_x^n$ may at once be substituted in the first formula for $\psi_x$. 
Secondly, $$\theta_x \Delta_x = \Delta_x \theta_x \psi_x + \theta_x \Delta_x,$$ therefore $$\theta_x \Delta_x^2 = \Delta_x \theta_x \psi_x \Delta_x + \theta_x \Delta_x \Delta_x,$$ but $$\theta_x \psi_x \Delta_x = \Delta_x \cdot \theta_x \psi_x^2 + \theta_x \psi_x \Delta_x,$$ writing $\theta_x \psi_x$ for $\theta_x$; and similarly $$\theta_x \Delta_x \Delta_x = \Delta_x \cdot \theta_x \psi_x \Delta_x + \theta_x \Delta_x^2,$$ whence $$\theta_x \Delta_x^2 = \Delta_x^2 \cdot \theta_x \psi_x^2 + 2 \Delta_x \cdot \theta_x \psi_x \Delta_x + \theta_x \Delta_x^2.$$ generally suppose $$\theta_x \Delta_x^{n-1} = \Delta_x^{n-1} \cdot \theta_x \psi_x^{n-1} + (n-1) \Delta_x^{n-2} \cdot \theta_x \psi_x^{n-2} \Delta_x$$ $$+ \frac{(n-1)(n-2)}{1 \cdot 2} \cdot \Delta_x^{n-3} \cdot \theta_x \psi_x^{n-3} \Delta_x^2.$$ Now if we write $\theta_x \psi_x^m$ for $\theta_x$ in the fundamental formula, we have $$\theta_x \psi_x^m \Delta_x = \Delta_x \cdot \theta_x \psi_x^m + \theta_x \psi_x^{m-1} \Delta_x$$ each term when we put for $m, n - 1, n - 2, \&c.$ successively, will thus be divided into two, which being placed in two distinct lines will give $$\theta_x \Delta_x^n = \Delta_x^n \cdot \theta_x \psi_x^n + (n-1) \Delta_x^{n-1} \cdot \theta_x \psi_x^{n-1} \Delta_x + \frac{(n-1)(n-2)}{1 \cdot 2} \cdot \Delta_x^{n-2} \cdot \theta_x \psi_x^{n-2} \Delta_x^2 + \&c.$$ $$+ \Delta_x^{n-1} \cdot \theta_x \psi_x^{n-1} \Delta_x + \frac{(n-1) \cdot 2}{1 \cdot 2} \cdot \Delta_x^{n-2} \cdot \theta_x \psi_x^{n-2} \Delta_x^2 + \&c.$$ $$= \Delta_x^n \theta_x \psi_x^n + n \Delta_x^{n-1} \cdot \theta_x \psi_x^{n-1} \Delta_x + \frac{n(n-1)}{1 \cdot 2} \cdot \Delta_x^{n-2} \cdot \theta_x \psi_x^{n-2} \Delta_x^2 + \&c.$$ which is the general formula sought for. Divide now by $h^n$ and observe that $\frac{\Delta_x}{h}$ becomes $d_x$ and $\psi_x$ becomes unity as a multiplier when $h = 0$, hence the third general formula $$\theta_x d_x^n = d_x^n \theta_x + n d_x^{n-1} \cdot \theta_x d_x + \frac{n(n-1)}{1 \cdot 2} \cdot d_x^{n-2} \cdot \theta_x d_x^2 + \&c.$$ which when $\theta_x$ represents quantity is the theorem commonly called Leibnitz's. 17. We next proceed to investigate the formulæ for negative indices. First since $\psi_x^{-1}$ denotes simply the changing $x$ into $x - h$, we may write $\psi_x^{-1}$ for $\psi_x$ in the first formula. Therefore $$\theta_x \psi_x^{-1} = \psi_x^{-1} \cdot \theta_x \psi_x^{-1}$$ more generally \[ \theta_x \psi_x^{-n} = \psi_x^{-n} \cdot \theta_x \psi_x^{-n}. \] Secondly, since \[ \theta_x \Delta_x = \Delta_x \theta_x \psi_x + \theta_x \Delta_x \] therefore \[ \Delta_x^{-1} \theta_x \Delta_x = \theta_x \psi_x + \Delta_x^{-1} \theta_x \Delta_x. \] Put now \( \theta_x \psi_x^{-1} \) for \( \theta_x \), and therefore \( \theta_x \) for \( \theta_x \psi_x \); hence \[ \Delta_x^{-1} \theta_x \psi_x^{-1} \Delta_x = \theta_x + \Delta_x^{-1} \cdot \theta_x \psi_x^{-1} \Delta_x. \] therefore \[ \theta_x \Delta_x^{-1} = \Delta_x^{-1} \cdot \theta_x \psi_x^{-1} - \Delta_x^{-1} \cdot \theta_x \psi_x^{-1} \Delta_x \cdot \Delta_x^{-1}. 
\] Put \( \theta_x \psi_x^{-1} \) for \( \theta_x \), thence we have \[ \theta_x \Delta_x^{-1} = \Delta_x^{-1} \cdot \theta_x \psi_x^{-1} - \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} \Delta_x + \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} \Delta_x^2 \cdot \Delta_x^{-1}, \] or, if we continue this process indefinitely, \[ \theta_x \Delta_x^{-1} = \Delta_x^{-1} \cdot \theta_x \psi_x^{-1} - \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} \Delta_x + \Delta_x^{-3} \cdot \theta_x \psi_x^{-3} \Delta_x^2 - &c. \] which is the same as the general formula for \( \theta_x \Delta_x^n \) when \( n = -1 \). Again \[ \theta_x \Delta_x^{-2} = \Delta_x^{-1} \cdot \theta_x \psi_x^{-1} \Delta_x^{-1} - \Delta_x^{-1} \cdot \theta_x \psi_x^{-1} \Delta_x \cdot \Delta_x^{-2}, \] but \[ \Delta_x^{-1} \theta_x \psi_x^{-1} \Delta_x^{-1} = \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} - \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} \Delta_x \cdot \Delta_x^{-1}, \] and \[ \Delta_x^{-1} \cdot \theta_x \psi_x^{-1} \Delta_x \cdot \Delta_x^{-1} = \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} \Delta_x - \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} \Delta_x^2 \cdot \Delta_x^{-1}. \] Hence \[ \theta_x \Delta_x^{-2} = \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} - 2 \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} \Delta_x \cdot \Delta_x^{-1} + \Delta_x^{-2} \cdot \theta_x \psi_x^{-2} \Delta_x^2 \cdot \Delta_x^{-2}; \] and in a similar manner it is easy to prove generally in a terminating series \[ \theta_x \Delta_x^{-n} = \Delta_x^{-n} \cdot \theta_x \psi_x^{-n} - n \Delta_x^{-n} \cdot \theta_x \psi_x^{-n} \Delta_x \cdot \Delta_x^{-1} + \frac{n(n-1)}{1 \cdot 2} \cdot \Delta_x^{-n} \cdot \theta_x \psi_x^{-n} \Delta_x^2 \cdot \Delta_x^{-2} - &c. \] or in an infinite series, \[ \theta_x \Delta_x^{-n} = \Delta_x^{-n} \cdot \theta_x \psi_x^{-n} - n \Delta_x^{-(n+1)} \cdot \theta_x \psi_x^{-(n+1)} \Delta_x + \frac{n(n+1)}{1 \cdot 2} \Delta_x^{-(n+2)} \cdot \theta_x \psi_x^{-(n+2)} \Delta_x^2 - &c. \] Thirdly, divide \( \Delta_x \) by \( h \), and then put \( h = 0 \), whence \[ \theta_x d_x^{-1} = d_x^{-1} \cdot \theta_x - d_x^{-1} \cdot \theta_x d_x \cdot d_x^{-1} \] \[ = d_x^{-1} \cdot \theta_x - d_x^{-2} \cdot \theta_x d_x + d_x^{-3} \cdot \theta_x d_x^2 - &c. \] \[ \theta_x d_x^{-n} = d_x^{-n} \cdot \theta_x - n d_x^{-n} \cdot \theta_x d_x \cdot d_x^{-1} + \frac{n(n-1)}{1 \cdot 2} \cdot d_x^{-n} \cdot \theta_x d_x^2 \cdot d_x^{-2} - &c. \] \[ = d_x^{-n} \cdot \theta_x - n d_x^{-(n+1)} \cdot \theta_x d_x + \frac{n(n+1)}{1 \cdot 2} \cdot d_x^{-(n+2)} \cdot \theta_x d_x^2 - &c. \] which formulæ admit of most extensive applications, whether \( \theta_x \) be regarded as a quantity or a fixed operation.
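The transmutation formulae of § 5 reduce, when \(\theta_x\) is a multiplier, to familiar results of the differential calculus, and the sketch below checks three of them with sympy: the first formula of art. 15, the theorem of Leibnitz of art. 16 for n = 4, and the terminating series for \(\theta_x d_x^{-1}\) of art. 17 (which is repeated integration by parts). The subject and multiplier are arbitrary choices, and compositions are read from left to right as in the text.

```python
# Checks of the transmutation formulae of Section 5 for a multiplier theta_x.
import sympy as sp

x = sp.symbols('x')
u = sp.sin(x)            # the subject
t = x**3                 # theta_x, a multiplier of low degree (so the last series terminates)

# art. 15:  [u] theta_x d_x = [u](d_x theta_x + theta_x')
print(sp.simplify(sp.diff(t*u, x) - (sp.diff(u, x)*t + u*sp.diff(t, x))))        # 0

# art. 16 (Leibnitz), n = 4:
# d^n(t*u)/dx^n = sum_k C(n,k) * d^(n-k)u/dx^(n-k) * d^k t/dx^k
n = 4
rhs = sum(sp.binomial(n, k)*sp.diff(u, x, n - k)*sp.diff(t, x, k) for k in range(n + 1))
print(sp.simplify(sp.diff(t*u, x, n) - rhs))                                     # 0

# art. 17, n = -1:  integral of t*u = t*I1(u) - t'*I2(u) + t''*I3(u) - ...,
# the I's being repeated indefinite integrals of u; here t'''' = 0 ends the series.
I = [u]
for _ in range(5):
    I.append(sp.integrate(I[-1], x))
lhs = sp.integrate(t*u, x)
rhs = sum((-1)**k * sp.diff(t, x, k) * I[k + 1] for k in range(4))
print(sp.simplify(sp.diff(lhs - rhs, x)))    # 0 (the two agree up to a constant appendage)
```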
§ 6. 18. Before proceeding further in the search of the fundamental formulæ for the transformation of operations, we shall exemplify the theory which precedes by inverting binomial operations and applying the results to some simple cases. Let $\theta$, $\theta'$ denote two linear operations relatively fixed or free, and let us seek the value of $(\theta - \theta')^{-1}$. Put $(\theta - \theta')^{-1} = \theta^{-1} + \eta_1$; the latter being the difference of two linear operations must itself be linear. Hence $$1 = (\theta^{-1} + \eta_1)(\theta - \theta') = 1 - \theta^{-1}\theta' + \eta_1(\theta - \theta')$$ therefore $$\eta_1(\theta - \theta') = \theta^{-1}\theta'.$$ Similarly put $$\eta_1 = \theta^{-1}\theta'\theta^{-1} + \eta_2$$ which gives $$\eta_1(\theta - \theta') = \theta^{-1}\theta' - (\theta^{-1}\theta')^2 + \eta_2(\theta - \theta')$$ whence $$\eta_2(\theta - \theta') = (\theta^{-1}\theta')^2$$ so again put $$\eta_2 = (\theta^{-1}\theta')^2\theta^{-1} + \eta_3$$ $$\eta_3 = (\theta^{-1}\theta')^3\theta^{-1} + \eta_4$$ &c. = &c. We thus obtain $$(\theta - \theta')^{-1} = \theta^{-1} + (\theta^{-1}\theta')\theta^{-1} + (\theta^{-1}\theta')^2\theta^{-1} + \ldots + (\theta^{-1}\theta')^{n-1}\theta^{-1} + \eta_n$$ where $\eta_n$ represents the compound operation $(\theta^{-1}\theta')^n(\theta - \theta')^{-1}$. The same formula continued to infinity would be obtained by first putting $\theta^{-1}(1 - \theta^{-1}\theta')^{-1}$ for $(\theta - \theta')^{-1}$; and since the operations represented respectively by 1 and $\theta^{-1}\theta'$ are relatively free, we should have by art. 11. $$(1 - \theta^{-1}\theta')^{-1} = 1 + \theta^{-1}\theta' + (\theta^{-1}\theta')^2 + \text{&c. ad infin.}$$ When $\theta$, $\theta'$ are relatively free the theorem becomes $$(\theta - \theta')^{-1} = \theta^{-1} + \theta^{-2}\theta' + \theta^{-3}\theta'^2 + \theta^{-4}\theta'^3 + \ldots \theta^{-n}\theta'^{n-1} + \theta^{-n}\theta'^n(\theta - \theta')^{-1}.$$ 19. For a first example suppose $\Delta_x$ to denote the finite difference, on the supposition that by the operation $\psi_x$ the quantity $x$ is changed into $x + h$. Then $\Delta_x^{-1}$ is the finite integral, and in the usual notation of analysts is denoted by $\Sigma_x$; we have therefore $$[f(x)]\Sigma_x = [f(x)](\psi_x - 1)^{-1} = [f(x)]\{\psi_x^{-1} + \psi_x^{-2} + \psi_x^{-3} + \ldots + \psi_x^{-(n-1)} + \psi_x^{-n} + \psi_x^{-n}\Sigma_x\}$$ $$= f(x - h) + f(x - 2h) + f(x - 3h) + \ldots + f\{x - (n - 1)h\} + f(x - nh) + \Sigma_x f(x - nh);$$ where it may be remarked that if \( \frac{x}{h} \) be an integer, the final terms of the series would be \( \ldots f(2h) + f(h) + f(0) \), at any of which, if we suppose the series to stop, its finite difference would be obviously \( f(x) \).
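The first example of art. 19 admits a direct numerical check: the finite difference of the partial series \(f(x-h) + \ldots + f(x-nh)\) restores \(f(x)\) up to the term carried by the remainder \(\psi_x^{-n}\Sigma_x\). (Plain Python; f, h, n and the sample point are ours.)

```python
# Numerical check of the first example of art. 19.
f = lambda x: x**2 + 1.0
h, n, x0 = 0.5, 6, 4.0

S = lambda x: sum(f(x - k*h) for k in range(1, n + 1))   # f(x-h) + ... + f(x-nh)

# finite difference of the partial series: S(x+h) - S(x) = f(x) - f(x - n h)
print(S(x0 + h) - S(x0), f(x0) - f(x0 - n*h))            # the two values agree
```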
For the next example suppose the subject to be \( f(x+y) \), and that by the operation \( \psi_x \), \( x \) receives an increment \( h \), and \( y \) the same increment by the operation \( \psi_y \), then it is obvious that \([f(x+y)](\psi_x - \psi_y) = 0\), therefore \([f(x+y)](\Delta_y - \Delta_x) = 0\); hence \( f(x+y) \) must be included in the general value of \([0](\Delta_y - \Delta_x)^{-1}\). Now \[ [0](\Delta_y - \Delta_x)^{-1} = [0]\{\Delta_y^{-1} + \Delta_y^{-2} \cdot \Delta_x + \Delta_y^{-3} \Delta_x^2 + \Delta_y^{-4} \Delta_x^3 + &c.\} \] and also \([0]\Delta_y^{-1} = \varphi(x)\) an arbitrary function of \( x \); \[ [0]\Delta_y^{-2} = \varphi(x) \cdot \frac{y}{h}, \] omitting the appendage, which being a function of \( x \) would not vanish with \( y \), and which in the succeeding terms, if included, would only generate another series similar to that now formed from \( \varphi(x) \), and consequently in the present case not add to its generality. Again, \[ [0]\Delta_y^{-3} = \varphi(x) \cdot \frac{y(y-h)}{1 \cdot 2 \cdot h^2}, \text{ for } [y(y-h)]\Delta_y = (y+h)y - y(y-h) = 2hy. \] Similarly \[ [0]\Delta_y^{-4} = \varphi(x) \cdot \frac{y(y-h)(y-2h)}{1 \cdot 2 \cdot 3 \cdot h^3}, &c. \] Hence \[ [0](\Delta_y - \Delta_x)^{-1} = \varphi(x) + \frac{y}{h} \cdot \Delta \varphi(x) + \frac{y(y-h)}{1 \cdot 2 \cdot h^2} \cdot \Delta^2 \varphi(x) \] \[ + \frac{y(y-h)(y-2h)}{1 \cdot 2 \cdot 3 \cdot h^3} \cdot \Delta^3 \varphi(x) + &c.; \] and since \( f(x+y) \) is included in this general expression, the particular form to be assigned to the arbitrary function \( \varphi(x) \) is known by making \( y = 0 \), which gives \( \varphi(x) = f(x) \); in this formula the ordinary notation has been employed. Suppose, for instance, \( f(x) = a^x \) and \( h = 1 \), then \( \Delta^n f(x) = a^x(a-1)^n \); therefore \[ a^{x+y} = a^x + y \cdot a^x \cdot (a-1) + \frac{y(y-1)}{1 \cdot 2} a^x (a-1)^2 + \frac{y(y-1)(y-2)}{1 \cdot 2 \cdot 3} a^x (a-1)^3, &c. \] or putting \( a = 1 + b \), and expunging \( a^x \) from both sides, \[ (1+b)^y = 1 + y \cdot b + \frac{y(y-1)}{1 \cdot 2} \cdot b^2 + \frac{y(y-1)(y-2)}{1 \cdot 2 \cdot 3} \cdot b^3, &c. \] which is the binomial theorem, whatever may be the value of \( y \). For the next example let us take the same subject, \( f(x+y) \), and let \( d_x, d_y \) denote the differential coefficients relative to \( x \) and \( y \); then since \[ [f(x+y)]\frac{\Delta_y - \Delta_x}{h} = 0 \] put \( h = 0 \) and \( \frac{\Delta_x}{h} \) becomes \( d_x \), \( \frac{\Delta_y}{h} \) being similarly \( d_y \); therefore \[ [f(x+y)] (d_y - d_x) = 0; \] hence \( f(x+y) \) must be included in the general value of \([0] (d_y - d_x)^{-1}\). But \[ [0] (d_y - d_x)^{-1} = [0] (d_y^{-1} + d_y^{-2} d_x + d_y^{-3} d_x^2 + d_y^{-4} d_x^3 + &c.) \] Now \[ [0] d_y^{-1} = \varphi(x) \] an arbitrary function of \( x \); therefore \[ [0] d_y^{-2} = \varphi(x) \cdot y \] omitting the appendage for the reason abovementioned; also \[ [0] d_y^{-3} = \varphi(x) \cdot \frac{y^2}{1 \cdot 2} \] since \([y^2] d_y = 2y \); and \[ [0] d_y^{-4} = \varphi(x) \cdot \frac{y^3}{1 \cdot 2 \cdot 3} \] substituting we have \[ [0] (d_y - d_x)^{-1} = \varphi(x) + y \frac{d \varphi(x)}{dx} + \frac{y^2}{1 \cdot 2} \cdot \frac{d^2 \varphi(x)}{dx^2} + \frac{y^3}{1 \cdot 2 \cdot 3} \cdot \frac{d^3 \varphi(x)}{dx^3} + &c. \] employing the common notation; the form of \( \varphi(x) \) is determined by making \( y = 0 \), which gives \( \varphi(x) = f(x) \); therefore \[ f(x+y) = f(x) + y f'(x) + \frac{y^2}{1 \cdot 2} \cdot f''(x) + &c. \] where \( f'(x) \), \( f''(x) \), &c. are the successive differential coefficients of \( f(x) \); this is Taylor's expansion. If we put \( f(x) = a^x \) and for the limit of \( \frac{a^h - 1}{h} \) write \( \log(a) \), we get from this \[ a^y = 1 + y \log(a) + \frac{y^2}{1 \cdot 2} \cdot (\log a)^2 + &c. \] These examples suffice to show the mode and use of the inversion of binomial operations. § 7. 20. To return to the general theory, suppose \( \theta, i, z \) to represent three operations connected by the equation \( \theta i = iz \), where the subject is omitted, the identity being supposed general; the symbol \( i \) represents an operation which may be said to be intermediate to those designated by \( \theta, z \). If either of the extreme operations \( \theta, z \) be given, and the intermediate \( i \) be also given, the other extreme may be readily found, for \[ \theta = izi^{-1} \text{ and } z = i^{-1} \theta i. \] A remarkable property of intermediate operations is that they are also intermediate between any operations which are the same functions of the extremes.
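Before proceeding with the intermediate operations of § 7, the binomial theorem of art. 19, valid "whatever may be the value of y", can be checked numerically for a fractional index; the short sketch below compares the truncated series with \((1 + b)^y\) directly (the particular y, b and number of terms are arbitrary).

```python
# Numerical check of the binomial theorem of art. 19 for a fractional index.
def binom_series(y, b, terms=40):
    coeff, total = 1.0, 0.0
    for k in range(terms):
        total += coeff * b**k                # y(y-1)...(y-k+1)/k! * b^k
        coeff *= (y - k) / (k + 1)
    return total

y, b = 2.5, 0.1
print(binom_series(y, b), (1 + b)**y)        # the two values agree to high accuracy
```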
For let \[ \theta \cdot i = i \cdot z \] then performing the operation \( z \) \[ \theta \cdot z = i \cdot z^2 \] put now for \( i \cdot z \) its equivalent operation \( \theta \cdot i \), and we have \[ \theta^2 \cdot i = i \cdot z^2. \] Similarly if we suppose \[ \theta^{n-1} \cdot i = i \cdot z^{n-1} \] then \[ \theta^{n-1} \cdot z = i \cdot z^n \] but \[ i \cdot z = \theta \cdot i; \] therefore \[ \theta^n \cdot i = i \cdot z^n. \] Again, suppose the subject in the last equation to be one on which the operation \( \theta^{-n} \) has been performed, then that equation becomes \[ i = \theta^{-n} \cdot i \cdot z^n, \] or \[ \theta^{-n} \cdot i = i \cdot z^{-n}. \] Again, suppose \( K \) an operation satisfying the equation \[ \frac{n}{\theta^m} \cdot i = i \cdot K. \] We have by the parts of the proof already given in this article, \[ \theta^n \cdot i = i \cdot K^m = i \cdot z^n, \] or \[ K^m = z^n, K = z^{\frac{n}{m}}; \] hence \[ \frac{n}{\theta^m} \cdot i = i \cdot z^{\frac{n}{m}}. \] From these premised equations it follows, that if \( f(\theta) \cdot f(z) \) represent the aggregates of any similar powers of the operations \( \theta, z \), with the same coefficients, we must have generally \[ f(\theta) \cdot i = i \cdot f(z). \] By this theorem, if \( f(\theta) \) be known, \( f(z) \) can be found, supposing that we know \( i \) the operation intermediate to \( \theta, z \). 21. We shall now apply this theorem to cases where \( \theta, i \) are given, and therefore \( z \) known, as above shown. Let one extreme \( \theta \) represent the operation of differentiating relatively to \( x \), and the intermediate \( i \) that of multiplying by \( e^{ax} \), then we have \[ d_x e^{ax} = e^{ax} \cdot z \] therefore \[ \varepsilon^{-a} d_x \varepsilon^{ax} = z; \] but by § 5. \[ \theta_x d_x = d_x \theta_x + \overline{\theta_x d_x}, \] therefore \[ \varepsilon^{-a} d_x = d_x \varepsilon^{-a} - a \varepsilon^{-a} = (d_x - a) \varepsilon^{-a}. \] Hence \[ d_x - a = z \text{ the extreme required.} \] Now if \( \theta \iota = \iota z \), it has been shown that \( f(\theta) \cdot \iota = \iota f(z) \). In this case therefore \[ f(d_x) \cdot \varepsilon^{ax} = \varepsilon^{ax} f(d_x - a) \] To find the intermediate operation between \( d_x + b \) and \( d_x + c \); \( b \) and \( c \) not containing \( x \). Put \( f(d_x) = d_x + b \) in the last identity, and \( b - a = c \) we then have \[ (d_x + b) \cdot \varepsilon^{(b-c)x} = \varepsilon^{(b-c)x} (d_x + c) \] \[ f(d_x + b) \cdot \varepsilon^{(b-c)x} = \varepsilon^{(b-c)x} f(d_x + c). \] 22. Suppose the intermediate operation \( \iota \) to denote \( \varepsilon^P \) considered as a multiplier, \( P \) being a function of \( x \), of which the differential coefficient is \( P \), and \( \theta \) to represent \( d_x \) as before, it is required to find \( z \). Since \[ d_x \varepsilon^P = \varepsilon^P \cdot z; \] therefore \[ z = \varepsilon^{-P} \cdot d_x \cdot \varepsilon^P. \] But \[ \varepsilon^{-P} d_x = d_x \varepsilon^{-P} + \overline{\varepsilon^{-P}} d_x = d_x \varepsilon^{-P} - P \varepsilon^{-P} = (d_x - P) \varepsilon^{-P}. \] Hence \[ z = d_x - P. \] Corollary; \[ f(d_x) \cdot \varepsilon^P = \varepsilon^P f(d_x - P). 
\] And if \( Q \) be a function, of which \( Q \) is the differential coefficient, we have in like manner \[ f(d_x) \cdot \varepsilon^Q = \varepsilon^Q f(d_x - Q); \] hence \[ \varepsilon^P f(d_x - P) \cdot \varepsilon^{-P} = \varepsilon^Q f(d_x - Q) \varepsilon^{-Q}, \] or \[ f(d_x - P) \cdot \varepsilon^Q \varepsilon^{-P} = \varepsilon^Q \varepsilon^{-P} f(d_x - Q), \] that is, \( \varepsilon^Q \varepsilon^{-P} \) is the intermediate to the operations \( d_x - P, d_x - Q \). 23. Let \( \iota \) now signify the operation of changing \( y \) into \( y + \varphi(x) \), \( \theta \) being, as before, the operation of differentiating relative to \( x \), and the subject being a function of \( x \) and \( y \), and \( \varphi(x) \) the quantity, of which \( \varphi(x) \) is the differential coefficient. Here \[ z = \iota^{-1} d_x \iota. \] Now \( \iota^{-1} \) is the operation which changes \( y \) into \( y - \varphi(x) \), and therefore, as in § 5, \[ \iota^{-1} d_x = d_x \iota^{-1} - \varphi(x) \iota^{-1} d_y; \] therefore \[ z = d_x - \varphi(x) \cdot d_y. \] Corollary: \[ f(d_x) \iota = \iota f(d_x - \varphi(x) d_y); \] and as in the last article, if \( \iota' \) be the operation of changing \( y \) into \( y + \int_x \{ \varphi(x) - F(x) \} \). Then \[ f(d_x - F(x) d_y) \iota' = \iota' f(d_x - \varphi(x) d_y). \] The example last given is capable of being extended to any number of variables in the subject of the operations; if that subject contain \( x, y, z, u, \&c., \) and \( \iota \) denote the operation of changing \( y \) into \( y + \varphi(x), z \) into \( z + \omega(x), u \) into \( u + \Omega(x), \&c., \) where \( \varphi(x) \omega(x) \Omega(x), \&c. \) represent any functions of \( x, \) then will \( \iota \) be the intermediate operation, of which \( d_x \) and \( d_x - \varphi'(x) \cdot d_y - \omega'(x) d_z - \Omega'(x) \cdot d_u - \&c. \) form the extremes, the accents being used to represent the differential coefficients of the functions over which they are placed. 24. Let one of the extremes, \( \theta, \) be now supposed to represent the operation \( \Delta_x \) of finite differences, on the supposition that \( h \) is the increment of \( x. \) Let \( \iota \) represent a multiplier \( P_x, \) constant or variable with \( x. \) Then \[ z = P_x^{-1} \Delta_x P_x. \] But by § 5, \[ P_x^{-1} \Delta_x = \Delta_x \cdot P_{x+h}^{-1} + P_x^{-1} \Delta_x; \] thence \[ z = \Delta_x \cdot P_{x+h}^{-1} + P_x^{-1} \Delta_x \cdot P_x; \] and then \[ f(\Delta_x) \cdot P_x = P_x f\left( \Delta_x \cdot P_{x+h}^{-1} + P_x^{-1} \Delta_x \cdot P_x \right). \] Thus, if \[ P_x = a^{-\frac{x}{h}} \therefore P_{x+h} = a^{-1} \cdot a^{-\frac{x}{h}}, \quad P_x^{-1} \Delta_x = (a-1) \cdot a^{\frac{x}{h}}, \] we obtain \[ f(\Delta_x) \cdot a^{-\frac{x}{h}} = a^{-\frac{x}{h}} f(a \Delta_x + a - 1). \] Let \( \psi_x \) denote the operation of changing \( x \) into \( x + h, \) then \( \psi_x = \Delta_x + 1, \) and the general formula of this article becomes \[ f(\Delta_x) P_x = P_x f(\psi_x \cdot P_{x+h}^{-1} - 1). \] Again, if the subject be a function of \( x \) and \( y, \) let \( \iota \) represent the operation of changing \( y \) into \( y + \varphi(x) \) and \( \iota' \) that which would change \( y \) into \( y - \Delta_x \{ \varphi(x) \}, \) where \( \Delta_x \varphi(x) \) according to the common notation stands for \( \varphi(x + h) - \varphi(x), \) then by a similar procedure we shall obtain the identity \[ f(\Delta_x) \iota = \iota f(\psi_x \iota' - 1). \] 25. 
In the identity \[ f(\Delta_x) \cdot a^{-\frac{x}{h}} = a^{-\frac{x}{h}} f\left( \Delta_x + \frac{a-1}{a} \right) \text{ put } \frac{a-1}{a} = -b \therefore a = \frac{1}{1+b} \] hence \[ f(\Delta_x) (1 + b)^{\frac{x}{h}} = (1 + b)^{\frac{x}{h}} f(\Delta_x - b). \] Now by the nature of an identity both sides in this expression would be alike, if expanded according to the powers of \( b \), the coefficients of like powers must therefore be equal, and in consequence the identity must remain if \( b \) instead of being a multiplier represent any operation which is free, relative to \( \Delta_x \), thus if \( b = \Delta_y \), \( 1 + b = \psi_y \), and therefore \[ f(\Delta_x) \cdot \psi_y^{\frac{x}{h}} = \psi_y^{\frac{x}{h}} f(\Delta_x - \Delta_y) \] As a particular example of this, suppose the subject to be \( \varphi(y) \) independent of \( x \), and \( f(\Delta_x) \) merely to stand for \( \Delta_x \), then \[ [\varphi(y)] (\Delta_x) = 0, \] the general identity therefore becomes \[ [\varphi(y)] \psi_y^{\frac{x}{h}} (\Delta_x - \Delta_y) = 0, \] or \[ [\varphi(y + \frac{kx}{h})] (\Delta_x - \Delta_y) = 0, \] \( k \) being the increment of \( y \); now \[ [\varphi(y + \frac{kx}{h})] \psi_x = \varphi(y + k \cdot \frac{x + h}{h}) = \varphi(y + k + \frac{kx}{h}) = [\varphi(y + \frac{kx}{h})] \psi_y \] which verifies the result deduced from the general identity. 26. Thus when the intermediate operation and one extreme are given the other extreme may generally be found, but it seems more difficult to discover the intermediate operation when both extremes are known; here follow some examples of the latter process. Let \( \theta \) represent any linear operation, \( d_x \) that of taking the differential coefficient of the subject relative to \( x \), and \( i \) the required intermediate. Then \[ \theta i = i d_x = d_x i + i d_x \] perform on both sides the inverse operation \( i^{-1} \). Hence \[ \theta = d_x + i d_x i^{-1} \] or \[ i d_x i^{-1} = \theta - d_x. \] Suppose \[ i = e^{f(\theta - d_x)} = 1 + f(\theta - d_x) + \left\{ \frac{f(\theta - d_x)}{1 \cdot 2} \right\}^2 &c. \] then by § 3, \[ \iota^{-1} = e^{-f(\theta - d_x)} = 1 - f(\theta - d_x) + \frac{f(\theta - d_x)^2}{1 \cdot 2} - \&c., \] and \[ \overline{\iota d_x} = e^{f(\theta - d_x)} \cdot f(\theta - d_x) \overline{d_x} \] also \[ (\theta - d_x) \iota = (\theta - d_x) e^{f(\theta - d_x)}. \] Now \( \theta - d_x \) is free relative to its own functions, hence \[ \theta - d_x = f(\theta - d_x) \overline{d_x}; \] therefore \[ f(\theta - d_x) = (\theta - d_x) \cdot d_x^{-1}. \] from whence \( \iota \) is known. For a second example, suppose \( \psi \) the operation of changing \( x \) into \( x + h \), \( \Delta \) that of taking the finite difference on the same hypothesis, \( \theta \) any linear operation, then \( \iota \) is required to be such that \[ (\psi - \theta) \iota = \iota \Delta \theta. \] By art. 15, \[ \iota \Delta = \Delta \overline{\psi} + \overline{\iota \Delta} \] \[ = \psi \overline{\psi} - \overline{\psi} + \overline{\iota \Delta} \] \[ = (\psi - \iota \overline{\psi}^{-1}) \overline{\psi}. \] Substituting we obtain \[ (\psi - \theta) \iota = (\psi - \iota \overline{\psi}^{-1}) \overline{\psi} \theta. 
For a second example, suppose \( \psi \) the operation of changing \( x \) into \( x + h \), \( \Delta \) that of taking the finite difference on the same hypothesis, and \( \theta \) any linear operation; then \( \iota \) is required to be such that \[ (\psi - \theta)\, \iota = \iota\, \Delta \theta. \] By art. 15, \[ \iota \Delta = \Delta\, \overline{\iota \psi} + \overline{\iota \Delta} = \psi\, \overline{\iota \psi} - \overline{\iota \psi} + \overline{\iota \Delta} = (\psi - \iota\, \overline{\iota \psi}^{\,-1})\, \overline{\iota \psi}. \] Substituting we obtain \[ (\psi - \theta)\, \iota = (\psi - \iota\, \overline{\iota \psi}^{\,-1})\, \overline{\iota \psi}\, \theta. \] To satisfy this identity, suppose \[ \iota\, \overline{\iota \psi}^{\,-1} = \theta; \] now \( \overline{\iota \psi} \) is free relative to \( \iota \), hence \[ \overline{\iota \psi}^{\,-1} \iota = \theta; \] prefix to each side the operation \( \overline{\iota \psi} \), which only alters the subject, and the subject is perfectly general; therefore \[ \iota = \overline{\iota \psi} \cdot \theta; \] hence the preceding supposition fully satisfies the identity, and we have this theorem: if \( \iota \) be determined such that \[ \iota\, \overline{\iota \psi}^{\,-1} = \theta, \] then shall \[ (\psi - \theta)\, \iota = \iota\, \Delta \theta, \] and \[ f(\psi - \theta)\, \iota = \iota\, f(\Delta \theta). \] As a particular case, suppose \( \theta \) to be a multiplier \( P_x \); then \( \iota \) will be another multiplier \( v_x \), such that \[ v_x \cdot (v_{x+h})^{-1} = P_x, \] or \[ [\log v_x]\, \Delta_x = -\log P_x, \] from whence \( v_x \) or \( \iota \) is determinable.

§ 8. 27. In this section we shall give some examples of the use of the formulæ investigated in the last section. In general \[ d_x\, \varepsilon^{ax} = \varepsilon^{ax} (d_x - a), \] therefore \[ d_x - a = \varepsilon^{-ax} d_x\, \varepsilon^{ax}. \] Put for \( a \) in this formula the terms of the series \( 0, h, 2h, \ldots, (n-1)h \), and compound all the binomials which result from these substitutions; hence \[ d_x (d_x - h)(d_x - 2h) \ldots (d_x - (n-1)h) = d_x\, \varepsilon^{-hx} d_x\, \varepsilon^{hx} \cdot \varepsilon^{-2hx} d_x\, \varepsilon^{2hx} \ldots \varepsilon^{-(n-1)hx} d_x\, \varepsilon^{(n-1)hx}. \] Now \[ d_x\, \varepsilon^{-hx} = h\, d_y, \text{ putting } y = \varepsilon^{hx}; \] therefore \[ d_x (d_x - h)(d_x - 2h) \ldots (d_x - (n-1)h) = d_y^n \cdot h^n \varepsilon^{nhx}. \] Example. The expansions of \( (a + b)^n \), viz. \[ a^n + n a^{n-1} b + \frac{n(n-1)}{1 \cdot 2} a^{n-2} b^2 + \&c. \] and \[ 1 + n \log(a + b) + \frac{\{ n \log(a + b) \}^2}{1 \cdot 2} + \&c., \] being identical when \( n \) is a quantity, ought to remain so when \( n \) is a linear operation; to verify which, suppose \( n = d_x \). Now it has been shown that \( [f(x)]\, \varepsilon^{h d_x} = f(x + h) \); hence \( [f(x)]\, (a + b)^{d_x} = f\{ x + \log(a + b) \} \). But \[ (a + b)^{d_x} = a^{d_x} + d_x\, a^{d_x-1} b + \frac{d_x(d_x - 1)}{1 \cdot 2}\, a^{d_x-2} b^2 + \frac{d_x(d_x - 1)(d_x - 2)}{1 \cdot 2 \cdot 3}\, a^{d_x-3} b^3 + \&c. \] \[ = a^{d_x} \left\{ 1 + d_x \cdot \frac{b}{a} + \frac{d_x(d_x - 1)}{1 \cdot 2} \left( \frac{b}{a} \right)^2 + \frac{d_x(d_x - 1)(d_x - 2)}{1 \cdot 2 \cdot 3} \left( \frac{b}{a} \right)^3 + \&c. \right\}. \] But by this article \[ d_x (d_x - 1)(d_x - 2) \ldots (d_x - n + 1) = d_y^n \cdot y^n, \text{ if } y = \varepsilon^x; \] therefore \[ (a + b)^{d_x} = a^{d_x} \left\{ 1 + d_y \cdot \frac{by}{a} + \frac{d_y^2}{1 \cdot 2} \left( \frac{by}{a} \right)^2 + \frac{d_y^3}{1 \cdot 2 \cdot 3} \left( \frac{by}{a} \right)^3 + \&c. \right\}, \] and now introducing the subject \( f(x) \), we get \[ f\{ x + \log(a + b) \} = f(x + \log a) + \frac{by}{a} \cdot \frac{d\, f(x + \log a)}{dy} + \frac{b^2 y^2}{1 \cdot 2\; a^2} \cdot \frac{d^2 f(x + \log a)}{dy^2} + \&c., \] which series, if we substitute for \( x \) its value \( \log y \), and put \( f(\log y) = \varphi(y) \), becomes \[ \varphi(ay + by) = \varphi(ay) + \frac{by}{a} \cdot \frac{d\varphi(ay)}{dy} + \frac{b^2 y^2}{1 \cdot 2} \cdot \frac{d^2 \varphi(ay)}{a^2\, dy^2} + \&c., \] which, being also deducible from Taylor's theorem, gives the required verification.
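The identity on which this example turns may itself be verified directly in the lowest case \( n = 2 \) (a check added here for illustration only). With \( y = \varepsilon^{hx} \), for any subject \( u \), \[ \frac{du}{dy} = \frac{1}{hy} \frac{du}{dx}, \qquad \frac{d^2u}{dy^2} = \frac{1}{hy} \frac{d}{dx}\left( \frac{1}{hy} \frac{du}{dx} \right) = \frac{1}{h^2 y^2} \left( \frac{d^2u}{dx^2} - h \frac{du}{dx} \right); \] hence \[ [u]\, d_y^2 \cdot h^2 \varepsilon^{2hx} = h^2 y^2\, \frac{d^2u}{dy^2} = \frac{d^2u}{dx^2} - h \frac{du}{dx} = [u]\, d_x (d_x - h), \] which is the case \( n = 2 \) of the formula.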
28. Let \( v_1, v_2, v_3, \ldots v_n \) represent functions of \( x \) as multipliers, or other operations fixed relative to \( d_x \); it is required to find the polynomial, arranged according to the powers (usually called orders) of \( d_x \), which shall be equivalent to the compound operation represented by \( v_1 d_x\, v_2 d_x\, v_3 d_x \ldots v_n d_x \). It is easy to see that this polynomial cannot contain the powers of the differential coefficients of any of the multipliers, that is, such as \( \left(\frac{dv_1}{dx}\right)^2 \), &c.; moreover, if we suppose \( v_1 = \varepsilon^{a_1 x} \), then \( \frac{d^n v_1}{dx^n} = a_1^n \cdot \varepsilon^{a_1 x} \); thus the order of the differential coefficients of \( v_1 \) in the required result will be the same as the power of the multiplier \( a_1 \) when we substitute \( \varepsilon^{a_1 x} \) for \( v_1 \); and the same method will apply to discover how \( v_2, v_3 \), &c. are involved; and when \( a_1, a_2 \), &c. do not enter as multipliers we thence know that \( v_1, v_2 \), &c. are themselves multipliers, and not their differential coefficients. This being considered, the question is reduced to find the value of the compound operation \( \varepsilon^{a_1 x} d_x\, \varepsilon^{a_2 x} d_x\, \varepsilon^{a_3 x} d_x \ldots \varepsilon^{a_n x} d_x \) arranged according to the decreasing indices of \( d_x \); and then where we have \( a_1^m\, \varepsilon^{a_1 x} \) if we put \( \frac{d^m v_1}{dx^m} \), and when we have \( a_2^p\, \varepsilon^{a_2 x} \) put \( \frac{d^p v_2}{dx^p} \), &c., we shall obtain more readily the result of the question proposed. Now by the last section \[ \varepsilon^{a_1 x} d_x = (d_x + a_1) \cdot \varepsilon^{a_1 x}; \] therefore \[ \varepsilon^{a_1 x} d_x\, \varepsilon^{a_2 x} d_x = (d_x + a_1) \cdot \varepsilon^{(a_1+a_2)x} d_x = (d_x + a_1)(d_x + a_1 + a_2)\, \varepsilon^{(a_1+a_2)x}. \] Similarly \[ \varepsilon^{a_1 x} d_x\, \varepsilon^{a_2 x} d_x\, \varepsilon^{a_3 x} d_x = (d_x + a_1)(d_x + a_1 + a_2)(d_x + a_1 + a_2 + a_3)\, \varepsilon^{(a_1+a_2+a_3)x}; \] and generally \[ \varepsilon^{a_1 x} d_x\, \varepsilon^{a_2 x} d_x\, \varepsilon^{a_3 x} d_x \ldots \varepsilon^{a_n x} d_x = (d_x + a_1)(d_x + a_1 + a_2) \ldots (d_x + a_1 + a_2 + \ldots + a_n)\, \varepsilon^{(a_1+a_2+\ldots+a_n)x}. \] If therefore we expand, according to the decreasing indices of \( d_x \), the compound operation \[ (d_x + a_1)(d_x + a_1 + a_2) \ldots (d_x + a_1 + a_2 + \ldots + a_n) \cdot \varepsilon^{a_1 x} \varepsilon^{a_2 x} \varepsilon^{a_3 x} \ldots \varepsilon^{a_n x}, \] we shall then have only to put \( v_1 \) for \( \varepsilon^{a_1 x} \), \( v_2 \) for \( \varepsilon^{a_2 x} \), &c., \[ \frac{dv_1}{dx} \text{ for } a_1 \varepsilon^{a_1 x}, \quad \frac{dv_2}{dx} \text{ for } a_2 \varepsilon^{a_2 x}, \text{ \&c.} \] To effect the composition above indicated, let us seek the product (arranged according to the powers of \( x \)) of \[ (x + \alpha_1)(x + \alpha_1 + \alpha_2)(x + \alpha_1 + \alpha_2 + \alpha_3) \ldots (x + \alpha_1 + \alpha_2 + \alpha_3 + \ldots + \alpha_n). \] Represent this product by \[ x^n + A_1 x^{n-1} + A_2 x^{n-2} + \ldots + A_m x^{n-m} + \ldots + A_{n-1} x + A_n, \] the general coefficient \( A_m \) being the sum of products each of which contains \( m \) factors. For \( A_1 \) it is easily seen that its value is \[ n \alpha_1 + (n - 1) \alpha_2 + (n - 2) \alpha_3 + \ldots + 2 \alpha_{n-1} + \alpha_n. \]
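Before proceeding to \( A_2 \), the lowest case may serve to fix the procedure (an illustrative instance added here, and not part of the original text). For \( n = 2 \), \[ \varepsilon^{a_1 x} d_x\, \varepsilon^{a_2 x} d_x = (d_x + a_1)(d_x + a_1 + a_2)\, \varepsilon^{(a_1 + a_2)x} = \left\{ d_x^2 + d_x (2a_1 + a_2) + a_1^2 + a_1 a_2 \right\} \varepsilon^{a_1 x} \varepsilon^{a_2 x}, \] and the substitutions prescribed above give \[ v_1 d_x\, v_2 d_x = d_x^2 \cdot v_1 v_2 + d_x \left( 2 v_2 \frac{dv_1}{dx} + v_1 \frac{dv_2}{dx} \right) + v_2 \frac{d^2 v_1}{dx^2} + \frac{dv_1}{dx} \frac{dv_2}{dx}, \] which is easily verified by performing the operations directly on any subject \( u \); the coefficient \( 2a_1 + a_2 \) of \( d_x \) is the value of \( A_1 \) just found, taken for \( n = 2 \).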
Again, \( A_2 \) consists of products such as \( \alpha_1 \alpha_2, \alpha_1 \alpha_3, \alpha_2 \alpha_3, \) &c., and pure powers, as \( \alpha_1^2, \alpha_2^2, \) &c.; the general form of the first class of terms is \( \alpha_p \alpha_q \), and we now proceed to find its coefficient, or the number of times this combination occurs, which number may be denoted by \( (\alpha_p \alpha_q) \). Supposing \( p \) less than \( q \), no factor preceding \( x + \alpha_1 + \alpha_2 + \ldots + \alpha_p \) will be concerned in forming the combination in question, and in that factor itself and the succeeding ones the terms preceding \( \alpha_p \) may also be neglected. The factors commencing from the above, arranged horizontally, will form this diagram: \[ \begin{align*} x + \alpha_1 + \ldots + \alpha_p \\ x + \alpha_1 + \ldots + \alpha_p + \alpha_{p+1} \\ x + \alpha_1 + \ldots + \alpha_p + \alpha_{p+1} + \alpha_{p+2} \\ \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \\ x + \alpha_1 + \ldots + \alpha_p + \alpha_{p+1} + \alpha_{p+2} + \ldots + \alpha_q \ast \\ x + \alpha_1 + \ldots + \alpha_p + \ldots \ldots \ldots \ldots + \alpha_q + \alpha_{q+1} \\ x + \alpha_1 + \ldots + \alpha_p + \ldots \ldots \ldots \ldots + \alpha_q + \alpha_{q+1} + \alpha_{q+2} \\ &\text{\&c.} \end{align*} \] Now if \( \alpha_{q+1} \) were placed where the asterisk stands, the combinations of \( \alpha_p \) with \( \alpha_q \) and with \( \alpha_{q+1} \) would be alike; \( \therefore (\alpha_p \alpha_q) - (\alpha_p \alpha_{q+1}) = \) the number of combinations of one term at the asterisk with the terms in the vertical column of \( \alpha_p \), except that \( \alpha_p \) which is in the same horizontal line with the asterisk; it is therefore the number of terms minus one in that column, which (since \( p - 1 \) factors precede the first above written) will be \( n - p \). Therefore, \( \Delta \) denoting the finite difference when \( q \) increases by unity, we have \[ \Delta (\alpha_p \alpha_q) = -(n - p); \] therefore \[ (\alpha_p \alpha_q) = (n - p)(c - q), \] \( c \) being independent of \( q \). Suppose \( q = n \); \( (\alpha_p \alpha_n) \) will be the number of terms minus one in the column of \( \alpha_p \), since \( \alpha_n \) enters only once; that is, \( (\alpha_p \alpha_n) = n - p \), therefore \( c - n = 1 \), or \( c = n + 1 \), which gives \[ (\alpha_p \alpha_q) = (n - p)(n - q + 1). \] As for the coefficients of the powers, as \( \alpha_p^2 \), denoting such by a similar notation \( (\alpha_p^2) \), they will not be affected by the supposition that \[ \alpha_1 = 0, \; \alpha_2 = 0, \ldots \alpha_{p-1} = 0, \; \alpha_{p+1} = 0, \ldots \alpha_n = 0; \] they are therefore the same as in \( (1 + \alpha_p)^{n-p+1} \), that is \( \frac{(n-p)(n-p+1)}{1 \cdot 2} \), which is half of the formula obtained by putting \( q = p \). Hence \[ A_2 = \frac{n (n-1)}{1 \cdot 2} \cdot \alpha_1^2 + \frac{(n-1)(n-2)}{1 \cdot 2} \cdot \alpha_2^2 + \frac{(n-2)(n-3)}{1 \cdot 2} \cdot \alpha_3^2 + \ldots + (n-1)(n-1)\, \alpha_1 \alpha_2 \\ + (n-1)(n-2)\, \alpha_1 \alpha_3 + (n-1)(n-3)\, \alpha_1 \alpha_4 + \ldots \\ + (n-2)(n-2)\, \alpha_2 \alpha_3 + (n-2)(n-3)\, \alpha_2 \alpha_4 + \ldots \\ + (n-3)(n-3)\, \alpha_3 \alpha_4 + \ldots \] In like manner we may classify the terms of which \( A_3 \) is composed into terms of the forms \( \alpha_p \alpha_q \alpha_r, \; \alpha_p^2 \alpha_q, \; \alpha_p^3 \) respectively, \( p, q, r \) being arranged according to magnitude; their coefficients may be represented as before by the same letters in brackets.
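The law of these coefficients may be checked at once in a small case (a verification added for illustration only). For \( n = 3 \) the product is \( (x+\alpha_1)(x+\alpha_1+\alpha_2)(x+\alpha_1+\alpha_2+\alpha_3) \), and direct multiplication gives \[ A_2 = 3\alpha_1^2 + \alpha_2^2 + 4\,\alpha_1\alpha_2 + 2\,\alpha_1\alpha_3 + \alpha_2\alpha_3, \] in accordance with \( (\alpha_1^2) = \frac{3 \cdot 2}{1 \cdot 2} = 3 \), \( (\alpha_2^2) = \frac{2 \cdot 1}{1 \cdot 2} = 1 \), \( (\alpha_1\alpha_2) = (n-1)(n-1) = 4 \), \( (\alpha_1\alpha_3) = (n-1)(n-2) = 2 \), and \( (\alpha_2\alpha_3) = (n-2)(n-2) = 1 \).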
Every combination \( \alpha_p \alpha_q \) may be combined with \( \alpha_r \), except such as are formed from the \( \alpha_p \) and \( \alpha_q \) which are in the same horizontal line with it; if these are erased, the number \( n \) is reduced to \( n-1 \), and the combinations of \( \alpha_p \alpha_q \) are then, by what has been already shown, only \( (n-p-1)(n-q) \) in number; therefore the excess of the number of the combinations of \( \alpha_r \) with \( \alpha_p \alpha_q \) above that of \( \alpha_{r+1} \) is \( (n-p-1)(n-q) \), or, taking the finite difference in reference to \( r \), \[ \Delta (\alpha_p \alpha_q \alpha_r) = -(n-p-1)(n-q); \] thence \[ (\alpha_p \alpha_q \alpha_r) = (n-p-1)(n-q)(c-r), \] and putting \( r = n \) we find as before \( c = n + 1 \); therefore \[ (\alpha_p \alpha_q \alpha_r) = (n-p-1)(n-q)(n-r+1), \] and generally if \( s > r, r > q, q > p, \&c. \), then by the same process \[ (\alpha_s \alpha_r \alpha_q \alpha_p \ldots) = (n-s+1)(n-r)(n-q-1)(n-p-2) \ldots \] Again, if we erase the \( \alpha_p \) which is in the same horizontal line with \( \alpha_q \), the number of combinations of the remaining terms \( \alpha_p \) (in number \( n-p \)) is \( \frac{(n-p)(n-p-1)}{1 \cdot 2} \), and since the number of terms in the vertical line where \( \alpha_q \) stands is \( n-q+1 \), it follows that \[ (\alpha_p^2 \alpha_q) = \frac{(n-p)(n-p-1)}{1 \cdot 2} \cdot (n-q+1), \] and generally \[ (\alpha_s^{s'} \alpha_r^{r'} \alpha_q^{q'} \ldots) = \frac{(n-s+1)(n-s) \ldots (s' \text{ times})}{1 \cdot 2 \ldots s'} \cdot \frac{(n-r)(n-r-1) \ldots (r' \text{ times})}{1 \cdot 2 \ldots r'} \cdot \frac{(n-q-1)(n-q-2) \ldots (q' \text{ times})}{1 \cdot 2 \ldots q'} \ldots \] Lastly, \( (\alpha_p^3) \) is the same as if all the terms \( \alpha_1, \alpha_2 \), &c. were zero, except \( \alpha_p \), and is therefore \[ \frac{(n-p+1)(n-p)(n-p-1)}{1 \cdot 2 \cdot 3}. \] More generally \[ (\alpha_p^{p'}) = \frac{(n-p+1)(n-p) \cdots (p' \text{ times})}{1 \cdot 2 \cdots p'}. \] We have thus investigated the coefficients of every combination which enters the whole product; and then, if only \( \frac{d^{p'}v_p}{dx^{p'}} \) be substituted for any general symbol \( \alpha_p^{p'} \), the required development is completely obtained. It may be remarked that the coefficients of the combinations of consecutive terms are pure powers; thus \[ (\alpha_1 \alpha_2) = (n-1)^2, \quad (\alpha_2 \alpha_3) = (n-2)^2, \; \&c., \quad (\alpha_1 \alpha_2 \alpha_3) = (n-2)^3. \]
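These counting formulæ admit of a modern symbolic check (no part of the memoir); the short computation below, in which the names a1, ..., a4 merely stand for \( \alpha_1, \ldots, \alpha_4 \), expands the product for \( n = 4 \) and reads off a few of the bracketed coefficients.

```python
# Check of the coefficient formulas of arts. 28-29 for n = 4.
import sympy as sp

x = sp.symbols('x')
a = sp.symbols('a1:5')      # a[0]..a[3] play the part of alpha_1..alpha_4
n = 4

product = sp.Mul(*[x + sum(a[:i + 1]) for i in range(n)])
expanded = sp.expand(product)

A2 = sp.Poly(expanded.coeff(x, n - 2), *a)
A3 = sp.Poly(expanded.coeff(x, n - 3), *a)

# (alpha_1 alpha_2) = (n-1)(n-1) = 9, (alpha_1^2) = n(n-1)/2 = 6
print(A2.coeff_monomial(a[0]*a[1]), A2.coeff_monomial(a[0]**2))

# (alpha_1 alpha_2 alpha_3) = (n-2)^3 = 8,
# (alpha_1^2 alpha_2) = (n-1)(n-2)/2 * (n-1) = 9, (alpha_1^3) = 4
print(A3.coeff_monomial(a[0]*a[1]*a[2]),
      A3.coeff_monomial(a[0]**2*a[1]),
      A3.coeff_monomial(a[0]**3))
```

The printed values, 9 and 6 for \( A_2 \) and 8, 9, 4 for \( A_3 \), agree with the formulæ of this and the preceding article.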
29. By the preceding investigation we have obtained the following general formula, in which the subject is any function of \( x \): \[ \frac{v_1 d_x\, v_2 d_x\, v_3 d_x \cdots v_n d_x}{v_1 \cdot v_2 \cdot v_3 \cdots v_n} = d_x^n + d_x^{n-1} \Big\{ n \frac{d v_1}{v_1\, dx} + (n-1) \frac{d v_2}{v_2\, dx} + (n-2) \frac{d v_3}{v_3\, dx} + \cdots + \frac{d v_n}{v_n\, dx} \Big\} \] \[ + d_x^{n-2} \Big\{ (n-1)(n-1) \frac{d v_1}{v_1\, dx} \cdot \frac{d v_2}{v_2\, dx} + (n-1)(n-2) \frac{d v_1}{v_1\, dx} \cdot \frac{d v_3}{v_3\, dx} + \cdots \] \[ + (n-2)(n-2) \frac{d v_2}{v_2\, dx} \cdot \frac{d v_3}{v_3\, dx} + (n-2)(n-3) \frac{d v_2}{v_2\, dx} \cdot \frac{d v_4}{v_4\, dx} + \cdots \] \[ + (n-3)(n-3) \frac{d v_3}{v_3\, dx} \cdot \frac{d v_4}{v_4\, dx} + \cdots \] \[ + \frac{n(n-1)}{2} \cdot \frac{d^2 v_1}{v_1\, dx^2} + \frac{(n-1)(n-2)}{2} \cdot \frac{d^2 v_2}{v_2\, dx^2} + \frac{(n-2)(n-3)}{2} \cdot \frac{d^2 v_3}{v_3\, dx^2} + \cdots \Big\} \] \[ + d_x^{n-3} \Big\{ (n-2)(n-2)(n-2) \cdot \frac{d v_1}{v_1\, dx} \cdot \frac{d v_2}{v_2\, dx} \cdot \frac{d v_3}{v_3\, dx} + \&c. \Big\} + \&c. \] Put \( v_1 = v_2 = v_3 = \cdots = v_n = v \); hence \[ (v\, d_x)^n = d_x^n\, v^n + \frac{n(n+1)}{2}\, d_x^{n-1}\, v^{n-1} \frac{d v}{d x} + d_x^{n-2} \left\{ \frac{(n-1)n(n+1)(3n-2)}{2 \cdot 3 \cdot 4}\, v^{n-2} \left( \frac{d v}{d x} \right)^2 + \frac{(n-1)n(n+1)}{2 \cdot 3}\, v^{n-1} \frac{d^2 v}{d x^2} \right\} + \&c.; \] and similarly, if we put in the general formula \( v_1 = 1 \), and write \( v_1 \) for \( v_2 \), \( v_2 \) for \( v_3 \), &c., and then multiply by \( v_n \), finally making all equal to \( v \), we obtain \[ (d_x\, v)^n = d_x^n\, v^n + \frac{(n-1)n}{2}\, d_x^{n-1}\, v^{n-1} \frac{dv}{dx} + d_x^{n-2} \left\{ \frac{(n-2)(n-1)n(3n-5)}{2 \cdot 3 \cdot 4}\, v^{n-2} \left( \frac{dv}{dx} \right)^2 + \frac{(n-2)(n-1)n}{2 \cdot 3}\, v^{n-1} \frac{d^2v}{dx^2} \right\} + \ldots \] Put now \( \frac{1}{v} \) for \( v \) in this formula, whence \[ (d_x\, v^{-1})^n = d_x^n\, v^{-n} - \frac{(n-1)n}{2}\, d_x^{n-1}\, v^{-n-1} \frac{dv}{dx} + d_x^{n-2} \left\{ \frac{(n-2)(n-1)n(n+1)}{2 \cdot 4}\, v^{-n-2} \left( \frac{dv}{dx} \right)^2 - \frac{(n-2)(n-1)n}{2 \cdot 3}\, v^{-n-1} \frac{d^2v}{dx^2} \right\} + \ldots \]

30. Change of the independent variable. When \( u \) is a function of \( y \), and \( y \) of \( x \), then it is easily shown that \[ \frac{du}{dy} = \frac{du}{dx} \cdot \left( \frac{dy}{dx} \right)^{-1}, \quad \text{or} \quad d_y = d_x \cdot \left( \frac{dy}{dx} \right)^{-1}, \] omitting the subject \( u \); hence by substitution in the preceding general formula we have \[ d_y^{\,n} = d_x^n \left( \frac{dy}{dx} \right)^{-n} - \frac{(n-1)n}{2}\, d_x^{n-1} \left( \frac{dy}{dx} \right)^{-n-1} \cdot \frac{d^2y}{dx^2} + d_x^{n-2} \left\{ \frac{(n-2)(n-1)n(n+1)}{2 \cdot 4} \left( \frac{dy}{dx} \right)^{-n-2} \left( \frac{d^2y}{dx^2} \right)^2 - \frac{(n-2)(n-1)n}{2 \cdot 3} \left( \frac{dy}{dx} \right)^{-n-1} \cdot \frac{d^3y}{dx^3} \right\} + \ldots \] Thus, for example, if \( n = 3 \), \[ \frac{d^3u}{dy^3} = \frac{d^3u}{dx^3} \left( \frac{dy}{dx} \right)^{-3} - 3 \cdot \frac{d^2u}{dx^2} \left( \frac{dy}{dx} \right)^{-4} \cdot \frac{d^2y}{dx^2} + \frac{du}{dx} \left\{ 3 \left( \frac{dy}{dx} \right)^{-5} \left( \frac{d^2y}{dx^2} \right)^2 - \left( \frac{dy}{dx} \right)^{-4} \cdot \frac{d^3y}{dx^3} \right\}; \] and \[ \frac{d^3x}{dy^3} = 3 \left( \frac{dy}{dx} \right)^{-5} \cdot \left( \frac{d^2y}{dx^2} \right)^2 - \left( \frac{dy}{dx} \right)^{-4} \cdot \frac{d^3y}{dx^3}. \]
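The case \( n = 3 \) just written down admits of a modern symbolic check (no part of the original memoir); in the sketch below the particular functions \( y = x^2 + x \) and \( u = y^4 + 3y \) are arbitrary choices made only so that the two sides may be compared exactly.

```python
# Check of art. 30, n = 3: d^3 u / d y^3 expressed through derivatives in x.
import sympy as sp

x, t = sp.symbols('x t')
y = x**2 + x                       # any y(x) with dy/dx not identically zero
u_of_y = lambda s: s**4 + 3*s      # any u(y)

u = u_of_y(y)                                   # u regarded as a function of x
du = [sp.diff(u, x, k) for k in range(4)]       # u, du/dx, d2u/dx2, d3u/dx3
dy = [sp.diff(y, x, k) for k in range(4)]       # y, dy/dx, d2y/dx2, d3y/dx3

# right-hand side of the memoir's formula for d^3 u / d y^3
rhs = (du[3]*dy[1]**-3
       - 3*du[2]*dy[1]**-4*dy[2]
       + du[1]*(3*dy[1]**-5*dy[2]**2 - dy[1]**-4*dy[3]))

# the same quantity computed directly: differentiate u three times with
# respect to y, then restore y = y(x)
lhs = sp.diff(u_of_y(t), t, 3).subs(t, y)

print(sp.simplify(lhs - rhs))                   # expect 0
```

The difference of the two members simplifies to zero, as the formula requires.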
As I am not aware that any formula has heretofore been given for the general change of the independent variable, I shall here add a reinvestigation of the same subject on simple principles. When \( x \) is changed into \( x + h \), suppose \( y \) to become \( y + k \), and \( u \) to be changed into \( u + l \). Now \( u + l \) may be expressed by Taylor's theorem in two ways, \[ u + l = u + \frac{du}{dx} \cdot h + \frac{d^2u}{dx^2} \cdot \frac{h^2}{1 \cdot 2} + \frac{d^3u}{dx^3} \cdot \frac{h^3}{1 \cdot 2 \cdot 3} + \ldots \] \[ = u + \frac{du}{dy} \cdot k + \frac{d^2u}{dy^2} \cdot \frac{k^2}{1 \cdot 2} + \frac{d^3u}{dy^3} \cdot \frac{k^3}{1 \cdot 2 \cdot 3} + \ldots \] Hence \( \frac{d^n u}{dy^n} \cdot \frac{1}{1 \cdot 2 \ldots n} \) is the coefficient of \( k^n \) in \( l \), that is, in the expression \[ l = \frac{du}{dx} \cdot h + \frac{d^2u}{dx^2} \cdot \frac{h^2}{1 \cdot 2} + \frac{d^3u}{dx^3} \cdot \frac{h^3}{1 \cdot 2 \cdot 3} + \ldots + \frac{d^n u}{dx^n} \cdot \frac{h^n}{1 \cdot 2 \ldots n} + \ldots; \] and since it is visible that \( h \) may be expanded in the form \( A k + B k^2 + \ldots \), it will be unnecessary to consider in \( l \) any term after \( \frac{d^n u}{dx^n} \cdot \frac{h^n}{1 \cdot 2 \ldots n} \). Hence \( \frac{d^n u}{dy^n} \) is the coefficient of \( k^n \), when for \( h \) we substitute its value in terms of \( k \) in the polynomial \[ H = \frac{d^n u}{dx^n} \cdot h^n + n \cdot \frac{d^{n-1} u}{dx^{n-1}} \cdot h^{n-1} + n(n-1) \cdot \frac{d^{n-2} u}{dx^{n-2}} \cdot h^{n-2} + \ldots + 2 \cdot 3 \cdot 4 \ldots n \cdot \frac{du}{dx} \cdot h. \] Again, \( k \) is given in terms of \( h \) by Taylor's theorem, \[ k = \frac{dy}{dx} \cdot h + \frac{d^2 y}{dx^2} \cdot \frac{h^2}{1 \cdot 2} + \frac{d^3 y}{dx^3} \cdot \frac{h^3}{1 \cdot 2 \cdot 3} + \ldots \] Put for abridgement \[ Y = \frac{d^2 y}{dx^2} \cdot \left( \frac{dy}{dx} \right)^{-1} \cdot \frac{h}{2} + \frac{d^3 y}{dx^3} \cdot \left( \frac{dy}{dx} \right)^{-1} \cdot \frac{h^2}{2 \cdot 3} + \frac{d^4 y}{dx^4} \cdot \left( \frac{dy}{dx} \right)^{-1} \cdot \frac{h^3}{2 \cdot 3 \cdot 4} + \ldots \] The equation for determining \( h \) in terms of \( k \) becomes then \[ h - \left\{ k \left( \frac{dy}{dx} \right)^{-1} - h Y \right\} = 0. \] In a memoir on the Resolution of Algebraic Equations, published in the fourth volume of the Transactions of the Cambridge Philosophical Society, I have proved the following rule. If \( \varphi(x) = 0 \) be an equation which contains only entire positive powers of \( x \), and \( f(x) \) any other function of the same kind, of which the differential coefficient or derived function is \( f'(x) \), then the value of \( f(x) \) will be found by taking the coefficient of \( \frac{1}{x} \) in the expression \( -f'(x) \cdot \log\left\{ \frac{\varphi(x)}{x} \right\} \). Applying this rule to the case before us, we find that \( H \) is the coefficient of \( \frac{1}{h} \) in the formula \[ -\frac{dH}{dh} \cdot \log \left\{ 1 - \left( \frac{k}{h} \cdot \left( \frac{dy}{dx} \right)^{-1} - Y \right) \right\}, \] and consequently \( \frac{d^n u}{dy^n} \) is the coefficient of \( \frac{k^n}{h} \) in the same formula.
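The rule cited from the Cambridge memoir may be seen at work in the simplest instance (an illustration added here; it is no part of the memoir, and the particular functions are chosen only for definiteness). Take \( \varphi(x) = x - a \), whose single root is \( a \), and \( f(x) = x^2 \); then \[ -f'(x) \cdot \log\left\{ \frac{\varphi(x)}{x} \right\} = -2x \log\left( 1 - \frac{a}{x} \right) = 2x \left( \frac{a}{x} + \frac{a^2}{2x^2} + \frac{a^3}{3x^3} + \ldots \right) = 2a + \frac{a^2}{x} + \frac{2a^3}{3x^2} + \ldots, \] and the coefficient of \( \frac{1}{x} \) is \( a^2 = f(a) \), the value of \( f \) at the root, as the rule asserts.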
The first \( (n-1) \) terms in the expansion of the logarithm do not contain \( k^n \), and may therefore be neglected; instead then of \[ -\log \left\{ 1 - \left( \frac{k}{h} \cdot \left( \frac{dy}{dx} \right)^{-1} - Y \right) \right\} \] we may use the series \[ \frac{1}{n} \left\{ \frac{k}{h} \cdot \left( \frac{dy}{dx} \right)^{-1} - Y \right\}^n + \frac{1}{n+1} \left\{ \frac{k}{h} \cdot \left( \frac{dy}{dx} \right)^{-1} - Y \right\}^{n+1} + \ldots = S, \] and the value of \[ \frac{dH}{dh} \text{ is } n \cdot \frac{d^n u}{dx^n} \cdot h^{n-1} + n(n-1) \cdot \frac{d^{n-1} u}{dx^{n-1}} \cdot h^{n-2} + \&c.; \] in the product of both which series the coefficient of \( \frac{k^n}{h} \), being sought, will give the required value of \( \frac{d^n u}{dy^n} \), and this is manifestly of the form \[ A_1 \cdot \frac{d^n u}{dx^n} + A_2 \cdot \frac{d^{n-1} u}{dx^{n-1}} + A_3 \cdot \frac{d^{n-2} u}{dx^{n-2}} + \ldots + A_n \cdot \frac{du}{dx}. \] \( A_1 \) is the coefficient of \( \frac{k^n}{h} \) in \( n h^{n-1} S \), or of \( \frac{k^n}{h^n} \) in \( n S \); \( A_2 \) is the coefficient of \( \frac{k^n}{h} \) in \( n (n-1) h^{n-2} S \), or of \( \frac{k^n}{h^{n-1}} \) in \( n (n-1) S \), &c. Now if we observe that \( Y \) contains \( h \) as a factor, it follows that the coefficient of \( k^n \) in the series \( S \) is of the form \[ \frac{\alpha_1}{h^n} + \frac{\alpha_2}{h^{n-1}} + \frac{\alpha_3}{h^{n-2}} + \ldots, \] and consequently \[ A_1 = n \alpha_1, \quad A_2 = n(n-1) \alpha_2, \quad A_3 = n(n-1)(n-2) \alpha_3, \; \ldots \; A_n = n(n-1)(n-2) \ldots 1 \cdot \alpha_n. \] Therefore \[ \frac{d^n u}{dy^n} = n \alpha_1 \frac{d^n u}{dx^n} + n(n-1) \alpha_2 \frac{d^{n-1} u}{dx^{n-1}} + n(n-1)(n-2) \alpha_3 \frac{d^{n-2} u}{dx^{n-2}} + \ldots \] By taking in fact the coefficient of \( k^n \) in \( S \) and multiplying it by \( h^n \), we find that the product, viz. \( \alpha_1 + \alpha_2 h + \alpha_3 h^2 + \ldots + \alpha_n h^{n-1} + \ldots \), &c., is equivalent to \[ \frac{1}{n} \left( \frac{dy}{dx} \right)^{-n} - h \left( \frac{dy}{dx} \right)^{-n-1} \left\{ \frac{d^2 y}{dx^2} \cdot \frac{1}{2} + \frac{d^3 y}{dx^3} \cdot \frac{h}{2 \cdot 3} + \ldots \right\} \] \[ + \frac{n+1}{2}\, h^2 \left( \frac{dy}{dx} \right)^{-n-2} \left\{ \frac{d^2 y}{dx^2} \cdot \frac{1}{2} + \frac{d^3 y}{dx^3} \cdot \frac{h}{2 \cdot 3} + \ldots \right\}^2 \] \[ - \frac{(n+1)(n+2)}{2 \cdot 3}\, h^3 \left( \frac{dy}{dx} \right)^{-n-3} \left\{ \frac{d^2 y}{dx^2} \cdot \frac{1}{2} + \frac{d^3 y}{dx^3} \cdot \frac{h}{2 \cdot 3} + \ldots \right\}^3 + \ldots \] Hence \[ \alpha_1 = \frac{1}{n} \left( \frac{dy}{dx} \right)^{-n}, \] \[ \alpha_2 = - \left( \frac{dy}{dx} \right)^{-n-1} \cdot \frac{d^2 y}{dx^2} \cdot \frac{1}{2}, \] \[ \alpha_3 = \frac{n+1}{2} \left( \frac{dy}{dx} \right)^{-n-2} \cdot \left( \frac{d^2 y}{2\, dx^2} \right)^2 - \left( \frac{dy}{dx} \right)^{-n-1} \cdot \frac{d^3 y}{2 \cdot 3\, dx^3}, \] \[ \alpha_4 = -\frac{(n+1)(n+2)}{2 \cdot 3} \cdot \left( \frac{dy}{dx} \right)^{-n-3} \cdot \left( \frac{d^2y}{2\, dx^2} \right)^3 + \frac{n+1}{2} \cdot \left( \frac{dy}{dx} \right)^{-n-2} \cdot 2 \cdot \left( \frac{d^2y}{2\, dx^2} \right) \cdot \left( \frac{d^3y}{2 \cdot 3\, dx^3} \right) - \left( \frac{dy}{dx} \right)^{-n-1} \cdot \frac{d^4y}{dx^4} \cdot \frac{1}{2 \cdot 3 \cdot 4}. \]
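These values may be compared with the operational result of this article (an editorial cross-check, not in the original): the first two coefficients of the re-investigation are \[ n \alpha_1 = \left( \frac{dy}{dx} \right)^{-n}, \qquad n(n-1)\, \alpha_2 = -\frac{(n-1)n}{2} \left( \frac{dy}{dx} \right)^{-n-1} \frac{d^2 y}{dx^2}, \] which are precisely the coefficients of \( d_x^n \) and \( d_x^{n-1} \) in the formula for \( d_y^{\,n} \) obtained above by the method of operations; so far, therefore, as the two investigations have been carried in common, they agree.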
In general let \[ \left( \frac{d^2y}{dx^2} \cdot \frac{1}{2} + \frac{d^3y}{dx^3} \cdot \frac{h}{2 \cdot 3} + \frac{d^4y}{dx^4} \cdot \frac{h^2}{2 \cdot 3 \cdot 4} + \&c. \right)^m = y_{m,0} + y_{m,1} \cdot h + y_{m,2} \cdot h^2 + \&c. \] Then \[ (-1)^{m-1} \alpha_m = \frac{(n+1)(n+2) \ldots (n+m-2)}{2 \cdot 3 \ldots (m-1)} \cdot \left( \frac{dy}{dx} \right)^{-n-m+1} \cdot y_{m-1,0} \] \[ - \frac{(n+1)(n+2) \ldots (n+m-3)}{2 \cdot 3 \ldots (m-2)} \cdot \left( \frac{dy}{dx} \right)^{-n-m+2} \cdot y_{m-2,1} \] \[ + \frac{(n+1)(n+2) \ldots (n+m-4)}{2 \cdot 3 \ldots (m-3)} \cdot \left( \frac{dy}{dx} \right)^{-n-m+3} \cdot y_{m-3,2} \] \[ \vdots \] \[ \mp \left( \frac{dy}{dx} \right)^{-n-1} \cdot y_{1,m-2}, \] \( m \) being \( > 1 \).

Corollary. Put \( u = x \); then \[ \frac{d^n x}{dy^n} = n(n-1)(n-2) \ldots 1 \cdot \alpha_n, \] where \[ (-1)^{n-1} \cdot \alpha_n = \frac{(n+1)(n+2) \ldots (2n-2)}{2 \cdot 3 \ldots (n-1)} \cdot \left( \frac{dy}{dx} \right)^{-2n+1} \cdot y_{n-1,0} - \frac{(n+1)(n+2) \ldots (2n-3)}{2 \cdot 3 \ldots (n-2)} \cdot \left( \frac{dy}{dx} \right)^{-2n+2} \cdot y_{n-2,1} + \&c.; \] \[ \therefore (-1)^{n-1} \cdot \frac{d^n x}{dy^n} = \left\{ n(n+1) \ldots (2n-2) \cdot \frac{dy}{dx} \cdot y_{n-1,0} \right. \] \[ - (n-1) \cdot n(n+1) \ldots (2n-3) \cdot \left( \frac{dy}{dx} \right)^2 \cdot y_{n-2,1} \] \[ + (n-2)(n-1) \ldots (2n-4) \cdot \left( \frac{dy}{dx} \right)^3 \cdot y_{n-3,2} + \&c. \left. \right\} \cdot \left( \frac{dy}{dx} \right)^{-2n}. \] Thus \[ \frac{dx}{dy} = \left\{ \frac{dy}{dx} \cdot y_{0,0} \right\} \cdot \left( \frac{dy}{dx} \right)^{-2} = \left( \frac{dy}{dx} \right)^{-1}, \] \[ \frac{d^2x}{dy^2} = -\left\{ 2 \frac{dy}{dx} \cdot y_{1,0} \right\} \cdot \left( \frac{dy}{dx} \right)^{-4} = -\left( \frac{dy}{dx} \right)^{-3} \cdot \frac{d^2y}{dx^2}, \] \[ \frac{d^3x}{dy^3} = \left\{ 3 \cdot 4 \cdot \frac{dy}{dx} \cdot y_{2,0} - 2 \cdot 3 \cdot \left( \frac{dy}{dx} \right)^2 \cdot y_{1,1} \right\} \left( \frac{dy}{dx} \right)^{-6} \] \[ = 3 \cdot \left( \frac{dy}{dx} \right)^{-5} \cdot \left( \frac{d^2y}{dx^2} \right)^2 - \left( \frac{dy}{dx} \right)^{-4} \cdot \frac{d^3y}{dx^3}, \; \&c. \; \&c. \]
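As one further instance of the corollary (added for illustration; it is not in the memoir), take \( n = 4 \): here \( y_{3,0} = \left( \frac{d^2y}{2\, dx^2} \right)^3 \), \( y_{2,1} = 2 \cdot \frac{d^2y}{2\, dx^2} \cdot \frac{d^3y}{2 \cdot 3\, dx^3} \), \( y_{1,2} = \frac{d^4y}{2 \cdot 3 \cdot 4\, dx^4} \), and the formula gives \[ -\frac{d^4x}{dy^4} = \left\{ 4 \cdot 5 \cdot 6 \cdot \frac{dy}{dx} \cdot y_{3,0} - 3 \cdot 4 \cdot 5 \cdot \left( \frac{dy}{dx} \right)^2 y_{2,1} + 2 \cdot 3 \cdot 4 \cdot \left( \frac{dy}{dx} \right)^3 y_{1,2} \right\} \left( \frac{dy}{dx} \right)^{-8}, \] that is, \[ \frac{d^4x}{dy^4} = -15 \left( \frac{dy}{dx} \right)^{-7} \left( \frac{d^2y}{dx^2} \right)^3 + 10 \left( \frac{dy}{dx} \right)^{-6} \frac{d^2y}{dx^2} \cdot \frac{d^3y}{dx^3} - \left( \frac{dy}{dx} \right)^{-5} \frac{d^4y}{dx^4}, \] which agrees with the value found by differentiating \( \frac{dx}{dy} = \left( \frac{dy}{dx} \right)^{-1} \) three times in succession.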