On the Developement of Exponential Functions; Together with Several New Theorems Relating to Finite Differences
Author(s)
John Frederick W. Herschel
Year
1816
Volume
106
Pages
22 pages
Language
en
Journal
Philosophical Transactions of the Royal Society of London
Full Text (OCR)
III. On the developement of exponential functions; together with several new theorems relating to finite differences. By John Frederick W. Herschel, Esq. F. R. S.
Read December 14, 1815.
In the year 1772, Lagrange, in a Memoir, published among those of the Berlin Academy, announced those celebrated theorems expressing the connection between simple exponential indices, and those of differentiation and integration. The demonstration of those theorems, although it escaped their illustrious discoverer, has been since accomplished by many analysts, and in a great variety of ways. Laplace set the first example in two Memoirs presented to the Academy of Sciences,* and may be supposed in the course of these researches, to have caught the first hint of the Calcul des Fonctions Generatrices with which they are so intimately connected; as, after an interval of two years, another demonstration of them, drawn solely from the principles of that calculus appeared, together with the calculus itself, in the memoirs of the Academy. This demonstration, involving, however, the passage from finite to infinite, is therefore (although preferable perhaps in a systematic arrangement, where all is made to flow from one fundamental principle) less elegant; not on account of any confusion of ideas, or want of evidence; but, because the ideas of finite and infinite, as such, are extraneous to symbolic language, and, if we
* Mém. des Savans Etrangers, 1773. p. 535.—Mém. de l'Acad. 1772. p. 102.
would avoid their use, much circumlocution, as well as very unwieldy formulæ must be introduced. Arbogast also, in his work on derivations, has given two most ingenious demonstrations of them, and added greatly to their generality; and lastly, Dr. Brinkley has made them the subject of a paper in the Transactions of this Society,* to which I shall have occasion again to refer. Considered as insulated truths, unconnected with any other considerable branch of analysis, the method employed by the latter author seems the most simple and elegant which could have been devised. It has however the great inconvenience of not making us acquainted with the bearings and dependencies of these important theorems, which, in this instance, as in many others, are far more valuable than the mere formulæ.
The theorems above referred to are comprehended in the equation.
\[ \Delta^n u_x = \{ \varepsilon^{\Delta x.D} - 1 \}^n u_x ; \ldots \ldots \ldots \ldots (a) \]
or, more generally
\[ f(1 + \Delta) u_x = f \{ \varepsilon^{\Delta x.D} \} u_x ; \ldots \ldots \ldots \ldots (b) \]
where the \( \Delta \) applies to the variation of \( x \), and the \( D \) to the functional characteristic \( u \); and where \( n \) may have any value whatever.
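These theorems admit of an elementary verification whenever \( u_x \) is a polynomial in \( x \), the series for \( \varepsilon^{\Delta x.D} \) then terminating of itself. The following sketch (a modern illustration only, not part of the paper; \( h \) stands for \( \Delta x \), and the quintic is an arbitrary choice) confirms equation \((a)\) symbolically:

```python
# A modern check (not Herschel's) of theorem (a) for a polynomial u(x):
# Delta^n u = (e^{hD} - 1)^n u, where D is d/dx and h plays the part of Delta x.
import sympy as sp

x, h = sp.symbols('x h')
u = x**5 - 3*x**2 + 7          # an arbitrary quintic; any polynomial terminates the series
deg = 5

def apply_shift_minus_one(expr, times):
    # apply (e^{hD} - 1) repeatedly, expanding e^{hD} as the Taylor series in h*D
    for _ in range(times):
        expr = sum(h**k / sp.factorial(k) * sp.diff(expr, x, k) for k in range(1, deg + 1))
    return sp.expand(expr)

for n in (1, 2, 3):
    delta_n_u = sp.expand(sum((-1)**(n - j) * sp.binomial(n, j) * u.subs(x, x + j*h)
                              for j in range(n + 1)))
    assert sp.simplify(delta_n_u - apply_shift_minus_one(u, n)) == 0
print("Delta^n u = (e^{hD} - 1)^n u holds for the quintic, n = 1, 2, 3")
```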
Taking these theorems for granted, I shall observe, that, in their present form, they are but abridged expressions of their meaning, and that to become practically useful, their second members must be developed in a series of the powers of \( \Delta x.D \). This part of their theory has been most beautifully and satisfactorily treated by Laplace in the case of \( n = -1 \)
* Phil. Trans. 1807. I.
(one of the most important). Unfortunately, his method turns upon an artifice which, although remarkably ingenious, fails to afford us any satisfaction except in this particular case; and I am not aware that his researches have since extended beyond it. The essay of Dr. Brinkley (the only author I have met with who has attempted the general problem) goes to the bottom of the difficulty, and leads to a formula which, considering the complex nature of the subject, must be allowed to be far more simple than could have been expected. It is often, however, advantageous to undertake the solution of the same problem by different methods. The excellent geometer I have mentioned, has adopted one which appears at first sight very inartificial. It consists in expanding the second member of the equation \((a)\) reduced to the form
\[
\left\{ \frac{t}{1} + \frac{t^2}{1.2} + \frac{t^3}{1.2.3} + &c. \right\}^n
\]
by the well-known theorem for raising a multinomial to the \(n^{th}\) power. The difficulties and apparent obstacles which this method presents, he has overcome or eluded by a singularly acute discussion of the combinations of the various numerical coefficients and their powers. But it is obvious that this method, applied to the more general equation \((b)\), would lead into details of extreme complexity. This consideration induced me to begin with that equation, regarding the other as a particular case of it; and I have thus arrived at a general and highly interesting formula (equation \((2)\) of the following pages) hitherto, I believe, totally unnoticed, and which in the particular case of the equation \((a)\), when \(n\) is a positive integer, affords precisely the same result as Dr. Brinkley has given: when \(n\), however, is negative, it yields an expression widely differing from his in point of form (though of course affording the same numerical results) and which in the most important case, where \( n = -1 \), takes a form of greater simplicity than any I am yet aware of.
I purpose then to consider the second member of \((b)\) as developed in a series of powers of \( \Delta x \cdot D \) (which for the sake of brevity we will denote by \( t \)). If then we suppose
\[
f(\epsilon^t) = A_0 + A_1 t + A_2 t^2 + \&c.
\]
we shall have
\[
A_x = \frac{d^x f(\epsilon^t)}{1.2.\ldots.x. dt^x};
\]
where \( t = 0 \) after the differentiations.
Now, it is easy to see that \( \frac{d^x f(\epsilon^t)}{dt^x} \) will, by performing the operations indicated, assume the form
\[
K_{x,1} \epsilon^t \cdot Df(\epsilon^t) + K_{x,2} \epsilon^{2t} \cdot D^2f(\epsilon^t) + \&c.
\]
\( K_{x,y} \) being a certain numerical coefficient, depending on an equation of differences
\[
K_{x+1,y+1} = (y+1). K_{x,y+1} + K_{x,y}
\]
whose complete integral is
\[
K_{x,y} = C_y y^x - C_{y-1} \frac{(y-1)^x}{1} + \ldots (-1)^{y+1} \frac{1^x}{1.2.\ldots.(y-1)} \cdot C_1
\]
\( C_y \) being an arbitrary function of \( y \), to determine which we have only to consider that \( K_{x,x} \) is always, necessarily, unity; and consequently
\[
C_x \cdot x^x - C_{x-1} \frac{(x-1)^x}{1} + \ldots.(-1)^{x+1} C_1 \cdot \frac{1^x}{1.2.\ldots.(x-1)} = 1
\]
Now, we know that
\[
x^x - x \cdot \frac{(x-1)^x}{1} + \&c. = 1.2.\ldots.x
\]
that is,
$$\frac{1}{1 \cdot 2 \cdot \ldots \cdot x} \cdot x^x - \frac{1}{1 \cdot 2 \cdot \ldots \cdot (x-1)} \cdot \frac{(x-1)^x}{1} + \text{&c.} = 1;$$
whence it is plain that
$$C_y = \frac{1}{1 \cdot 2 \cdot \ldots \cdot y}$$
and of course that
$$K_{x,y} = \frac{y^x - \frac{y}{1} \cdot (y-1)^x + \text{&c.}}{1 \cdot 2 \cdot \ldots \cdot y} = \frac{\Delta^y o^x}{1 \cdot 2 \cdot \ldots \cdot y}$$
where $\Delta^y o^x$ denotes the first term of the $y^{th}$ differences of the terms of a series $o^x$, $1^x$, $2^x$, &c. We have then making $t=0$,
$$\frac{d^x f(\varepsilon^t)}{dt^x} = \frac{Df(1)}{1} \Delta o^x + \ldots \ldots \ldots + \frac{D^x f(1)}{1 \cdot 2 \cdot \ldots \cdot x} \Delta^x o^x.$$
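In modern terminology the coefficients \( K_{x,y} \) are the Stirling numbers of the second kind. As a brief computational aside (mine, not the paper's; the helper delta_zero evaluates \( \Delta^y o^x \) from its binomial expansion), the recurrence and the closed form \( \Delta^y o^x/(1.2.\ldots.y) \) may be checked to agree:

```python
# Check (illustrative): K_{x,y} from the recurrence K_{x+1,y+1} = (y+1)K_{x,y+1} + K_{x,y}
# against Delta^y 0^x / y!, with Delta^y 0^x = sum_j (-1)^(y-j) C(y,j) j^x.
from math import comb, factorial

def delta_zero(y, x):
    # y-th finite difference of the series 0^x, 1^x, 2^x, ... taken at its first term
    return sum((-1) ** (y - j) * comb(y, j) * j ** x for j in range(y + 1))

N = 8
K = [[0] * (N + 1) for _ in range(N + 1)]
K[0][0] = 1
for x in range(N):
    for y in range(N):
        K[x + 1][y + 1] = (y + 1) * K[x][y + 1] + K[x][y]

for x in range(1, N + 1):
    for y in range(1, x + 1):
        assert K[x][y] == delta_zero(y, x) // factorial(y)
print("recurrence and Delta^y 0^x / y! agree for all x, y up to", N)
```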
If we separate the symbols of operation from those of quantity, the second member of the equation just obtained may be much more elegantly written as follows:
$$\left\{ \frac{D \Delta}{1} + \frac{(D \Delta)^2}{1 \cdot 2} + \ldots \ldots \frac{(D \Delta)^x}{1 \cdot 2 \cdot \ldots \cdot x} \right\} f(1) \cdot o^x; \ldots \ldots \ldots \ldots (1)$$
referring the $D$ to the functional characteristic $f$, and the $\Delta$ to the $o$ and its powers.—Or, we may throw it into the following form,
$$\left\{ \frac{Df(1)}{1} \Delta + \frac{D^2 f(1)}{1 \cdot 2} \Delta^2 + \ldots \ldots \frac{D^x f(1)}{1 \cdot 2 \cdot \ldots \cdot x} \Delta^x \right\} o^x$$
Upon this, we have to observe—first, that the addition of the term $f(1)$ at the beginning of the series within the brackets makes no difference in the result; adding only to it the term $f(1) \times o^x$, which vanishes of itself: and, in the next place, that we are at liberty to suppose the series continued to infinity; as every term beyond $\frac{D^x f(1)}{1 \cdot 2 \cdot \ldots \cdot x} \Delta^x o^x$ vanishes, owing
to the well-known property of the functions $\Delta^{x+1} o^x$, $\Delta^{x+2} o^x$, &c., each of which is equal to zero. Our series then becomes
$$\{ f(1) + \frac{Df(1)}{1} \Delta + \&c. \} o^x = f(1 + \Delta) o^x$$
and we have therefore
$$f(\varepsilon^t) = f(1) + \frac{t}{1} \cdot f(1 + \Delta) o + \frac{t^2}{1.2} f(1 + \Delta) o^2 + \&c. \ldots \ldots (2)$$
In applying this series to any particular case we have only to develope $f(1 + \Delta)$ in powers of $\Delta$: then striking out the first term, as well as all those where the exponent of $\Delta$ is higher than that of $t$, to apply each of the remaining ones immediately before the annexed power of $o$, and the developement is then in a form adapted to numerical computation. This formula may be also farther compressed into
$$f(\varepsilon^t) = f(1 + \Delta) \varepsilon^{o \cdot t}; \ldots \ldots \ldots \ldots \ldots \ldots (3)$$
by simply writing it as follows:
$$f(\varepsilon^t) = f(1 + \Delta) \left\{ 1 + \frac{t \cdot o}{1} + \frac{t^2 \cdot o^2}{1.2} + \&c. \right\}.$$
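As a concrete illustration of the numerical use of equation (2), the following sketch (mine; the test function \( f(u) = \frac{1}{u+1} \) is an arbitrary choice) compares the Taylor coefficients of \( f(\varepsilon^t) \) with \( f(1+\Delta)o^x/(1.2.\ldots.x) \):

```python
# Check of equation (2): the coefficient of t^x in f(e^t) equals
# (1/x!) * sum_k f^(k)(1)/k! * Delta^k 0^x.
import sympy as sp
from math import comb

t, u = sp.symbols('t u')
f = 1 / (u + 1)                      # arbitrary test function

def delta_zero(k, x):
    return sum((-1) ** (k - j) * comb(k, j) * j ** x for j in range(k + 1))

X = 6
F = sp.series(f.subs(u, sp.exp(t)), t, 0, X + 1).removeO()
for x in range(X + 1):
    lhs = F.coeff(t, x)
    rhs = sum(sp.diff(f, u, k).subs(u, 1) / sp.factorial(k) * delta_zero(k, x)
              for k in range(x + 1)) / sp.factorial(x)
    assert sp.simplify(lhs - rhs) == 0
print("equation (2) verified for f(u) = 1/(u + 1) up to t^%d" % X)
```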
I shall notice one more form in which the same result may be exhibited. If we continue the series (1), as before, to infinity, and add the term 1 at its commencement, it becomes
$$\{ 1 + \frac{\Delta \cdot D}{1} + \frac{\Delta^2 \cdot D^2}{1.2} + \&c. \} o^x \cdot f(1) = \varepsilon^{\Delta D} o^x \cdot f(1)$$
whence, we obtain
$$f(\varepsilon^t) = f(1) + \frac{t}{1} \cdot \varepsilon^{\Delta D} o \cdot f(1) + \frac{t^2}{1.2} \cdot \varepsilon^{\Delta D} o^2 \cdot f(1) + \&c.$$
or, attending carefully to the application of the symbols
$$f(\varepsilon^t) = \varepsilon^{\Delta D} \left\{ 1 + \frac{o \cdot t}{1} + \frac{o^2 \cdot t^2}{1.2} + \&c. \right\} f(1)$$
$$= \varepsilon^{\Delta D + o \cdot t} f(1); \ldots \ldots \ldots \ldots \ldots \ldots (4)$$
We will now proceed to apply these results to the actual
development of equation (a). And, first, in the case where \( n \) is a positive integer, we have
\[
f(\varepsilon^t) = (\varepsilon^t - 1)^n; \quad f(t) = (t-1)^n.
\]
consequently,
\[
f(1+\Delta) = (1+\Delta-1)^n = \Delta^n
\]
wherefore the equation (2) becomes
\[
(\varepsilon^t - 1)^n = \frac{t}{1} \cdot \Delta^n o + \frac{t^2}{1.2} \cdot \Delta^n o^2 + \frac{t^3}{1.2.3} \cdot \Delta^n o^3 + \&c.; \ldots \ldots (5).
\]
of which the first \( n-1 \) terms vanish of themselves.
Let us next consider the formula \( (\varepsilon^t - 1)^{-n} \); \( -n \) being a negative integer. As this function, when developed, must evidently contain the negative powers of \( t \), as far as \( t^{-n} \), we first throw it into the form
\[
t^{-n} \left\{ \frac{t}{\varepsilon^t - 1} \right\}^n, \text{ or its equal } t^{-n} \left\{ \frac{\log_e \varepsilon^t}{\varepsilon^t - 1} \right\}^n
\]
supposing then \( f(\varepsilon^t) = \left\{ \frac{\log_e \varepsilon^t}{\varepsilon^t - 1} \right\}^n \), we shall have by applying the equation (2)
\[
\left\{ \frac{t}{\varepsilon^t - 1} \right\}^n = 1 + \frac{t}{1} \cdot \left\{ \frac{\log_e (1+\Delta)}{\Delta} \right\}^n o + \frac{t^2}{1.2} \cdot \left\{ \frac{\log_e (1+\Delta)}{\Delta} \right\}^n o^2 + \&c.; \ldots \ldots \ldots (6).
\]
All that now remains to be done is, to develope the function \( \left\{ \frac{\log_e (1+\Delta)}{\Delta} \right\}^n \) in powers of \( \Delta \). When \( n = 1 \), the development is well known to be
\[
\frac{1}{1} - \frac{\Delta}{2} + \frac{\Delta^2}{3} - \&c.
\]
Hence, if we suppose
\[
\frac{t}{\varepsilon^t - 1} = 1 + B_1 \cdot \frac{t}{1} + B_2 \cdot \frac{t^2}{1.2} + \&c.
\]
we shall have
\[
B_x = \frac{\log_e (1+\Delta)}{\Delta} o^x
\]
\[
= - \left\{ \frac{\Delta o^x}{2} - \frac{\Delta^2 o^x}{3} + \&c. \right\}; \ldots \ldots \ldots (7).
\]
and in general, if
\[ \left\{ \frac{t}{\varepsilon^t - 1} \right\}^n = 1 + {}^{n}B_1 \cdot \frac{t}{1} + {}^{n}B_2 \cdot \frac{t^2}{1.2} + \&c. \]
we shall have
\[ {}^{n}B_x = \left\{ \frac{\log.(1+\Delta)}{\Delta} \right\}^n o^x; \ldots \ldots \ldots \ldots (8). \]
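The case \( n = 1 \), equation (7), already furnishes a very direct arithmetic for the coefficients \( B_x \). A short check (mine), in exact rational arithmetic, against the familiar values \( B_1 = -\frac{1}{2}, B_2 = \frac{1}{6}, B_4 = -\frac{1}{30}, \&c. \):

```python
# Check (illustrative): equation (7), i.e. B_x = sum_{k>=1} (-1)^k/(k+1) * Delta^k 0^x,
# compared against the well-known values of the Bernoulli numbers.
from fractions import Fraction
from math import comb

def delta_zero(k, x):
    return sum((-1) ** (k - j) * comb(k, j) * j ** x for j in range(k + 1))

def B(x):
    return sum(Fraction((-1) ** k, k + 1) * delta_zero(k, x) for k in range(1, x + 1))

known = {1: Fraction(-1, 2), 2: Fraction(1, 6), 3: 0, 4: Fraction(-1, 30),
         5: 0, 6: Fraction(1, 42), 7: 0, 8: Fraction(-1, 30)}
for x, value in known.items():
    assert B(x) == value
print("equation (7) reproduces B_1 ... B_8")
```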
The coefficient of \( \Delta^x \) in the function \( \left\{ \log.(1+\Delta) \over \Delta \right\}^n \), developed in powers of \( \Delta \), is evidently
\[ \frac{d^{x+n}.(\log.t)^n}{1.2.\ldots.(x+n).dt^{x+n}} \]
\( t \) being made \( = 1 \) after the differentiations. Now we easily find, that the expression
\[ \frac{d^{x+n}.(\log.t)^n}{dt^{x+n}} \]
after executing the operations indicated, must take the form
\[ \frac{A_x + {}^{1}A_x \cdot \log. t + \ldots + {}^{n-1}A_x \, (\log. t)^{n-1}}{t^{x+n}} \]
and the equations which determine \( A_x, \&c. \) are
\[ {}^{n-1}A_{x+1} = -(x+n) \cdot {}^{n-1}A_x \]
\[ {}^{n-2}A_{x+1} = -(x+n) \cdot {}^{n-2}A_x + (n-1) \cdot {}^{n-1}A_x \]
\[ \ldots \ldots \ldots \ldots \ldots \ldots \ldots \]
\[ A_{x+1} = -(x+n) \cdot A_x + 1 \cdot {}^{1}A_x \]
The integration of these equations is attended with no difficulty, and gives for the value of \( A_x \) (the only one wanted) as follows:
\[ (-1)^x \cdot 1.2.\ldots(x+n-1) \cdot \Sigma \frac{1}{x+n} \Sigma \frac{1}{x+n} \Sigma \ldots \ldots \Sigma \frac{1}{x+n} \]
where there are \( (n-1) \) signs of integration; a constant being included under each. If now we suppose
\[ \frac{1}{1} + \frac{1}{2} + \ldots \ldots \frac{1}{x} = S \left\{ \frac{1}{x} \right\}, \]
\[ \frac{1}{1.2} + \frac{1}{1.3} + \frac{1}{2.3} + \ldots \frac{1}{(x-1).x} = S^2 \left\{ \frac{1}{x} \right\} \]
and so on, we shall have no difficulty in convincing ourselves that
\[ A_x = (-1)^x \cdot 1.2.\ldots.(x + n-1) \cdot {}^{n-1}S\left\{\frac{1}{x+n-1}\right\} \]
All the constants vanishing but that added at the first integration, which is equal to \(1.2.\ldots.n\). When \(t = 1\), the expression for \( \frac{d^{x+n}(\log. t)^n}{dt^{x+n}} \) reduces itself to \(A_x\), and therefore the coefficient of \(\Delta^x\) will become
\[ (-1)^x \cdot \frac{1.2.\ldots.n}{x+n} \cdot {}^{n-1}S\left\{\frac{1}{x+n-1}\right\} \]
We are thus conducted to the following value of \({}^{n}B_x\),
\[ {}^{n}B_x = -1.2.\ldots.n \cdot \left\{ \frac{\Delta}{n+1} \cdot {}^{n-1}S\left\{\frac{1}{n}\right\} - \frac{\Delta^2}{n+2} \cdot {}^{n-1}S\left\{\frac{1}{n+1}\right\} + \ldots \pm \frac{\Delta^x}{n+x} \cdot {}^{n-1}S\left\{\frac{1}{x+n-1}\right\} \right\} o^x \ldots\ldots\ldots\ldots (9) \]
The cases where \(n = 1\) and \(n = 2\) are the only ones of sufficient importance to merit a more particular consideration. In the former, we have already in our equation (7) given the expression for \(B_x\) or \({}^{1}B_x\). Its alternate, even values (the signs alone excepted) are those numbers so well known in analysis by the name of the "Numbers of Bernoulli," and among the variety of expressions they admit, I know of none so compendious, or so readily computed arithmetically. Indeed, to compute the higher numbers of Bernoulli directly has always been attended with some labour. If we examine the values of \(B_0, B_1, B_2, \&c.\), we shall observe that all the odd ones (with the exception of \(B_1 = -\frac{1}{2}\)) vanish: as indeed may easily be shown a priori from the nature of the function \(\frac{t}{\varepsilon^t - 1}\).
A considerable simplification of the latter case takes place owing to this circumstance: the alternate values of \({}^{2}B_x\) being
susceptible of an expression by means of those of \( B_x \): In fact, the odd values of \( B_x \) vanishing (except \( B_1 \)), we have
\[
1 + {}^{2}B_1 \cdot \frac{t}{1} + {}^{2}B_2 \cdot \frac{t^2}{1.2} + \text{&c.} = \left\{ \frac{t}{\varepsilon^t - 1} \right\}^2
\]
\[
= \left( B_1 \cdot t + \left( 1 + \frac{t^2}{1.2} B_2 + \text{&c.} \right) \right)^2
\]
and, comparing the coefficients of \( t^{2x+1} \) in the two members of this equation, we obtain
\[
{}^{2}B_{2x+1} = -(2x+1) \cdot B_{2x}.
\]
Hence this remarkable theorem,
\[
-1.2 \left\{ \frac{\Delta}{3} \cdot {}^{1}S\left(\frac{1}{2}\right) - \frac{\Delta^2}{4} \cdot {}^{1}S\left(\frac{1}{3}\right) + \text{&c.} \right\} o^{2x+1} = -(2x+1) \cdot B_{2x}; \ldots \ldots (10)
\]
which may also be regarded as affording another general expression for the numbers of BERNOULLI.
LAPLACE has shown that the developement of the function \( \frac{t}{\varepsilon^t - 1} \) may be derived from that of \( \frac{1}{\varepsilon^t + 1} \), and that, if the coefficient of \( t^{x-1} \) in the developement of the latter be represented by \( a_{x-1} \), that of \( t^x \) in the former will be \( -\frac{a_{x-1}}{2^x - 1} \). Now, by the application of our equation (2), we find that
\[
\left\{ \frac{1}{\varepsilon^t + 1} \right\}^n = \left( \frac{1}{2} \right)^n + \frac{t}{1} \left\{ \frac{1}{2+\Delta} \right\}^n o + \frac{t^2}{1.2} \left\{ \frac{1}{2+\Delta} \right\}^n o^2 + \text{&c.}; \ldots \ldots (11)
\]
Making then \( n = 1 \), we find for the value of \( a_{x-1} \)
\[
a_{x-1} = \frac{-\left\{ 2^{x-2} \Delta - 2^{x-3} \Delta^2 + \ldots \pm \Delta^{x-1} \right\} o^{x-1}}{1.2 \ldots \ldots (x-1).2^x}
\]
and consequently the coefficient of \( t^x \) in \( \frac{t}{\varepsilon^t - 1} \) will be
\[
\frac{\left\{ 2^{x-2} \Delta - 2^{x-3} \Delta^2 + \ldots \pm \Delta^{x-1} \right\} o^{x-1}}{1.2 \ldots \ldots (x-1) \cdot 2^x \cdot (2^x-1)}; \ldots \ldots \ldots (12)
\]
Dr. BRINKLEY has arrived at the same result.
The computation of the functions \( {}^{n-1}S \left\{ \frac{1}{n} \right\}, {}^{n-1}S \left\{ \frac{1}{n+1} \right\} \), &c. is attended with very little difficulty; for, if we multiply together successively the terms \( 1+z, 2+z, 3+z, \text{&c.} \) and call
the coefficient of $z^{n-1}$ in the product $^{n-1}S(x)$, we shall have
$$\frac{^{n-1}S(x)}{1 \cdot 2 \cdot 3 \cdot \ldots \cdot x} = {}^{n-1} S\left\{\frac{1}{x}\right\}$$
and, as every value of \( {}^{n-1} S\left\{\frac{1}{x}\right\} \), from $x=n$ up to $x=\infty$, is wanted, the principal part of the work consists in calculating the first $n$ terms of the successive products, which (being derived from one another), unless $n$ is considerable, is attended with very little trouble.
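For definiteness, the scheme may be put into code as follows (my own sketch; it assumes the reading that the coefficient of \( z^{n-1} \) in the successive product, divided by \( 1.2.\ldots.x \), gives \( {}^{n-1}S\left\{\frac{1}{x}\right\} \)), checked against the nested sums defined above for \( n - 1 = 2 \):

```python
# Check (illustrative): the successive-product rule against the nested sums S^2{1/x}.
from fractions import Fraction
from itertools import combinations
from math import factorial, prod

def S_nested(order, x):
    # S^order{1/x}: sum of 1/(i1 i2 ... i_order) over 1 <= i1 < i2 < ... < i_order <= x
    return sum(Fraction(1, prod(c)) for c in combinations(range(1, x + 1), order))

def S_by_products(order, x):
    # multiply (1+z), (2+z), ..., (x+z) successively; keep coefficients, lowest power first
    poly = [1]
    for k in range(1, x + 1):
        poly = [k * a + b for a, b in zip(poly + [0], [0] + poly)]
    return Fraction(poly[order], factorial(x))

for x in range(2, 9):
    assert S_by_products(2, x) == S_nested(2, x)
print("product scheme agrees with the nested sums S^2{1/x} for x = 2..8")
```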
The remarkable form of our equation \((2)\) enables us to exhibit a variety of properties of the functions comprehended under the expression $\Delta^n o^x$, some of the principal of which I shall now proceed to notice.
Suppose $f(\varepsilon^t) = a_0 + a_1 t + a_2 t^2 + \&c.$
Then, as we have shown,
$$a_x = \frac{f(1 + \Delta) o^x}{1 \cdot 2 \cdot \ldots \cdot x}; \quad \cdots \cdots \cdots \cdots (13)$$
from which we find
$$f(1 + \Delta) o^x = 1 \cdot 2 \cdot \ldots \cdot x \cdot a_x \cdots \cdots \cdots \cdots \quad (14)$$
If then the developement of $f(\varepsilon^t)$ be given, we are enabled to assign the value of $f(1 + \Delta) o^x$ in functions of $x$, and the converse. It is scarcely necessary, however, to remark, that the extent of these equations is not limited to cases in which the actual developement of $f(1 + \Delta)$ in powers of $\Delta$ is practicable, or in which the form of $f$ is known, or even dependent on analytical relations.
Let us suppose a function $F(t)$, and any two others $f(t)$ and $f'(t)$, so related that
$$F(t) = f(t) \cdot f'(t)$$
Let also
$$F(\varepsilon^t) = A_0 + A_1 t + A_2 t^2 + \&c.$$
a similar notation being used for \( f(\varepsilon^t) \) and \( f'(\varepsilon^t) \), changing only \( A \) into \( a \) and \( a' \). It is evident then, that
\[
A_x = a_0 \cdot a'_x + a_1 \cdot a'_{x-1} + \ldots \ldots a_x \cdot a'_0.
\]
In this equation, substituting for \( A_x \&c. \), their values drawn from (13), we find
\[
F(1+\Delta) o^x = f(1+\Delta) o^0 \cdot f'(1+\Delta) o^x + \frac{x}{1} \cdot f(1+\Delta) o \cdot f'(1+\Delta) o^{x-1} \\
+ \frac{x(x-1)}{1 \cdot 2} \cdot f(1+\Delta) o^2 \cdot f'(1+\Delta) o^{x-2} \\
+ \&c.
\]
This equation may be abbreviated, upon the principles we have all along adopted, by a very simple and convenient artifice of notation, viz. by applying an accent to one of the \( \Delta \) and also to the corresponding \( o \); these accents not altering the meaning of the symbols, but solely pointing out those which are to be applied to one another. The second member of this equation then becomes
\[
f(1+\Delta) o^0 \cdot f'(1+\Delta') o'^x + \frac{x}{1} \cdot f(1+\Delta) o \cdot f'(1+\Delta') o'^{x-1} + \&c.
\]
in which the symbols of operation may now, without confusion, be separated from those of quantity, when it will take the form
\[
f(1+\Delta) \cdot f'(1+\Delta') \left\{ o'^x + \frac{x}{1} \cdot o \cdot o'^{x-1} + \&c. \right\}
\]
And our equation becomes
\[
F(1+\Delta) o^x = f(1+\Delta) \cdot f'(1+\Delta') \left\{ o + o' \right\}^x; \ldots \ldots \ldots \ldots (15)
\]
We must here notice, that the second member of this equation is precisely what the first would become, if, instead of \( F(1+\Delta) \) we had written \( f(1+\Delta) \cdot f'(1+\Delta) \), its equivalent, and instead of \( o \) the symbolic expression \( o + o' \) which is equal to it in quantity, and then applied the former \( \Delta \) to the former \( o \), and the latter to the latter, by the method of accentuation.
above explained. Pursuing this idea, let us suppose $F(t)$ to be decomposable into any number of factors $f(t), f'(t), f''(t), \&c.$, and by executing the same mechanical process on the expression $F(1 + \Delta)o^x$, we resolve it into
$$f(1 + \Delta)f'(1 + \Delta').\&c.\{o + o' + o'' + \&c.\}^x.$$
A moment’s attention to the method by which (15) was originally derived, will convince us that (attending to the proper application of the symbols) we are at liberty to develope the expression $\{o + o' + o'' + \&c.\}^x$, and thus we have the equation
$$F(1 + \Delta)o^x = f(1 + \Delta)f'(1 + \Delta').\&c.\{o + o' + \&c.\}^x. \cdots \cdots \cdots \cdots (16)$$
Should any one of the functions $f(1 + \Delta), \&c.$, be of the form $(1 + \Delta)^k$, any term multiplied by $o^i$ in the developement of $\{o + o' + \&c.\}^x$ will acquire the coefficient $(1 + \Delta)^k o^i$, which, being, by (14), the coefficient of $t^i$ in the developement of $(1 + \varepsilon^t - 1)^k$, or $\varepsilon^{kt}$, multiplied into $1.2.3.\ldots.i$, is evidently equal to $k^i$. Now it is the same thing whether we write $k^i$ for $(1 + \Delta)^k o^i$ after the developement, or at once strike out $(1 + \Delta)^k$, and for $o$ write $k$ previously to it. Hence we conclude that
$$(1 + \Delta)^k F(1 + \Delta)o^x = f(1 + \Delta)f'(1 + \Delta').\&c.\{k + o + o' + \&c.\}^x; \cdots \cdots \cdots \cdots (17)$$
where, as before, $F(t) = f(t).f'(t).\&c.$.
The expression $f(1 + \Delta)o^x$ is susceptible of a somewhat varied form, deducible from the identical equation
$$f(\varepsilon^t) = f\left\{\left(\varepsilon^{\frac{t}{n}}\right)^n\right\}$$
The coefficient of $t^x$ in the second member of this is equal to that of $t^x$ in $f\left\{\left(\varepsilon^t\right)^n\right\}$ multiplied by $\frac{1}{n^x}$, that is, by (13), to
\[ \frac{f \left\{ (1 + \Delta)^n \right\} o^x}{n^x \cdot 1.2.\ldots.x}; \]
and thus we obtain
\[ f(1 + \Delta) o^x = \frac{f \left\{ (1 + \Delta)^n \right\} o^x}{n^x}. \ldots \ldots \ldots \ldots (18) \]
From this equation it is easy to derive the two following
\[ o = \left\{ f(1 + \Delta) + f\left(\frac{1}{1+\Delta}\right) \right\} o^{2x-1}; \ldots \ldots \ldots \ldots \ldots (19) \]
\[ o = \left\{ f(1 + \Delta) - f\left(\frac{1}{1+\Delta}\right) \right\} o^{2x}; \ldots \ldots \ldots \ldots \ldots (20) \]
Let \( f(\varepsilon^t) \) be a rational, integral, finite function of \( t \), and suppose it to contain the powers of \( t \), \( t^p, t^q, t^r, \&c. \); it is evident then that we shall have, by (14)
\[ f(1 + \Delta)o^x = o; \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots (21) \]
in every case except where \( x \) is equal to either of the numbers \( p, q, r, \&c. \). The following forms of \( f \) satisfy this condition
\[ f(t) = (\log. t)^n \]
\[ f(t) = {}^{2}L(t) + {}^{2}L\left\{ \frac{1}{t} \right\} \]
\[ f(t) = ^nL(1+t) + (-1)^n \cdot ^nL\left\{ 1 + \frac{1}{t} \right\} \]
\[ f(t) = ^nC(t) - (-1)^n \cdot ^nC\left\{ \frac{1}{t} \right\} \]
or, lastly, the sums, powers, or products of any of these forms, any how combined.* The excepted values of \( x \) are—for the first of these forms, \( x = n \)—for the second, \( x = 2 \),—and for the third and fourth, \( x = n, n - 2, n - 4, \&c. \). Also from the general theorems delivered by Mr. Spence, we find for the value of \( f(1 + \Delta)o^{n-2z} \) (which comprehends all the excepted cases) in the third and fourth of the above forms respectively \( {}^{2z}L(2) \) and \( {}^{2z+1}C(1) \).
It may not be uninteresting to descend to a few more particular applications of these general theorems. If we suppose
* Logarithmic transcendentals, pages 45, 69.
$f(t) = (\log. t)^n$, $n$ being a positive integer, we have $f(\varepsilon^t) = t^n$ and consequently, by equation (14),
$$\{\log.(1 + \Delta)\}^n o^x = o; \ldots \ldots \ldots \ldots \ldots \ldots (22).$$
in every case but where $x = n$, when it becomes $1.2.\ldots.n$. If $n = 1$, this becomes
$$o = \frac{\Delta o^x}{1} - \frac{\Delta^2 o^x}{2} + \ldots \ldots \pm \frac{\Delta^x o^x}{x} \ldots \ldots \ldots \ldots \ldots \ldots (23).$$
in every case but where $x = 1$
If we take $f(t) = \frac{1}{t}$, or $f(\varepsilon^t) = \varepsilon^{-t}$, we find in the same way
$$1 = \Delta^\times o^\times - \Delta^{x-1} o^\times + \ldots \ldots \pm \Delta o^\times \ldots \ldots \ldots \ldots \ldots \ldots (24).$$
Again, let $f(t) = \frac{2t}{1 + t^2}$, then will $f(\varepsilon^t) = \sec\left(\frac{t}{\sqrt{-1}}\right)$, and as the coefficient of $\theta^{2x}$ in sec. $\theta$ is (as Euler has shown)*
$$\frac{2^{2x+2}}{\pi^{2x+1}} \cdot {}^{2x+1}C(1),$$
that of $t^{2x}$ in sec. $\frac{t}{\sqrt{-1}}$ will be
$$\frac{(-1)^x \cdot 2^{2x+2}}{\pi^{2x+1}} \cdot {}^{2x+1}C(1).$$
which, compared with the expression $\frac{f(1+\Delta)o^{2x}}{1 \cdot 2 \cdot \ldots \cdot 2x}$, gives
$${}^{2x+1}C(1) = (-1)^x \left\{\frac{\pi}{2}\right\}^{2x+1} \cdot \frac{1 + \Delta}{1 \cdot 2 \cdot \ldots \cdot 2x \cdot \left\{1 + (1 + \Delta)^2\right\}} \, o^{2x}; \ldots \ldots \ldots \ldots (25).$$
which seems the most compendious form in which this complicated function is capable of being exhibited in finite terms, as well as the most easy of computation in any insulated case.
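Equation (25) is easily put to a numerical trial. The sketch below (mine; the symbol d stands for \( \Delta \), and the series on the left is merely summed far enough for the comparison) expands \( \frac{1+\Delta}{1+(1+\Delta)^2} \) in powers of \( \Delta \), applies it to \( o^{2x} \), and compares the result with \( 1 - \frac{1}{3^{2x+1}} + \frac{1}{5^{2x+1}} - \&c. \):

```python
# Check (illustrative) of equation (25).
import sympy as sp
from math import comb

d = sp.symbols('d')                  # d stands for the operator Delta

def delta_zero(k, m):
    return sum((-1) ** (k - j) * comb(k, j) * j ** m for j in range(k + 1))

for x in range(0, 4):
    m = 2 * x
    g = sp.series((1 + d) / (1 + (1 + d) ** 2), d, 0, m + 1).removeO()
    applied = sum(g.coeff(d, k) * delta_zero(k, m) for k in range(m + 1))
    rhs = (-1) ** x * (sp.pi / 2) ** (2 * x + 1) * applied / sp.factorial(m)
    series_lhs = sum((-1) ** j / (2 * j + 1) ** (2 * x + 1) for j in range(20000))
    assert abs(float(rhs) - series_lhs) < 1e-3
print("equation (25) agrees with 1 - 1/3^(2x+1) + 1/5^(2x+1) - ... for x = 0..3")
```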
If $f(t) = \frac{1}{\sqrt{-1}} \cdot \frac{t^2-1}{t^2+1}$, we have $f(\varepsilon^t) = \tan\left(\frac{t}{\sqrt{-1}}\right)$, and
$$a_{2x-1} = \frac{1}{1 \cdot 2 \cdot \ldots \cdot (2x-1)} \cdot \frac{2}{1 + (1 + \Delta)^2} o^{2x-1}$$
* Calc. differentialis. ${}^{2x+1} C(1)$ is used to denote the series
$$\frac{1}{1^{2x+1}} - \frac{1}{3^{2x+1}} + \frac{1}{5^{2x+1}} - \ldots \ldots \ldots \ldots \ldots \ldots$$
But the coefficient of \( t^{2x-1} \) in tan. \( \frac{t}{\sqrt{-1}} \) is
\[
(-1)^x \frac{2^{2x}.(2^{2x}-1)}{1.2.\ldots.(2x)} B_{2x-1}
\]
where \( B_{2x-1} \) here denotes the \( x^{th} \) in order of the numbers of Bernoulli. Equating these two values, we find
\[
B_{2x-1} = \frac{(-1)^x.2x}{2^{2x-1}.(2^{2x}-1)} \cdot \frac{1}{1+(1+\Delta)^2} o^{2x-1}; \ldots \ldots (26).
\]
We will now proceed to consider the development of any function of the form
\[
u = f(\varepsilon^t, \varepsilon^{t'}, \varepsilon^{t''}, \&c.)
\]
\( t, t', t'', \&c. \) being any number of independent variables. The coefficient of \( t^x.t'^y.t''^z.\&c. \) being denoted by \( A_{x,y,z,\&c.} \), we have
\[
A_{x,y,z,\&c.} = \frac{d^{x+y+\&c.}\,u}{1.2.\ldots.x \times 1.2.\ldots.y \times \&c. \times dt^x.dt'^y.\&c.}
\]
Now, regarding \( u \) as a function of \( \varepsilon^t \), we have
\[
\frac{d^x u}{dt^x} = f(1+\Delta, \varepsilon^{t'}, \varepsilon^{t''}, \&c.)\, o^x
\]
Again, considering this as a function of \( \varepsilon^{t'} \), we obtain
\[
\frac{d^{x+y} u}{dt^x.dt'^y} = f(1+\Delta, 1+\Delta', \varepsilon^{t''}, \&c.)\, o^x.o'^y.
\]
(the accents over the \( \Delta \), and \( o \), indicating, as before, the application of the symbols)—and so on. Thus we find
\[
\frac{d^{x+y+z+\&c.}\,u}{dt^x.dt'^y.dt''^z.\&c.} = f(1+\Delta, 1+\Delta', \&c.)\, o^x.o'^y.o''^z.\&c.
\]
and of course,
\[
A_{x,y,z,\&c.} = \frac{f(1+\Delta, 1+\Delta', 1+\Delta'', \&c.)\, o^x.o'^y.o''^z.\&c.}{1.2.\ldots.x \times 1.2.\ldots.y \times 1.2.\ldots.z \times \&c.} \ldots \ldots (27.)
\]
Laplace has shown,* that, in any function \( u_{x,y,z,\&c.} \) of \( x, y, z, \&c. \) if \( x \) be made to vary by \( \alpha \), \( y \) by \( \beta \), \&c. simultaneously, the following equation, analogous to (a), will hold good;
* Theorie Analytique des Probabilités, p. 70.
\[ \Delta^n u_{x,y,z,\text{&c.}} = \left\{ \varepsilon^{\alpha \frac{d}{dx} + \beta \frac{d}{dy} + \text{&c.}} - 1 \right\}^n u_{x,y,z,\text{&c.}} \ldots \ldots (d) \]
Hence the function to be developed is
\[ \left\{ \varepsilon^t\, \varepsilon^{t'} \varepsilon^{t''}\, \text{&c.} - 1 \right\}^n \]
\( n \) being a positive or negative integer
In the former case, the coefficient of \( t^x \cdot t'^y \cdot \text{&c.} \) is
\[ \frac{\{(1+\Delta)(1+\Delta')\,\text{&c.} - 1\}^n\, o^x\, o'^y\, \text{&c.}}{1 \cdot 2 \ldots x \times 1 \cdot 2 \ldots y \times \text{&c.}} \]
that is, developing the numerator
\[ \frac{\left\{(1+\Delta)^n (1+\Delta')^n\,\text{&c.} - \frac{n}{1}\,(1+\Delta)^{n-1}(1+\Delta')^{n-1}\,\text{&c.} + \text{&c.}\right\} o^x\, o'^y\, \text{&c.}}{1 \cdot 2 \ldots x \times 1 \cdot 2 \ldots y \times \text{&c.}} \]
Now, \( (1+\Delta)^n o^x = n^x, (1+\Delta')^n o'^y = n^y, \text{&c.} \) and thus the numerator of this expression becomes,
\[ n^{x+y+\text{&c.}} - \frac{n}{1}(n-1)^{x+y+\text{&c.}} + \text{&c.} = \Delta^n o^{x+y+\text{&c.}} \]
and the coefficient of \( t^x \cdot t'^y \cdot \text{&c.} \) therefore becomes
\[ \frac{\Delta^n o^{x+y+\text{&c.}}}{1 \cdot 2 \ldots x \times 1 \cdot 2 \ldots y \times \text{&c.}} = A_{x,y,\text{&c.}} \ldots \ldots \ldots (28). \]
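A small check of (28) (mine; \( s \) stands for \( t' \), and \( n = 2 \) is an arbitrary choice):

```python
# Sketch (not from the paper): equation (28) for two variables and n = 2.
# The coefficient of t^x t'^y in (e^{t+t'} - 1)^n is compared with
# Delta^n o^(x+y) / (1.2...x * 1.2...y); s stands for t'.
import sympy as sp
from math import comb, factorial

t, s = sp.symbols('t s')
n = 2

def delta_zero(k, m):
    return sum((-1) ** (k - j) * comb(k, j) * j ** m for j in range(k + 1))

G = (sp.exp(t + s) - 1) ** n
for x in range(4):
    Gx = sp.diff(G, t, x) if x else G
    for y in range(4):
        Gxy = sp.diff(Gx, s, y) if y else Gx
        coeff = Gxy.subs({t: 0, s: 0}) / (sp.factorial(x) * sp.factorial(y))
        expected = sp.Rational(delta_zero(n, x + y), factorial(x) * factorial(y))
        assert sp.simplify(coeff - expected) == 0
print("equation (28) checked for n = 2 and x, y = 0..3")
```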
In the latter case, where the exponent is negative (\( = -n \))
the function to be developed is
\[ \left\{ t + t' + \text{&c.} \right\}^{-n} \left\{ \frac{t + t' + \text{&c.}}{\varepsilon^t\, \varepsilon^{t'}\, \text{&c.} - 1} \right\}^n \]
the coefficient of \( t^x \cdot t'^y \cdot \text{&c.} \) in the latter part of this expression, is
\[ \left\{ \frac{\log.\left\{ (1+\Delta)(1+\Delta')\,\text{&c.} \right\}}{(1+\Delta)(1+\Delta')\,\text{&c.} - 1} \right\}^n \frac{o^x\, o'^y\, \text{&c.}}{1 \cdot 2 \ldots x \times 1 \cdot 2 \ldots y \times \text{&c.}}; \ldots \ldots (e). \]
Now, let us for an instant suppose the expression
\left\{ \frac{\log. \left\{ (1+\Delta)(1+\Delta')\,\text{&c.} \right\}}{(1+\Delta)(1+\Delta')\,\text{&c.} - 1} \right\}^n
developed in a series of powers of \((1+\Delta)(1+\Delta')\), &c. continued both ways to infinity, (which is evidently possible) and let \(K(1+\Delta)^i\). \((1+\Delta')^i\). &c. be any term of the development. The corresponding term in the above coefficient will be \(K(1+\Delta)^i o^x\). \((1+\Delta')^i o'^y\). &c. —— that is, \(K i^x i^y\). &c., or \(K i^{x+y+\&c.}\). But it is plain that the performance of the same operations on \(\left\{\frac{\log.(1+\Delta)}{\Delta}\right\}^n o^{x+y+\&c.}\) would have led to the same result: and we may therefore conclude that the numerator of \((e)\) is rightly represented by this latter expression, whose value we have already determined (equations 8, and 9). The coefficient therefore of \(t^x t'^y\). &c. in the development of \(\left\{\frac{t+t'+\&c.}{\varepsilon^t \varepsilon^{t'} \&c.-1}\right\}^n\), is
\[A_{x,y,\&c.} = \frac{{}^{n}B_{x+y+\&c.}}{1 \ldots x \times 1 \ldots y \times \&c.}; \ldots \ldots \ldots \ldots (29.)\]
and the same reasoning may be applied to any function of \(\varepsilon^t, \varepsilon^{t'}\), &c. whatever.
Analogous theorems to those we have deduced respecting functions of one variable may easily be deduced from the value of \(A_{x,y,\&c.}\) given in (27). Thus, since
\[f\left\{\varepsilon^{nt}, \varepsilon^{n't'}, \&c.\right\} = f\left\{(\varepsilon^t)^n, (\varepsilon^{t'})^{n'}, \&c.\right\}\]
we ought to have
\[f\left\{(1+\Delta)^n, (1+\Delta')^{n'}, \&c.\right\} o^x o'^y \&c. = n^x n'^y \&c. \times\]
\[f\left\{1+\Delta, 1+\Delta', \&c.\right\} o^x o'^y \&c. \ldots \ldots \ldots (30)\]
which, by assigning particular values to \(n, n'\), &c. affords an infinite number of theorems analogous to (19) and (20).
Similar theorems respecting the product of two or more functions of \(\varepsilon^t, \varepsilon^{t'}\), &c. may be derived. For instance, if
\[ F(t, t') = f(t, t') \times f'(t, t') \]
we shall have
\[ F\{(1 + \Delta), (1 + \Delta')\} o^x o'^y = \]
\[ = f\{(1 + \Delta), (1 + \Delta')\} \times f'\{(1 + \Delta''), (1 + \Delta''')\} \{o + o''\}^x \{o' + o'''\}^y, \]
\[ \ldots \ldots \ldots (31). \]
This, as well as other analogous theorems, flows with such facility from the principles above laid down, that it is unnecessary, as it would lead me beyond the limits I proposed, to enter into any detail respecting them.
Let us now consider the developement of a function of the form \( f \psi^n(t) \), \( f \), and \( \psi \) being functional characteristics of a given form, and \( \psi^n(t) \) denoting the result of \( n - 1 \) successive substitutions of \( \psi(t) \) for \( t \) in the expression of \( \psi(t) \). Let us then suppose, for brevity's sake, \( \psi\{\log.(1+t)\} = \phi(t) \), and equation (3) will give
\[ f \psi(t) = f \phi(\Delta)\, \varepsilon^{o.t}. \ldots \ldots \ldots (f) \]
and for \( f \) writing \( f \psi^{n-1} \)
\[ f \psi^n(t) = f \psi^{n-1}\{\phi(\Delta)\}\, \varepsilon^{o.t} \]
again for \( f \) writing \( f \psi^{n-2} \) and for \( t, \phi(\Delta) \) we get
\[ f \psi^{n-1}\{\phi(\Delta)\} = f \psi^{n-2}\{\phi(\Delta')\}\, \varepsilon^{o'.\phi(\Delta)} \]
and so on to
\[ f \psi\{\phi(\Delta^{(n-2)})\} = f \phi(\Delta^{(n-1)})\, \varepsilon^{o^{(n-1)}.\phi(\Delta^{(n-2)})}. \]
Collecting, now, the whole result, and, for the sake of convenience, inverting the order of the accents, we obtain,
\[ f \psi^n(t) = f \phi(\Delta)\, \varepsilon^{o.\phi(\Delta') + o'.\phi(\Delta'') + \ldots + o^{(n-1)}.t}; \ldots \ldots \ldots \ldots (32) \]
The second member of this equation, actually developed, becomes
\[ S\left\{ \frac{f \phi(\Delta)\, o^\alpha}{1.2 \ldots \alpha} \times \frac{\{\phi(\Delta)\}^{\alpha}\, o^\beta}{1.2 \ldots \beta} \times \ldots \times \frac{\{\phi(\Delta)\}^{\mu}\, o^\nu}{1.2 \ldots \nu} \cdot t^\nu \right\} \quad (g) \]
S denoting that the sum of all the possible values of the expression within the brackets is to be taken; \( \alpha, \beta, \ldots, \nu \), (whose number is \( n \)) varying through all integer values, separately, from 0 to \( \infty \). Now the several factors which compose this expression are respectively, the coefficients of \( t^\alpha, t^\beta, \ldots, t^\nu \) in the developments of \( f\psi(t) \), \( (\psi t)^\alpha, (\psi t)^\beta, \ldots, (\psi t)^\mu \). Let these coefficients be represented by \( H_\alpha, K_\beta, K_\gamma, \ldots, K_\nu \), and we shall find
\[ f\psi^n(t) = S \left\{ H_\alpha \cdot K_\beta \cdot K_\gamma \cdot \ldots \cdot K_\nu \cdot t^\nu \right\}; \ldots \ldots \ldots \quad (33) \]
If for instance, \( \psi(t) = \varepsilon^t = \log.^{-1}(t) \), we have
\[ H_\alpha = \frac{f(1 + \Delta)\, o^\alpha}{1.2 \ldots \alpha}, \quad K_\beta = \frac{\alpha^\beta}{1.2 \ldots \beta}, \text{ &c.} \]
whence we obtain
\[ f\log.^{-n}(t) = S \left\{ \frac{f(1 + \Delta)\, o^\alpha \times \alpha^\beta\, \beta^\gamma \ldots \mu^\nu}{1.2 \ldots \alpha \times 1.2 \ldots \beta \times \ldots \times 1.2 \ldots \nu}\, t^\nu \right\}; \ldots \ldots \quad (34) \]
To take another example, let us suppose the development of \( f\psi^n(t) \) were required, where \( \psi(t) = e^t - 1 \). In this case equation (f) becomes simply
\[ f\psi(t) = f(\Delta)e^{o.t} \]
and the formula in (32) gives
\[ f\psi^n(t) = f(\Delta)\,\varepsilon^{o.\Delta' + o'.\Delta'' + \ldots + o^{(n-1)}.t} \]
In this case also (33) becomes
\[ f\psi^n(t) = S \left\{ \frac{D^\alpha f(o) \cdot \Delta^\alpha o^\beta \cdot \Delta^\beta o^\gamma \ldots \Delta^\mu o^\nu}{1 \ldots \alpha \times 1 \ldots \beta \times \ldots \times 1 \ldots \nu}\, t^\nu \right\}; \ldots \ldots \quad (35) \]
which gives, if \( f(t) = t \),
\[ \psi^n(t) = S \left\{ \frac{\Delta\, o^\beta \cdot \Delta^\beta o^\gamma \ldots \Delta^\mu o^\nu}{1 \ldots \beta \times \ldots \times 1 \ldots \nu}\, t^\nu \right\} \]
Now \( \Delta\, o^\beta = 1 \), and if, for the sake of symmetry we write \( \alpha, \beta, \ldots, \mu \) instead of \( \beta, \gamma, \ldots, \nu \), we shall have
\[
\psi^n(t) = S \left\{ \frac{\Delta^\alpha o^\beta \times \Delta^\beta o^\gamma \times \ldots \times \Delta^\lambda o^\mu}{1 \ldots \alpha \times 1 \ldots \beta \times \ldots \times 1 \ldots \mu}\, t^\mu \right\}; \quad \ldots \ldots \ldots (36)
\]
the number of the indices \( \alpha, \beta, \ldots, \mu \), being \( n - 1 \).
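Equation (36) may be verified directly in a small case. The sketch below (mine) takes \( \psi(t) = \varepsilon^t - 1 \) and \( n = 3 \), composes the series three times with sympy, and compares coefficients with the double sum prescribed by (36):

```python
# Check (illustrative): equation (36) with psi(t) = e^t - 1 and n = 3.
# The coefficient of t^mu in psi(psi(psi(t))) should equal
# sum over alpha, beta of Delta^alpha 0^beta * Delta^beta 0^mu / (alpha! beta! mu!).
import sympy as sp
from math import comb, factorial

t = sp.symbols('t')

def delta_zero(k, x):
    return sum((-1) ** (k - j) * comb(k, j) * j ** x for j in range(k + 1))

psi = sp.exp(t) - 1
M = 6
triple = sp.series(psi.subs(t, psi.subs(t, psi)), t, 0, M + 1).removeO()
for mu in range(1, M + 1):
    rhs = sp.Rational(sum(delta_zero(a, b) * delta_zero(b, mu)
                          // (factorial(a) * factorial(b))
                          for b in range(1, mu + 1) for a in range(1, b + 1)),
                      factorial(mu))
    assert sp.simplify(triple.coeff(t, mu) - rhs) == 0
print("equation (36) checked for n = 3 up to t^%d" % M)
```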
It seems hardly necessary, after what has been said, to notice that the developement of any function, such as
\[
f \left\{ \psi^n(t), \psi'^{n'}(t'), \&c. \right\}
\]
in which \( t, t', \&c. \) denote any number of independent variables, \( \psi, \psi', \&c. \), any functional characteristics, and \( n, n', \&c. \), any indices, may be accomplished by the same means—or, more conveniently, derived from (33) in the same manner as the formula (27) was obtained from our equation (2); and the result included in a brief and simple expression. The cases however are few, where the results afforded appear, if I may so express it, in their natural form, and it would therefore be useless at present to extend our views farther in this direction.
JOHN F. W. HERSCHEL.
Cambridge, Nov. 17, 1815.