On the Comparison of Transcendents, with Certain Applications to the Theory of Definite Integrals
Author(s)
George Boole
Year
1857
Volume
147
Pages
60 pages
Language
en
Journal
Philosophical Transactions of the Royal Society of London
Full Text (OCR)
XXXVI. On the Comparison of Transcendents, with certain applications to the Theory of Definite Integrals. By George Boole, Esq., Professor of Mathematics in the Queen's University. Communicated by Professor W. F. Donkin, F.R.S.
Received March 16,—Read May 7, 1857.
1. The following objects are contemplated in this paper:—
1st. The demonstration of a fundamental theorem for the summation of integrals whose limits are determined by the roots of an algebraic equation.
2ndly. The application of that theorem to the problem of the comparison of algebraic transcendents.
The immediate object of this application will in each case be the finite expression of the value of the sum of a series of integrals, $\Sigma \int X dx$, the differential coefficient, $X$, being an algebraic function, and the values of $x$ at the limits being determined by the roots of an algebraic equation.
3rdly. The application of the same theorem in a new, and, as is conceived, more remarkable line of investigation, to the comparison of functional transcendents.
The terms 'algebraic' and 'functional' are not here used by way of logical division to indicate classes of transcendents wholly distinct, but the term functional transcendent is simply employed to designate an integral $\int X dx$ in which $X$ involves an arbitrary symbol of functionality.
Under this third head of the comparison of functional transcendents will fall the most important special result of the entire investigation. A case will arise in which, without any limitation of the functional symbol, the several integrals included under the form $\Sigma \int X dx$ will close up, if the expression may be allowed, into a single integral taken between the limits $-\infty$ and $\infty$. The result is a very remarkable theorem of definite integration, fruitful in important consequences. In its general form, this theorem is, I believe, entirely new. A particular case of it was discovered by me several years ago, and was published without demonstration in the Cambridge and Dublin Mathematical Journal*, and in Liouville's Journal de Mathématiques†. A memoir by Cauchy on integrals taken between the limits 0 and $\infty$‡, contains also a very limited case of the same theorem. It appears there, however, as an isolated result, quite apart from the doctrine of the comparison of transcendents.
In the concluding sections of this paper I shall apply the results of this part of the investigation to the extension of the theory of multiple definite integrals.
As respects the methods and processes which will be employed in this paper, the only
* Vol. iv. p. 14. † Tom. xiii. ‡ Exercices, vol. i. p. 54.
peculiarity to which it seems necessary to direct attention, is the introduction of a symbol, differing in interpretation only by the addition of one element, from that which Cauchy has employed in his 'Calculus of Residues.' Of the nature of this connexion I was not aware until my researches were nearly completed; and I should then have abandoned my own symbol and adopted the other, already associated by the labours of its distinguished inventor with so many important discoveries in the higher departments of the integral calculus, if it had not appeared to me that the several elements combined in the interpretation of the former symbol were so allied that no one of them could without a manifest defect of completeness be omitted. It seemed to me also that many of Cauchy's own applications of his symbol would gain in simplicity and in generality of expression, by the adoption of the more enlarged interpretation.
2. Beside the above special objects, in the attainment of which whatever claim to originality this paper may possess will consist, I have proposed to myself, as a general object, the simplification of a branch of analysis which possesses some practical and much speculative importance. To this object the introduction of the symbol above referred to, contributes in a very important degree. The necessity of simplification will, I think, be admitted by all who are acquainted with the literature of the subject. As presented in the writings of Abel and of those who immediately followed in his steps, the doctrine of the comparison of transcendentals is repulsive from the complexity of the formulæ in which its general conclusions are embodied. The particular result known as Abel's theorem, the only one of its class which has been adopted into English works of education, will at once suggest itself in confirmation of this remark. Perhaps this complexity will not be thought surprising if we consider the nature of the problems involved,—the discovery of finite relations among integrals which derive their very name from the circumstance that individually their finite expression transcends the powers of analysis. On the other hand, and this is a juster ground of inference, the theory upon which such applications rest is far from being difficult or recondite, and, considered a priori, should be capable of a simpler analytical development than it has yet received. I hope that I shall be able to show that this anticipation is confirmed by the results of the present inquiry. Simplicity, though it is not to be gained at the expense of that which is the chief object of scientific methods, the discovery of truth, is nevertheless a highly valuable quality. And so far from being inconsistent with generality in the processes and the results of analysis, it is sometimes an indication of the measure of our approach to completeness and unity. I think that this is more especially the case where, as through the labours of Abel in the present instance, the subject matter of investigation has been clearly defined, and the entire series of methods and results foreshown to be the evolution of some one general principle or idea.
3. It will be proper, before entering upon the special investigation, to give some general account of the doctrine of the comparison of transcendentals. In doing this, I cannot but refer to the able report of Mr. Leslie Ellis on the Progress of Analysis, published in the Report of the British Association for 1846. It contains a most valuable
summary and criticism of the chief contributions which have been made, both by English and by foreign mathematicians, to the theory of transcendental integrals in all its branches. I had completed my investigations on that particular branch of the theory which relates to the comparison of transcendentals before I had the opportunity of studying Mr. Ellis's report; but I know of but a single point, and that of no real importance, in which I should be disposed to dissent from the general views which he has expressed on this subject.
The fundamental idea upon which the doctrine of the comparison of transcendentals rests may be thus stated. We know that where we cannot express in finite terms the values of the roots of an algebraic equation, we can nevertheless finitely determine the values of various symmetrical functions of the roots, e.g. their sum, the sum of their squares, the sum of their binary products, &c. Those values will be expressed in terms of the coefficients of the equation, or, speaking more generally, of the independent constants involved in any way in the equation. If we represent the equation in the form
\[ E(x, a_1, a_2, \ldots a_r) = 0, \quad \ldots \ldots \quad (1.) \]
\(E\) being a functional symbol, and \(a_1, a_2, \ldots a_r\) the independent constants, and if we represent the roots of the equation by \(x_1, x_2, \ldots x_n\), then whatever the interpretation of the functional symbol \(\varphi\) may be, the expression
\[ \varphi(x_1) + \varphi(x_2) + \cdots + \varphi(x_n) \]
will denote a symmetrical function of the roots, whose value, in terms of \(a_1, a_2, \ldots a_r\) can always be determined when \(\varphi(x_i)\) is a rational function of \(x_i\), and can very often be determined when \(\varphi(x_i)\) is an irrational function of \(x_i\). Thus the value of the sum \(\Sigma \varphi(x)\), where the different values of \(x\) are the roots of an algebraic equation, can often be found when the values of the separate terms of which that sum is composed cannot be found.
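By way of a modern computational illustration (a sketch only, assuming the NumPy library is available; the coefficients are arbitrary sample values of my own): the sum of the squares of the roots of \(x^3 + ax^2 + bx + c = 0\) is \(a^2 - 2b\), and is therefore known from the coefficients alone, whether or not the individual roots admit of finite expression.

```python
# Sketch: a symmetrical function of the roots is known from the coefficients alone.
import numpy as np

a, b, c = 2.0, -3.0, 5.0              # arbitrary sample coefficients
roots = np.roots([1.0, a, b, c])      # the three roots x_1, x_2, x_3
print(np.sum(roots**2))               # sum of squares computed from the roots (imaginary part ~ 0)
print(a**2 - 2*b)                     # the same value obtained from the coefficients alone
```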
Now this suggests to us the question whether it is not possible in certain cases to find the value of the integral-sum \(\Sigma \int X dx\), where we cannot find the value of the separate integrals involved in that sum. What we usually mean by finding the value of the single integral \(\int X dx\), is the expressing of the value of that integral in terms of its superior limit, an arbitrary constant being annexed, and consists, in fact, in determining the function \(\varphi(x)\) in the equation
\[ \int X dx = \varphi(x) + C. \]
The suggested problem is the finding of the value of the integral-sum
\[ \int^{x_1} X dx + \int^{x_2} X dx + \cdots + \int^{x_n} X dx, \quad \ldots \ldots \quad (4.) \]
where \(x_1, x_2, \ldots x_n\) are the roots of an algebraic equation, which we will suppose to be (1.).
Representing (4.) by \(\Sigma \int X dx\), the solution of the problem will, according to the nature of the analysis employed, assume one of the two following equivalent forms, viz. either the form
\[ \Sigma \int X dx = \varphi(a_1, a_2, \ldots a_r) + C, \quad \ldots \ldots \quad (5.) \]
\(a_1, a_2, \ldots a_r\) being the independent constants in the equation (1.) by which the limits are determined, or the form
\[
\Sigma \int X dx = \psi(x_1, x_2, \ldots x_n) + C, \quad \ldots \ldots \quad (6.)
\]
\(\psi(x_1, x_2, \ldots x_n)\) being a function, and manifestly a symmetrical function, of the limits \(x_1, x_2, \ldots x_n\). Either of these forms may be converted into the other by means of (1.), but the first is the one to which we shall give the preference.
We may remark, that if in the equation (1.) \(a_1, a_2, \ldots a_r\) vary, the values of \(x\), determined by the solution of that equation, will vary also. For each set of values of \(a_1, a_2, \ldots a_r\), there will exist a simultaneous set of values of \(x\). We may in this way consider the variables \(x\) in the several integrals under the sign \(\Sigma\) as always, in the course of their transition from the lower to the upper limits of integration, determined by the roots of the equation
\[E(x, a_1, a_2, \ldots a_r) = 0.\]
According to this more general view, \(a_1, a_2 \ldots a_r\) become a set of variables with which \(x\) is connected by the above equation, but the variation of \(x\) in each integral represents only the variation of one root of the equation. And the determination of the values of \(x\) at the upper or at the lower limits of integration by the solution of that equation, particular values being assigned to \(a_1, a_2, \ldots a_r\), is only a special case of the determination of the simultaneous values of the variable \(x\).
4. Thus the problem with which we are concerned may be more briefly expressed in this form. Required the value of the expression \(\Sigma \int X dx\), the simultaneous values of \(x\) in the several integrals being determined by an equation of the form
\[E(x, a_1, a_2, \ldots a_r) = 0, \ldots \ldots \ldots \ldots \ldots (7.)\]
in which \(a_1, a_2, \ldots a_r\) are variable quantities, by the assigning of particular values to which in the solution of the equation, the particular values of \(x\) at the limits of integration are determined.
The solution of the above problem is to be effected by giving to the expression \(\Sigma \int X dx\) the equivalent form \(\int \Sigma X dx\), transforming \(\Sigma X dx\) into a complete differential with respect to the variables \(a_1, a_2, \ldots a_r\), and then effecting the integration. For this reason I shall designate (7.), or, as it may for convenience be written, \(E = 0\), as the ‘transforming equation,’ except when it is employed to determine the limits, in which case the designation of ‘equation of the limits’ will be preferable.
For the solution of the problem, as thus stated, it is usually necessary that the form of the function \(E\) in the transforming equation, and the form of the function \(X\) under the sign of integration, should have a certain connexion. The connexion implied is the following. The transforming equation must in general be such, that it may be possible by means of it to reduce the differential expression \(X\) under the sign of integration to a form \(F(x, a_1, a_2, \ldots a_r)\), which, considered as an explicit function of \(x\) and of \(a_1, a_2, \ldots a_r\), shall be rational with respect to \(x\). This is not a necessity à priori. It is a necessity founded in the limitations of the powers of analysis.
5. To illustrate these remarks, let us take as an example the integral
\[ \int \frac{dx}{\sqrt{1+x^4}}, \]
which is known to express the length of the arc of the lemniscate. The following would be a legitimate example of the kind of problem which it is proposed to investigate, viz. Required the value of the expression
\[ \Sigma \int \frac{dx}{\sqrt{1+x^4}}, \]
the simultaneous values of \( x \) in the successive integrals being given by the roots of the equation
\[ 1+x^4=(a+x+x^2)^2, \quad \ldots \ldots \ldots \ldots \quad (8.) \]
reducible to the form
\[ x^3+(a+\frac{1}{2})x^2+ax+\frac{a^2-1}{2}=0. \quad \ldots \ldots \ldots \ldots \quad (9.) \]
We shall represent these roots by \( x_1, x_2, x_3 \).
The solution of the problem would then assume either the form
\[ \Sigma \int \frac{dx}{\sqrt{1+x^4}}=\varphi(a)+C, \quad \ldots \ldots \ldots \ldots \quad (10.) \]
\( C \) being the constant of integration and \( \varphi(a) \) the function to be determined, or the form
\[ \Sigma \int \frac{dx}{\sqrt{1+x^4}}=\psi(x_1, x_2, x_3)+C, \quad \ldots \ldots \ldots \ldots \quad (11.) \]
wherein \( \psi \) is to be determined.
It will be observed that the transforming equation (8.) is such as to enable us, in accordance with the requirement of art. 4, to reduce the function under the sign of integration to a form in which, considered as an explicit function of \( x \) and \( a \), it is rational with respect to \( x \). For it gives
\[ \int \frac{dx}{\sqrt{1+x^4}}=\int \frac{dx}{a+x+x^2}, \quad \ldots \ldots \ldots \ldots \quad (12.) \]
the second member of which fulfils the condition in question. It is to be remarked, however, that \( a \) and \( x \) are still connected by the equation (8.) or (9.), and that we are not permitted to integrate as if \( a \) were constant.
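A short symbolic check of this reduction (a sketch assuming the SymPy library) confirms that rationalising (8.) yields precisely the cubic (9.), so that on the roots of (9.) the irrational integrand coincides with the rational function of (12.).

```python
# Sketch: equation (8.), freed from the radical, is exactly twice equation (9.).
import sympy as sp

x, a = sp.symbols('x a')
diff = sp.expand((a + x + x**2)**2 - (1 + x**4))                     # (8.) rationalised
cubic = x**3 + (a + sp.Rational(1, 2))*x**2 + a*x + (a**2 - 1)/2     # equation (9.)
print(sp.simplify(diff - 2*cubic))                                   # 0
```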
In the above problem, only one arbitrary element, \( a \), presents itself in the transforming equation. There might, however, have been two or any greater number of arbitrary elements. Thus the equation might have been
\[ 1+x^4=(a+bx+x^2)^2, \]
or
\[ x^3+\frac{b^2+2a}{2b}x^2+ax+\frac{a^2-1}{2b}=0, \quad \ldots \ldots \ldots \ldots \quad (13.) \]
and the solution of the problem would then have assumed one of the two following forms, viz. either the form
\[ \Sigma \int \frac{dx}{\sqrt{1+x^4}}=\varphi(a, b)+C, \quad \ldots \ldots \ldots \ldots \quad (14.) \]
or the form
$$\sum \int \frac{dx}{\sqrt{1 + x^4}} = \psi(x_1, x_2, x_3) + C. \quad \ldots \ldots \quad (15.)$$
From these solutions the corresponding solutions of the previous and less general problem would be obtained, in the one case by making $b=1$, in the other by imposing upon the limits $x_1, x_2, x_3$ conditions thereto equivalent.
6. And not only the solution, but the original statement of a problem may be exhibited in the two forms above described. Instead of supposing the limits determined by an equation involving arbitrary quantities in its coefficients, we may suppose them directly connected by symmetrical equations, i.e. we may suppose those relations explicitly given which are only implied in the equation determining the limits, and can only thence be deduced by eliminating the arbitrary elements. Thus (9.) furnishes us with the three following equations:
$$x_1 + x_2 + x_3 = -a - \frac{1}{2},$$
$$x_1 x_2 + x_2 x_3 + x_3 x_1 = a,$$
$$x_1 x_2 x_3 = \frac{1-a^2}{2}.$$
From which, eliminating the arbitrary quantity $a$, we have
$$x_1 + x_2 + x_3 = -(x_1 x_2 + x_2 x_3 + x_3 x_1) - \frac{1}{2},$$
$$2x_1 x_2 x_3 = 1 - (x_1 x_2 + x_2 x_3 + x_3 x_1)^2. \quad \ldots \ldots \quad (16.)$$
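A numerical spot-check (an illustrative sketch assuming NumPy, with an arbitrary value of \(a\)) shows that the roots of (9.) do satisfy the two symmetrical relations (16.).

```python
# Sketch: the roots of (9.) satisfy the relations (16.) obtained by eliminating a.
import numpy as np

a = 0.7
x1, x2, x3 = np.roots([1.0, a + 0.5, a, (a**2 - 1)/2])
e2 = x1*x2 + x2*x3 + x3*x1
print((x1 + x2 + x3) + e2 + 0.5)        # ~ 0: first relation of (16.)
print(2*x1*x2*x3 - (1 - e2**2))         # ~ 0: second relation of (16.)
```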
The problem first considered now assumes the following form. Required the value of the integral expression
$$\sum \int \frac{dx}{\sqrt{1 + x^4}},$$
when the superior limits $x_1, x_2, x_3$ are connected by the explicit relations (16.).
The second problem, similarly transformed, would, as there are two arbitrary elements to be eliminated, lead to a single equation between $x_1, x_2, x_3$, in place of the two equations (16.).
7. We may observe, from the above examples, that when the number of integrals to be added is three, the existence of two arbitrary elements in the equation of the limits involves the existence of one symmetrical equation among the limits, and the existence of one arbitrary constant in the equation of the limits involves the existence of two symmetrical equations among the limits themselves. And thus generally if there be $n$ integrals to be added, the existence of $r$ arbitrary elements in the equation of the limits will involve the existence of $n-r$ symmetrical equations among the limits. The converse of this proposition is obviously true also. If any number $r$ of symmetrical equations among the limits $x_1, x_2, \ldots x_n$ are given, and if we regard $x_1, x_2, \ldots x_n$ as roots of the equation of the $n$th degree,
$$x^n + p_1 x^{n-1} + p_2 x^{n-2} \ldots + p_n = 0,$$
the symmetrical conditions referred to, will, by the theory of equations, establish among the coefficients $p_1, p_2, \ldots p_n$, a system of relations by means of which we can determine $r$ of
those coefficients as functions of the others deemed arbitrary,—or choosing in some other way \( n-r \) arbitrary elements, express all the coefficients by means of those elements.
It is further seen that in the one form of the problem the greater the number of arbitrary quantities in the equation determining the limits,—in the other form the smaller the number of symmetrical equations connecting the limits with each other,—the more general is the statement of the problem itself.
8. The Norwegian and German mathematicians, by whom this branch of analysis has been chiefly cultivated, have almost universally followed Abel in his mode of stating the general problem, i.e. they have regarded the limits as roots of an equation involving a greater or less number of arbitrary elements in its coefficients. Mr. Fox Talbot, in his interesting papers "On the Comparison of Transcendents," published in the Philosophical Transactions for the years 1836–37, has in the earlier examples of his method expressed by symmetrical equations among the limits the conditions to which the latter are subject: in his later examples he adopts the more succinct notation of Abel. Indeed, this mode of statement, as it replaces a system of equations by a single equation from which such systems may be considered as derived, is far better suited to general investigations, and will be adopted in this paper.
9. The complete solution of the problem of the comparison of transcendentals, as above explained, involves two distinct steps,—1st, regarding the equation
\[ E = 0 \]
of (7.), art. 4, as an equation expressing the dependence of the variable \( x \) upon the variables \( a_1, a_2, \ldots a_r \), we must seek to convert the differential expression \( \Sigma X dx \) into a complete differential relative to \( a_1, a_2, \ldots a_r \) as independent variables, and which will therefore virtually be in the form
\[ \Sigma X dx = A_1 da_1 + A_2 da_2 + \ldots + A_r da_r, \]
each of the differential coefficients \( A_1, A_2, \ldots A_r \) being a function of \( a_1, a_2, \ldots a_r \); 2ndly, we must integrate this expression.
The introduction of the symbol of operation adverted to in art. 1 will enable us to dispense with the explicit determination of the coefficients \( A_1, A_2, \ldots A_r \), and to reduce the corresponding differential expression to one involving only a single variable. I shall now proceed to define the symbol in question, and to investigate its chief properties.
**Definition and Properties of the Symbol \( \Theta \).**
10. It is an evident consequence of Taylor's theorem that we can develope any function of \( x, f(x) \), in ascending powers of \( x-a \), provided that neither \( f(x) \) nor any one of its differential coefficients becomes infinite when \( x=a \). To effect this development, we have only to assume \( x-a=z \); then \( x=a+z \), whence
\[
\begin{align*}
f(x) &= f(a+z) \\
&= f(a) + f'(a)z + f''(a) \frac{z^2}{1.2} + \&c. \\
&= f(a) + f'(a)(x-a) + f''(a) \frac{(x-a)^2}{1.2} + \&c.
\end{align*}
\]
Now \( f(x) \) being as above, let \( F(x) = \frac{f(x)}{(x-a)^m} \) where \( m \) is an integer. Then
\[
F(x) = \frac{f(a)}{(x-a)^m} + \frac{f'(a)}{(x-a)^{m-1}} + \cdots + \frac{f^{(m-1)}(a)}{1.2 \ldots (m-1)} \cdot \frac{1}{x-a} + \frac{f^{(m)}(a)}{1.2 \ldots m} + \frac{f^{(m+1)}(a)}{1.2 \ldots (m+1)} (x-a) + \&c.
\]
Here we obtain, as before, a development of \( F(x) \) in ascending powers of \( x-a \), but the development begins with negative powers of that quantity. To this species of development we shall have frequent occasion to refer, and no doubt can, after this explanation, arise as to what is meant when we speak of the development of any function \( F(x) \) in ascending powers of \( x-a \).
These things being premised, let the symbol \( \Theta \) be thus defined, viz. If \( \varphi(x)f(x) \) be any function of \( x \) composed of two factors \( \varphi(x) \) and \( f(x) \) whereof \( \varphi(x) \) is rational, let
\[
\Theta[\varphi(x)]f(x)
\]
denote the result obtained by successively developing the function in ascending powers of each distinct simple factor \( x-a \) in the denominator of \( \varphi(x) \), taking in each development the coefficient of \( \frac{1}{x-a} \), adding together the coefficients thus obtained from the several developments, and subtracting from the result the coefficient of \( \frac{1}{x} \) in the development of the same function \( \varphi(x)f(x) \) in descending powers of \( x \).
It is seen from the above that the interpretation of \( \Theta \) is relative. It directs us to obtain certain developments, but the nature of these developments, if we except the last of them, depends upon the nature of the function within the brackets \([ ]\). Thus while in the expression \( \Theta[\varphi(x)]f(x) \) the operation of the symbol \( \Theta \) extends over the entire function \( \varphi(x)f(x) \), the interpretation of \( \Theta \), by which the nature of that operation is defined, is derived solely from the factor \( \varphi(x) \).
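The operation may be modelled computationally. The following is a rough sketch only (it assumes the SymPy library, and the function name theta is of course my own), summing the coefficients of \( \frac{1}{x-h} \) at the poles \(h\) of the rational factor and subtracting the coefficient of \( \frac{1}{x} \) in the descending development.

```python
import sympy as sp

def theta(phi, f, x):
    """A rough model of Theta[phi(x)] f(x): ascending coefficients minus the descending one."""
    t = sp.Dummy('t')
    expr = sp.together(phi * f)
    poles = sp.roots(sp.denom(sp.together(phi)), x)             # distinct simple factors of phi's denominator
    ascending = sum(sp.residue(expr, x, h) for h in poles)      # coefficients of 1/(x - h)
    descending = sp.residue(expr.subs(x, 1/t) / t**2, t, 0)     # coefficient of 1/x in descending powers
    return sp.simplify(ascending - descending)

x = sp.symbols('x')
# Theta[x/(x-2)] applied to 1/(x+1): the ascending part gives 2/3, the descending part gives 1.
print(theta(x/(x - 2), 1/(x + 1), x))                           # -1/3
```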
Thus to take an example of some generality, let it be required to deduce an expression for
\[
\Theta\left[ \frac{x^2}{(x-a)(x-b)^2} \right] f(x),
\]
where \( f(x) \) denotes some function of \( x \) which does not become infinite when \( x=a \) or \( b \).
The distinct simple factors in the denominator of the function within the brackets \([ ]\) are \( x-a \) and \( x-b \). If we make \( x-a=z \), or \( x=a+z \), we have to develope
\[
\frac{(a+z)^2}{z(a-b+z)^2} f(a+z)
\]
in ascending powers of \( z \). The coefficient of \( \frac{1}{z} \) in that expansion is
\[
\frac{a^2f(a)}{(a-b)^2}. \quad \ldots \ldots \quad (4.)
\]
Again, making \( x-b=z \) and \( x=b+z \), we have to develope the function
\[
\frac{(b+z)^2}{z^2(b-a+z)} f(b+z)
\]
in ascending powers of \( z \). The development is
\[
\frac{1}{z^2} \left[ \frac{b^2 f(b)}{b-a} + \frac{d}{db} \frac{b^2 f(b)}{b-a}\, z + \frac{1}{1.2} \frac{d^2}{db^2} \frac{b^2 f(b)}{b-a}\, z^2 + \&c. \right],
\]
in which the coefficient of \( \frac{1}{z} \) is
\[
\frac{d}{db} \frac{b^2 f(b)}{b-a}. \quad \ldots \ldots \quad (5.)
\]
Lastly, the coefficient of \( \frac{1}{x} \) in the development of the function
\[
\frac{x^2 f(x)}{(x-a)(x-b)^2}
\]
in descending powers of \( x \) being represented according to a familiar notation by
\[
C_{\frac{1}{x}} \frac{x^2 f(x)}{(x-a)(x-b)^2}, \quad \ldots \ldots \quad (6.)
\]
we have, on adding (4.) and (5.), and subtracting (6.) from the sum,
\[
\Theta \left[ \frac{x^2}{(x-a)(x-b)^2} \right] f(x) = \frac{a^2 f(a)}{(a-b)^2} + \frac{d}{db} \frac{b^2 f(b)}{b-a} - C_{\frac{1}{x}} \frac{x^2 f(x)}{(x-a)(x-b)^2}.
\]
As a particular illustration, let \( f(x) = \log \left( c + \frac{1}{x} \right) \), and let us seek the value of the last term in the above expression. Now
\[
\frac{x^2}{(x-a)(x-b)^2} = \frac{1}{x} + \frac{a+2b}{x^2} + \&c.
\]
on developing in descending powers of \( x \), and
\[
\log \left( c + \frac{1}{x} \right) = \log c + \frac{1}{cx} - \frac{1}{2c^2 x^2} + \&c.
\]
Multiplying these together, the coefficient of \( \frac{1}{x} \) in the result is \( \log c \). Whence
\[
\Theta \left[ \frac{x^2}{(x-a)(x-b)^2} \right] \log \left( c + \frac{1}{x} \right) = \frac{a^2 \log \left( c + \frac{1}{a} \right)}{(a-b)^2} + \frac{d}{db} \frac{b^2 \log \left( c + \frac{1}{b} \right)}{b-a} - \log c,
\]
in which it only remains to perform the differentiation in the second term.
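A numerical confirmation of this result (a sketch assuming SymPy; the values \(a=2\), \(b=3\), \(c=5\) are arbitrary sample values of my own) compares the developments prescribed by \( \Theta \) with the closed expression just obtained.

```python
# Sketch: check Theta[x^2/((x-a)(x-b)^2)] log(c + 1/x) against the closed form above.
import sympy as sp

x, t, bb = sp.symbols('x t bb')
a, b, c = 2, 3, 5
expr = x**2 * sp.log(c + 1/x) / ((x - a)*(x - b)**2)

asc = sp.residue(expr, x, a) + sp.residue(expr, x, b)         # coefficients of 1/(x-a) and 1/(x-b)
desc = sp.residue(expr.subs(x, 1/t) / t**2, t, 0)             # coefficient of 1/x, here log c
theta_value = asc - desc

closed = a**2*sp.log(c + sp.Rational(1, a))/(a - b)**2 \
    + sp.diff(bb**2*sp.log(c + 1/bb)/(bb - a), bb).subs(bb, b) - sp.log(c)
print(sp.N(theta_value - closed))                             # approximately 0
```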
Formulæ applicable to the determination of the result of the operation \( \Theta \) in any case, may readily be found by the aid of Taylor's theorem. Thus we should have, \( f'(x) \) not becoming infinite when \( x = a \) or \( x = b \),
\[
\Theta \left[ \frac{1}{(x-a)^m (x-b)^n} \right] f(x) = \frac{1}{1.2 \ldots (m-1)} \left( \frac{d}{da} \right)^{m-1} \frac{f(a)}{(a-b)^n} \\
+ \frac{1}{1.2 \ldots (n-1)} \left( \frac{d}{db} \right)^{n-1} \frac{f(b)}{(b-a)^m} - C_{\frac{1}{x}} \frac{f(x)}{(x-a)^m (x-b)^n},
\]
an expression in which the general law of such formulæ is manifest.
If there be \( n \) distinct simple factors in the denominator of the rational fraction within the brackets, the result of the operation \( \Theta \) will consist of \( n+1 \) terms, the first \( n \) of which
will be determined as above by Taylor's theorem. The \((n+1)\)th term will involve the operation denoted by the symbol \( C_{\frac{1}{x}} \), and it is by this term only that the interpretation of \( \Theta \) differs from that of Cauchy's symbol, \( E \). We have in fact
\[
\Theta = E - C_{\frac{1}{x}},
\]
the complete interpretation of \( \Theta \) involving two distinct elements.
It happens in certain problems that one of those elements alone presents itself, the symbol \( \Theta \) becoming equivalent either to \( E \) or to \( -C_x \). In other problems they both appear, but I am not aware of any problems in which the vanishing of one of the elements is not due to some special circumstance of restriction or limitation affecting the interpretation of the symbol \( \Theta \) in the case supposed. In a Note to this paper I have endeavoured to illustrate the above remark by employing the symbol \( \Theta \) in certain problems, in which Cauchy has made use of the symbol of residues, and to that Note the reader who is interested in the comparison is referred.
11. The properties of the symbol \( \Theta \) are now to be considered. The two following are the most important of them:
1st. The operation \( \Theta \) is distributive as respects both the function without and the function within the brackets, provided that those functions do not become together infinite for any finite value of \( x \).
Proof.—First, as respects the function without the brackets, we have the theorem
\[
\Theta[\varphi(x)](f_1(x)+f_2(x)\ldots+f_n(x)) = \Theta[\varphi(x)]f_1(x)+\Theta[\varphi(x)]f_2(x)\ldots+\Theta[\varphi(x)]f_n(x); \quad (1.)
\]
for the coefficient of a particular term, as \( \frac{1}{x-a}, \frac{1}{x}, \ldots \) in the development of a function is equal to the sum of the coefficients of all the corresponding terms in the development of the several component functions from which the proposed function is formed by addition.
Secondly, as respects the function within the brackets, we have the theorem
\[
\Theta[\varphi_1(x)+\varphi_2(x)\ldots+\varphi_n(x)]f(x) = \Theta[\varphi_1(x)]f(x)+\Theta[\varphi_2(x)]f(x)\ldots+\Theta[\varphi_n(x)]f(x). \quad (2.)
\]
Here, it is to be observed that \( \varphi_1(x), \varphi_2(x), \ldots \varphi_n(x) \) represent any rational fractions into which the rational fraction \( \varphi(x) \) within the brackets \([\varphi(x)]\) is resolved. Thus the theorem might be written in the form
\[
\Theta[\varphi(x)]f(x) = \Theta[\varphi_1(x)]f(x)+\Theta[\varphi_2(x)]f(x)\ldots+\Theta[\varphi_n(x)]f(x), \quad \ldots \quad (3.)
\]
wherein
\[
\varphi(x) = \varphi_1(x)+\varphi_2(x)\ldots+\varphi_n(x).
\]
To prove this theorem I shall show that it is necessarily true for each of the operations into which \( \Theta \) is resolvable. Let one of the distinct factors of the denominator of \( \varphi(x) \) be \( x-a \), then one of the component operations in \( \Theta \) consists in developing in ascending powers of \( x-a \), and taking in the development the coefficient of \( \frac{1}{x-a} \). Now
whether this operation be performed at once upon the function $\varphi(x)f(x)$ or separately upon the several functions
$$\varphi_1(x)f(x), \varphi_2(x)f(x) \ldots \varphi_n(x)f(x)$$
of which that function is composed, and the several results then collected together, is a matter of indifference. If we represent this part of the operation $\Theta$ by $R$, we have therefore
$$R\varphi(x)f(x) = R\varphi_1(x)f(x) + R\varphi_2(x)f(x) \ldots + R\varphi_n(x)f(x). \ldots \ldots (4.)$$
Now any term, $R\varphi_i(x)f(x)$, in the second member of the above will either form a part of the corresponding term $\Theta[\varphi_i(x)]f(x)$ in the second member of (3.), or will vanish. The former will obviously be the case if $x-a$ is contained in the denominator of $\varphi_i(x)$; the latter will be the case if $x-a$ is not included in the denominator of $\varphi_i(x)$, for then the function $\varphi_i(x)f(x)$ not becoming infinite when $x=a$, is developable in a series of the form $A+B(x-a)+C(x-a)^2 \ldots$ Art. 10, and in this series the coefficient of $\frac{1}{x-a}$ is 0.
Hence, all the terms in (4.) are contained in (3.); neither are there any terms resulting from the component part of the operation $\Theta$ denoted by $R$ which are not contained in the second member of (4.).
Hence, if we cause $R$ to stand in succession for each of the component operations of $\Theta$, and add the several equations thus obtained and typically represented by (4.) together, we shall obtain the theorem (3.).
As an example, we have
$$\Theta\left[\frac{2a}{x^2-a^2}\right]f(x) = \Theta\left[\frac{1}{x-a}\right]f(x) + \Theta\left[\frac{-1}{x+a}\right]f(x),$$
which is easily verified, since the first member gives
$$f(a) - f(-a) - C_{\frac{1}{x}} \frac{2af(x)}{x^2-a^2},$$
and the second member
$$f(a) - f(-a) - C_{\frac{1}{x}} \frac{f(x)}{x-a} + C_{\frac{1}{x}} \frac{f(x)}{x+a},$$
an equivalent result.
2ndly. If $f(x)$ be a rational and entire function of $x$, we have always
$$\Theta[\varphi(x)]f(x) = 0. \ldots \ldots \ldots \ldots \ldots \ldots (5.)$$
Proof.—As $f(x)$ must be of the form $\Sigma Ax^i$, $i$ being an integer, we have
$$\Theta[\varphi(x)]f(x) = \Theta[\varphi(x)]\Sigma Ax^i$$
$$= \Sigma A\Theta[\varphi(x)]x^i$$
by the last proposition. Thus we have to consider a series of terms of the form
$$\Theta[\varphi(x)]x^i. \ldots \ldots \ldots \ldots \ldots \ldots (6.)$$
Again, $\varphi(x)$ being a rational fraction may be resolved into a series of terms which will be of the form $ax^m$ or of the form $\frac{b}{(x-c)^n}$. Hence, availing ourselves of the distributive
property of Θ with respect to the term within the brackets, we see that (6.) is resolvable ultimately into a series of terms falling under the two typical forms
\[ \Theta[x^m]x^i, \quad \Theta\left[\frac{1}{(x-a)^n}\right]x^i. \]
All terms of the first form obviously vanish, since they can have no negative indices. It only remains, therefore, to consider terms of the second form.
First, let \( i \) be equal to or greater than \( n \). Then putting \( x-a=z \), and \( x=a+z \), the coefficient of \( \frac{1}{z} \) in the development of the function \( \frac{(a+z)^i}{z^n} \), to which \( \frac{x^i}{(x-a)^n} \) is reduced, in ascending powers of \( z \) will be
\[ \frac{i(i-1)\ldots(i-n+2)}{1\cdot2\ldots(n-1)} a^{i-n+1}; \quad \ldots \ldots \ldots \ldots \ldots \quad (7.) \]
and the coefficient of \( \frac{1}{x} \) in the development of the function \( \frac{x^i}{(x-a)^n} \) in descending powers of \( x \) will be
\[ \frac{n(n+1)\ldots i}{1\cdot2\ldots(i-n+1)} a^{i-n+1}. \quad \ldots \ldots \ldots \ldots \ldots \quad (8.) \]
These expressions are equal, as may be shown by equating them and clearing of fractions. Hence, in this case,
\[ \Theta\left[\frac{1}{(x-a)^n}\right]x^i=0. \quad \ldots \ldots \ldots \ldots \ldots \quad (9.) \]
Secondly, if \( i=n-1 \), each of the above expressions (7.) and (8.) reducing to unity, the equation (9.) is still true.
Lastly, if \( i \) is less than \( n-1 \), neither will any term containing \( \frac{1}{z} \) present itself in the ascending development, nor any term containing \( \frac{1}{x} \) in the descending development, so that the equation (9.) remains true in this case also.
Wherefore the theorem is proved generally. See Note A.
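A brief symbolic verification of this second property (a sketch assuming SymPy, with sample exponents \(i=7\), \(n=3\) chosen for illustration) confirms that the ascending coefficient (7.) and the descending coefficient (8.) coincide.

```python
# Sketch: the two coefficients (7.) and (8.) agree, so Theta[1/(x-a)^n] x^i vanishes.
import sympy as sp

a, x, t = sp.symbols('a x t')
i, n = 7, 3
expr = x**i / (x - a)**n
asc = sp.residue(expr, x, a)                          # coefficient of 1/(x-a), formula (7.)
desc = sp.residue(expr.subs(x, 1/t) / t**2, t, 0)     # coefficient of 1/x, formula (8.)
print(sp.simplify(asc - desc))                        # 0
```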
**General Theorem of Transformation.**
12. The foregoing properties of the symbol \( \Theta \) have an important bearing upon the general theorem for the transformation of integrals under the sign \( \Sigma \), to the demonstration of which we shall now proceed.
**Theorem.** — If \( E=0 \) be an equation connecting the variable \( x \) with another set of variables \( a_1, a_2, \ldots, a_r \), the function \( E \) being rational and entire with respect to \( x \), and if \( F \) be any function of \( x \) and of \( a_1, a_2, \ldots, a_r \), which is rational with respect to \( x \), then, provided that \( F \) does not become infinite when \( E=0 \), we have
\[ \Sigma Fdx = \Theta[F]\frac{\delta E}{E}, \]
where \( \delta \) indicates complete differentiation with respect to the variables \( a_1, a_2, \ldots, a_r \), and the
symbol $\Theta$ directs us, according to previous definition, to develope the function $F \frac{\delta E}{E}$ in ascending powers of $x-h_1$, $x-h_2$... the distinct simple factors of the denominator of $F$, to take in those successive developments the coefficients of $\frac{1}{x-h_1}$, $\frac{1}{x-h_2}$, &c., and from the sum of these coefficients to subtract the coefficient of $\frac{1}{x}$ in the development of the same function in descending powers of $x$.
The object of this theorem will become apparent if it be compared with the statement in Art. 4. It will be observed that $E$ stands for the rational and entire function $E(x, a_1, a_2, \ldots a_r)$, and $F$ for the rational function $F(x, a_1, a_2, \ldots a_r)$ in that article. Thus $F$ is that rational function of $x$ to which the differential coefficient $X$ in the integral $\int X dx$ is supposed to be capable of being reduced by means of the equation of transformation $E=0$.
**Demonstration.**—13. First, it will be necessary to prove the following subsidiary proposition:
**Proposition.**—If $\varphi(x)$ be any rational function of $x$, and if $E=0$ be any equation rational and entire with respect to $x$, by which $x$ is connected with a new set of variables $a_1, a_2, \ldots a_r$, then, provided that $\varphi(x)$ does not become infinite when $E=0$, we have
$$\Sigma \varphi(x) = -\Theta[\varphi(x)] \frac{d \log E}{dx}.$$
**Proof.**—As $\varphi(x)$ is a rational function of $x$, it is capable of being resolved into a series of terms, each of which is either of the form $ax^i$, or of the form $\frac{a}{(x-p)^i}$, $a$ being constant, and $i$ an integer.
Consider then, first, the expression
$$\Sigma ax^i,$$
the different values of $x$ in the several terms under the sign $\Sigma$ being roots of the equation $E=0$. Representing these roots by $x_1, x_2, \ldots x_n$, any two or more of which may be equal, we have
$$E = A(x-x_1)(x-x_2)\ldots(x-x_n),$$
$A$ being constant. Hence
$$\frac{d}{dx} \log E = \frac{1}{x-x_1} + \frac{1}{x-x_2} + \ldots + \frac{1}{x-x_n}. \quad \ldots \ldots \quad (3.)$$
Developing the several terms of the second member in descending powers of $x$, the aggregate coefficient of $\frac{1}{x^{i+1}}$ will be
$$x_1^i + x_2^i + \ldots + x_n^i,$$
or
$$\Sigma x^i.$$
Hence
$$\Sigma ax^i = a \times \text{coefficient of } \frac{1}{x^{i+1}} \text{ in the development of } \frac{d}{dx} \log E \text{ in descending powers of } x.$$
$$= \text{coefficient of } \frac{1}{x} \text{ in the development of } ax^i \frac{d}{dx} \log E \text{ in descending powers of } x.$$
Now \( \Theta [ax^i] \frac{d}{dx} \log E = -\text{coefficient of } \frac{1}{x} \text{ in the development of } ax^i \frac{d}{dx} \log E \) by the definition of \( \Theta \), since, as the function within the brackets is entire, there will be no ascending developments. Hence
\[
\Sigma ax^i = -\Theta [ax^i] \frac{d}{dx} \log E. \quad \ldots \quad \ldots \quad \ldots \quad \ldots \quad (4.)
\]
Consider, secondly, the expression
\[
\Sigma \frac{a}{(x-p)^i}.
\]
Now expressing (3.) in the form
\[
\frac{d}{dx} \log E = \frac{1}{x-p-(x_1-p)} + \frac{1}{x-p-(x_2-p)} \cdots + \frac{1}{x-p-(x_n-p)},
\]
and developing the several terms in the second member in ascending powers of \( x-p \), the aggregate coefficient of \( (x-p)^{i-1} \) will be
\[
-\left\{ \frac{1}{(x_1-p)^i} + \frac{1}{(x_2-p)^i} \cdots + \frac{1}{(x_n-p)^i} \right\} = -\Sigma \frac{1}{(x-p)^i}.
\]
Hence \( \Sigma \frac{1}{(x-p)^i} = -\text{coefficient of } (x-p)^{i-1} \text{ in the development of } \frac{d}{dx} \log E \text{ in ascending powers of } x-p; \)
\[
= -\text{coefficient of } \frac{1}{x-p} \text{ in the development of } \frac{1}{(x-p)^i} \frac{d}{dx} \log E \text{ in ascending powers of } x-p;
\]
\[
\Sigma \frac{a}{(x-p)^i} = -\text{coefficient of } \frac{1}{x-p} \text{ in the development of } \frac{a}{(x-p)^i} \frac{d}{dx} \log E \text{ in ascending powers of } x-p.
\]
But \( \Theta \left[ \frac{a}{(x-p)^i} \right] \frac{d}{dx} \log E = \text{coefficient of } \frac{1}{x-p} \text{ in the development of } \frac{a}{(x-p)^i} \frac{d}{dx} \log E \text{ in ascending powers of } x-p. \)
For 1st, the only distinct simple factor in the denominator of the expression within the brackets is \( x-p \); 2ndly, there will be no term of the form \( \frac{A}{x} \) in the development of \( \frac{a}{(x-p)^i} \frac{d}{dx} \log E \) in descending powers of \( x \), the first term of that development being evidently \( \frac{na}{x^{i+1}} \). Hence
\[
\Sigma \frac{a}{(x-p)^i} = -\Theta \left[ \frac{a}{(x-p)^i} \right] \frac{d}{dx} \log E. \quad \ldots \quad \ldots \quad \ldots \quad \ldots \quad (5.)
\]
It appears from the above, that when we decompose the rational fraction \( \varphi(x) \) into a series of terms \( \varphi_1(x) + \varphi_2(x) \cdots + \varphi_m(x) \), which are individually either of the form \( ax^i \) or of the form \( \frac{a}{(x-p)^i} \), we have
\[
\Sigma \varphi_1(x) + \Sigma \varphi_2(x) \cdots + \Sigma \varphi_m(x) = -\Theta [\varphi_1(x)] \frac{d \log E}{dx} - \Theta [\varphi_2(x)] \frac{d \log E}{dx} \cdots - \Theta [\varphi_m(x)] \frac{d \log E}{dx}
\]
\[
= -\Theta [\varphi_1(x) + \varphi_2(x) \cdots + \varphi_m(x)] \frac{d \log E}{dx},
\]
by art. 11. Whence
\[ \Sigma \varphi(x) = -\Theta[\varphi(x)] \frac{d \log E}{dx}, \quad \ldots \ldots \quad (6.) \]
which is the expression of the subsidiary proposition in question.
By this proposition, \( \Sigma \varphi(x) \), when the different values of \( x \) in the terms under the sign \( \Sigma \) are the roots of an equation \( E = 0 \), is determined as a function of any independent quantities \( a_1, a_2, \ldots a_r \), with which \( x \) is connected by that equation. In order that these quantities may be independent, it is, of course, necessary that their number should not exceed the index of the degree of the equation \( E = 0 \).
14. It may be well before continuing the demonstration to exemplify the theorem just obtained. Suppose it then required to determine the value of the expression \( \Sigma \frac{x}{x-a} \), the values of \( x \) being the roots of the algebraic equation \( px^2 + (1-p)x + q = 0 \). Here we have
\[ \Sigma \frac{x}{x-a} = -\Theta \left[ \frac{x}{x-a} \right] \frac{2px + 1 - p}{px^2 + (1-p)x + q}. \]
Developing the function
\[ -\frac{x}{x-a} \times \frac{2px + 1 - p}{px^2 + (1-p)x + q} \]
in ascending powers of \( x-a \), the coefficient of \( \frac{1}{x-a} \) will evidently be
\[ -\frac{2pa^2 + (1-p)a}{pa^2 + (1-p)a + q}. \]
Again, developing the same function in descending powers of \( x \), the coefficient of \( \frac{1}{x} \) will be \(-2\).
Hence
\[ \Sigma \frac{x}{x-a} = -\frac{2pa^2 + (1-p)a}{pa^2 + (1-p)a + q} + 2 = \frac{(1-p)a + 2q}{pa^2 + (1-p)a + q}, \]
which is easily verified by the known theory of equations.
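Such a verification may also be carried out numerically; the following sketch (assuming NumPy, with sample values of \(p\), \(q\), \(a\) chosen arbitrarily) compares the sum over the roots with the finite expression just found.

```python
# Sketch: Sum x/(x-a) over the roots of p x^2 + (1-p) x + q = 0.
import numpy as np

p, q, a = 2.0, 3.0, 1.5
roots = np.roots([p, 1 - p, q])
lhs = sum(r/(r - a) for r in roots)
rhs = ((1 - p)*a + 2*q) / (p*a**2 + (1 - p)*a + q)
print(lhs, rhs)          # agree; any imaginary part of lhs is rounding only
```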
The quantity \( a \) may be itself a function of \( p \) and \( q \) without affecting the truth of the above result. The reasoning by which (6.) is established remains quite unaffected by the consideration whether the function \( \varphi(x) \) contains, together with \( x \), any of the independent quantities \( a_1, a_2, \ldots a_r \) or not, provided only that if they do enter into its expression they enter determinately, e.g. that the same value which is given to any radical as \( \sqrt{1+a^2} \), in one of the functions \( \varphi(x) \) shall be retained in all.
15. Now let us resume the expression \( \Sigma F dx \), in which the values of \( x \) are subject to the condition
\[ E = 0. \]
As by this equation the value of \( x \) is made to depend upon the quantities \( a_1, a_2, \ldots a_r \), we have, on differentiating with respect to all the variables at once,
\[ \frac{dE}{dx} dx + \frac{dE}{da_1} da_1 + \frac{dE}{da_2} da_2 + \cdots + \frac{dE}{da_r} da_r = 0. \quad \ldots \ldots \quad (8.) \]
Or if we appropriate the symbol $\delta$ to express complete differentiation with respect to $a_1, a_2, \ldots a_r$,
$$\frac{dE}{dx} dx + \delta E = 0.$$
Whence
$$dx = -\frac{\delta E}{\frac{dE}{dx}}. \quad \ldots \ldots \quad (9.)$$
Wherefore
$$\Sigma F dx = -\Sigma \frac{F \delta E}{\frac{dE}{dx}}. \quad \ldots \ldots \quad (10.)$$
Now $F$ being a rational, and $E$ a rational and entire function of $x$, the expression $\frac{F \delta E}{\frac{dE}{dx}}$ will be rational with respect to $x$, whence, by the subsidiary proposition just demonstrated,
$$\Sigma \frac{F \delta E}{\frac{dE}{dx}} = -\Theta \left[ F \frac{\delta E}{\frac{dE}{dx}} \right] \frac{d \log E}{dx}.$$
Therefore
$$\Sigma F dx = \Theta \left[ F \frac{\delta E}{\frac{dE}{dx}} \right] \frac{1}{E} \frac{dE}{dx}. \quad \ldots \ldots \quad (11.)$$
Now the distinctive part of the performance of the operation $\Theta$ in the second member, consists in developing the entire function
$$F \frac{\delta E}{\frac{dE}{dx}} \times \frac{1}{E} \frac{dE}{dx}, \text{ or } F \frac{\delta E}{E}, \quad \ldots \ldots \quad (12.)$$
in ascending powers of certain simple factors of the form $x-p$, those simple factors being, 1st, such as are found in the denominator of $F$; 2ndly, such as are not found in the denominator of $F$, but are found in the denominator of $\frac{\delta E}{\frac{dE}{dx}}$. It may be shown that the result of that portion of the operation $\Theta$ which depends upon the latter class of factors is 0. For the only factors of the form $x-p$ which produce terms of the form $\frac{a}{x-p}$ in the ascending development of (12.), and which are not found in the denominator of $F$, must be found in $E$. Let $x-p$ be any such factor, then we may write
$$E = H(x-p)^m,$$
where $H$ does not contain $x-p$.
Therefore
$$\frac{\delta E}{\frac{dE}{dx}} = \frac{(x-p)^m \delta H - mH(x-p)^{m-1} \delta p}{(x-p)^m \frac{dH}{dx} + mH(x-p)^{m-1}} = \frac{(x-p) \delta H - mH \delta p}{(x-p) \frac{dH}{dx} + mH}. \quad \ldots \ldots \quad (13.)$$
The denominator of this expression does not contain \( x - p \) as a factor. Hence there will be no factor of the above description in the denominator of the fraction within the brackets, and therefore no corresponding development in the performance of \( \Theta \). Thus the only factors which produce any effect are those found in the denominator of \( F \). The part of the operation \( \Theta \) expressed by \(-C_{\frac{1}{x}}\) is of course unaffected by the nature of the function within the brackets.
On these accounts then the theorem (11.), seeing that its second member indicates the performance of the operation \( \Theta \) on the function \( F \frac{\delta E}{E} \), the interpretation of that operation being derived solely from the factor \( F \), is reduced to the comparatively simple form
\[
\Sigma F dx = \Theta[F] \frac{\delta E}{E}. . . . . . . . . . . . . . . . . . (14.)
\]
And in this form it constitutes the general theorem of transformation which it was required to demonstrate.
**Application of the general Theorem of Transformation to the Comparison of Algebraical Transcendents.**
16. In treating of the algebraical transcendentals, I shall first exemplify the direct application of the general theorem of transformation to the solution of special problems, and for this application I shall select by preference examples known and familiar. I shall subsequently apply the theorem to the investigation of general formulæ from which the solution of all special problems may be derived.
There is no difficulty in the direct application of the theorem to special problems. The following directions will meet every case.
Let \( \Sigma \int X dx \) be the expression whose finite value is to be found, the simultaneous values of \( x \) being determined by an equation
\[
X = F(x, a_1, a_2, \ldots, a_r), . . . . . . . . . . . . . . . . . . (1.)
\]
\( a_1, a_2, \ldots, a_r \) being the new variables in terms of which the value of the integral expression is to be obtained. The second member, which we shall represent by \( F \), is supposed rational with respect to \( x \). Let also the equation (1.), made rational and entire with respect to \( x \) by reduction to the form
\[
p_0 x^n + p_1 x^{n-1} + p_2 x^{n-2} \ldots + p_n = 0 . . . . . . . . . . . . . . . . . . (2.)
\]
be represented by \( E = 0 \).
Then observing that \( X \) does not become infinite when \( E = 0 \), we have
\[
\Sigma \int X dx = \Sigma \int F dx = \int \Theta[F] \frac{\delta E}{E}. . . . . . . . . . . . . . . . . . (3.)
\]
On performing the operation \( \Theta \) in the second member, the function under the sign of integration will be an exact differential relatively to \( a_1, a_2, \ldots, a_r \), and, being integrated, will
give the value sought. If the number of integrals under the sign $\Sigma$ is specified, suppose it $n$, the function $F$ must be so chosen that the reduced equation $E=0$ may be of the $n$th degree.
The algebraic sign of each of the integrals in $\Sigma \int X dx$ will be the same as the sign of the corresponding function $F$, which, as being rational, is not ambiguous.
17. Example 1.—The following theorem is given by a writer in the Cambridge Mathematical Journal, vol. i. p. 268, as a generalization of a theorem of Mr. Fox Talbot's relating to the arcs of the equilateral hyperbola. The equation of any hyperbola referred to its asymptotes being $xy = \frac{a^2 + b^2}{2}$, or for simplicity, $xy = m^2$, we have, supposing $\theta$ the angle between the asymptotes,
$$\text{arc} = \int \frac{\sqrt{x^4 - 2m^2x^2 \cos \theta + m^4}}{x^2} dx.$$
Supposing then three values of $x$ to be determined by the equation
$$\sqrt{x^4 - 2m^2x^2 \cos \theta + m^4} = vx + m^2, \quad \ldots \ldots \quad (1.)$$
the sum of the corresponding arcs will be
$$\frac{3}{2} v + \frac{m^2 \cos \theta}{v} + \text{const.}$$
To demonstrate this theorem, it must be observed that the equation (1.), reduced to a rational and integral form, becomes
$$x^3 - (2m^2 \cos \theta + v^2)x - 2m^2v = 0, \quad \ldots \ldots \quad (3.)$$
which occupies the place of $E=0$, art. 16. By virtue of the same equation we have
$$\Sigma \int \frac{\sqrt{x^4 - 2m^2x^2 \cos \theta + m^4}}{x^2} dx = \Sigma \int \frac{vx + m^2}{x^2} dx. \quad \ldots \ldots \quad (4.)$$
Thus we have, 1st, to transform the expression
$$\Sigma \frac{vx + m^2}{x^2} dx,$$
the simultaneous values of $x$ being determined by (3.); 2ndly, to integrate the result with respect to the new variable $v$.
Applying the general theorem of transformation, art. 12, we have
$$\Sigma \frac{vx + m^2}{x^2} dx = \Theta \left[ \frac{vx + m^2}{x^2} \right] \frac{-(2vx + 2m^2)\,\delta v}{x^3 - (2m^2 \cos \theta + v^2)x - 2m^2v}.$$
Here we must develope the function
$$\frac{vx + m^2}{x^2} \times \frac{-(2vx + 2m^2)\, \delta v}{x^3 - (2m^2 \cos \theta + v^2)x - 2m^2v}, \quad \ldots \ldots \quad (6.)$$
in ascending powers of $x$, and take therein the coefficient of $\frac{1}{x}$. From this we must subtract the coefficient of $\frac{1}{x}$ in the development of the same function in descending powers of $x$.
Finally, we must integrate the result relatively to \( v \).
Developing the two factors of (6.) by ordinary division in ascending powers of \( x \), we may express it in the form
\[
\left( \frac{m^2}{x^2} + \frac{v}{x} \right) \left( \frac{\delta v}{v} + \frac{v^2 - 2m^2 \cos \theta}{2m^2 v^2} x\, \delta v + \&c. \right).
\]
Whence, on multiplication of the factors, we find for the coefficient of \( \frac{1}{x} \)
\[
\left( \frac{3}{2} - \frac{m^2 \cos \theta}{v^2} \right) \delta v. \quad \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \quad (7.)
\]
The coefficient of \( \frac{1}{x} \) in the descending development will evidently be 0. Integrating, we have
\[
\frac{3}{2} v + \frac{m^2 \cos \theta}{v} + C \quad \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \quad (8.)
\]
for the value sought.
It must be observed that, in applying the theorem, the signs of the integrals under the symbol \( \Sigma \), which would otherwise be ambiguous from the square root, are made determinate by the equivalent rational expression (4.). Each must be of the same sign as the corresponding value of the function \( \frac{vx + m^2}{x^2} \).
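The theorem of this example admits of a direct numerical check (a sketch assuming NumPy; the values of \(m\), \(\theta\), \(v\) are arbitrary sample values): the quantity \(\Sigma \frac{vx+m^2}{x^2}\frac{dx}{dv}\), taken over the roots of (3.), should equal \(\frac{3}{2} - \frac{m^2\cos\theta}{v^2}\), the differential coefficient of (8.) with respect to \(v\).

```python
# Sketch: d/dv of the arc-sum, computed from the roots of (3.), against d/dv of (8.).
import numpy as np

m, theta, v = 1.0, 0.5, 0.5
roots = np.roots([1.0, 0.0, -(2*m**2*np.cos(theta) + v**2), -2*m**2*v])

total = 0.0
for x in roots:
    dxdv = (2*v*x + 2*m**2) / (3*x**2 - 2*m**2*np.cos(theta) - v**2)   # implicit differentiation of (3.)
    total += (v*x + m**2)/x**2 * dxdv

print(total, 1.5 - m**2*np.cos(theta)/v**2)                            # the two values agree
```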
18. In the example we have just considered, the equation of transformation connects \( x \) with but a single new variable \( v \). In the following examples, which are intended to illustrate the doctrine of the comparison of the different orders of elliptic functions, two new variables are introduced.
Example 2.—Required the finite value of the expression
\[
\Sigma \int \frac{dx}{\sqrt{(1-x^2)(1-c^2x^2)}}, \quad \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \quad (1.)
\]
the simultaneous values of \( x \) being determined by the equation
\[
\sqrt{(1-x^2)(1-c^2x^2)} = 1 + vx + wx^2. \quad \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \quad (2.)
\]
By virtue of this equation (1.) assumes the rational form
\[
\Sigma \int \frac{dx}{1 + vx + wx^2}.
\]
Again, the equation (2.) becomes rational and integral when expressed in the form
\[
(1 + vx + wx^2)^2 - (1 - x^2)(1 - c^2x^2) = 0, \quad \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots \quad (3.)
\]
and occupies the place of \( E = 0 \) in the general theorem. Hence we have
\[
\Sigma \int \frac{dx}{1 + vx + wx^2} = \int \Theta \left[ \frac{1}{1 + vx + wx^2} \right] \frac{2(1 + vx + wx^2)(x\,\delta v + x^2\,\delta w)}{(1 + vx + wx^2)^2 - (1 - x^2)(1 - c^2x^2)}.
\]
Here the function
\[
\frac{2(x\,\delta v + x^2\,\delta w)}{(1 + vx + wx^2)^2 - (1 - x^2)(1 - c^2x^2)}
\]
is to be developed in ascending powers of the simple factors of \( 1 + vx + wx^2 \). Let \( x - h \) be one of those factors. Then it is evident that there cannot be a term of the form \( \frac{A}{x-h} \) in the development of the function in ascending powers of \( x-h \), inasmuch as that
factor does not exist in the term \((1-x^2)(1-c^2x^2)\). Again, there cannot be a term of the form \(\frac{A}{x}\) in the development of the function in descending powers of \(x\). Hence the result of the operation \(\Theta\) is 0, and we have, finally,
\[
\Sigma \int \frac{dx}{\sqrt{(1-x^2)(1-c^2x^2)}} = \text{const}. \quad \ldots \quad \ldots \quad \ldots \quad (4.)
\]
The above constitutes in reality what is usually termed the fundamental theorem for the comparison of elliptic functions of the first order. The equation (3.), arranged according to the powers of \(x\) and freed from the factor \(x\), gives
\[
(w^2-c^2)x^3 + 2vwx^2 + (v^2 + 2w + c^2 + 1)x + 2v = 0.
\]
If we then represent the simultaneous values of \(x\) by \(x_1, x_2, x_3\), we have
\[
x_1 + x_2 + x_3 = \frac{-2vw}{w^2 - c^2}
\]
\[
x_1x_2 + x_2x_3 + x_3x_1 = \frac{v^2 + 2w + c^2 + 1}{w^2 - c^2}
\]
\[
x_1x_2x_3 = \frac{-2v}{w^2 - c^2}
\]
whence, eliminating \(v\) and \(w\), we find
\[
4(1-x_1^2)(1-x_2^2)(1-x_3^2) = (2-x_1^2-x_2^2-x_3^2+c^2x_1^2x_2^2x_3^2)^2. \quad \ldots \quad \ldots \quad \ldots \quad (5.)
\]
Now this is the form to which the known relation connecting the amplitudes \(\phi, \psi, \sigma\) and the modulus \(c\) in the equation
\[
\pm F_c(\phi) \pm F_c(\psi) \pm F_c(\sigma) = 0,
\]
viz. the relation, \(\cos \sigma = \cos \phi \cos \psi - \sin \phi \sin \psi \sqrt{1-c^2 \sin^2 \sigma}\), is reduced, when we therein make \(\sin \phi = x_1, \sin \psi = x_2, \sin \sigma = x_3\), and rationalize the resulting equation. The signs of the integrals will of course be determined by the signs of the function \(1+vx+wx^2\).
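The relation (5.) may be confirmed numerically (a sketch assuming NumPy; the values of \(c\), \(v\), \(w\) are arbitrary sample values) by computing the roots of the cubic which connects \(x_1, x_2, x_3\) with \(v\) and \(w\).

```python
# Sketch: the roots of (w^2-c^2)x^3 + 2vw x^2 + (v^2+2w+c^2+1)x + 2v = 0 satisfy (5.).
import numpy as np

c, v, w = 0.6, 0.4, 0.9
x1, x2, x3 = np.roots([w**2 - c**2, 2*v*w, v**2 + 2*w + c**2 + 1, 2*v])

lhs = 4*(1 - x1**2)*(1 - x2**2)*(1 - x3**2)
rhs = (2 - x1**2 - x2**2 - x3**2 + c**2*(x1*x2*x3)**2)**2
print(abs(lhs - rhs))     # ~ 0 (the roots may be complex, but the identity holds)
```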
To obtain a formula for the comparison of elliptic functions of the second order, we must deduce a finite expression for
\[
\Sigma \int \sqrt{\frac{1-c^2x^2}{1-x^2}}\, dx,
\]
subject to the relation (2.). We have then
\[
\Sigma \int \sqrt{\frac{1-c^2x^2}{1-x^2}}\, dx = \Sigma \int \frac{1-c^2x^2}{1+vx+wx^2} dx
\]
\[
= \int \Theta \left[ \frac{1-c^2x^2}{1+vx+wx^2} \right] \frac{2(1+vx+wx^2)(x\,\delta v + x^2\,\delta w)}{(1+vx+wx^2)^2-(1-x^2)(1-c^2x^2)}.
\]
Here, as before, the effect of that part of the operation \(\Theta\) which depends upon \(1+vx+wx^2\) is 0, so that we have simply
\[
\Sigma \int \sqrt{\frac{1-c^2x^2}{1-x^2}}\, dx = -\int C_{\frac{1}{x}} \frac{(1-c^2x^2) \times 2(x\,\delta v + x^2\,\delta w)}{(1+vx+wx^2)^2-(1-x^2)(1-c^2x^2)},
\]
\( C_{\frac{1}{x}} \) denoting the coefficient of \( \frac{1}{x} \) in the development of the function in descending powers of \( x \). Now, transferring the negative sign to the numerator, and developing the numerator and the denominator separately, we have
$$2(c^2x^2 - 1)(x^2\delta w + x\delta v) = 2c^2x^4\delta w + 2c^2x^3\delta v \ldots$$
$$(1 + vx + wx^2)^2 - (1 - x^2)(1 - c^2x^2) = (w^2 - c^2)x^4 + 2vwx^3 \ldots$$
Dividing the first of these by the second, we have a quotient
$$\frac{2c^2\delta w}{w^2 - c^2} + \left(\frac{2c^2\delta v}{w^2 - c^2} - \frac{4c^2vw\delta w}{(w^2 - c^2)^2}\right)\frac{1}{x} + &c.$$
Integrating the coefficient of $\frac{1}{x}$, which is a complete differential relatively to $v$ and $w$, we have
$$\Sigma \int \sqrt{\frac{1 - c^2x^2}{1 - x^2}}\, dx = \frac{2c^2v}{w^2 - c^2} + \text{constant} = -c^2x_1x_2x_3 + \text{constant}. \quad \ldots \ldots \quad (6.)$$
The signs of the three integrals under the sign $\Sigma$, are of course determined by the signs of the calculated values of $\frac{1 - c^2x^2}{1 + vx + wx^2}$.
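The result (6.) may likewise be checked numerically (a sketch assuming NumPy, with arbitrary sample values of \(c\), \(v\), \(w\)): the part of \(\Sigma F dx\) multiplying \(\delta v\), computed directly from the roots of the cubic, should equal \(\frac{2c^2}{w^2-c^2}\), the partial differential coefficient of \(\frac{2c^2v}{w^2-c^2}\) with respect to \(v\).

```python
# Sketch: Sum over the roots of F(x_k) dx_k/dv, with F = (1-c^2 x^2)/(1+vx+wx^2).
import numpy as np

c, v, w = 0.6, 0.4, 0.9
roots = np.roots([w**2 - c**2, 2*v*w, v**2 + 2*w + c**2 + 1, 2*v])

total = 0.0
for x in roots:
    dEdx = 3*(w**2 - c**2)*x**2 + 4*v*w*x + (v**2 + 2*w + c**2 + 1)
    dEdv = 2*w*x**2 + 2*v*x + 2
    total += (1 - c**2*x**2)/(1 + v*x + w*x**2) * (-dEdv/dEdx)    # F(x_k) * dx_k/dv

print(total, 2*c**2/(w**2 - c**2))       # agree; any imaginary part of total is rounding only
```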
We might in the same way deduce the known formulæ for the comparison of elliptic functions of the third order. Or we might at once investigate a formula for the comparison of elliptic functions of every order. For the latter purpose we should have to evaluate the expression
$$\Sigma \int \frac{(a + bx^2)dx}{(1 + nx^2)\sqrt{(1 - x^2)(1 - c^2x^2)}},$$
under which the three canonical forms of elliptic functions are comprehended, in subjection to the condition (2.). By the general theorem of transformation this value is
$$\int \Theta \left[\frac{a + bx^2}{(1 + nx^2)(1 + vx + wx^2)}\right] \frac{2(1 + vx + wx^2)(x\delta v + x^2\delta w)}{(1 + vx + wx^2)^2 - (1 - x^2)(1 - c^2x^2)},$$
which may be reduced at once to the form
$$\int \Theta \left[\frac{a + bx^2}{1 + nx^2}\right] \frac{2x\delta v + 2x^2\delta w}{(1 + vx + wx^2)^2 - (1 - x^2)(1 - c^2x^2)}. \ldots \ldots \ldots \ldots \ldots \ldots \ldots (7.)$$
The rest of the solution involves no difficulty. We must develope the entire function following $\Theta$ in ascending powers of $x + \frac{1}{\sqrt{n}}\sqrt{-1}$ and $x - \frac{1}{\sqrt{n}}\sqrt{-1}$ successively, and take therein the coefficients of $\frac{1}{x + \frac{1}{\sqrt{n}}\sqrt{-1}}$ and $\frac{1}{x - \frac{1}{\sqrt{n}}\sqrt{-1}}$. From their sum we must subtract the coefficient of $\frac{1}{x}$ in the development of the same function in descending powers of $x$, and integrate the result as a complete differential with respect to $v$ and $w$.
The above results are entirely founded upon the assumed theorem of transformation,
$$\sqrt{(1 - x^2)(1 - c^2x^2)} = 1 + vx + wx^2.$$
But any other transformation which would connect $x$ with two new variables, through
the medium of an equation of the third degree with reference to \( x \), would lead to results possessing the same degree of generality. Thus the equation
\[
\sqrt{(1-x^2)(1-c^2x^2)} = v + wx + cx^2,
\]
which connects \( x \) with \( v \) and \( w \), and constitutes when freed from surds an equation of the third degree with reference to \( x \), might have been employed. I am not aware that the above forms have been employed before. Legendre, in deducing the properties of Elliptic Functions from Abel's theorem, sets out from a different assumption, and as I think a less simple one, leading however to equivalent results.
It is not my intention to enter here into the subject of the connexion of the different solutions which may thus be obtained. The theory of that connexion can however present no difficulties to those who are acquainted with the labours of Jacobi, Richelot and others, upon the differential equations on the integration of which the doctrine of the comparison of transcendentals, as contemplated from another point of view, depends.
19. Before applying the general theorem of transformation to the investigation of general formulæ for the comparison of transcendentals, I will say a few words upon Abel's theorem, as well as upon the class of theorems to which it belongs.
Abel's theorem is virtually an expression for
\[
\Sigma \int \frac{f(x)dx}{(x-a)\sqrt{\varphi(x)}},
\]
\( f(x) \) and \( \varphi(x) \) being polynomials, the simultaneous values of \( x \) in the several integrals being connected by the equation
\[
\sqrt{\varphi(x)} = \chi(x),
\]
or
\[
\varphi(x) = \{\chi(x)\}^2,
\]
where \( \chi(x) \) is not restricted to being a polynomial, but is a rational function of \( x \), in terms of the coefficients of which the value of the integral sum is to be determined. Abel expresses \( \varphi(x) \) as the product of two polynomials \( \varphi_1(x), \varphi_2(x) \), a form which is obviously given to it in order to meet the case in which a rational fraction occurs under the radical sign, since we have
\[
\Sigma \int \frac{f(x)dx}{x-a}\sqrt{\frac{\varphi_1(x)}{\varphi_2(x)}} = \Sigma \int \frac{f(x)\varphi_1(x)dx}{(x-a)\sqrt{\varphi_1(x)\varphi_2(x)}}.
\]
Broch and others, including, I believe, Abel himself, have considered very fully the more general case in which the polynomial \( \varphi(x) \) is raised to any fractional power whatever, and to this case may be reduced the still more general one in which a rational fraction takes the place of the polynomial. The reduction is however obviously far more complex than in the simpler case in which the index is \( \frac{1}{2} \). I intend here to discuss this problem in a form sufficiently general to render all such reductions unnecessary.
20. Problem.—Required a finite expression for the integral sum
\[
\Sigma \int \varphi\, \psi^{\frac{m}{n}}\, dx, \ldots \ldots \ldots \ldots \ldots (1.)
\]
φ and ψ being any rational functions of x, the simultaneous values of x in the different integrals being determined by the equation
\[ \psi^{\frac{m}{n}} = \chi, \quad \ldots \ldots \ldots \quad (2.) \]
χ also denoting any rational function of x of the form
\[ \frac{a_0 + a_1x + a_2x^2 + \ldots + a_mx^m}{b_0 + b_1x + b_2x^2 + \ldots + b_nx^n}. \]
Our object is to determine (1.) as a function of \(a_0, a_1, \ldots a_m, b_0, b_1, \ldots b_n\), which are arbitrary in value, whereas the coefficients in φ and ψ are definite in value and are usually numerical.
Representing φ, ψ and χ in the forms
\[ \phi = \frac{p}{q}, \quad \psi = \frac{s}{t}, \quad \chi = \frac{u}{v}, \]
\(p, q, s, t, u,\) and \(v\) being polynomials in \(x\), the transforming equation (2.) assumes the form
\[ \left(\frac{s}{t}\right)^{\frac{m}{n}} = \frac{u}{v}. \]
Whence
\[ s^m v^n - t^m u^n = 0. \quad \ldots \ldots \ldots \quad (4.) \]
With this condition connecting the values of \(x\) in the several integrals with \(a_0, a_1, \ldots b_0, b_1, \ldots\), we have to seek the value of the expression
\[ \sum \int \frac{pu}{qv} dx. \]
For to this form, rational with respect to \(x\), the expression (1.) is reducible by virtue of the above relations.
Consider then \( \sum \int \frac{pu}{qv} dx \) subject to (4.). To apply to this the general theorem, we must write therein
\[ F = \frac{pu}{qv}, \quad E = s^m v^n - t^m u^n. \]
We thus find
\[ \Sigma\, \frac{pu}{qv}\, dx = \Theta \left[ \frac{pu}{qv} \right] \delta \log \left(s^m v^n - t^m u^n\right) = n\, \Theta \left[ \frac{pu}{qv} \right] \frac{s^m v^{n-1}\, \delta v - t^m u^{n-1}\, \delta u}{s^m v^n - t^m u^n}. \quad \ldots \ldots \ldots \quad (5.) \]
We cannot in its present form integrate the second member, as the interpretation of Θ depends in part upon \(v\), which contains some of the variables to which the integration has reference. As however the function which has to be developed in the performance of the operation Θ is a rational fraction relatively to \(u\), viz.
\[ \frac{pu}{qv}\, \frac{s^m v^{n-1}\, \delta v - t^m u^{n-1}\, \delta u}{s^m v^n - t^m u^n}, \quad \ldots \ldots \ldots \quad (6.) \]
we can resolve it into partial fractions, and this resolution will, in virtue of the properties of Θ, enable us to effect the integration required.
The partial fraction which has \( v \) for its denominator will be \( \frac{p}{q} \frac{\delta u}{v} \). Separating this, the entire fraction (6.) will assume the form
\[
\frac{p}{q} \frac{\delta u}{v} + \frac{p}{q} \frac{s^m v^{n-1} \delta u - s^m u v^{n-2} \delta v}{t^m u^n - s^m v^n}.
\]
Thus (5.) becomes
\[
\Sigma \frac{p}{q} \frac{u}{v} dx = n \Theta \frac{p}{q} \frac{\delta u}{v} + n \Theta \frac{p}{q} \frac{s^m v^{n-1} \delta u - s^m u v^{n-2} \delta v}{t^m u^n - s^m v^n}, \quad \ldots \ldots \ldots \ldots \quad (7.)
\]
\( \Theta \) deriving its interpretation from \( q \) and \( v \).
The first term in the second member, inasmuch as we may give to it the form
\[
n \Theta \left[ \frac{p \delta u}{qv} \right],
\]
vanishes by virtue of (5.), art. 11. The second term may be reduced to the form
\[
n \Theta \left[ \frac{p}{q} \frac{s^m v^{n-1} \delta u - s^m u v^{n-2} \delta v}{t^m u^n - s^m v^n} \right]. \quad \ldots \ldots \ldots \ldots \quad (8.)
\]
In proof of this, I observe that the function to which \( \Theta \) is applied cannot have any of the simple factors of \( v \) in its denominator. No such factor is involved in \( q \). For by supposition all the coefficients in \( q \) are definite, while those in \( v \) are arbitrary. Neither, again, can any of those simple factors be involved in \( t^m u^n - s^m v^n \), for if so it will be involved in \( t^m u^n \), and therefore either in \( t \) or \( u \). But it is not involved in \( t \), as the constants in \( t \) are definite; and it is not involved in \( u \), for if it were \( \frac{u}{v} \) would not be in its lowest terms.
The expression (8.) may be written in the form
\[
n\, \Theta \left[ \frac{p}{q} \right] \frac{\left( \frac{s}{t} \right)^m \frac{v\,\delta u - u\,\delta v}{v^2}}{\left( \frac{u}{v} \right)^n - \left( \frac{s}{t} \right)^m} = n\, \Theta [\phi]\, \frac{\psi^m\, \delta \chi}{\chi^n - \psi^m},
\]
on replacing \( \frac{p}{q} \), \( \frac{s}{t} \) and \( \frac{u}{v} \) by \( \phi \), \( \psi \) and \( \chi \).
Hence
\[
\Sigma\, \phi\, \psi^{\frac{m}{n}}\, dx = n\, \Theta [\phi]\, \frac{\psi^m\, \delta \chi}{\chi^n - \psi^m}.
\]
Therefore
\[
\Sigma \int \phi\, \psi^{\frac{m}{n}}\, dx = n \int \Theta [\phi]\, \frac{\psi^m\, \delta \chi}{\chi^n - \psi^m}. \quad \ldots \ldots \ldots \ldots \quad (9.)
\]
The symbols \( \Theta \) and \( \int \) in the second member are now independent and may be transposed. We thus have
\[
\Sigma \int \phi\, \psi^{\frac{m}{n}}\, dx = n\, \Theta [\phi]\, \psi^m \int \frac{\delta \chi}{\chi^n - \psi^m}, \quad \ldots \ldots \ldots \ldots \quad (10.)
\]
an expression of remarkable simplicity.
21. In applying this theorem we must effect the integration in the second member, regarding \( \chi \) as the only variable, inasmuch as the variables \( a_0, b_0, \&c. \) enter into the constitution of \( \chi \), but not into that of the other rational fractions \( \phi \) and \( \psi \). When the
integration is effected we must write for $\phi$, $\psi$, and $\chi$ the several rational fractions for which they stand and then perform the operation $\Theta$, as we are directed to do by the definition of that symbol.
On actual integration we have from (10.),
$$\Sigma \int \phi\, \psi^{\frac{m}{n}}\, dx = \Theta[\phi]\, \Sigma\, \psi^{\frac{m}{n}} \left( \cos \frac{2r\pi}{n} + \sqrt{-1} \sin \frac{2r\pi}{n} \right) \log \left\{ \chi - \left( \cos \frac{2r\pi}{n} + \sqrt{-1} \sin \frac{2r\pi}{n} \right) \psi^{\frac{m}{n}} \right\} + n \Theta[\phi]\, \psi^m C,$$
the summation in the second member extending from $r=0$ to $r=n-1$. The last term in that member is equivalent to an arbitrary constant. For $\phi$ and $\psi^m$ do not contain the variables $a_0$, $b_0$, &c., which are only found in $\chi$. Hence the coefficients of terms in the developments of the function $\phi \psi^m$ will be determinate constants, and $n \Theta[\phi] \psi^m C$, on account of the arbitrary factor $C$, will be itself an arbitrary constant.
22. We are thus led to the following theorem:
**Theorem.**—The value of the expression $\Sigma \int \phi\, \psi^{\frac{m}{n}} dx$, where $\phi$ and $\psi$ are any rational functions of $x$, and the simultaneous values of $x$ in the several integrals under the sign $\Sigma$ are determined by the algebraic equation $\psi^{\frac{m}{n}} = \chi$, in which $\chi$ is a rational function of $x$, will be expressed by the formula
$$\Sigma \int \phi\, \psi^{\frac{m}{n}}\, dx = \Theta[\phi]\, \Sigma\, \psi^{\frac{m}{n}} \left( \cos \frac{2r\pi}{n} + \sqrt{-1} \sin \frac{2r\pi}{n} \right) \log \left\{ \chi - \left( \cos \frac{2r\pi}{n} + \sqrt{-1} \sin \frac{2r\pi}{n} \right) \psi^{\frac{m}{n}} \right\} + C,$$
the summation in the second member extending from $r=0$ to $r=n-1$.
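As a purely numerical spot-check of the integration that leads from (10.) to this theorem (an editorial addition; the sample values of $m$, $n$, $\psi$ and $\chi$ are arbitrary), one may verify that the $\chi$-derivative of $\Sigma_r \left(\cos\frac{2r\pi}{n}+\sqrt{-1}\sin\frac{2r\pi}{n}\right)\psi^{\frac{m}{n}}\log\left\{\chi-\left(\cos\frac{2r\pi}{n}+\sqrt{-1}\sin\frac{2r\pi}{n}\right)\psi^{\frac{m}{n}}\right\}$ reproduces $\frac{n\psi^m}{\chi^n-\psi^m}$:

```python
# Editorial numerical check of the integration step behind the theorem of art. 22.
import numpy as np

m, n = 3, 5
psi, chi = 1.7, 2.3                   # arbitrary positive sample values
roots = np.exp(2j*np.pi*np.arange(n)/n) * psi**(m/n)   # the n values of psi^(m/n)

def primitive(c):
    # sum_r w_r psi^(m/n) log(c - w_r psi^(m/n)), with w_r the n-th roots of unity
    return np.sum(roots * np.log(c - roots))

h = 1e-6
derivative = (primitive(chi + h) - primitive(chi - h)) / (2*h)
print(derivative.real)                      # numerical d/d(chi); imaginary part is rounding noise
print(n * psi**m / (chi**n - psi**m))       # closed form; the two agree
```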
23. In the particular case in which $m=1$, $n=2$, we have
$$\Sigma \int \phi \sqrt{\psi}\, dx = \Theta[\phi] \sqrt{\psi}\, \log \frac{\chi - \sqrt{\psi}}{\chi + \sqrt{\psi}}.$$
Let us apply this theorem to the problem of Art. 17. We have
$$\phi = \frac{1}{x^2}, \quad \psi = x^4 - 2m^2x^2 \cos \theta + m^4, \quad \chi = vx + m^2;$$
$$\therefore \quad \Sigma \int \frac{\sqrt{x^4 - 2m^2x^2 \cos \theta + m^4}}{x^2} dx$$
$$= \Theta \left[ \frac{1}{x^2} \right] \sqrt{x^4 - 2m^2x^2 \cos \theta + m^4} \log \frac{vx + m^2 - \sqrt{x^4 - 2m^2x^2 \cos \theta + m^4}}{vx + m^2 + \sqrt{x^4 - 2m^2x^2 \cos \theta + m^4}}.$$
First, then, we must develope the function in the second member in ascending powers of $x$, and seek the coefficient of $\frac{1}{x}$. Now
$$\sqrt{m^4 - 2m^2x^2 \cos \theta + x^4} = m^2 - x^2 \cos \theta + &c.$$
Substituting, the function becomes
$$\left( \frac{m^2}{x^2} - \cos \theta \right) \log \frac{vx + x^2 \cos \theta}{2m^2 + vx - x^2 \cos \theta}.$$
But
\[ \log(vx + x^2 \cos \theta) = \log v + \log x + \frac{x \cos \theta}{v} + \text{&c.} \]
\[ \log(2m^2 + vx - x^2 \cos \theta) = \log 2m^2 + \frac{vx - x^2 \cos \theta}{2m^2} + \text{&c.} \]
Substituting, we have
\[ \left( \frac{m^2}{x^2} - \cos \theta \right) \left\{ \log x + \log v - \log 2m^2 + \left( \frac{\cos \theta}{v} - \frac{v}{2m^2} \right)x \right\}, \]
wherein the coefficient of \( \frac{1}{x} \) in the product is
\[ \frac{m^2 \cos \theta}{v} - \frac{v}{2}. \]
(13.)
In the second place, developing the function in descending powers of \( x \), we have
\[ \sqrt{x^4 - 2m^2 x^2 \cos \theta + m^4} = x^2 - m^2 \cos \theta, \text{ &c.,} \]
which on substitution gives
\[ \left( 1 - \frac{m^2 \cos \theta}{x^2} \right) \log \frac{-x^2 + vx + m^2(1 + \cos \theta)}{x^2 + vx + m^2(1 - \cos \theta)} \]
\[ = \left( 1 - \frac{m^2 \cos \theta}{x^2} \right) \log \left( -1 + \frac{2v}{x} \right) \]
\[ = \left( 1 - \frac{m^2 \cos \theta}{x^2} \right) \left( -\frac{2v}{x} \right), \]
wherein the coefficient of \( \frac{1}{x} \) is \( -2v \). Hence, changing its sign and adding the result to (13.), we have
\[ \Sigma \int \frac{\sqrt{x^4 - 2m^2 x^2 \cos \theta + m^4}}{x^2} dx = \frac{3}{2} v + \frac{m^2 \cos \theta}{v} + C, \]
which agrees with (8.), art. 17.
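A numerical companion to this agreement (an added check, not part of the text): differentiating the second member $\frac{3}{2}v + \frac{m^2\cos\theta}{v}$ with respect to $v$ should reproduce $\Sigma\,\frac{vx_r+m^2}{x_r^2}\frac{dx_r}{dv}$ taken over the three roots of $\sqrt{x^4-2m^2x^2\cos\theta+m^4}=vx+m^2$, the radical being taken equal to $vx_r+m^2$ at each root, which is the sign convention implied above. With arbitrary sample values:

```python
# Editorial check: the v-derivative of (3/2)v + m^2 cos(theta)/v equals the sum over
# the roots x_r of the squared transforming equation, x(x^3 - p x - q) = 0 with
# p = v^2 + 2 m^2 cos(theta), q = 2 v m^2, of (v x_r + m^2)/x_r^2 * dx_r/dv.
import numpy as np

m2, theta, v = 1.3, 0.7, 0.9          # m2 stands for m^2; values chosen arbitrarily
p = v**2 + 2*m2*np.cos(theta)
q = 2*v*m2
roots = np.roots([1.0, 0.0, -p, -q])  # the three (possibly complex) nonzero roots

dxdv = (2*v*roots + 2*m2) / (3*roots**2 - p)      # from implicit differentiation
lhs = np.sum((v*roots + m2) / roots**2 * dxdv)
rhs = 1.5 - m2*np.cos(theta)/v**2
print(lhs.real, rhs)                  # the two agree; imaginary part is rounding noise
```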
It is, however, very much easier in the above problem, and perhaps in most others, to apply at once the fundamental theorem of transformation, as already exemplified in its solution.
24. Abel's theorem is of course included in that of Art. 23. To deduce it, we must observe that its object is to determine the value of the expression
\[ \Sigma \int \frac{f(x)}{(x-a) \sqrt{\varphi_1(x) \varphi_2(x)}} dx, \]
the simultaneous values of \( x \) being connected by the following equation,
\[ \sqrt{\frac{\varphi_2(x)}{\varphi_1(x)}} = \frac{a_0 + a_1 x + \cdots + a_n x^n}{c_0 + c_1 x + \cdots + c_n x^n}. \]
To compare with the general theorem we must therefore write (1.) in the form
\[ \Sigma \int \frac{f(x)}{(x-a) \varphi_2(x)} \sqrt{\frac{\varphi_2(x)}{\varphi_1(x)}} dx. \]
Hence we must make in (1.),
\[ \varphi = \frac{f(x)}{(x-a) \varphi_2(x)}, \quad \psi = \frac{\varphi_2(x)}{\varphi_1(x)}, \quad \chi = \frac{a_0 + a_1 x + \cdots}{c_0 + c_1 x + \cdots}. \]
We thus find
$$\Sigma \int \frac{f(x)dx}{(x-a)\sqrt{\varphi_1(x)\varphi_2(x)}} = \Theta \left[ \frac{f(x)}{(x-a)\varphi_2(x)} \right] \sqrt{\frac{\varphi_2(x)}{\varphi_1(x)}}\, \log \frac{(a_0 + a_1 x + \ldots) \sqrt{\varphi_1(x)} - (c_0 + c_1 x + \ldots) \sqrt{\varphi_2(x)}}{(a_0 + a_1 x + \ldots) \sqrt{\varphi_1(x)} + (c_0 + c_1 x + \ldots) \sqrt{\varphi_2(x)}}.$$
Here the function to be developed is
$$\frac{f(x)}{(x-a)\sqrt{\varphi_1(x)\varphi_2(x)}} \log \frac{(a_0 + a_1 x \ldots) \sqrt{\varphi_1(x)} - (c_0 + c_1 x \ldots) \sqrt{\varphi_2(x)}}{(a_0 + a_1 x \ldots) \sqrt{\varphi_1(x)} + (c_0 + c_1 x \ldots) \sqrt{\varphi_2(x)}},$$
the ascending developments having reference to $x-a$, and to the different simple factors, $x-h_1, x-h_2 \ldots$ of $\varphi_2(x)$. The coefficient of $\frac{1}{x-a}$ in the first development is evidently
$$\frac{f(a)}{\sqrt{\varphi_1(a)\varphi_2(a)}} \log \frac{(a_0 + a_1 a \ldots) \sqrt{\varphi_1(a)} - (c_0 + c_1 a \ldots) \sqrt{\varphi_2(a)}}{(a_0 + a_1 a \ldots) \sqrt{\varphi_1(a)} + (c_0 + c_1 a \ldots) \sqrt{\varphi_2(a)}}.$$
The coefficients of $\frac{1}{x-h_1}, \frac{1}{x-h_2}$ in the latter developments are 0. Hence we have
$$\Sigma \int \frac{f(x)dx}{(x-a)\sqrt{\varphi_1(x)\varphi_2(x)}} = \frac{f(a)}{\sqrt{\varphi_1(a)\varphi_2(a)}} \log \frac{(a_0 + a_1 a \ldots) \sqrt{\varphi_1(a)} - (c_0 + c_1 a \ldots) \sqrt{\varphi_2(a)}}{(a_0 + a_1 a \ldots) \sqrt{\varphi_1(a)} + (c_0 + c_1 a \ldots) \sqrt{\varphi_2(a)}}$$
$$- C_1 \frac{f(x)}{(x-a)\sqrt{\varphi_1(x)\varphi_2(x)}} \log \frac{(a_0 + a_1 x \ldots) \sqrt{\varphi_1(x)} - (c_0 + c_1 x \ldots) \sqrt{\varphi_2(x)}}{(a_0 + a_1 x \ldots) \sqrt{\varphi_1(x)} + (c_0 + c_1 x \ldots) \sqrt{\varphi_2(x)}},$$
which is Abel's theorem. There is, however, nothing gained by the peculiar form in which it supposes the integral to be expressed. The resolution of the polynomial under the radical into two factors $\varphi_1(x), \varphi_2(x)$ is only a substitute, and an inconvenient one, for the more general hypothesis of the theorem of art. 22, which permits the function under the radical sign to be any rational fraction.
25. The theorem of art. 22 is, I believe, more general than any which have been investigated with relation to the same well-marked class of transcendentals. Broch, Jurgensen, and Minding have given formulæ directly applicable to the case in which $\varphi$ is a polynomial*. Their results agree in substance with the above, under the particular restriction supposed, but they are far more complicated in expression. The introduction of the symbol $\Theta$, definite in meaning and indicating the performance of operations which are always intelligible and always possible, greatly simplifies the expression of general theorems.
26. The most general form of the problem contemplated by Abel in his theory of the comparison of transcendentals may be thus expressed. Required a finite expression for $\int f(x,y)dx$, $f(x,y)$ denoting a rational function of $x$ and $y$, the latter of which quantities is itself an irrational function of $x$ given by an algebraic equation of the form
$$p_0 y^n + p_1 y^{n-1} + p_2 y^{n-2} \ldots + p_n = 0,$$
wherein $p_0, p_1, \ldots, p_n$ are rational and integral functions of $x$, the simultaneous values of $x$ in the different integrals being determined by an equation of the form
$$y = r,$$
* Crelle's Journal, vol. xxiii.
wherein $r$ is a rational function of $x$, and of any new variables $a_1, a_2, \ldots a_r$, in terms of which the value $\int f(x,y)dx$ is to be found*. Jurgensen has remarked that this problem may be reduced to that of the determination of the value of the expression
$$\Sigma \int f(x)\varphi(x,y)dx,$$
$f(x)$ being a rational function of $x$ and $\varphi(x,y)$ a rational and integral function of $x$ and $y$; and under this form, adding also the restriction that $p_0$ the coefficient of the highest power of $y$ in (1.) shall be unity, he has solved the problem. Minding has investigated the solution when the above restriction is not imposed, but his analysis is in reality founded upon a transformation in which $p_0y$ takes the place of $y$†.
We can, both with increased generality and with that gain of simplicity which results from the employment of the symbol $\Theta$, solve the same problem by the method of this section. But as the comparison of the algebraical transcendentals is not the most important object of this paper, I do not propose to enter here upon the investigation in its most general form, but shall demonstrate a theorem which, while it is sufficiently general for all practical ends, will at the same time serve to throw light upon a peculiarity in the theorem of art. 22 already demonstrated.
27. Problem.—Required, in finite terms, the value of the integral expression
$$\Sigma \int \varphi(x)ydx,$$
$\varphi(x)$ denoting a rational function of $x$, and $y$ an irrational function of $x$ determined by an equation of the $m$th degree,
$$p_0y^m + p_1y^{m-1} + p_2y^{m-2} + \ldots + p_m = 0, \ldots \ldots \ldots (2.)$$
where $p_0, p_1, \ldots p_m$ are rational and integral functions of $x$. We shall suppose the different simultaneous values of $x$ under the sign $\Sigma$ determined by an equation
$$y = \chi, \ldots \ldots \ldots (3.)$$
$\chi$ being a rational function of $x$ of the form $\frac{u}{v}$ in which $u$ and $v$ are polynomials. The value of the integral expression (1.) is to be found in terms of the coefficients of those polynomials.
The equation (3.), cleared of the radicals contained in $y$, and arranged with respect to the powers of $\chi$, will be
$$p_0\chi^m + p_1\chi^{m-1} + \ldots + p_m = 0,$$
or writing for $\chi$ its value $\frac{u}{v}$, and clearing of fractions,
$$p_0u^m + p_1u^{m-1}v + \ldots + p_mv^m = 0. \ldots \ldots \ldots (4.)$$
This equation, on substituting for $u$ and $v$ their values as polynomials, is rational and integral with respect to $x$, and occupies the place of the equation $E = 0$ in the general theorem of transformation. We shall suppose it of the $n$th degree. It may of course
* Abel's Works, vol. ii. p. 66. † Crelle's Journal, vol. xxiii.
be exhibited in the form
\[ p_0(u-vy_1)(u-vy_2)\ldots(u-vy_m) = 0, \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots (5.) \]
\( y_1, y_2, \ldots, y_m \) being the different values of \( y \), as determined by giving different signs to the radicals in its expression. Hence we have
\[
\Sigma \int \varphi(x)\, y\, dx = \Sigma \int \varphi(x)\frac{u}{v}\, dx = \int \Theta \left[ \varphi(x)\frac{u}{v} \right] \delta \log \left\{ p_0(u-vy_1)(u-vy_2)\ldots(u-vy_m) \right\} = \int \Theta \left[ \varphi(x)\frac{u}{v} \right] \Sigma\, \frac{\delta u - y_r\, \delta v}{u - y_r v},
\]
the summation in the second member extending from \( r=1 \) to \( r=m \).
Here, transposing the symbol \( \Sigma \), the function upon which the operation \( \Theta \), whose interpretation is derived from \( \varphi(x)u/v \) is to be performed, is
\[
\varphi(x)\,\frac{u}{v}\, \frac{\delta u - y_r\, \delta v}{u - y_r v},
\]
which may be resolved into
\[
\varphi(x)y_r \frac{\delta u-y_r \delta v}{u-y_r v} + \varphi(x)/v (\delta u-y_r \delta v).
\]
Hence
\[
\Sigma \int \Theta\, \varphi(x)\, y_r\, \frac{\delta u - y_r\, \delta v}{u - y_r v} + \Sigma \int \Theta\, \frac{\varphi(x)}{v} (\delta u - y_r\, \delta v). \quad \ldots \ldots \ldots \quad (7.)
\]
We are especially to remark, that while \( \Sigma \) in the first member has reference to the \( n \) different values of \( x \) furnished by the equation (3.) or (4.), \( \Sigma \) in the second member has reference to the different values of \( y \) furnished by the equation (2.), its numerical range being from \( r=1 \) to \( r=m \).
Now considering the term
\[
\Sigma [\Theta \varphi(x)y_r \frac{\delta u-y_r \delta v}{u-y_r v}],
\]
all that part of the operation \( \Theta \) which depends upon \( v \) produces no effect. For none of the factors of \( v \) can enter in any way into \( y_r \), since those factors contain the variables \( a_0, a_1, \ldots, a_r \), from which \( y_r \) is wholly free. Again, they cannot enter into the denominator \( u-y_r v \), for then they would enter into \( u \), and the fraction \( u/v \) would not be in its lowest terms. Hence the first term in the second member of (7) becomes
\[
\Sigma [\Theta [\varphi(x)]y_r \frac{\delta u-y_r \delta v}{u-y_r v}].
\]
We can now transpose the symbols \( \int \) and \( \Theta \) and integrate. The result is
\[
\Theta [\varphi(x)] \Sigma y_r \log (u-y_r v). \ldots \ldots \ldots \ldots \ldots \ldots \ldots \ldots (8.)
\]
The last term of (7.) will, on account of the interpretation of \( \Theta \), be properly written in the form
\[
\Sigma \Theta [\varphi(x)1/v] (\delta u-y_r \delta v),
\]
which may be resolved into
\[
\Sigma \Theta [\varphi(x)1/v] \delta u - \Sigma \Theta [\varphi(x)1/v] y_r \delta v.
\]
The first term vanishes by Art. 11. The second may be reduced as follows. We have
\[-\Sigma\, \Theta \left[ \varphi(x) \frac{1}{v} \right] y_r\, \delta v = -\Theta \left[ \varphi(x) \frac{1}{v} \right] \Sigma\, y_r\, \delta v = \Theta \left[ \varphi(x) \frac{1}{v} \right] \frac{p_1}{p_0}\, \delta v.\]
Now
\[\Theta \left[ \varphi(x) \frac{1}{v} \right] \frac{p_1}{p_0}\, \delta v + \Theta \left[ \varphi(x) \frac{p_1}{p_0} \right] \frac{\delta v}{v} = \Theta \left[ \varphi(x)\, \frac{1}{v}\, \frac{p_1}{p_0} \right] \delta v + \Theta \left[ \varphi(x) \right] \frac{p_1}{p_0}\, \frac{\delta v}{v}.\]
This is evident if we collect the different parts of the interpretation of $\Theta$ from the terms in each member, observing that in all cases it is upon the same function that $\Theta$ operates.
Now the first term in the second member vanishes by Art. 11. Hence
\[\Theta \left[ \varphi(x) \frac{1}{v} \right] \frac{p_1}{p_0}\, \delta v = \Theta \left[ \varphi(x) \right] \frac{p_1}{p_0}\, \frac{\delta v}{v} - \Theta \left[ \varphi(x) \frac{p_1}{p_0} \right] \frac{\delta v}{v}.\]
Attaching now the symbol of integration to the second member, and integrating, since $\int$ and $\Theta$ are now transposable, we have
\[\Theta \left[ \varphi(x) \right] \frac{p_1}{p_0} \log v - \Theta \left[ \varphi(x) \frac{p_1}{p_0} \right] \log v.\]
Writing in the first term of this expression $-\Sigma y_r$ for $\frac{p_1}{p_0}$ and adding the result to (8.), we have
\[\Theta \left[ \varphi(x) \right] \{ \Sigma\, y_r \log (u-y_r v) - \Sigma\, y_r \log v \} - \Theta \left[ \varphi(x) \frac{p_1}{p_0} \right] \log v,\]
or
\[\Theta \left[ \varphi(x) \right] \Sigma y_r \log \left( \frac{u}{v} - y_r \right) - \Theta \left[ \varphi(x) \frac{p_1}{p_0} \right] \log v.\]
Hence replacing $\frac{u}{v}$ by $\chi$ and adding the constant of integration, we have
\[\Sigma \int \varphi(x) y dx = \Theta \left[ \varphi(x) \right] \Sigma y_r \log (\chi - y_r) - \Theta \left[ \varphi(x) \frac{p_1}{p_0} \right] \log v + C,\]
the expression required. It will be observed that as $\varphi(x)$ and $\frac{p_1}{p_0}$ are always rational functions, the operation $\Theta$ may always be performed on each term of the second member.
As the summation $\Sigma y_r \log (\chi - y_r)$ can only be effected by connecting the several terms included under $\Sigma$ by the sign of addition, it will be most convenient to express the solution in the following form,
\[\Sigma \int \varphi(x) y dx = \Sigma_{r=1}^{m} \Theta \left[ \varphi(x) \right] y_r \log (\chi - y_r) - \Theta \left[ \varphi(x) \frac{p_1}{p_0} \right] \log v + C.\]
28. If $p_1 = 0$, we have
\[\Sigma \int \varphi(x) y dx = \Sigma_{r=1}^{m} \Theta \left[ \varphi(x) \right] y_r \log (\chi - y_r) + C.\]
This includes as a particular case the theorem of Art. 22. For if $y = \psi^{\frac{m}{n}}$, we have
\[y^n - \psi^m = 0,\]
of which any root $y_r$ will be given by the formula
\[y_r = \left( \cos \frac{2r\pi}{n} + \sqrt{-1} \sin \frac{2r\pi}{n} \right) \psi^{\frac{m}{n}}.\]
Hence on substitution we have, since the number of values of \( y \) is \( n \),
\[
\Sigma \int \varphi(x)\, \psi^{\frac{m}{n}}\, dx = \Sigma_{r=1}^{n} \Theta[\varphi(x)] \left( \cos \frac{2r\pi}{n} + \sqrt{-1} \sin \frac{2r\pi}{n} \right) \psi^{\frac{m}{n}} \log \left\{ \chi - \left( \cos \frac{2r\pi}{n} + \sqrt{-1} \sin \frac{2r\pi}{n} \right) \psi^{\frac{m}{n}} \right\},
\]
which agrees with the result in question.
We now see why it is that, although in the investigation of the latter formula we had to take distinct account of the terms \( u \) and \( v \) in the fraction \( \chi \); in the final result they are recombined, and only present themselves implicitly as component parts of \( \chi \).
It is due to the fact that the equation determining the irrational factor of the original integral wants a second term, i.e. that \( p_1 = 0 \).
If \( p_0 = 1 \), the term \( -\Theta \left[ \varphi(x) \frac{p_1}{p_0} \right] \log v \) in (11.) becomes \( -\Theta[\varphi(x)]p_1 \log v \), or \( \Sigma \Theta[\varphi(x)]y_r \log v \),
whence the theorem assumes the following form,
\[
\Sigma \int \varphi(x)\, y\, dx = \Sigma_{r=1}^{m} \Theta[\varphi(x)]\, y_r \{ \log (\chi - y_r) + \log v \} = \Sigma_{r=1}^{m} \Theta[\varphi(x)]\, y_r \log (u - y_r v).
\]
We may remark in concluding this section, that all general theorems like the above for the comparison of algebraical transcendentals are difficult of application, from the necessity which they impose of developing logarithmic and irrational functions. This difficulty we avoid by employing directly the theorem of transformation exemplified in the earlier problems of this section. The application of that theorem requires only the development of rational fractions, and this can always be effected by the operations of multiplication and division. When this form of procedure is adopted there remains, however, an integration to be performed. Circumstances must decide which of these methods is preferable, but generally I conceive it will be the latter one. I will only add, that the interest attaching to the entire subject of the comparison of algebraical transcendentals appears to me to be chiefly of a speculative character. It is to be valued rather as affording evidence of the powers, and at the same time of the limitations of analysis, than as offering any prospect of increased command over the problems of physical science. Such at least seems to be the tenor of present indications.
**Application of the General Theorem of Transformation to the reduction of Functional Transcendentals.**
29. Let us first apply the general theorem to the reduction of the expression \( \Sigma \int \varphi f(\psi)\, dx \), where \( \varphi \) and \( \psi \) are any rational functions of \( x \), and \( f \) a general functional symbol, the simultaneous values of \( x \) in the several integrals being determined by an equation,
\[
\psi = v, \quad \ldots \ldots \ldots \quad (1.)
\]
in which \( v \) is a new variable.
We have
\[
\Sigma\, \varphi f(\psi)\, dx = \Sigma\, f(v)\, \varphi\, dx = f(v)\, \Sigma\, \varphi\, dx.
\]
We must now seek the value of the expression \( \Sigma f(v) \varphi dx \), subject to the condition (1.), and then integrate with respect to \( v \).
Now $\psi$ as a rational function of $x$ may be represented under the form $\frac{P}{Q}$, where $P$ and $Q$ are polynomials in $x$ not involving $v$. The equation of transformation then becomes
$$\frac{P}{Q} = v,$$
and in its rational and integral form $Qv - P = 0$.
Hence by the general theorem of transformation,
$$\Sigma\, \varphi f(\psi)\, dx = \Theta[f(v)\varphi]\, \delta \log (Qv - P) = \Theta[f(v)\varphi] \frac{Q \delta v}{Qv - P}.$$
Now the factor $f(v)$, inasmuch as it does not contain $x$, in no wise affects the interpretation of $\Theta$. Hence it may be removed from within the brackets and the equation written in the form
$$\Sigma\, \varphi f(\psi)\, dx = f(v) \Theta[\varphi] \frac{Q \delta v}{Qv - P} = f(v) \Theta[\varphi] \frac{\delta v}{v - \psi},$$
on replacing $\frac{P}{Q}$ by $\psi$.
Hence, replacing the first member by the expression for which it is an equivalent, and attaching the sign of integration,
$$\Sigma \int \varphi f(\psi)\, dx = \int f(v) \Theta[\varphi] \frac{\delta v}{v - \psi}. \quad . . . . . . . . (3.)$$
To this result we may also give the form
$$\Sigma \int \varphi f(\psi)\, dx = \Theta[\varphi] \int \frac{f(v) \delta v}{v - \psi}, \quad . . . . . . . . (4.)$$
as the symbol $\Theta$ and the function $\varphi$ are both independent of the variable $v$. And this would in fact be the best form if we could effect generally the integration in the second member. For the applications to which we shall proceed (3.), is, however, the form to be preferred.
We may express the results which have been arrived at in the following general theorem.
**Theorem.—** If $\varphi$ and $\psi$ are rational functions of $x$, and if the simultaneous values of $x$ in the integrals included in the expression $\Sigma \int \varphi f(\psi)\, dx$ are roots of an equation
$$\psi = v,$$
$v$ being a variable quantity, then
$$\Sigma \int \varphi f(\psi)\, dx = \int f(v) \Theta[\varphi] \frac{\delta v}{v - \psi}.$$
30. Whatever may be the interpretation of the functional symbol \(f\), we may so determine the form of \(\psi\) as to cause the several integrals included under the sign \(\Sigma\) in the first member to close up, if the expression is permitted, into a single definite integral whose actual value will then be given by that of a far more simple integral in the second member.
Let the limits of $v$ in the second member be $p$ and $q$, and let the transforming equation
$$\psi = v$$
give $p_1, p_2 \ldots p_{n+1}$ for the values of $x$ when $v=p$, and $q_1, q_2 \ldots q_{n+1}$ for its corresponding values when $v=q$. Then we have
$$\int_{p_1}^{q_1} \varphi f(\psi) dx + \int_{p_2}^{q_2} \varphi f(\psi) dx \ldots + \int_{p_{n+1}}^{q_{n+1}} \varphi f(\psi) dx = \int_p^q f(v) \Theta [\varphi] \frac{\delta v}{v - \psi}. \quad \ldots \quad (5.)$$
Now let us give to $\psi$ the form
$$x - \frac{a_1}{x-\lambda_1} - \frac{a_2}{x-\lambda_2} \ldots - \frac{a_n}{x-\lambda_n},$$
where $\lambda_1, \lambda_2, \ldots \lambda_n$ are real, and $a_1, a_2 \ldots a_n$ real and positive. The transforming equation is then
$$x - \frac{a_1}{x-\lambda_1} - \frac{a_2}{x-\lambda_2} \ldots - \frac{a_n}{x-\lambda_n} = v; \quad \ldots \quad \ldots \quad \ldots \quad (6.)$$
whence representing still the first member by $\psi$,
$$\frac{d\psi}{dx} = 1 + \frac{a_1}{(x-\lambda_1)^2} + \frac{a_2}{(x-\lambda_2)^2} \ldots + \frac{a_n}{(x-\lambda_n)^2};$$
and as this expression is always positive, it follows,—1st, that $\psi$ regarded as a function of $x$ is never a maximum or minimum; 2ndly, that whenever $\psi$ varies continuously while $x$ increases, it varies by way of increase.
From these properties, and from the form of the equation (6.), it readily follows that if $\lambda_1, \lambda_2, \ldots \lambda_n$ are arranged in the order of increasing magnitude, then whatever real value $v$ may have, the roots of (6.) will be real and will be disposed in the following manner, viz. one root less than $\lambda_1$, one root between $\lambda_1$ and $\lambda_2$, one between $\lambda_{n-1}$ and $\lambda_n$, and finally one between $\lambda_n$ and $\infty$. To prove this in detail, let $x$ vary from $-\infty$ to $\lambda_1$, then $\psi$, as is evident from its form, varies from $-\infty$ to $\infty$, and it varies continuously by way of increase so as never to resume a former value. Once therefore in its course it will be equal to $v$. Wherefore one root of (6.) lies between $-\infty$ and $\lambda_1$. Supposing $x$ to continue to increase, the value of $\psi$ suddenly changes when $x$ passes over the value $\lambda_1$ from $\infty$ to $-\infty$, and as $x$ varies from $\lambda_1$ to $\lambda_2$, $\psi$ again varies continuously, and by way of increase from $-\infty$ to $\infty$, and therefore again becomes equal to $v$ once in its course; wherefore a value of $x$ lies between $\lambda_1$ and $\lambda_2$. In like manner there is a value of $x$ between $\lambda_2$ and $\lambda_3$, $\lambda_{n-1}$ and $\lambda_n$. Finally, as $x$ varies from $\lambda_n$ to $\infty$, $\psi$ once more passes over the value $v$. Whence the proposition is manifest.
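The disposition of the roots just proved is easy to exhibit numerically. The following sketch (an editorial addition; the values of the $a_r$, $\lambda_r$ and $v$ are chosen arbitrarily) brackets one root of (6.) in each of the $n+1$ intervals by bisection, which is legitimate because $\psi$ increases from $-\infty$ to $\infty$ on each interval:

```python
# Editorial illustration of the location of the roots of equation (6.).
import numpy as np
from scipy.optimize import brentq

a = [0.7, 1.1, 0.4]                 # positive a's
lam = [-1.0, 0.5, 2.0]              # real lambda's in increasing order
v = 0.3
psi = lambda x: x - sum(ai/(x - li) for ai, li in zip(a, lam))

eps = 1e-9                          # keep strictly inside each interval
intervals = [(-50.0, lam[0]-eps), (lam[0]+eps, lam[1]-eps),
             (lam[1]+eps, lam[2]-eps), (lam[2]+eps, 50.0)]   # +/-50 stand in for -inf, +inf
roots = [brentq(lambda x: psi(x) - v, lo, hi) for lo, hi in intervals]
print(roots)                        # one root per interval, in increasing order
```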
The reality of all the roots of (6.) may also be readily shown in the following manner. Let $x=p+q\sqrt{-1}$. Then substituting in (6.), and reducing that equation to the form
$$A+B\sqrt{-1}=v,$$
which entails as a necessary consequence \( A = v, B = 0 \), we find as the form of the equation \( B = 0 \),
\[
q \left\{ 1 + \frac{a_1}{(p - \lambda_1)^2 + q^2} + \frac{a_2}{(p - \lambda_2)^2 + q^2} + \cdots + \frac{a_n}{(p - \lambda_n)^2 + q^2} \right\} = 0.
\]
Now as the function within the brackets is essentially positive, the above equation can only be satisfied by making \( q = 0 \). But this indicates that all the roots are real.
Resuming (5.), it is evident from what precedes, that if the lower limits of integration \( p_1, p_2 \ldots p_{n+1} \), corresponding to \( v = p \), are arranged in the ascending order of magnitude, the upper limits \( q_1, q_2 \ldots q_{n+1} \) will also be ranged in the same order. Moreover \( p_1 \) and \( q_1 \) will both be less than \( \lambda_1 \); \( p_2 \) and \( q_2 \) will lie between \( \lambda_1 \) and \( \lambda_2 \); finally, \( p_{n+1} \) and \( q_{n+1} \) will lie between \( \lambda_n \) and \( \infty \). Hence, then, the elements in the different integrals in the first member of (5.) will be all different, the superior limit of each integral being less than the inferior limit of the integral which follows it.
31. Let us now examine the case in which the integration relative to \( v \) in the second member of (5.) is from \( -\infty \) to \( \infty \).
Let \( p = -\infty \), and \( q = \infty \), we then have
\[
p_1 = -\infty, \quad p_2 = \lambda_1, \quad p_3 = \lambda_2 \ldots p_{n+1} = \lambda_n,
\]
\[
q_1 = \lambda_1, \quad q_2 = \lambda_2, \quad q_3 = \lambda_3 \ldots q_n = \lambda_n, \quad q_{n+1} = \infty.
\]
Thus \( q_1 = p_2, q_2 = p_3 \ldots q_n = p_{n+1} \), or the upper limit of each integral coincides with the lower limit of the integral which follows it.
It is more strict, however, to regard \( p \) and \( q \) as tending to the respective limits \( -\infty \) and \( \infty \). The first inferior limit, \( p_1 \), then tends to \( -\infty \), and the last superior limit, \( q_{n+1} \), to \( \infty \), while the superior limit of each integral but the last tends upward to the same limiting value to which the inferior limit of the integral following tends downward. The different integrals close up into a single definite integral taken between the limits \( -\infty \) and \( \infty \). Thus we have
\[
\int_{-\infty}^{\infty} \varphi f(\psi) dx = \int_{-\infty}^{\infty} f(v) \Theta[\varphi] \frac{\delta v}{v - \psi}. \quad . . . . . . . . (7.)
\]
The reasoning is evidently independent of the nature of the function symbolized by \( f \). That function may either be continuous or discontinuous. We thus arrive at the following theorem, in the expression of which we shall restore to \( \psi \) its complete value, and shall replace the rational function \( \varphi \) by \( \varphi(x) \), and the symbol \( \delta \), which is no longer necessary for distinction, by \( d \).
32. Theorem.—If \( \varphi(x) \) denote a rational function of \( x \), and if \( f \) be a general functional symbol, then
\[
\int_{-\infty}^{\infty} \varphi(x) f \left( x - \frac{a_1}{x - \lambda_1} - \frac{a_2}{x - \lambda_2} \ldots - \frac{a_n}{x - \lambda_n} \right) dx
\]
\[
= \int_{-\infty}^{\infty} dv f(v) \Theta[\varphi(x)] \frac{1}{v - x + \frac{a_1}{x - \lambda_1} + \frac{a_2}{x - \lambda_2} \ldots + \frac{a_n}{x - \lambda_n}}, \quad . . . . . . . . (1.)
\]
provided that \( a_1, a_2, \ldots a_n \) are real and positive, and \( \lambda_1, \lambda_2 \ldots \lambda_n \) real.
This is the general theorem of definite integration to which reference has been made. The remainder of this paper will be devoted to its illustration.
The theorem is independent, as has been said, of the nature of the functional interpretation of $f$, and, even when the factor $f\left(x - \frac{a}{x-\lambda_1} \cdots - \frac{a}{x-\lambda_n}\right)$ becomes infinite for the limiting values $\lambda_1, \lambda_2 \ldots \lambda_n$, does not fail, but carries with it a correction for the discontinuity thence arising. We cannot otherwise attach a meaning to the expression $\int_a^b f(x)dx$, when for a value $x = \lambda$, included within the limits of integration, $f(x)$ becomes infinite, than by considering it as the limit to which the sum of the integrals
$$\int_{a}^{\lambda-e} f(x)dx + \int_{\lambda+e'}^{b} f(x)dx$$
tend as $e$ and $e'$ tend to 0. According to the nature of the function $f(x)$ and the modes in which $e$ and $e'$ tend to the limit 0, the integral may become, as Cauchy has observed, finite or infinite, determinate or indeterminate. When $e' = e$, so that the approach on either side to the limiting value $\lambda$ is made in the same manner, i.e. by equivalent infinitesimal variations of $x$, the value of the integral obtained will be that which Cauchy terms its principal value. The equation (1.) will thus give the principal value of the integral in its first member, if we suppose $v$ to approach by the same kind of variations to the limits $-\infty$ and $\infty$; in other words, if, representing the function under the sign of integration in the second member by $F(v)$, we regard $\int_{-\infty}^{\infty} F(v)dv$ as the limit of the value of $\int_{-a}^{a} F(v)dv$, $a$ becoming indefinitely great. For suppose $x$ to be approaching the particular limit $\lambda_1$. The nearer its approach the more nearly (vide 6, art. 30) is the following equation realized, viz.—
$$\frac{-a_1}{x-\lambda_1} = v,$$
whence the more nearly have we
$$x = \lambda_1 - \frac{a_1}{v};$$
and therefore, if $v$ tend towards $\infty$ and $-\infty$ by equivalent variations, so also will $x$ by equivalent variations approach from above and from below the limit $\lambda_1$.
Again, the larger $x$ becomes the more nearly do $x$ and $v$ approach a ratio of equality, and therefore the mode of approach of $x$ in the first member to the limits $-\infty$ and $\infty$ determines identically the mode of approach of $v$ to $-\infty$ and $\infty$. Thus we may finally give to the theorem the following rigorous statement, viz.—
The two members of the equation
$$\int_{-a}^{a} dx \varphi(x)f\left(x - \frac{a_1}{x-\lambda_1} \cdots - \frac{a_n}{x-\lambda_n}\right) = \int_{-a}^{a} dv f(v)\Theta[\varphi(x)] \frac{1}{v-x + \frac{a_1}{x-\lambda_1} \cdots + \frac{a_n}{x-\lambda_n}}$$
approach a ratio of equality as $a$ approaches to infinity, provided that if
$$f\left(x - \frac{a_1}{x-\lambda_1} \cdots - \frac{a_n}{x-\lambda_n}\right)$$
become infinite for the critical values $\lambda_1, \lambda_2, \ldots, \lambda_n$, we suppose $x$ to approach each critical value by equivalent infinitesimal variations.
The most important, perhaps the only important, cases are those in which $f(v)$ vanishes when $v$ is infinite.
33. I shall begin with noticing some particular deductions from the theorem, among which will be included certain known formulæ of analysis. I shall show that it enables us to deduce from any known definite integral the values of an infinite number of other definite integrals of progressively increasing complexity. I shall show that when the arbitrary function under the sign of integration is regarded as discontinuous, the first member of the equation becomes resolved into a number of definite integrals of continuous functions, and that we thus arrive at the same theorem for the comparison of functional transcendentals, (5.), art. 30, from which the above theorem was itself derived. Finally, I shall apply the theorem to the extension of the theory of definite multiple integrals.
Special deductions may be obtained by limiting either,—1st, the form of the rational function $\varphi(x)$; or 2ndly, the interpretation of the functional sign $f$; or 3rdly, the number and value of the constants in the function under the sign $f$.
1st. Let $\varphi(x) = 1$. Then in the second member of (1.), art. 32,
$$\Theta[\varphi(x)] \frac{dv}{v-x+\frac{a_1}{x-\lambda_1}+\cdots+\frac{a_n}{x-\lambda_n}} = -C_1 \frac{dv}{v-x+\frac{a_1}{x-\lambda_1}+\cdots+\frac{a_n}{x-\lambda_n}}$$
$$= C_1 \frac{dv}{x-v-\frac{a_1}{x-\lambda_1}-\cdots-\frac{a_n}{x-\lambda_n}} = dv.$$
Hence
$$\int_{-\infty}^{\infty} f\left(x-\frac{a_1}{x-\lambda_1}-\cdots-\frac{a_n}{x-\lambda_n}\right)dx = \int_{-\infty}^{\infty} f(v)dv. \ldots \ldots \ldots \ldots \ldots \ldots \ldots .(2.)$$
This was the theorem, or rather the most important case of the theorem referred to in Art. 1, as published in the 'Cambridge and Dublin Mathematical Journal,' No. XIX. The following are special applications of it, chiefly selected from the paper in question.
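The following is a numerical spot-check of (2.) (added here, and of course no part of the original paper), taking a Gaussian for the arbitrary function and two terms in the transformation; the integral is split at the points $\lambda_1$, $\lambda_2$, where the integrand tends to 0:

```python
# Editorial check of equation (2.) with f(u) = exp(-u^2) and two terms.
import numpy as np
from scipy.integrate import quad

a1, a2, l1, l2 = 0.8, 1.5, -1.0, 2.0     # a's positive, lambda's real, chosen arbitrarily
f = lambda u: np.exp(-u**2)
g = lambda x: f(x - a1/(x - l1) - a2/(x - l2))

pieces = [(-np.inf, l1), (l1, l2), (l2, np.inf)]
left = sum(quad(g, lo, hi)[0] for lo, hi in pieces)
right = quad(f, -np.inf, np.inf)[0]      # = sqrt(pi)
print(left, right)                       # the two agree
```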
Since we have
$$\int_{-\infty}^{\infty} f\left(x-\frac{a}{x}\right)dx = \int_{-\infty}^{\infty} f(v)dv, \ldots \ldots \ldots \ldots \ldots \ldots \ldots .(3.)$$
and
$$x^2 + \frac{a^2}{x^2} = \left(x-\frac{a}{x}\right)^2 + 2a,$$
we shall have
$$\int_{-\infty}^{\infty} f\left(x^2 + \frac{a^2}{x^2}\right)dx = \int_{-\infty}^{\infty} f(v^2+2a)dv. \ldots \ldots \ldots \ldots \ldots \ldots \ldots .(4.)$$
Hence
$$\int_{-\infty}^{\infty} e^{-\left(x^2 + \frac{a^2}{x^2}\right)}dx = \int_{-\infty}^{\infty} e^{-2a} \times e^{-v^2}dv = \pi^{1/2} e^{-2a}. \ldots \ldots \ldots \ldots \ldots \ldots \ldots .(5.)$$
In like manner
\[ \int_{-\infty}^{\infty} \cos \left( x^2 + \frac{a^2}{x^2} \right) dx = \int_{-\infty}^{\infty} \cos (v^2 + 2a) dv \]
\[ = \cos 2a \int_{-\infty}^{\infty} \cos (v^2) dv - \sin 2a \int_{-\infty}^{\infty} \sin (v^2) dv \]
\[ = \left( \frac{\pi}{2} \right)^{\frac{1}{2}} (\cos 2a - \sin 2a) \]
and similarly,
\[ \int_{-\infty}^{\infty} \sin \left( x^2 + \frac{a^2}{x^2} \right) dx = \left( \frac{\pi}{2} \right)^{\frac{1}{2}} (\cos 2a + \sin 2a), \]
which are known relations. We may by the same method deduce the relations
\[ \int_{-\infty}^{\infty} dx\, e^{-\left( x^2 + \frac{a^2}{x^2} \right) \cos \theta} \cos \left( \left( x^2 + \frac{a^2}{x^2} \right) \sin \theta \right) = \pi^{\frac{1}{2}} e^{-2a \cos \theta} \cos \left( 2a \sin \theta + \frac{\theta}{2} \right), \]
\[ \int_{-\infty}^{\infty} dx\, e^{-\left( x^2 + \frac{a^2}{x^2} \right) \cos \theta} \sin \left( \left( x^2 + \frac{a^2}{x^2} \right) \sin \theta \right) = \pi^{\frac{1}{2}} e^{-2a \cos \theta} \sin \left( 2a \sin \theta + \frac{\theta}{2} \right), \]
originally given by Cauchy. All the above definite integrals are reduced by the theorem to immediate dependence on the fundamental theorem
\[ \int_{-\infty}^{\infty} e^{-ax^2} dx = \frac{\pi^{\frac{1}{2}}}{\sqrt{a}}. \]
Again, let us consider the definite integral
\[ u = \int_{0}^{\infty} \frac{x^{n-\frac{1}{2}}\, dx}{(a + bx + cx^2)^n}. \]
Making \( x = z^2 \), we have
\[ u = 2 \int_{0}^{\infty} \frac{dz}{\left( b + cz^2 + \frac{a}{z^2} \right)^n} \]
\[ = \int_{-\infty}^{\infty} \frac{dz}{\left[ b + c \left( z^2 + \frac{a}{cz^2} \right) \right]^n} = \int_{-\infty}^{\infty} \frac{dv}{\left[ b + c \left( v^2 + 2 \sqrt{\frac{a}{c}} \right) \right]^n} \text{ by } (4.) \]
\[ = \int_{-\infty}^{\infty} \frac{dv}{(b + 2 \sqrt{ac} + cv^2)^n} \]
\[ = 2 \int_{0}^{\infty} \frac{dv}{(b + 2 \sqrt{ac} + cv^2)^n}. \]
Now from the known theorem
\[ \int_{0}^{\infty} \frac{dt \cdot t^{a-1}}{(1+t)^r} = \frac{\Gamma(a) \Gamma(r-a)}{\Gamma(r)}, \]
in which \( r > a \), we readily deduce
\[ \int_{0}^{\infty} \frac{v^m\, dv}{(p + qv^2)^n} = \frac{1}{2p^{n-\frac{m+1}{2}}q^{\frac{m+1}{2}}}\, \frac{\Gamma \left( n - \frac{m+1}{2} \right) \Gamma \left( \frac{m+1}{2} \right)}{\Gamma(n)}; \]
whence, reducing the above expression for \( u \), we find
\[
\int_{0}^{\infty} \frac{x^{n-\frac{1}{2}}\, dx}{(a + bx + cx^2)^n} = \frac{1}{c^{\frac{1}{2}}\left(b + 2\sqrt{ac}\right)^{n-\frac{1}{2}}} \cdot \frac{\Gamma\left(\frac{1}{2}\right) \Gamma\left(n - \frac{1}{2}\right)}{\Gamma(n)}; \quad (11.)
\]
and hence on changing \( x \) into \( \frac{1}{x} \), we find
\[
\int_{0}^{\infty} \frac{x^{n-\frac{3}{2}}\, dx}{(a + bx + cx^2)^n} = \frac{\Gamma\left(\frac{1}{2}\right) \Gamma\left(n - \frac{1}{2}\right)}{\Gamma(n)\, a^{\frac{1}{2}}\left(b + 2\sqrt{ac}\right)^{n-\frac{1}{2}}}. \quad (12.)
\]
The two last theorems were discovered independently and about the same time by Mr. Cayley, Professor Thomson* and Schlömilch†. It is to be noted that \( a, b, \) and \( c \) must be positive.
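A quick numerical check of (11.) as printed above (an editorial addition; the exponent in the numerator is read as $n-\frac{1}{2}$, and the sample values of $a$, $b$, $c$, $n$ are arbitrary positive numbers):

```python
# Editorial check of (11.) against direct quadrature.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a, b, c, n = 2.0, 1.0, 3.0, 3
integrand = lambda x: x**(n - 0.5) / (a + b*x + c*x**2)**n
value = quad(integrand, 0, np.inf)[0]
closed = gamma(0.5)*gamma(n - 0.5) / (gamma(n) * np.sqrt(c) * (b + 2*np.sqrt(a*c))**(n - 0.5))
print(value, closed)                 # the two agree
```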
The above examples have been selected in the first instance because they relate to known results. But there is not one of the results arrived at which may not be generalized to an indefinite degree.
Thus, since we have
\[
\int_{-\infty}^{\infty} dv e^{-vn} = \frac{2}{n} \Gamma\left(\frac{1}{n}\right); \quad (13.)
\]
we have
\[
\int_{-\infty}^{\infty} dx\, e^{-\left(x - \frac{a}{x}\right)^n} = \frac{2}{n} \Gamma\left(\frac{1}{n}\right). \quad (14.)
\]
If \( n = 2 \), this gives
\[
\int_{-\infty}^{\infty} dx\, e^{-\left(x - \frac{a}{x}\right)^2} = \pi^{\frac{1}{2}},
\]
whence
\[
\int_{-\infty}^{\infty} e^{-\left(x^2 + \frac{a^2}{x^2}\right)} dx = \pi^{\frac{1}{2}} e^{-2a}, \quad (15.)
\]
agreeing with (5.). But let \( n = 4 \), and we find
\[
\int_{-\infty}^{\infty} e^{-\left(x^4 + \frac{a^4}{x^4}\right) + 4a\left(x^2 + \frac{a^2}{x^2}\right)} dx = \frac{1}{2} \Gamma\left(\frac{1}{4}\right) e^{6a^2}; \quad (16.)
\]
and so on indefinitely without even proceeding to employ the more general forms of (2.).
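As an added numerical check of the last result (16.), which is (14.) with $n=4$ after expanding $\left(x-\frac{a}{x}\right)^4$:

```python
# Editorial check of (16.) for an arbitrary positive a.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

a = 0.6
integrand = lambda x: np.exp(-(x**4 + a**4/x**4) + 4*a*(x**2 + a**2/x**2))
value = quad(integrand, -np.inf, 0)[0] + quad(integrand, 0, np.inf)[0]   # split at the point x = 0
print(value, 0.5*gamma(0.25)*np.exp(6*a**2))     # the two agree
```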
34. Let us examine the definite integral
\[
u = \int_{-\infty}^{\infty} x^{2n} f\left(x - \frac{a}{x}\right) dx. \quad (17.)
\]
By the general theorem (1.) we have
\[
u = \int_{-\infty}^{\infty} dv f(v) \Theta[x^{2n}] \frac{1}{v-x + \frac{a}{x}} = \int_{-\infty}^{\infty} dv f(v) C_1 \frac{x^{2n+1}}{x^2 - vx - a}. \quad (18.)
\]
For, \( x^{2n} \) not being fractional, the interpretation of \( \Theta \) is reduced simply to \( -C_1 \). It is obviously desirable to express the term \( C_1 \frac{x^{2n+1}}{x^2 - vx - a} \) in a series consisting of powers of \( v \).
Now
\[
\frac{x^{2n+1}}{x^2 - vx - a} = x^{2n+1} \left\{ \frac{1}{x^2 - a} + \frac{vx}{(x^2 - a)^2} + \frac{v^2x^2}{(x^2 - a)^3} + \ldots \right\} = \frac{x^{2n+1}}{x^2 - a} + v \frac{x^{2n+2}}{(x^2 - a)^2} + v^2 \frac{x^{2n+3}}{(x^2 - a)^3} + \ldots. \quad (19.)
\]
* Cambridge and Dublin Mathematical Journal, vol. ii. † Crelle, vol. xxxii.
We are permitted to give this form to the expression before developing in descending powers of \( x \), because, on thus developing the several terms in the second member, no higher power of \( x \) will present itself than would be obtained by developing the first member in descending powers of \( x \).
The general term of (19.) is
\[
\frac{v^i x^{2n+i+1}}{(x^2-a)^{i+1}}.
\]
If \( i \) be odd, there will be no terms of the form \( \frac{A}{x} \) in the development of this function in descending powers of \( x \). Let \( i \) be even and be expressed by \( 2m \). Then we have to develope in descending powers of \( x \) a series of functions of the form
\[
\frac{v^{2m} x^{2n+2m+1}}{(x^2-a)^{2m+1}}.
\]
The coefficient of \( \frac{1}{x} \) in the development of this expression is easily found to be
\[
\frac{(2m+1)(2m+2)\ldots(n+m)}{1\cdot2\ldots(n-m)} a^{n-m} v^{2m}.
\]
Therefore
\[
C_1 \frac{x^{2n+1}}{x^2-vx-a} = \sum_{m=0}^{n} \frac{(2m+1)(2m+2)\ldots(n+m)}{1\cdot2\ldots(n-m)} a^{n-m} v^{2m},
\]
the summation extending from \( m=0 \) to \( m=n \). Hence
\[
\int_{-\infty}^{\infty} x^{2n} f\left(x - \frac{a}{x}\right) dx = \sum_{m=0}^{n} \frac{(2m+1)(2m+2)\ldots(n+m)}{1\cdot2\ldots(n-m)}\, a^{n-m} \int_{-\infty}^{\infty} dv\, v^{2m} f(v).
\]
(20.)
In the particular case in which the function denoted by \( f \) is even, we have, on replacing \( f(x) \) by \( \varphi(x^2) \),
\[
\int_{0}^{\infty} x^{2n} \varphi\left(\left(x - \frac{a}{x}\right)^2\right) dx = \sum_{m=0}^{n} \frac{(2m+1)(2m+2)\ldots(n+m)}{1\cdot2\ldots(n-m)}\, a^{n-m} \int_{0}^{\infty} dv\, v^{2m} \varphi(v^2).
\]
(21.)
This, when \( a=1 \), is Cauchy's theorem referred to in Art. 1. Some valuable illustrations of it will be found in the Corollaries to the memoir of which it forms the subject.
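The case $n=2$ of (20.) is easily checked numerically (an added verification, with $f(v)=e^{-v^2}$ and an arbitrary positive $a$); the coefficients prescribed by (20.) are then $a^2$, $3a$ and $1$:

```python
# Editorial check of (20.) for n = 2:
#   integral x^4 f(x - a/x) dx = a^2*I0 + 3a*I2 + I4, with I2m = integral v^(2m) f(v) dv.
import numpy as np
from scipy.integrate import quad

a = 0.7
f = lambda v: np.exp(-v**2)
g = lambda x: x**4 * f(x - a/x)
left = quad(g, -np.inf, 0)[0] + quad(g, 0, np.inf)[0]
I = [quad(lambda v, k=k: v**k * f(v), -np.inf, np.inf)[0] for k in (0, 2, 4)]
right = a**2 * I[0] + 3*a * I[1] + I[2]
print(left, right)                   # the two agree
```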
We may employ (20.) or (21.) to generalize the results given in (11.) and (12.). We may thus finitely determine the values of the integrals
\[
\int_{0}^{\infty} \frac{x^{n+i-\frac{1}{2}}\, dx}{(a+bx+cx^2)^n}, \quad \int_{0}^{\infty} \frac{x^{n-i-\frac{3}{2}}\, dx}{(a+bx+cx^2)^n},
\]
\( i \) being an integer. For the former integral we shall have the expression
\[
\sum_{m=0}^{i} \frac{(2m+1)(2m+2)\ldots(i+m)}{1\cdot2\ldots(i-m)} \left(\frac{a}{c}\right)^{i-m} \times \frac{1}{c^{\frac{2m+1}{2}}\left(b+2\sqrt{ac}\right)^{n-\frac{2m+1}{2}}} \times \frac{\Gamma\left(n-\frac{2m+1}{2}\right) \Gamma\left(\frac{2m+1}{2}\right)}{\Gamma(n)}.
\]
(22.)
For the latter integral we shall only have to change in the above, \( a \) into \( c \) and \( c \) into \( a \).
The results in (11.) and (12.), and the more general conclusions just obtained, are of importance in some of the more difficult problems connected with the mathematical theory of electricity. It is probable that a result equivalent to (22.) may be obtained
by some formulae of Mr. Cayley's connected with the reduction of the integrals which occur in certain problems of this class. I have not, however, attempted a verification.
34. Although the list which I have given of results obtainable by other methods might be increased, it is still only in comparatively rare instances that the means of independent verification present themselves. We might by transformations such as Cauchy has employed, verify the theorem
$$\int_{-\infty}^{\infty} \frac{dx \cos m \left( x - \frac{a}{x} \right)}{1 + \left( x - \frac{a}{x} \right)^2} = \pi e^{-m};$$
but it would not be easy by any such process to verify the theorem
$$\int_{-\infty}^{\infty} \frac{dx \cos \left\{ m \left( x - \frac{a_1}{x - \lambda_1} \cdots \frac{a_n}{x - \lambda_n} \right) \right\}}{1 + \left( x - \frac{a_1}{x - \lambda_1} \cdots \frac{a_n}{x - \lambda_n} \right)^2} = \pi e^{-m},$$
$a_1$ and $a_2$, &c., being any positive quantities, and $\lambda_1$, $\lambda_2$, &c., any real quantities whatever. I shall not, however, dwell any longer upon special results, but shall briefly state some of the general consequences which flow from the application of the primary theorem (1.).
1st. The evaluation of any definite integral
$$\int_{-\infty}^{\infty} \varphi(x)f(x - \frac{a_1}{x - \lambda_1} \cdots \frac{a_n}{x - \lambda_n}) dx,$$
in which $\varphi(x)$ is a rational and integral function of $x$, is reducible to that of the definite integral
$$\int_{-\infty}^{\infty} \psi(v)f(v) dv,$$
in which $\psi(v)$ is a rational and integral function of an order not higher than the order of $\varphi(x)$.
For, by the general theorem,
$$\int_{-\infty}^{\infty} dx \varphi(x)f(x - \frac{a_1}{x - \lambda_1} \cdots \frac{a_n}{x - \lambda_n}) = \int_{-\infty}^{\infty} dv f(v)\Theta[\varphi(x)] \frac{1}{v - x + \frac{a_1}{x - \lambda_1} \cdots + \frac{a_n}{x - \lambda_n}},$$
since $\varphi(x)$ is integral. If we develope the fractions $\frac{a_1}{x - \lambda_1}$, $\frac{a_2}{x - \lambda_2}$, &c., in descending powers of $x$, we shall have
$$\frac{\varphi(x)}{x - v - \frac{a_1}{x - \lambda_1} - \cdots - \frac{a_n}{x - \lambda_n}} = \frac{\varphi(x)}{x - v - \frac{\Sigma a_r}{x} - \frac{\Sigma a_r \lambda_r}{x^2} - \cdots}.$$
Suppose \( m \) to be the highest index of \( x \) in \( \varphi(x) \), then the development of the right-hand member, in descending powers of \( x \) by division, will assume the following form,
\[
B_0 x^{m-1} + B_1 x^{m-2} + B_2 x^{m-3} \ldots + B_m x^{-1} + B_{m+1} x^{-2} \ldots + \&c.,
\]
\( B_0 \) not containing \( v \), \( B_1 \) containing the first power of \( v \), \( B_2 \) the first and second power, &c. Hence \( B_m \), the coefficient of \( x^{-1} \), will involve powers of \( v \) up to the \( m \)th. Let this function be represented by \( \psi(v) \), and we have
\[
\int_{-\infty}^{\infty} dx \varphi(x) f\left( x - \frac{a_1}{x - \lambda_1} \ldots - \frac{a_n}{x - \lambda_n} \right) = \int_{-\infty}^{\infty} \psi(v) f(v) dv,
\]
\( \psi(v) \) being of the same order with respect to \( v \) as \( \varphi(x) \) with respect to \( x \).
The following are particular examples:
\[
\int_{-\infty}^{\infty} dx \cdot x f\left( x - \frac{a_1}{x - \lambda_1} \ldots - \frac{a_n}{x - \lambda_n} \right) = \int_{-\infty}^{\infty} vf(v) dv,
\]
\[
\int_{-\infty}^{\infty} dx \cdot x^2 f\left( x - \frac{a_1}{x - \lambda_1} \ldots - \frac{a_n}{x - \lambda_n} \right) = \int_{-\infty}^{\infty} (v^2 + a_1 + a_2 \ldots + a_n) f(v) dv,
\]
\[
\int_{-\infty}^{\infty} dx \cdot x^3 f\left( x - \frac{a_1}{x - \lambda_1} \ldots - \frac{a_n}{x - \lambda_n} \right) = \int_{-\infty}^{\infty} \left[ v^3 + 2(a_1 \ldots + a_n)v + a_1 \lambda_1 \ldots + a_n \lambda_n \right] f(v) dv.
\]
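A numerical spot-check (added) of the second of these formulas, with two terms in the transformation and $f(v)=e^{-v^2}$:

```python
# Editorial check: integral x^2 f(x - a1/(x-l1) - a2/(x-l2)) dx
#                = integral (v^2 + a1 + a2) f(v) dv, with f(v) = exp(-v^2).
import numpy as np
from scipy.integrate import quad

a1, a2, l1, l2 = 0.5, 1.2, -0.3, 1.1
f = lambda v: np.exp(-v**2)
g = lambda x: x**2 * f(x - a1/(x - l1) - a2/(x - l2))
left = sum(quad(g, lo, hi)[0] for lo, hi in [(-np.inf, l1), (l1, l2), (l2, np.inf)])
right = quad(lambda v: (v**2 + a1 + a2) * f(v), -np.inf, np.inf)[0]
print(left, right)                   # the two agree
```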
2ndly. The evaluation of the definite integral
\[
\int_{-\infty}^{\infty} dx \varphi(x) f\left( x - \frac{a_1}{x - \lambda_1} \ldots - \frac{a_n}{x - \lambda_n} \right),
\]
where \( \varphi(x) \) is a rational fraction, is reducible to that of the definite integral
\[
\int_{-\infty}^{\infty} dv \psi(v) f(v),
\]
where \( \psi(v) \) is a rational fraction of the same order as \( \varphi(x) \).
By a rational fraction of the same order, I mean one whose numerator is of the same degree, and whose denominator involves the same number of simple factors elevated to the same powers, the only difference arising from the constant coefficients.
By the general theorem we have
\[
\int_{-\infty}^{\infty} dx \varphi(x) f\left( x - \frac{a_1}{x - \lambda_1} \ldots - \frac{a_n}{x - \lambda_n} \right) = \int_{-\infty}^{\infty} dv f(v) \Theta[\varphi(x)] \frac{1}{v - x + \frac{a_1}{x - \lambda_1} \ldots + \frac{a_n}{x - \lambda_n}},
\]
in which, on account of the distributive character of the symbol \( \Theta \), we may resolve \( \varphi(x) \) into its component terms, and give to \( \Theta \) in succession the respective interpretations which they afford.
The component terms of \( \varphi(x) \) will be of the forms \( ax^i \) and \( \frac{a}{(x-p)^i} \), \( i \) being an integer.
We have just considered the effect of the first class of terms, and it only remains to consider that of the second class.
Now
\[
\Theta \left[ \frac{a}{(x-h)^i} \right] \frac{1}{v-x + \frac{a_1}{x-\lambda_1} + \cdots + \frac{a_n}{x-\lambda_n}}
= \frac{a}{1\cdot 2 \cdots (i-1)} \left( \frac{d}{dh} \right)^{i-1} \frac{1}{v-h + \frac{a_1}{h-\lambda_1} + \cdots + \frac{a_n}{h-\lambda_n}}
+ C_1 \frac{a}{(x-h)^i \left( x-v - \frac{a_1}{x-\lambda_1} - \cdots - \frac{a_n}{x-\lambda_n} \right)},
\]
the former of the two terms in the second member being the coefficient of \( \frac{1}{x-h} \) in the development of the function in ascending powers of \( x-h \).
It is evident that the latter of the terms in the second member vanishes; for the first term of the development in descending powers of \( x \) being \( \frac{a}{x^{i+1}} \), there will be no term of the form \( \frac{A}{x} \). Hence we have merely to consider the term
\[
\frac{a}{1.2 \cdots (i-1)} \cdot \left( \frac{d}{dh} \right)^{i-1} \frac{1}{v-H},
\]
\( H \) standing for
\[
h - \frac{a_1}{h-\lambda_1} \cdots - \frac{a_n}{h-\lambda_n}.
\]
If \( i=1 \), (8.) becomes
\[
\frac{a}{v-H}.
\]
Let \( i=2 \), and (8.) becomes
\[
\frac{a}{(v-H)^2} \frac{dH}{dh}.
\]
If \( i=3 \), (8.) becomes
\[
\frac{a}{1\cdot 2} \left[ \frac{\frac{d^2H}{dh^2}}{(v-H)^2} + \frac{2\left( \frac{dH}{dh} \right)^2}{(v-H)^3} \right].
\]
Hence, generally,
\[
\frac{a}{1.2 \cdots (i-1)} \left( \frac{d}{dh} \right)^{i-1} \frac{1}{v-H} = \frac{H_1}{v-H} + \frac{H_2}{(v-H)^2} \cdots + \frac{H_i}{(v-H)^i},
\]
\( H_1, H_2, \ldots H_i \) being independent of \( v \). The second member of the above equation, on addition of its several terms, becomes a rational fraction whose denominator is \( (v-H)^i \), and whose numerator is a rational and integral function of \( v \) of an order not higher than \( i-1 \). Hence the theorem is demonstrated.
35. The conclusion to which these investigations lead is a remarkable one, and may be thus expressed. The evaluation of the definite integral
\[
\int_{-\infty}^{\infty} \frac{p_0 + p_1 x + p_2 x^2 \cdots + p_k x^k}{q_0 + q_1 x + q_2 x^2 \cdots + q_j x^j}\, f\left(x - \frac{a_1}{x-\lambda_1} - \cdots - \frac{a_n}{x-\lambda_n}\right) dx
\]
is reducible to that of a definite integral of the form
\[
\int_{-\infty}^{\infty} \frac{P_0 + P_1 v + P_2 v^2 \cdots + P_k v^k}{Q_0 + Q_1 v + Q_2 v^2 \cdots + Q_j v^j}\, f(v)\, dv,
\]
$P_0, P_1, \ldots Q_0, Q_1, \ldots$ being constants whose values in terms of $p_1, p_2, \ldots q_1, q_2, \ldots$ can always be finitely determined.
As particular examples of the above, we should have
$$\int_{-\infty}^{\infty} \frac{dx}{x-h} f\left(x - \frac{a_1}{x-\lambda_1} \cdots - \frac{a_n}{x-\lambda_n}\right) = \int_{-\infty}^{\infty} \frac{f(v)dv}{v-h + \frac{a_1}{h-\lambda_1} \cdots + \frac{a_n}{h-\lambda_n}}$$
$$\int_{-\infty}^{\infty} \frac{dx}{(x-h)^2} f\left(x - \frac{a_1}{x-\lambda_1} \cdots - \frac{a_n}{x-\lambda_n}\right) = \left[1 + \frac{a_1}{(h-\lambda_1)^2} + \cdots + \frac{a_n}{(h-\lambda_n)^2}\right] \int_{-\infty}^{\infty} \frac{f(v)dv}{\left(v-h + \frac{a_1}{h-\lambda_1} \cdots + \frac{a_n}{h-\lambda_n}\right)^2}.$$
In the last theorem the particular case in which $h=\lambda_1$, the function $f(v)$ being at the same time supposed small for very large values of $v$, is interesting. We see that as $h$ approaches $\lambda_1$ the terms $\frac{a_1}{(h-\lambda_1)^2}$ and $\frac{a_1}{h-\lambda_1}$ become large in comparison with those with which they are connected by addition. Thus the second member approaches the value
$$\frac{a_1}{(h-\lambda_1)^2} \int_{-\infty}^{\infty} \frac{f(v)dv}{\left(\frac{a_1}{h-\lambda_1}\right)^2} = \frac{1}{a_1} \int_{-\infty}^{\infty} f(v)dv,$$
so that in the limit
$$\int_{-\infty}^{\infty} \frac{dx}{(x-\lambda_1)^2} f\left(x - \frac{a_1}{x-\lambda_1} \cdots - \frac{a_n}{x-\lambda_n}\right) = \frac{1}{a_1} \int_{-\infty}^{\infty} f(v)dv.$$
It may be worth while to verify this theorem in a particular instance. We have from it
$$\int_{-\infty}^{\infty} \frac{dx}{(x-\lambda)^2} f\left(x - \frac{a}{x-\lambda}\right) = \frac{1}{a} \int_{-\infty}^{\infty} f(v)dv.$$
Now assume in the first member
$$\frac{-a}{x-\lambda} = y-\lambda.$$
The transformed integral is easily found to be
$$\frac{1}{a} \int dy f(y - \frac{a}{y-\lambda}).$$
As to the limits, when $x$ varies from $-\infty$ to $\lambda$, $y$ varies from $\lambda$ to $\infty$; and when $x$ varies from $\lambda$ to $\infty$, $y$ varies from $-\infty$ to $\lambda$. Thus, by mere transposition of the two portions of the integral, the limits of $y$ become the same as those of $x$, and we have
$$\int_{-\infty}^{\infty} \frac{dx}{(x-\lambda)^2} f\left(x - \frac{a}{x-\lambda}\right) = \frac{1}{a} \int_{-\infty}^{\infty} dy f(y - \frac{a}{y-\lambda}) = \frac{1}{a} \int_{-\infty}^{\infty} f(v)dv \text{ by (2.), art. 32.}$$
As a particular deduction from the above we shall have
$$\int_{-\infty}^{\infty} \frac{dx}{x^2}\, e^{-\left(x^2+\frac{a^2}{x^2}\right)} = \frac{\pi^{\frac{1}{2}}\, e^{-2a}}{a},$$
which may also be verified by differentiating (5.), art. 32, with respect to $a$. We are permitted thus to differentiate with respect to $a$, because the function under the sign of integration does not become infinite within the limits. This condition must be strictly attended to in all similar attempts at verification.
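A modern numerical check of the last equation may also be sketched as follows; the value \( a = 1.3 \) is an arbitrary illustrative choice, not taken from the text:

```python
# Compare a numerical quadrature of the integral with the closed form.
import numpy as np
from scipy.integrate import quad

a = 1.3
f = lambda x: np.exp(-(x**2 + a**2 / x**2)) / x**2
val, _ = quad(f, 0.0, np.inf)            # the integrand is even: double the half-line
print(2 * val, np.sqrt(np.pi) * np.exp(-2 * a) / a)   # both ~ 0.1013
```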
3rdly. From the value of any known definite integral, we can, by the general theorem, deduce either the values of other definite integrals taken between the limits $-\infty$ and $\infty$, or relations among the values of those integrals taken between other limits.
To accomplish the first object, we have only to transform the given integral into one whose limits are $-\infty$ and $\infty$, and then apply directly the general theorem. The method requires no illustration.
To accomplish the second object, we must express the function under the sign of integration, not as a continuous function taken between the given limits, but as a discontinuous function taken between the limits $-\infty$ and $\infty$, the character of its discontinuity being such, that for all values within the given limits of integration it shall assume the form specified, and for all values without the given limits shall vanish.
Thus let $\int_{p}^{q} f(v)dv$ be the definite integral whose value is given. We may extend the integration from $-\infty$ to $\infty$, provided that we regard $f(v)$ as vanishing when $v$ falls without the limits $p$ and $q$. We shall thus have
$$\int_{-\infty}^{\infty} f(x - \frac{a_1}{x - \lambda_1} - \cdots - \frac{a_n}{x - \lambda_n}) dx = \int_{-\infty}^{\infty} f(v)dv = \int_{p}^{q} f(v)dv,$$
provided that $f(x - \frac{a_1}{x - \lambda_1} - \cdots - \frac{a_n}{x - \lambda_n})$ vanishes whenever $x - \frac{a_1}{x - \lambda_1} - \cdots - \frac{a_n}{x - \lambda_n}$ falls without the limits $p$ and $q$. Let the roots of the equation
$$x - \frac{a_1}{x - \lambda_1} - \cdots - \frac{a_n}{x - \lambda_n} = p,$$
taken in ascending order of magnitude, be $p_1, p_2, \ldots p_{n+1}$, and the roots of
$$x - \frac{a_1}{x - \lambda_1} - \cdots - \frac{a_n}{x - \lambda_n} = q,$$
taken in the same order, be $q_1, q_2, \ldots q_{n+1}$. We suppose $\lambda_1, \lambda_2, \ldots \lambda_n$ also to be in ascending order of magnitude. Thus (art. 30) $p_1$ and $q_1$ lie between $-\infty$ and $\lambda_1$, $p_2$ and $q_2$ between $\lambda_1$ and $\lambda_2$, and so on. Also $q_1$ is greater than $p_1$, $q_2$ than $p_2$, &c. Hence we see that $x - \frac{a_1}{x - \lambda_1} - \cdots - \frac{a_n}{x - \lambda_n}$ will only fall within the limits $p$ and $q$ when $x$ falls either between $p_1$ and $q_1$, or between $p_2$ and $q_2$, &c. Thus we have in fact
$$\sum \int_{p_i}^{q_i} dx\, f\left(x - \frac{a_1}{x - \lambda_1} - \cdots - \frac{a_n}{x - \lambda_n}\right) = \int_{p}^{q} f(v)dv,$$
and still more generally, $\varphi(x)$ being a rational function of $x$,
$$\sum \int_{p_i}^{q_i} \varphi(x)f\left(x - \frac{a_1}{x - \lambda_1} - \cdots - \frac{a_n}{x - \lambda_n}\right) dx = \int_{p}^{q} dv\, f(v)\,\Theta[\varphi(x)] \frac{1}{v - x + \frac{a_1}{x - \lambda_1} + \cdots + \frac{a_n}{x - \lambda_n}},$$
which is a reproduction of (5.), art. 30. I deem it, however, an important fact, that in the comparison of functional transcendentals, formulae involving the sign of summation
may be dispensed with; a more general conception of the nature of a function supplying their place.
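The following numerical sketch (not part of the original) illustrates the first of the two theorems above in the simplest case \( n=1 \), with \( \lambda_1 = 0 \), \( a_1 = 1 \), \( p = -1 \), \( q = 1 \), and the arbitrary choice \( f(v) = e^{-v^2} \):

```python
# Sum of the two branch integrals of f(x - 1/x) against the direct integral of f.
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

f = lambda v: np.exp(-v**2)
phi = lambda x: x - 1.0 / x                 # the transformed argument

# roots of x - 1/x = -1 and of x - 1/x = 1, in ascending order
p1, p2 = (-1 - np.sqrt(5)) / 2, (-1 + np.sqrt(5)) / 2
q1, q2 = (1 - np.sqrt(5)) / 2, (1 + np.sqrt(5)) / 2

lhs = quad(lambda x: f(phi(x)), p1, q1)[0] + quad(lambda x: f(phi(x)), p2, q2)[0]
rhs = quad(f, -1, 1)[0]                     # = sqrt(pi) * erf(1)
print(lhs, rhs, np.sqrt(np.pi) * erf(1))    # all three agree (~1.4936)
```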
36. One remarkable theorem must still be noticed. Since
\[ \sin x = x \left(1 - \frac{x^2}{\pi^2}\right) \left(1 - \frac{x^2}{2^2\pi^2}\right) \left(1 - \frac{x^2}{3^2\pi^2}\right) \cdots, \]
we have, on taking the logarithms of both sides and differentiating,
\[ \cot x = \frac{1}{x} + \frac{1}{x + \pi} + \frac{1}{x - \pi} + \frac{1}{x + 2\pi} + \frac{1}{x - 2\pi} + \cdots \quad (1). \]
Hence
\[ x - \cot x = x - \frac{1}{x} - \frac{1}{x + \pi} - \frac{1}{x - \pi} - \frac{1}{x + 2\pi} - \cdots \quad &c. \]
Whence, by (2.), art. 32,
\[ \int_{-\infty}^{\infty} dx f(x - \cot x) = \int_{-\infty}^{\infty} dv f(v). \quad \cdots \cdots \cdots \cdots \quad (2). \]
The result may however be generalized. For from (1.),
\[ a_1 \cot(x - \lambda_1) = \frac{a_1}{x - \lambda_1} + \frac{a_1}{x - \lambda_1 + \pi} + \frac{a_1}{x - \lambda_1 - \pi} + \cdots \quad &c. \]
Taking the sum of any series of such terms, we shall evidently have
\[ x - a_1 \cot(x - \lambda_1) - a_2 \cot(x - \lambda_2) \cdots - a_n \cot(x - \lambda_n) \]
\[ = x - \frac{a_1}{x - \lambda_1} - \frac{a_2}{x - \lambda_2} - \cdots - \frac{a_n}{x - \lambda_n} \cdots, \]
which agrees in form with the function under the sign \( f \) in (2.), art. 32. Hence
\[ \int_{-\infty}^{\infty} dx f(x - a_1 \cot(x - \lambda_1) \cdots - a_n \cot(x - \lambda_n)) = \int_{-\infty}^{\infty} dv f(v). \quad \cdots \cdots \cdots \cdots \quad (3). \]
If we treat in the same way the theorem
\[ \cos x = \left(1 - \frac{4x^2}{\pi^2}\right) \left(1 - \frac{4x^2}{3^2\pi^2}\right) \left(1 - \frac{4x^2}{5^2\pi^2}\right) \cdots, \]
we shall arrive at the theorem
\[ \int_{-\infty}^{\infty} dx f(x + a_1 \tan(x - \lambda_1) \cdots + a_n \tan(x - \lambda_n)) = \int_{-\infty}^{\infty} dv f(v). \quad \cdots \cdots \cdots \cdots \quad (4). \]
Essentially, however, this result is involved in (3.), the analogy of which with (2.), Art. 32, will be most apparent if we place it in the form
\[ \int_{-\infty}^{\infty} dx f\left\{x - \frac{a_1}{\tan(x - \lambda_1)} \cdots - \frac{a_n}{\tan(x - \lambda_n)}\right\} = \int_{-\infty}^{\infty} dv f(v). \]
As before, the quantities \( a_1, a_2 \ldots a_n \) must be positive.
The verification of these theorems by some independent method seems desirable.
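In that spirit, a rough numerical sketch (no part of the original; the choice \( f(v) = e^{-v^2} \) is arbitrary) may be made of (2.): the first member is summed period by period, the neglected periods contributing terms of order \( 1/N \).

```python
# Compare the integral of f(x - cot x), summed over the periods (k*pi, (k+1)*pi),
# with the integral of f(v) over (-inf, inf), which here equals sqrt(pi).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def integrand(x):
    return np.exp(-(x - 1.0 / np.tan(x)) ** 2)   # f(x - cot x)

N, total = 120, 0.0
for k in range(-N, N):
    a, b = k * np.pi, (k + 1) * np.pi
    # cot x = x has one root per period; the integrand is a narrow bump centred
    # there, so give the root to quad as a breakpoint.
    root = brentq(lambda x: 1.0 / np.tan(x) - x, a + 1e-9, b - 1e-9)
    val, _ = quad(integrand, a, b, points=[root], limit=200)
    total += val

print(total, np.sqrt(np.pi))   # roughly 1.769... against 1.77245..., the
                               # difference being of the order of the dropped tails
```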
Application of the General Theorem to the evaluation of multiple definite integrals.
37. The form in which multiple definite integrals present themselves in the application of mathematics to natural philosophy, is usually the following. The value of a triple integral,
$$\iiint \varphi(x, y, z) \, dx \, dy \, dz,$$
is required to be found, the integration extending, in some instances, to all positive values, but more generally to all values whatever of the variables $x, y, z$ which satisfy a condition
$$\psi(x, y, z) \leq 1. \quad \ldots \ldots \ldots \ldots \quad (2.)$$
The most general method of treating this class of problems is due to M. Lejeune Dirichlet. It consists in converting $\varphi(x, y, z)$ into a discontinuous function which vanishes whenever the variables $x, y, z$ transcend the limits assigned in (2.), and which is equal to $\varphi(x, y, z)$ whenever those variables satisfy the above condition. This transformation being effected, we are permitted to regard the integrations relatively to $x, y, z$ as independent, and as individually taken between the limits $-\infty$ and $\infty$.
From this circumstance the progress of our knowledge of multiple definite integrals must be in some degree coordinate with the extension of our command over single definite integrals taken between the limits $-\infty$ and $\infty$. I propose in this section both to illustrate, by one or two examples, the theory of multiple integrals as above stated, and to show how it gains extension from the theorem of definite integration demonstrated in the preceding pages.
There are several different forms in which the application of the principle of discontinuity to the evaluation of multiple integrals may be presented. The form which I shall adopt in this paper is similar to one originally given by me in the Transactions of the Royal Irish Academy (vol. xxi. pt. 1), but is more convenient in application. It depends essentially upon the employment of Fourier's theorem, viz.—
$$f(x) = \frac{1}{\pi} \int_{-\infty}^{\infty} \int_{0}^{\infty} da \, dv \cos(av - xv)f(a).$$
If we write the cosine in its exponential form, we have
$$f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \int_{0}^{\infty} da \, dv \left[ e^{(av - xv)\sqrt{-1}} + e^{-(av - xv)\sqrt{-1}} \right] f(a).$$
Now by known theorems
$$\frac{1}{t^i} = \frac{e^{\frac{i\pi}{2}\sqrt{-1}}}{\Gamma(i)} \int_{0}^{\infty} dw \cdot w^{i-1} e^{-tw\sqrt{-1}} = \frac{e^{-\frac{i\pi}{2}\sqrt{-1}}}{\Gamma(i)} \int_{0}^{\infty} dw \cdot w^{i-1} e^{tw\sqrt{-1}}.$$
Multiply the terms of (3.) by those of (4.) taken in the same order, and, converting the exponentials into sines and cosines, we have
$$\frac{f(x)}{t^i} = \frac{1}{\pi \Gamma(i)} \int_{-\infty}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} da \, dv \, dw \cos \left( av - xv - tw + \frac{i\pi}{2} \right) w^{i-1} f(a),$$
the theorem employed in the Irish Transactions.
Now \( v \) remaining unchanged, let \( w = vs \). Then transforming in the usual way, we have
\[
dv dw = v dv ds,
\]
whence, on substitution,
\[
\frac{f(x)}{t^i} = \frac{1}{\pi \Gamma(i)} \int_{-\infty}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} da\, dv\, ds \cos \left\{ (a - x - ts)v + \frac{i\pi}{2} \right\} v^{i} s^{i-1} f(a).
\]
In the application of this theorem to the reduction of multiple integrals, \( x \) and \( t \) will be replaced by functions of the variables involved in those integrals. Its advantages are the following. Like Fourier's theorem, from which it is derived, it enables us to express any species of discontinuity in the function \( f(x) \). Thus if \( f(x) \) is to vanish for all values of \( x \) which lie without the limits \( p \) and \( q \), we have only to substitute \( p \) and \( q \) for \(-\infty\) and \(\infty\) in the integration relative to \( a \). At the same time the theorem presents \( x \) and \( t \) in a functional connexion, which in all the most important cases renders possible the subsequent integrations without any new transformation.
One subsidiary theorem remains to be noticed. Since we have
\[
\int_{-\infty}^{\infty} dy \cos (a \pm cy^2) = \sqrt{\frac{\pi}{c}} \cos \left( a \pm \frac{\pi}{4} \right),
\]
we have, by successive applications of this theorem,
\[
\int_{-\infty}^{\infty} \ldots dy_1 dy_2 \ldots dy_n \cos \left( a \pm \sum_{r=1}^{n} c_r y_r^2 \right) = \frac{\pi^{\frac{n}{2}}}{(c_1 c_2 \ldots c_n)^{\frac{1}{2}}} \cos \left( a \pm \frac{n\pi}{4} \right).
\]
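The first of these equations may be recalled from the known values of the Fresnel integrals (a short independent derivation, not in the original): since
\[
\int_{-\infty}^{\infty} \cos (cy^2)\, dy = \int_{-\infty}^{\infty} \sin (cy^2)\, dy = \sqrt{\frac{\pi}{2c}},
\]
we have
\[
\int_{-\infty}^{\infty} \cos (a \pm cy^2)\, dy = \sqrt{\frac{\pi}{2c}} \left( \cos a \mp \sin a \right) = \sqrt{\frac{\pi}{c}} \cos \left( a \pm \frac{\pi}{4} \right),
\]
and the \( n \)-fold form follows by repeated application.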
38. Example 1st. Let it be required to evaluate the multiple definite integral
\[
V = \int \ldots dx_1 dx_2 \ldots dx_n \frac{f(l_1 x_1 + l_2 x_2 + \ldots + l_n x_n)}{\{ h^2 + (a_1 - x_1)^2 + (a_2 - x_2)^2 + \ldots + (a_n - x_n)^2 \}^{i}}, \quad \ldots \ldots \ldots \ldots \quad (1.)
\]
the integration extending to all values of the variables which satisfy the condition
\[
l_1 x_1 + l_2 x_2 + \ldots + l_n x_n < q.
\]
If for convenience we represent \( l_1 x_1 + l_2 x_2 + \ldots + l_n x_n \) by \( \Sigma l_r x_r \) and \( (a_1 - x_1)^2 + (a_2 - x_2)^2 + \ldots + (a_n - x_n)^2 \) by \( \Sigma (a_r - x_r)^2 \), we have by the theorem (6),
\[
\frac{f(l_1 x_1 + l_2 x_2 + \ldots + l_n x_n)}{\{ h^2 + (a_1 - x_1)^2 + (a_2 - x_2)^2 + \ldots + (a_n - x_n)^2 \}^{i}} = \frac{1}{\pi \Gamma(i)} \int_{-\infty}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} da\, dv\, ds \cos \left\{ \left(a - \Sigma l_r x_r - \left(h^2 + \Sigma (a_r - x_r)^2\right) s\right)v + \frac{i\pi}{2} \right\} v^{i} s^{i-1} f(a).
\]
The conditions relative to the limits will be fulfilled by introducing \( p \) and \( q \) for \(-\infty\) and \(\infty\) in the above. Effecting this change and extending, as we then may do, the integrations relative to \( x_1, x_2, \ldots, x_n \) from \(-\infty\) to \(\infty\), we have
\[
V = \frac{1}{\pi \Gamma(i)} \int_{p}^{q} \int_{0}^{\infty} \int_{0}^{\infty} da\, dv\, ds\, v^{i} s^{i-1} f(a)\, T,
\]
where
\[
T = \int_{-\infty}^{\infty} \ldots dx_1 dx_2 \ldots dx_n \cos \left\{ \left(a - h^2 s - \Sigma \left(l_r x_r + s (a_r - x_r)^2\right)\right) v + \frac{i\pi}{2} \right\}.
\]
Now \( l_r x_r + s (a_r - x_r)^2 = s \left( x_r - a_r + \frac{l_r}{2s} \right)^2 + l_r a_r - \frac{l_r^2}{4s} = sy_r^2 + l_r a_r - \frac{l_r^2}{4s} \) if \( y_r = x_r - a_r + \frac{l_r}{2s} \). Substituting and observing that the limits of \( y_r \) are \(-\infty\) and \(\infty\), we have
\[
T = \int_{-\infty}^{\infty} \ldots dy_1 dy_2 \ldots dy_n \cos \left( \left(a - h^2 s - \sum \left(sy_r^2 + l_r a_r - \frac{l_r^2}{4s}\right)\right) v + \frac{i\pi}{2} \right)
\]
\[
= \int_{-\infty}^{\infty} \ldots dy_1 dy_2 \ldots dy_n \cos \left( \left(a - h^2 s - \sum l_r a_r + \frac{\sum l_r^2}{4s}\right) v - \sum vs\, y_r^2 + \frac{i\pi}{2} \right).
\]
Let \( \sigma = h^2 s + \sum l_r a_r - \frac{\sum l_r^2}{4s} \).
Then
\[
T = \int_{-\infty}^{\infty} \ldots dy_1 dy_2 \ldots dy_n \cos \left( (a - \sigma) v - \sum vs\, y_r^2 + \frac{i\pi}{2} \right) = \frac{\pi^{\frac{n}{2}}}{(vs)^{\frac{n}{2}}} \cos \left( (a - \sigma) v + \frac{i\pi}{2} - \frac{n \pi}{4} \right),
\]
whence
\[
V = \frac{\pi^{\frac{n}{2}-1}}{\Gamma(i)} \int_p^q \int_0^\infty \int_0^\infty da\, dv\, ds\, v^{i-\frac{n}{2}} s^{i-\frac{n}{2}-1} \cos \left( (a - \sigma) v + \left( i - \frac{n}{2} \right) \frac{\pi}{2} \right) f(a) = \frac{\pi^{\frac{n}{2}}}{\Gamma(i)} \int_0^\infty ds\, s^{i-\frac{n}{2}-1}\, Q,
\]
where
\[
Q = \frac{1}{\pi} \int_p^q \int_0^\infty da\, dv\, v^{i-\frac{n}{2}} \cos \left( (a - \sigma) v + \left( i - \frac{n}{2} \right) \frac{\pi}{2} \right) f(a)
\]
\[
= \left( -\frac{d}{d\sigma} \right)^{i-\frac{n}{2}} \frac{1}{\pi} \int_p^q \int_0^\infty da\, dv \cos \left( (a - \sigma) v \right) f(a)
\]
\[
= \left( -\frac{d}{d\sigma} \right)^{i-\frac{n}{2}} f(\sigma)
\]
by Fourier's theorem, \( f(\sigma) \) vanishing when \( \sigma \) does not fall within the limits \( p \) and \( q \).
Thus, finally,
\[
V = \frac{\pi^{\frac{n}{2}}}{\Gamma(i)} \int_0^\infty ds\, s^{i-\frac{n}{2}-1} \left( -\frac{d}{d\sigma} \right)^{i-\frac{n}{2}} f(\sigma), \quad \ldots \ldots \ldots \ldots \quad (4.)
\]
the fully expressed value of \( \sigma \) being
\[
\sigma = h^2 s + l_1 a_1 + l_2 a_2 \ldots + l_n a_n - \frac{l_1^2 + l_2^2 \ldots + l_n^2}{4s}.
\]
We may remark, that as
\[
\frac{d\sigma}{ds} = h^2 + \frac{l_1^2 + l_2^2 \ldots + l_n^2}{4s^2},
\]
\( \sigma \) increases with \( s \). As \( s \) varies from 0 to \( \infty \), \( \sigma \) varies from \(-\infty\) to \( \infty \). Thus, whatever may be the values of \( p \) and \( q \) within which the variation of \( \sigma \) is confined, there will exist corresponding positive values within which the variation of \( s \) will be confined, and those limits must take the place of 0 and \( \infty \) in the expression of (4.).
39. In strictness there is no need of referring to the limit in the statement of general theorems like the above involving an arbitrary symbol of functionality. The consistent interpretation of that symbol will suffice. Thus the results at which we have arrived are virtually included in the theorem
\[
\int_{-\infty}^{\infty} \ldots dx_1 dx_2 \ldots dx_n \frac{f(l_1 x_1 + l_2 x_2 \ldots + l_n x_n)}{\{h^2 + (a_1 - x_1)^2 + (a_2 - x_2)^2 \ldots + (a_n - x_n)^2\}^{i}} = \frac{\pi^{\frac{n}{2}}}{\Gamma(i)} \int_0^\infty ds\, s^{i-\frac{n}{2}-1} \left( -\frac{d}{d\sigma} \right)^{i-\frac{n}{2}} f(\sigma),
\]
\( \sigma \) having the value given above. For if \( l_1 x_1 + l_2 x_2 \ldots + l_n x_n \) is confined within given limits, \( f(l_1 x_1 + l_2 x_2 \ldots + l_n x_n) \) may be regarded as vanishing whenever \( l_1 x_1 + l_2 x_2 \ldots + l_n x_n \) transcends
those limits, and therefore, by consistency of interpretation, \( f(\sigma) \) as vanishing whenever \( \sigma \) transcends the same limits.
40. I shall not enter into any discussion of the above solution, but shall briefly point out in what way it may be generalized by the theorem of definite integration of Art. 32. It is evident that if we change in (6.)
\[
x_1 \text{ into } x_1 - \frac{b_1}{x_1 - \lambda_1} - \frac{b_2}{x_1 - \lambda_2} \ldots - \frac{b_m}{x_1 - \lambda_m},
\]
\[
x_2 \text{ into } x_2 - \frac{c_1}{x_2 - \mu_1} - \frac{c_2}{x_2 - \mu_2} \ldots - \frac{c_m}{x_2 - \mu_m},
\]
\&c. \&c. \&c.
or into any of the remarkable forms thence derived, Art. 36, leaving \( dx_1 dx_2 \ldots dx_n \) unchanged, the actual value of the multiple integral will be unaltered. Thus, as a particular illustration, if we suppose
\[
V = \iiint dx\,dy\,dz \frac{f\left( l\left(x-\frac{1}{x}\right) + m\left(y-\frac{1}{y}\right) + n\left(z-\frac{1}{z}\right) \right)}{\left[ h^2 + \left( a-x+\frac{1}{x} \right)^2 + \left( b-y+\frac{1}{y} \right)^2 + \left( c-z+\frac{1}{z} \right)^2 \right]^{i}},
\]
the integrations being limited only by the condition
\[
0 < l\left( x-\frac{1}{x} \right) + m\left( y-\frac{1}{y} \right) + n\left( z-\frac{1}{z} \right) < 1,
\]
we should find
\[
V = \frac{\pi^{\frac{3}{2}}}{\Gamma(i)} \int_0^\infty ds \cdot s^{i-\frac{3}{2}-1} \left( -\frac{d}{d\sigma} \right)^{i-\frac{3}{2}} f(\sigma),
\]
where
\[
\sigma = h^2 s + la + mb + nc - \frac{l^2 + m^2 + n^2}{4s},
\]
provided that \( f(\sigma)=0 \) when \( \sigma \) does not fall within the limits 0 and 1.
41. Example 2nd. Let
\[
V = \int \ldots dx_1 dx_2 \ldots dx_n \frac{f\left\{ l_1^2 \left( x_1^2 + \frac{a_1^2}{x_1^2} \right) \ldots + l_n^2 \left( x_n^2 + \frac{a_n^2}{x_n^2} \right) \right\}}{\left\{ h^2 + m_1^2 \left( x_1^2 + \frac{b_1^2}{x_1^2} \right) \ldots + m_n^2 \left( x_n^2 + \frac{b_n^2}{x_n^2} \right) \right\}^{i}},
\]
the integration extending to all values of the variables which satisfy the condition
\[
l_1^2 \left( x_1^2 + \frac{a_1^2}{x_1^2} \right) \ldots + l_n^2 \left( x_n^2 + \frac{a_n^2}{x_n^2} \right) \leq 1.
\]
Here, after reductions similar to those which have been exemplified in the preceding problem, but more complicated, we find
\[
V = \frac{\pi^{\frac{n}{2}}}{\Gamma(i)} \int_0^\infty \frac{ds \cdot s^{i-1}}{\{(l_1^2 + m_1^2 s)(l_2^2 + m_2^2 s) \cdots (l_n^2 + m_n^2 s)\}^{\frac{1}{2}}} \left( -\frac{d}{d\sigma} \right)^{i-\frac{n}{2}} f(\sigma), \quad \ldots \ldots \ldots \ldots \quad (1.)
\]
wherein \( \sigma = h^2 s + 2(l_1^2 + m_1^2 s)^{\frac{1}{2}} (l_1^2 a_1^2 + m_1^2 b_1^2 s)^{\frac{1}{2}} + \cdots + 2(l_n^2 + m_n^2 s)^{\frac{1}{2}} (l_n^2 a_n^2 + m_n^2 b_n^2 s)^{\frac{1}{2}} \),
\( f(\sigma) \) being supposed to vanish when \( \sigma \) transcends the limits 0 and 1. This theorem admits of the same kind of generalization as the preceding one. It was communicated by me, some years ago, to Mr. Cayley, and published by him, at my desire, in Liouville's 'Journal,' vol. xiii.
If in the above theorem we make \( a_1 = 0, \ldots a_n = 0 \), we have an expression for the value of the multiple integral
\[
\int \ldots dx_1 dx_2 \ldots dx_n \frac{f(l_1^2 x_1^2 + l_2^2 x_2^2 + \cdots + l_n^2 x_n^2)}{\left\{ h^2 + m_1^2 \left( x_1^2 + \frac{b_1^2}{x_1^2} \right) + \cdots + m_n^2 \left( x_n^2 + \frac{b_n^2}{x_n^2} \right) \right\}^{i}},
\]
the integrations being extended through the mass of the ellipsoid whose equation is
\[
l_1^2 x_1^2 + l_2^2 x_2^2 + \cdots + l_n^2 x_n^2 = 1.
\]
The following example, originally published by me, but without any intimation of its possible extension by means of the general theorem of definite integration, in the memoir already referred to in the 'Transactions of the Royal Irish Academy,' is of great practical importance.
42. Example 3rd. Required the value of the multiple integral
\[
V = \int \ldots dx_1 dx_2 \ldots dx_n \frac{f \left( \frac{x_1^2}{h_1^2} + \frac{x_2^2}{h_2^2} + \cdots + \frac{x_n^2}{h_n^2} \right)}{\left\{ h^2 + (a_1 - x_1)^2 + (a_2 - x_2)^2 + \cdots + (a_n - x_n)^2 \right\}^{i}} \quad \ldots \quad (1.)
\]
the integrations extending to all values of the variables which satisfy the condition
\[
\frac{x_1^2}{h_1^2} + \frac{x_2^2}{h_2^2} + \cdots + \frac{x_n^2}{h_n^2} \leq 1. \quad \ldots \quad (2.)
\]
Here \( V \) will be found by integrating with respect to \( x_1, x_2, \ldots x_n \) between the limits \(-\infty\) and \(\infty\) the expression
\[
\frac{1}{\pi \Gamma(i)} \int_0^1 \int_0^\infty \int_0^\infty da\, dv\, ds\, v^{i} s^{i-1} \cos \left\{ \left( a - \sum \frac{x_r^2}{h_r^2} - s \left( h^2 + \Sigma (a_r - x_r)^2 \right) \right) v + \frac{i \pi}{2} \right\} f(a).
\]
Hence changing, as before, the order of integration,
\[
V = \frac{1}{\pi \Gamma(i)} \int_0^1 \int_0^\infty \int_0^\infty da\, dv\, ds\, v^{i} s^{i-1} f(a)\, T,
\]
where
\[
T = \int_{-\infty}^\infty dx_1 dx_2 \ldots dx_n \cos \left\{ \left( a - h^2 s - \sum \left( \frac{x_r^2}{h_r^2} + s (a_r - x_r)^2 \right) \right) v + \frac{i \pi}{2} \right\}.
\]
Now
\[
\frac{x_r^2}{h_r^2} + s (a_r - x_r)^2 = \frac{1 + h_r^2 s}{h_r^2} y_r^2 + \frac{s a_r^2}{1 + h_r^2 s}
\]
if we make
\[
y_r = x_r - \frac{h_r^2 a_r s}{1 + h_r^2 s}.
\]
Substituting and integrating with respect to \( y_1, y_2, \ldots y_n \) between the limits \(-\infty\) and \(\infty\), we have
\[
T = \frac{\pi^{\frac{n}{2}}\, h_1h_2\ldots h_n}{v^{\frac{n}{2}} (1 + h_1^2s)^{\frac{1}{2}} (1 + h_2^2s)^{\frac{1}{2}} \ldots (1 + h_n^2s)^{\frac{1}{2}}} \cos \left( \left( a - h^2s - \sum_{r=1}^{n} \frac{a_r^2s}{1 + h_r^2s} \right) v + \left( i - \frac{n}{2} \right) \frac{\pi}{2} \right);
\]
whence, if we make
\[
\sigma = h^2s + \frac{a_1^2s}{1 + h_1^2s} + \ldots + \frac{a_n^2s}{1 + h_n^2s},
\]
we have
\[
V = \frac{\pi^{\frac{n}{2}-1}h_1h_2\ldots h_n}{\Gamma(i)} \int_0^1 \int_0^\infty \int_0^\infty \frac{da\, dv\, ds \cdot v^{i-\frac{n}{2}} s^{i-1}}{(1 + h_1^2s)^{\frac{1}{2}} (1 + h_2^2s)^{\frac{1}{2}} \ldots (1 + h_n^2s)^{\frac{1}{2}}} \cos \left( \left( a - \sigma \right) v + \left( i - \frac{n}{2} \right) \frac{\pi}{2} \right) f(a) = \frac{\pi^{\frac{n}{2}} h_1h_2\ldots h_n}{\Gamma(i)} \int_0^\infty \frac{ds \cdot s^{i-1}}{(1 + h_1^2s)^{\frac{1}{2}} \ldots (1 + h_n^2s)^{\frac{1}{2}}} \left( -\frac{d}{d\sigma} \right)^{i-\frac{n}{2}} f(\sigma). \quad \ldots \ldots \quad (5.)
\]
Here \( \sigma \) increases continuously with \( s \). As \( s \) varies from 0 to \(\infty\), \( \sigma \) also varies from 0 to \(\infty\). To any positive limits of \( \sigma \) will correspond positive limits of \( s \), and these, as will hereafter appear, will in certain cases replace the limits 0 and \(\infty\) in the expression for \( V \).
It is also deserving of note that \( \sigma \) may be placed in the form
\[
\sigma = \frac{a_1^2}{h_1^2} + \frac{a_2^2}{h_2^2} + \ldots + \frac{a_n^2}{h_n^2} + h^2 s - \frac{a_1^2}{h_1^4\left(s + \frac{1}{h_1^2}\right)} - \ldots - \frac{a_n^2}{h_n^4\left(s + \frac{1}{h_n^2}\right)},
\]
which only differs by a constant term from the form of the function \( \psi \) in the general theorem of definite integration. It follows at once from this that all the values of \( s \) corresponding to a given value of \( \sigma \) will be real and that only one of them will be positive, the others lying between limits expressed by \(-\infty\), \(-\frac{1}{h_1^2}\), \(-\frac{1}{h_2^2}\), \(\ldots\), \(-\frac{1}{h_n^2}\) supposing these ranged in order of ascending magnitude.
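This form of \( \sigma \) depends only upon the elementary identity
\[
\frac{a_r^2 s}{1 + h_r^2 s} = \frac{a_r^2}{h_r^2} - \frac{a_r^2}{h_r^4 \left( s + \frac{1}{h_r^2} \right)},
\]
applied to each term of the sum.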
The above example admits of the same generalization as the first which we considered. The value of \( V \) remains unchanged if, both in the original integral (1.) and in the equation of the limits (2.), we substitute \( x_1 - \frac{a_1}{x_1 - \lambda_1} - \frac{a_2}{x_1 - \lambda_2} - \ldots - \frac{a_m}{x_1 - \lambda_m} \) for \( x_1 \), with similar but not necessarily the same transformations for \( x_2, x_3 \ldots x_n \), leaving \( dx_1, dx_2 \ldots dx_n \) unaltered; or if we employ the derived transformations of Art. 36.
I do not conceive that these extensions possess any kind of prospective value or importance, beyond what must attach to all real additions to our knowledge of the Integral Calculus. Upon such questions it is, however, almost always unsafe to speculate. We can do little more than address ourselves patiently to follow the tracks which open before us, without attempting to prescribe their direction or conjecture their end.
The interpretation of the formulæ which have been arrived at in this section is so far distinct from the general course and design of the paper, that I have thrown into a note such observations as I have to offer upon the subject. And I have rather taken this course, because those observations have been to some extent anticipated in a former publication. See Note B.
I have not attempted the further extension of the theory of multiple integrals, which would seem to be involved in the general theorem of Art. 32, when other values than unity are assigned to the function \( \varphi(x) \). Neither have I attempted to extend that theorem to the case in which \( \varphi(x) \) is rational—a case evidently of some importance from its formal connexion with the last member of (5.).
**Note A.**
*On the Connexion between the Symbol \( \Theta \) and Cauchy’s Symbol \( \mathcal{E} \), employed in the Calculus of Residues.*
It has been explained that the operation denoted by the symbol \( \mathcal{E} \) is equivalent to that portion of the operation represented by \( \Theta \) which depends upon the ascending developments of the subject function. There is, however, a difference in the mode of statement.
Thus \( \mathcal{E} \left[ \frac{x^2}{(x-a)^2(x-b)} \right] \) would denote, according to Cauchy’s definition, the sum of the coefficients of \( \frac{1}{x} \) in the developments of the respective functions \( \frac{(x+a)^2}{x^2(x+a-b)} \) and \( \frac{(x+b)^2}{(x+b-a)^2 x} \), in ascending powers of \( x \). This would be the same as the sum of the coefficients of \( \frac{1}{x-a} \) and \( \frac{1}{x-b} \) in the respective ascending developments of the primitive function \( \frac{x^2}{(x-a)^2(x-b)} \), in ascending powers of \( x-a \) and \( x-b \) respectively. The operation \( \Theta \) would add to the above the coefficient with changed sign of \( \frac{1}{x} \) in the development of the same function in descending powers of \( x \).
We shall perhaps best exemplify the connexion thus established between the two symbols, by applying the new symbol \( \Theta \) to the solution of some of the problems which Cauchy has treated by the Calculus of Residues. I select for the purpose,—1st, the problem of the integration of linear differential equations with constant coefficients; 2ndly, the problem of the integration of rational fractions.
Both these applications depend on a transformation of the rational function \( f(v) \).
1st. It follows from the subsidiary theorem (6.), Art. 13, that if we have
\[
v-x=0, \tag{1.}
\]
as an equation connecting \( x \) and \( v \), then
\[
\Sigma f(x)=\Theta[f(x)]\frac{1}{v-x}.
\]
But since from (1.) \( x \) has only one value, viz. \( v \), it is evident that \( \Sigma f(x) = f(v) \). Hence
\[
f(v) = \Theta [f(x)] \frac{1}{v-x}. \tag{2.}
\]
Now let \( f\left(\frac{d}{d\theta}\right)u = U \) represent any differential equation with constant coefficients, \( \theta \) being the independent, \( u \) the dependent variable, and \( U \) a given function of \( \theta \). To secure the utmost generality, I will suppose \( f\left(\frac{d}{d\theta}\right) \) a rational fraction. Thus if the equation were
\[
\frac{d^2u}{d\theta^2} + u = \frac{dU}{d\theta},
\]
we should have
\[
f\left(\frac{d}{d\theta}\right) = \frac{d^2 + 1}{d}.
\]
Now
\[
u = \left[f\left(\frac{d}{d\theta}\right)\right]^{-1} U = \Theta [f(x)^{-1}] \left(\frac{d}{d\theta} - x\right)^{-1} U \text{ by (2.)} = \Theta [f(x)^{-1}] e^{x\theta} \left\{ \int e^{-x\theta} U\, d\theta + \psi(x) \right\}, \tag{3.}
\]
\( \psi(x) \) being an arbitrary function of \( x \). This is the complete solution of the differential equation proposed. The reader will have no difficulty in applying it to particular cases. The arbitrary constants have their origin in the complementary function \( \psi(x) \).
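As a minimal illustration (supplied here, and not in the original), let \( f\left(\frac{d}{d\theta}\right) = \frac{d}{d\theta} - m \). Then \( f(x)^{-1} = \frac{1}{x-m} \), and, the first part of the operation alone contributing (the case of a rational and integral \( f \) remarked upon below), the coefficient of \( \frac{1}{x-m} \) in the ascending development gives
\[
u = e^{m\theta} \int e^{-m\theta} U\, d\theta + C e^{m\theta},
\]
the familiar solution of \( \frac{du}{d\theta} - mu = U \).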
In the Calculus of Residues the theorem corresponding to (2.) is the following:
\[
f(v) = \mathcal{E} \left[ \frac{f(x)}{v-x} \right] + \mathcal{E} \left[ \frac{f\left(\frac{1}{x}\right)}{x(1-vx)} \right]. \tag{4.}
\]
The first term in this expression for \( f(v) \) is equivalent to the result of the first part of the operation \( \Theta \) in the second member of (2.), viz. that which depends upon the developments which are effected in ascending powers of the simple factors of the denominator of \( f(x) \). The second term is a transformation of the result of the second part of the operation \( \Theta \). It would be more conveniently expressed in the form
\[
-C_1 \frac{f(x)}{v-x}.
\]
The solution of the equation \( f\left(\frac{d}{d\theta}\right)u = U \), furnished by (4.), will evidently be
\[
u = \mathcal{E} \left[ f(x)^{-1} \right] e^{x\theta} \left\{ \int e^{-x\theta} U\, d\theta + \psi(x) \right\} - \mathcal{E} \left[ \frac{f\left(\frac{1}{x}\right)}{x} \right] e^{x\theta} \left\{ \int e^{-x\theta} U\, d\theta + \psi(x) \right\}, \tag{5.}
\]
\( \psi(x) \) being an arbitrary function of \( x \).
When, as indeed is usually the case, \( f\left(\frac{d}{d\theta}\right) \) is a rational and integral function of the symbol \( \frac{d}{d\theta} \), the second member of (5.) reduces to its first term. The symbols \( \Theta \) and \( \mathcal{E} \) then become identical.
2ndly. From the equation \( v - x = 0 \), we have, by the general theorem of transformation,
\[
\Sigma f(x) dx = \Theta [f(x)] \frac{dv}{v-x}.
\]
Whence, \( x \) having only one value in terms of \( v \),
\[
f(v)dv = \Theta[f(x)] \frac{dv}{v-x},
\]
and integrating,
\[
\int f(v)dv = \Theta[f(x)] \{\log (v-x) + \psi(x)\},
\]
\( \psi(x) \) being an arbitrary function of \( x \). It is easily seen that \( \Theta[f(x)]\psi(x) \) may be represented by \( C \), whence
\[
\int f(v)dv = \Theta[f(x)] \log (v-x) + C. \quad \ldots \ldots \ldots \ldots \quad (6.)
\]
Cauchy has very extensively employed the Calculus of Residues in the evaluation of definite integrals taken between the limits 0 and \( \infty \). These applications are among the most valuable portions of his writings. They have no connexion, however, with the researches of this paper, and I have not even examined whether they would be in any degree generalized by the adoption of the symbol \( \Theta \).
**Note B.**
*On the Interpretation of the Formulae for the Evaluation of Multiple Integrals.*
The three principal formulæ, (4.) art. 38, (1.) art. 41, and (5.) art. 42, evidently possess a common type. In each of them we recognize under the sign \( \int \), a function \( f(\sigma) \) which may be discontinuous within the limits of integration, and which is at the same time subject to an operation of general differentiation. This is a combination which is at least unusual in analysis. I purpose to consider here some of the questions of interpretation which it suggests. To some extent, indeed, these questions have been considered in my previous memoir, already referred to; but one of the most important of them, the effect of discontinuity in the function \( f(\sigma) \) upon the integral in which it is involved, admits of being presented in a more satisfactory light. I do not propose to enter upon a complete investigation of the latter question, but only to examine one or two special and well-marked cases, in the hope of directing the attention of others to the subject.
When there is but one variable, and the index of differentiation is 0, the formulæ reduce in effect to ordinary integral transformations. And it is quite worthy of observation, that in this way the formula (4.), Art. 38, leads to known modular transformations of the elliptic functions. Thus if we make \( n=1, h=1, i=\frac{1}{2}, a_1=0 \) and drop the suffix from \( x_1 \) and \( l_1 \), we have
\[
\int \frac{dxf(lx)}{(1+x^2)^i} = \int ds \frac{f(\sigma)}{s},
\]
where \( \sigma=s-\frac{l^2}{4s} \) and \( f(\sigma) \) vanishes when \( \sigma \) transcends the limits of \( lx \). But this amounts to saying that
\[
\int \frac{dxf(lx)}{(1+x^2)^i} = \int ds \frac{f\left(s-\frac{l^2}{4s}\right)}{s},
\]
provided that
\[ lx = s - \frac{l^2}{4s}, \]
a result easily verified. Now let \( f(lx) = \frac{1}{\sqrt{1 + l^2 x^2}} \), we have
\[ \int \frac{dx}{\sqrt{(1 + x^2)(1 + l^2 x^2)}} = \int \frac{ds}{s\sqrt{1 + \left( s - \frac{l^2}{4s} \right)^2}} = \int \frac{ds}{\sqrt{\frac{l^4}{16} + \left( 1 - \frac{l^2}{2} \right) s^2 + s^4}}. \]
The second member is a function of the same kind as the first, differing only in the constants. If we assume \( s = mt \), we can so determine \( m \) as to reduce the equation to the form
\[ \int \frac{dx}{\sqrt{(1 + x^2)(1 + l^2 x^2)}} = L \int \frac{dt}{\sqrt{(1 + t^2)(1 + l_1^2 t^2)}}. \quad \ldots \ldots \ldots \ldots (2.) \]
We shall find
\[ L = \frac{2}{1 + \sqrt{1 - l^2}}, \quad l_1 = \frac{1 - \sqrt{1 - l^2}}{1 + \sqrt{1 - l^2}}; \]
the relation between \( x \) and \( t \) being
\[ t = \frac{1 + \sqrt{1 - l^2}}{l}\left(x + \sqrt{x^2 + 1}\right). \]
If in the above we make \( x = \tan \phi, \quad t = \tan \theta, \quad 1 - l^2 = h^2, \quad 1 - l_1^2 = k^2 \), we find
\[ \int \frac{d\phi}{\sqrt{1 - h^2 \sin^2 \phi}} = \frac{2}{1 + h} \int \frac{d\theta}{\sqrt{1 - k^2 \sin^2 \theta}}, \quad \ldots \ldots \ldots \ldots (3.) \]
provided that
\[ h = \frac{1 - \sqrt{1 - k^2}}{1 + \sqrt{1 - k^2}} \quad \text{and} \quad \tan \theta = \sqrt{\frac{1 + h}{1 - h}} (\tan \phi + \sec \phi). \]
These are of course known relations.
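A pointwise numerical check of the differential relation underlying (3.) may be sketched thus (no part of the original; the modulus \( k = 0.8 \) and the sample values of \( \phi \) are arbitrary choices):

```python
# Verify d(phi)/sqrt(1 - h^2 sin^2 phi) = (2/(1+h)) d(theta)/sqrt(1 - k^2 sin^2 theta)
# at a few sample points, theta and phi being connected as in the text.
import numpy as np

k = 0.8
h = (1 - np.sqrt(1 - k**2)) / (1 + np.sqrt(1 - k**2))

def theta(phi):
    return np.arctan(np.sqrt((1 + h) / (1 - h)) * (np.tan(phi) + 1 / np.cos(phi)))

for phi in (0.2, 0.7, 1.2):
    dphi = 1e-6
    dtheta = (theta(phi + dphi) - theta(phi - dphi)) / (2 * dphi)   # finite difference
    lhs = 1 / np.sqrt(1 - h**2 * np.sin(phi) ** 2)
    rhs = (2 / (1 + h)) * dtheta / np.sqrt(1 - k**2 * np.sin(theta(phi)) ** 2)
    print(lhs, rhs)    # each pair agrees to about six figures
```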
In the following example, which is valuable only for the sake of the principle involved, the differential coefficient of a discontinuous function occurs under the sign of integration.
In the same equation (4.), Art. 38, let \( h = 0, \quad i = \frac{3}{2}, \quad n = 1, \quad l_1 = 1 \); then dropping the suffix from the single variable retained, we have
\[ \int \frac{dx f(x)}{(a - x)^3} = 2 \int_0^\infty ds \left( -\frac{d}{d\sigma} \right) f(\sigma), \quad \ldots \ldots \ldots \ldots (4.) \]
wherein \( \sigma = a - \frac{1}{4s} \) and \( f(\sigma) \) vanishes whenever \( \sigma \) transcends the limits of \( x \). Suppose those limits 0 and 1, and let \( a \) be greater than 1. Now \( \sigma \) and \( s \) increase together, and \( s = \frac{1}{4(a - \sigma)} \). Hence \( \sigma = 0 \) gives \( s = \frac{1}{4a} \), and \( \sigma = 1 \) gives \( s = \frac{1}{4(a - 1)} \). Therefore
\[ \int_0^\infty \frac{dx f(x)}{(a - x)^3} = -2 \int_0^\infty ds \frac{d}{d\sigma} f(\sigma), \quad \ldots \ldots \ldots \ldots (5.) \]
provided that \( f(\sigma) \) be regarded as a discontinuous function defined in the following manner, viz.—
From $s=0$ to $s=\frac{1}{4a}$ \quad $f(\sigma)=0$. \ldots \ldots \ldots \ldots (6.)
From $s=\frac{1}{4a}$ to $s=\frac{1}{4(a-1)}$ \quad $f(\sigma)=f\left(a-\frac{1}{4s}\right)$. \ldots \ldots \ldots \ldots (7.)
From $s=\frac{1}{4(a-1)}$ to $s=\infty$ \quad $f(\sigma)=0$. \ldots \ldots \ldots \ldots (8.)
Let us now examine the corresponding values of the element $ds \frac{d}{d\sigma} f(\sigma)$ under the sign of integration in (4.).
1st. From $s=0$ to $s=\frac{1}{4a}$ that element is 0 by (6.).
2ndly. At the point $s=\frac{1}{4a}$ we have
$$ds \frac{d}{d\sigma} f(\sigma) = \frac{ds}{d\sigma} \cdot \frac{d}{ds} f(\sigma)\, ds. \ldots \ldots \ldots \ldots (9.)$$
But $\frac{d}{ds} f(\sigma) ds$ is the increment of $f(\sigma)$ corresponding to an infinitesimal increment $ds$ in the value of $s$. At the break, where $s=\frac{1}{4a}$, $f(\sigma)$ changes in value from 0 to $f(0)$, the initial value of $f\left(a-\frac{1}{4s}\right)$, and the increment of $f(\sigma)$ is $f(0)$, whatever the value of $ds$ may be, provided only that it is infinitesimal. Hence at this point we have
$$ds \frac{d}{d\sigma} f(\sigma) = \frac{ds}{d\sigma} f(0) = \frac{1}{4(a-\sigma)^2} f(0) = \frac{f(0)}{4a^2}. \ldots \ldots \ldots \ldots$$
3rdly. From $s=\frac{1}{4a}$ to $s=\frac{1}{4(a-1)}$, $ds \frac{d}{d\sigma} f(\sigma) = ds f'\left(a-\frac{1}{4s}\right)$.
4thly. At the second break, where $s=\frac{1}{4(a-1)}$ and $\sigma=1$, we find $ds \frac{d}{d\sigma} f(\sigma) = -\frac{f(1)}{4(a-1)^2}$.
5thly. From $s=\frac{1}{4(a-1)}$ to $s=\infty$ we have $ds \frac{d}{d\sigma} f(\sigma)=0$. Thus on recapitulation
$$\int_0^\infty ds \frac{d}{d\sigma} f(\sigma)$$
comprises two finite elements whose sum is
$$\frac{f(0)}{4a^2} - \frac{f(1)}{4(a-1)^2}$$
and a series of infinitesimal elements which give by integration
$$\int_{\frac{1}{4a}}^{\frac{1}{4(a-1)}} ds\, f'\left(a-\frac{1}{4s}\right),$$
whence (5.) becomes
$$\int_0^1 \frac{f(x)dx}{(a-x)^3} = 2\left\{\frac{f(1)}{4(a-1)^2} - \frac{f(0)}{4a^2} - \int_{\frac{1}{4a}}^{\frac{1}{4(a-1)}} ds\, f'\left(a-\frac{1}{4s}\right)\right\}. \ldots \ldots \ldots \ldots (10.)$$
Now this result may be verified by integrating the first member of the equation by parts, and transforming the integral which remains by assuming $x=a-\frac{1}{4s}$.
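It may also be checked numerically; the following sketch (no part of the original) takes the illustrative values \( f(x) = e^x \) and \( a = 2 \):

```python
# Compare the two members of (10.) for f(x) = exp(x) and a = 2.
import numpy as np
from scipy.integrate import quad

a = 2.0
f, fprime = np.exp, np.exp

lhs = quad(lambda x: f(x) / (a - x) ** 3, 0.0, 1.0)[0]
tail = quad(lambda s: fprime(a - 1.0 / (4 * s)), 1 / (4 * a), 1 / (4 * (a - 1)))[0]
rhs = 2 * (f(1) / (4 * (a - 1) ** 2) - f(0) / (4 * a ** 2) - tail)
print(lhs, rhs)    # the two values agree
```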
I think that this example very clearly shows, that, regarding the integral
\[ \int_0^\infty ds \left( \frac{d}{d\sigma} \right) f(\sigma) \]
as made up of elements some of which are finite and perfectly determinate in value, the others infinitesimal and subject to the conditions of ordinary integration, we can interpret the general theorem (4.), Art. 38, in accordance with known truth. In determining, as we have done, the value of the element \( ds \left( \frac{d}{d\sigma} \right) f(\sigma) \) at each break in the discontinuous function, we attach no new signification to a differential coefficient. We regard that element as the limiting value to which \( \Delta s \frac{\Delta}{\Delta \sigma} f(\sigma) \) approaches as \( \Delta s \) approaches to 0. In the present instance this value is finite. Usually it is 0.
The following examples, which are adopted with some improvements from my previous memoir, will illustrate the more important applications of (5.), Art. 42.
Let \( n = 3, i = \frac{1}{2}, h = 0 \), and let us substitute \( x, y, z \) for \( x_1, x_2, x_3 \), and \( a, b, c \) for \( a_1, a_2, a_3 \); then
\[ V = \iiint \frac{dx\, dy\, dz\, f \left( \frac{x^2}{h_1^2} + \frac{y^2}{h_2^2} + \frac{z^2}{h_3^2} \right)}{\left\{(a-x)^2 + (b-y)^2 + (c-z)^2\right\}^{\frac{1}{2}}}, \]
the limits being given by the conditions
\[ \frac{x^2}{h_1^2} + \frac{y^2}{h_2^2} + \frac{z^2}{h_3^2} \leq 1. \]
The value of \( V \) becomes
\[ V = -\, h_1 h_2 h_3 \pi \int_0^\infty \frac{ds \cdot s^{-\frac{1}{2}}}{\{(1 + h_1^2 s)(1 + h_2^2 s)(1 + h_3^2 s)\}^{\frac{1}{2}}} \left( \frac{d}{d\sigma} \right)^{-1} f(\sigma), \quad \ldots \ldots \quad (12.) \]
wherein
\[ \sigma = \frac{a^2 s}{1 + h_1^2 s} + \frac{b^2 s}{1 + h_2^2 s} + \frac{c^2 s}{1 + h_3^2 s}. \]
Now the attraction, according to the law of nature, of the ellipsoid whose equation is
\[ \frac{x^2}{h_1^2} + \frac{y^2}{h_2^2} + \frac{z^2}{h_3^2} = 1, \]
and whose internal density is expressed by the function
\[ \rho\, f \left( \frac{x^2}{h_1^2} + \frac{y^2}{h_2^2} + \frac{z^2}{h_3^2} \right) \]
upon the point \((a, b, c)\), will be \(-\rho \frac{dV}{da}\). Observing that, in the value of \( V \) given in (12.), \( a \) only appears as involved in \( \sigma \), we have
\[ \frac{d}{da} = \frac{d\sigma}{da} \frac{d}{d\sigma} \text{ and } \frac{d}{da} \left( \frac{d}{d\sigma} \right)^{-1} f(\sigma) = \frac{d\sigma}{da} f(\sigma) = \frac{2as}{1 + h_1^2 s} f(\sigma), \]
whence
\[ -\rho \frac{dV}{da} = h_1 h_2 h_3 \pi \rho \int_0^\infty \frac{2as \cdot s^{-\frac{1}{2}} f(\sigma)\, ds}{(1 + h_1^2 s)\{(1 + h_1^2 s)(1 + h_2^2 s)(1 + h_3^2 s)\}^{\frac{1}{2}}} \]
\[ = 2\pi \rho a\, h_1 h_2 h_3 \int_0^\infty \frac{s^{\frac{1}{2}} f(\sigma)\, ds}{(1 + h_1^2 s)^{\frac{3}{2}} (1 + h_2^2 s)^{\frac{1}{2}} (1 + h_3^2 s)^{\frac{1}{2}}}. \]
Now \( f(\sigma) \) is to vanish when \( \sigma \) falls without the limits 0 and 1, and \( s \) as falling between the limits 0 and \( \infty \) is to be positive. But from the expression for \( \sigma \), it appears that when \( s = 0 \) \( \sigma = 0 \), and when \( s = \infty \) \( \sigma = \frac{a^2}{h_1^2} + \frac{b^2}{h_2^2} + \frac{c^2}{h_3^2} \); also \( \sigma \) increases with \( s \) (Art. 42.), and therefore passes over the value 1 when \( \frac{a^2}{h_1^2} + \frac{b^2}{h_2^2} + \frac{c^2}{h_3^2} > 1 \), but reaches not that value otherwise.
The former condition is realized when the attracted point is external. Representing by \( \eta \) the value of \( s \) for which \( \sigma = 1 \), we have
\[
-\rho \frac{dV}{da} = 2\pi \rho a\, h_1 h_2 h_3 \int_0^\eta \frac{ds \cdot s^{\frac{1}{2}}\, f \left( \frac{a^2 s}{1 + h_1^2 s} + \frac{b^2 s}{1 + h_2^2 s} + \frac{c^2 s}{1 + h_3^2 s} \right)}{(1 + h_1^2 s)^{\frac{3}{2}} (1 + h_2^2 s)^{\frac{1}{2}} (1 + h_3^2 s)^{\frac{1}{2}}},
\]
\( \eta \) being the positive root of the equation
\[
\frac{a^2 s}{1 + h_1^2 s} + \frac{b^2 s}{1 + h_2^2 s} + \frac{c^2 s}{1 + h_3^2 s} = 1. \quad \ldots \ldots \quad (15.)
\]
When the attracted point is internal we have only to substitute \( \infty \) for \( \eta \) in the upper limit of integration; for all positive values of \( \sigma \) then satisfy the condition \( \sigma < 1 \), and positive values only are admissible.
Both these cases may also be derived from the more general theorem deducible from (5.), Art. 42,
\[
-\rho \frac{dV}{da} = 2\pi \rho a\, h_1 h_2 h_3 \int_0^\eta \frac{ds \cdot s^{\frac{1}{2}}\, f \left( h^2 s + \frac{a^2 s}{1 + h_1^2 s} + \frac{b^2 s}{1 + h_2^2 s} + \frac{c^2 s}{1 + h_3^2 s} \right)}{(1 + h_1^2 s)^{\frac{3}{2}} (1 + h_2^2 s)^{\frac{1}{2}} (1 + h_3^2 s)^{\frac{1}{2}}},
\]
where \( \eta \) is the positive root of the equation
\[
h^2 s + \frac{a^2 s}{1 + h_1^2 s} + \frac{b^2 s}{1 + h_2^2 s} + \frac{c^2 s}{1 + h_3^2 s} = 1.
\]
When \( h \) approaches to 0 this root approaches to the positive root of the equation (15.) if \( \frac{a^2}{h_1^2} + \frac{b^2}{h_2^2} + \frac{c^2}{h_3^2} \) is greater than 1, but tends to \( \infty \) if \( \frac{a^2}{h_1^2} + \frac{b^2}{h_2^2} + \frac{c^2}{h_3^2} \) is equal to or less than 1.
If \( f(\sigma) = 1 \), the expressions are easily reducible to elliptic functions and agree with known results.
Lastly, let the law of force be that of the inverse fourth power of the distance, the density being, as before, \( \rho f \left( \frac{x^2}{h_1^2} + \frac{y^2}{h_2^2} + \frac{z^2}{h_3^2} \right) \). The expression for the attraction on an external point \((a, b, c)\) is
\[
-\frac{\rho}{3} \frac{dV}{da},
\]
where
\[
V = \iiint \frac{dx\, dy\, dz\, f \left( \frac{x^2}{h_1^2} + \frac{y^2}{h_2^2} + \frac{z^2}{h_3^2} \right)}{\{(a-x)^2 + (b-y)^2 + (c-z)^2\}^{\frac{3}{2}}} = 2\pi h_1 h_2 h_3 \int_0^\infty \frac{ds \cdot s^{\frac{1}{2}} f(\sigma)}{\{(1 + h_1^2 s)(1 + h_2^2 s)(1 + h_3^2 s)\}^{\frac{1}{2}}},
\]
therefore
\[
-\frac{\rho}{3} \frac{d f(\sigma)}{da} = -\frac{\rho}{3} \frac{d\sigma}{da} \frac{df(\sigma)}{d\sigma} = -\frac{2\rho a s}{3(1 + h_1^2 s)} \cdot \frac{df(\sigma)}{d\sigma} = -\frac{2\rho a s}{3(1 + h_1^2 s)} \cdot \frac{ds}{d\sigma} \cdot \frac{df(\sigma)}{ds}.
\]
Therefore
\[-\frac{\rho}{3} \frac{dV}{da} = -\frac{4}{3} \pi \rho a\, h_1 h_2 h_3 \int_0^\infty \frac{s^{\frac{3}{2}}}{(1 + h_1^2 s)^{\frac{3}{2}} (1 + h_2^2 s)^{\frac{1}{2}} (1 + h_3^2 s)^{\frac{1}{2}}} \cdot \frac{ds}{d\sigma} \cdot \frac{df(\sigma)}{ds}\, ds. \quad \ldots \ldots \quad (18.)\]
Let \( \eta \) be the positive root of the equation
\[\frac{a^2 s}{1 + h_1^2 s} + \frac{b^2 s}{1 + h_2^2 s} + \frac{c^2 s}{1 + h_3^2 s} = 1,\]
and let the density be uniform; then \( f(\sigma) = 1 \) or 0, according as \( s \) is less or greater than \( \eta \).
Before and after the break, therefore, \( \frac{df(\sigma)}{ds} = 0 \). At the break we have, by the reasoning of a previous section, \( \frac{df(\sigma)}{ds} ds = -1 \). We must therefore substitute this value in (18.), and in the rest of the expression under the integral sign change \( s \) into \( \eta \). Observing that this substitution converts \( \frac{d\sigma}{ds} \) into \( \frac{a^2}{(1 + h_1^2 \eta)^2} + \frac{b^2}{(1 + h_2^2 \eta)^2} + \frac{c^2}{(1 + h_3^2 \eta)^2} \), and that the integral being reduced to a single finite element we may reject the integral sign, we have
\[-\frac{\rho}{3} \frac{dV}{da} = \frac{\frac{4}{3} \pi \rho a\, h_1 h_2 h_3\, \eta^{\frac{3}{2}}}{\left( \frac{a^2}{(1 + h_1^2 \eta)^2} + \frac{b^2}{(1 + h_2^2 \eta)^2} + \frac{c^2}{(1 + h_3^2 \eta)^2} \right) \left( 1 + h_1^2 \eta \right)^{\frac{3}{2}} \left( 1 + h_2^2 \eta \right)^{\frac{1}{2}} \left( 1 + h_3^2 \eta \right)^{\frac{1}{2}}}.\]
This result is due, I believe, to Mr. Cayley, but was originally obtained by an entirely different analysis.
It only remains to add, that when the index of differentiation is fractional we must revert to the first expression in (5.), Art. 42, and effect the integrations separately. The integration with respect to \( v \) may always be performed. The possibility of the two others will depend upon the nature of the problem under consideration. Thus writing the expression in the form
\[ V = \frac{\pi^{\frac{n}{2}-1} h_1 h_2 \ldots h_n}{\Gamma(i)} \int_0^1 \int_0^\infty \frac{da \cdot ds \cdot s^{i-1} f(a)}{(1 + h_1^2 s)^{\frac{1}{2}} (1 + h_2^2 s)^{\frac{1}{2}} \cdots (1 + h_n^2 s)^{\frac{1}{2}}} \int_0^\infty dv \cdot v^{i-\frac{n}{2}} \cos \left[ (a-\sigma)v + \left( i-\frac{n}{2} \right) \frac{\pi}{2} \right], \]
it is easily shown that, according as \( a \) is greater or less than \( \sigma \),
\[\int_0^\infty dv \cdot v^{i-\frac{n}{2}} \cos \left[ (a-\sigma)v + \left( i-\frac{n}{2} \right) \frac{\pi}{2} \right] = \frac{\Gamma \left( i-\frac{n}{2}+1 \right) \sin \left( i-\frac{n}{2}+1 \right) \pi}{(a-\sigma)^{i-\frac{n}{2}+1}} \text{ or } 0,\]
\[= \frac{\pi}{\Gamma \left( \frac{n}{2}-i \right) (a-\sigma)^{i-\frac{n}{2}+1}} \text{ or } 0,\]
by a known property of the function \( \Gamma \). The latter form gives
\[ V = \frac{\pi^{\frac{n}{2}} h_1 h_2 \ldots h_n}{\Gamma(i) \Gamma \left( \frac{n}{2}-i \right)} \int_0^1 \int_0^\infty \frac{da \cdot ds \cdot s^{i-1} f(a)}{(1 + h_1^2 s)^{\frac{1}{2}} (1 + h_2^2 s)^{\frac{1}{2}} \cdots (1 + h_n^2 s)^{\frac{1}{2}} (a-\sigma)^{i-\frac{n}{2}+1}}.\]
This transformation, or one in effect equivalent to it, is due to Mr. Cayley*, who has applied it to obtain the value of a remarkable definite integral which occurs in the mathematical theory of electricity.
* Cambridge and Dublin Mathematical Journal, vol. ii. p. 219.