On Systems of Linear Indeterminate Equations and Congruences
Author(s)
Henry J. Stephen Smith
Year
1861
Volume
151
Pages
35 pages
Language
en
Journal
Philosophical Transactions of the Royal Society of London
Full Text (OCR)
XV. On Systems of Linear Indeterminate Equations and Congruences. By Henry J. Stephen Smith, M.A., Fellow and Mathematical Lecturer of Balliol College, Oxford. Communicated by J. J. Sylvester, Esq., F.R.S.
Received January 17,—Read January 31, 1861.
The theory of the solution, in positive or negative integral numbers, of systems of linear indeterminate equations, requires the consideration of rectangular matrices, the constituents of which are integral numbers. It will therefore be convenient to explain the meaning which we shall attach to certain phrases and symbols relating to such matrices.
A matrix containing $p$ constituents in every horizontal row, and $q$ in every vertical column, is a matrix of the type $q \times p$. We shall employ the symbol $\begin{bmatrix} q \times p \\ A \end{bmatrix}$, or (when it is not necessary that the type of the matrix should be indicated in its symbol) the simpler symbol $\|A\|$ to represent the matrix
$$\begin{bmatrix}
A_{1,1}, & A_{1,2}, & \ldots, & A_{1,p} \\
A_{2,1}, & A_{2,2}, & \ldots, & A_{2,p} \\
\vdots & \vdots & \ddots & \vdots \\
A_{q,1}, & A_{q,2}, & \ldots, & A_{q,p}
\end{bmatrix}$$
If $\|A\|$ and $\|B\|$ be two matrices of the same type, the equation $\|A\| = \|B\|$ indicates that the constituents of $\|A\|$ are respectively equal to the constituents of $\|B\|$; whereas the equation $|A| = |B|$ will merely express that the determinants of $\|A\|$ are equal to the corresponding determinants of $\|B\|$. The determinants of a matrix are, of course, the determinants of the greatest square matrices contained in it; similarly, its minor determinants of order $i$ are the determinants of the square matrices of the type $i \times i$ that are contained in it. Matrices of the types $n \times (m+n)$ and $m \times (m+n)$ are said to be of complementary types; if $\|A\|$ and $\|B\|$ be two such matrices, we shall employ the equation $|A| = |B|$
to express that each determinant of $\|A\|$ is equal to that determinant of $\|B\|$, by which it is multiplied in the development of the determinant of the square matrix $\begin{bmatrix} A \\ B \end{bmatrix}$. When $m$ and $n$ are both uneven numbers, the signs of the determinants $\begin{bmatrix} A \\ B \end{bmatrix}$ and $\begin{bmatrix} B \\ A \end{bmatrix}$ are different: this occasions a certain ambiguity of sign in the interpretation of the equation $|A| = |B|$, which, however, will occasion no inconvenience. If $m = n$, the matrices $\|A\|$ and $\|B\|$ are at once of the same, and of complementary types; so that, in this case, the equation $|A| = |B|$ may stand for either of two very different sets of equations; but this
also is an imperfection of the notation here employed, which it is sufficient to have pointed out. If \( k \) denote any quantity whatever, it is hardly necessary to state that the equality
\[
|A| = k \times |B|
\]
implies that the determinants of \( \|A\| \) are respectively \( k \) times the corresponding determinants of \( \|B\| \).
Let \( \|P\| \) be a square matrix of the type \( n \times n \), and \( \|Q\| \) a matrix of the type \( n \times (n+m) \) (where \( m \geq 0 \)); we shall understand by the matrix compounded of \( \|P\| \) and \( \|Q\| \) the matrix \( \|X\| \), of the same type as \( \|Q\| \), the constituents of which are defined by the equation
\[
X_{i,j} = P_{i,1} Q_{1,j} + P_{i,2} Q_{2,j} + \ldots + P_{i,n} Q_{n,j};
\]
and we shall write
\[
\|X\| = \|P\| \times \|Q\|;
\]
in this equation \( \|Q\| \) is said to be premultiplied by \( \|P\| \), and \( \|P\| \) to be post-multiplied by \( \|Q\| \). This definition will suffice for our present purpose; as the only case of composition which we shall have to consider, is that in which the vertical dimensions of the matrices to be compounded are all equal, and in which every premultiplying matrix is square, so that if an oblong matrix present itself at all in a series of matrices to be compounded, it will occupy the last place in the series.
By the greatest divisor of a matrix we are to understand the greatest common divisor of the determinants of the matrix. If the matrix be square, its greatest divisor is, consequently, the determinant of the matrix. A prime matrix is one of which the greatest divisor is unity; i.e. the determinants of which are relatively prime. A prime square matrix (i.e. a matrix of which the determinant is unity) we shall call a unit-matrix.
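The greatest divisor admits of direct computation from its definition; a minimal sketch in modern notation (the helper names are ours, and the determinant routine is adequate for small matrices only):

```python
from itertools import combinations
from math import gcd

# Determinant of a small square integer matrix, by Laplace expansion
# along the first row.
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

# Greatest divisor of a matrix of type q x p (q <= p): the g.c.d. of the
# determinants of the greatest square matrices contained in it.
def greatest_divisor(M):
    q, p = len(M), len(M[0])
    g = 0
    for cols in combinations(range(p), q):
        g = gcd(g, abs(det([[row[c] for c in cols] for row in M])))
    return g

assert greatest_divisor([[2, 3, 5], [1, 1, 1]]) == 1   # a prime matrix
assert greatest_divisor([[2, 0], [0, 2]]) == 4         # square case: the determinant
```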
In any system of linear equations, whether defective or redundant, or neither, we shall understand by the matrix of the system the matrix formed by the coefficients of the unknown quantities. If to this matrix we add an additional vertical column, composed of the absolute terms of the equations, the resulting matrix we shall term (for brevity) the augmented matrix of the system.
Lastly, when we have occasion to consider square matrices, the constituents of which, excepting those on the principal diameter, are zero, we shall represent them by symbols of the form
\[
\|q_1, q_2, q_3, \ldots q_n\|,
\]
where \( q_1, q_2, \ldots q_n \) are the constituents of the principal diameter.
Art. 2. If every determinant of the augmented matrix of a redundant system of linear equations is equal to zero, while the determinants of the unaugmented matrix are not all equal to zero, the system admits of one solution, and one only. And in particular if the matrix of the system be a prime matrix, the values of the unknown quantities which satisfy the system are integral numbers. For these values may be expressed as fractions having for their denominators any one of the determinants of the matrix; and these determinants are relatively prime.
Let \( \|A\| \) be a given prime matrix of the type \( n \times (n+m) \), \( \|K\| \) a given matrix of the same type connected with \( \|A\| \) by the equation
\[
|K| = k \times |A|, \tag{1.}
\]
which implies that \( k \) is the greatest divisor of \( \|K\| \); then the symbolic equation
\[
\|K\| = \|k\| \times \|A\|, \tag{2.}
\]
in which \( \|k\| \) denotes a square matrix of the type \( n \times n \), will admit of one solution, and one only.
For, to determine \( k_{r,1}, k_{r,2}, \ldots, k_{r,n} \), the constituents of the \( r \)th horizontal row of \( \|k\| \), we have the redundant system
\[
K_{r,i} = A_{1,i}\, k_{r,1} + A_{2,i}\, k_{r,2} + \cdots + A_{n,i}\, k_{r,n}, \quad i = 1, 2, 3, \ldots, n+m, \tag{3.}
\]
which is involved in the symbolic equation (2.). The matrix of this system is the prime matrix \( \|A\| \); and the determinants of its augmented matrix are all equal to zero; for, by virtue of equation (1.), they are equal to the determinants
\[
-\frac{1}{k} \begin{vmatrix}
K_{r,1} & K_{r,2} & \ldots & K_{r,n+m} \\
K_{1,1} & K_{1,2} & \ldots & K_{1,n+m} \\
K_{2,1} & K_{2,2} & \ldots & K_{2,n+m} \\
\vdots & \vdots & & \vdots \\
K_{n,1} & K_{n,2} & \ldots & K_{n,n+m}
\end{vmatrix}
\]
in which two horizontal rows are identical. Thus the system (3.), and consequently the equation (2.), admits of one solution, and one only. It is evident that the determinant of \( \|k\| \) is \( k \). The case in which \( m = 0 \) is not included in this demonstration; its proof, however, presents no difficulty, and may be omitted here.
A particular case of this theorem (that in which \( n = 2 \)) occurs in the 'Disquisitiones Arithmeticae' (see art. 234 of that work).
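The uniqueness argument also yields a practical computation: each row of \( \|k\| \) may be found by Cramer's rule applied on any non-vanishing determinant of \( \|A\| \), and the values come out integral under the hypotheses of the theorem. A sketch of that computation (the function names are ours, not Smith's):

```python
from fractions import Fraction
from itertools import combinations

# Determinant by Laplace expansion along the first row (small matrices only).
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

# Recover the square matrix k in K = k x A, assuming A prime and the
# equation soluble; the Fractions then reduce to integers.
def recover_k(K, A):
    n, width = len(A), len(A[0])
    # choose n columns of A forming a non-vanishing determinant
    cols = next(c for c in combinations(range(width), n)
                if det([[row[j] for j in c] for row in A]) != 0)
    AT = [[A[s][j] for s in range(n)] for j in cols]   # transposed square block
    d = det(AT)
    k = []
    for r in range(n):
        b = [K[r][j] for j in cols]
        row = []
        for s in range(n):
            Am = [list(t) for t in AT]
            for t in range(n):
                Am[t][s] = b[t]
            row.append(Fraction(det(Am), d))           # Cramer's rule
        k.append(row)
    return k

K, A = [[2, 1, 3], [1, 1, 2]], [[1, 0, 1], [0, 1, 1]]
assert recover_k(K, A) == [[2, 1], [1, 1]]
```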
Art. 3. If every determinant of the augmented matrix of a redundant system of linear congruences be divisible by the modulus, while the greatest divisor of the unaugmented matrix is prime to the modulus, the system is resoluble and admits of only one solution. For if the modulus be represented by \( P \times Q \times R \ldots, P, Q, R \ldots \) denoting powers of unequal primes, one (at least) of the determinants of the unaugmented matrix is prime to \( P \), one (at least) is prime to \( Q \), &c.; whence it may be inferred that the system is resoluble for each of the modules \( P, Q, R \ldots \), and admits of only one solution for each of them; it is therefore resoluble for their product \( P \times Q \times R \ldots \), and admits of only one solution for that modulus.
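The recombination invoked here, of a solution for each of the coprime factors into one for their product, is the Chinese remainder theorem; a minimal sketch (assuming pairwise coprime moduli; the function name is ours):

```python
# Combine residues for pairwise coprime moduli into a single residue
# for their product (Chinese remainder theorem).
def crt(residues, moduli):
    x, m = 0, 1
    for r, p in zip(residues, moduli):
        t = ((r - x) * pow(m, -1, p)) % p   # solve x + m*t = r (mod p)
        x, m = x + m * t, m * p
    return x % m

assert crt([2, 3], [4, 9]) == 30   # 30 = 2 (mod 4) and 3 (mod 9)
```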
Let \( \|K\| \) denote (as in the preceding article) a given matrix of the type \( n \times (n+m) \), of which \( k \) is the greatest divisor; and let it be required to find the complete solution of the symbolic equation
\[
\|K\| = \|k\| \times \|A\|, \tag{4.}
\]
in which \( \|k\| \) is a square matrix of which the determinant is \( k \), \( \|A\| \) a prime matrix of
the same type as \( \|K\| \), and in which the constituents of \( \|A\| \) and \( \|k\| \) are the unknown numbers.
We shall first obtain a particular solution of this equation, and then show how from any particular solution the complete solution may be deduced.
We may suppose that the constituents of any horizontal row of \( \|K\| \) admit of no common divisor but unity; for if \( \delta_1, \delta_2, \ldots, \delta_n \) be the greatest common divisors of the constituents of the horizontal rows of \( \|K\| \), we find
\[
\|K\| = \|\delta_1, \delta_2, \delta_3, \ldots, \delta_n\| \times \|K'\|,
\]
\( \|K'\| \) denoting a matrix the constituents of which are derived from those of \( \|K\| \) by the relation
\[
K'_{r,s} = \frac{1}{\delta_r} K_{r,s}
\]
so that the solution of equation (4.) depends on the solution of a similar equation for the matrix \( \|K'\| \), in which the constituents of each horizontal row are relatively prime.
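This first reduction is immediate to carry out; a sketch (assuming integer constituents and no vanishing row; the names are ours):

```python
from functools import reduce
from math import gcd

# Strip the g.c.d. of each horizontal row, so that
# K = diag(d_1, ..., d_n) x K', the rows of K' being relatively prime.
def strip_row_divisors(K):
    d = [reduce(gcd, map(abs, row)) for row in K]
    Kp = [[x // di for x in row] for di, row in zip(d, K)]
    return d, Kp

d, Kp = strip_row_divisors([[2, 4, 6], [3, 0, 9]])
assert d == [2, 3] and Kp == [[1, 2, 3], [1, 0, 3]]
```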
Let then the matrix \( \begin{bmatrix} r \times (n+m) \\ K \end{bmatrix} \), i.e. the matrix
\[
\begin{bmatrix}
K_{1,1}, K_{1,2}, \ldots, K_{1,n+m} \\
K_{2,1}, K_{2,2}, \ldots, K_{2,n+m} \\
\vdots & \vdots & \ddots & \vdots \\
K_{r,1}, K_{r,2}, \ldots, K_{r,n+m}
\end{bmatrix}
\]
be a prime matrix, but let the matrix \( \begin{bmatrix} (r+1) \times (n+m) \\ K \end{bmatrix} \) admit of a greatest divisor \( \mu \).
Determine \( \omega_1, \omega_2, \ldots, \omega_r \) by the system of congruences,
\[
K_{1,i}\, \omega_1 + K_{2,i}\, \omega_2 + \ldots + K_{r,i}\, \omega_r \equiv K_{r+1,i} \pmod{\mu},
\]
for \( i = 1, 2, 3, \ldots, n+m \)
(which, as we have just seen, is always resoluble), and in \( \|K\| \) replace the constituents \( K_{r+1,i} \) by the numbers
\[
\frac{1}{\mu} [K_{r+1,i} - \sum_{s=1}^{r} \omega_s K_{s,i}]
\]
we thus deduce from \( \|K\| \) another matrix \( \|K''\| \) connected with it by the relation \( |K| = \mu \times |K''| \), and such that the matrix of its first \( r+1 \) horizontal rows is prime. By proceeding in this manner, we shall at last obtain a prime matrix \( \|A_0\| \), which satisfies the equation \( |K| = k \times |A_0| \); we may then, by the method of the last article, determine a square matrix \( \|k_0\| \) satisfying the equation
\[
\|K\| = \|k_0\| \times \|A_0\|
\]
and thus obtain a particular solution of the proposed equation (4.).
To deduce the general solution of that equation, let \( \|k_1\| \) and \( \|A_1\| \) be any two matrices satisfying it. We have therefore the equality
\[ \|k_1\| \times \|A_1\| = \|k_0\| \times \|A_0\|, \tag{9.} \]
which evidently implies that
\[ |A_1| = |A_0|; \]
whence, by the theorem of the last article,
\[ \|A_1\| = \|a\| \times \|A_0\|, \tag{11.} \]
\(\|a\|\) denoting a unit-matrix. Combining (9.) and (11.), we find
\[ \|k_1\| \times \|a\| \times \|A_0\| = \|k_0\| \times \|A_0\|, \]
whence, by the same theorem, it follows that
\[ \|k_1\| \times \|a\| = \|k_0\|; \]
or, which is the same thing,
\[ \|k_1\| = \|k_0\| \times \|a\|^{-1}, \]
\(\|a\|^{-1}\) denoting the matrix reciprocal to \(\|a\|\). The complete solution of equation (4.) is therefore contained in the formulæ
\[ \|A\| = \|a\| \times \|A_0\|, \]
\[ \|k\| = \|k_0\| \times \|a\|^{-1}, \]
\(\|a\|\) denoting an arbitrary unit-matrix of the type \(n \times n\), and \(\|A_0\|\), \(\|k_0\|\) being any two matrices that satisfy the equation.
In this, as in the preceding article, we have for simplicity excluded the case in which \(m = 0\), and the matrices \(\|K\|\) and \(\|A\|\) are squares. But it is readily seen that no exception is presented by this particular case.
Art. 4. Let
\[ A_{i,1} x_1 + A_{i,2} x_2 + \ldots + A_{i,n+m} x_{n+m} = 0, \quad i = 1, 2, 3, \ldots n, \tag{16.} \]
represent a system of indeterminate equations of which the matrix is \(\|A\|\). We shall suppose that the determinants of \(\|A\|\) are not all equal to zero, i.e. that the system is independent; so that its index of indeterminateness (or the excess of the number of indeterminates above the number of really independent equations) is \(m\). If we take \(r\) solutions of the system, for example the solutions
\[ x_{s,1}, x_{s,2}, x_{s,3}, \ldots x_{s,n+m}, \quad s = 1, 2, 3, \ldots r, \tag{17.} \]
it is evident that if \(r > m\), the determinants of the matrix \( \begin{bmatrix} r \times (n+m) \\ x \end{bmatrix} \) of these solutions are all equal to zero. If \(r \leq m\), and if the determinants of that matrix be not all equal to zero, the solutions (17.) are said to form a set of \(r\) independent solutions; if \(r = m\), they form a complete set of independent solutions. A set of relatively prime solutions is an independent set of which the matrix is prime; a complete set of relatively prime solutions may be called, for a reason which will presently appear, a fundamental set of solutions. It is always possible, in an infinite number of ways, to assign complete sets of independent solutions of a system of equations of the form (16.). Among the methods by which this may be accomplished, we shall select one which depends on the following principle:—
If \( \begin{bmatrix} r \times (m+r) \\ a \end{bmatrix} \) represent any matrix of the type \( r \times (m+r) \), the determinants of which are not all equal to zero, and if \( \lambda_1, \lambda_2, \ldots, \lambda_{m+r} \) be integers which satisfy the equations
\[
\sum_{k=1}^{m+r} a_{i,k} \lambda_k = 0, \quad i = 1, 2, 3, \ldots, r \tag{18.}
\]
while \( a_{r+1,1}, a_{r+1,2}, a_{r+1,3}, \ldots, a_{r+1,m+r} \) are integers satisfying the inequality
\[
\sum_{k=1}^{m+r} a_{r+1,k} \lambda_k \neq 0, \tag{19.}
\]
the determinants of the matrix
\[
\begin{bmatrix} (r+1) \times (m+r) \\ a \end{bmatrix}
\]
are not all equal to zero.
For, combining the equations (18.) with the equation \( \theta = \sum_{k=1}^{m+r} a_{r+1,k} \lambda_k \), we may express each of the products of \( \theta \) by the determinants of \( \begin{bmatrix} r \times (m+r) \\ a \end{bmatrix} \) in succession as linear functions of the determinants of \( \begin{bmatrix} (r+1) \times (m+r) \\ a \end{bmatrix} \). If, therefore, the former determinants do not all vanish, neither can the latter; for \( \theta \), by the inequality (19.), is different from zero.
Let, then, \( a_{m,1}, a_{m,2}, \ldots, a_{m,n+m} \) represent any particular solution (other, of course, than that in which every indeterminate is equal to zero) of the system (16.); and let \( A_{n+1,1}, A_{n+1,2}, \ldots, A_{n+1,n+m} \) be integral numbers satisfying the inequality
\[
\sum_{k=1}^{n+m} A_{n+1,k} a_{m,k} \neq 0; \tag{20.}
\]
the system
\[
A_{i,1} x_1 + A_{i,2} x_2 + \ldots + A_{i,n+m} x_{n+m} = 0, \quad i = 1, 2, 3, \ldots, n+1 \tag{21.}
\]
(which is obtained by the addition of a single equation to the system (16.)) is itself an independent system, as appears from the principle just enunciated; its index of indeterminateness is therefore \( m-1 \). Let \( \begin{bmatrix} (m-1) \times (n+m) \\ a \end{bmatrix} \) represent a complete set of independent solutions of (21.); it may then be inferred, from a second application of the same principle, that \( \begin{bmatrix} m \times (n+m) \\ a \end{bmatrix} \) represents a complete set of independent solutions of the proposed system (16.). Thus the determination of a complete set of independent solutions of a system of which the index of indeterminateness is \( m \), depends on the determination of a similar set of solutions for a system of which the index is lower by a unit. By successive reductions, therefore, we shall at last arrive at a system of which the index of indeterminateness is unity, the complete solution of which is of course immediately found by evaluating the determinants of its matrix.
The practical application of this method supposes only that we can always assign a particular solution of a system of the form (16.) or (21.). And this, it may be observed,
can always be done, either by trial, or by other obvious and not unsymmetrical expedients.
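For the base case of the reduction, the index of indeterminateness being unity, the complete solution is indeed read off from the determinants of the matrix; a sketch in modern notation (the names are ours):

```python
# Determinant by Laplace expansion along the first row (small matrices only).
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

# Complete solution of an independent system of n equations in n+1 unknowns:
# x_j is, up to a common factor, the signed determinant obtained by
# deleting the j-th column of the matrix.
def index_one_solution(A):
    n = len(A)
    return [(-1) ** j * det([[row[k] for k in range(n + 1) if k != j] for row in A])
            for j in range(n + 1)]

A = [[1, 2, 3], [0, 1, 1]]
x = index_one_solution(A)
assert x == [-1, -1, 1]
assert all(sum(row[j] * x[j] for j in range(3)) == 0 for row in A)
```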
Art. 5. If \( \|a\| \) represent the matrix of a complete set of independent solutions of the proposed system (16.), and \( \|b\| \) be any matrix of the same type as \( \|a\| \), and connected with \( \|a\| \) by the equation
\[
\|b\| = \|k\| \times \|a\|, \tag{22.}
\]
in which \( \|k\| \) denotes a square matrix of which the determinant is not zero, it is evident that the constituents of \( \|b\| \) are also a complete set of independent solutions. And, conversely, if \( \|b\| \) be the matrix of a complete set of independent solutions, \( \|a\| \) is also the matrix of a similar set. For if \( \|K\| \) be the matrix composed of the first minors of \( \|k\| \), so that
\[
\|K\| \times \|k\| = \|k, k, k, \ldots\|,
\]
we have from (22.), \( \|K\| \times \|b\| = \|k, k, \ldots k, k\| \times \|a\|; \)
from which it appears that \( \|k, k, \ldots\| \times \|a\| \), and therefore \( \|a\| \) itself, is the matrix of an independent set of solutions.
This observation enables us to obtain a complete set of relatively prime solutions, as soon as we have obtained an independent set. If \( \|b\| \) be the matrix of the independent set, we have only to determine, by the method of art. 3, a square matrix \( \|k\| \), and an oblong prime matrix \( \|a\| \), satisfying the equation
\[
\|b\| = \|k\| \times \|a\|;
\]
the constituents of \( \|a\| \) are then the terms of a set of fundamental solutions.
Or again, if in art. 4 we employ, instead of the inequality (19.), the equation
\[
\Sigma_{k=1}^{k=n+m} a_{r+1,k} \lambda_k = 1, \tag{23.}
\]
it is easily shown that if \( \begin{bmatrix} r \times (n+m) \\ a \end{bmatrix} \) be a prime matrix, \( \begin{bmatrix} (r+1) \times (n+m) \\ a \end{bmatrix} \) is also a prime matrix; so that, by following the method of that article, we may obtain directly a set of fundamental solutions of any proposed system. Only, it will be observed, that in this mode of obtaining such a set, we suppose that we can assign particular solutions, not only of systems of the form (16.), but also of equations of the form (23.).
Art. 6. The importance of fundamental sets of solutions in the theory of linear indeterminate equations is evident from the following proposition:
"If \( \|a\| \) represent a set of fundamental solutions of the system (16.), the complete solution of that system is contained in the formula
\[
x_i = \Sigma_{k=1}^{k=m} \xi_k a_{k,i}, \quad i = 1, 2, 3, \ldots n+m, \tag{24.}
\]
in which \( \xi_1, \xi_2, \ldots \xi_m \) are absolutely indeterminate integral numbers."
For it is evident that every set of numbers included in (24.) satisfies (16.); and, conversely, if \( a_{m+1,1}, a_{m+1,2}, \ldots a_{m+1,n+m} \) be any solution of (16.), the determinants of the
matrix \( \begin{bmatrix} (m+1) \times (n+m) \\ a \end{bmatrix} \) are all zero, while the matrix \( \begin{bmatrix} m \times (n+m) \\ a \end{bmatrix} \) is prime; whence, by a principle employed in art. 2, the system
\[
a_{m+1,i} = \sum_{k=1}^{m} \xi_k a_{k,i}
\]
\(i = 1, 2, 3, \ldots, n+m\)
is satisfied by one, and only one system of integral values of \(\xi_1, \xi_2, \ldots, \xi_m\); or, which is the same thing, the numbers \(a_{m+1,1}, a_{m+1,2}, \ldots, a_{m+1,n+m}\) are included in the formula (24.).
It may be added, that no fractional values of \(\xi_1, \xi_2, \ldots, \xi_m\) can give integral values to \(x_1, x_2, \ldots, x_{n+m}\); and that the same values of \(x_1, x_2, x_3, \ldots, x_{n+m}\) cannot arise from different values of \(\xi_1, \xi_2, \ldots, \xi_m\).
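The content of the proposition may be checked on a small example; the system and its fundamental set below are our own illustrative choices:

```python
# A system of the form (16.): x1 - x3 = 0, x2 - x4 = 0; the two rows of `a`
# form a fundamental set of its solutions (their matrix is prime).
A = [[1, 0, -1, 0], [0, 1, 0, -1]]
a = [[1, 0, 1, 0], [0, 1, 0, 1]]

# x_i = sum over k of xi_k * a[k][i], as in the formula (24.)
def combine(xi, a):
    return [sum(c * row[i] for c, row in zip(xi, a)) for i in range(len(a[0]))]

for xi in [(1, 0), (2, -3), (5, 7)]:
    x = combine(xi, a)
    assert all(sum(Ar[i] * x[i] for i in range(4)) == 0 for Ar in A)
```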
The converse of the proposition just established is also true; i.e.
"If the formula
\[
x_i = \sum_{k=1}^{m} \xi_k a_{k,i}
\]
represent every solution of an indeterminate system of equations, the matrix \(\|a\|\) is a prime matrix."
For if \(\|b\|\) represent a set of fundamental solutions of the indeterminate system, we may express the constituents of \(\|b\|\) as linear functions of the constituents of \(\|a\|\), by means of the equations (24.), so as to obtain an equation of the form
\[
\|b\| = \|z\| \times \|a\|,
\]
\(\|z\|\) denoting a square matrix; whence it immediately appears that \(\|a\|\) is a prime matrix, and \(\|z\|\) a unit-matrix.
Thus, if we apply Euler's method for the resolution of indeterminate equations to the system (16.), we obtain, as the final result of the process, a system of equations of the form (24.); and as it is demonstrable, from the nature of the method itself, that these final equations contain the complete solution of the proposed system, their matrix is a prime matrix.
If \(\|a\|\) and \(\|b\|\) be any two sets of fundamental solutions of the same system, we shall have the equation
\[
\|b\| = \|z\| \times \|a\|,
\]
\(\|z\|\) denoting a unit-matrix. The matrices, therefore, of all sets of fundamental solutions are deducible, by premultiplication with unit-matrices, from the matrix of any given set of such solutions.
Art. 7. If \(\|a\|\) and \(\|b\|\) represent two complete sets of independent solutions of the same system, the determinants of \(\|a\|\) and \(\|b\|\) are evidently connected by the relation
\[
\beta \times |a| = \pm\, \alpha \times |b|,
\]
\(\alpha\) and \(\beta\) denoting the greatest divisors of \(\|a\|\) and \(\|b\|\) respectively. A similar relation subsists between the matrix of the system and the matrix of any complete set of independent solutions of it.
Let \(\|A\|\) and \(\|a\|\) represent those matrices respectively, \(K\) and \(k\) their greatest divisors;
the relation in question is expressed by the formula
\[ k \times |A| = K \times |a|, \tag{25.} \]
where it is to be remembered that the types of the matrices \(|A|\) and \(|a|\) are complementary; so that, as has been already observed (see art. 1), there is an ambiguity of sign in the equation (25.).
To obtain its demonstration, let \(Q\) and \(q\) denote the sums of the squares of the determinants of \(\|A\|\) and \(\|a\|\) respectively, and consider the determinant of the square matrix \( \begin{bmatrix} A \\ a \end{bmatrix} \). This determinant is certainly not zero; for, multiplying it by itself, and observing that the products of the rows of \(\|A\|\) by the rows of \(\|a\|\) all vanish (the rows of \(\|a\|\) being solutions of the system), we find
\[ \begin{vmatrix} A \\ a \end{vmatrix}^2 = Q \times q. \]
Let, then, \(|A/a|\) be multiplied by any determinant of \(|A|\); for example, by \( \Sigma \pm A_{1,1}, A_{2,2}, \ldots A_{n,n} \). Observing that \( \Sigma \pm A_{1,1}, A_{2,2}, \ldots A_{n,n} \) may assume the form
\[
\begin{array}{cccccc}
A_{1,1}, & A_{2,1}, & \ldots & A_{n,1}, & 0, & 0, \ldots, 0 \\
A_{1,2}, & A_{2,2}, & \ldots & A_{n,2}, & 0, & 0, \ldots, 0 \\
& & & & & \\
A_{1,n}, & A_{2,n}, & \ldots & A_{n,n}, & 0, & 0, \ldots, 0 \\
A_{1,n+1}, & A_{2,n+1}, & \ldots & A_{n,n+1}, & 1, & 0, \ldots, 0 \\
A_{1,n+2}, & A_{2,n+2}, & \ldots & A_{n,n+2}, & 0, & 1, \ldots, 0 \\
& & & & & \\
A_{1,n+m}, & A_{2,n+m}, & \ldots & A_{n,n+m}, & 0, & 0, \ldots, 1
\end{array}
\]
we obtain the equation
\[ \begin{vmatrix} A \\ a \end{vmatrix} \times \Sigma \pm A_{1,1} A_{2,2} \ldots A_{n,n} = Q \times \Sigma \pm a_{1,n+1} a_{2,n+2} \ldots a_{m,n+m}, \]
in which we may permute the second set of indices in any manner consistent with the condition that \( \begin{vmatrix} A \\ a \end{vmatrix} \) should not change its sign; so that we may write
\[ \begin{vmatrix} A \\ a \end{vmatrix} \times |A| = Q \times |a|, \]
the correspondence of the determinants in \(|A|\) and \(|a|\) being fixed by the matrix \( \begin{bmatrix} A \\ a \end{bmatrix} \).
The equation (25.) is an immediate consequence of this result; and if in that equation we suppose the correspondence of the determinants to be still fixed by the matrix \( \begin{bmatrix} A \\ a \end{bmatrix} \), we shall have to write
\[ k \times |A| = K \times |a|, \]
or
\[ k \times |A| = -K \times |a|, \]
according as \( \begin{vmatrix} A \\ a \end{vmatrix} \) is a positive or negative number.
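The key identity of this demonstration, that the square of the determinant of the compound matrix is \( Q \times q \), may be verified numerically; the example and helper names below are ours:

```python
from itertools import combinations

# Determinant by Laplace expansion along the first row (small matrices only).
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

# Sum of the squares of the determinants of a q x p matrix (q <= p).
def sum_sq_minors(M):
    q, p = len(M), len(M[0])
    return sum(det([[row[j] for j in c] for row in M]) ** 2
               for c in combinations(range(p), q))

A = [[1, 0, -1, 0], [0, 1, 0, -1]]   # matrix of a system (our example)
a = [[1, 0, 1, 0], [0, 1, 0, 1]]     # a complete set of independent solutions
# the rows of `a` annul the rows of A, so the squared determinant factors:
assert det(A + a) ** 2 == sum_sq_minors(A) * sum_sq_minors(a)
```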
Art. 8. From the preceding principles we may deduce the solution of the following problem, which admits of important applications in other parts of arithmetic:
"To find all the matrices of a given type, of which the determinants have given values, not all equal to zero."
Two particular cases of this problem (those in which the matrix is of the type $2 \times 3$ and $2 \times 4$) occur in the 'Disquisitiones Arithmeticae' (see arts. 279 and 236). In both places Gauss has suppressed the analysis of the problem, and has only given a synthetical demonstration that its conditions are satisfied by the solution he assigns. This, indeed, in art. 279, he expressly observes. He has also suppressed his method of deducing the complete solution from any particular solution,—an omission, however, which may probably be supplied by a comparison of art. 234 with art. 213, i. The very general and most important case, of a matrix of the type $n \times (n+1)$, has been subsequently treated of by M. Hermite*.
Let $\|x\|$ denote a matrix of the type $n \times (n+m)$, of which the constituents are absolutely indeterminate quantities; writing $\lambda$ for $\frac{\Pi(n+m)}{\Pi n.\Pi m}$, we shall represent its determinants by $X_1, X_2, \ldots, X_\lambda$. If $m > 1$, these determinants are not all independent, but are connected by certain identities of the form
$$\Phi(X_1, X_2, \ldots, X_\lambda) = 0, \tag{29.}$$
$\Phi$ denoting a rational and integral homogeneous function with numerical coefficients. If, therefore, $C_1, C_2, \ldots, C_\lambda$ be a given set of integral numbers, which can be represented as the determinants of a matrix of the type $n \times (n+m)$, these numbers will satisfy every relation of the form (29.); so that the identity
$$\Phi(X_1, X_2, \ldots, X_\lambda) = 0$$
will involve also the numerical equation
$$\Phi(C_1, C_2, \ldots, C_\lambda) = 0.$$
To obtain a convenient notation for $C_1, C_2, \ldots, C_\lambda$, let us imagine that we have formed a square matrix of the type $(n+m) \times (n+m)$ by the addition of $m$ horizontal rows to the matrix $\|x\|$; if, in the development of the determinant of this matrix, the coefficient of $X_i$ be the determinant
$$\begin{vmatrix} x_{n+r,\mu_s} \end{vmatrix}, \quad r, s = 1, 2, 3, \ldots m$$
$(\mu_1, \mu_2, \ldots, \mu_m$ denoting $m$ of the numbers $1, 2, \ldots, n+m)$, we may represent $X_i$ and $C_i$ by the symbols $[\mu_1, \mu_2, \ldots, \mu_m]$ and $(\mu_1, \mu_2, \ldots, \mu_m)$ respectively; observing, however, that if two of the numbers $\mu_1, \mu_2, \ldots$ are equal, the value zero is to be attributed to each of these symbols.
If $r$ denote one of the numbers $1, 2, 3, \ldots, n$, the determinants of the matrix obtained
* Crelle's Journal, vol. xl. p. 264; see also Eisenstein, ibid. vol. xxviii. p. 327.
by adding the horizontal row
\[ x_{r,1}, x_{r,2}, \ldots, x_{r,n+m} \]
to the matrix \( \begin{bmatrix} n \times (n+m) \\ x \end{bmatrix} \), are identically equal to zero. We thus obtain \( \lambda \times \frac{m}{n+1} \)
equations of the form
\[ \sum_{i=1}^{m+n} [i, \mu_1, \mu_2, \ldots, \mu_{m-1}]\, x_{r,i} = 0, \tag{31.} \]
\( \mu_1, \mu_2, \ldots, \mu_{m-1} \) representing any combination of \( m-1 \) of the numbers 1, 2, 3, \ldots, \( m+n \).
In connexion with these equations, consider also the similarly formed system,
\[ \sum_{i=1}^{m+n} (i, \mu_1, \mu_2, \ldots, \mu_{m-1})\, y_i = 0. \tag{32.} \]
This system, which is in appearance redundant (containing \( \lambda \times \frac{m}{n+1} \) equations, and only \( m+n \) indeterminates), is in reality defective, and is equivalent to \( m \) independent equations. For if \( (k_1, k_2, \ldots, k_m) \) be one of the given numbers \( C \) which is not equal to zero, the partial system of \( m \) equations
\[ \sum_{i=1}^{m+n} (i, k_1, k_2, \ldots, k_{j-1}, k_{j+1}, \ldots, k_m)\, y_i = 0, \quad j = 1, 2, 3, \ldots, m, \tag{33.} \]
is certainly an independent system, because the determinant of the coefficients of \( y_{k_1}, y_{k_2}, \ldots, y_{k_m} \) is \( (k_1, k_2, \ldots, k_m)^m \), and is therefore different from zero. Again, every equation of (32.) which is not already comprised in (33.), may be obtained by linearly combining the equations of that partial system. To verify this assertion, let
\[ \sum_{i=1}^{m+n} [i, \mu_1, \mu_2, \ldots, \mu_{m-1}]\, x_{r,i} = 0, \quad r = 1, 2, 3, \ldots, n, \tag{34.} \]
be the system of \( n \) equations obtained by attributing to \( r \) the \( n \) values of which it is susceptible in any one of the equations (31.). Eliminating from this system those \( n-1 \) determinants \( [\mu_1, \mu_2, \ldots, \mu_{m-1}] \) in which \( i \) has a value not included in a set of \( m+1 \) numbers \( \nu_1, \nu_2, \ldots, \nu_{m+1} \), arbitrarily selected from the series 1, 2, 3, \ldots, \( n+m \), we obtain a relation, which may be expressed in the form
\[ \sum_{i=1}^{m+1} (-1)^i\, [\nu_i, \mu_1, \mu_2, \ldots, \mu_{m-1}]\, [\nu_1, \nu_2, \ldots, \nu_{i-1}, \nu_{i+1}, \ldots, \nu_{m+1}] = 0, \tag{35.} \]
representing \( \frac{mn\lambda^2}{(m+1)(n+1)} \) equations, since the sets
\[ \mu_1, \mu_2, \ldots, \mu_{m-1}, \nu_1, \nu_2, \ldots, \nu_{m+1} \]
may respectively denote any sets of \( m-1 \) and \( m+1 \) numbers taken from the series 1, 2, \ldots, \( m+n \). Since (35.) is of the form \( \Phi(X_1, X_2, \ldots, X_\lambda) = 0 \), we may at once infer the corresponding relation,
\[ \sum_{i=1}^{m+1} (-1)^i\, (\nu_i, \mu_1, \mu_2, \ldots, \mu_{m-1})\, (\nu_1, \nu_2, \ldots, \nu_{i-1}, \nu_{i+1}, \ldots, \nu_{m+1}) = 0, \tag{36.} \]
by means of which any one of the equations (32.) may be deduced from the equations of the partial system (33.). Thus, if we multiply the equations of that system taken in
order, by the determinants \((-1)^j (k_j, h_1, h_2, \ldots h_{m-1})\), and add the results, we obtain
\[(k_1, k_2, \ldots k_m) \sum_{i=1}^{n+m} (i, h_1, h_2, \ldots h_{m-1})\, y_i = 0,\]
i.e. since \((k_1, k_2, \ldots k_m)\) is not zero,
\[\sum_{i=1}^{n+m} (i, h_1, h_2, \ldots h_{m-1})\, y_i = 0.\]
The system (32.) is therefore equivalent to a system of \(m\) independent equations.
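The simplest instance of the identities \(\Phi = 0\) of the kind (35.) is the quadratic relation among the determinants of a matrix of the type \(2 \times 4\); it may be verified thus (the example is ours):

```python
from itertools import combinations

# Determinant of columns i, j of a 2-row matrix.
def minor(M, i, j):
    return M[0][i] * M[1][j] - M[0][j] * M[1][i]

# For any 2 x 4 matrix the determinants satisfy
# p01*p23 - p02*p13 + p03*p12 = 0, the simplest identity Phi = 0.
M = [[3, 1, 4, 1], [5, 9, 2, 6]]
p = {(i, j): minor(M, i, j) for i, j in combinations(range(4), 2)}
assert p[0, 1] * p[2, 3] - p[0, 2] * p[1, 3] + p[0, 3] * p[1, 2] == 0
```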
Let \( \begin{bmatrix} m \times (m+n) \\ \gamma \end{bmatrix} \) represent the matrix of (33.), or of any other independent system equivalent to (32.) (the determinants of all such matrices are proportional); let \(\Gamma_1, \Gamma_2, \ldots \Gamma_\lambda\) be the determinants of \(\|\gamma\|\); \(\|\xi\|\) and \(\Xi_1, \Xi_2, \ldots \Xi_\lambda\) the matrix and determinants of the system similarly derived from (31.). By the theorem of art. 7, we have
\[ \begin{vmatrix} \xi \\ x \end{vmatrix} \times |\xi| = \Sigma\, \Xi^2 \times |x|, \tag{37.} \]
or, observing that \( \begin{vmatrix} \xi \\ x \end{vmatrix} = \Sigma\, \Xi X \), and that (37.) is an identity of the form \(\Phi = 0\),
\[ \Sigma\, \Gamma C \times |\gamma| = \Sigma\, \Gamma^2 \times |C|, \tag{38.} \]
where \(|C|\) symbolizes the numbers \(C_1, C_2, \ldots C_\lambda\), which correspond to the determinants of \(\|\gamma\|\) in the same inverse order in which, in equation (37.), the determinants of \(\|x\|\) correspond to those of \(\|\xi\|\). But if \( \begin{bmatrix} n \times (m+n) \\ \beta \end{bmatrix} \) represent a system of fundamental solutions of (32.) or (33.), we have also
\[ \begin{vmatrix} \gamma \\ \beta \end{vmatrix} \times |\gamma| = \Sigma\, \Gamma^2 \times |\beta|, \tag{39.} \]
whence, combining (38.) and (39.), and representing the greatest common divisor of \(C_1, C_2, \ldots C_\lambda\) by \(c\), we find
\[ c \times |\beta| = |C|. \]
If, then, \(\|c\|\) denote any square matrix of determinant \(c\), and of the type \(n \times n\), the formula \(\|c\| \times \|\beta\|\) contains the complete solution of the problem.
If \(g\) represent the greatest divisor of \(\|\gamma\|\), we infer from (38.)
\[ c \times |\gamma| = g \times |C|, \]
whence, if \(\|\gamma'\|\) be a prime matrix of the type \(m \times (m+n)\) satisfying the equation
\[ \|\gamma\| = \|g\| \times \|\gamma'\| \quad (\text{see art. 3}), \]
we find
\[ |C| = c \times |\gamma'|. \]
The preceding analysis enables us therefore to obtain simultaneously the representation of the determinants \(|C|\) as the determinants of two complementary matrices, of the types \(n \times (m+n)\) and \(m \times (m+n)\) respectively. We have thus two distinct methods of arriving at the solution of the problem, of which one requires the determination of a set of fundamental solutions of a system of linear equations; the other the reduction (by the method of art. 3) of a given matrix to a prime matrix. The greatest divisor of \(\|\gamma\|\) is evidently \((k_1, k_2, \ldots, k_m)^{m-1} \times c\). If, therefore, \(C\), one of the given numbers \(C_1, C_2, \ldots, C_\lambda\), be a unit, we have only to take \(C\) for \((k_1, k_2, \ldots, k_m)\), and we shall immediately obtain a matrix \(\|\gamma\|\) of the type \(m \times (m+n)\) satisfying the equation
$$|\gamma| = |C|.$$
And similarly might a matrix of the type $n \times (m+n)$, satisfying the same equation, be written down without calculation.
Art. 9. The importance of the case in which $m=1$ is so great, that we may be allowed to point out the identity of the solution obtained by the preceding method with that already given by M. Hermite. Let, then, $C_1, C_2, \ldots, C_{n+1}$ represent the determinants of a matrix of the type $n \times (n+1)$, taken in their natural order (i.e., so taken that, if the matrix be completed by an additional row of constituents,
$$c_1, c_2, \ldots, c_{n+1},$$
the value of its determinant would be
$$c_1C_1 + c_2C_2 + c_3C_3 + \ldots + c_{n+1}C_{n+1}).$$
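In modern terms this convention may be checked numerically; the following Python sketch (the function names are ours, not Smith's) computes the maximal minors with the signs fixed by the bordering identity just stated:

```python
# Hypothetical illustration: signed maximal minors of an n x (n+1) integer
# matrix, ordered so that bordering it with a top row c gives a determinant
# equal to c_1 C_1 + ... + c_{n+1} C_{n+1}.

def det(m):
    """Integer determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def natural_order_minors(A):
    """Maximal minors of an n x (n+1) matrix, signed for the bordering identity."""
    n = len(A)
    return [(-1) ** k * det([row[:k] + row[k + 1:] for row in A])
            for k in range(n + 1)]

A = [[2, 3, 5],
     [1, 4, 6]]                       # a matrix of the type 2 x 3
C = natural_order_minors(A)
c = [7, -2, 9]                        # an arbitrary bordering row
assert det([c] + A) == sum(ci * Ci for ci, Ci in zip(c, C))
```

The assertion verifies the bordering identity for one example; the choice of placing the extra row on top fixes the signs.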
We have then to obtain a set of fundamental solutions of the equation
$$C_1y_1 + C_2y_2 + C_3y_3 + \ldots + C_{n+1}y_{n+1} = 0. \quad \ldots \quad (43.)$$
Such a set may always be obtained by the following particular method. Supposing that $C_1$ is not zero, consider the equations
$$\left.\begin{aligned}
0 &= C_1y_1 + C_2y_2 \\
0 &= C_1y_1 + C_2y_2 + C_3y_3 \\
&\;\vdots \\
0 &= C_1y_1 + C_2y_2 + C_3y_3 + \ldots + C_{n+1}y_{n+1}
\end{aligned}\right\} \quad (44.)$$
and take a particular solution of each of them, assigning to the last indeterminate in each, the least value (zero excepted) of which it is susceptible. If we denote by $\Delta_k$ the greatest common divisor of $C_1, C_2, \ldots, C_k$, so that $\Delta_1 = C_1, \Delta_{n+1} = c$, it is evident that the value of $y_{k+1}$ in the equation
$$C_1y_1 + C_2y_2 + \ldots + C_{k+1}y_{k+1} = 0$$
will be $\frac{\Delta_k}{\Delta_{k+1}}$; and if in the same equation we represent the values of $y_1, y_2, \ldots, y_k$ by
$$r_{k,1}, r_{k,2}, r_{k,3}, \ldots, r_{k,k},$$
the matrix
$$\begin{bmatrix}
r_{1,1}, & \frac{\Delta_1}{\Delta_2}, & 0, & 0, & \ldots, & 0 \\
r_{2,1}, & r_{2,2}, & \frac{\Delta_2}{\Delta_3}, & 0, & \ldots, & 0 \\
r_{3,1}, & r_{3,2}, & r_{3,3}, & \frac{\Delta_3}{\Delta_4}, & \ldots, & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
r_{n,1}, & r_{n,2}, & r_{n,3}, & r_{n,4}, & \ldots, & \frac{\Delta_n}{\Delta_{n+1}}
\end{bmatrix} \quad (45.)$$
will represent a set of fundamental solutions of (43.). For, in the first place, it represents a set of independent solutions, because its first determinant is \( \frac{\Delta_1}{\Delta_2} \times \frac{\Delta_2}{\Delta_3} \times \ldots \times \frac{\Delta_n}{\Delta_{n+1}} \), or \( \frac{\Delta_1}{\Delta_{n+1}} \), or \( \frac{C_1}{c} \); its determinants are therefore proportional to \( C_1, C_2, \ldots \) &c.; and since the first of them is \( \frac{C_1}{c} \), they are respectively equal to the numbers
\[ \frac{C_1}{c}, \frac{C_2}{c}, \frac{C_3}{c}, \ldots \frac{C_{n+1}}{c}, \]
which admit of no common divisor.
To obtain a set of values for the constituents \( r_{ij} \), which occur in the matrix (45.), we may form the series of equations
\[ \begin{aligned}
\lambda_1 C_2 + \mu_1 \Delta_1 &= \Delta_2 \\
\lambda_2 C_3 + \mu_2 \Delta_2 &= \Delta_3 \\
&\;\vdots \\
\lambda_n C_{n+1} + \mu_n \Delta_n &= \Delta_{n+1}
\end{aligned} \quad (46.) \]
It will then be found that the equations (44.) are satisfied by the values of \( r \) comprised in the formula
\[ r_{i,j} = -\lambda_{j-1}\, \mu_j\, \mu_{j+1} \ldots \mu_{i-1}\, \frac{C_{i+1}}{\Delta_{i+1}} \quad [j \leq i;\ \lambda_0 = 1]; \quad (47.) \]
and on substituting these values in the matrix (45.), it will coincide, after an unimportant modification, with that occurring in M. Hermite's solution of the problem.
But, in practice, the simplest method of obtaining a solution of the problem considered in this article is to solve the equation (43.) by Euler's method, and to employ, in the place of the matrix (45.), the matrix of the set of fundamental solutions thus obtained (see art. 6).
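The whole construction of the matrix (45.) is mechanical, and may be sketched in Python as follows; the extended Euclidean algorithm here plays the part of the equations (46.), and the function names are ours:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b), g >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def fundamental_solutions(C):
    """Rows form the triangular matrix (45.): row k solves
    C_1 y_1 + ... + C_{k+1} y_{k+1} = 0 with y_{k+1} = Delta_k / Delta_{k+1}."""
    delta, bezout = [C[0]], [[1]]      # Delta_1 = C_1; Bezout coeffs for Delta_k
    for c in C[1:]:
        g, x, y = ext_gcd(delta[-1], c)
        bezout.append([x * e for e in bezout[-1]] + [y])
        delta.append(g)
    rows = []
    for k in range(1, len(C)):
        q = C[k] // delta[k]           # exact: delta[k] divides C[k]
        row = [-q * e for e in bezout[k - 1]] + [delta[k - 1] // delta[k]]
        rows.append(row + [0] * (len(C) - k - 1))
    return rows

M = fundamental_solutions([6, 10, 15])
# each row is a solution of 6y1 + 10y2 + 15y3 = 0, in the triangular shape (45.)
assert all(sum(c * y for c, y in zip([6, 10, 15], row)) == 0 for row in M)
```

For $C = (6, 10, 15)$ the chain of gcds is $\Delta_1 = 6$, $\Delta_2 = 2$, $\Delta_3 = 1$, and the last entries of the rows are the quotients $\Delta_k/\Delta_{k+1}$, as Smith's construction requires.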
Art. 10. Another problem, closely connected with the preceding, and of no less frequent application, has also been completely solved by M. Hermite*; but as it may serve to illustrate the utility of the methods employed in this paper, we shall venture to resume and generalize it here. The problem is
"Given a set of \( n+1 \) numbers \( C_1, C_2, \ldots, C_{n+1} \) without any common divisor, to assign all the matrices \( \|x\| \) of the type \( n \times (n+1) \) which satisfy the equation
\[ \begin{vmatrix} C \\ x \end{vmatrix} = 1." \]
Let \( c_1, c_2, \ldots, c_{n+1} \) be any particular solution of the equation
\[ C_1 y_1 + C_2 y_2 + \ldots + C_{n+1} y_{n+1} = 1 \quad (48.) \]
(which is always possible, because \( C_1, C_2, \ldots, C_{n+1} \) are relatively prime); and let \( \|\gamma\| \) represent a set of fundamental solutions of the equation
\[ c_1 y_1 + c_2 y_2 + \ldots + c_{n+1} y_{n+1} = 0. \quad (49.) \]
* Liouville, vol. xiv. p. 21.
Then, if \( \|u\| \) represent any unit-matrix of the type \( n \times n \), and \( \lambda_1, \lambda_2, \ldots, \lambda_n \) absolutely indeterminate integers, the complete solution of the problem is contained in the formula
\[ \|u\| \times \|\gamma_{i,j} + \lambda_i C_j\|, \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, n+1. \quad (50.) \]
For if \( \|x\| \) be any one of the matrices contained in that formula, it is readily seen that
\[ \begin{vmatrix} C \\ x \end{vmatrix} = C_1 c_1 + C_2 c_2 + \ldots + C_{n+1} c_{n+1} = 1. \]
Conversely, if \( \|x\| \) be a matrix satisfying the equation \( \begin{vmatrix} C \\ x \end{vmatrix} = 1 \), \( \|x\| \) is included in the formula (50.). To show this, we observe that the complete solution of equation (48.) is contained in the formula
\[ y_j = c_j + \sum_{i=1}^{n} \theta_{i,j} \lambda_i, \quad j = 1, 2, \ldots, n+1, \quad (51.) \]
in which \( \|\theta\| \) is any set of fundamental solutions of the equation
\[ C_1 y_1 + C_2 y_2 + \ldots + C_{n+1} y_{n+1} = 0, \quad (52.) \]
and \( \lambda_1, \lambda_2, \ldots, \lambda_n \) are indeterminate integers. The complete solution of the same equation (48.) is therefore supplied by the determinants of the matrix \( \|\gamma_{i,j} + \lambda_i C_j\| \). For those determinants may be represented by the formula
\[ c_j + \sum_{i=1}^{n} [\hat{i}, j]\, \lambda_i, \quad j = 1, 2, \ldots, n+1, \]
in which \( [\hat{i}, j] \) symbolizes a first minor of the determinant \( \begin{vmatrix} C \\ \gamma \end{vmatrix} \), so that
\[ [\hat{i}, j] = \frac{d}{d\gamma_{i,j}} \begin{vmatrix} C \\ \gamma \end{vmatrix}. \]
But the numbers \( [\hat{i}, 1], [\hat{i}, 2], \ldots, [\hat{i}, n+1] \) satisfy (52.) for every value of \( i \); and, since \( \begin{vmatrix} C \\ \gamma \end{vmatrix} = 1 \), the determinants of the matrix
\[ \|[\hat{i}, j]\|, \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, n+1, \quad (53.) \]
are the numbers \( C_1, C_2, \ldots, C_{n+1} \), and are therefore relatively prime. It follows from this that (53.) represents a set of fundamental solutions of (52.); i.e., that the complete solution of (48.) is represented by the determinants of \( \|\gamma_{i,j} + \lambda_i C_j\| \). If then \( \|x\| \) be a matrix satisfying the equation \( \begin{vmatrix} C \\ x \end{vmatrix} = 1 \), since the determinants of \( \|x\| \) evidently satisfy (48.), values can be assigned to \( \lambda_1, \lambda_2, \ldots, \lambda_n \) which shall verify the equation
\[ |\gamma_{i,j} + \lambda_i C_j| = |x|, \]
whence it follows that
\[
\|x\| = \|u\| \times \|\gamma_{i,j} + \lambda_i C_j\|,
\]
\( \|u\| \) denoting a unit-matrix, i.e., \( \|x\| \) is one of the matrices included in the formula (50.).
The result incidentally obtained in the foregoing analysis, that the complete solution
of an equation of the form
\[ C_1 x_1 + C_2 x_2 + \ldots + C_{n+1} x_{n+1} = 1 \]
can be exhibited in the determinantal form (50.), is occasionally useful.
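As a sketch of this remark (in Python, with names of our own choosing), a particular solution of such an equation may be obtained by chaining the extended Euclidean algorithm; the complete solution then follows by adding arbitrary multiples of fundamental solutions of the homogeneous equation:

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b), g >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve_unit(C):
    """A particular integer solution of C_1 x_1 + ... + C_{n+1} x_{n+1} = 1,
    assuming the C_i admit of no common divisor."""
    g, coeffs = C[0], [1]
    for c in C[1:]:
        g2, x, y = ext_gcd(g, c)
        coeffs = [x * e for e in coeffs] + [y]
        g = g2
    assert g == 1, "the C_i must be relatively prime"
    return coeffs

x = solve_unit([6, 10, 15])
assert sum(c * v for c, v in zip([6, 10, 15], x)) == 1
```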
The preceding problem is a particular case of the following more general enunciation:
"Given a prime matrix \( \|C\| \) of the type \( m \times (m+n) \), to find all the matrices \( \|x\| \) of the type \( n \times (m+n) \) which satisfy the equation
\[ \begin{vmatrix} C \\ x \end{vmatrix} = 1." \quad (54.) \]
Let \( \|\gamma\| \) be a matrix which satisfies (54.); let the numbers \( \mu_{i,j} \) represent absolute indeterminates, and \( \|u\| \) any unit-matrix; the complete solution of the problem is contained in the formula
\[ \|x\| = \|u\| \times \|\gamma + \Sigma \mu C\|, \quad (55.) \]
where \( \|\gamma + \Sigma \mu C\| \) represents the matrix
\[ \left\| \gamma_{i,j} + \sum_{\theta=1}^{m} \mu_{i,\theta} C_{\theta,j} \right\|, \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, n+m. \]
For if \( \|x\| \) be a matrix satisfying the equation (54.), we have
\[ \begin{vmatrix} C \\ x \end{vmatrix} = \begin{vmatrix} C \\ \gamma \end{vmatrix} = 1, \]
and consequently
\[ \begin{bmatrix} C \\ x \end{bmatrix} = \|v\| \times \begin{bmatrix} C \\ \gamma \end{bmatrix}, \]
\( \|v\| \) denoting a unit of the type \( (m+n) \times (m+n) \). But because the first \( m \) horizontal rows in \( \begin{bmatrix} C \\ x \end{bmatrix} \) and \( \begin{bmatrix} C \\ \gamma \end{bmatrix} \) are identical, it is evident that
\[ v_{i,j} = 0, \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, m+n, \]
except when \( i = j \), in which case
\[ v_{1,1} = v_{2,2} = \ldots = v_{m,m} = 1. \]
The unit-matrix \( \|v\| \) therefore arises from the composition of two unit-matrices of the forms
\[ \begin{bmatrix}
1, & 0, & 0, & \ldots & 0, & 0, & 0, & \ldots & 0 \\
0, & 1, & 0, & \ldots & 0, & 0, & 0, & \ldots & 0 \\
0, & 0, & 1, & \ldots & 0, & 0, & 0, & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\
\lambda_{1,1}, & \lambda_{1,2}, & \lambda_{1,3}, & \ldots & \lambda_{1,m}, & 1, & 0, & \ldots & 0 \\
\lambda_{2,1}, & \lambda_{2,2}, & \lambda_{2,3}, & \ldots & \lambda_{2,m}, & 0, & 1, & \ldots & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots \\
\lambda_{n,1}, & \lambda_{n,2}, & \lambda_{n,3}, & \ldots & \lambda_{n,m}, & 0, & 0, & \ldots & 1
\end{bmatrix} \]
and
\[
\begin{bmatrix}
1, & 0, & 0, & \ldots & 0, & 0, & \ldots & 0 \\
0, & 1, & 0, & \ldots & 0, & 0, & \ldots & 0 \\
0, & 0, & 1, & \ldots & 0, & 0, & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0, & 0, & 0, & \ldots & u_{1,1}, & u_{1,2}, & \ldots & u_{1,n} \\
0, & 0, & 0, & \ldots & u_{2,1}, & u_{2,2}, & \ldots & u_{2,n} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0, & 0, & 0, & \ldots & u_{n,1}, & u_{n,2}, & \ldots & u_{n,n}
\end{bmatrix}
\]
\(\|\lambda\|\) denoting a matrix of the type \(n \times m\), of which the constituents may be any numbers whatever, and \(\|u\|\) a unit-matrix of the type \(n \times n\). If for \(\|\lambda\|\) we substitute the matrix
\[
\|\mu\| = \|u\|^{-1} \times \|\lambda\|,
\]
it is readily seen that we may invert the order of the factors in the expression of \(||v||\); so that, using an abbreviated notation, the signification of which is evident, we may write either
\[
||v|| = \begin{bmatrix} 1, & 0 \\ \lambda, & 1 \end{bmatrix} \times \begin{bmatrix} 1, & 0 \\ 0, & u \end{bmatrix},
\]
or
\[
||v|| = \begin{bmatrix} 1, & 0 \\ 0, & u \end{bmatrix} \times \begin{bmatrix} 1, & 0 \\ \mu, & 1 \end{bmatrix}.
\]
Substituting the latter expression of \(\|v\|\) in the equation
\[
\begin{bmatrix} C \\ x \end{bmatrix} = \|v\| \times \begin{bmatrix} C \\ \gamma \end{bmatrix},
\]
we immediately infer
\[
\|x\| = \|u\| \times \|\gamma + \Sigma \mu C\|.
\]
Every matrix satisfying the equation \(\begin{vmatrix} C \\ x \end{vmatrix} = 1\) is therefore comprised in the formula (55.); and since it is evident, conversely, that every matrix comprised in (55.) satisfies the equation, that formula contains the complete solution of the question.
A particular solution of the problem (which may be taken for \(\|\gamma\|\)) can be obtained as follows:—Complete the matrix \(\|C\|\) by any \(n\) horizontal rows of constituents which do not cause the determinant of the resulting matrix to vanish. From this matrix a prime (\(i.e.\) a unit) matrix of the same type is to be deduced by the method of art. 3, a reduction which can always be effected without changing the prime matrix \(\|C\|\).
Art. 11. The consideration of sets of fundamental solutions of linear systems is also of use in the theory of indeterminate systems containing terms not affected by any indeterminate. Let
\[
A_{i,0} + A_{i,1} x_1 + A_{i,2} x_2 + \ldots + A_{i,n+m} x_{n+m} = 0, \quad i = 1, 2, \ldots, n, \quad (56.)
\]
represent such a system; its general solution will assume the form
\[
x_k = a_k + \sum_{\theta=1}^{m} p_\theta\, \alpha_{k,\theta}, \quad k = 1, 2, \ldots, n+m, \quad (57.)
\]
where \(a_1, a_2, \ldots, a_{n+m}\) is a particular solution of (56.), \(p_1, p_2, \ldots, p_m\) are indeterminate numbers, and \(\|\alpha\|\) a set of fundamental solutions of the system
\[
A_{i,1}x_1 + A_{i,2}x_2 + \ldots + A_{i,n+m}x_{n+m} = 0, \quad i = 1, 2, \ldots, n. \quad (58.)
\]
Whenever, therefore, the proposed system is resolvable, its complete solution involves \(m\) indeterminates; but in order that it should be resolvable, a certain condition must be satisfied by its coefficients. This condition is, "that the greatest divisors of its augmented and unaugmented matrices must be equal*." We shall call these divisors \(D\) and \(D_0\) respectively, representing the matrices themselves by \(\|A\|\) and \(\|A_0\|\). That the condition is necessary may be seen by eliminating in turn every combination of \(n-1\) indeterminates from (56.). We thus find that every determinant of \(\|A\|\) is divisible by \(D_0\), i.e. that \(D\) is divisible by \(D_0\); but evidently \(D\) divides \(D_0\), so that \(D_0 = D\). To show that the condition is sufficient, as well as necessary, consider the system
\[
A_{i,0}x_0 + A_{i,1}x_1 + A_{i,2}x_2 + \ldots + A_{i,n+m}x_{n+m} = 0, \quad i = 1, 2, \ldots, n, \quad (59.)
\]
and let \(\begin{bmatrix} (m+1) \times (n+m+1) \\ \theta \end{bmatrix}\) represent a set of its fundamental solutions. To say that (56.) is resolvable, is the same thing as to say that (59.) admits of solutions in which the value of \(x_0\) is unity; and (59.) will not, or will, admit of such solutions, according as \(\theta_{1,0}, \theta_{2,0}, \ldots, \theta_{m+1,0}\) do, or do not, admit of any common divisor beside unity. But, by the theorem of art. 7, those determinants of \(\|\theta\|\) into which the column \(\theta_{1,0}, \theta_{2,0}, \ldots\) enters, are equal to the determinants of \(\|A_0\|\), taken in a proper order and divided by \(D\). If \(D = D_0\), the determinants of \(\|A_0\|\), divided by \(D\), are relatively prime, and consequently those determinants of \(\|\theta\|\) which contain \(\theta_{1,0}, \theta_{2,0}, \ldots, \theta_{m+1,0}\) are also relatively prime; a conclusion which implies that \(\theta_{1,0}, \theta_{2,0}, \ldots, \theta_{m+1,0}\) are themselves relatively prime, i.e. that the system (56.) is resolvable.
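The criterion admits a direct numerical check. In the Python sketch below (our notation), `greatest_divisor` computes what Smith calls the greatest divisor of a matrix, namely the greatest common divisor of its maximal minors; the test applies to independent systems, and the sign of the constant terms is immaterial:

```python
from itertools import combinations
from functools import reduce
from math import gcd

def det(m):
    """Integer determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def greatest_divisor(M):
    """gcd of all maximal (n x n) minors of an n x p matrix (n <= p)."""
    n, p = len(M), len(M[0])
    minors = (abs(det([[row[j] for j in cols] for row in M]))
              for cols in combinations(range(p), n))
    return reduce(gcd, minors)

def resolvable(A, b):
    """Criterion of art. 11 for an independent system: A x = b has an
    integer solution iff the augmented and unaugmented greatest divisors agree."""
    return greatest_divisor([row + [bi] for row, bi in zip(A, b)]) == greatest_divisor(A)

assert resolvable([[2, 4]], [6])        # 2x + 4y = 6 is soluble in integers
assert not resolvable([[2, 4]], [3])    # 2x + 4y = 3 is not
```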
This criterion is not immediately applicable if the system (56.) be not independent, i.e. if the determinants of its augmented matrix \(A\) be all equal to zero. But it may
* [This Theorem has already been given by M. Ignaz Heger (Memoirs of the Vienna Academy, vol. xiv. second part, p. 111). I regret that in the abstract of the present paper, which has been inserted in the 'Proceedings of the Royal Society,' no reference was made to M. Heger's Memoir, with the contents of which I was unacquainted, at the time at which that abstract was prepared. M. Heger's demonstration (adapted to the terminology here employed) is, in the main, as follows. (1) If the unaugmented matrix of an indeterminate system be prime, the system is always resolvable. For every determinate system, of which the matrix is a unit-matrix, is resolvable in integral numbers; and we may suppose the given indeterminate system to form part of such a determinate system (see art. 10, supra). (2) The equation \(A = D \times A'\), in which \(D\) is a square matrix, having \(D\) for its determinant, and \(A'\) a prime matrix of the same type as \(A\), is always resolvable (see art. 3). We can therefore replace the given system (56.) by a system of which the augmented matrix is \(A'\), and which is resolvable or irresolvable at the same time with the given system. But if \(D_0 = D\), the unaugmented matrix of this derived system is prime; i.e. if \(D_0 = D\), the proposed system is resolvable. (3) That the condition is necessary as well as sufficient may be proved as in the text.—Sept. 1861, H. J. S. S.]
be applied to any independent system, equivalent to the proposed system, and deduced linearly from it.
If we represent by $D_k$ the greatest divisor of the matrix, deduced from the matrix of (59.) by omitting from it the column $A_{1,k}, A_{2,k}, \ldots, A_{n,k}$, we may enunciate the following proposition:
"In every solution of the system (59.), the value of $x_k$ is divisible by $\frac{D_k}{D}$; and, conversely, a solution of that system can always be assigned in which $x_k$ shall have any given value divisible by $\frac{D_k}{D}$."
It will be seen that the solution of (56.) depends, first, on the solution of (59.), and, secondly, on that of the indeterminate equation
$$\theta_{1,0} x_1 + \theta_{2,0} x_2 + \ldots + \theta_{m+1,0} x_{m+1} = 1. \quad (60.)$$
If we represent the values of the indeterminates in this equation as the determinants of the matrix
$$\|\gamma_{i,j} + \mu_i \theta_{j,0}\|, \quad i = 1, 2, \ldots, m; \; j = 1, 2, \ldots, m+1$$
(see art. 10), we may express the most general values of the indeterminates which satisfy (56.) in the determinantal form
$$x_k = \begin{vmatrix}
\theta_{1,k} & \theta_{2,k} & \ldots & \theta_{m+1,k} \\
\gamma_{1,1} + \mu_1 \theta_{1,0} & \gamma_{1,2} + \mu_1 \theta_{2,0} & \ldots & \gamma_{1,m+1} + \mu_1 \theta_{m+1,0} \\
\gamma_{2,1} + \mu_2 \theta_{1,0} & \gamma_{2,2} + \mu_2 \theta_{2,0} & \ldots & \gamma_{2,m+1} + \mu_2 \theta_{m+1,0} \\
\vdots & \vdots & \ddots & \vdots \\
\gamma_{m,1} + \mu_m \theta_{1,0} & \gamma_{m,2} + \mu_m \theta_{2,0} & \ldots & \gamma_{m,m+1} + \mu_m \theta_{m+1,0}
\end{vmatrix} \quad (61.)$$
Art. 12. We shall now indicate an important transformation of which any square matrix of integral numbers is susceptible. We begin with the following theorem:
"If a given rectangular matrix be premultiplied by a unit-matrix, the greatest common divisor of any vertical column of minor determinants is the same in the resulting as in the given matrix."
For it is evident that any minor, either in the given or in the resulting matrix, is an integral and linear function of the minors formed from the same vertical columns in the other matrix.
Similarly, it may be shown that
"When a square matrix is post-multiplied by any prime rectangular matrix, the greatest common divisor of any horizontal row of minors is the same in the resulting rectangular matrix as in the given square matrix."
For if
$$\begin{bmatrix} n \times (n+m) \\ A \end{bmatrix} = \begin{bmatrix} n \times n \\ B \end{bmatrix} \times \begin{bmatrix} n \times (n+m) \\ C \end{bmatrix},$$
where \( \|C\| \) is a prime matrix, it is clear that every minor of \( \|A\| \) is a linear function of the minors formed from the same horizontal rows of \( \|B\| \); so that if \( a \) and \( b \) be the greatest common divisors of any corresponding horizontal rows of minors in those two matrices, \( a \) is divisible by \( b \). But again, if \( \theta \) be any one of the determinants of \( \|C\| \), and \( s \) be the order of the minors under consideration, any minor of \( \|B\| \), after multiplication by \( \theta^s \), may be expressed as a linear function of a certain group of the minors taken from the same horizontal rows of \( \|A\| \). Consequently \( \theta^s \times b \) is divisible by \( a \); or, since \( \theta \) may have any one of a series of values which are relatively prime, \( b \) is divisible by \( a \), i.e. \( b = a \).
By combining these results we obtain the theorem.
"If \( \nabla_n, \nabla_{n-1}, \nabla_{n-2}, \ldots \nabla_1 \) represent the greatest common divisors of all the minors of order \( n, n-1, \ldots 1 \), respectively which can be formed out of a given square matrix, these numbers will remain unchanged, when the given matrix is premultiplied by any unit-matrix, and post-multiplied by any prime matrix whatsoever."
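A numerical verification of the theorem (a Python sketch with helper names of our own; the post-multiplying prime matrix is here taken square, i.e. a unit-matrix):

```python
from itertools import combinations
from functools import reduce
from math import gcd

def det(m):
    """Integer determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def nabla(M, k):
    """gcd of all order-k minor determinants of the square matrix M."""
    n = len(M)
    minors = (abs(det([[M[i][j] for j in cols] for i in rows]))
              for rows in combinations(range(n), k)
              for cols in combinations(range(n), k))
    return reduce(gcd, minors)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

M = [[2, 4], [6, 8]]
U = [[1, 1], [0, 1]]     # a pre-multiplying unit-matrix (determinant 1)
V = [[1, 0], [2, 1]]     # a post-multiplying unit-matrix (determinant 1)
P = matmul(matmul(U, M), V)
# the numbers nabla_1, nabla_2 are unchanged by the transformation
assert [nabla(P, k) for k in (1, 2)] == [nabla(M, k) for k in (1, 2)]
```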
Art. 13. Let \( \theta \), the determinant of the square matrix \( \|a\| \), be a positive number, different from zero. It may be shown that by post-multiplication with a properly assumed unit \( \|v\| \), the matrix \( \|a\| \) can be reduced to the form
\[
\begin{bmatrix}
\mu_1, & r_{1,2}, & r_{1,3}, & \ldots, & r_{1,n} \\
0, & \mu_2, & r_{2,3}, & \ldots, & r_{2,n} \\
0, & 0, & \mu_3, & \ldots, & r_{3,n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0, & 0, & 0, & \ldots, & \mu_n
\end{bmatrix}, \quad (62.)
\]
where \( \mu_1, \mu_2, \ldots, \mu_n \) are positive numbers, such that \( \mu_1 \times \mu_2 \times \ldots \times \mu_n = \theta \), and the constituents \( r_{i,k} \) satisfy the inequalities
\[ 0 \leq r_{i,k} < \mu_i. \quad (63.) \]
This was first observed by Gauss for the case \( n = 2 \); by Seeber for \( n = 3 \); and the general theorem has been enunciated by M. Hermite*. Its precise statement is
"Every matrix of the type \( n \times n \) is equivalent (by post-multiplication) to one, and only one, of the reduced matrices included in the formula (62.)."
To show this, let \( v_{1,1}, v_{2,1}, \ldots, v_{n,1} \) be integral and relatively prime numbers which satisfy the equations
\[ a_{i,1} v_{1,1} + a_{i,2} v_{2,1} + \ldots + a_{i,n} v_{n,1} = 0, \quad i = 2, 3, \ldots, n, \]
and the inequality
\[ a_{1,1} v_{1,1} + a_{1,2} v_{2,1} + \ldots + a_{1,n} v_{n,1} > 0. \]
Then it is evident that, if \( \|v\| \) be a unit-matrix of which \( v_{1,1}, v_{2,1}, \ldots, v_{n,1} \) form the first column, the matrix \( \|a\| \times \|v\| \) will assume the form
\[
\begin{bmatrix}
\mu_1, & b_{1,2}, & b_{1,3}, & \ldots, & b_{1,n} \\
0, & b_{2,2}, & b_{2,3}, & \ldots, & b_{2,n} \\
\vdots & \vdots & \vdots & & \vdots \\
0, & b_{n,2}, & b_{n,3}, & \ldots, & b_{n,n}
\end{bmatrix},
\]
* Gauss, Disq. Arith. art. 213; Seeber, "Untersuchungen ueber die Eigenschaften der positiven ternären quadratischen Formen" (Mannheim, 1831), art. 31; M. Hermite, Crelle, vol. xli. p. 192.
where \( \mu_1 = a_{1,1} v_{1,1} + a_{1,2} v_{2,1} + \ldots + a_{1,n} v_{n,1} \).
If this matrix be post-multiplied by the unit,
\[
\begin{bmatrix}
1, & k_2, & k_3, & \ldots, & k_n \\
0, & 1, & 0, & \ldots, & 0 \\
0, & 0, & 1, & \ldots, & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0, & 0, & 0, & \ldots, & 1
\end{bmatrix}
\]
the constituents \( b_{1,i} \) will be changed into \( b_{1,i} + \mu_1 k_i \), while all the other constituents will remain unaltered; so that, by assigning proper values to the numbers \( k_2, \ldots, k_n \), we may bring the given matrix \( \|a\| \) into the form
\[
\begin{bmatrix}
\mu_1, & r_{1,2}, & r_{1,3}, & \ldots, & r_{1,n} \\
0, & b_{2,2}, & b_{2,3}, & \ldots, & b_{2,n} \\
0, & b_{3,2}, & b_{3,3}, & \ldots, & b_{3,n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0, & b_{n,2}, & b_{n,3}, & \ldots, & b_{n,n}
\end{bmatrix}
\]
where \( r_{1,i} \) verifies the inequality
\[ 0 \leq r_{1,i} < \mu_1. \]
From this it is easy to infer that, if a matrix of the type \((n-1) \times (n-1)\) can be reduced to the form (62.), the same reduction is possible for a matrix of the type \( n \times n \); i.e., since the reduction is possible when \( n = 1 \), it is possible for every value of \( n \).
To prove that \( \|a\| \) is equivalent (by post-multiplication) to only one of the reduced matrices (62.), it is sufficient to show that no two reduced matrices can be equivalent. If \( \|a\| \) and \( \|a'\| \) be two reduced matrices, and \( \|v\| \) a unit-matrix, such that \( \|a\| \times \|v\| = \|a'\| \), it may be inferred, by comparing the corresponding constituents of the two matrices \( \|a\| \times \|v\| \) and \( \|a'\| \) (beginning with the lowest horizontal rows of each and proceeding upwards), that all the constituents of \( \|v\| \) which lie below its principal diameter are zero; and consequently that the constituents of the principal diameter itself are all positive units.
Further, that the constituents above the principal diameter of \( \|v\| \) are likewise zero, may be established (for each line of constituents parallel to the diameter, beginning with that nearest to it) by means of the inequalities (63.) which are satisfied by the constituents both of \( \|a\| \) and \( \|a'\| \). It thus appears that two reduced matrices cannot be equivalent, without being identical. It will be observed that the reducing unit is unique;
i.e. that only one post-multiplying unit can be assigned by which a given matrix can be reduced to the form (62.).
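The reduction of art. 13 may be sketched as an algorithm of unimodular column operations (a modern rendering, not Smith's procedure verbatim); the result is upper triangular, with positive diagonal constituents \( \mu_i \) and the residues \( 0 \leq r_{i,k} < \mu_i \) of (62.):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b), g >= 0."""
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def reduce_post(a):
    """Reduce a nonsingular integer matrix to the form (62.) by unimodular
    column operations (post-multiplication), working upward from the bottom row."""
    n = len(a)
    a = [row[:] for row in a]
    def col(j):
        return [a[i][j] for i in range(n)]
    def setcol(j, v):
        for i in range(n):
            a[i][j] = v[i]
    for i in range(n - 1, -1, -1):
        for j in range(i):                   # clear a[i][j] for j < i
            if a[i][j] == 0:
                continue
            g, x, y = ext_gcd(a[i][i], a[i][j])
            p, q = a[i][i] // g, a[i][j] // g
            ci, cj = col(i), col(j)
            setcol(i, [x * u + y * v for u, v in zip(ci, cj)])
            setcol(j, [p * v - q * u for u, v in zip(ci, cj)])   # determinant-1 step
        if a[i][i] < 0:                      # make the diagonal positive
            setcol(i, [-u for u in col(i)])
        for k in range(i + 1, n):            # reduce row i to the right mod mu_i
            qq = a[i][k] // a[i][i]
            ck, ci = col(k), col(i)
            setcol(k, [u - qq * v for u, v in zip(ck, ci)])
    return a

assert reduce_post([[4, 7], [2, 5]]) == [[6, 5], [0, 1]]
```

Each pairwise step applies a two-column transformation of determinant 1, so the composition of all the steps is a post-multiplying unit.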
If instead of reducing the given matrix \( \|a\| \) by post-multiplication we employ a pre-multiplying unit, we obtain the following theorem:
"Every matrix of the type \( n \times n \) and of determinant \( \theta \) is equivalent (by pre-multiplication) to one, and only one, of the matrices included in the formula (62.), in which \( \mu_1 \times \mu_2 \times \ldots \times \mu_n = \theta \), and \( r_{i,k} \) satisfies the inequality
\[ 0 \leq r_{i,k} < \mu_k. \quad (66.) \]
Art. 14*. The transformation to which we have referred in art. 12 is obtained by employing simultaneously a pre-multiplying and a post-multiplying unit-matrix. It is expressed by the equation
\[ \|a\| = \|\alpha\| \times \left[ \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right] \times \|\beta\|, \quad (67.) \]
in which \( \|a\| \) is a given square matrix of the type \( n \times n \), \( \|\alpha\| \) and \( \|\beta\| \) are unit-matrices, and \( \nabla_n, \nabla_{n-1}, \nabla_{n-2}, \ldots, \nabla_1, \nabla_0 \) are the determinant and greatest common divisors of the minor determinants of \( \|a\| \), so that, in particular, \( \nabla_n \) is the determinant of \( \|a\| \), \( \nabla_{n-1} \) the greatest common divisor of its minor determinants of order \( n-1 \), \( \nabla_1 \) the greatest common divisor of its constituents, and \( \nabla_0 = 1 \). The units \( \|\alpha\| \) and \( \|\beta\| \) are not absolutely determined, but admit, when \( n > 1 \), of an infinite number of different values.
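In modern terminology the diagonal of (67.) consists of the invariant factors of \( \|a\| \), and may be computed directly from the numbers \( \nabla_k \), as in the following Python sketch (helper names are ours; practical algorithms proceed by elimination rather than by enumerating minors):

```python
from itertools import combinations
from functools import reduce
from math import gcd

def det(m):
    """Integer determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def nabla(M, k):
    """gcd of all order-k minor determinants of the square matrix M."""
    n = len(M)
    minors = (abs(det([[M[i][j] for j in cols] for i in rows]))
              for rows in combinations(range(n), k)
              for cols in combinations(range(n), k))
    return reduce(gcd, minors)

def diagonal_of_67(M):
    """The diagonal of formula (67.): nabla_n/nabla_{n-1}, ..., nabla_1/nabla_0,
    with nabla_0 = 1; the divisions are exact."""
    n = len(M)
    nab = [1] + [nabla(M, k) for k in range(1, n + 1)]
    return [nab[k] // nab[k - 1] for k in range(n, 0, -1)]

assert diagonal_of_67([[2, 4], [6, 8]]) == [4, 2]
```

For the example, \( \nabla_1 = 2 \) and \( \nabla_2 = 8 \), so the diagonal of (67.) is \( 8/2, \, 2/1 \).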
If \( n = 1 \), it is evident that the formula (67.) is verified; for we have the identical equation
\[ \|a\| = \|1\| \times \left[ \frac{\nabla_1}{\nabla_0} \right] \times \|1\|. \]
It is therefore sufficient to show that, if the transformation indicated in the formula can be effected for matrices of the type \( (n-1) \times (n-1) \), it can also be effected for matrices of the type \( n \times n \). The demonstration depends on an elementary principle, which it is worth while to enunciate separately.
"If
\[ U_i = A_{i,1}x_1 + A_{i,2}x_2 + \ldots + A_{i,n+m}x_{n+m}, \quad i = 1, 2, 3, \ldots, n \]
(68.)
denote a system of \( n \) linear functions of \( n+m \) indeterminates, \( (m \geq 0) \), and if the constituents of the matrix \( \|A\| \) do not admit of any common divisor, it is always possible to assign integral values to \( x_1, x_2, \ldots, x_{n+m} \), which shall render \( U_1, U_2, \ldots, U_n \) relatively prime."
For, in the first place, we can obtain values for \( U_1, U_2, \ldots, U_n \), which shall not have any common divisor with a given number \( M \). Let \( p, q, r, \ldots \) be the different prime divisors of \( M \); one at least of the constituents of \( \|A\| \), for example \( A_{i,j} \), is prime to \( p \).
Attributing to \( x_j \) a value prime to \( p \), and values divisible by \( p \) to the remaining indeterminates, we shall obtain for \( U_i \) a value which is certainly prime to \( p \). Similarly, by subjecting the indeterminates to proper congruential conditions with respect to the modules \( q, r, \ldots \), we can render one, at least, of the functions \( U \) prime to \( q \), one prime to \( r \), and
* [This article has been in great part rewritten since the paper was read. The demonstration is not essentially changed, but is presented in what seems to be a simpler form.—Sept. 1861, H. J. S. S.]
so on; i.e., since we can assign to the indeterminates values simultaneously satisfying all these congruential conditions, we can give to \( U_1, U_2, \ldots, U_n \) values the greatest common divisor of which is prime to \( M \). Let \( D_n \) be the greatest divisor of \( \|A\| \), and \( D_{n-1} \) the greatest common divisor of the first minors of \( \|A\| \); and let \( C_1, C_2, \ldots, C_n \) be a set of simultaneous values of \( U_1, U_2, \ldots, U_n \), having a greatest common divisor \( c \) which is prime to \( \frac{D_n}{D_{n-1}} \).
Since the equations
\[
A_{i,1}x_1 + A_{i,2}x_2 + \cdots + A_{i,n+m}x_{n+m} = C_i,
\]
\( i = 1, 2, 3, \ldots, n \),
are resoluble, it will follow from the condition of resolubility (see art. 11) that the determinants of its augmented matrix, and in particular those which contain the column \( C_1, C_2, \ldots, C_n \), are divisible by \( D_n \). Let \( \theta \times c \times D_{n-1} \) be the greatest common divisor of these last determinants; then \( \theta \times c \times D_{n-1} \) is divisible by \( D_n \); and, since \( c \) is prime to \( \frac{D_n}{D_{n-1}} \), \( \theta \) is divisible by \( \frac{D_n}{D_{n-1}} \).
It appears from this, that the condition of resolubility is satisfied by the system
\[
A_{i,1}x_1 + A_{i,2}x_2 + \cdots + A_{i,n+m}x_{n+m} = \frac{C_i}{c},
\]
\( i = 1, 2, 3, \ldots, n \),
that is to say, it is possible to obtain a simultaneous system of relatively prime values for \( U_1, U_2, \ldots, U_n \).
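The principle admits a naive verification by search; the following Python sketch (a brute-force stand-in for the congruential argument above) looks for integer values of the indeterminates rendering the forms \( U_i \) relatively prime:

```python
from itertools import product
from functools import reduce
from math import gcd

def coprime_values(A, bound=3):
    """Search small integer vectors x until the values U_i = sum_j A[i][j]*x_j
    are relatively prime; such x exists when the entries of A have gcd 1."""
    m = len(A[0])
    for x in product(range(-bound, bound + 1), repeat=m):
        U = [sum(a * v for a, v in zip(row, x)) for row in A]
        if reduce(gcd, (abs(u) for u in U)) == 1:
            return list(x), U
    return None

# the entries 2, 3, 4, 6 have no common divisor, so coprime values exist
x, U = coprime_values([[2, 3], [4, 6]])
assert reduce(gcd, (abs(u) for u in U)) == 1
```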
To apply this principle to the transformation of the matrix \( \|a\| \), let
\[
[a_{i,j}] = \frac{1}{\nabla_{n-1}} \frac{d\nabla_n}{da_{i,j}}.
\]
The constituents of the matrix \( \|[a]\| \) do not admit of any common divisor; consequently, in the system
\[
[a_{1,i}]\,b_{1,1} + [a_{2,i}]\,b_{2,1} + \ldots + [a_{n,i}]\,b_{n,1} = u_{i,1}, \quad i = 1, 2, \ldots, n, \quad (69.)
\]
we can assign values to \( b_{1,1}, b_{2,1}, \ldots, b_{n,1} \) which shall render \( u_{1,1}, u_{2,1}, \ldots, u_{n,1} \) relatively prime. Let \( \|u\| \) denote a unit-matrix of which the first column is \( u_{1,1}, u_{2,1}, \ldots, u_{n,1} \); and \( \|b\| \) a square matrix of which the first column is \( b_{1,1}, b_{2,1}, \ldots, b_{n,1} \), and of which the remaining constituents are defined by the equations
\[
b_{i,j} = a_{i,1}u_{1,j} + a_{i,2}u_{2,j} + \ldots + a_{i,n}u_{n,j}, \quad i = 1, 2, \ldots, n; \; j = 2, 3, \ldots, n. \quad (70.)
\]
Observing that the systems (69.) and (70.) involve the inverse system,
\[
a_{i,1}u_{1,1} + a_{i,2}u_{2,1} + \ldots + a_{i,n}u_{n,1} = \frac{\nabla_n}{\nabla_{n-1}}\, b_{i,1}, \quad i = 1, 2, \ldots, n,
\]
we infer that the matrices \( \|u\| \) and \( \|b\| \) verify the equation
\[
\|a\| \times \|u\| = \|b\| \times \left[ \frac{\nabla_n}{\nabla_{n-1}}, 1, 1, \ldots \right]. \quad (73.)
\]
in which \( \left[ \frac{\nabla_n}{\nabla_{n-1}}, 1, 1, \ldots \right] \) denotes a diagonal matrix of the type \( n \times n \). It follows from (73.) that \( \nabla_{n-1} \) is the determinant of \( \|b\| \); let that matrix be reduced by premultiplication with a unit-matrix; and let
$$\|b\| = \|v\| \times \|\nabla_{n-1}\|,$$
where $\|v\|$ is the reducing unit, and $\|\nabla_{n-1}\|$ the reduced matrix,
$$\begin{bmatrix}
\mu_1, & r_{1,2}, & r_{1,3}, & \ldots, & r_{1,n} \\
0, & \mu_2, & r_{2,3}, & \ldots, & r_{2,n} \\
0, & 0, & \mu_3, & \ldots, & r_{3,n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0, & 0, & 0, & \ldots, & \mu_n
\end{bmatrix}, \quad (75.)$$
so that (73.) assumes the form
$$\|a\| = \|v\| \times \|\nabla_{n-1}\| \times \left[ \frac{\nabla_n}{\nabla_{n-1}}, 1, 1, \ldots \right] \times \|u\|^{-1}. \quad (76.)$$
It may be proved that in (75.) $\mu_1 = 1$, $r_{1,2} = 0$, $r_{1,3} = 0, \ldots, r_{1,n} = 0$. For since the matrix $\|\nabla_{n-1}\| \times \left[ \frac{\nabla_n}{\nabla_{n-1}}, 1, 1, \ldots \right]$ is derived from $\|a\|$ by multiplication with unit-matrices, $\nabla_{n-1}$ is the greatest common divisor of the first minors of $\|\nabla_{n-1}\| \times \left[ \frac{\nabla_n}{\nabla_{n-1}}, 1, 1, \ldots \right]$. Therefore $\nabla_{n-1}$ divides $\mu_2 \times \mu_3 \times \ldots \times \mu_n$, which is one of those minors; but also $\nabla_{n-1} = \mu_1 \times \mu_2 \times \ldots \times \mu_n$; i.e. $\mu_1 = 1$, $\mu_2 \times \mu_3 \times \ldots \times \mu_n = \nabla_{n-1}$, and the product $\|\nabla_{n-1}\| \times \left[ \frac{\nabla_n}{\nabla_{n-1}}, 1, 1, \ldots \right]$ assumes the form
$$\begin{bmatrix}
\frac{\nabla_n}{\nabla_{n-1}}, & r_{1,2}, & r_{1,3}, & \ldots, & r_{1,n} \\
0, & \mu_2, & r_{2,3}, & \ldots, & r_{2,n} \\
0, & 0, & \mu_3, & \ldots, & r_{3,n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0, & 0, & 0, & \ldots, & \mu_n
\end{bmatrix}.$$
One of the minors of this matrix is $r_{1,2} \times \mu_3 \times \ldots \times \mu_n$, which cannot be divisible by $\nabla_{n-1}$ or $\mu_2 \times \mu_3 \times \ldots \times \mu_n$, unless $r_{1,2}$ is a multiple of $\mu_2$; but $r_{1,2} < \mu_2$, because $\|\nabla_{n-1}\|$ is reduced, therefore $r_{1,2} = 0$. Similarly, it may successively be shown that $r_{1,3} = 0 \ldots r_{1,n} = 0$. Now if the matrix
$$\begin{vmatrix} \mu_2 & r_{2,3} & \ldots & r_{2,n} \\ 0 & \mu_3 & \ldots & r_{3,n} \\ \vdots & & & \vdots \\ 0 & 0 & \ldots & \mu_n \end{vmatrix}, \quad\ldots\quad (77.)$$
which is of the type $(n-1) \times (n-1)$, be reduced to the form
$$\|v'\| \times \left\| \frac{\nabla_{n-1}}{\nabla_{n-2}}, \frac{\nabla_{n-2}}{\nabla_{n-3}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \|u'\|,$$
in which \( \nabla_{n-2}, \nabla_{n-3}, \ldots \) represent the greatest common divisors of the minors of (77.), we may replace \( \|\nabla_{n-1}\| \) by the matrix
\[
\begin{bmatrix} 1 & 0 \\ 0 & v' \end{bmatrix} \times \left\| 1, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \frac{\nabla_{n-2}}{\nabla_{n-3}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \begin{bmatrix} 1 & 0 \\ 0 & u' \end{bmatrix},
\]
where \( \begin{bmatrix} 1 & 0 \\ 0 & v' \end{bmatrix} \) and \( \begin{bmatrix} 1 & 0 \\ 0 & u' \end{bmatrix} \) denote unit-matrices of the type \( n \times n \), the forms of which are sufficiently indicated by the symbols themselves. Hence, observing that
\[
\begin{bmatrix} 1 & 0 \\ 0 & u' \end{bmatrix} \times \left\| \frac{\nabla_n}{\nabla_{n-1}}, 1, 1, \ldots \right\| = \left\| \frac{\nabla_n}{\nabla_{n-1}}, 1, 1, \ldots \right\| \times \begin{bmatrix} 1 & 0 \\ 0 & u' \end{bmatrix};
\]
and that
\[
\left\| 1, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \frac{\nabla_{n-2}}{\nabla_{n-3}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \left\| \frac{\nabla_n}{\nabla_{n-1}}, 1, 1, \ldots \right\| = \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\|,
\]
we obtain, from (76.),
\[
\|a\| = \|v\| \times \begin{bmatrix} 1 & 0 \\ 0 & v' \end{bmatrix} \times \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \begin{bmatrix} 1 & 0 \\ 0 & u' \end{bmatrix} \times \|u\|^{-1},
\]
or more simply,
\[
\|a\| = \|\alpha\| \times \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \|\beta\|,
\]
where $\|\alpha\|$ and $\|\beta\|$ are unit-matrices.
It has, however, still to be shown that \( \nabla_{n-2}, \nabla_{n-3}, \ldots \), which have been defined with reference to the matrix (77.), are the greatest common divisors of the successive systems of minors of \( \|a\| \). These greatest common divisors are the same for the given matrix \( \|a\| \) and for the matrix \( \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \) which is derived from it by multiplication with unit-matrices; consequently \( \nabla_{n-1} \) divides every first minor of that matrix, and, in particular, it divides \( \frac{\nabla_n}{\nabla_{n-1}} \times \frac{\nabla_{n-2}}{\nabla_{n-3}} \times \ldots \times \frac{\nabla_1}{\nabla_0} = \frac{\nabla_n}{\nabla_{n-1}} \times \nabla_{n-2} \); i.e. \( \frac{\nabla_{n-1}}{\nabla_{n-2}} \) divides \( \frac{\nabla_n}{\nabla_{n-1}} \).
Again, \( \nabla_{n-1}, \nabla_{n-2}, \ldots, \nabla_1, \nabla_0 \), which are the determinant and greatest common divisors of the minors of (77.), are also the determinant and greatest common divisors of the minors of the matrix
\[
\left\| \frac{\nabla_{n-1}}{\nabla_{n-2}}, \frac{\nabla_{n-2}}{\nabla_{n-3}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\|; \quad\ldots\quad (78.)
\]
so that if \( s \leq n - 2 \), \( \nabla_s \) divides every minor of order \( s \) in (78.), and, consequently, the minor \( \frac{\nabla_{s+1}}{\nabla_s} \times \frac{\nabla_{s-1}}{\nabla_{s-2}} \times \ldots \times \frac{\nabla_1}{\nabla_0} = \frac{\nabla_{s+1}}{\nabla_s} \times \nabla_{s-1} \); or \( \frac{\nabla_s}{\nabla_{s-1}} \) divides \( \frac{\nabla_{s+1}}{\nabla_s} \). It thus appears that in the series of numbers
\[
\frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \frac{\nabla_{n-2}}{\nabla_{n-3}}, \ldots, \frac{\nabla_1}{\nabla_0}
\]
each term is divisible by that which comes after it. Every product of \( s \) terms of that series is therefore divisible by the product of its last \( s \) terms, \( \frac{\nabla_s}{\nabla_{s-1}} \times \frac{\nabla_{s-1}}{\nabla_{s-2}} \times \ldots \times \frac{\nabla_1}{\nabla_0} = \nabla_s \); or, which is the same thing, \( \nabla_s \) is the greatest common divisor of the minors of order \( s \) in the reduced matrix, and therefore in the given matrix \( \|a\| \).
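The chain of divisibility just proved is easily checked by brute force. The sketch below is an editorial illustration in Python (the names are ours): it computes each $\nabla_s$ directly as the greatest common divisor of the minor determinants of order $s$, and verifies on a sample matrix that each quotient $\frac{\nabla_s}{\nabla_{s-1}}$ divides the next.

```python
from itertools import combinations
from math import gcd

def minor_det(a, rows, cols):
    # determinant of the square submatrix on the given rows and columns,
    # by Laplace expansion along the first row (order 0 gives 1)
    if not rows:
        return 1
    r, rest = rows[0], rows[1:]
    return sum((-1) ** k * a[r][c] * minor_det(a, rest, cols[:k] + cols[k + 1:])
               for k, c in enumerate(cols))

def nabla(a):
    """nabla[s] = greatest common divisor of all minors of order s (nabla[0] = 1)."""
    n, m = len(a), len(a[0])
    out = [1]
    for s in range(1, min(n, m) + 1):
        g = 0
        for rows in combinations(range(n), s):
            for cols in combinations(range(m), s):
                g = gcd(g, minor_det(a, list(rows), list(cols)))
        out.append(g)
    return out

a = [[2, 4, 4], [-6, 6, 12], [10, 4, 16]]
nab = nabla(a)
quotients = [nab[s] // nab[s - 1] for s in range(1, len(nab))]
# each quotient divides the next, as proved above
assert all(quotients[s + 1] % quotients[s] == 0 for s in range(len(quotients) - 1))
```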
Art. 15. If the proposed matrix \( \|a\| \) be not square, but of the type \( n \times (n+m) \), let \( \|a\| = \|b\| \times \|a'\| \), where \( \|a'\| \) is a prime matrix of the same type as \( \|a\| \), and \( \|b\| \) a square matrix, of which the determinant is \( \nabla_n \), the greatest divisor of \( \|a\| \). Then if \( \|b\| \) be expressed in the form
\[
\|v\| \times \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \|u\|,
\]
and if, for brevity, we write \( \|V\| \) for \( \|u\| \times \|a'\| \), we obtain for \( \|a\| \) the expression
\[
\|a\| = \|v\| \times \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \|V\|. \quad\ldots\quad (79.)
\]
The numbers \( \nabla_n, \nabla_{n-1}, \ldots \), which are the greatest common divisors of the minors of \( \|b\| \), are also, by the theorem of art. 12, the greatest common divisors of the minors of \( \|a\| \). We see therefore that \( \frac{\nabla_s}{\nabla_{s-1}} \) is always divisible by \( \frac{\nabla_{s-1}}{\nabla_{s-2}} \), in the case of an oblong as well as of a square matrix.
Art. 16. To show still more clearly the nature of the quotients \( \frac{\nabla_s}{\nabla_{s-1}} \), we add the following proposition:
"If in any rectangular matrix we divide each minor determinant of order \( s \) by the greatest common divisor of its own first minors, the greatest common divisor of all the quotients thus obtained is \( \frac{\nabla_s}{\nabla_{s-1}} \)."
By this proposition \( \frac{\nabla_s}{\nabla_{s-1}} \) is itself defined as a greatest common divisor, instead of being defined as the quotient of one greatest common divisor, divided by another.
To establish its truth we may first consider the quotient \( \frac{\nabla_n}{\nabla_{n-1}} \) in any rectangular matrix \( \|A\| \) of the type \( n \times (m+n) \). Let \( \omega \) denote the greatest common divisor of the quotients obtained by dividing each determinant of \( \|A\| \) by the greatest common divisor of the first minors of that determinant: we have then to show that
\[
\frac{\nabla_n}{\nabla_{n-1}} = \omega.
\]
Since the greatest common divisor of any vertical column of minors in \( \|A\| \) is not altered by premultiplication with a unit-matrix, it is evident that \( \omega \) as well as \( \frac{\nabla_n}{\nabla_{n-1}} \) will remain unchanged by that operation. If, therefore,
\[
\|A\| = \|v\| \times \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \|V\|, \quad\ldots\quad (79.)
\]
where \( \|v\| \) is a unit, and \( \|V\| \) a prime matrix, we may consider, instead of \( \|A\| \), the simpler matrix
\[
\left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \|V\|. \quad\ldots\quad (80.)
\]
Let \( \|a_1\|, \|a_2\|, \ldots \) &c. be the different square matrices of \( \|V\| \); \( \theta_1, \theta_2, \ldots \) their determinants; \( \psi_i \) the greatest common divisor of those first minors of \( \theta_i \) which do not contain the constituents of its uppermost row, so that \( \frac{\theta_i}{\psi_i} \) is integral; lastly, let \( \omega_i \) be the quotient obtained by dividing the determinant
\[
\left| \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \|a_i\| \right| \quad\ldots\quad (81.)
\]
by the greatest common divisor of its first minors, so that \( \omega \) is the greatest common divisor of \( \omega_1, \omega_2, \ldots \). Now the greatest common divisor of the first minors of (81.) is evidently divisible by \( \nabla_{n-1} \), and divides \( \nabla_{n-1} \times \psi_i \) (because \( \nabla_{n-1} \psi_i \) is the greatest common divisor of one of its rows of minors). Consequently \( \omega_i \) divides \( \nabla_n \theta_i \div \nabla_{n-1} \), and is divisible by \( \nabla_n \theta_i \div \nabla_{n-1} \psi_i \). Therefore \( \frac{\nabla_n}{\nabla_{n-1}} \) is a common divisor of certain numbers respectively dividing the numbers \( \omega_1, \omega_2, \ldots \), viz. the numbers \( \frac{\nabla_n}{\nabla_{n-1}} \cdot \frac{\theta_i}{\psi_i} \); it is also (because \( \theta_1, \theta_2, \ldots \) are relatively prime) the greatest common divisor of the numbers \( \frac{\nabla_n}{\nabla_{n-1}} \theta_i \), in which the same numbers \( \omega_i \) are respectively contained; i.e. \( \frac{\nabla_n}{\nabla_{n-1}} \) is the greatest common divisor of the numbers \( \omega_1, \omega_2, \ldots \) themselves, or
\[
\frac{\nabla_n}{\nabla_{n-1}} = \omega.
\]
By the aid of this particular case of the theorem the general proposition itself may be proved as follows:
If in any rectangular matrix of the type \( n \times (m+n) \) we propose to determine \( \Omega_s \), the greatest common divisor of the quotients obtained by dividing each minor determinant of order \( s \), by the greatest common divisor of its own first minors, we may begin by selecting any \( s \) vertical columns \([s < n]\), and forming the proper quotient for each determinant of order \( s \), contained in this partial matrix of the type \( n \times s \). Let \( \lambda_i \) denote the greatest common divisor of these quotients; then, as we have just seen, \( \lambda_i \) is the greatest common divisor of all the determinants of the partial matrix, divided by the greatest common divisor of all its first minors. Hence (by art. 12) \( \lambda_i \) will remain unchanged when the given matrix is premultiplied by a unit-matrix. But \( \Omega_s \) is the greatest common divisor of all the divisors \( \lambda_1, \lambda_2, \ldots \) corresponding to every group of \( s \) vertical columns; therefore \( \Omega_s \) is itself unchanged by premultiplication. Similarly, if a square matrix be post-multiplied by a rectangular prime matrix, it may be shown that \( \Omega_s \) is the same for the given square matrix, and for the resulting rectangular matrix. Hence if, as before,
\[
\|A\| = \|v\| \times \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \|V\|,
\]
\( \Omega_s \) and \( \frac{\nabla_s}{\nabla_{s-1}} \) are the same for \( \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \) and for \( \|A\| \). But in the matrix \( \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \) it is evident that \( \frac{\nabla_s}{\nabla_{s-1}} \) and \( \Omega_s \) coincide; therefore in any rectangular matrix
\[
\frac{\nabla_s}{\nabla_{s-1}} = \Omega_s.
\]
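The characterization of $\frac{\nabla_s}{\nabla_{s-1}}$ as itself a greatest common divisor can be tested numerically. The following Python fragment is an editorial sketch (the treatment of vanishing minors follows the remark at the end of this article): it computes $\Omega_s$ exactly as defined in the proposition, and compares it with the quotient of the two greatest common divisors.

```python
from itertools import combinations
from math import gcd

def minor_det(a, rows, cols):
    # determinant of the square submatrix on the given rows and columns
    if not rows:
        return 1
    r, rest = rows[0], rows[1:]
    return sum((-1) ** k * a[r][c] * minor_det(a, rest, cols[:k] + cols[k + 1:])
               for k, c in enumerate(cols))

def omega(a, s):
    """Smith's Omega_s: the gcd, over every minor of order s, of that minor
    divided by the gcd of its own first minors (the quotient counts as 0
    when the minor and all its first minors vanish)."""
    n, m = len(a), len(a[0])
    g = 0
    for rows in combinations(range(n), s):
        for cols in combinations(range(m), s):
            d = minor_det(a, list(rows), list(cols))
            f = 0  # gcd of the first minors of this determinant
            for rr in combinations(rows, s - 1):
                for cc in combinations(cols, s - 1):
                    f = gcd(f, minor_det(a, list(rr), list(cc)))
            g = gcd(g, d // f if f else 0)
    return g

# for this matrix nabla_1 = 2 and nabla_2 = 12, so the theorem
# predicts Omega_1 = 2/1 = 2 and Omega_2 = 12/2 = 6
a = [[2, 4, 4], [-6, 6, 12]]
assert omega(a, 1) == 2 and omega(a, 2) == 6
```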
From the definition of \( \frac{\nabla_s}{\nabla_{s-1}} \) as a greatest common divisor, which we have now obtained, we infer that if \( \|D\| \) be any matrix containing another matrix \( \|\nabla\| \), and if \( D_s, D_{s-1}, \ldots, \nabla_s, \nabla_{s-1}, \ldots \) be the greatest common divisors of the corresponding minors in \( \|D\| \) and \( \|\nabla\| \) respectively, not only is \( \nabla_s \) divisible by \( D_s \), and \( \nabla_{s-1} \) by \( D_{s-1} \), but also \( \frac{\nabla_s}{\nabla_{s-1}} \) by \( \frac{D_s}{D_{s-1}} \).
It is not difficult to show that in any matrix \( \frac{\nabla_s}{\nabla_{s-k}} \) is the greatest common divisor of all the quotients obtained by dividing each minor of order \( s \) by the greatest common divisor of its minors of order \( s-k \). But as this extension of the preceding result is not needed in what follows, we may omit it here.
We may add, that the theorem of this article is precisely equivalent to the following, which may be demonstrated by a different method.
"If \( P^{l_s} \) be the highest power of a given prime that divides all the minors of order \( s \) in a given matrix, and if all the minors of order \( s-1 \) contained in one particular minor of order \( s \) are divisible by \( P^{l_{s-1}+m} \), that minor is itself divisible by \( P^{l_s+m} \)."
It should be observed that whenever all the minors of any determinant are zero, the quotient obtained by dividing the determinant by the greatest common divisor of its minors is also zero.
Art. 17. These results admit of immediate application to the theory of systems of linear congruences. The general type of such systems is
\[
A_{i,1} x_1 + A_{i,2} x_2 + \ldots + A_{i,n} x_n \equiv A_{i,n+1}, \mod M, \quad i = 1, 2, 3, \ldots n', \quad\ldots\quad (82.)
\]
and to construct a complete theory of them it is requisite, first, to assign a criterion for their resolubility or irresolubility; secondly, when they are resoluble, to investigate the number of incongruous solutions of which they are susceptible; and, lastly, to exhibit a method for obtaining all these solutions. We shall first suppose that \( n' = n \); i.e. that the proposed system is neither defective nor redundant.
Let \( D_n, D_{n-1}, \ldots, \nabla_n, \nabla_{n-1}, \ldots \) respectively denote the greatest common divisors of the determinants and minors of the augmented and unaugmented matrices of the system (82.); also let \( \delta_n, \delta_{n-1}, \ldots, \delta_1 \) denote the greatest common divisors of \( M \) with \( \frac{\nabla_n}{\nabla_{n-1}} \), of \( M \) with \( \frac{\nabla_{n-1}}{\nabla_{n-2}} \), ..., and let \( d_n, d_{n-1}, \ldots \) similarly represent the greatest common divisors of \( M \) with \( \frac{D_n}{D_{n-1}} \), of \( M \) with \( \frac{D_{n-1}}{D_{n-2}} \), &c.; then, if \( d = d_n \times d_{n-1} \times \ldots \times d_1 \), \( \delta = \delta_n \times \delta_{n-1} \times \ldots \times \delta_1 \), we have the two following theorems:
(i.) "The necessary and sufficient condition for the resolubility of the system (82.) is \( d = \delta \)."
(ii.) "When this condition is satisfied, the number of its incongruous solutions is \( d \)."
To demonstrate the first of these theorems, we revert to the principle of art. 11, from which it appears that the necessary and sufficient condition for the resolubility of the system (82.) is that the greatest divisors of the two matrices (83.) and (84.), obtained by adjoining to the unaugmented and to the augmented matrix of the system the matrix \( \left\| M, M, \ldots, M \right\| \) of the type \( n \times n \), be equal to one another. Now the first of those greatest common divisors is evidently the greatest common divisor of
\[ M^n, M^{n-1} \nabla_1, M^{n-2} \nabla_2, \ldots, M \nabla_{n-1}, \nabla_n; \]
which, for brevity, we shall represent by the symbol
\[ [M^n, M^{n-1} \nabla_1, M^{n-2} \nabla_2, \ldots, M \nabla_{n-1}, \nabla_n]. \quad\ldots\quad (85.) \]
Let \( M = P \times Q \times R \ldots, P, Q, R, \ldots \) denoting powers of different primes; we may then, in (85.), replace \( M \) by \( P, Q, R, \ldots \) successively, since
\[ [M^n, M^{n-1} \nabla_1, \ldots, M \nabla_{n-1}, \nabla_n] \]
\[ = [P^n, P^{n-1} \nabla_1, \ldots, P \nabla_{n-1}, \nabla_n] \times [Q^n, Q^{n-1} \nabla_1, \ldots, \nabla_n] \times \ldots \]
If \( P \) divide any one of the numbers \( \frac{\nabla_1}{\nabla_0}, \frac{\nabla_2}{\nabla_1}, \ldots \), let \( \frac{\nabla_s}{\nabla_{s-1}} \) be the least of them that it divides; also let \( P_i = \left[ P, \frac{\nabla_i}{\nabla_{i-1}} \right] \), the greatest common divisor of \( P \) and \( \frac{\nabla_i}{\nabla_{i-1}} \); so that \( P_i = P \), if \( i \geq s \). Then
\[ [P^n, P^{n-1} \nabla_1, \ldots, P \nabla_{n-1}, \nabla_n] \]
\[ = P_1 \times \left[ \frac{P^n}{P_1}, \frac{P^{n-1} \nabla_1}{P_1}, \frac{P^{n-2} \nabla_2}{P_1}, \ldots, \frac{\nabla_n}{P_1} \right] \]
\[ = P_1 \times \left[ P^{n-1}, P^{n-2} \frac{\nabla_2}{\nabla_1}, P^{n-3} \frac{\nabla_3}{\nabla_1}, \ldots, \frac{\nabla_n}{\nabla_1} \right], \]
observing that \( \frac{\nabla_1}{P_1} \) is prime to \( P \) [if \( s > 1 \)], and that we may therefore divide the last \( n \) numbers by \( \frac{\nabla_1}{P_1} \); and may then omit \( \frac{P^n}{P_1} \), which is divisible by \( P^{n-1} \). Continuing this process, we find
\[ [P^n, P^{n-1} \nabla_1, \ldots, P \nabla_{n-1}, \nabla_n] \]
\[ = P_1 \times P_2 \times \ldots \times P_{s-1} \left[ P^{n-s+1}, P^{n-s} \frac{\nabla_s}{\nabla_{s-1}}, P^{n-s-1} \frac{\nabla_{s+1}}{\nabla_{s-1}}, \ldots, \frac{\nabla_n}{\nabla_{s-1}} \right]; \]
or, since \( \frac{\nabla_s}{\nabla_{s-1}} \) is divisible by \( P \), and \( \frac{\nabla_{s+k}}{\nabla_{s-1}} = \frac{\nabla_{s+k}}{\nabla_{s+k-1}} \times \frac{\nabla_{s+k-1}}{\nabla_{s+k-2}} \times \ldots \times \frac{\nabla_s}{\nabla_{s-1}} \) by \( P^{k+1} \),
\[
[P^n, P^{n-1} \nabla_1, P^{n-2} \nabla_2, \ldots, P \nabla_{n-1}, \nabla_n]
= P_1 \times P_2 \times P_3 \times \ldots \times P_{s-1} \times P^{n-s+1}
= \prod_{i=1}^{i=n} P_i.
\]
But \( \delta_i = P_i \times Q_i \times R_i \times \ldots \); and consequently the greatest common divisor of the determinants of (83.) is \( \delta_1 \times \delta_2 \times \ldots \times \delta_n \), or \( \delta \). Similarly, the greatest divisor of (84.) is \( d_1 \times d_2 \times \ldots \times d_n \), or \( d \). The necessary and sufficient condition for the resolubility of the proposed system of congruences is therefore contained in the formula
\[
d = \delta.
\]
It should, however, be observed that, since \( \frac{D_s}{D_{s-1}} \) divides \( \frac{\nabla_s}{\nabla_{s-1}} \) (by the theorem of art. 16), \( d_s \) divides \( \delta_s \), and therefore the equation
\[
d = \delta
\]
involves the coexistence of the \( n \) equations
\[
d_1 = \delta_1, \quad d_2 = \delta_2, \ldots, d_n = \delta_n. \quad\ldots\quad (86.)
\]
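Theorems (i.) and (ii.) admit of direct numerical verification. The Python sketch below is an editorial illustration (it assumes, for simplicity, that no $\nabla_{s-1}$ or $D_{s-1}$ vanishes, so that every quotient is defined): it forms $d$ and $\delta$ from the minors of the augmented and unaugmented matrices, and returns the predicted number of incongruous solutions.

```python
from itertools import combinations, product
from math import gcd, prod

def minor_det(a, rows, cols):
    # determinant of the square submatrix on the given rows and columns
    if not rows:
        return 1
    r, rest = rows[0], rows[1:]
    return sum((-1) ** k * a[r][c] * minor_det(a, rest, cols[:k] + cols[k + 1:])
               for k, c in enumerate(cols))

def gcd_of_minors(a, s):
    g = 0
    for rows in combinations(range(len(a)), s):
        for cols in combinations(range(len(a[0])), s):
            g = gcd(g, minor_det(a, list(rows), list(cols)))
    return g

def solution_count(A, rhs, M):
    """Smith's criterion: A x ≡ rhs (mod M), n congruences in n unknowns,
    is resoluble iff d = delta, and then has exactly delta solutions."""
    n = len(A)
    aug = [row + [r] for row, r in zip(A, rhs)]
    delta = prod(gcd(M, gcd_of_minors(A, s) // gcd_of_minors(A, s - 1))
                 for s in range(1, n + 1))
    d = prod(gcd(M, gcd_of_minors(aug, s) // gcd_of_minors(aug, s - 1))
             for s in range(1, n + 1))
    return delta if d == delta else 0

# compare with a brute-force count on a small example
A, rhs, M = [[2, 1], [0, 2]], [1, 2], 4
brute = sum(all(sum(A[i][j] * x[j] for j in range(2)) % M == rhs[i] % M
                for i in range(2))
            for x in product(range(M), repeat=2))
assert solution_count(A, rhs, M) == brute == 4
```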
To investigate the number of solutions of the system (82.), supposed to be resoluble, let \( \|\alpha\| \) and \( \|\beta\| \) be two unit-matrices satisfying the equation
\[
\|\alpha\| \times \|A\| \times \|\beta\| = \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\|; \quad\ldots\quad (87.)
\]
also let
\[
x_i = \beta_{i,1} v_1 + \beta_{i,2} v_2 + \ldots + \beta_{i,n} v_n, \quad i = 1, 2, 3, \ldots n,
\]
\[
c_i = \alpha_{i,1} A_{1,n+1} + \alpha_{i,2} A_{2,n+1} + \ldots + \alpha_{i,n} A_{n,n+1}, \quad i = 1, 2, 3, \ldots n.
\]
Then it is evident that the proposed system of congruences is precisely equivalent to the system
\[
\frac{\nabla_{n-i+1}}{\nabla_{n-i}} v_i \equiv c_i, \mod M, \quad i = 1, 2, 3, \ldots n, \quad\ldots\quad (88.)
\]
in such a manner that the two systems are simultaneously resoluble or irresoluble; and that from any number of incongruous solutions of the one an equal number of incongruous solutions of the other is deducible. But the whole number of incongruous solutions of (88.) is \( \delta_1 \times \delta_2 \times \ldots \times \delta_n = \delta \); i.e., the number of solutions of the proposed system is \( \delta \).
By the use of the unit-matrices \( \|a\| \) and \( \|b\| \) the actual resolution of the proposed system is made to depend on the resolution of the \( n \) congruences contained in (88.). But this method of solving a system of linear congruences, though very symmetrical, is perhaps too tedious for the purposes of computation.
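The symmetrical method just described (transform the system by unit-matrices, solve the $n$ separate congruences (88.), and transform back) can be sketched as follows. The implementation of the diagonal reduction below is our own, offered as an editorial illustration and not as Smith's procedure, which prescribes no particular sequence of operations.

```python
from itertools import product

def smith_form(m):
    """Return (u, s, v): u, v unimodular and u * m * v = s diagonal."""
    n = len(m)
    a = [row[:] for row in m]
    u = [[int(i == j) for j in range(n)] for i in range(n)]
    v = [[int(i == j) for j in range(n)] for i in range(n)]

    def rowop(i, j, q):          # row_i -= q * row_j, on a and u alike
        a[i] = [x - q * y for x, y in zip(a[i], a[j])]
        u[i] = [x - q * y for x, y in zip(u[i], u[j])]

    def colop(i, j, q):          # col_i -= q * col_j, on a and v alike
        for r in range(n):
            a[r][i] -= q * a[r][j]
            v[r][i] -= q * v[r][j]

    for k in range(n):
        while True:
            entries = [(abs(a[i][j]), i, j)
                       for i in range(k, n) for j in range(k, n) if a[i][j]]
            if not entries:
                break
            _, pi, pj = min(entries)           # smallest entry becomes pivot
            a[k], a[pi] = a[pi], a[k]; u[k], u[pi] = u[pi], u[k]
            for r in range(n):
                a[r][k], a[r][pj] = a[r][pj], a[r][k]
                v[r][k], v[r][pj] = v[r][pj], v[r][k]
            for i in range(k + 1, n):
                if a[i][k]:
                    rowop(i, k, a[i][k] // a[k][k])
            for j in range(k + 1, n):
                if a[k][j]:
                    colop(j, k, a[k][j] // a[k][k])
            if any(a[i][k] for i in range(k + 1, n)) or \
               any(a[k][j] for j in range(k + 1, n)):
                continue
            bad = next(((i, j) for i in range(k + 1, n) for j in range(k + 1, n)
                        if a[i][j] % a[k][k]), None)
            if bad is None:
                break
            rowop(k, bad[0], -1)   # fold an offending row into row k, repeat
        if a[k][k] < 0:
            a[k] = [-x for x in a[k]]
            u[k] = [-x for x in u[k]]
    return u, a, v

def solve_congruences(A, rhs, M):
    """All incongruous solutions of A x ≡ rhs (mod M) via the diagonal form."""
    n = len(A)
    u, s, v = smith_form(A)
    c = [sum(u[i][j] * rhs[j] for j in range(n)) % M for i in range(n)]
    per_axis = [[y for y in range(M) if (s[i][i] * y - c[i]) % M == 0]
                for i in range(n)]
    return [[sum(v[i][j] * y[j] for j in range(n)) % M for i in range(n)]
            for y in product(*per_axis)]
```

Since $v$ is a unit-matrix, distinct choices of the $y_i$ yield distinct solutions $x$, and the count is the product of the counts on the separate axes, namely $\delta$.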
Art. 18*. Let the proposed system of congruences be the defective system
\[ A_{i,1} x_1 + A_{i,2} x_2 + \ldots + A_{i,n+m} x_{n+m} \equiv A_{i,n+m+1}, \mod M, \quad i = 1, 2, 3, \ldots n, \quad\ldots\quad (89.) \]
and let the notation of the last Article be retained. It is easily seen that the condition of resolubility of the system (89.) is, as before,
\[ \delta = d. \]
But the number of its incongruous solutions, when that condition is satisfied, is not \( \delta \), but \( \delta \times M^m \). For we have seen that we can find a unit-matrix \( \|a\| \), and a prime matrix \( \|A'\| \) of the type \( n \times (n+m) \), satisfying the equation
\[ \|a\| \times \|A\| = \left\| \frac{\nabla_n}{\nabla_{n-1}}, \frac{\nabla_{n-1}}{\nabla_{n-2}}, \ldots, \frac{\nabla_1}{\nabla_0} \right\| \times \|A'\|; \]
we may therefore replace the system (89.) by a system of the form
\[ \frac{\nabla_{n-i+1}}{\nabla_{n-i}} U_i \equiv C_i, \mod M, \quad i = 1, 2, 3, \ldots n, \quad (90.) \]
in which
\[ U_i = A'_{i,1}x_1 + A'_{i,2}x_2 + \ldots + A'_{i,n+m}x_{n+m}, \]
and
\[ C_i = a_{i,1} A_{1,n+m+1} + a_{i,2} A_{2,n+m+1} + \ldots + a_{i,n} A_{n,n+m+1}. \]
If the system (89.) is resoluble, the system (90.) will be so too, and will give \( d \) or \( \delta \) different systems of values for \( U_1, U_2, \ldots U_n \), any one of which may be represented by the formula
\[ U_i \equiv u_i, \mod M, \quad i = 1, 2, 3, \ldots n, \quad (91.) \]
Let us replace the modulus \( M \) by \( P \), the highest power of one of its prime divisors. Since \( \|A'\| \) is a prime matrix, one at least of its determinants, for example the determinant \( \Sigma \pm A'_{1,1} A'_{2,2} \ldots A'_{n,n} \), is prime to \( P \). It will follow from this that, whatever values we attribute to \( x_{n+1}, x_{n+2}, \ldots x_{n+m} \), each of the \( \delta \) systems represented by (91.) is resoluble for the modulus \( P \), and gives, for any assumed values of \( x_{n+1}, x_{n+2}, \ldots x_{n+m} \), only one set of values of \( x_1, x_2, \ldots x_n \). Each of those \( \delta \) systems admits, therefore, of \( P^m \) solutions for the modulus \( P \), i.e. of \( M^m \) for the modulus \( M \). The system (89.) will consequently admit of \( \delta \times M^m \) solutions.
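The count $\delta \times M^m$ for a defective system may be checked by enumeration. The fragment below is an editorial sketch on a deliberately small example, one congruence in two unknowns; it compares the formula with a brute-force count.

```python
from itertools import combinations, product
from math import gcd, prod

def minor_det(a, rows, cols):
    # determinant of the square submatrix on the given rows and columns
    if not rows:
        return 1
    r, rest = rows[0], rows[1:]
    return sum((-1) ** k * a[r][c] * minor_det(a, rest, cols[:k] + cols[k + 1:])
               for k, c in enumerate(cols))

def gcd_of_minors(a, s):
    g = 0
    for rows in combinations(range(len(a)), s):
        for cols in combinations(range(len(a[0])), s):
            g = gcd(g, minor_det(a, list(rows), list(cols)))
    return g

# one congruence (n = 1) in two unknowns (m = 1): 2x + 3y ≡ 1 (mod 4)
A, rhs, M = [[2, 3]], [1], 4
n, m = len(A), len(A[0]) - len(A)
aug = [row + [r] for row, r in zip(A, rhs)]
delta = prod(gcd(M, gcd_of_minors(A, s) // gcd_of_minors(A, s - 1))
             for s in range(1, n + 1))
d = prod(gcd(M, gcd_of_minors(aug, s) // gcd_of_minors(aug, s - 1))
         for s in range(1, n + 1))
brute = sum(all(sum(A[i][j] * x[j] for j in range(n + m)) % M == rhs[i] % M
                for i in range(n))
            for x in product(range(M), repeat=n + m))
assert d == delta and brute == delta * M ** m == 4
```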
Let us also consider the redundant system of congruences,
\[ A_{i,1}x_1 + A_{i,2}x_2 + \ldots + A_{i,n}x_n \equiv A_{i,n+1}, \mod M, \quad i = 1, 2, 3, \ldots n+m, \quad (92.) \]
and let \( D_{n+1} \) denote the greatest divisor of its augmented matrix. Let \( p \) represent a prime divisor of \( M \), and let \( p^\theta \), \( p^{I_s} \), \( p^{i_s} \) be the highest powers of \( p \) which divide \( M \), \( D_s \), \( \nabla_s \) respectively. The condition of resolubility of art. 11, applied to the system (92.), considered with respect to the modulus \( p^\theta \), becomes, after division by \( p^{(m-1)\theta} \),
\[ [p^{I_{n+1}}, p^{I_n + \theta}, p^{I_{n-1} + 2\theta}, \ldots, p^{(n+1)\theta}] = [p^{i_n + \theta}, p^{i_{n-1} + 2\theta}, \ldots, p^{(n+1)\theta}]. \quad\ldots\quad (93.) \]
* [This article has been added since the paper was read. The theorems contained in it are supplementary to that of the preceding article. September 1861, H. J. S. S.]
And this equation is impossible if $\theta > I_{n+1} - I_n$. For $I_{s+1} - I_s \geq I_s - I_{s-1}$, because $\frac{D_{s+1}}{D_s}$ is divisible by $\frac{D_s}{D_{s-1}}$; the inequality $I_{n+1} < I_n + \theta$ involves, therefore, the inequalities
$$I_{n+1} < I_{n-s+1} + s\theta, \quad s = 1, 2, 3, \ldots n + 1, \quad\ldots\quad (94.)$$
and these, again, imply the corresponding inequalities
$$I_{n+1} < i_{n-s+1} + s\theta, \quad s = 1, 2, 3, \ldots n + 1, \quad\ldots\quad (95.)$$
because $I_{n-s+1} \leq i_{n-s+1}$. From (94.) it appears that the value of $[p^{I_{n+1}}, p^{I_n + \theta}, \ldots, p^{(n+1)\theta}]$ is $p^{I_{n+1}}$, and from (95.), that the value of $[p^{i_n + \theta}, p^{i_{n-1} + 2\theta}, \ldots, p^{(n+1)\theta}]$ is a power of $p$ superior to $p^{I_{n+1}}$; i.e. the equation (93.) is impossible. We thus obtain, as a first condition for the resolubility of the proposed system (92.), the congruence
$$\frac{D_{n+1}}{D_n} \equiv 0, \mod M. \quad\ldots\quad (96.)$$
When this condition is satisfied, we obtain from (93.), omitting the term $p^{I_{n+1}}$ (because $I_{n+1} \geq I_n + \theta$), and dividing by $p^\theta$, the equation of condition
$$[p^{I_n}, p^{I_{n-1} + \theta}, p^{I_{n-2} + 2\theta}, \ldots, p^{n\theta}] = [p^{i_n}, p^{i_{n-1} + \theta}, p^{i_{n-2} + 2\theta}, \ldots, p^{n\theta}],$$
which leads us (as in the last article) to the simple formula
$$d = \delta.$$
This equation, therefore, and the congruence (96.), express the necessary and sufficient conditions for the resolubility of the proposed redundant system.
When these two conditions are simultaneously satisfied, the number of incongruous solutions is $\delta$. For, if we again consider the proposed system of congruences with respect to the modulus $p^\theta$, and select from it a partial system of $n$ congruences such that the determinants of its augmented matrix, which are necessarily divisible by $p^{I_n}$, are not divisible by any higher power of $p$, it is readily seen that every set of values of the indeterminates $x_1, x_2, \ldots, x_n$ which satisfies the partial system will also (by virtue of the inequality $\theta \leq I_{n+1} - I_n$) satisfy the remaining congruences of the proposed system. The number of solutions of the proposed system is therefore the same as that of the partial system. And because $p^{I_n}$, the highest power of $p$ which divides every determinant of order $n$ in the augmented matrix of the proposed system, is also the highest power of $p$ which divides the augmented matrix of the partial system, it follows from the last theorem of art. 16 that $p^{I_{n-1}}, p^{I_{n-2}}, \ldots$ are the highest powers of $p$ which divide the corresponding orders of determinants in the latter, as well as in the former matrix. The number of solutions of the partial system (and consequently of the proposed system), considered with respect to the modulus $p^\theta$, is therefore expressed by the formula
$$[p^{i_n}, p^{i_{n-1} + \theta}, \ldots, p^{n\theta}];$$
or, finally, the number of solutions of the proposed system, considered with respect to M as modulus, is \(d\) or \(\delta\).
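The two conditions for the redundant system, the congruence (96.) and the equation $d = \delta$, may likewise be verified by enumeration. The sketch below is editorial, with a small redundant system of three congruences in two unknowns; it applies both tests and compares the predicted count with brute force.

```python
from itertools import combinations, product
from math import gcd, prod

def minor_det(a, rows, cols):
    # determinant of the square submatrix on the given rows and columns
    if not rows:
        return 1
    r, rest = rows[0], rows[1:]
    return sum((-1) ** k * a[r][c] * minor_det(a, rest, cols[:k] + cols[k + 1:])
               for k, c in enumerate(cols))

def gcd_of_minors(a, s):
    g = 0
    for rows in combinations(range(len(a)), s):
        for cols in combinations(range(len(a[0])), s):
            g = gcd(g, minor_det(a, list(rows), list(cols)))
    return g

def redundant_count(A, rhs, M):
    """Conditions of art. 18 for the redundant system A x ≡ rhs (mod M):
    M must divide D_{n+1}/D_n, and d must equal delta; the count is then delta."""
    n = len(A[0])
    aug = [row + [r] for row, r in zip(A, rhs)]
    if (gcd_of_minors(aug, n + 1) // gcd_of_minors(aug, n)) % M:
        return 0
    delta = prod(gcd(M, gcd_of_minors(A, s) // gcd_of_minors(A, s - 1))
                 for s in range(1, n + 1))
    d = prod(gcd(M, gcd_of_minors(aug, s) // gcd_of_minors(aug, s - 1))
             for s in range(1, n + 1))
    return delta if d == delta else 0

A, M = [[1, 0], [0, 1], [1, 1]], 4
for rhs, expected in [([1, 1, 2], 1), ([1, 1, 3], 0)]:
    brute = sum(all(sum(A[i][j] * x[j] for j in range(2)) % M == rhs[i] % M
                    for i in range(3))
                for x in product(range(M), repeat=2))
    assert redundant_count(A, rhs, M) == brute == expected
```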
Art. 19. We shall terminate this paper with an elementary theorem relating to linear systems of equations, which admits of frequent application in other parts of the theory of numbers.
Resuming the notation of art. 11, we may see from the theorem of that article, that if the system (56.) be resoluble for any given values of the numbers \(A_{1,0}, A_{2,0}, \ldots A_{n,0}\), it is also resoluble for any other values of those numbers, respectively congruous, for the modulus \(D\), to the given values; so that the resolubility or irresolubility of the system depends exclusively on the residues of the numbers \(A_{i,0}, \mod D\). There are \(D^n\) possible combinations of these residues, and we shall now show that for \(D^{n-1}\) of them the system is resoluble, while for the remaining \(D^{n-1} (D - 1)\) it is irresoluble. For this purpose let
\[
\|a\| \times \|A\| = \left\| \frac{D_n}{D_{n-1}}, \frac{D_{n-1}}{D_{n-2}}, \ldots, \frac{D_1}{D_0} \right\| \times \|A'\|,
\]
\(\|a\|\) denoting a unit-matrix, and \(\|A'\|\) a prime matrix of the same type as \(\|A\|\), while \(D_n, D_{n-1}, \ldots, D_1, D_0\) are of course the greatest common divisors of the determinants and minors of \(\|A\|\). Let also
\[ -C_i = A_{1,0} a_{i,1} + A_{2,0} a_{i,2} + \ldots + A_{n,0} a_{i,n}. \]
The given system is then exactly equivalent to the system
\[
\frac{D_{n-i+1}}{D_{n-i}} \left[ A'_{i,1} x_1 + A'_{i,2} x_2 + \ldots + A'_{i,n+m} x_{n+m} \right] = C_i, \quad i = 1, 2, 3, \ldots n. \quad\ldots\quad (98.)
\]
For the resolubility of this system it is requisite that \(C_i\) should be divisible by \(\frac{D_{n-i+1}}{D_{n-i}}\); and this condition is sufficient as well as necessary, because \(\|A'\|\) is a prime matrix. Now of the \(D\) or \(D_n\) values, incongruous for the modulus \(D\), which may be attributed to \(C_i\), \(\frac{D_n \times D_{n-i}}{D_{n-i+1}}\) are divisible by \(\frac{D_{n-i+1}}{D_{n-i}}\); whence it is evident that of the \(D^n\) systems of values which may be attributed to \(C_1, C_2, \ldots C_n\), the number \(D^n \div \left[ \frac{D_1}{D_0} \times \frac{D_2}{D_1} \times \frac{D_3}{D_2} \times \ldots \times \frac{D_n}{D_{n-1}} \right]\), i.e. \(D^{n-1}\), render the system (98.) resoluble. Consequently the given system is also resoluble for \(D^{n-1}\), and no more, of the systems of values that can be attributed \((\mod D)\) to \(A_{1,0}, A_{2,0}, \ldots A_{n,0}\).
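The conclusion that exactly $D^{n-1}$ of the $D^n$ residue-systems render the system resoluble can be confirmed numerically in the square, non-singular case. In the sketch below (editorial; the reduction of integral solvability to a condition mod $D$ rests on the fact that $D \cdot \mathbf{Z}^n$ is contained in $A \cdot \mathbf{Z}^n$, as one sees by means of the adjugate), the resoluble residues are exactly the image of $(\mathbf{Z}/D)^n$ under $\|A\|$.

```python
from itertools import product

def resoluble_residues(A, D):
    """Residue vectors c (mod D) for which A x = c has an integral solution.
    Since D * Z^n lies inside A * Z^n, solvability depends only on c mod D,
    and the resoluble residues form the image of (Z/D)^n under A."""
    n = len(A)
    return {tuple(sum(A[i][j] * x[j] for j in range(n)) % D for i in range(n))
            for x in product(range(D), repeat=n)}

# a square matrix with |determinant| D = 6: the theorem predicts D^(n-1) = 6
A, D = [[2, 0], [0, 3]], 6
assert len(resoluble_residues(A, D)) == D ** (2 - 1) == 6
```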
Art. 20. The methods employed in the present paper are without exception such as to be immediately applicable to any species of complex numbers which admit of resolution into actual or ideal prime factors. And the greater part of the results at which we have arrived may be transferred, mutatis mutandis, to the theories of such numbers. For example, if in the equations (56.) we suppose the constituents of \(\|A\|\) to represent complex numbers, it will be found that the criterion for the resolvability or irresolvability of the system, which we have demonstrated in the case of ordinary integers, applies equally in the case of complex numbers; and again, the condition of resolvability of a system of congruences of which the modulus as well as the coefficients are complex numbers, is
precisely the same as in the case of common whole numbers; while the expression for the number of the solutions (when the condition of resolubility is satisfied) is simply the norm of \( m \).
But without entering into the developments which this extension of the subject of this paper would require, we shall confine ourselves to an application of the result of the last article to a demonstration of the fundamental principle in the arithmetical theory of complex numbers, that the number of incongruous residues for any complex modulus is represented by the norm of the modulus.
Let \( \alpha \) be one of the roots \( \alpha_1, \alpha_2, \ldots, \alpha_n \) of the equation \( F_n(x) = 0 \), which is supposed to be of \( n \) dimensions, to be irreducible, and to have all its coefficients integral, and that of its first term unity. Let also \( \varphi_{n-1}(\alpha) \) be the complex modulus under consideration; its norm, which we shall symbolize by \( N \), is defined by the equation
\[
N = \mathrm{N}\,\varphi_{n-1}(\alpha) = \prod_{i=1}^{n} \varphi_{n-1}(\alpha_i).
\]
Consider the \( N^{2n-1} \) residues (incongruous mod. \( N \)) which are included in the formula
\[
R_{2n-2}(\alpha), \quad\ldots\quad (99.)
\]
where \( R_{2n-2} \) denotes an integer function of \( 2n-2 \) dimensions; it is evident that every complex number is congruous, for the modulus \( \varphi_{n-1}(\alpha) \), to one at least of these \( N^{2n-1} \) residues. If \( R \) and \( R' \) be any two (the same or different) of these residues, it is also plain that the congruence
\[
R \equiv R', \mod. \varphi_{n-1}(\alpha)
\]
will, or will not, be satisfied, according as it is, or is not, possible to assign two functions of \( x \), \( F_{n-1}(x) \) and \( \varphi_{n-2}(x) \) having integer coefficients, and satisfying the equation
\[
F_n(x) \varphi_{n-2}(x) + F_{n-1}(x) \varphi_{n-1}(x) = R(x) - R'(x). \quad\ldots\quad (100.)
\]
This equation is equivalent to a system of \( 2n-1 \) linear equations, in which the unknown quantities are the \( 2n-1 \) coefficients of \( \varphi_{n-2}(x) \) and \( F_{n-1}(x) \), and of which the determinant is the dialytic resultant of \( F_n(x) \) and \( \varphi_{n-1}(x) \), i.e. the norm of \( \varphi_{n-1}(\alpha) \), or \( N \). If then we suppose \( R(\alpha) \) to represent any given residue included in the formula (99.), it will appear from the theorem of the last article that the equation (100.) is resoluble for \( N^{2n-2} \) different values of \( R'(x) \), i.e. that every complex number is congruous, for the modulus \( \varphi_{n-1}(\alpha) \), to precisely \( N^{2n-2} \) of the \( N^{2n-1} \) residues contained in the formula (99.); or that the number of residues, incongruous mod. \( \varphi_{n-1}(\alpha) \), is precisely \( N \).
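For the Gaussian case $F_2(x) = x^2 + 1$ the theorem can be verified directly. The sketch below (an editorial illustration) counts by brute force the residues of $\mathbf{Z}[i]$ that are incongruous for the modulus $a + bi$, and finds the norm $a^2 + b^2$ in each instance.

```python
def residue_count(a, b):
    """Count the incongruous residues of Z[i] modulo a + bi by brute force."""
    n = a * a + b * b  # the norm
    def congruent(p, q):
        # (p - q) is divisible by a + bi exactly when, after multiplying by
        # the conjugate a - bi, both parts are divisible by the norm
        x, y = p[0] - q[0], p[1] - q[1]
        return (x * a + y * b) % n == 0 and (y * a - x * b) % n == 0
    reps = []
    # since n = (a + bi)(a - bi), shifting either coordinate by n does not
    # change the residue class, so the box [0, n)^2 meets every class
    for x in range(n):
        for y in range(n):
            if not any(congruent((x, y), r) for r in reps):
                reps.append((x, y))
    return len(reps)

assert residue_count(2, 1) == 5   # norm of 2 + i
assert residue_count(3, 0) == 9   # norm of 3
assert residue_count(1, 1) == 2   # norm of 1 + i
```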
It is, however, proper to observe, that a complete demonstration of this important theorem has already been given by Professor Sylvester (see a paper signed "Lanavicensis," in the 'Quarterly Journal of Pure and Applied Mathematics,' vol. iv. pp. 94 and 124).