Page 289 - Linear Models for the Prediction of Animal Breeding Values 3rd Edition
generally improving the efficiency of the iterative process, it is sometimes recommended that the system of equations should be ordered such that the coefficient of b_1 of the greatest magnitude occurs in the first equation, the coefficient of b_2 of the greatest magnitude in the remaining equations occurs in the second equation, etc.
The iterative procedure described above is usually called Jacobi iteration as
all new solutions in the current (r) round of iteration are obtained using solu-
tions only from the previous (r − 1) round of iteration. The Jacobi iterative
procedure is inefficient in handling systems of equations that are not con-
strained (i.e. with no restrictions placed on the solutions for the levels of an
effect) and convergence is not guaranteed (Maron, 1987; Misztal and Gianola,
1988). When a random animal effect is involved in the system of equations with
relationships included, it is usually necessary to use a relaxation factor below 1.0; otherwise, the equations may not converge (Groeneveld, 1990). The relaxation
factor refers to a constant estimated on the basis of the linear changes in the
solutions during the iteration process and applied to speed up the solutions
towards convergence. When iterating on the data (Section 17.4), the Jacobi
iterative procedure involves reading only one data file, even with several effects
in the model. With large data sets, this has the advantage of reducing memory requirements and processing time compared with the Gauss–Seidel iterative procedure (see Section 17.3.2).
The Jacobi iterative procedure can be briefly summarized as follows.
Following Ducrocq (1992), Eqn 17.1 can be written as:
[M + (C − M)]b = y
where M is the diagonal matrix containing the diagonal elements of C. The algorithm for Jacobi iteration is then:
b^(r+1) = M^(-1)(y − Cb^(r)) + b^(r)    (17.3)
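As a minimal numerical sketch of this update, the following assumes Python with NumPy and a small hypothetical diagonally dominant system (not the mixed model equations of the text):

```python
import numpy as np

def jacobi(C, y, tol=1e-14, max_rounds=1000):
    """Jacobi iteration (Eqn 17.3): b(r+1) = M^(-1)(y - C b(r)) + b(r),
    where M is the diagonal matrix holding the diagonal elements of C."""
    M_inv = 1.0 / np.diag(C)              # inverse of the diagonal matrix M
    b = np.zeros_like(y, dtype=float)     # starting solutions b(0) = 0
    for _ in range(max_rounds):
        b_new = M_inv * (y - C @ b) + b
        # CONV: squared changes relative to the current solutions
        conv = np.sum((b_new - b) ** 2) / np.sum(b_new ** 2)
        b = b_new
        if conv < tol:
            break
    return b

# hypothetical 2 x 2 diagonally dominant system, for illustration only
C = np.array([[4.0, 1.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0])
b = jacobi(C, y)                          # converges to C^(-1) y
```

Diagonal dominance guarantees convergence for this toy system; as the text notes, convergence is not guaranteed in general.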
When a relaxation factor (w) is applied, the above equation becomes:
b^(r+1) = w[M^(-1)(y − Cb^(r))] + b^(r)
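The relaxed update scales the Jacobi correction by w; a sketch under the same assumptions (Python/NumPy, hypothetical small system, w = 0.8 chosen arbitrarily):

```python
import numpy as np

def jacobi_relaxed(C, y, w=0.8, tol=1e-12, max_rounds=2000):
    """Jacobi iteration with relaxation factor w:
    b(r+1) = w[M^(-1)(y - C b(r))] + b(r), with M = diag(C)."""
    M_inv = 1.0 / np.diag(C)
    b = np.zeros_like(y, dtype=float)
    for _ in range(max_rounds):
        b_new = w * (M_inv * (y - C @ b)) + b   # correction scaled by w
        conv = np.sum((b_new - b) ** 2) / np.sum(b_new ** 2)
        b = b_new
        if conv < tol:
            break
    return b

# hypothetical system; in practice w is tuned, and, as the text notes,
# values below 1.0 are usually needed for MME with relationships included
C = np.array([[4.0, 1.0], [1.0, 3.0]])
y = np.array([1.0, 2.0])
b = jacobi_relaxed(C, y)
```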
Another variation of the Jacobi iteration, called second-order Jacobi, is usually employed in the analysis of large data sets, as it can increase the rate of convergence.
The iterative procedure for second-order Jacobi is:
b^(r+1) = M^(-1)(y − Cb^(r)) + b^(r) + w(b^(r) − b^(r−1))
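Second-order Jacobi keeps the solutions from the two previous rounds; a sketch under the same assumptions (Python/NumPy, hypothetical small system, w = 0.7 chosen arbitrarily):

```python
import numpy as np

def second_order_jacobi(C, y, w=0.7, tol=1e-12, max_rounds=2000):
    """Second-order Jacobi:
    b(r+1) = M^(-1)(y - C b(r)) + b(r) + w(b(r) - b(r-1)), M = diag(C)."""
    M_inv = 1.0 / np.diag(C)
    b_prev = np.zeros_like(y, dtype=float)    # b(r-1)
    b = np.zeros_like(y, dtype=float)         # b(r)
    for _ in range(max_rounds):
        b_new = M_inv * (y - C @ b) + b + w * (b - b_prev)
        conv = np.sum((b_new - b) ** 2) / np.sum(b_new ** 2)
        b_prev, b = b, b_new                  # shift rounds: r-1 <- r, r <- r+1
        if conv < tol:
            break
    return b

# hypothetical small system, for illustration only
C = np.array([[4.0, 1.0], [1.0, 3.0]])
y = np.array([1.0, 2.0])
b = second_order_jacobi(C, y)
```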
Example 17.1
Using the coefficient matrix and the RHS for Example 3.1, Jacobi iteration (Eqn 17.2) is carried out using only the non-zero elements of the coefficient matrix. Solutions for
sex effect (b vector) and random animal effect (u vector) are shown below with the
round of iteration. The convergence criterion (CONV) was the sum of squares of
differences between the current and previous solutions divided by the sum of squares
of the current solution.
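The convergence criterion can be computed directly; a sketch with hypothetical solution vectors for two successive rounds (not values from Example 3.1):

```python
import numpy as np

# hypothetical solutions from two successive rounds of iteration
b_prev = np.array([0.10, 0.55])   # round r - 1
b_curr = np.array([0.09, 0.64])   # round r

# CONV: sum of squared differences between the current and previous
# solutions, divided by the sum of squares of the current solutions
conv = np.sum((b_curr - b_prev) ** 2) / np.sum(b_curr ** 2)
```

Iteration stops once conv falls below a preset threshold (e.g. 1e-12).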