Page 118 - Linear Models for the Prediction of Animal Breeding Values 3rd Edition

corresponding specific factors. Since the factors are assumed to be uncorrelated, substantial sparsity of the MME is achieved.
   On the other hand, PC analysis aims to identify factors that explain the maximum amount of variation and does not imply any underlying model. The first PC explains the maximum amount of genetic variation in the data, and each successive PC explains the maximum amount of the remaining variation. Thus for highly correlated traits, only the leading PCs have a practical influence on genetic variation, and those with negligible effect can be omitted without reducing the accuracy of estimation. For example, with t traits, k independent principal components (k ≤ t) can be derived that explain a maximum proportion of the total variation in the multivariate system. Similar to FA, the PC approach requires decomposing the genetic covariance matrix into the pertaining matrices of eigenvalues and eigenvectors. Each eigenvector, or PC, can be regarded as a linear combination of the traits, the PCs are mutually independent, and the corresponding eigenvalues give the variance explained by each PC.
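The eigendecomposition described above can be sketched numerically as follows; the 4 × 4 genetic covariance matrix G is an illustrative assumption (hypothetical values for highly correlated traits), not data from the text:

```python
import numpy as np

# Illustrative genetic covariance matrix G for t = 4 highly correlated
# traits (hypothetical values, chosen only for demonstration).
G = np.array([
    [1.00, 0.90, 0.85, 0.80],
    [0.90, 1.00, 0.88, 0.82],
    [0.85, 0.88, 1.00, 0.86],
    [0.80, 0.82, 0.86, 1.00],
])

# Eigendecomposition of G: each eigenvector (PC) is a linear combination
# of the traits, and its eigenvalue is the variance that PC explains.
values, vectors = np.linalg.eigh(G)
order = np.argsort(values)[::-1]          # sort PCs by variance explained
values, vectors = values[order], vectors[:, order]

proportion = values / values.sum()        # fraction of total variation per PC
print(np.round(proportion, 3))            # the leading PC dominates
```

For traits this strongly correlated, the leading PC accounts for the bulk of the total variation, which is why the trailing PCs can be dropped with little loss of accuracy.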


         6.4.1  Factor analysis

         Assume that w is a vector of n variables with covariance matrix equal to G and that
         w can be modelled as:
            w = m + Fc + s
         where m is the vector of means, c is a vector of common factors of length m, s is the
         vector of residuals or specific effects of length n and F is the matrix of order n × m of
         the so-called factor loadings. In the most common form of FA, the columns of F are
orthogonal, i.e. jᵢ′jⱼ = 0 for i ≠ j, and thus the elements of c are uncorrelated and assumed to have unit variance, var(c) = I. The columns jᵢ are determined as the corresponding eigenvectors of G, scaled by the square root of the respective eigenvalues (Meyer, 2009).
            Usually F is not unique but is often orthogonally transformed to obtain factor
         loadings that are more interpretable than those derived from the eigenvectors. The
         specific effects (s) are assumed to be independently distributed and therefore the vari-
         ance of s is a diagonal matrix S of order n. Therefore:
   var(w) = G_FA = FF′ + S                                            (6.8)
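A minimal numeric sketch of Eqn 6.8, constructing F from the leading eigenvector as described above and letting a diagonal S absorb the remaining individual variances; the matrix G is an illustrative assumption and m = 1 follows the worked example in the text:

```python
import numpy as np

# Illustrative genetic covariance matrix for n = 4 traits (hypothetical values).
G = np.array([
    [1.00, 0.90, 0.85, 0.80],
    [0.90, 1.00, 0.88, 0.82],
    [0.85, 0.88, 1.00, 0.86],
    [0.80, 0.82, 0.86, 1.00],
])

m = 1                                     # one common factor
values, vectors = np.linalg.eigh(G)
order = np.argsort(values)[::-1]          # leading eigenpairs first
values, vectors = values[order], vectors[:, order]

# Columns of F: leading eigenvectors scaled by the square roots of their
# eigenvalues, so that var(Fc) = FF' with var(c) = I.
F = vectors[:, :m] * np.sqrt(values[:m])

# Specific variances: a diagonal S absorbing the individual variances
# not captured by the common factor.
S = np.diag(np.diag(G - F @ F.T))

G_FA = F @ F.T + S                        # Eqn 6.8: var(w) = FF' + S
print(np.round(G_FA, 3))
```

G_FA reproduces the diagonal of G exactly (by construction of S), while the off-diagonal covariances are modelled entirely through the single common factor.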
   Equation 6.8 indicates that all the covariances between the elements of w are modelled through the common factors, while the specific factors account for the additional individual variances of the elements of w. Thus the n(n + 1)/2 elements of G are modelled through the n specific variances and the nm elements of F, of which m(m − 1)/2 are determined by the orthogonality constraints, leaving m(2n − m + 1)/2 free parameters in F. For example, if n = 4 and m = 1, then the 10 elements of G are modelled by the four elements of S and the four elements of F. FA with a small m thus provides a parsimonious way to model the covariances among a large number of variables. When all the elements of S are non-zero, four traits is the minimum number of variables for which imposing an FA structure results in a reduction of the number of parameters (Meyer, 2009).
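The parameter counts above can be checked with a short sketch (the function name is ours, not from the text):

```python
# Number of parameters for an unstructured G versus an FA structure with
# n traits and m common factors: n specific variances plus the free
# elements of F after removing the m(m - 1)/2 orthogonality constraints.
def fa_parameter_counts(n, m):
    unstructured = n * (n + 1) // 2
    fa = n + m * (2 * n - m + 1) // 2
    return unstructured, fa

print(fa_parameter_counts(3, 1))   # (6, 6): no saving with three traits
print(fa_parameter_counts(4, 1))   # (10, 8): the text's n = 4, m = 1 example
```

The n = 3 case shows why four traits is the minimum for which the FA structure reduces the parameter count when all specific variances are non-zero.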


         Mixed model equations

Assume that the multi-trait linear mixed model of Eqn 5.1 is presented as:
            y = Xb + Za + e                                                  (6.9)


          102                                                             Chapter 6