
algorithm to sample from $p(\sigma^2_{g_i} \mid \mathbf{y}^*)$ using the prior distribution, $p(\sigma^2_{g_i})$, as the driver distribution to suggest updates for the MH chain as follows:

1. Sample $\sigma^2_{g_i(\text{new})}$ from the prior distribution $p(\sigma^2_{g_i})$.
2. Replace the current $\sigma^2_{g_i}$ by $\sigma^2_{g_i(\text{new})}$ with a probability of $k$:

   $k = \min\{p(\mathbf{y}^* \mid \sigma^2_{g_i(\text{new})}) / p(\mathbf{y}^* \mid \sigma^2_{g_i});\ 1\}$

   and then go to step 1,
where $p(\mathbf{y}^* \mid \sigma^2_{g_i})$ is the likelihood of the data given $\sigma^2_{g_i}$. The likelihood can be calculated as:

   $L(\mathbf{y}^* \mid \sigma^2_{g_i}) = \dfrac{1}{(2\pi)^{n/2}\,|\mathbf{V}|^{1/2}}\; e^{-\frac{1}{2}\,\mathbf{y}^{*\prime}\mathbf{V}^{-1}\mathbf{y}^*}$          (11.26)

where $\mathbf{V} = \mathbf{z}_i(\mathbf{I}\sigma^2_{g_i})\mathbf{z}_i' + \mathbf{I}\sigma^2_e$ and $|\mathbf{V}|$ is the determinant of $\mathbf{V}$. Note that if $\sigma^2_{g_i}$ is zero, as will happen in the course of the MH sampling, then $\mathbf{V} = \mathbf{I}\sigma^2_e$.
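For illustration, the following is a minimal NumPy sketch of Eqn 11.26, assuming $\mathbf{y}^*$ and $\mathbf{z}_i$ are one-dimensional arrays of length $n$; the function name likelihood_full is illustrative and not from the text:

    import numpy as np

    def likelihood_full(y_star, z_i, s2_g, s2_e):
        # Eqn 11.26: likelihood of the corrected records y* given
        # s2_g for marker i, with V = z_i (I*s2_g) z_i' + I*s2_e.
        # When s2_g is zero, V collapses to I*s2_e.
        n = len(y_star)
        V = s2_g * np.outer(z_i, z_i) + s2_e * np.eye(n)
        _, logdet = np.linalg.slogdet(V)          # stable log-determinant
        quad = y_star @ np.linalg.solve(V, y_star)  # y*' V^{-1} y*
        return np.exp(-0.5 * (n * np.log(2.0 * np.pi) + logdet + quad))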
The computation of the required likelihood is easier to implement in log-likelihood form. Fernando (2010) presented the following algorithm for the log-likelihood:

   $\log LH = -0.5\left(\log(V) + (\mathbf{z}_i'\mathbf{y}^*)'\, V^{-1}\, (\mathbf{z}_i'\mathbf{y}^*)\right)$          (11.27)

with:

   $V = (\mathbf{z}_i'\mathbf{z}_i)\,\sigma^2_{g_i}\,(\mathbf{z}_i'\mathbf{z}_i) + \mathbf{z}_i'\mathbf{z}_i\,\sigma^2_e$, or $V = \mathbf{z}_i'\mathbf{z}_i\,\sigma^2_e$ when $\sigma^2_{g_i}$ is zero.
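A corresponding sketch of Eqn 11.27, again with illustrative names; because $V$ is now a scalar, no matrix inversion is needed:

    import numpy as np

    def log_likelihood(y_star, z_i, s2_g, s2_e):
        # Eqn 11.27 (Fernando, 2010): log-likelihood with the scalar
        # V = (z_i'z_i)^2 * s2_g + z_i'z_i * s2_e, avoiding the n x n
        # covariance matrix of Eqn 11.26.
        ztz = z_i @ z_i                   # z_i'z_i, a scalar
        zty = z_i @ y_star                # z_i'y*, a scalar
        V = ztz**2 * s2_g + ztz * s2_e    # reduces to ztz*s2_e when s2_g = 0
        return -0.5 * (np.log(V) + zty**2 / V)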
In practice, a required number of MH cycles is implemented per cycle of Gibbs sampling. The implementation of each MH cycle involves:

1. Using Eqn 11.26 or 11.27, compute an initial likelihood (LH1) using the current $\sigma^2_{g_i}$. Note that the current $\sigma^2_{g_i}$ could be zero; LH1 is still computed, but with $V$ appropriately defined.
2. Then commence the MH cycle by drawing $r$ from a uniform distribution. Set $\sigma^2_{g_i(\text{new})}$ to zero. If $r < (1 - \pi)$, sample a $\sigma^2_{g_i(\text{new})}$ from the driver distribution using Eqn 11.25. Compute the likelihood (LH2) using $\sigma^2_{g_i(\text{new})}$ and calculate $k$ as $k = \min(LH2/LH1;\ 1)$. Note that if the log-likelihood of Eqn 11.27 is used, then $k = \exp(LH2 - LH1)$. The value of $k$ is compared with a number $s$ drawn from a uniform distribution. If $s$ is less than $k$, accept $\sigma^2_{g_i(\text{new})}$ and set LH1 = LH2. Go to step 1 and begin another MH cycle until the required number of MH cycles is complete.
After the required number of MH cycles, if $\sigma^2_{g_i(\text{new})} > 0$ then $g_i$ is sampled as in BayesA; otherwise $g_i = 0$. Similarly, the sampling of $\mathbf{b}$ and $\sigma^2_e$ is implemented as described in BayesA.
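Putting the steps together, the sketch below runs the MH cycles for one marker on the log scale (which avoids underflow in the likelihood ratio). The scaled inverse chi-square driver with parameters v and s2_v stands in for Eqn 11.25, which is not reproduced on this page, and all names are illustrative:

    import numpy as np

    rng = np.random.default_rng(1)

    def log_lh(y_star, z_i, s2_g, s2_e):
        # Eqn 11.27 with scalar V, as in the previous sketch
        ztz, zty = z_i @ z_i, z_i @ y_star
        V = ztz**2 * s2_g + ztz * s2_e
        return -0.5 * (np.log(V) + zty**2 / V)

    def mh_cycles(y_star, z_i, s2_g, s2_e, pi, v, s2_v, n_cycles=10):
        lh1 = log_lh(y_star, z_i, s2_g, s2_e)         # step 1: current log-likelihood
        for _ in range(n_cycles):
            s2_new = 0.0                              # step 2: proposal defaults to zero
            if rng.uniform() < (1.0 - pi):            # with probability (1 - pi) ...
                s2_new = v * s2_v / rng.chisquare(v)  # ... draw from the driver prior
            lh2 = log_lh(y_star, z_i, s2_new, s2_e)
            k = np.exp(lh2 - lh1)                     # k = exp(LH2 - LH1)
            if rng.uniform() < k:                     # compare k with uniform s
                s2_g, lh1 = s2_new, lh2               # accept and update LH1
        return s2_g

The returned $\sigma^2_{g_i}$ is then either zero (the marker is excluded in this round) or a positive value that feeds the BayesA-style sampling of $g_i$.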
         Example 11.8
         The application of BayesB is illustrated using the data in Example 11.1 with residual
         updating. The data for the reference animals is analysed with the model in Eqn 11.6.
         The initial parameters are the same as outlined for BayesA in Example 11.7 and the
         starting value of π was set at 0.30.
The starting values for $\hat{\mathbf{b}}$, $\hat{\mathbf{g}}$, $\sigma^2_{g_i}$ and $\hat{\mathbf{a}}$ were the same as for BayesA. The sampling procedure for the parameters is the same as for BayesA apart from the sampling of $\sigma^2_{g_i}$.
Initially, the vector of residuals, $\hat{\mathbf{e}}$, is set up; this has been given in Example 11.7. Therefore, in the first iteration, $\sigma_e^{2[1]} = 99.345/8.131 = 12.218$.
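As a quick check of this arithmetic, here is a sketch of the residual-variance draw, assuming $\hat{\mathbf{e}}'\hat{\mathbf{e}} = 99.345$ and that 8.131 was the realized chi-square deviate (its degrees of freedom depend on the example's setup and are not restated on this page):

    import numpy as np

    ss_e = 99.345            # e_hat' e_hat from Example 11.7
    chi2_draw = 8.131        # chi-square deviate realized in the text
    s2_e = ss_e / chi2_draw
    print(round(s2_e, 3))    # 12.218

    # In a running sampler the deviate would itself be drawn, e.g.
    # rng = np.random.default_rng(); chi2_draw = rng.chisquare(df)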