Linear Models for the Prediction of Animal Breeding Values, 3rd Edition

16 Use of Gibbs Sampling in Variance Component Estimation and Breeding Value Prediction




         16.1 Introduction

Gibbs sampling is a numerical integration method and is one of several Markov chain Monte Carlo (MCMC) methods. These methods involve drawing samples from specified distributions (hence Monte Carlo) and are referred to as Markov chain because each sample depends on the previous sample. Specifically, Gibbs sampling involves generating random drawings from the marginal posterior distributions through iteratively sampling from the conditional posterior distributions. For instance, given that Q′ = (Q1, Q2) and P(Q1, Q2) is the joint distribution of Q1 and Q2, Gibbs sampling involves sampling from the full conditional posterior distributions of Q1, P(Q1|Q2), and of Q2, P(Q2|Q1).
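The idea above can be sketched for a toy case. Assuming a small joint distribution P(Q1, Q2) over two binary variables (the probabilities below are illustrative numbers, not from the text), repeatedly drawing from the two full conditionals P(Q1|Q2) and P(Q2|Q1) yields draws whose long-run frequencies approximate the marginal distributions:

```python
import random

# Illustrative joint distribution P(Q1, Q2) for binary Q1, Q2
# (hypothetical values chosen only for this sketch).
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def sample_q1_given_q2(q2, rng):
    # P(Q1 = 1 | Q2 = q2) = P(1, q2) / (P(0, q2) + P(1, q2))
    p1 = joint[(1, q2)] / (joint[(0, q2)] + joint[(1, q2)])
    return 1 if rng.random() < p1 else 0

def sample_q2_given_q1(q1, rng):
    # P(Q2 = 1 | Q1 = q1) = P(q1, 1) / (P(q1, 0) + P(q1, 1))
    p1 = joint[(q1, 1)] / (joint[(q1, 0)] + joint[(q1, 1)])
    return 1 if rng.random() < p1 else 0

rng = random.Random(42)
q1, q2 = 0, 0                     # arbitrary starting values
n = 50000
counts = 0
for _ in range(n):
    q1 = sample_q1_given_q2(q2, rng)   # draw from P(Q1 | Q2)
    q2 = sample_q2_given_q1(q1, rng)   # draw from P(Q2 | Q1)
    counts += q1

print(counts / n)  # close to the true marginal P(Q1 = 1) = 0.3 + 0.4 = 0.7
```

Although each draw uses only a conditional distribution, the relative frequency of Q1 = 1 across the chain approximates the marginal probability, which is the point of the sampler.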
   Thus, given that the joint posterior distribution is known to proportionality, the conditional distributions can be generated. However, defining the joint density involves the use of Bayes' theorem. In general, given that the probability of two events occurring together, P(B, Y), is:

            P(B,Y) = P(B)P(Y|B) = P(Y)P(B|Y)
         then:
            P(B|Y) = P(B)P(Y|B)/P(Y)                                        (16.1)
Equation 16.1 implies that inference about the variable B depends on the prior probability of its occurrence, P(B). Given that observations on Y are available, this prior probability is then updated to obtain the posterior probability or density of B, P(B|Y).
         Equation 16.1 is commonly expressed as:
            P(B|Y) ∝ P(B)P(Y|B)                                             (16.2)
as the denominator is not a function of B. Therefore, the posterior density of B is proportional to the prior probability of B times the conditional distribution of Y given B. Assuming that B in Eqn 16.2 is replaced by W, a vector of parameters, such that W′ = (W1, W2, W3), and that the joint posterior distribution is known to proportionality (Eqn 16.2), the full conditional probabilities needed for the Gibbs sampler can be generated for each parameter as P(W1|W2, W3, Y), P(W2|W1, W3, Y) and P(W3|W1, W2, Y). Assuming starting values W1[0], W2[0] and W3[0], the implementation of the Gibbs sampler involves iterating the following loop:
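As a minimal sketch of such a loop, consider a toy target in which (W1, W2, W3) are equicorrelated standard normals with correlation 0.5 (an illustrative choice, not the model in the text). For that target each full conditional is itself normal, Wi | Wj, Wk ~ N((Wj + Wk)/3, 2/3), so each pass samples W1, W2 and W3 in turn from its full conditional given the current values of the other two:

```python
import random

# Gibbs loop for a toy target: equicorrelated trivariate standard normal
# with correlation 0.5, so each full conditional is
#   Wi | Wj, Wk ~ N((Wj + Wk) / 3, 2/3).
# (This target is a hypothetical stand-in for the full conditionals
# P(W1|W2,W3,Y), etc., of an actual model.)
rng = random.Random(0)
w1, w2, w3 = 0.0, 0.0, 0.0          # starting values W1[0], W2[0], W3[0]
sd = (2.0 / 3.0) ** 0.5             # conditional standard deviation
draws = []
for _ in range(30000):
    w1 = rng.gauss((w2 + w3) / 3.0, sd)   # draw from P(W1 | W2, W3, Y)
    w2 = rng.gauss((w1 + w3) / 3.0, sd)   # draw from P(W2 | W1, W3, Y)
    w3 = rng.gauss((w1 + w2) / 3.0, sd)   # draw from P(W3 | W1, W2, Y)
    draws.append((w1, w2, w3))            # one round of the Gibbs loop
# After a burn-in period, the stored triples behave as draws from the
# joint distribution, and each coordinate's samples estimate its marginal.
```

Note that each parameter is updated using the most recent values of the others, which is what makes each sweep a draw from the chain whose stationary distribution is the joint posterior.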
© R.A. Mrode 2014. Linear Models for the Prediction of Animal Breeding Values, 3rd Edition (R.A. Mrode)