1. Sample W_1^[i+1] from P(W_1 | W_2^[i], W_3^[i], Y)
2. Sample W_2^[i+1] from P(W_2 | W_1^[i+1], W_3^[i], Y)
3. Sample W_3^[i+1] from P(W_3 | W_1^[i+1], W_2^[i+1], Y)
Usually, the initial samples are discarded (the so-called burn-in period). In summary, applying the Gibbs sampler involves defining the prior distributions and the joint posterior density, deriving the full conditional posterior distributions, and sampling from the latter.
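As an illustration of this updating scheme (not taken from the text), the sketch below runs a three-block Gibbs sampler in Python on a toy trivariate normal target, for which every full conditional distribution is a univariate normal; the target mean and covariance, chain length and burn-in length are arbitrary choices made for the example.

```python
# Illustrative Gibbs sampler for a toy trivariate normal target
# (all values are arbitrary; this example is not from the text).
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, 2.0, 3.0])              # target mean
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])         # target covariance

n_iter, burn_in = 5000, 500
w = np.zeros(3)                             # arbitrary starting values W1, W2, W3
samples = []

for it in range(n_iter):
    # Steps 1-3: sample each W_k from its full conditional given the
    # most recent values of the other two components.
    for k in range(3):
        rest = [j for j in range(3) if j != k]
        S12 = Sigma[k, rest]
        S22_inv = np.linalg.inv(Sigma[np.ix_(rest, rest)])
        cond_mean = mu[k] + S12 @ S22_inv @ (w[rest] - mu[rest])
        cond_var = Sigma[k, k] - S12 @ S22_inv @ S12
        w[k] = rng.normal(cond_mean, np.sqrt(cond_var))
    if it >= burn_in:                       # discard the burn-in period
        samples.append(w.copy())

print(np.mean(samples, axis=0))             # sample means approach mu
```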
The Gibbs sampler was first implemented by Geman and Geman (1984). In ani-
mal breeding, Wang et al. (1993, 1994) used Gibbs sampling for variance component
estimation in sire and animal models. It has been implemented for the study of covari-
ance components in models with maternal effects (Jensen et al., 1994), in threshold
models (Sorensen et al., 1995) and in random regression models (Jamrozik and
Schaeffer, 1997). It has recently been employed for the purposes of variance compo-
nent estimation and breeding value prediction in linear threshold models (Heringstad
et al., 2002; Wang et al., 2002). Detailed presentations of Gibbs sampling within the general framework of Bayesian inference, and of its application to variance component estimation under several models, have been published by Sorensen and Gianola (2002). In this chapter, the application of the Gibbs sampler for variance component estimation and prediction of breeding values with univariate and multivariate animal models is presented and illustrated.
16.2 Univariate Animal Model
Consider the following univariate linear model:
y = Xb + Zu + e
where terms are as defined in Eqn 3.1, with u = a. The conditional distribution that generates the data, y, is:
y | b, u, σ_e^2 ~ N(Xb + Zu, Rσ_e^2)   (16.3)
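As a small illustration (dimensions and parameter values below are made up, and R is taken as an identity matrix), a data vector y can be drawn from this conditional distribution as follows; such simulated data are convenient for checking a sampler.

```python
# Sketch: draw y from y | b, u, sigma_e^2 ~ N(Xb + Zu, R*sigma_e^2), with R = I.
# All numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
X = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])          # incidence matrix for fixed effects
Z = np.eye(3)                       # incidence matrix for random animal effects
b = np.array([10.0, 12.0])          # fixed effects
u = np.array([0.5, -0.3, 0.1])      # breeding values
sigma2_e = 2.0                      # residual variance

mean = X @ b + Z @ u
R = np.eye(3)                       # residual covariance structure (identity here)
y = rng.multivariate_normal(mean, R * sigma2_e)
```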
16.2.1 Prior distributions
Prior distributions of b, u, σ_u^2 and σ_e^2 are needed to complete the Bayesian specification of the model (Wang et al., 1993). Usually, a flat prior distribution is assigned to b. Thus:
P(b) ∝ constant   (16.4)
This represents an improper or ‘flat’ prior distribution, denoting lack of prior knowledge about this vector. However, if there is a priori information about the value of b in terms of upper or lower limits, this can be incorporated in defining the prior distribution of b. Such a prior distribution will be called a proper prior distribution.
Assuming an infinitesimal model, the distribution of u is multivariate normal:
u | A, σ_u^2 ~ N(0, Aσ_u^2)   (16.5)
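For illustration (the relationship matrix and variance below are invented values, not from the text), a draw from this prior can be obtained through the Cholesky factor of Aσ_u^2:

```python
# Sketch: one draw of breeding values u from u | A, sigma_u^2 ~ N(0, A*sigma_u^2).
# The pedigree (two unrelated parents and one offspring) and variance are illustrative.
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.5, 0.5, 1.0]])     # numerator relationship matrix
sigma2_u = 0.4                      # additive genetic variance

L = np.linalg.cholesky(A * sigma2_u)
u = L @ rng.standard_normal(3)      # u ~ N(0, A*sigma_u^2)
```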