The adaptive equalizer has 2K + 1 adjustable complex coefficients. The coefficients W[k] can be adjusted so that the mean square error is minimum,

$$
\frac{\partial J}{\partial W[k]} = 0 \qquad (11.70)
$$

and

$$
\frac{\partial J}{\partial W^{*}[k]} = 0. \qquad (11.71)
$$
Using Eqs. (11.67) and (11.69) in Eq. (11.70), we find

$$
\frac{\partial J}{\partial W[k]} = \left\langle -x^{*}[n]\,y[n-k] + y[n-k]\,\hat{x}^{*}[n] \right\rangle = -\left\langle y[n-k]\,e^{*}[n] \right\rangle = 0. \qquad (11.72)
$$

Note that W[k] and W∗[k] are independent variables and, therefore, ∂x̂∗[n]∕∂W[k] = 0. From Eq. (11.71), we obtain
$$
\frac{\partial J}{\partial W^{*}[k]} = -\left\langle y^{*}[n-k]\,e[n] \right\rangle = 0, \qquad (11.73)
$$
           which is nothing but the complex conjugate of Eq. (11.72).
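As a numerical illustration of these optimality conditions, the following sketch (in Python with NumPy; the QPSK symbols, channel taps, and noise level are assumptions made for the example, not values from the text) minimizes a sample-average version of J by least squares and checks that, at the minimum, the error e[n] is uncorrelated with every delayed input y[n − k], as Eq. (11.73) requires:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (symbols, channel, and noise level are assumptions,
# not values from the text): QPSK training symbols x[n] pass through a
# short hypothetical channel and are observed as y[n] with additive noise.
K, N = 2, 20000
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), N)
h = np.array([0.15, 1.0, 0.3 - 0.2j])
y = np.convolve(x, h, mode="same")
y += 0.02 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Y[m, j] = y[n_m - k_j] for tap indices k = -K..K, so the equalizer
# output is x_hat[n] = sum_k W[k] y[n - k] = Y @ W.
ks = np.arange(-K, K + 1)
n = np.arange(K, N - K)
Y = y[n[:, None] - ks[None, :]]

# Minimize the sample-average cost J = <|x[n] - x_hat[n]|^2> by least squares.
W_opt, *_ = np.linalg.lstsq(Y, x[n], rcond=None)

# Eq. (11.73) at the minimum: <y*[n - k] e[n]> = 0 for every tap k.
e = x[n] - Y @ W_opt
corr = np.mean(np.conj(Y) * e[:, None], axis=0)
print(np.max(np.abs(corr)))  # ~1e-15: error is orthogonal to all y[n - k]
```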
            The tap weights W[−K], W[−K + 1], … , W[K] are optimum when the cost function J is minimum. To find
           the optimum tap weights, we follow an iterative procedure. Initially, tap weights are chosen arbitrarily as
$$
W^{(0)} = \left[\, W^{(0)}[-K],\; W^{(0)}[-K+1],\; \ldots,\; W^{(0)}[K] \,\right], \qquad (11.74)
$$
where ‘(0)’ stands for the zeroth iteration. To update the tap weights for the next iteration, we need to move in a vector space of 2K + 1 dimensions such that we are closer to a minimum of the cost function J. The gradient vector is defined as

$$
G = \left[\, g[-K],\; g[-K+1],\; \ldots,\; g[K] \,\right], \qquad (11.75)
$$

$$
g[k] = 2\,\frac{\partial J}{\partial W^{*}[k]} = -2\left\langle y^{*}[n-k]\,e[n] \right\rangle. \qquad (11.76)
$$
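The sign and factor-of-2 convention in Eq. (11.76) can be checked numerically using the standard Wirtinger-calculus identity g[k] = 2 ∂J∕∂W∗[k] = ∂J∕∂(Re W[k]) + j ∂J∕∂(Im W[k]). A minimal sketch, with the same kind of assumed channel and training symbols as above, compares Eq. (11.76) against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative signal model (not from the text).
K, N = 2, 4000
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), N)
h = np.array([0.15, 1.0, 0.3 - 0.2j])
y = np.convolve(x, h, mode="same")
y += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

ks = np.arange(-K, K + 1)
n = np.arange(K, N - K)
Y = y[n[:, None] - ks[None, :]]          # Y[m, j] = y[n_m - k_j]

def J(W):                                 # sample-average cost <|e[n]|^2>
    return np.mean(np.abs(x[n] - Y @ W) ** 2)

W = 0.1 * (rng.standard_normal(2*K + 1) + 1j * rng.standard_normal(2*K + 1))

# Analytic gradient, Eq. (11.76): g[k] = -2 <y*[n - k] e[n]>.
e = x[n] - Y @ W
g = -2 * np.mean(np.conj(Y) * e[:, None], axis=0)

# Finite-difference check: g[k] = dJ/dRe(W[k]) + 1j * dJ/dIm(W[k]).
eps = 1e-6
for j in range(2*K + 1):
    d = np.zeros(2*K + 1)
    d[j] = eps
    g_fd = (J(W + d) - J(W - d)) / (2 * eps) \
         + 1j * (J(W + 1j*d) - J(W - 1j*d)) / (2 * eps)
    assert np.allclose(g[j], g_fd, atol=1e-6)
```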
                                                                             (0)
           At the starting point, we have the tap weight vector W (0)  and the gradient vector G . From Eq. (11.71), we
           see that J is minimum when g[k] is zero. But at the starting point, g[k] may not be zero. Iteratively, we need
           to find W[k] such that g[k] is close to zero. The tap weight vector for the next iteration should be chosen in
                              (0)
                                                                        (0)
           a direction opposite to G . This is because, if we move in the direction of G , J would be maximized. So,
           the tap weights for the next iteration are chosen as
$$
W^{(1)} = W^{(0)} - \frac{\Delta}{2}\, G^{(0)} \qquad (11.77)
$$

or

$$
W^{(1)}[k] = W^{(0)}[k] - \frac{\Delta}{2}\, g^{(0)}[k] = W^{(0)}[k] + \Delta \left\langle y^{*}[n-k]\, e[n] \right\rangle, \qquad (11.78)
$$
where Δ is a step-size parameter and the factor 1∕2 in Eq. (11.77) is introduced for convenience. The convergence of the iterative procedure depends on the value of Δ chosen.
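A minimal sketch of the steepest-descent recursion of Eqs. (11.77)–(11.78) follows, with the ensemble average replaced by a sample average over a block of training symbols; the channel, the QPSK training data, and the step size Δ = 0.05 are illustrative assumptions, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed illustrative signal model (not from the text).
K, N = 2, 4000
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), N)
h = np.array([0.15, 1.0, 0.3 - 0.2j])
y = np.convolve(x, h, mode="same")
y += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

ks = np.arange(-K, K + 1)
n = np.arange(K, N - K)
Y = y[n[:, None] - ks[None, :]]          # Y[m, j] = y[n_m - k_j]

W = np.zeros(2*K + 1, dtype=complex)     # W^(0): arbitrary starting taps
delta = 0.05                             # step-size parameter Δ (assumed value)

for it in range(200):
    e = x[n] - Y @ W                     # e[n] for the current taps
    g = -2 * np.mean(np.conj(Y) * e[:, None], axis=0)   # Eq. (11.76)
    W = W - (delta / 2) * g              # Eq. (11.77): W^(i+1) = W^(i) - (Δ/2) G^(i)
    if it % 50 == 0:
        print(it, np.mean(np.abs(e) ** 2))   # J should decrease toward its minimum
```

For this quadratic cost, the recursion converges provided 0 < Δ < 2∕λmax, where λmax is the largest eigenvalue of the correlation matrix of the equalizer inputs; within that range the printed MSE decreases monotonically toward its minimum.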
In practice, it is difficult to evaluate the expectation operator of Eq. (11.78), which requires knowledge of the channel response H[n]. Instead, the