
burst that can only transport a few bits in exchange for large guard periods at the beginning and end of the burst. This is necessary because the mobile device is unaware of the distance between itself and the base station when it attempts to contact the network and is thus unable to select an appropriate timing advance value. When the base station receives the burst, it measures the delay and forwards the request, including the timing advance value required for this mobile device, to the BSC. As shown in Figure 1.27, the BSC reacts to the connection request by returning an Immediate Assignment message to the mobile device on the AGCH. Apart from the number of the assigned SDCCH, the message also contains an initial timing advance value to be used for the subsequent communication on the SDCCH. Once the connection has been successfully established, the BTS continually monitors the delay experienced on this channel and reports any changes to the BSC. The BSC in turn instructs the mobile device to change its timing advance by sending a message on the SACCH.
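
To make the relationship between propagation delay and the timing advance value more tangible, the following short Python sketch (purely illustrative, not part of any standard implementation) computes the timing advance that corresponds to a given distance. It assumes the usual GSM parameters of a bit period of 48/13 microseconds and a timing advance range of 0 to 63 bit periods:

# Illustrative calculation of the GSM timing advance (TA) value
# for a given distance between mobile device and base station.
SPEED_OF_LIGHT = 299_792_458        # meters per second
BIT_PERIOD = 48 / 13 * 1e-6         # one GSM bit period in seconds (~3.69 microseconds)

def timing_advance(distance_m):
    # The signal has to travel to the base station and back, hence the factor 2.
    round_trip_delay = 2 * distance_m / SPEED_OF_LIGHT
    ta = round(round_trip_delay / BIT_PERIOD)
    if ta > 63:
        raise ValueError("distance exceeds the ~35 km range of a normal cell")
    return ta

# Example: a mobile device 10 km away from the base station requires a TA of about 18.
print(timing_advance(10_000))       # -> 18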
For special applications like coastal communication, the GSM standard offers an additional timeslot configuration that increases the maximum distance to the base station to up to 120 km. This is achieved by using only every second timeslot per carrier, which allows a burst to overlap into the following (empty) timeslot. While this significantly increases the range of a cell, the number of available communication channels is cut in half. Another issue is that mobile devices that are limited to a transmission power of 1 W (1800 MHz band) or 2 W (900 MHz band) may be able to receive the BCCH of such a cell at a great distance but are unable to communicate with the cell in the uplink direction. Thus, such an extended-range configuration mostly makes sense for permanently installed mobile devices with external antennas that can transmit with a power level of up to 8 W.
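
The 120 km figure can be made plausible with a rough calculation. The sketch below assumes that, in the extended-range configuration, a burst may arrive up to one full timeslot (about 577 microseconds) late because the following timeslot is left empty; this is an illustrative approximation rather than a normative derivation:

# Rough estimate of the maximum radius of an extended-range cell.
SPEED_OF_LIGHT = 299_792_458               # meters per second
BIT_PERIOD = 48 / 13 * 1e-6                # ~3.69 microseconds
TIMESLOT_DURATION = 156.25 * BIT_PERIOD    # ~577 microseconds

normal_range = 63 * BIT_PERIOD * SPEED_OF_LIGHT / 2      # ~35 km, limited by TA values 0..63
extra_range = TIMESLOT_DURATION * SPEED_OF_LIGHT / 2     # ~86 km gained from the empty timeslot
print(round((normal_range + extra_range) / 1000))        # -> roughly 121 km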


1.7.5  The TRAU for Voice Encoding
For the transmission of voice data, a TCH is used in GSM as described in Section 1.7.3. A TCH uses all but two bursts of a 26-burst multiframe, with one being reserved for the SACCH, as shown in Figure 1.25, and the other remaining empty to allow the mobile device to perform neighboring cell measurements. As shown in the preceding section, a burst that is sent to or from the mobile device every 4.615 milliseconds can carry exactly 114 bits of user data. Taking into account the two bursts of a 26-burst multiframe that are not used for user data, this results in a raw datarate of 22.8 kbit/s. As shown in the remainder of this section, a substantial part of the bandwidth of a burst is required for error detection and correction bits. The resulting datarate for the actual user data is thus around 13 kbit/s.
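
The 22.8 kbit/s figure follows directly from the numbers above, as the short calculation below shows (the values are those given in the text; the script itself is merely a worked example):

# Raw datarate of a GSM traffic channel (TCH).
BITS_PER_BURST = 114              # user data bits carried by a normal burst
FRAME_DURATION = 4.615e-3         # seconds between two bursts of the same channel
BURSTS_PER_MULTIFRAME = 26        # length of the 26-burst multiframe
USED_BURSTS = 24                  # all bursts except the SACCH and the empty burst

multiframe_duration = BURSTS_PER_MULTIFRAME * FRAME_DURATION   # ~120 ms
raw_rate = USED_BURSTS * BITS_PER_BURST / multiframe_duration
print(round(raw_rate / 1000, 1))  # -> 22.8 (kbit/s)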
The narrow bandwidth of a TCH stands in contrast to how a voice signal is transported in the core network. Here, the PCM algorithm (see Section 1.6.1) is used to digitize the voice signal and makes full use of the available 64 kbit/s bandwidth of an E-1 timeslot (see Figure 1.31).
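
The 64 kbit/s of a PCM channel result from sampling the voice signal 8000 times per second with 8 bits per sample (see Section 1.6.1). A one-line comparison with the roughly 13 kbit/s available for user data on the air interface:

# PCM datarate in the core network compared to the net TCH rate on the air interface.
pcm_rate = 8000 * 8                     # 8000 samples/s * 8 bits/sample = 64000 bit/s
print(round(pcm_rate / 13_000, 1))      # -> ~4.9, i.e. PCM needs almost five times the TCH rate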
A simple solution for the air interface would have been to define channels that can also carry 64 kbit/s PCM-coded voice. This has not been done because the scarce resources on the air interface have to be used as efficiently as possible. The decision to compress the speech signal was taken during the first standardization phase in the 1980s because it was already foreseeable that advances in hardware and software-processing capabilities would allow a voice data stream to be compressed in real time.