122 From GSM to LTE-Advanced Pro and 5G
While this concept enabled the network to transfer data to a user much faster than
before, there were still a number of shortcomings, which were resolved by UMTS.
With GPRS, it was only possible to bundle timeslots on a single carrier frequency.
In theory, up to eight timeslots could thus be bundled. In an operational
network, however, a mobile device was rarely assigned more than four or five
timeslots, as some of the timeslots of a carrier were used for the voice calls of
other users. Furthermore, on the mobile device side, most phones could only handle
four or five timeslots at a time in the downlink direction.
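The resulting data rates can be illustrated with a rough calculation. The per-timeslot rate assumes the widely deployed CS-2 coding scheme (about 13.4 kbit/s of user data per timeslot); the function name is ours:

```python
# Rough GPRS downlink throughput estimate, assuming the CS-2 coding
# scheme (~13.4 kbit/s of user data per timeslot).
CS2_KBITS_PER_SLOT = 13.4

def gprs_throughput_kbits(bundled_slots: int) -> float:
    """Peak user data rate when bundling the given number of timeslots."""
    if not 1 <= bundled_slots <= 8:
        raise ValueError("a GSM carrier has only 8 timeslots")
    return bundled_slots * CS2_KBITS_PER_SLOT

# Theoretical maximum vs. the 4-5 slots achievable in practice:
print(gprs_throughput_kbits(8))  # -> 107.2
print(gprs_throughput_kbits(4))  # -> 53.6
```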
A GSM base station was initially designed for voice traffic, which only required a
modest amount of transmission capacity. This is why GSM base stations were usually
connected to the BSC via a single 2 Mbit/s E‐1 connection. Depending on the number
of carrier frequencies and sectors of the base station, only a fraction of the capacity of
the E‐1 connection was used. The remaining 64 kbit/s timeslots were used for other
base stations. Furthermore, the processing capacity of GSM base stations was only
designed to support the modest requirements of voice processing rather than the com-
puting‐intensive high‐speed data transmission capabilities required today.
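Why a single E-1 usually sufficed can be sketched with a back-of-the-envelope calculation: an E-1 carries 32 timeslots of 64 kbit/s (2.048 Mbit/s in total), and full-rate speech is compressed into 16 kbit/s sub-slots on the Abis interface, so the 8 air timeslots of one carrier (TRX) fit into two E-1 timeslots. The site layout and signaling overhead below are assumed example values:

```python
# Back-of-the-envelope Abis utilization of a 2.048 Mbit/s E-1 link.
E1_TIMESLOTS = 32       # 32 x 64 kbit/s = 2.048 Mbit/s
SLOTS_PER_TRX = 2       # 8 air timeslots x 16 kbit/s = 128 kbit/s
SIGNALING_SLOTS = 1     # assumed signaling overhead per site

def e1_slots_used(sectors: int, trx_per_sector: int) -> int:
    """E-1 timeslots occupied by one base station site."""
    return sectors * trx_per_sector * SLOTS_PER_TRX + SIGNALING_SLOTS

# A typical 3-sector site with 2 carriers per sector:
used = e1_slots_used(sectors=3, trx_per_sector=2)
print(used, "of", E1_TIMESLOTS)  # -> 13 of 32
```

The remaining 19 timeslots in this example are what the text refers to as capacity available for other base stations.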
At the time UMTS was first rolled out, the existing GPRS implementations assigned
resources (i.e. timeslots) in the uplink and downlink directions to the user only for
exactly the time they were required. In order for uplink resources to be assigned, the
mobile device had to send a request to the network. This introduced unwanted
delays of 500 to 700 milliseconds whenever data needed to be sent.
Likewise, resources were only assigned in the downlink direction when data had to be
sent from the core network to a user. These resources also had to be assigned to a
specific user before any data could be delivered, which took another 200 milliseconds.
These delays were tolerable if a large chunk of data had to be transferred. For short
and bursty data transmissions, as in a web-browsing session, however, the delay was
clearly noticeable.
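The impact on bursty traffic can be illustrated with a rough calculation based on the delay figures quoted above. The request count and the assumption that every request pays the full assignment penalty are simplifications for illustration:

```python
# Illustrative latency budget for a small web page over early GPRS.
UPLINK_ASSIGN_MS = 600      # mid-point of the 500-700 ms range above
DOWNLINK_ASSIGN_MS = 200

def page_load_overhead_ms(requests: int) -> int:
    """Extra delay if each request triggers a fresh resource assignment."""
    return requests * (UPLINK_ASSIGN_MS + DOWNLINK_ASSIGN_MS)

# A page requiring 5 sequential requests waits 4 extra seconds for
# timeslot assignments alone, regardless of the actual transfer time:
print(page_load_overhead_ms(5) / 1000)  # -> 4.0
```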
UMTS Release 99 solved these shortcomings as follows.
To increase the data transmission speed per user, UMTS increased the bandwidth per
carrier frequency from 200 kHz to 5 MHz. This approach had advantages over simply
adding more carriers (dispersed over the frequency band) to a data transmission, as
mobile devices can be manufactured much more cheaply when only a single frequency
is used for data transfer.
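The factor involved is easy to check:

```python
# Carrier bandwidth ratio between GSM and UMTS.
GSM_CARRIER_HZ = 200_000      # 200 kHz GSM carrier
UMTS_CARRIER_HZ = 5_000_000   # 5 MHz UMTS carrier
print(UMTS_CARRIER_HZ // GSM_CARRIER_HZ)  # -> 25
```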
The most important improvement of UMTS was the use of a new medium access
scheme on the air interface. Instead of the combined frequency- and time-division
(FTDMA) scheme of GSM, UMTS
introduced code multiplexing to allow a single base station to communicate with many
users at the same time. This method is called Code Division Multiple Access (CDMA).
Contrary to the frequency and time multiplexing of GSM, all users communicate on
the same carrier frequency and at the same time. Before transmission, a user’s data is
multiplied by a code that can be distinguished from codes used by other users. As the
data of all users is sent at the same time, the signals add up on the transmission path to
the base station. Since the base station knows the code of each user, it can apply the
inverse of the mathematical operation performed by each mobile device and separate
the individual data streams again.
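The spreading and despreading just described can be sketched as a toy model: two users, length-4 orthogonal Walsh codes, ideal channel. (Real UMTS uses much longer OVSF channelization codes plus scrambling; the code below only illustrates the principle.)

```python
import numpy as np

# Two orthogonal length-4 Walsh codes, one per user (chips are +/-1).
CODES = np.array([
    [ 1,  1,  1,  1],
    [ 1, -1,  1, -1],
])

def spread(bits, code):
    """Map bits to +/-1 symbols and multiply each symbol chip-by-chip."""
    symbols = np.array([1 if b else -1 for b in bits])
    return np.repeat(symbols, len(code)) * np.tile(code, len(bits))

def despread(signal, code):
    """Correlate the summed signal with one user's code and slice bits."""
    chips = signal.reshape(-1, len(code))
    return [int(c @ code > 0) for c in chips]

# The users' signals simply add up on the transmission path:
tx = spread([1, 0], CODES[0]) + spread([0, 0], CODES[1])
print(despread(tx, CODES[0]))  # -> [1, 0]
print(despread(tx, CODES[1]))  # -> [0, 0]
```

Because the codes are orthogonal, correlating the combined signal with one user's code cancels the contribution of the other user exactly, which is the essence of CDMA.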
This principle can, within certain limits, also be described with the following analogy:

● Communication during a lecture. Usually there is only one person speaking at a
time while many people in the room are just listening. The bandwidth of the ‘trans-
mission channel’ is high as it is only used by a single person. At the same time,