As already mentioned, digital television broadcasts a digital signal, but multimedia content, i.e. video and audio signals, comes from its sources in analog form and has to be converted into digital form by an analog-to-digital converter.
However, an analog video signal requires a bandwidth of about 5 MHz; a standard European 625-line TV signal with 720 pixels per line and 576 active lines contains 414,720 (576 x 720) pixels per picture (frame). After digitization, a black-and-white video signal (with 25 pictures per second) would require a rate of about 83 Mbps, and a color video signal about 250 Mbps. Such bit rates are too high to be practical in real systems (e.g. over satellite). Fortunately, video signals as well as audio signals contain a lot of redundant information that can be removed with suitable compression techniques. With compression, the original rate can be reduced (depending on quality and resolution) to several Mbps.
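As a quick check of these figures, the following sketch recomputes the raw bit rates under the assumptions implied above (8 bits per sample, three samples per pixel for color, 25 frames per second); the constants are illustrative rather than taken from any standard document.

```python
# Raw (uncompressed) bit rate of a digitized 625-line TV picture.
# Assumptions: 576 active lines x 720 pixels, 8 bits per sample,
# 25 frames per second, 3 samples per pixel for color video.
PIXELS_PER_FRAME = 576 * 720      # 414,720 pixels
FRAME_RATE = 25                   # pictures per second
BITS_PER_SAMPLE = 8

bw_rate = PIXELS_PER_FRAME * BITS_PER_SAMPLE * FRAME_RATE   # black-and-white
color_rate = bw_rate * 3                                     # three color components

print(f"black-and-white: {bw_rate / 1e6:.1f} Mbps")    # ~82.9 Mbps
print(f"color:           {color_rate / 1e6:.1f} Mbps")  # ~248.8 Mbps
```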
Redundant information is, for example, information that can be predicted and therefore does not need to be transmitted, because the receiver (decoder) is able to reconstruct it. Compression techniques also exploit the limitations of human perception of acoustic and visual information. Audio signals contain components that cannot be heard by the human ear and can be removed. A video signal contains a lot of information that repeats in consecutive frames, and suitable algorithms exploit this to reduce the amount of data transferred from the transmitter to the receiver.
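To make the idea of temporal redundancy concrete, the minimal sketch below (an illustration under stated assumptions, not any MPEG algorithm) compares the compressed size of a whole frame with the compressed size of its difference against the previous frame; when little changes between frames, the difference is mostly zeros and codes far more compactly.

```python
import numpy as np
import zlib

# Two synthetic 8-bit grayscale frames that differ only in a small region,
# mimicking a mostly static scene (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, size=(576, 720), dtype=np.uint8)
frame2 = frame1.copy()
frame2[100:120, 200:240] += 10            # small "moving object"

# Difference (prediction residual) between consecutive frames.
diff = frame2.astype(np.int16) - frame1.astype(np.int16)

# A generic lossless compressor stands in for real entropy coding.
full_size = len(zlib.compress(frame2.tobytes()))
diff_size = len(zlib.compress(diff.tobytes()))
print(f"compressed full frame: {full_size} bytes")
print(f"compressed difference: {diff_size} bytes (much smaller)")
```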
For the compression of digital still pictures the JPEG format was developed, which uses the discrete cosine transform. For moving pictures, the Moving Picture Experts Group (MPEG) was formed with the task of developing efficient compression techniques for working with video clips on computers and transporting them between computers and other devices.
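As an illustration of why the discrete cosine transform is useful for still-picture coding, the sketch below applies a plain 2-D DCT-II to a hypothetical smooth 8x8 block; most of the signal energy ends up in a few low-frequency coefficients, which JPEG-style coders exploit. This is a didactic implementation, not JPEG itself.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

# Hypothetical smooth 8x8 luminance block (a simple gradient).
block = np.add.outer(np.arange(8), np.arange(8)).astype(float) * 4 + 100

C = dct_matrix(8)
coeffs = C @ (block - 128) @ C.T       # 2-D DCT of the level-shifted block

# Energy compaction: most coefficients carry almost no energy.
small = np.sum(np.abs(coeffs) < 1.0)
print(f"{small} of 64 DCT coefficients are nearly zero")
```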
DVB technology adopted MPEG compression standards [2].
The first standard defined by this group was MPEG-1. It provides medium-quality video at low constant bit rates of up to 1.5 Mbps for interactive systems with video stored on CD-ROMs. It also became very popular for the distribution of video clips over the Internet. However, MPEG-1 was not capable of replacing analog television. Based on the MPEG-1 principles, the new MPEG-2 standard was developed. The MPEG-2 definition was driven by the need to encode standard television and distribute it via terrestrial, cable and satellite systems.
The MPEG-2 standard is optimized for broadcasting and for higher bit rates (2 Mbps and more), depending on the required video quality and resolution. It is also suitable for movie storage, e.g. on DVDs. MPEG-2 is backward compatible with MPEG-1, i.e. an MPEG-2 decoder can decode all MPEG-1 encoded elementary streams [2].
The MPEG-4 standard, published in 1998, offers coding of audio-visual objects.
It contains more complex algorithms, which allow it to provide video of the same quality at lower bit rates than MPEG-2. MPEG-4 supports a wide range of bit rates and can be used for low-bit-rate Internet (IP) TV as well as for the distribution of high-definition TV. Its advanced video coding part (MPEG-4 Part 10) was also standardized by the ITU-T as H.264.
The MPEG-2 standard (just like MPEG-1) defines three main parts: video coding, audio coding and systems (the multiplexing of the coded elementary streams).
The MPEG-2 video coder takes uncompressed video frames and encodes them, whereby fixed-size frames are converted (compressed) into frames (access units, Fig. 3) of variable size. Their size depends on the complexity of the original picture and on the type of each frame, i.e. whether it is an I, P or B frame [2]:
- I (intra-coded) pictures are coded independently of other pictures and are therefore the largest;
- P (predicted) pictures are coded with reference to the preceding I or P picture and are smaller;
- B (bidirectionally predicted) pictures are coded with reference to both the preceding and the following I or P picture and are the smallest.
The MPEG video coder produces sequences of I, P and B pictures and arranges them into groups of pictures (GOP). Each GOP starts with an I picture followed by P and/or B pictures (Fig. 4). The presence and number of P and B pictures in a GOP influence the final compression ratio, the coding delay, the ability to edit the stream and the propagation of errors.
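The sketch below generates a display-order GOP pattern from the two parameters commonly used to describe it, the GOP length N (distance between I pictures) and the anchor spacing M (distance between consecutive I/P pictures), and gives a rough feel for how B-picture usage improves compression. The relative picture sizes are hypothetical weights chosen for illustration only.

```python
def gop_pattern(n=12, m=3):
    """Display-order GOP pattern: an I picture every n pictures, an anchor (I/P) every m."""
    return "".join(
        "I" if i % n == 0 else ("P" if i % m == 0 else "B")
        for i in range(n)
    )

# Hypothetical relative coded sizes (I largest, B smallest).
REL_SIZE = {"I": 1.0, "P": 0.4, "B": 0.15}

for n, m in [(12, 1), (12, 3)]:           # no B pictures vs. two B pictures per anchor
    pattern = gop_pattern(n, m)
    avg = sum(REL_SIZE[p] for p in pattern) / len(pattern)
    print(f"N={n}, M={m}: {pattern}  average relative picture size {avg:.2f}")
```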
The MPEG-1 audio part defines three audio layers: Layer I (used mainly on Philips' Digital Compact Cassette), Layer II (a more efficient coder for fixed bit rates from 32 to 192 kbps per channel) and Layer III (popular as the MP3 format). MPEG-2 extends these audio coders with coding of more than two audio channels (up to 5.1 multichannel) as well as with additional audio coders (MPEG-2 AAC).
With the new ultra-high-definition resolutions (UHD, 4K as 3840x2160 and 8K as 7680x4320), a new demand for efficient video codecs appeared. The H.264/MPEG-4 AVC codec was extended to support these resolutions, and a new video compression standard was developed and standardized in 2013 as H.265/MPEG-H Part 2, known as High Efficiency Video Coding (HEVC). Compared with its predecessor, HEVC should roughly double the compression ratio at the same level of video quality.
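To put the UHD figures into perspective, the sketch below compares the pixel counts of the resolutions mentioned above with the 720x576 picture used earlier; the raw data rate grows by the same factor, which is why a roughly twofold gain from HEVC matters. The frame rate and bit depth are the same illustrative assumptions as before.

```python
RESOLUTIONS = {
    "SD (720x576)":   720 * 576,
    "4K (3840x2160)": 3840 * 2160,
    "8K (7680x4320)": 7680 * 4320,
}

sd = RESOLUTIONS["SD (720x576)"]
for name, pixels in RESOLUTIONS.items():
    # Raw color rate under the earlier assumptions: 3 x 8 bits per pixel, 25 fps.
    raw_mbps = pixels * 24 * 25 / 1e6
    print(f"{name:>15}: {pixels / sd:5.1f} x SD pixels, ~{raw_mbps:7.0f} Mbps raw")
```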