Error-Control Coding
 Hesham M. Al-Salman

In This Presentation
 Definition and importance of coding
 Types of codes: block and convolutional
 Linear block codes
 Convolutional codes
 Turbo codes
 Conclusion

Coding: Pros and Cons
 Channel (error-control) coding adds redundant bits to detect, or to detect and correct, errors.
 Goals: increase the operational range, reduce the error probability, or reduce transmit-power requirements.
 Typical gains are 4-5 dB in SNR at a given BER, with much larger FER gains.
 Much higher gains are now possible at relatively little cost compared to other methods.
 Sacrifice: bit rate and/or complexity.

Types of Codes
 In general there are two families of codes:
 Block codes: Hamming, cyclic, BCH, and Reed-Solomon codes
 Convolutional codes
 Related coded-modulation schemes: trellis-coded modulation (TCM) and space-time codes

The Distance
 The Hamming distance is the number of positions in which two code words differ, e.g. d(1101, 0101) = 1 and d(0000, 0111) = 3. The minimum distance dmin is the smallest Hamming distance between any two distinct code words. Adding redundant bits restricts which words are used as code words, thereby increasing the minimum distance between them.
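The distance computation can be sketched in a few lines of Python (function name illustrative):

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions in which two equal-length code words differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("1101", "0101"))  # -> 1
print(hamming_distance("0000", "0111"))  # -> 3
```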

Example: (7,4) Block Code

Linear Block Codes
 Check the previous code to see that it consists of a block of code words; your message is mapped to this format. Adding any two code words (modulo 2) gives a third code word. A code with minimum distance dmin is guaranteed to correct (dmin - 1)/2 errors (rounded down) or to detect dmin - 1 errors. For a linear code, dmin = wmin (the smallest weight of a nonzero code word). Any linear block code can be put in systematic form (message bits followed by parity bits).
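As a quick check of linearity, the modulo-2 sum of two code words of the (7, 4) code tabulated on the later slides is again a code word (a sketch; helper name illustrative):

```python
def xor_words(a: str, b: str) -> str:
    """Modulo-2 (bitwise XOR) sum of two binary code words."""
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

# Code words for messages 1000 and 0100 of the (7,4) code:
c1, c2 = "1101000", "0110100"
print(xor_words(c1, c2))     # -> 1011100, the code word for message 1100

# Guaranteed capability for dmin = 3: correct (3 - 1) // 2 = 1 error,
# or detect 3 - 1 = 2 errors.
print((3 - 1) // 2, 3 - 1)   # -> 1 2
```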

Hamming Codes
 Definition: (n, k) = (2^m - 1, 2^m - 1 - m). For Hamming codes dmin = 3; therefore they are capable of correcting any single error or detecting any double error.

Encoding Hamming Codes
 Encoding can be done using the generator matrix: c = u × G. For the (7, 4) code mentioned earlier, G is constructed from g(X) = 1 + X + X^3.
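A sketch of c = u × G, assuming the non-systematic generator matrix whose row i holds the coefficients of X^i · g(X) (the slide's own matrix is not reproduced here):

```python
# Generator matrix of the (7,4) cyclic Hamming code, g(X) = 1 + X + X^3.
G = [[1, 1, 0, 1, 0, 0, 0],   # g(X)
     [0, 1, 1, 0, 1, 0, 0],   # X   * g(X)
     [0, 0, 1, 1, 0, 1, 0],   # X^2 * g(X)
     [0, 0, 0, 1, 1, 0, 1]]   # X^3 * g(X)

def encode(u):
    """c = u x G over GF(2): XOR together the rows of G selected by u."""
    c = [0] * 7
    for bit, row in zip(u, G):
        if bit:
            c = [x ^ y for x, y in zip(c, row)]
    return c

print(encode([1, 0, 0, 1]))   # -> [1, 1, 0, 0, 1, 0, 1], i.e. 1100101
```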

Decoding Hamming Codes
 The direct way is to measure the Hamming distance between the received word and all possible code words (16 in the previous example); the code word at minimum distance is taken as the transmitted one. In practice, syndrome decoding is used: the received word is multiplied by the parity-check matrix H (chosen so that G × H^T = 0). The result, the syndrome, indicates the error pattern (if any).
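A minimal syndrome decoder for the (7, 4) code. The H below is one valid parity-check matrix for the non-systematic generator built from g(X) = 1 + X + X^3; since the slide's own H is not reproduced, this particular choice is an assumption:

```python
# Parity-check matrix with G x H^T = 0 for the (7,4) code, g(X) = 1 + X + X^3.
H = [[1, 0, 1, 1, 1, 0, 0],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

# A single-bit error in position i yields syndrome = column i of H.
SYNDROME_TO_POS = {tuple(row[i] for row in H): i for i in range(7)}

def syndrome_decode(r):
    """Correct at most one bit error using the syndrome s = r x H^T."""
    s = tuple(sum(ri & hi for ri, hi in zip(r, row)) % 2 for row in H)
    r = list(r)
    if any(s):                       # nonzero syndrome -> flip indicated bit
        r[SYNDROME_TO_POS[s]] ^= 1
    return r

c = [1, 1, 0, 0, 1, 0, 1]            # code word for message 1001
r = list(c); r[2] ^= 1               # inject a single bit error
print(syndrome_decode(r))            # -> recovers [1, 1, 0, 0, 1, 0, 1]
```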

Cyclic Codes
 An (n, k) linear block code C is cyclic if every cyclic shift of any code word in C is also a code word in C. The (7, 4) Hamming code discussed before is cyclic. Every cyclic code has a unique minimum-degree generator polynomial g(X), of degree r = n - k, and all code words are generated from g(X).

Encoding Cyclic Codes
 Encoding can be done by multiplying the message polynomial by the generator polynomial g(X). For the (7, 4) code above, g(X) = 1 + X + X^3. The message 1001 in polynomial form is 1 + X^3, so the resulting code word is (1 + X + X^3) × (1 + X^3) = 1 + X + X^4 + X^6, i.e. 1100101.
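The GF(2) polynomial multiplication can be sketched as follows (coefficient lists, constant term first; function name illustrative):

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (X^0 first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj      # XOR accumulates coefficients mod 2
    return out

g = [1, 1, 0, 1]            # g(X) = 1 + X + X^3
u = [1, 0, 0, 1]            # message 1001 -> 1 + X^3
print(poly_mul_gf2(u, g))   # -> [1, 1, 0, 0, 1, 0, 1], i.e. 1100101
```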

(7,4) Code Generated by 1 + X + X^3
 Info   Code      Code polynomial
 0000   0000000   0 = 0 · g(X)
 1000   1101000   1 + X + X^3 = 1 · g(X)
 0100   0110100   X + X^2 + X^4 = X · g(X)
 1100   1011100   1 + X^2 + X^3 + X^4 = (1 + X) · g(X)
 0010   0011010   X^2 + X^3 + X^5 = X^2 · g(X)
 1010   1110010   1 + X + X^2 + X^5 = (1 + X^2) · g(X)
 0110   0101110   X + X^3 + X^4 + X^5 = (X + X^2) · g(X)
 1110   1000110   1 + X^4 + X^5 = (1 + X + X^2) · g(X)
 0001   0001101   X^3 + X^4 + X^6 = X^3 · g(X)

(7,4) Code Generated by 1 + X + X^3 (continued)
 Info   Code      Code polynomial
 1001   1100101   1 + X + X^4 + X^6 = (1 + X^3) · g(X)
 0101   0111001   X + X^2 + X^3 + X^6 = (X + X^3) · g(X)
 1101   1010001   1 + X^2 + X^6 = (1 + X + X^3) · g(X)
 0011   0010111   X^2 + X^4 + X^5 + X^6 = (X^2 + X^3) · g(X)
 1011   1111111   1 + X + X^2 + X^3 + X^4 + X^5 + X^6 = (1 + X^2 + X^3) · g(X)
 0111   0100011   X + X^5 + X^6 = (X + X^2 + X^3) · g(X)
 1111   1001011   1 + X^3 + X^5 + X^6 = (1 + X + X^2 + X^3) · g(X)

Decoding Cyclic Codes
 Syndrome decoding is again used to find the error pattern and correct it. The difference from Hamming codes is that the cyclic property can be exploited to simplify the encoder/decoder circuits considerably (shift registers with feedback).
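For a cyclic code the syndrome is simply the remainder of the received polynomial divided by g(X); a valid code word leaves a zero remainder. A sketch (function name illustrative):

```python
def poly_mod_gf2(r, g):
    """Remainder of r(X) / g(X) over GF(2); coefficient lists, X^0 first."""
    r = list(r)
    dg = len(g) - 1                       # degree of g(X)
    for i in range(len(r) - 1, dg - 1, -1):
        if r[i]:                          # cancel the leading term
            for j in range(len(g)):
                r[i - dg + j] ^= g[j]
    return r[:dg]

g = [1, 1, 0, 1]                # g(X) = 1 + X + X^3
c = [1, 1, 0, 0, 1, 0, 1]       # valid code word -> zero syndrome
print(poly_mod_gf2(c, g))       # -> [0, 0, 0]

e = list(c); e[5] ^= 1          # single error in position 5
print(poly_mod_gf2(e, g))       # -> [1, 1, 1], i.e. X^5 mod g(X)
```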

BCH Codes
 Bose-Chaudhuri-Hocquenghem (BCH) codes are a class of cyclic codes constructed using finite fields. Their generator polynomials are factors of X^n + 1 with n = p^m - 1, where p is a prime (p = 2 for binary BCH codes). Tables of BCH generator polynomials are found in most digital communications textbooks.

Reed Solomon Codes
 These are a sub-class of BCH codes with non-binary alphabets. Non-binary codes consist of n symbols rather than n bits. This ‘grouping’ of bits enables the construction of long codes with very strong error correcting capabilities. The added complexity is almost negligible.

Convolutional Codes
 Described by (n, k) and the constraint length K.
 Main difference from block codes: the encoder operates on a continuous bit stream, not on fixed blocks.
 The minimum distance and performance properties are not as well established mathematically as for block codes.
 The main performance criterion is the minimum free distance dfree.

Convolutional Code Example
 Rate-1/2 (2, 1) code with g1(D) = 1 + D + D^2 and g2(D) = 1 + D^2
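This encoder can be sketched directly from the generator polynomials (a sketch; the input is flushed with two zeros to return the encoder to the all-zero state):

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder: g1 = 1 + D + D^2, g2 = 1 + D^2."""
    s1 = s2 = 0                        # shift-register contents (D, D^2)
    out = []
    for b in bits + [0, 0]:            # two flush zeros terminate the trellis
        out.append(b ^ s1 ^ s2)        # g1 output
        out.append(b ^ s2)             # g2 output
        s1, s2 = b, s1                 # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))       # -> 11 10 00 01 01 11
```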

State Diagram Representation

Trellis Diagram Representation

Decoding
 Decoding is done by maximum-likelihood sequence estimation. One common method is the Viterbi algorithm: it measures the path metric of every possible path through the trellis and eliminates those with higher metrics until only one survivor remains per state. This can be done with hard decisions or with soft decisions (better performance).
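A hard-decision Viterbi decoder for the rate-1/2 example code (g1 = 1 + D + D^2, g2 = 1 + D^2) can be sketched as follows; the encoder is assumed to be flushed with two zeros so the decoded path ends in state 00:

```python
def viterbi_decode(received):
    """Hard-decision Viterbi decoding for the rate-1/2 code with
    g1 = 1 + D + D^2 and g2 = 1 + D^2."""
    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]     # (s1, s2) register contents
    pm = {s: (0 if s == (0, 0) else INF) for s in states}   # path metrics
    paths = {s: [] for s in states}
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_pm = {s: INF for s in states}
        new_paths = {}
        for (s1, s2), m in pm.items():
            if m == INF:
                continue
            for b in (0, 1):                       # hypothesised input bit
                out = (b ^ s1 ^ s2, b ^ s2)        # branch output (g1, g2)
                nxt = (b, s1)
                dist = (out[0] != r[0]) + (out[1] != r[1])  # Hamming metric
                if m + dist < new_pm[nxt]:         # keep only the best survivor
                    new_pm[nxt] = m + dist
                    new_paths[nxt] = paths[(s1, s2)] + [b]
        pm, paths = new_pm, new_paths
    best = min(pm, key=pm.get)                     # lowest-metric survivor
    return paths[best]

# Encoding of 1011 (plus two flush zeros) with this code: 11 10 00 01 01 11.
rx = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1]
print(viterbi_decode(rx))      # -> [1, 0, 1, 1, 0, 0] (last two bits: flush)

rx[4] ^= 1                     # one channel error
print(viterbi_decode(rx))      # still decodes to [1, 0, 1, 1, 0, 0]
```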

Viterbi Algorithm

Performance
 The longer the constraint length, the better the performance.
 The longer the decoding depth, the better.
 Soft-decision decoding gains about 2.5 dB over hard-decision decoding.
 Convolutional codes reach near-asymptotic gains at relatively high bit error rates.
 Puncturing can be introduced to improve the bit rate, at some loss of performance.

Trellis Coded Modulation
 Combined coding and modulation: error-correcting capability without sacrificing the information bit rate. For example, if BPSK is used for transmission, one can add a rate-1/2 convolutional encoder and move to QPSK modulation. Set partitioning is used to ensure a performance gain. The relevant distance is no longer the Hamming distance but the Euclidean distance. Encoding and decoding are as for convolutional codes, except that one deals with symbols rather than bits.

General TCM Block Diagram

Space-Time Codes
 Combines TCM with multiple transmit antennas.
 Each transmit antenna sends a sequence of coded symbols different from every other antenna.
 All transmit antennas use the same frequency, time slot, power, etc. (i.e., the signals overlap).
 Very strong error-correcting capability in fading channels.

A Space-Time Encoder

Turbo Codes
 Relatively new: introduced in 1993.
 In addition to the encoder there is an interleaver that shuffles the bits; de-interleaving is done at the decoder.
 Promise: performance approaching the Shannon limit. For the AWGN channel, Eb/N0 = 0.7 dB suffices for BER = 10^-5.
 Problems: delay and complexity.

Turbo Encoder

Performance Simulations
 The following slides presented simulations (figures not reproduced) for various coding schemes:
 Block and BCH codes
 Rate-1/2 and rate-1/3 convolutional codes
 Concatenation of a rate-1/2 convolutional code with RS
 TCM and space-time codes

The Dilemma:
Block or Convolutional?
 There are many debates in the literature on which coding technique is better; it really depends on the application. For an AWGN channel, convolutional codes are preferable: they approach their asymptotic gains at relatively high BER (10^-5 to 10^-7), e.g. satellite TV. For low-BER requirements, RS codes are better because they achieve higher asymptotic gains at lower BER (around 10^-10). Concatenating RS and convolutional codes produces very strong error-correcting capability.

Continued…
 Near-Shannon-limit performance is possible with turbo codes, making them suitable for non-real-time applications. Space-time codes give strong error-correcting capability in fading channels. The NASA Voyager mission transmitted 2 Mbps at Eb/N0 = 2.53 dB with BER = 10^-6, using the concatenation of a rate-1/2, K = 7 convolutional inner code and a (255, 223) RS outer code. A 48 Mbps satellite TDMA system with BER around 10^-11 using a (204, 192) RS code was described by Berlekamp et al.