



Definition and importance of coding. 

Types of codes: block and convolutional. 

Linear block codes. 

Convolutional codes. 

Turbo Codes. 

Conclusion 




Channel or error-control coding is the addition of
redundant bits so that errors can be detected only,
or detected and corrected. 

Goal: Increase the operational range, reduce the
error probability, or reduce the transmit-power requirements. 

Typical gains are in the range of 4-5 dB in SNR
at a given BER, with much larger FER gains. Much higher gains are now possible at
relatively little cost compared with other methods. 

Sacrifice: Bit rate and/or complexity. 
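As a minimal illustration of the redundancy idea (a sketch, not from the slides): a rate-1/3 repetition code sends each bit three times, trading bit rate for guaranteed single-error correction.

```python
# Rate-1/3 repetition code: redundancy buys error correction at the
# cost of bit rate (the "sacrifice" mentioned above).

def encode(bits):
    # repeat every information bit three times
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # majority vote over each group of three corrects any single error
    out = []
    for i in range(0, len(coded), 3):
        out.append(1 if sum(coded[i:i + 3]) >= 2 else 0)
    return out

msg = [1, 0, 1]
tx = encode(msg)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
tx[1] ^= 1                # one bit error in the channel
assert decode(tx) == msg  # the single error is corrected
```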






In general there are two types of codes: 

Block codes: 

Hamming 

Cyclic 

BCH and Reed-Solomon codes 

Convolutional codes 

Trellis-coded modulation (TCM) 

Space-time 




Hamming distance is the number of positions in
which two code words differ. 

e.g. d (1101, 0101) = 1 

d (0000, 0111) = 3 

Minimum distance (d_{min}): the minimum number
of differences between any two code words. 

Adding redundant bits limits the number of code
words used, hence increasing the minimum distance between them. 
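The distance computations above can be sketched as follows (the 3-bit even-parity code is my own small example):

```python
from itertools import combinations

def hamming(a, b):
    # number of positions in which two equal-length words differ
    return sum(x != y for x, y in zip(a, b))

# the slide's examples
assert hamming("1101", "0101") == 1
assert hamming("0000", "0111") == 3

# minimum distance: smallest pairwise distance over the whole code
code = ["000", "011", "101", "110"]   # 3-bit even-parity code
d_min = min(hamming(a, b) for a, b in combinations(code, 2))
assert d_min == 2
```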







Check the previous code to see that it consists
of a block of code words. Your message will be encoded into this format. 

Add any two code words to get a third one. 

A code with minimum distance d_{min} is ‘guaranteed’ to
correct (d_{min} – 1) / 2 errors (rounded down) or detect d_{min} – 1
errors. 

d_{min} = w_{min} (the smallest
weight of any nonzero code word). 
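A quick check of these properties on a small linear code (the 3-bit even-parity code, chosen here for illustration):

```python
from itertools import combinations

# (3,2) even-parity code: linear, so d_min equals w_min, the smallest
# weight of any nonzero code word
code = ["000", "011", "101", "110"]

d_min = min(sum(x != y for x, y in zip(a, b))
            for a, b in combinations(code, 2))
w_min = min(w.count("1") for w in code if w != "000")
assert d_min == w_min == 2

# linearity: the bitwise mod-2 sum of any two code words is a code word
for a, b in combinations(code, 2):
    s = "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))
    assert s in code
```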

Any linear block code can be put in systematic form: 






Definition: 

(n,k) = (2^{m} – 1, 2^{m} – 1 –
m) 

For Hamming codes d_{min} = 3.
Therefore, they are capable of correcting any single error or detecting any
double error. 
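The parameter formula can be tabulated directly:

```python
# Hamming code parameters: (n, k) = (2**m - 1, 2**m - 1 - m)
def hamming_params(m):
    n = 2**m - 1          # code-word length
    return n, n - m       # (n, k); m parity bits

assert hamming_params(3) == (7, 4)     # the code used in these notes
assert hamming_params(4) == (15, 11)
assert hamming_params(5) == (31, 26)
```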




Encoding can be done using the generator matrix: 

c = u × G 

For the (7, 4) code mentioned earlier: 
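The slide's matrix itself is not reproduced in these notes, so the sketch below assumes a non-systematic G whose rows are g(X), X·g(X), X^2·g(X), X^3·g(X) for g(X) = 1 + X + X^3; this choice agrees with the code-word table for the (7, 4) code given later.

```python
# c = u x G over GF(2). G rows: g(X), X.g(X), X^2.g(X), X^3.g(X)
# for g(X) = 1 + X + X^3 (an assumed, non-systematic construction).
G = [[1, 1, 0, 1, 0, 0, 0],
     [0, 1, 1, 0, 1, 0, 0],
     [0, 0, 1, 1, 0, 1, 0],
     [0, 0, 0, 1, 1, 0, 1]]

u = [1, 0, 0, 1]                       # message 1001
# matrix-vector product with mod-2 arithmetic
c = [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
assert c == [1, 1, 0, 0, 1, 0, 1]      # code word 1100101
```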




The direct way is to measure the Hamming
distance between the received code word and all possible code words (16
in the previous example). The code word at minimum distance is taken as
the transmitted one. 

In practice, syndrome decoding is used. 

In syndrome decoding the received code word is
multiplied by the transpose of a matrix called the parity-check matrix or the H matrix (G × H^{T} = 0). 

The result indicates the error pattern (if any). 
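A sketch of syndrome decoding for the (7, 4) cyclic Hamming code with g(X) = 1 + X + X^3. The H used here is an assumed construction built from the parity-check polynomial h(X) = (X^7 + 1)/g(X) = 1 + X + X^2 + X^4; any H satisfying G × H^T = 0 works.

```python
# Parity-check matrix for the (7,4) code with g(X) = 1 + X + X^3
# (rows derived from h(X) = 1 + X + X^2 + X^4; an assumed construction).
H = [[1, 0, 1, 1, 1, 0, 0],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

def syndrome(r):
    # s = r . H^T over GF(2)
    return tuple(sum(row[j] * r[j] for j in range(7)) % 2 for row in H)

r = [1, 1, 0, 0, 1, 0, 1]          # valid code word 1100101
assert syndrome(r) == (0, 0, 0)    # zero syndrome: no error detected

r[4] ^= 1                          # flip bit 4 in the channel
s = syndrome(r)
# a single-bit error yields the corresponding column of H as syndrome
cols = [tuple(row[j] for row in H) for j in range(7)]
r[cols.index(s)] ^= 1              # locate and correct the error
assert r == [1, 1, 0, 0, 1, 0, 1]
```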




An (n, k) linear block code C is cyclic if every
shift of any code word in C is also a code word in C. 

The (7,4) Hamming code discussed before is a
cyclic code. 

For every cyclic code there is a unique minimum-degree
generator polynomial g(X) (of degree r = n – k). 

All code words are generated from g(X). 




Encoding can be done by multiplying the message
polynomial by the generator polynomial g(X). 

e.g. for the mentioned (7, 4) code g(X) = 1 + X + X^{3}.
The message 1001 in polynomial format is 1 + X^{3}. The resulting
code word is (1 + X + X^{3}) × (1 + X^{3}) = 1 + X + X^{4} + X^{6} (1100101) 
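The multiplication can be carried out programmatically, reproducing the slide's example:

```python
# GF(2) polynomial multiplication: coefficients are bits, addition is XOR.
def poly_mul_gf2(a, b):
    # a, b: coefficient lists, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = [1, 1, 0, 1]          # g(X) = 1 + X + X^3
u = [1, 0, 0, 1]          # message 1001, i.e. 1 + X^3
# (1 + X + X^3)(1 + X^3) = 1 + X + X^4 + X^6  ->  1100101
assert poly_mul_gf2(g, u) == [1, 1, 0, 0, 1, 0, 1]
```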




Info.   Code word   Code polynomial
0000    0000000     0 = 0 · g(X)
1000    1101000     1 + X + X^{3} = 1 · g(X)
0100    0110100     X + X^{2} + X^{4} = X · g(X)
1100    1011100     1 + X^{2} + X^{3} + X^{4} = (1 + X) · g(X)
0010    0011010     X^{2} + X^{3} + X^{5} = X^{2} · g(X)
1010    1110010     1 + X + X^{2} + X^{5} = (1 + X^{2}) · g(X)
0110    0101110     X + X^{3} + X^{4} + X^{5} = (X + X^{2}) · g(X)
1110    1000110     1 + X^{4} + X^{5} = (1 + X + X^{2}) · g(X)
0001    0001101     X^{3} + X^{4} + X^{6} = X^{3} · g(X)






Info.   Code word   Code polynomial
1001    1100101     1 + X + X^{4} + X^{6} = (1 + X^{3}) · g(X)
0101    0111001     X + X^{2} + X^{3} + X^{6} = (X + X^{3}) · g(X)
1101    1010001     1 + X^{2} + X^{6} = (1 + X + X^{3}) · g(X)
0011    0010111     X^{2} + X^{4} + X^{5} + X^{6} = (X^{2} + X^{3}) · g(X)
1011    1111111     1 + X + X^{2} + X^{3} + X^{4} + X^{5} + X^{6} = (1 + X^{2} + X^{3}) · g(X)
0111    0100011     X + X^{5} + X^{6} = (X + X^{2} + X^{3}) · g(X)
1111    1001011     1 + X^{3} + X^{5} + X^{6} = (1 + X + X^{2} + X^{3}) · g(X)
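A brute-force check (not on the slides) that this (7, 4) code is indeed cyclic: generate all 16 code words from g(X) and verify that every cyclic shift of every code word is again a code word.

```python
# GF(2) polynomial multiplication (coefficients lowest degree first).
def poly_mul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

g = [1, 1, 0, 1]                           # g(X) = 1 + X + X^3
codewords = set()
for m in range(16):
    u = [(m >> i) & 1 for i in range(4)]   # all 16 message words
    codewords.add(tuple(poly_mul_gf2(u, g)))

assert len(codewords) == 16
for c in codewords:
    shifted = c[-1:] + c[:-1]              # one cyclic shift
    assert shifted in codewords            # the cyclic property holds
```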






Syndrome decoding is again used to find the
error pattern and correct it. 

The difference here (from Hamming codes) is that
the cyclic property can be used to greatly simplify the
encoder/decoder circuits. 




Bose-Chaudhuri-Hocquenghem (BCH) codes are a
class of cyclic codes constructed using finite fields. 

Their generator polynomials are factors of X^{2^{m} – 1}
+ 1 (more generally, of X^{p^{m} – 1} – 1 over GF(p), where p is a prime number). 

Tables of BCH generator polynomials are found in
most digital communications text books. 




These are a subclass of BCH codes with
nonbinary alphabets. 

Nonbinary codes consist of n symbols rather
than n bits. 

This ‘grouping’ of bits enables the construction
of long codes with very strong error correcting capabilities. 

The added complexity is almost negligible. 




Described by (n,k) and K (constraint length). 

Main difference: no block, only a bit stream. 

Unlike block codes, the minimum-distance and
performance properties are not well established mathematically. 

Main performance criterion is the minimum free
distance d_{free}. 




Rate 1/2 (2,1) code with g_{1}(D) = 1 +
D + D^{2} , g_{2}(D) = 1 + D^{2} 
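A sketch of this encoder, assuming the shift registers start at zero:

```python
# Rate-1/2, K = 3 convolutional encoder with g1(D) = 1 + D + D^2 and
# g2(D) = 1 + D^2. Two output bits per input bit.
def conv_encode(bits):
    s1 = s2 = 0                    # register contents (D and D^2 taps)
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)    # g1: 1 + D + D^2
        out.append(b ^ s2)         # g2: 1 + D^2
        s1, s2 = b, s1             # shift
    return out

assert conv_encode([1, 0, 1]) == [1, 1, 1, 0, 0, 0]
```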






Decoding is done by maximum likelihood. 

One method of decoding is the Viterbi
algorithm. 

It measures the path metric of all possible
paths and eliminates those with higher path metrics until only one
survivor remains. 

This can be done in hard decision or soft
decision (the latter gives better performance). 
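A minimal hard-decision Viterbi decoder for the rate-1/2, K = 3 code above (a sketch; practical decoders use traceback with a finite decoding depth rather than storing full paths):

```python
# Hard-decision Viterbi for g1 = 1 + D + D^2, g2 = 1 + D^2. One survivor
# per state; the path metric is the Hamming distance to the received bits.
def conv_encode(bits):
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]
        s1, s2 = b, s1
    return out

def viterbi_decode(rx, n_bits):
    # state = (s1, s2) -> (path metric, decoded bits so far)
    states = {(0, 0): (0, [])}
    for t in range(n_bits):
        r = rx[2 * t: 2 * t + 2]
        nxt = {}
        for (s1, s2), (m, path) in states.items():
            for b in (0, 1):
                o = [b ^ s1 ^ s2, b ^ s2]              # branch output
                metric = m + (o[0] != r[0]) + (o[1] != r[1])
                ns = (b, s1)                           # next state
                if ns not in nxt or metric < nxt[ns][0]:
                    nxt[ns] = (metric, path + [b])     # keep survivor
        states = nxt
    return min(states.values())[1]    # survivor with the lowest metric

msg = [1, 0, 1, 1, 0]
rx = conv_encode(msg)
rx[3] ^= 1                            # one channel bit error
assert viterbi_decode(rx, len(msg)) == msg   # the error is corrected
```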





The longer the constraint length the better. 

The greater the decoding depth the better. 

About 2.5 dB improvement in performance between hard-decision
and soft-decision decoding. 

Reaches near asymptotic value at relatively high
bit error rates. 

Puncturing can be introduced to improve bit rate
with some loss of performance. 




Combined coding and modulation. 

Error-correcting capabilities without sacrificing the
information bit rate. 

e.g. if a BPSK modulation technique is used for
transmission, one can add a rate 1/2 convolutional encoder and use QPSK
modulation. 

Set partitioning is used to ensure performance
enhancement. 

The distance here is no longer Hamming but
rather Euclidean distance. 

Encoding and decoding are as for convolutional codes,
except that one deals with symbols rather than bits. 





Combines TCM with multiple transmit antennas. 

The output of each transmit antenna is a sequence of
coded symbols different from that of any other transmit antenna. 

All transmit antennas work on the same
frequency, time slot, power, etc. (i.e. overlapping signals). 

Very strong error-correcting capabilities in
fading channels. 





New: introduced in 1993 

In addition to the encoder there is an
interleaver to shuffle bits. 

Deinterleaving is done at the decoder. 

Promises a lot: near-Shannon-limit performance. 

For the AWGN channel, E_{b}/N_{0} = 0.7 dB at BER = 10^{-5}. 

Problems: delay, complexity. 






In the coming slides we show simulations of
various coding schemes: 

Block and BCH codes. 

1/2 and 1/3 convolutional codes. 

Concatenation of rate 1/2 convolutional and RS codes. 

TCM and space-time codes. 










Many debates are out there in the literature on
which coding technique is better. 

It really depends on the application. 

For an AWGN channel convolutional codes are
preferable. They approach their asymptotic gains at relatively high BER (10^{-5}
to 10^{-7}) (e.g. satellite TV). 

For low-BER requirements RS codes are better
because they achieve higher asymptotic gains at lower BER (10^{-10}). 

Concatenating RS and convolutional codes
produces very strong error-correcting capabilities. 




Near-Shannon-limit performance is possible with
turbo codes, making them suitable for non-real-time applications. 

Space-time codes give strong error-correcting
capabilities in fading channels. 

The NASA Voyager mission transmission of 2 Mbps data
rates at E_{b}/N_{0} = 2.5-3 dB uses a concatenation of a rate
1/2, K = 7 convolutional inner code and a (255, 223) RS outer code,
with BER = 10^{-6}. 

A 48 Mbps satellite TDMA system with BER around
10^{-11} using a (204, 192) RS code was described by Berlekamp et
al. 
