
Shannon’s theorem: on channel capacity

It is possible, in principle, to devise a means whereby a communication system will transmit information with an arbitrarily small probability of error, provided that the information rate R = r·I(X;Y), where r is the symbol rate and I(X;Y) is the mutual information between the channel input and output, is less than or equal to a rate C called the "channel capacity". The technique used to achieve this objective is called coding. To put the matter more formally, the theorem is split into two parts, with the following statements.
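As a minimal numerical sketch of the condition R ≤ C, the following Python snippet evaluates it for a binary symmetric channel, where with equally likely inputs the mutual information per symbol is I(X;Y) = 1 − H(p). The crossover probability p, symbol rate r, and source rate R used here are illustrative values, not figures from the text.

```python
from math import log2

def h2(p):
    """Binary entropy H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Binary symmetric channel with crossover probability p (assumed value):
# with equiprobable inputs, I(X;Y) = 1 - H(p) bits per symbol.
p = 0.1
I_xy = 1 - h2(p)          # mutual information per symbol, bits

r = 1000                  # assumed symbol rate, symbols per second
C = r * I_xy              # channel capacity, bits per second
R = 500                   # assumed source information rate, bits per second

# Shannon's theorem: reliable coding exists when R <= C.
print(f"C = {C:.1f} bit/s, R = {R} bit/s, reliable coding possible: {R <= C}")
```

For p = 0.1 this gives C ≈ 531 bit/s, so a source emitting 500 bit/s can, in principle, be coded for arbitrarily reliable transmission, whereas a 600 bit/s source cannot.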

Positive statement:

"Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, and a channel with capacity C. If R ≤ C, then there exists a coding technique such that the output of the source may be transmitted over the channel with a probability of error in the received message that can be made arbitrarily small."

This theorem indicates that for R < C, transmission may be accomplished without error even in the presence of noise. The situation is analogous to an electric circuit comprising only pure capacitors and pure inductors: in such a circuit there is no loss of energy at all, since these reactive elements store energy rather than dissipate it.

Negative statement:

"Given a source of M equally likely messages, with M >> 1, which is generating information at a rate R, and a channel with capacity C. If R > C, then the probability of error in the received message is close to unity for every possible set of M transmitted symbols."

This theorem shows that if the information rate R exceeds the specified value C, the error probability approaches unity as M increases. Moreover, in this case, increasing the complexity of the coding generally results in an increase in the probability of error.
