Introduction to Information Theory

In the early 1940s, C. E. Shannon developed a mathematical
theory, called information theory, for dealing with the most
fundamental aspects of communication systems.

Characteristics of information theory:

 1. Probability theory is heavily involved.
 2. A major concern is the design of the encoder and decoder.

In the past 40 years, information theory has been made more
precise, extended, and made practical in real communication
systems.

The Mathematical Model of Communication Systems

In information theory, we use a mathematical model to stand
for real communication systems.

The system is modeled by:

 1. An information source
 2. An encoding of the source
 3. A channel through which the information is sent
 4. Noise that is added to the signal in the channel
 5. A decoding to recover the original information
 6. A sink for the information

The encoding block is often divided into two stages: one encodes
the source, and the other further encodes the result to fit the
channel. The decoding block is divided in the same way.
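
As a rough sketch of how these blocks compose (my own illustration; the function names and the trivial noise model below are assumptions, not from the text):

```python
# Skeletal pipeline of the communication-system model (illustrative stubs).

def source():
    return "hello"                  # information source emits a message

def encode(message):
    return message.encode()         # encoding of the source into a signal

def channel(signal, noise):
    return noise(signal)            # channel applies noise to the signal

def decode(signal):
    return signal.decode()          # decoding recovers the original message

def sink(message):
    print("received:", message)     # sink consumes the information

noiseless = lambda s: s             # assumed noise model: nothing is added
sink(decode(channel(encode(source()), noiseless)))
```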

The purpose of the source encoder is to represent the source
output by a sequence of binary digits. The most important
question of concern is how many binary digits per unit time
are required to represent the output.
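
For instance (a minimal sketch with an assumed four-symbol alphabet, not from the text), a fixed-length source encoder needs ceil(log2 K) binary digits per symbol for a K-symbol source:

```python
# Minimal sketch (assumed example): a fixed-length source encoder
# for a four-symbol alphabet, so ceil(log2 4) = 2 binary digits per symbol.

import math

alphabet = ["a1", "a2", "a3", "a4"]
bits_per_symbol = math.ceil(math.log2(len(alphabet)))

# Assign each symbol a fixed-length binary codeword.
codebook = {sym: format(i, f"0{bits_per_symbol}b")
            for i, sym in enumerate(alphabet)}

def source_encode(symbols):
    """Map a symbol sequence to a string of binary digits."""
    return "".join(codebook[s] for s in symbols)

print(codebook)                     # {'a1': '00', 'a2': '01', ...}
print(source_encode(["a1", "a4"]))  # '0011'
```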

The purpose of the channel encoder and decoder is to guarantee
that the binary sequence will be reproduced correctly at the
output of the channel decoder.
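
As an illustration (a sketch of my own, with an assumed crossover probability of 0.1; a simple repetition code stands in for whatever code a real system would use), here is a 3-repetition channel code protecting bits sent through a binary symmetric channel:

```python
import random

def channel_encode(bits, n=3):
    """Repetition code: send each bit n times."""
    return [b for bit in bits for b in [bit] * n]

def bsc(bits, p=0.1):
    """Binary symmetric channel: flip each bit with probability p (assumed value)."""
    return [bit ^ (random.random() < p) for bit in bits]

def channel_decode(bits, n=3):
    """Majority vote over each group of n received bits."""
    return [int(sum(bits[i:i + n]) > n // 2) for i in range(0, len(bits), n)]

random.seed(0)
msg = [1, 0, 1, 1, 0]
received = bsc(channel_encode(msg))
print("sent:   ", msg)
print("decoded:", channel_decode(received))  # matches msg unless some triple saw 2+ flips
```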

The Entropy Function

Assume there is an information source that emits symbols from a finite alphabet {a1, a2, ..., aK}.

We introduce the probability P(ak) of each of these symbols occurring, with P(a1) + P(a2) + ... + P(aK) = 1.

The self-information of each symbol is defined as:

I(ak) = -log P(ak) = log( 1 / P(ak) )

The base of the logarithm determines the units used to express
the amount of information: base 2 gives bits and base e gives nats.
For example, a symbol with probability 1/8 carries log2 8 = 3 bits
of self-information.

The average value of the self-information over the source alphabet is known
as the entropy:

H(X) = -Σk P(ak) log P(ak)
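
As a concrete illustration (the distribution below is an assumed example, not from the text), a short Python sketch computing the entropy in bits:

```python
import math

def entropy(probs, base=2):
    """Entropy H = -sum p * log(p); base 2 gives bits."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Assumed example distribution over four symbols.
probs = [0.5, 0.25, 0.125, 0.125]
print(entropy(probs))  # 1.75 bits per symbol
```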

Mutual Information

Let {a1, ..., aK} be the X sample space and {b1, ..., bJ} be the Y sample space
in a joint XY ensemble with probability assignment Pxy(ak, bj).

The occurrence of y = bj changes the probability of x = ak from the a priori
probability Px(ak) to the a posteriori probability Px|y(ak|bj). The quantitative
measure of this change that turns out to be useful is the logarithm of the ratio
of the a posteriori probability to the a priori probability.

The information provided about the event x = ak by the occurrence of the event y = bj is:

I(ak; bj) = log [ Px|y(ak|bj) / Px(ak) ]

This quantity is called the mutual information between the events x = ak and y = bj.
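
A brief sketch of computing this event-level mutual information (the 2x2 joint distribution below is an assumed example of my own):

```python
import math

# Assumed joint distribution Pxy(ak, bj) for a 2x2 ensemble.
Pxy = {("a1", "b1"): 0.4, ("a1", "b2"): 0.1,
       ("a2", "b1"): 0.1, ("a2", "b2"): 0.4}

def Px(a):
    """Marginal a priori probability of x = a."""
    return sum(p for (x, y), p in Pxy.items() if x == a)

def Py(b):
    """Marginal probability of y = b."""
    return sum(p for (x, y), p in Pxy.items() if y == b)

def mutual_information(a, b, base=2):
    """I(a; b) = log( Px|y(a|b) / Px(a) ), in bits for base 2."""
    p_a_given_b = Pxy[(a, b)] / Py(b)
    return math.log(p_a_given_b / Px(a), base)

print(mutual_information("a1", "b1"))  # ~0.678 bits: y = b1 makes x = a1 more likely
```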
