

Information, SMI, and Entropy


Mathematical Formulation of Information and SMI

To quantify the concept of information, consider a simple linear array, such as the deck of eight cards in Figure 07a. How much information is needed to locate a particular member of such an array when the set is perfectly shuffled? With the cards turned face down, the problem becomes a binary guessing game. Take the Ace of Hearts, for example. In such a game we are allowed to ask someone who knows where the Ace is a series of questions that can be answered only "yes" or "no". Guessing at random, repeating the question "Is it this one?" card by card, would eventually get us there, since there are only eight possibilities for the first draw. On average, however, we do better by asking the question for successive subdivisions of the deck: first for subsets of four cards, then of two, and finally of one. This way we hit upon the Ace in three steps, regardless of where it happens to be. The "yes" or "no" answers are the constraints (the rules or procedures); each additional question resolves more uncertainty and in turn yields more information. Thus 3 is the minimum number of correct binary choices or, by our definition above, the amount of information needed to locate a card in this particular arrangement. In effect, we have been taking the binary logarithm of the number of possibilities (N = 8), i.e., log₂(8) = 3. In other words, the information required to determine a location in a deck of 8 cards is 3 bits.

Figure 07a Information and Rules

A bit is the smallest unit of information. It stands for two distinct alternatives, which can be any of the hot/cold, black/white, in/out, up/down, ... pairs. The number of possible alternatives N for a series of K trials is 2^K. Using up/down with K = 3 trials, for example, there are N = 2^3 = 8 possible alternatives: (up, down, up), (up, down, down), (up, up, down), (up, up, up), (down, up, down), (down, up, up), (down, down, up), (down, down, down). In layman's language, this elaborate scheme amounts to picking out a particular card from the set (of 8) by careful examination. Information is the retrieval of that card, while entropy is generated in the process (in the brain) by the cognitive act.
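To make the counting concrete, here is a minimal Python sketch (the function name and the halving strategy below are illustrative, not from the text): it enumerates the 2^K alternatives for K = 3 up/down trials and counts the yes/no questions needed to locate a card in a deck of 8 by successive subdivision.

```python
# A minimal sketch: K binary trials give 2**K alternatives, and a
# halving ("yes/no") search locates one card in log2(N) questions.
from itertools import product
from math import log2

K = 3
alternatives = list(product(["up", "down"], repeat=K))
print(len(alternatives))            # 2**3 = 8 possible sequences

def questions_to_locate(position, n_cards=8):
    """Count yes/no questions needed to find a card by halving the deck."""
    low, high = 0, n_cards          # candidate range [low, high)
    questions = 0
    while high - low > 1:
        mid = (low + high) // 2
        questions += 1              # ask: "is it in the first half?"
        if position < mid:
            high = mid
        else:
            low = mid
    return questions

print(questions_to_locate(5))       # 3 questions, whatever the position
print(log2(8))                      # 3.0 bits of information
```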

For any number of possibilities (alternatives) N, the information I for specifying a member in a linear array (or a particular combination among the many alternatives) is given by

I = -log₂(N) = -[1/ln(2)] ln(N) = log₂(1/N) ,

or    I = -K      (for K binary choices, N = 2^K).

I here has a negative sign for any N larger than 1, denoting that the information has to be acquired in order to make a correct choice.
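As a quick check of the formula, here is a short sketch in the same sign convention as above (I is negative for N > 1); the function name is illustrative only:

```python
from math import log2

def info_to_specify(N):
    """Information (negative sign convention) to pick one member out of N."""
    return log2(1 / N)              # = -log2(N)

print(info_to_specify(8))           # -3.0: 3 bits must be acquired
K = 3
print(-K == info_to_specify(2**K))  # True: I = -K for K binary choices
```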

For the case where the N choices are subdivided into subsets/partitions (labeled by i) of uniform size nᵢ, like the four suits in a complete deck of cards, the information needed to specify membership in a partition is given by:

Iᵢ = log₂(nᵢ/N) = log₂(pᵢ) ,      where pᵢ = nᵢ/N is the probability of finding the member in the partition.

Using the example in Figure 07a with N = 8, the values of nᵢ/N and Iᵢ associated with different partition sizes are shown in the table below:

      i     |   1   |   2   |   3   |   4
      nᵢ    |   8   |   4   |   2   |   1
      nᵢ/N  |   1   |  1/2  |  1/4  |  1/8
      Iᵢ    |   0   |  -1   |  -2   |  -3

For nᵢ = 8, all the members are lumped randomly into a single partition; finding the member there is certain but, without any information, difficult. For nᵢ = 1, the members are distributed orderly over eight equal-size partitions; the chance of finding the member in any one of them is 1/8, and 3 bits of information are required to locate it. In other words, a greater amount of uncertainty is resolved (more information is gained) when the target is chosen from a larger number of possibilities, as shown in Figure 07b.

Figure 07b Information and Probability
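The table values can be reproduced with a few lines of Python; the loop simply evaluates Iᵢ = log₂(pᵢ) for the four partition sizes:

```python
from math import log2

N = 8
for n_i in (8, 4, 2, 1):
    p_i = n_i / N
    print(f"n_i = {n_i}:  p_i = {p_i},  I_i = {log2(p_i)}")
# prints p_i = 1.0, 0.5, 0.25, 0.125 and I_i = 0.0, -1.0, -2.0, -3.0
```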

If the partitions are non-uniform in size, then the mean information is given by summing over all the partitions:

I = Σᵢ pᵢ log₂(pᵢ)      ---------- (1).

Thus, for N = 8, n₁ = 2, n₂ = 2, n₃ = 4:  I = (1/4)×(-2) + (1/4)×(-2) + (1/2)×(-1) = -3/2.
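A small sketch of equation (1), applied to the worked example above (the function name mean_information is illustrative):

```python
from math import log2

def mean_information(partition_sizes):
    """Mean information of equation (1): I = sum_i p_i * log2(p_i)."""
    N = sum(partition_sizes)
    return sum((n / N) * log2(n / N) for n in partition_sizes)

print(mean_information([2, 2, 4]))  # -1.5, matching the worked example
```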

Alternatively, nᵢ does not have to be the number of choices in a partition; labeled simply as n, it can be interpreted as the number of acceptable choices from the pool of N elements. Thus n = 1 signifies a unique pick by careful examination, creating information I = log₂(1/N), while n = N corresponds to picking any of the elements indiscriminately, resulting in I = log₂(N/N) = log₂(1) = 0. The idea can be illustrated further by an example of dating via an agent (Figure 08). If there are many requirements, such as height, weight, age, ..., then only one choice such as "C" will fit the bill, so a lot is known about the date before the rendezvous. If the requirements are relaxed to only a few attributes, then "B" or "C" or "E" may be suitable, and the knowledge about the dates is

Figure 08 Dating Information

reduced accordingly. Finally, in desperation, a random selection will pick anyone on the list, with absolutely no information about them.
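A hedged sketch of this interpretation, I = log₂(n/N); the pool of five candidates is assumed for illustration (the actual list size in Figure 08 is not given in the text):

```python
from math import log2

def dating_information(acceptable, total):
    """I = log2(n/N): information when n of the N candidates are acceptable."""
    return log2(acceptable / total)

N = 5                               # assumed pool size for the Figure 08 example
print(dating_information(1, N))     # strict requirements: only "C" fits, I = log2(1/5)
print(dating_information(3, N))     # relaxed: "B", "C", or "E" fit, less is known
print(dating_information(N, N))     # anyone will do: I = 0, no information
```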


The negative of I is referred to as SMI (Shannon's Measure of Information), i.e.,

SMI = -Σᵢ pᵢ log₂(pᵢ) = -[1/ln(2)] Σᵢ pᵢ ln(pᵢ)      ---------- (2).

This is a positive-definite mathematical quantity sitting between the loosely defined "information" and the strictly defined "entropy". It can be applied to systems in equilibrium or out of equilibrium. Unfortunately, it is often misinterpreted as "entropy", as in the investigation of "Black Hole Entropy". Many usages of "information" or "entropy" in the literature should be replaced by SMI.
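Equation (2) in a few lines of Python (the function name smi is illustrative), evaluated for the same partition probabilities (1/4, 1/4, 1/2) used above, which confirms SMI = -I = 3/2; the uniform case recovers the 3 bits of the card example:

```python
from math import log2

def smi(probabilities):
    """Shannon's Measure of Information, eq. (2): SMI = -sum_i p_i * log2(p_i)."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(smi([0.25, 0.25, 0.5]))       # 1.5, the negative of the earlier I = -3/2
print(smi([1/8] * 8))               # 3.0, uniform distribution over 8 outcomes
```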
