
Statistical independence

In probability theory, to say that two events are independent intuitively means that knowing whether or not one of them occurs makes it neither more probable nor less probable that the other occurs. For example, the event of getting a "1" when a die is thrown and the event of getting a "1" the second time it is thrown are independent.

Similarly, when we assert that two random variables are independent, we intuitively mean that knowing something about the value of one of them does not yield any information about the value of the other. For example, the number appearing on the upward face of a die the first time it is thrown and that appearing the second time are independent.
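
This intuition can be checked empirically. The following Python sketch (an illustration of the idea above, not part of the article; the seed and trial count are arbitrary) simulates two throws of a fair die and compares the conditional frequency of a "1" on the second throw, given a "1" on the first, with its unconditional frequency:

    import random

    random.seed(0)
    trials = 100_000

    first_is_one = 0     # occurrences of event A: first throw shows 1
    second_is_one = 0    # occurrences of event B: second throw shows 1
    both_are_one = 0     # occurrences of the joint event AB
    for _ in range(trials):
        first = random.randint(1, 6)
        second = random.randint(1, 6)
        first_is_one += first == 1
        second_is_one += second == 1
        both_are_one += first == 1 and second == 1

    p_b = second_is_one / trials
    p_b_given_a = both_are_one / first_is_one
    print(p_b, p_b_given_a)   # both close to 1/6: knowing A tells us nothing about B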

Table of contents
1 Independent events
2 Independent random variables
3 Conditionally independent random variables

Independent events

If two events A and B are independent, then the conditional probability of A given B is the same as the "unconditional" (or "marginal") probability of A, i.e.,

P(A | B) = P(A).

There are at least two reasons why this statement is not taken to be the definition of independence: (1) the two events A and B do not play symmetrical roles in this statement, and (2) problems arise with this statement when events of probability 0 are involved.

When one recalls that the conditional probability P(A | B) is given by

P(A | B) = P(AB) / P(B),

one sees that the statement above is equivalent to

P(AB) = P(A) P(B).

Here AB is the intersection of A and B, i.e., it is the event that both events A and B occur.

Thus the standard definition says:

Two events A and B are independent iff P(AB)=P(A)P(B).
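
For a concrete instance of this definition (an illustrative example, not from the article), draw one card from a standard 52-card deck and let A be the event that it is an ace and B the event that it is a spade. The exact computation below confirms P(AB) = P(A)P(B):

    from fractions import Fraction
    from itertools import product

    ranks = list(range(1, 14))                      # 1 = ace, ..., 13 = king
    suits = ["spades", "hearts", "diamonds", "clubs"]
    deck = list(product(ranks, suits))              # all 52 equally likely cards

    def prob(event):
        # Exact probability of an event under a uniform draw from the deck.
        return Fraction(sum(1 for card in deck if event(card)), len(deck))

    p_a = prob(lambda card: card[0] == 1)                           # P(ace) = 1/13
    p_b = prob(lambda card: card[1] == "spades")                    # P(spade) = 1/4
    p_ab = prob(lambda card: card[0] == 1 and card[1] == "spades")  # P(ace of spades) = 1/52

    assert p_ab == p_a * p_b    # A and B are independent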

More generally, the events in any collection -- possibly more than just two of them -- are mutually independent precisely if for any finite subset A1, ..., An of the collection we have

P(A1 ∩ ... ∩ An) = P(A1) P(A2) ... P(An).
This is called the multiplication rule for independent events.

A collection of events in which every two events are independent is called pairwise independent; pairwise independence does not imply mutual independence, as the sketch below illustrates.
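
The standard counterexample (an illustration, not from the article) uses two fair coin flips: the events "the first flip is heads", "the second flip is heads", and "the two flips agree" are independent in every pair, yet the multiplication rule fails for all three together:

    from fractions import Fraction
    from itertools import product

    omega = list(product("HT", repeat=2))   # four equally likely outcomes

    def prob(event):
        return Fraction(sum(1 for w in omega if event(w)), len(omega))

    def A(w): return w[0] == "H"            # first flip heads
    def B(w): return w[1] == "H"            # second flip heads
    def C(w): return w[0] == w[1]           # the two flips agree

    for X, Y in [(A, B), (A, C), (B, C)]:
        assert prob(lambda w: X(w) and Y(w)) == prob(X) * prob(Y)   # each pair independent

    p_abc = prob(lambda w: A(w) and B(w) and C(w))
    print(p_abc, prob(A) * prob(B) * prob(C))   # 1/4 versus 1/8: not mutually independent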

Independent random variables

Two random variables X and Y are independent iff for any numbers a and b the events [X ≤ a] and [Y ≤ b] are independent events as defined above. Similarly an arbitrary collection of random variables -- possibly more than just two of them -- is independent precisely if for any finite collection X1, ..., Xn and any finite set of numbers a1, ..., an, the events [X1 ≤ a1], ..., [Xn ≤ an] are independent events as defined above.

The measure-theoretically inclined may prefer to substitute events [X ∈ A] for events [X ≤ a] in the above definition, where A is any Borel set. That definition is exactly equivalent to the one above when the values of the random variables are real numbers. It has the advantage of working also for complex-valued random variables or for random variables taking values in any topological space.
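
To see the first definition in action (an illustrative simulation, not part of the article; the thresholds a and b are arbitrary), one can estimate the probabilities of the events [X ≤ a] and [Y ≤ b] for two independent die throws:

    import random

    random.seed(1)
    n = 100_000
    a, b = 3, 5                 # arbitrary thresholds

    xs = [random.randint(1, 6) for _ in range(n)]   # X: first throw
    ys = [random.randint(1, 6) for _ in range(n)]   # Y: second throw

    p_x = sum(x <= a for x in xs) / n                           # ~ 1/2
    p_y = sum(y <= b for y in ys) / n                           # ~ 5/6
    p_xy = sum(x <= a and y <= b for x, y in zip(xs, ys)) / n
    print(p_xy, p_x * p_y)      # nearly equal, as independence requires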

If X and Y are independent, then the expectation operator E has the nice property

E[X · Y] = E[X] · E[Y]

and for the variance we have

var(X + Y) = var(X) + var(Y).

If X and Y are independent, the covariance cov(X, Y) is zero, which is why no cross term appears above; for general (possibly dependent) X and Y we have

var(X + Y) = var(X) + var(Y) + 2 cov(X, Y).

(The converse is not true: two random variables whose covariance is 0 need not be independent. See uncorrelated.)
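
Both identities are easy to check numerically. The sketch below (illustrative, not from the article) uses two independent die throws, for which E[X] = E[Y] = 3.5 and var(X) = var(Y) = 35/12:

    import random
    import statistics

    random.seed(2)
    n = 200_000

    xs = [random.randint(1, 6) for _ in range(n)]
    ys = [random.randint(1, 6) for _ in range(n)]

    e_xy = statistics.fmean(x * y for x, y in zip(xs, ys))
    print(e_xy, statistics.fmean(xs) * statistics.fmean(ys))    # both close to 12.25

    sums = [x + y for x, y in zip(xs, ys)]
    print(statistics.pvariance(sums),
          statistics.pvariance(xs) + statistics.pvariance(ys))  # both close to 35/6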

Furthermore, if X and Y are independent and have probability densities fX(x) and fY(y), then the combined random variable (X,Y) has a joint density

fXY(x,y) = fX(x) fY(y).
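
For example, if X and Y are independent standard normal variables, the joint density of (X, Y) is the standard bivariate normal density, and it factors as claimed. A short numerical check (illustrative, assuming NumPy and SciPy are available):

    import numpy as np
    from scipy.stats import multivariate_normal, norm

    joint = multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))   # independent components

    rng = np.random.default_rng(0)
    for x, y in rng.uniform(-3, 3, size=(5, 2)):
        # Joint density equals the product of the marginal densities.
        assert np.isclose(joint.pdf([x, y]), norm.pdf(x) * norm.pdf(y))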

Conditionally independent random variables

We define random variables X and Y to be conditionally independent given a third random variable Z if

P[(X ∈ A) & (Y ∈ B) | Z ∈ C] = P[X ∈ A | Z ∈ C] · P[Y ∈ B | Z ∈ C]
for any Borel subsets A, B and C of the real numbers.

If X and Y are conditionally independent given Z, then

P[(X ∈ A) | (Y ∈ B) & (Z ∈ C)] = P[(X ∈ A) | (Z ∈ C)]
for any Borel subsets A, B and C of the real numbers. That is, given Z, the value of Y does not add any additional information about the value of X.
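
Conditional independence does not imply ordinary independence. In the exact computation below (an illustrative construction, not from the article; the biases 1/10 and 9/10 are arbitrary), Z is a fair coin and, given Z, two coins X and Y are flipped independently with a common bias determined by Z; given Z they are independent, but marginally they are not:

    from fractions import Fraction

    half = Fraction(1, 2)
    bias = {0: Fraction(1, 10), 1: Fraction(9, 10)}   # P(coin = 1 | Z = z)

    def prob(event):
        # Sum P(Z = z) * P(X = x | Z) * P(Y = y | Z) over the eight outcomes.
        total = Fraction(0)
        for z in (0, 1):
            for x in (0, 1):
                for y in (0, 1):
                    px = bias[z] if x == 1 else 1 - bias[z]
                    py = bias[z] if y == 1 else 1 - bias[z]
                    if event(z, x, y):
                        total += half * px * py
        return total

    # Given Z = 1, X and Y are independent (by construction):
    pz = prob(lambda z, x, y: z == 1)
    assert (prob(lambda z, x, y: z == 1 and x == 1 and y == 1) / pz
            == (prob(lambda z, x, y: z == 1 and x == 1) / pz)
            * (prob(lambda z, x, y: z == 1 and y == 1) / pz))

    # Marginally, they are not: 41/100 versus 1/4.
    print(prob(lambda z, x, y: x == 1 and y == 1),
          prob(lambda z, x, y: x == 1) * prob(lambda z, x, y: y == 1))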

Independence can be seen as a special kind of conditional independence, since unconditional probability can be regarded as conditional probability given the sure event (i.e., given no information).