
independence

0.1 definition

Let \((\Omega, \mathcal{F}, \mathbb{P})\) be a probability space:

0.1.1 1

Two events \(A\) and \(B\) are independent (notation: \(A \perp B\)) if \(\mathbb{P}(A \cap B) = \mathbb{P}(A)\mathbb{P}(B)\). If \(\mathbb{P}(B) > 0\), this is equivalent to \(\mathbb{P}(A \mid B) = \mathbb{P}(A)\) (see conditional probability).
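As a quick sanity check (my own sketch, not from the notes), take a fair six-sided die with \(A = \{\text{roll is even}\}\) and \(B = \{\text{roll} \le 2\}\); the product formula holds exactly:

  from fractions import Fraction

  # Uniform probability on a fair six-sided die.
  omega = {1, 2, 3, 4, 5, 6}

  def prob(event):
      return Fraction(len(event & omega), len(omega))

  A = {2, 4, 6}   # the roll is even
  B = {1, 2}      # the roll is at most 2

  # Independence: P(A ∩ B) == P(A) * P(B)
  print(prob(A & B), prob(A) * prob(B))    # 1/6 1/6
  print(prob(A & B) == prob(A) * prob(B))  # True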

0.1.2 2

Let's generalize (1) to more than two events. Let \(S\) be an index set (possibly countably or even uncountably infinite). Let \(\{A_s \mid s \in S\}\) be a family of events indexed by \(S\). Then the events of this family are said to be independent if for every finite subset \(S_0 \subset S\) we have \[ \mathbb{P}\left(\bigcap_{s \in S_0} A_s\right) = \prod_{s\in S_0} \mathbb{P}(A_s) \]
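For a finite index set, this definition can be checked by brute force over all nonempty subsets. A minimal sketch (my illustration, with three independent fair coin flips and \(A_i = \{\text{flip } i \text{ is heads}\}\)):

  from fractions import Fraction
  from itertools import product, combinations

  # Three fair coin flips: 8 equally likely outcomes in {0, 1}^3.
  omega = set(product((0, 1), repeat=3))

  def prob(event):
      return Fraction(len(event), len(omega))

  A = {i: {w for w in omega if w[i] == 1} for i in range(3)}  # A_i = "flip i is heads"

  # The definition requires the product formula for every finite subset of indices.
  for r in range(1, 4):
      for S0 in combinations(range(3), r):
          inter = set(omega)
          rhs = Fraction(1)
          for s in S0:
              inter &= A[s]
              rhs *= prob(A[s])
          assert prob(inter) == rhs, S0
  print("product formula holds for every finite subset of indices")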

0.1.3 3

Let's generalize (1) to \(\sigma\) -fields (see Sigma Field). Let \(\mathcal{F}_1 \subset \mathcal{F}\) and \(\mathcal{F}_2 \subset \mathcal{F}\) be two \(\sigma\) -fields. We say that \(\mathcal{F}_1\) and \(\mathcal{F}_2\) are independent if any two events \(A\in \mathcal{F}_1\) and \(B \in \mathcal{F}_2\) are independent.

0.1.4 4

Let's generalize (3) to more than two \(\sigma\) -fields. Let \(S\) be an index set, and for every \(s\in S\), let \(\mathcal{F}_s \subset \mathcal{F}\) be a \(\sigma\) -field. We say that this family of \(\sigma\) -fields is independent if the following holds: whenever we pick one event \(A_s \in \mathcal{F}_s\) for every \(s \in S\), the resulting family \(\{A_s \mid s \in S\}\) is an independent family of events.

0.2 proving independence of \(\sigma\) -algebras

How can we prove that two \(\sigma\) -algebras are independent? It turns out that we can do something similar to Caratheodory's Extension Theorem, in which we defined a measure on an algebra and then extended it to a \(\sigma\) -algebra. Here, we will show independence for two p-systems (see below) and then extend it to the \(\sigma\) -algebras that they generate.

0.2.1 definition p-system

A p-system \(\Pi\) is a collection of sets closed under finite intersection (compare with an algebra). So \(A, B \in \Pi \Rightarrow A \cap B \in \Pi\).

0.3 theorem

Let \(\Pi_1\) and \(\Pi_2\) be two p-systems, and let \(\mathcal{F}_1 = \sigma(\Pi_1)\) and \(\mathcal{F}_2 = \sigma(\Pi_2)\) be the two \(\sigma\) -algebras that they generate. IF \[ \mathbb{P}(A\cap B) = \mathbb{P}(A)\mathbb{P}(B) \] for every \(A\in \Pi_1\) and \(B\in \Pi_2\), THEN \(\mathcal{F}_1\) and \(\mathcal{F}_2\) are independent.
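A standard application of this theorem (my addition, not from the original notes): the events \(\{X \le x\}\), \(x \in \mathbb{R}\), form a p-system that generates \(\sigma(X)\), and likewise for \(Y\). So to prove that two random variables \(X\) and \(Y\) are independent, it is enough to check that the joint CDF factors: \[ \mathbb{P}(X \le x, Y \le y) = \mathbb{P}(X \le x)\mathbb{P}(Y \le y) \quad \text{for all } x, y \in \mathbb{R} \]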

0.4 definition infinitely often

Given a sequence of events \(A_n\), \(n \in \mathbb{N}\), the event \(\{A_n \text{ i.o.}\}\) (read "\(A_n\) infinitely often") is defined as the set of outcomes \(\omega \in \Omega\) that belong to infinitely many of the \(A_n\). Equivalently, \[ \{A_n \text{ i.o.}\} = \bigcap_{n=1}^{\infty}\bigcup_{i=n}^{\infty} A_i \]
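For example (my addition), if \(A_n\) is the event that the \(n\)-th toss of a coin comes up heads, then \(\{A_n \text{ i.o.}\}\) is the event that infinitely many tosses come up heads. Taking complements in the formula above gives the dual "eventually" event: \[ \{A_n \text{ i.o.}\}^c = \bigcup_{n=1}^{\infty}\bigcap_{i=n}^{\infty} A_i^c, \] the set of outcomes that lie in all but finitely many of the \(A_i^c\).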

0.4.1 borel-cantelli lemma

Let \(A_n\), \(n \in \mathbb{N}\), be a sequence of events and let \(A = \{A_n \text{ i.o.}\}\). Then

  1. IF \(\sum_{i=1}^{\infty} \mathbb{P}(A_i) < \infty\), THEN \(\mathbb{P}(A) = 0\)
  2. IF \(\sum_{i=1}^{\infty} \mathbb{P}(A_i) = \infty\) and the events \(A_i\) are independent, THEN \(\mathbb{P}(A) = 1\)
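An illustrative simulation (my own sketch; a finite run can only suggest the asymptotic statement): with independent uniforms \(U_n\), compare \(A_n = \{U_n \le 1/n^2\}\), whose probabilities are summable, against \(A_n = \{U_n \le 1/n\}\), which are independent with a divergent sum:

  import random

  random.seed(0)
  N = 100_000

  # Case 1: P(A_n) = 1/n^2, summable   -> a.s. only finitely many A_n occur.
  # Case 2: P(A_n) = 1/n,   divergent  -> independent, so a.s. infinitely many occur.
  hits_sq = [n for n in range(1, N + 1) if random.random() <= 1 / n**2]
  hits_ln = [n for n in range(1, N + 1) if random.random() <= 1 / n]

  print("1/n^2 case:", len(hits_sq), "occurrences, last at n =", max(hits_sq))
  print("1/n   case:", len(hits_ln), "occurrences, last at n =", max(hits_ln))
  # Typically: a couple of early hits in the first case, but hits that keep
  # showing up (roughly log N of them) in the second case.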

1 independence of random variables

From the 6.436 lecture notes.

What does it mean for two random variables to be independent? Intuitively, it should mean that learning anything about one random variable should not affect the distribution of the other. Put another way, restricting attention to (conditioning on) the outcomes that give \(X\) a certain value should not change the distribution of \(Y\). The formal definition, in its most general form, is given below:

1.1 definition: independence of random variables

1.1.1 1

Let \(X_1,...,X_n\) be random variables on a probability space. These r.v.'s are independent if the following holds for all Borel subsets \(B_1,...,B_n\subset \mathbb{R}\): the events \(\{X_1\in B_1\}, \{X_2\in B_2\},...,\{X_n\in B_n\}\) are independent. That is, \[ \mathbb{P}(X_1\in B_1,...,X_n\in B_n) = \mathbb{P}(X_1 \in B_1)\cdots\mathbb{P}(X_n \in B_n) \]
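A Monte Carlo sanity check of the product formula (my own sketch), using two independent Uniform(0,1) draws and the Borel sets \(B_1 = (0, 0.3]\) and \(B_2 = (0.5, 1]\):

  import random

  random.seed(1)
  N = 200_000

  hits_1 = hits_2 = hits_both = 0
  for _ in range(N):
      x, y = random.random(), random.random()   # independent Uniform(0, 1)
      in1 = 0 < x <= 0.3                        # X in B1
      in2 = 0.5 < y <= 1                        # Y in B2
      hits_1 += in1
      hits_2 += in2
      hits_both += in1 and in2

  print(hits_both / N)                 # ≈ 0.15 = P(X in B1, Y in B2)
  print((hits_1 / N) * (hits_2 / N))   # ≈ 0.15 = P(X in B1) * P(Y in B2)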

1.1.2 2

More generally, let \(S\) be a (possibly infinite) index set. Then the collection \(\{X_s \mid s \in S\}\) of random variables is independent if every finite subcollection \(X_{s_1},...,X_{s_n}\) is independent.

1.2 comment: pairwise independence

A collection of r.v.'s is pairwise independent if any two of them are independent. Note that pairwise independence does not imply mutual independence (as defined above in (2)).

1.2.1 example

Consider indicator random variables \(X\), \(Y\), and \(Z\), with the outcome space divided into four equally likely regions: one on which \(X = Y = Z = 1\), one on which only \(X = 1\), one on which only \(Y = 1\), and one on which only \(Z = 1\).

Then \(p_X(1) = p_Y(1) = p_Z(1) = \frac{1}{2}\), and \(p_{X\mid Y}(1 \mid 1) = \frac{1}{2}\); the rest of the pairwise conditional distributions are the same. But \(p_{X\mid Y,Z}(1 \mid 1, 1) = 1\), so the three r.v.'s are pairwise independent but not mutually independent.
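A brute-force check of this example (my addition), encoding the four equally likely regions directly:

  from fractions import Fraction
  from itertools import combinations

  # The four equally likely regions, written as (X, Y, Z) indicator values.
  outcomes = [(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

  def prob(pred):
      return Fraction(sum(1 for w in outcomes if pred(w)), len(outcomes))

  # Pairwise independence holds for every pair...
  for i, j in combinations(range(3), 2):
      lhs = prob(lambda w: w[i] == 1 and w[j] == 1)
      rhs = prob(lambda w: w[i] == 1) * prob(lambda w: w[j] == 1)
      print(i, j, lhs == rhs)                      # True for all three pairs

  # ...but mutual independence fails:
  print(prob(lambda w: w == (1, 1, 1)))            # 1/4
  print(prob(lambda w: w[0] == 1) ** 3)            # 1/8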

1.3 theorem: independence of discrete random variables

Let \(X\) and \(Y\) be discrete random variables on a probability space. The following are equivalent:

  1. The random variables \(X\) and \(Y\) are independent
  2. For any \(x,y \in \mathbb{R}\), the events \(\{X = x\}\) and \(\{Y = y\}\) are independent
  3. For any \(x,y \in \mathbb{R}\), we have \(p_{X,Y}(x,y) = p_X(x)p_Y(y)\)
  4. For any \(x,y \in \mathbb{R}\) such that \(p_{Y}(y) > 0\), we have \(p_{X\mid Y}(x\mid y) = p_X(x)\)
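A small numerical illustration of conditions (3) and (4) (my addition; the joint PMF here is built from two independent Bernoulli variables, so the checks pass by construction):

  from fractions import Fraction

  # Joint PMF of (X, Y) with X ~ Bernoulli(1/3), Y ~ Bernoulli(1/4), built as a product.
  pX = {0: Fraction(2, 3), 1: Fraction(1, 3)}
  pY = {0: Fraction(3, 4), 1: Fraction(1, 4)}
  pXY = {(x, y): pX[x] * pY[y] for x in pX for y in pY}

  # Marginals recovered from the joint PMF.
  mX = {x: sum(pXY[(x, y)] for y in pY) for x in pX}
  mY = {y: sum(pXY[(x, y)] for x in pX) for y in pY}

  # (3): p_{X,Y}(x, y) == p_X(x) p_Y(y) for all x, y.
  print(all(pXY[(x, y)] == mX[x] * mY[y] for x in pX for y in pY))                # True
  # (4): p_{X|Y}(x | y) == p_X(x) whenever p_Y(y) > 0.
  print(all(pXY[(x, y)] / mY[y] == mX[x] for x in pX for y in pY if mY[y] > 0))   # True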

1.4 theorem: functions of independent discrete r.v.'s

Let \(X\) and \(Y\) be discrete independent random variables. Let \(f\) and \(g\) be functions from \(\mathbb{R}\) to \(\mathbb{R}\). Then, \(f(X)\) and \(g(Y)\) are independent.

1.4.1 proof

Proof here. In short, take an event \(\{f(X) = c\}\). This is actually the event \(\{X \in A\}\), where \(A = \{x \mid f(x) = c\}\). But \(\{X \in A\}\) is independent of every event \(\{Y \in B\}\), where \(B\) is defined similarly for \(Y\). By equivalence (2) of the theorem above, \(f(X)\) and \(g(Y)\) are independent.
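The preimage idea can also be checked numerically (my sketch, with two independent fair dice, \(f(x) = x \bmod 2\), and \(g(y) = \mathbb{1}\{y > 3\}\)):

  from fractions import Fraction
  from itertools import product

  # Two independent fair dice: 36 equally likely pairs (x, y).
  omega = list(product(range(1, 7), repeat=2))

  def prob(pred):
      return Fraction(sum(1 for w in omega if pred(w)), len(omega))

  def f(x):             # function applied to X
      return x % 2

  def g(y):             # function applied to Y
      return int(y > 3)

  # {f(X) = c} is the preimage event {X in A}, with A = {x : f(x) = c};
  # check the product formula for every pair of values (c, d).
  ok = all(
      prob(lambda w: f(w[0]) == c and g(w[1]) == d)
      == prob(lambda w: f(w[0]) == c) * prob(lambda w: g(w[1]) == d)
      for c in (0, 1) for d in (0, 1)
  )
  print(ok)   # True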

Created: 2021-09-14 Tue 21:43