Engineering Math

Independence and conditional probability

Two events \(A\) and \(B\) are independent if and only if \[\begin{aligned} P(A\cap B) = P(A) P(B). \end{aligned}\] When an experimenter must judge, without data, whether events are independent, the judgment must rest on their knowledge of the events, as the following example illustrates.
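To make the definition concrete, here is a minimal sketch in Python (the events A and B are illustrative choices, not from the text) that enumerates the 36 equally likely outcomes of two fair die rolls and checks the product rule exactly with rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Sample space: all 36 equally likely ordered outcomes of two fair rolls.
omega = list(product(range(1, 7), repeat=2))

# Illustrative events: A = first roll shows 6, B = second roll shows 6.
A = {w for w in omega if w[0] == 6}
B = {w for w in omega if w[1] == 6}

def P(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event), len(omega))

print(P(A & B))     # 1/36
print(P(A) * P(B))  # (1/6)*(1/6) = 1/36
assert P(A & B) == P(A) * P(B)  # A and B are independent
```

Exact Fractions avoid floating-point round-off, so the final assertion is an exact check of the definition rather than an approximate one.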

Example 3.2

Answer the following questions.

  1. Consider a single fair die rolled twice. What is the probability that both rolls are 6?

  2. What changes if the die is biased by a weight such that P({6}) = 1/7?

  3. What changes if the die is biased by a magnet, rolled on a magnetic dice-rolling tray such that P({6}) = 1/7?

  4. What changes if there are two dice, biased by weights such that for each P({6}) = 1/7, rolled once, both resulting in 6?

  5. What changes if there are two dice, biased by magnets, rolled together?

  1. Let \(A_1\) and \(A_2\) be the events that the first and second rolls, respectively, show a 6. Assuming a fair die, \(P(A_1) = P(A_2) = 1/6\). Having no reason to judge otherwise, we assume the two rolls are independent events. Therefore, $$\begin{aligned} P(A_1\cap A_2) = P(A_1) P(A_2) = \frac{1} {6}\cdot\frac{1} {6} = \frac{1} {36}. \end{aligned}$$

  2. Bias is not dependence, so the rolls remain independent: $$\begin{aligned} P(A_1\cap A_2) = P(A_1) P(A_2) = \frac{1} {7}\cdot\frac{1} {7} = \frac{1} {49}. \end{aligned}$$

  3. Again, this is only bias. Successive rolls of the single die do not influence one another, so they remain independent and the probability is again 1/49.

  4. The two weighted dice do not interact, so the rolls are still independent and the probability is again 1/49.

  5. The magnetic dice can influence each other, so the rolls are not independent! To estimate the probability, one would either need to develop a theoretical prediction based on the magnetic interaction or conduct several trials to obtain an empirical estimate; a simulation sketch of the trial-based approach follows this list.
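There is no simple model of the magnetic interaction to simulate here, but the trial-based approach itself is easy to sketch. The following Python sketch assumes the independent weighted dice of part 4 (each with P({6}) = 1/7) purely to demonstrate the procedure; for magnetic dice, the samples would have to come from physical rolls rather than a pseudorandom generator:

```python
import random

random.seed(0)  # reproducible trials

# Weighted die from part 4: P({6}) = 1/7; the other five faces
# share the remaining 6/7 equally, i.e. 6/35 each.
faces = [1, 2, 3, 4, 5, 6]
weights = [6/35, 6/35, 6/35, 6/35, 6/35, 1/7]

trials = 1_000_000
both_six = 0
for _ in range(trials):
    d1 = random.choices(faces, weights)[0]  # first die
    d2 = random.choices(faces, weights)[0]  # second die, sampled independently
    if d1 == 6 and d2 == 6:
        both_six += 1

print(both_six / trials)  # should approach 1/49 ≈ 0.0204
```

With a million trials, the estimate typically lands within about one percent (relative) of the exact value 1/49 ≈ 0.0204, which illustrates why many trials are needed for a trustworthy estimate.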

Conditional probability

If events \(A\) and \(B\) are dependent, we need a way to compute the probability of \(B\) occurring given that \(A\) occurs. This is called the conditional probability of \(B\) given \(A\), and is denoted \(P(B \mid A)\). For \(P(A) > 0\), it is defined as \[\begin{aligned} P(B \mid A) &= \frac{P(A \cap B)} {P(A)}. \end{aligned}\qquad{(1)}\] We can interpret this as a restriction of the sample space \(\Omega\) to \(A\); that is, the new sample space is \(\Omega' = A \subseteq \Omega\). Note that if \(A\) and \(B\) are independent, we obtain the obvious result: \[\begin{aligned} P(B \mid A) &= \frac{P(A) P(B)} {P(A)} \\ &= P(B).\end{aligned}\]

Example 3.3

Consider two fair dice, each rolled once. Let events A = {sum of faces = 8} and B = {faces are equal}. What is the probability that the faces are equal given that their sum is 8?

Directly applying [@eq:conditional-probability], $$\begin{aligned} P(B \mid A) &= \frac{P(A \cap B)} {P(A)} \\ &= \frac{P(\{(4,4)\})} {P(\{(4,4)\})+P(\{(2,6)\})+P(\{(6,2)\})+P(\{(3,5)\})+P(\{(5,3)\})} \\ &= \frac{\frac{1} {6}\cdot\frac{1} {6}} {5\cdot\frac{1} {6}\cdot\frac{1} {6}} \\ &= \frac{1} {5}. \end{aligned}$$ We count the outcome (4,4) only once, but we count both (3,5) and (5,3), since they are distinct outcomes. We say “order matters” for outcomes like these.
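As a check, a short Python sketch can enumerate the sample space and apply the definition directly (the event definitions mirror the example):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely ordered outcomes of two fair dice;
# (3,5) and (5,3) are distinct outcomes, so order matters.
omega = list(product(range(1, 7), repeat=2))

A = {w for w in omega if w[0] + w[1] == 8}  # sum of faces is 8
B = {w for w in omega if w[0] == w[1]}      # faces are equal

def P(event):
    return Fraction(len(event), len(omega))

# Definition of conditional probability: P(B | A) = P(A ∩ B) / P(A).
print(P(A & B) / P(A))  # 1/5
```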

Online Resources for Section 3.3

No online resources.