Bayes’ theorem
Given two events \(A\) and \(B\), Bayes' theorem (aka Bayes’ rule) states that \[\begin{aligned} P(A \mid B) = P(B \mid A)\frac{P(A)} {P(B)}. \end{aligned}\] Sometimes this is written \[\begin{align} P(A \mid B) &= \frac{P(B \mid A)P(A)} {P(B \mid A)P(A) + P(B \mid \lnot{A}) P(\lnot{A})} \label{eq:bayes2} \\ &= \frac{1} {1 + \dfrac{P(B \mid \lnot{A})} {P(B \mid A)}\cdot\dfrac{P(\lnot{A})} {P(A)}}. \label{eq:bayes3} \end{align}\]
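For a quick numerical illustration, the first form can be evaluated directly in Python. The probabilities below are illustrative values only (chosen to be mutually consistent), not taken from any particular test.
prob_B_given_A = 0.9  # P(B | A), illustrative value
prob_A = 0.2          # P(A), illustrative prior
prob_B = 0.3          # P(B), illustrative total probability of B
prob_A_given_B = prob_B_given_A*prob_A/prob_B  # Bayes' theorem
print(prob_A_given_B)  # 0.6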
Bayes’ theorem is useful for determining a test’s effectiveness. If a test is performed to determine whether an event has occurred, we might ask questions like “if the test indicates that the event has occurred, what is the probability it has actually occurred?” Bayes’ theorem can help compute an answer.
Testing outcomes
The test can be either positive or negative, meaning it can either indicate or not indicate that \(A\) has occurred. Furthermore, this result can be either true or false.
There are four possible outcomes, then. Consider an event \(A\) and an event \(B\) that is a test result indicating that event \(A\) has occurred. [@tbl:test-outcomes] shows these four possible test outcomes: the event \(A\) occurring can lead to a true positive or a false negative, whereas \(\lnot{A}\) can lead to a true negative or a false positive.
Terminology is important here:
\(P(\{\text{true positive}\}) = P(B \mid A)\), aka sensitivity or detection rate,
\(P(\{\text{true negative}\}) = P(\lnot{B} \mid \lnot{A})\), aka specificity,
\(P(\{\text{false positive}\}) = P(B \mid \lnot{A})\),
\(P(\{\text{false negative}\}) = P(\lnot{B} \mid A)\).
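In practice these conditional probabilities are often estimated from counts of test outcomes. The counts below are made up purely for illustration.
tp, fn = 95, 5    # hypothetical outcomes when A occurred: true positives, false negatives
fp, tn = 20, 880  # hypothetical outcomes when A did not occur: false positives, true negatives
sensitivity = tp/(tp + fn)          # estimates P(B | A)
specificity = tn/(tn + fp)          # estimates P(not B | not A)
false_positive_rate = fp/(fp + tn)  # estimates P(B | not A)
false_negative_rate = fn/(fn + tp)  # estimates P(not B | A)
print(sensitivity, specificity, false_positive_rate, false_negative_rate)  # roughly 0.95, 0.978, 0.022, 0.05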
Clearly, the desirable result for any test is a true one. However, no test is correct \(100\) percent of the time, so a choice must be made about which kind of error to tolerate. Sometimes it is preferable to err on the side of false positives, as in the case of a medical diagnostic (where missing a disease is usually worse than a false alarm). Other times it is preferable to err on the side of false negatives, as in the case of testing for defects in manufactured balloons (where a missed defect isn’t a big deal).
Posterior probabilities
Returning to Bayes’ theorem, we can evaluate the posterior probability \(P(A \mid B)\) that the event \(A\) has occurred given that the test \(B\) is positive, using information that includes the prior probability \(P(A)\) of \(A\). The form in [@eq:bayes2] or [@eq:bayes3] is typically the most useful because it uses commonly known test probabilities: that of the true positive, \(P(B \mid A)\), and that of the false positive, \(P(B \mid \lnot{A})\). We calculate \(P(A \mid B)\) when we want to interpret test results.
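For instance, with illustrative test characteristics (the values below are not tied to any particular test), the computation is direct in Python.
sens = 0.95   # P(B | A), true positive probability (sensitivity), illustrative
fpr = 0.05    # P(B | not A), false positive probability, illustrative
prior = 0.01  # P(A), prior probability of the event, illustrative
posterior = sens*prior/(sens*prior + fpr*(1 - prior))  # expanded form of Bayes' theorem
print(posterior)  # approximately 0.161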
Some interesting results follow from these expressions. For instance, if we let \(P(B \mid A) = P(\lnot{B} \mid \lnot{A})\) (sensitivity equal to specificity) and recognize that \(P(B \mid \lnot{A}) + P(\lnot{B} \mid \lnot{A}) = 1\) (given \(\lnot{A}\), either \(B\) or \(\lnot{B}\) must occur), we can derive the expression \[\begin{aligned} \label{eq:bayes4} P(B \mid \lnot{A}) = 1 - P(B \mid A). \end{aligned}\] Using this and \(P(\lnot{A}) = 1 - P(A)\) in [@eq:bayes3] gives (recall we’ve assumed sensitivity equals specificity!) \[\begin{aligned} P(A \mid B) &= \frac{1} {1 + \dfrac{1 - P(B \mid A)} {P(B \mid A)}\cdot\dfrac{1 - P(A)} {P(A)}} \\ &= \frac{1} {1 + \left(\dfrac{1} {P(B \mid A)} - 1\right) \left(\dfrac{1} {P(A)} - 1\right)}. \end{aligned}\] This expression is plotted in [@fig:bayes]. See that a positive result for a rare event (small \(P(A)\)) is hard to trust unless the sensitivity \(P(B \mid A)\) and specificity \(P(\lnot{B} \mid \lnot{A})\) are very high indeed!
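A plot of this kind can be sketched as follows; the sensitivity values and the logarithmic prior axis are illustrative choices, not necessarily those of [@fig:bayes].
import numpy as np               # numerics
import matplotlib.pyplot as plt  # plotting

p_A = np.linspace(1e-4, 1, 500)                # prior probability P(A)
for p_B_A in (0.9, 0.99, 0.999, 0.9999):       # sensitivity = specificity (assumed)
    p_A_B = 1/(1 + (1/p_B_A - 1)*(1/p_A - 1))  # posterior P(A | B)
    plt.semilogx(p_A, p_A_B, label=f"$P(B \\mid A) = {p_B_A}$")
plt.xlabel("prior probability $P(A)$")
plt.ylabel("posterior probability $P(A \\mid B)$")
plt.legend()
plt.show()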
Suppose 0.1 percent of springs manufactured at a given plant are defective, and suppose you need to design a test such that, when it indicates a defective part, the part is actually defective 99 percent of the time. What sensitivity should your test have, assuming it can be made equal to its specificity?
We proceed in Python.
from sympy import * # for symbolics
import numpy as np # for numerics
import matplotlib.pyplot as plt # for plots
Define symbolic variables.
var('p_A,p_nA,p_B,p_nB,p_B_A,p_B_nA,p_A_B',real=True)
(p_A, p_nA, p_B, p_nB, p_B_A, p_B_nA, p_A_B)
Beginning with Bayes’ theorem and assuming the sensitivity and specificity are equal, so that [@eq:bayes4] holds, we can derive the following expression for the posterior probability \(P(A \mid B)\).
p_A_B_e1 = Eq(p_A_B,p_B_A*p_A/p_B).subs(
    {p_B: p_B_A*p_A+p_B_nA*p_nA, # conditional prob
     p_B_nA: 1-p_B_A, # Eq (3.5)
     p_nA: 1-p_A
    }
)
print(p_A_B_e1)
$\displaystyle p_{A B} = \frac{p_{A} p_{B A}}{p_{A} p_{B A} + \left(1 - p_{A}\right) \left(1 - p_{B A}\right)}$
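As a quick check, we can have SymPy confirm that this matches the closed form derived above with \(P(B \mid \lnot{A}) = 1 - P(B \mid A)\) and \(P(\lnot{A}) = 1 - P(A)\).
reciprocal_form = 1/(1 + (1/p_B_A - 1)*(1/p_A - 1))  # closed form from above
print(simplify(p_A_B_e1.rhs - reciprocal_form))      # expect 0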
Next, solve the posterior expression for \(P(B \mid A)\), the quantity we seek.
p_B_A_sol = solve(p_A_B_e1,p_B_A,dict=True)
p_B_A_eq1 = Eq(p_B_A,p_B_A_sol[0][p_B_A])
print(p_B_A_eq1)
$\displaystyle p_{B A} = \frac{p_{A B} \left(1 - p_{A}\right)}{- 2 p_{A} p_{A B} + p_{A} + p_{A B}}$
Now let’s substitute the given probabilities.
p_B_A_spec = p_B_A_eq1.subs(
    {p_A: 0.001,
     p_A_B: 0.99
    }
)
print(p_B_A_spec)
$\displaystyle p_{B A} = 0.999989888981011$
That’s a tall order!
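A quick sanity check with plain floats confirms that a test with this sensitivity (and equal specificity) yields the required posterior probability.
prior = 0.001             # P(A), fraction of defective springs
sens = 0.999989888981011  # required sensitivity = specificity, from above
posterior = sens*prior/(sens*prior + (1 - sens)*(1 - prior))  # Bayes' theorem
print(posterior)          # approximately 0.99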
Online Resources for Section 3.4
No online resources.