Let us assume that our null hypothesis is that when someone is sick, it is not swine flu. A type I error is a false positive. That is, we claim that the person has swine flu when they actually do not. A type II error is a false negative. This means that the person has swine flu, but we erroneously conclude that they do not.
What is the probability that someone who has flu-like symptoms actually has swine flu? We can calculate this using Bayes Rule:
- P(H1N1|symptoms) = P(symptoms|H1N1)*P(H1N1)/P(symptoms)
Let us assume that all individuals with swine flu have symptoms, so that P(symptoms|H1N1) = 1. Let us assume 2% of the population gets some type of flu each year and displays symptoms, and that only 0.02% of the population gets H1N1. So, P(symptoms) = 0.02 and P(H1N1) = 0.0002. Thus we have:
- P(H1N1|symptoms) = 1*0.0002/0.02 = 0.01.
This means that if we see a random person with flu-like symptoms, there is only a 1% chance that they actually have swine flu.
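The calculation above can be sketched in a few lines of Python, using the assumed probabilities from the example (these numbers are the illustrative assumptions stated above, not real epidemiological estimates):

```python
# Bayes' rule: P(H1N1 | symptoms) = P(symptoms | H1N1) * P(H1N1) / P(symptoms)
p_symptoms_given_h1n1 = 1.0   # assumption: every H1N1 case shows symptoms
p_h1n1 = 0.0002               # assumption: 0.02% of the population gets H1N1
p_symptoms = 0.02             # assumption: 2% gets some flu and shows symptoms

p_h1n1_given_symptoms = p_symptoms_given_h1n1 * p_h1n1 / p_symptoms
print(p_h1n1_given_symptoms)  # 0.01, i.e. a 1% chance
```

Note that the posterior is small precisely because H1N1 is rare relative to ordinary flu: the prior P(H1N1) is 100 times smaller than P(symptoms).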
This may explain why the CDC and WHO ignored early warnings from a Washington-based biosurveillance company concerning a possible flu outbreak. Although there was an increase in the number of influenza cases, the probability that it was an outbreak of H1N1 (or any type of outbreak) was low. But while the probability of a false positive was high, the cost of a false negative is also large. Ex post, it is obvious that the CDC and WHO should have acted more quickly to fight the spread of H1N1. Ex ante, these organizations likely receive numerous reports of potential outbreaks, and acting on every single one, most of which turn out to be false, would be very costly. Identifying the optimal time to initiate school closings and public health warnings is very difficult and must take into account both the probabilities and the costs of type I and type II errors.
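The trade-off between the two error types can be made concrete with a minimal expected-cost sketch. The 1% posterior comes from the example above; the two cost figures are purely hypothetical placeholders, chosen only to show how a large false-negative cost can justify acting even when an outbreak is unlikely:

```python
# Minimal decision sketch: compare expected costs of acting vs. waiting.
p_outbreak = 0.01            # posterior probability from the Bayes example above
cost_false_positive = 1.0    # hypothetical cost of acting when there is no outbreak
cost_false_negative = 500.0  # hypothetical cost of failing to act on a real outbreak

# If we act and there is no outbreak, we pay the false-positive cost;
# if we wait and there is an outbreak, we pay the false-negative cost.
expected_cost_act = (1 - p_outbreak) * cost_false_positive    # 0.99
expected_cost_wait = p_outbreak * cost_false_negative         # 5.0

print(expected_cost_wait > expected_cost_act)  # True: act despite the low probability
```

With these (assumed) numbers, waiting is five times more costly in expectation than acting, even though a false positive is 99 times more likely, which is exactly the tension described in the paragraph above.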