**Poisson**

The Poisson distribution arises from one of those beautiful
results that make mathematics so fascinating. The general rule covering the
probability of getting a certain result in a number of trials of a random event
is the *binomial distribution*.

This distribution arises from a theorem of Bernoulli, which states that:

If the probability of success in each trial is *p*, and hence the probability of failure is *1 − p*, then the probability of *r* successes in *n* trials is

$$P(r) = \binom{n}{r} p^r (1-p)^{n-r}.$$

This all looks a bit daunting to those unfamiliar with the mathematical notation of combinations, but it is really quite simple, and you can derive it for yourself from first principles by thinking about problems such as successive tossings of coins. You can generate the numbers by a simple device known as Pascal’s triangle.
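The formula is also easy to evaluate directly; here is a minimal Python sketch (the function name `binomial_pmf` is my own choice), using `math.comb` for the combinations, whose rows are exactly the rows of Pascal’s triangle:

```python
import math

def binomial_pmf(r, n, p):
    """Probability of exactly r successes in n trials,
    each with success probability p."""
    return math.comb(n, r) * p**r * (1 - p)**(n - r)

# Ten tosses of a fair coin: probability of exactly 5 heads.
print(binomial_pmf(5, 10, 0.5))  # 0.24609375

# Row n of Pascal's triangle is just the coefficients C(n, r).
print([math.comb(4, r) for r in range(5)])  # [1, 4, 6, 4, 1]
```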

The Poisson distribution is a special case of the binomial, for the condition that *p* is small and therefore *q = 1 − p* approaches unity. **It is important** because most of the cases for which statistical support is needed are necessarily concerned with small probabilities, i.e. relatively rare events. To cut a long story short, if the mean value is *m*, then the probability of 0, 1, 2, 3, … successes is given by:

$$P(r) = \frac{e^{-m} m^r}{r!}, \qquad r = 0, 1, 2, 3, \ldots$$

Again, this looks daunting to those unfamiliar with the
notation, but the important thing to notice (and the beautiful result) is that
the distribution has only one parameter, *m*.
And, in fact, the variance (square of the standard deviation) is also *m*.
Thus for any given value of *m* there is a unique sequence of numbers corresponding to the probability of getting 0, 1, 2, 3, … in random trials.
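This single-parameter property, that both the mean and the variance come out equal to *m*, can be checked numerically with a short Python sketch (the function name `poisson_pmf` is my own):

```python
import math

def poisson_pmf(r, m):
    """Probability of exactly r events when the expected number is m."""
    return math.exp(-m) * m**r / math.factorial(r)

# Tabulate the distribution for m = 2; terms beyond r = 19 are negligible.
probs = [poisson_pmf(r, 2) for r in range(20)]

mean = sum(r * p for r, p in enumerate(probs))
var = sum((r - mean) ** 2 * p for r, p in enumerate(probs))
print(sum(probs), mean, var)  # almost exactly 1, 2 and 2
```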

Often, in areas such as epidemiology, we know the value to expect for the mean from observations of the general population or a control group, so we also know the rough deviation to expect from that mean: the square root of the mean itself.

Here is what the numbers look like for expected values of 1, 2, 5 and 10. We can see that, as the expected value increases, the shape rapidly tends towards the familiar bell of the normal distribution. We also note that the most likely value (mode) is close to the expected value and that the spread is in the region of the square root of this number.
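The same observations can be read off from the numbers themselves; a self-contained Python sketch (function name my own) that prints the mode and the square-root spread estimate for each of the four expected values:

```python
import math

def poisson_pmf(r, m):
    return math.exp(-m) * m**r / math.factorial(r)

# For each expected value, find the most likely count (mode) and
# compare it with the spread estimate sqrt(m).
# (For a whole-number mean the probabilities at m-1 and m are equal,
# so the reported mode may come out as m-1.)
for m in (1, 2, 5, 10):
    probs = [poisson_pmf(r, m) for r in range(4 * m)]
    mode = probs.index(max(probs))
    print(f"m={m}: mode={mode}, sqrt(m)={math.sqrt(m):.2f}")
```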

To take a simplified numerical example: if we know from observations of the general population that the annual probability of catching a certain disease is 0.001, and we have a sample of 10,000 people who have been exposed to a putative cause, then we would expect at random 10 of our sample to get the disease. If 14 actually got it, we would not be surprised, since the standard deviation from the mean would be the square root of ten, or just over three. On the other hand, if 20 got the disease, this is about three standard deviations away from the mean and therefore more likely to be significantly different from a random result. This is all clearly shown by the fourth graph. Thus we have a useful rule of thumb for judging claims without having to resort to probability calculations.
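The rule of thumb can be checked against the exact Poisson tail probabilities with a minimal Python sketch (the `tail_prob` helper and function names are my own, not library functions):

```python
import math

def poisson_pmf(r, m):
    return math.exp(-m) * m**r / math.factorial(r)

def tail_prob(k, m):
    """Probability of observing k or more events when the mean is m."""
    return 1 - sum(poisson_pmf(r, m) for r in range(k))

m = 10  # 10,000 people x 0.001 annual probability
print(math.sqrt(m))        # standard deviation: just over three
print(tail_prob(14, m))    # 14 or more cases: not unusual (over 10%)
print(tail_prob(20, m))    # 20 or more cases: well under 1%
```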