Assuming the probability of obtaining heads in a coin flip is exactly fifty percent, why should a test
group of ten flips produce less accurate results than one of one million flips?
Asked by: Ian L. Musil
Answer
Probability is one of the hardest things for most people, including me, to understand. You are doing better
than most just in being able to ask a coherent question in this difficult field.
The best, and easiest, way I know to answer your question is to remind you of what you already know. You know
that each time you get ready to flip the coin, it has exactly as much chance of landing on one side as on the
other. Once the coin is flipped, that 50-50 chance remains; as far as you know, it is still 50-50 even after it
lands, right up until you actually look at it. Only upon that final observation does the 50-50 change to 100-0.
Now, when you get ready to flip the coin again, the chances are again 50-50. The result of one throw in no way
influences the outcome of the next throw; that is what is meant by saying the chance of the coin coming up
heads or tails is 50-50. It does NOT mean that in ten throws exactly 5 will be heads and 5 will be tails. You
can check for yourself that the coin has no memory, as in the sketch below.
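Here is a minimal Python sketch of that check (only an illustration, with the coin modeled as a fair 50-50
draw): it simulates a million flips and asks whether a heads makes the next flip any more likely to be tails.

    import random

    random.seed(1)  # fixed seed so the run is repeatable

    # True = heads, False = tails; each flip is an independent 50-50 draw
    flips = [random.random() < 0.5 for _ in range(1_000_000)]

    # Collect every flip that immediately follows a heads. If the coin were
    # somehow "due" for tails after a heads, these would be under 50% heads.
    after_heads = [curr for prev, curr in zip(flips, flips[1:]) if prev]
    print(sum(after_heads) / len(after_heads))  # prints roughly 0.5

The fraction comes out essentially 0.5: a heads on one throw tells you nothing about the next.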
So why should a million throws come out more nearly half one side and half the other? This is simply a matter
of percentages. If ten throws give you 7-3, then 70% of the throws were one side and 30% the other. But in a
million throws, that same absolute difference of four (say 500,002 heads to 499,998 tails) is a far smaller
percentage difference: 50.0002% to 49.9998%, which is practically 50-50.
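The same arithmetic as a short Python sketch (again only an illustration under the fair-coin assumption): it
compares a run of ten flips with a run of one million and prints the absolute heads-tails gap next to the
percentage of heads.

    import random

    random.seed(2)  # fixed seed so the run is repeatable

    for n in (10, 1_000_000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        tails = n - heads
        print(f"{n:>9,} flips: |heads - tails| = {abs(heads - tails):,}, "
              f"heads = {100 * heads / n:.4f}%")

The absolute gap is typically larger in the million-flip run, yet the percentage sits far closer to 50%,
because dividing the gap by a million shrinks its share much more than dividing by ten.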
So, the important thing to remember is that the 50-50 probability applies to each individual throw, not to
the totals over the whole run of throws. I hope this helps!
Answered by: Tom Young, M.S., Science Teacher, Whitehouse High School, Texas
'We are all in the gutter, but some of us are looking at the stars.'