All measurements of a continuously varying quantity (length, weight, mass, etc.) carry some uncertainty (more commonly called the ‘error’), owing to the limits of the measuring instrument or of the person doing the measuring. To mitigate this, we nowadays take many measurements and calculate the average value of the quantity.
It seems obvious that doing so gives us a more accurate value of the measured quantity, and with the invention of the discipline of statistics, we now know why that belief is justified. But like most ‘obvious’ things, it was not always thus. If you reflect for a moment, it is a little strange that simply manipulating the results of repeating the same error-prone measurement many times can yield a smaller error. So for a long time people did not do it. Before the invention of statistics, people would instead repeat their measurements and then try to select, from the range of values they got, the one they judged best, based on their assessment of the quality of each measurement. This was something of an art.
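Why averaging works can be sketched with a quick simulation (all names and parameter values here are illustrative, not from the source): if each reading has independent random error, the error of the mean of n readings shrinks roughly as 1/√n compared with a single reading.

```python
import random
import statistics

# Illustrative parameters: a 'true' value, measured with random
# instrument error of standard deviation NOISE_SD.
random.seed(0)
TRUE_VALUE = 10.0
NOISE_SD = 0.5
N_READINGS = 100   # readings averaged per experiment
N_TRIALS = 2000    # repeated experiments, to estimate typical error

def measure():
    """One noisy reading of the true value."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

# Typical error of a single reading...
single_errors = [abs(measure() - TRUE_VALUE) for _ in range(N_TRIALS)]
# ...versus the typical error of the mean of 100 readings.
mean_errors = [
    abs(statistics.fmean(measure() for _ in range(N_READINGS)) - TRUE_VALUE)
    for _ in range(N_TRIALS)
]

print("single reading:", statistics.fmean(single_errors))
print("mean of 100:   ", statistics.fmean(mean_errors))
```

With 100 readings the mean's typical error comes out roughly a tenth of a single reading's, matching the 1/√100 prediction.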
So when did it come to be realized that taking the average was the better, more scientific way of doing things? And who was behind it?
Kevin Drum writes that a new book points the finger at one well-known scientific figure who used this technique to great effect, and that it helped lead him to his great discoveries. But for whatever reason, he kept quiet about his averaging, letting it be assumed instead that he simply had a knack for careful measurements.