The origins of experimental error.

            Errors – or uncertainties in experimental data – can arise in numerous ways. Their quantitative assessment is necessary, since only then can a hypothesis be tested properly. The modern theory of atomic structure is believed because it quantitatively predicted all sorts of atomic properties; yet the experiments used to determine them were inevitably subject to uncertainty. There must therefore be some set of criteria for deciding whether two compared quantities are the same or not, or whether a particular reading truly belongs to a set of readings. A set of titration results from a given analysis is an example of the latter.
 

Blunders (mistakes).

            Mistakes (I prefer the stronger 'blunder') such as failing to fill a burette to the jet, or dropping some of a solid on the balance pan, are not errors in the sense meant in these pages. Unfortunately, many critiques of investigations written by students are fond of quoting blunders as a source of error, probably because they are easy to think of. They are neither quantitative nor helpful; experimental error in the true sense of uncertainty cannot be assessed if the experimenter was simply incompetent.
 

Human error.

            This is often confused with blunders, but is rather different – though one person's human error is another's blunder, no doubt. Really it hinges on the experimenter doing the experiment truly to the best of his ability, but being let down by inexperience. Such errors lessen with practice. They also do not help in the quantitative assessment of error.
 

Instrumental limitations.

            Uncertainties are inherent in any measuring instrument. A ruler, even if as well-made as is technologically possible, has calibrations of finite width; a 25.0 cm³ pipette of grade B accuracy delivers this volume to within 0.06 cm³ if used correctly. A digital balance showing three decimal places can only weigh to within 0.0005 g by its very nature, and even then only if it rounds the figures to those three places.
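The tolerances quoted above are most easily compared when expressed as a percentage of the reading. The sketch below does this for the pipette and balance figures from the text; the 0.500 g mass weighed on the balance is an invented example, not a figure from this page.

```python
def relative_uncertainty(value, absolute_uncertainty):
    """Return an instrument's uncertainty as a percentage of the reading."""
    return 100.0 * absolute_uncertainty / value

# Grade B 25.0 cm3 pipette, +/- 0.06 cm3 (figures quoted in the text)
pipette_pct = relative_uncertainty(25.0, 0.06)

# Three-place digital balance, +/- 0.0005 g; the 0.500 g mass is invented
balance_pct = relative_uncertainty(0.500, 0.0005)

print(f"pipette: {pipette_pct:.2f}%   balance: {balance_pct:.2f}%")
```

Worked out, the pipette contributes about 0.24% and the balance about 0.1% for this mass; which instrument dominates the overall uncertainty depends on the quantities actually measured.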

            Calibrations are made under certain conditions, which have to be reproduced if the calibrations are to be true within the specified limits. Volumetric apparatus is usually calibrated for 20 °C, for example; the laboratory is usually at some other temperature.

            Analogue devices such as thermometers or burettes often require the observer to interpolate between graduations on the scale. Some people will be better at this than others.

            These limitations exist; whether they are dominant errors is another matter.
 

Observing the system may cause errors.

            If you have a hot liquid and you need to measure its temperature, you will dip a thermometer into it. This will inevitably cool the liquid slightly. The amount of cooling is unlikely to be a source of major error, but it is there nevertheless.

            The act of observation can cause serious errors in biological systems. Simply handling an animal will cause adrenalin release that will change its biochemistry, for example. The design of biological experiments is not our concern here, but it is a particularly difficult aspect of experimental design.
 

Errors due to external influences.

            Such errors may come from draughts on the balance pan, for example (though this for me seems pretty close to a blunder), or maybe from impurity in the chemicals used. Again, such things are unlikely to be significant in a carefully designed and executed experiment, but are often discussed by students, again because they are fairly obvious things.
 

Not all measurements have well-defined values.

            The temperature of a system, or its mass, for example, has a particular value which can be determined to an acceptable degree of uncertainty with suitable care. Other properties do not: the diameter of a planet, for example, although quoted in tables of data, is a mean value. The same is true for the thickness of a piece of paper or the diameter of a wire; these measurements will vary somewhat at different places. It is important to realise what sort of data you are dealing with.
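For a property of this second kind, the quoted figure is a mean of readings taken at several places. A minimal sketch, with invented paper-thickness readings purely for illustration:

```python
# Thickness of a sheet of paper measured at five different places (mm).
# These readings are invented for illustration, not taken from the text.
readings_mm = [0.102, 0.098, 0.101, 0.099, 0.100]

mean_mm = sum(readings_mm) / len(readings_mm)      # the quotable figure
spread_mm = max(readings_mm) - min(readings_mm)    # how much it varies

print(f"mean thickness: {mean_mm:.3f} mm, spread: {spread_mm:.3f} mm")
```

The spread here reflects genuine variation in the paper itself, not a defect of the measuring instrument.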
 

Sampling.

            Many scientific measurements are made on populations. This is most obviously true in biology, but even the three values that you (perhaps) get from a titration form a population, albeit rather a small one. It is intuitively understood that the more samples you have from a given population, the smaller the error is likely to be. That is why I do not permit students to be satisfied with two concordant titration figures; I am slightly more convinced by three, and prefer four.
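The intuition that more repeats mean less error can be made quantitative: the standard error of the mean falls as one over the square root of the number of readings. A sketch with invented titre values (the figures below are not from this page):

```python
from math import sqrt
from statistics import mean, stdev

# Four concordant titres (cm3) - invented values for illustration.
titres_cm3 = [23.45, 23.50, 23.40, 23.45]

n = len(titres_cm3)
s = stdev(titres_cm3)        # sample standard deviation of the readings
sem = s / sqrt(n)            # standard error of the mean: falls as 1/sqrt(n)

print(f"mean = {mean(titres_cm3):.2f} cm3, standard error = {sem:.3f} cm3")
```

Doubling the number of titres does not halve the uncertainty in the mean; it reduces it by a factor of about 1.4, which is one reason a third and fourth titre add conviction but with diminishing returns.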

            Related to this are errors arising from unrepresentative samples. Suppose that a chemist wishes to measure the levels of river pollution. The amount of a particular pollutant will depend on the time of day, the season of the year, and so on. So a measurement made at 3 o'clock on a Friday afternoon may be utterly unrepresentative of the mean levels of the pollutant during the rest of the week. It doesn't matter how many samples he takes – if the sampling method is this biased, a true picture of the mean levels of pollutant in the river cannot be obtained. A large population does not of itself ensure greater accuracy.

            The bias in this example is fairly obvious. This is not always so, even to experienced investigators. Sir Ronald Fisher's famous text 'The Design of Experiments' deals with the difficulties of removing bias in biological investigations, and is a work on statistical methods. This degree of analysis is outside our present concerns, though it will not be outside yours if you go on to do research in many fields of science.

 
 
