6.1 Statistical methods for the treatment of experimental data

The result of an experiment to measure a physical quantity is never exact but is subject to errors. The error in a result x is defined as its difference (x − xt) from the true value xt of the quantity to be measured. This should not be confused with the uncertainty in x, which is expressed by some convenient measure of how large the error might be or, equivalently, of the range about the result x in which the true value is thought to lie.

Errors may be divided into two classes: (a) random errors, which vary unpredictably from one repetition of a measurement to another, and (b) systematic errors, arising from bias in the measurement process, perhaps due to the equipment used, calibration errors, corrections based on simplified error models, or data processing techniques. If both types of error are small, the measurement will be accurate. If only random errors are small, it will merely be precise. The effect of random errors can be seen in the variation of results of repeated measurements. This permits statistical techniques to be employed to reduce them and to estimate corresponding components of uncertainty. In contrast, the treatment of systematic errors will generally depend on a worker's technical judgements about their causes.

Treatment of random errors

If the experiment is carried out n times, there will be n results, x1...xi...xn, varying in value because of random errors. A first step which is often useful when n is large is to plot a histogram of the results, dividing the range of values taken by the xi into a number of equal intervals and plotting the number of readings falling in each interval. In a typical case the histogram will have a single peak, which, in the absence of systematic errors, we may assume to be somewhere near the true value of x, and will have a spread about this peak which is an indication of the precision of the measurements.
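
By way of illustration, a minimal sketch of this binning procedure in Python (assuming NumPy is available; the readings are invented for the example):

```python
import numpy as np

# Invented repeated readings of the same quantity
x = np.array([9.98, 10.02, 10.01, 9.97, 10.00, 10.03, 9.99, 10.01, 10.00, 9.98])

# Divide the range taken by the x_i into equal intervals and count
# the readings falling in each one
counts, edges = np.histogram(x, bins=5)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"[{lo:.3f}, {hi:.3f}): {c} readings")
```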

When systematic errors are negligible it is usual to take the mean x̄ of the xi as the best estimate of the true value of x, where

      x̄ = n−1 Σ xi

the sum being taken over i = 1, …, n.
The standard deviation, s, is taken as a quantitative measure of the spread of the readings. Its square is known as the sample variance and is given by

      v = s2 = (n − 1)−1 Σ (xi − x̄)2

Another important quantity is the standard deviation of the mean, sometimes called the standard error and equal to n−1/2s when n is large. This quantity is a measure of the spread of the different values of x̄ which would be obtained from successively measured sets of n values of x. It is therefore a measure of the precision of the result of the experiment and of that component of its uncertainty associated with random errors. For this reason it has been recommended that it be referred to as a ‘standard uncertainty’. Where systematic errors are negligible, the mean x̄ and its standard deviation (with the number of readings n) are sufficient to characterize an experimental result. To appreciate their significance more fully requires some underlying theory.
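
These three statistics are straightforward to compute; a minimal sketch (same invented readings as above):

```python
import numpy as np

x = np.array([9.98, 10.02, 10.01, 9.97, 10.00, 10.03, 9.99, 10.01, 10.00, 9.98])
n = len(x)

xbar = x.sum() / n                               # mean, x̄ = n−1 Σ xi
s = np.sqrt(((x - xbar) ** 2).sum() / (n - 1))   # standard deviation, s
sem = s / np.sqrt(n)                             # standard deviation of the mean, n−1/2 s

print(xbar, s, sem)  # equivalently: x.mean(), x.std(ddof=1), x.std(ddof=1) / n**0.5
```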

In biology and the social sciences it is often the object of an investigation to make an estimate of some property of a large but finite population of individuals, too numerous for each to be measured, by means of measurements on a limited sample. There is a unique true value of the property being estimated, which could be ascertained given sufficient time and effort. A range of statistical techniques is available to estimate the properties of the parent population from sample measurements with calculable uncertainties and confidence limits.

In the physical sciences it is convenient to postulate that our series of measurements forms a finite sample from an infinite population. We can speak of this population having a probability distribution function F(x) with the following properties:

   

(i) ∫ F(x) dx = 1, the integral being taken over all values of x;

(ii) where systematic errors are negligible the true value of the quantity being measured is the mean of the population,

      μ = ∫ x F(x) dx

(iii) again, where systematic errors can be ignored, a measure of the uncertainty of a single measurement result is given by the standard deviation, σ, of the population; its square, the variance, is given by

      σ2 = ∫ (x − μ)2 F(x) dx
The histogram, mean and standard deviation derived from our set of measurements can be regarded as sample approximations to the probability distribution, mean and standard deviation of the parent population.
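
The three properties can be checked numerically for any assumed F(x); a minimal sketch using a Gaussian population (the values of μ and σ are invented, and the infinite integrals are truncated at μ ± 12σ, where the integrand is negligible):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 10.0, 0.02        # invented population parameters

def F(x):
    # Gaussian probability distribution function
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

a, b = mu - 12 * sigma, mu + 12 * sigma   # effective limits of integration

total, _ = quad(F, a, b)                              # (i)   ∫ F(x) dx = 1
mean, _ = quad(lambda x: x * F(x), a, b)              # (ii)  μ = ∫ x F(x) dx
var, _ = quad(lambda x: (x - mu) ** 2 * F(x), a, b)   # (iii) σ2 = ∫ (x − μ)2 F(x) dx

print(total, mean, np.sqrt(var))   # ≈ 1, 10.0, 0.02
```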

It is sometimes necessary to combine the results of a number of independent determinations of a quantity into a single estimate. The best precision (minimum variance) is obtained if the individual results are ‘weighted’ inversely to their variances. Suppose that there are m separate results Xj, each the mean of nj readings with variance sj2. Then the best estimate of the quantity is

      X̄ = Σ Xj nj sj−2 / Σ nj sj−2

and the expected variance of this will be

      1 / Σ nj sj−2

the sums being taken over j = 1, …, m.

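A minimal sketch of this weighted combination (the three determinations are invented):

```python
import numpy as np

X = np.array([10.012, 10.005, 10.021])   # separate results X_j (each the mean of n_j readings)
n = np.array([5, 12, 8])                 # numbers of readings n_j
s2 = np.array([4e-4, 9e-4, 2.5e-4])      # variances s_j2

w = n / s2                               # weights n_j s_j−2
X_best = (w * X).sum() / w.sum()         # best estimate, Σ X_j n_j s_j−2 / Σ n_j s_j−2
var_best = 1.0 / w.sum()                 # expected variance, 1 / Σ n_j s_j−2

print(X_best, var_best ** 0.5)
```
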
In most physical measurements, where the random errors can be thought of as made up of a number of small independent contributions, the probability distribution has the form of the normal error function or Gaussian distribution,

    F(x) dx = (2π)−1/2σ−1 exp{−(x − μ)2 / 2σ2} dx

Much of the further statistical treatment of the data is based on theory which assumes that the distribution function is Gaussian. If the histogram has a form which differs widely from the Gaussian it is a warning to proceed with caution. However, the Central Limit Theorem states that the sample means from a non-Gaussian population have a distribution which approximates to the Gaussian, and the larger the number of observations the better the approximation. Consequently, many tests are valid even with non-Gaussian populations. The Gaussian and its integral are tabulated in a number of the references below.
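
The theorem is easy to see by simulation; a minimal sketch drawing sample means from a markedly non-Gaussian (uniform) parent population:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 25                                         # readings per simulated experiment
means = rng.random((100_000, n)).mean(axis=1)  # 100 000 sample means from a uniform parent

# The means cluster near μ = 0.5 with spread σ n−1/2 = (1/12)**0.5 / 5 ≈ 0.0577,
# and a histogram of them is close to Gaussian even though the parent is flat.
print(means.mean(), means.std())
```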

The component of uncertainty in x̄ due to random errors can be expressed in several ways, for example as:
      • sn−1/2 (one standard uncertainty as discussed above);

      • ksn−1/2 (the ‘expanded uncertainty’ measure defined as k standard uncertainties where k is some small number, e.g. k = 2);

      • a confidence interval (a range of values that is expected to include μ with a stated level of confidence).

The first two uncertainty measures have the advantage of not presupposing any particular form of distribution and are increasingly recommended in standards.

In order to derive a confidence interval for a single measured result x which would contain the mean μ of its parent population, assumed to have standard deviation σ, we form the test function c = |x − μ|/σ. By integrating the Gaussian function we can calculate the probability p(C) that a sample observation taken at random will have a value of c less than C, and construct the following table:

      C        1.65    1.96    2.58    3.29
      p(C)     90%     95%     99%     99.9%

If in our experiment c > 2.58 we can say that, given the assumed value of σ, the difference between x and μ is significant at the 1% level of probability, meaning that there is a less than 1% chance that it is due to random causes. Thus the range of values of μ for which c ≤ 2.58 defines a 99% confidence interval about x.
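
The tabulated values of C can be reproduced by inverting the Gaussian integral; a minimal sketch using SciPy (assumed available):

```python
from scipy.stats import norm

for p in (0.90, 0.95, 0.99, 0.999):
    C = norm.ppf(0.5 + p / 2)     # two-sided: |x − μ|/σ < C with probability p
    print(f"p(C) = {p:.1%}: C = {C:.3f}")
# Prints C ≈ 1.645, 1.960, 2.576, 3.291, matching the table to its quoted precision.
```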

In practice we do not usually know the standard deviation σ of the parent population but have to work with the standard deviation s of the observations. This causes an important change in the method, as the following illustration will show: given a set of n observations with mean x̄ and standard deviation s, test the hypothesis that the true value is μ (i.e. that they are a sample from a parent population with Gaussian distribution having mean μ). We now form the test function

                 t = |x̄ − μ|/(sn−1/2)

This has a distribution which can be calculated for a Gaussian population and is known as the ‘Student t’ distribution. It involves a parameter known as the number of degrees of freedom which is, loosely, the number of independent observations, n − 1 in our case (not n because, for a given x̄, once n − 1 values are known, the nth is determinate). Values of t2 for given significance levels P and numbers of degrees of freedom are tabulated in the literature and in the following short table, where values of t2 are given against parameter values N1 = 1, N2 = n − 1 for three significance levels P = 0.05, 0.01 and 0.001 (corresponding to values of our earlier p of 95%, 99% and 99.9%).

As an illustration of the use of this we may return to our original experiment, to measure x, and improve on our earlier statement that, in the absence of systematic errors, x̄ is an estimate of the true value of x by adding confidence limits. We can say that the value of x is

                  x̄ ± tsn−1/2

at the confidence level 1 − P, where t2 is the entry in the following table at the appropriate values of P and the other parameters. We are then asserting that the probability is 1 − P that the true value of x lies between the limits specified.
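
A minimal sketch of the resulting confidence limits, using SciPy's Student t distribution (readings invented as before):

```python
import numpy as np
from scipy.stats import t as student_t

x = np.array([9.98, 10.02, 10.01, 9.97, 10.00, 10.03, 9.99, 10.01, 10.00, 9.98])
n = len(x)
xbar, s = x.mean(), x.std(ddof=1)

P = 0.05                                     # significance level
t_crit = student_t.ppf(1 - P / 2, df=n - 1)  # two-sided t for n − 1 degrees of freedom
half_width = t_crit * s / np.sqrt(n)         # t s n−1/2

print(f"x = {xbar:.3f} ± {half_width:.3f} at the {1 - P:.0%} confidence level")
# t_crit**2 is the corresponding entry of the table below (N1 = 1, N2 = n − 1).
```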


Treatment of systematic errors

The discussion above has not dealt with uncertainty components associated with systematic errors. Sometimes a component of bias in a result is due to a random error ‘sampled’ only once, for example when taking an unbiased measured value subject to a random error from another worker. In this case a worker must use their technical judgement to estimate the corresponding standard deviation or confidence interval. A more difficult case arises when a component of bias is due to the method of the measurement. This can occur with an uncorrected error of method (e.g. due to a cosine error in a length measurement) or when a correction itself is subject to an error of method (e.g. due to the use of a simplified theoretical model for the error being corrected). Here there can be no question of a population of varying errors ‘sampled’ by the experiment. In this type of situation statisticians usually resort to ‘subjective probability distributions’ expressing degrees of belief in possible values of a variable. In this way subjective standard deviations or confidence intervals can be derived, again on the basis of technical judgement.
 

Combination of uncertainties

Usually a worker needs to calculate an overall uncertainty allowing for both random and systematic errors. There is considerable argument on how to proceed here, but a method increasingly recommended is to correct for all known systematic error components and then simply to combine all estimated standard deviations sj in quadrature, no matter what type of error they are associated with, to produce the standard uncertainty measure:


   

      u = (Σ sj2)1/2

where n errors are involved. An expanded uncertainty measure U defining a range ±U can be stated using a k factor:

                  U = ku

The value k = 2 is recommended for general use, corresponding closely to a 95% confidence level in the case of a Gaussian distribution. Experimental results can be expressed in the form x̄ ± u or x̄ ± U and should always be accompanied by a statement of what uncertainty measure is being used, the value of any k factor and the associated number of degrees of freedom. The measures u and U have the merit of simplicity, not requiring the difficult classification of errors into the random and systematic categories. This is particularly helpful when estimating uncertainties for results calculated using the results of other workers, where adequate information may be lacking. However, where errors of method are involved, it is not possible to interpret U in terms of the frequency with which the range ±U contains a true value.
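
A minimal sketch of the quadrature combination (the component standard deviations are invented):

```python
import numpy as np

# Estimated standard deviations s_j of the individual error components,
# random and systematic alike
s = np.array([0.004, 0.002, 0.0015])

u = np.sqrt((s ** 2).sum())   # standard uncertainty, u = (Σ s_j2)1/2
k = 2                         # recommended coverage factor
U = k * u                     # expanded uncertainty

print(f"u = {u:.4f}, U = {U:.4f} (k = {k})")
```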

References

K. A. Brownlee (1960) Statistical Theory and Methodology in Science and Engineering, Wiley.
BS 5233: 1986, Glossary of Terms Used in Metrology.
R. A. Fisher (1958) Statistical Methods for Research Workers, Oliver and Boyd.
ISO/IEC/OIML/BIPM (1993) Guide to the Expression of Uncertainty in Measurement.
ISO 5725, Accuracy (Trueness and Precision) of Measurement Methods and Results: Parts 1 to 6.
L. J. Savage (1972) The Foundations of Statistics, 2nd revised edn, Dover Publications.


Making statistical tests on data

Nine commonly needed tests are listed below, including the two discussed above. The transformations of the observations given with each test enable a single table to be employed. To make a test, calculate the stated function of the observations and compare its value with those given in the cell of the table identified by the parameters N1 and N2. Greater values should be adjudged significant. Smaller values are not conclusive; a larger experiment might show significance. P of the table is the level of significance to be quoted.

The three entries in each cell of the table are for three levels of significance (P = 0.05, 0.01 and 0.001, reading downwards). P is the risk of a wrong decision when no difference exists; the risk of a wrong decision in other cases obviously cannot be stated generally because the differences may be of any magnitude. The smaller the value of P used, the larger will a real difference have to be before it makes itself apparent by these tests. The choice of P must therefore be made by balancing this risk against the magnitude of the difference which will just escape detection, i.e. the table value.

Except for test 9, the table is calculated on the assumption that the error or random sampling variation referred to above results in observations being normally distributed. Small departures from normality will not usually affect the decisions because their effect on P is small.
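
The cells of the table can also be generated directly from the F distribution, to which the tabulated t2 values and variance ratios correspond. A minimal sketch (assuming Python with NumPy and SciPy; the readings are invented) of test 7 from the list that follows:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(1)
a = rng.normal(10.0, 0.03, size=13)      # n1 = 13 invented readings
b = rng.normal(10.0, 0.02, size=9)       # n2 = 9 invented readings

v1 = a.var(ddof=1)                       # variance estimated from n1 observations
v2 = b.var(ddof=1)                       # variance estimated from n2 observations

ratio = v1 / v2                          # function of observations for test 7
P = 0.05
crit = f_dist.ppf(1 - P, dfn=len(a) - 1, dfd=len(b) - 1)  # cell N1 = n1 − 1, N2 = n2 − 1

print(f"v1/v2 = {ratio:.2f}, table value at P = {P}: {crit:.2f}, "
      f"significant: {ratio > crit}")
```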


1. A single, randomly chosen,† observation differs significantly from a given mean (μ) (standard deviation (σ) known).
      Function: {(x − μ)/σ}2          Cell: N1 = 1, N2 = ∞

2. The mean x̄ of n observations differs significantly from a given value (μ) (standard deviation (σ) known).
      Function: n{(x̄ − μ)/σ}2        Cell: N1 = 1, N2 = ∞

3. The mean x̄ of n observations differs significantly from a given value (μ) (standard deviation estimated (s) from sample).
      Function: t2 = n{(x̄ − μ)/s}2   Cell: N1 = 1, N2 = n − 1

4. Two means, of n1 and n2 observations respectively, differ significantly (standard deviation (s) estimated from samples).
      Function: t2 = (x̄1 − x̄2)2/{s2(1/n1 + 1/n2)}, s2 being the pooled variance estimate
      Cell: N1 = 1, N2 = n1 + n2 − 2

5. The numbers (o) of observations falling into k classes differ significantly from expected numbers (e) (total number expected made to agree with observation).
      Function: Σ{(o − e)2/e}/(k − 1); no e should be less than 5
      Cell: N1 = k − 1, N2 = ∞

6. The numbers (o) of observations falling into k classes differ significantly from expected numbers (e) (expected numbers calculated from observations via a function of l parameters).
      Function: Σ{(o − e)2/e}/(k − l − 1); no e should be less than 5
      Cell: N1 = k − l − 1, N2 = ∞

7. One variance (v1) estimated from n1 observations is significantly larger than another (v2) estimated from n2 observations.
      Function: v1/v2                 Cell: N1 = n1 − 1, N2 = n2 − 1

8. One variance estimated from n1 observations is significantly different from another estimated from n2 observations.
      Function: v1/v2, where v1 is the larger of the two estimates, v2 the smaller‡
      Cell: N1 = n1 − 1, N2 = n2 − 1

9. An observed proportion, r out of n, differs significantly from a given proportion p.
      (a) p < r/n.  Function: (1 − p)r/{p(n − r + 1)}    Cell: N1 = 2(n − r + 1), N2 = 2r
      (b) p > r/n.  Function: p(n − r)/{(1 − p)(r + 1)}  Cell: N1 = 2(r + 1), N2 = 2(n − r)

† E.g. this test would not be legitimate for testing the largest of a set.
‡ For this test the values of P are to be doubled; the three lines of the table are then for levels of significance P = 0.10, 0.02, 0.002.

This table is abridged, by kind permission of the authors and publishers, from Table V of Statistical Tables for Biological, Agricultural and Medical Research by R. A. Fisher and F. Yates (Oliver and Boyd).


Table for significance tests

N1 runs across the table and N2 down it; the three entries in each cell are, reading downwards, for P = 0.05, 0.01 and 0.001.

 N2\N1      1      2      3      4      5      6      8     12     24      ∞

  1     161.4  199.5  215.7  224.6  230.2  234.0  238.9  243.9  249.0  254.3
        4052   4999   5403   5625   5764   5859   5981   6106   6234   6366
        (values for P = 0.001 too large for entry)

  2      18.5   19.0   19.2   19.2   19.3   19.3   19.4   19.4   19.5   19.5
         98.5   99.0   99.2   99.2   99.3   99.3   99.4   99.4   99.5   99.5
        998.5  999.0  999.2  999.2  999.3  999.3  999.4  999.4  999.5  999.5

  3      10.1    9.6    9.3    9.1    9.0    8.9    8.8    8.7    8.6    8.5
         34.1   30.8   29.5   28.7   28.2   27.9   27.5   27.1   26.6   26.1
        167.5  148.5  141.1  137.1  134.6  132.8  130.6  128.3  125.9  123.5

  4       7.7    6.9    6.6    6.4    6.3    6.2    6.0    5.9    5.8    5.6
         21.2   18.0   16.7   16.0   15.5   15.2   14.8   14.4   13.9   13.5
         74.1   61.2   56.2   53.4   51.7   50.5   49.0   47.4   45.8   44.1

  5       6.6    5.8    5.4    5.2    5.1    5.0    4.8    4.7    4.5    4.4
         16.3   13.3   12.1   11.4   11.0   10.7   10.3    9.9    9.5    9.0
         47.0   36.6   33.2   31.1   29.7   28.8   27.6   26.4   25.1   23.8

  6       6.0    5.1    4.8    4.5    4.4    4.3    4.1    4.0    3.8    3.7
         13.7   10.9    9.8    9.1    8.7    8.5    8.1    7.7    7.3    6.9
         35.5   27.0   23.7   21.9   20.8   20.0   19.0   18.0   16.9   15.7

  7       5.6    4.7    4.3    4.1    4.0    3.9    3.7    3.6    3.4    3.2
         12.2    9.5    8.5    7.8    7.5    7.2    6.8    6.5    6.1    5.6
         29.2   21.7   18.8   17.2   16.2   15.5   14.6   13.7   12.7   11.7

  8       5.3    4.5    4.1    3.8    3.7    3.6    3.4    3.3    3.1    2.9
         11.3    8.6    7.6    7.0    6.6    6.4    6.0    5.7    5.3    4.9
         25.4   18.5   15.8   14.4   13.5   12.9   12.0   11.2   10.3    9.3

  9       5.1    4.3    3.9    3.6    3.5    3.4    3.2    3.1    2.9    2.7
         10.6    8.0    7.0    6.4    6.1    5.8    5.5    5.1    4.7    4.3
         22.9   16.4   13.9   12.6   11.7   11.1   10.4    9.6    8.7    7.8

 10       5.0    4.1    3.7    3.5    3.3    3.2    3.1    2.9    2.7    2.5
         10.0    7.6    6.6    6.0    5.6    5.4    5.1    4.7    4.3    3.9
         21.0   14.9   12.6   11.3   10.5    9.9    9.2    8.4    7.6    6.8

 11       4.8    4.0    3.6    3.4    3.2    3.1    2.9    2.8    2.6    2.4
          9.6    7.2    6.2    5.7    5.3    5.1    4.7    4.4    4.0    3.6
         19.7   13.8   11.6   10.3    9.6    9.0    8.4    7.6    6.8    6.0

 12       4.7    3.9    3.5    3.3    3.1    3.0    2.8    2.7    2.5    2.3
          9.3    6.9    6.0    5.4    5.1    4.8    4.5    4.2    3.8    3.4
         18.6   13.0   10.8    9.6    8.9    8.4    7.7    7.0    6.2    5.4

 13       4.7    3.8    3.4    3.2    3.0    2.9    2.8    2.6    2.4    2.2
          9.1    6.7    5.7    5.2    4.9    4.6    4.3    4.0    3.6    3.2
         17.8   12.3   10.2    9.1    8.4    7.9    7.2    6.5    5.8    5.0

 14       4.6    3.7    3.3    3.1    3.0    2.8    2.7    2.5    2.3    2.1
          8.9    6.5    5.6    5.0    4.7    4.5    4.1    3.8    3.4    3.0
         17.1   11.8    9.7    8.6    7.9    7.4    6.8    6.1    5.4    4.6

 15       4.5    3.7    3.3    3.1    2.9    2.8    2.6    2.5    2.3    2.1
          8.7    6.4    5.4    4.9    4.6    4.3    4.0    3.7    3.3    2.9
         16.6   11.3    9.3    8.3    7.6    7.1    6.5    5.8    5.1    4.3

 16       4.5    3.6    3.2    3.0    2.9    2.7    2.6    2.4    2.2    2.0
          8.5    6.2    5.3    4.8    4.4    4.2    3.9    3.6    3.2    2.8
         16.1   11.0    9.0    7.9    7.3    6.8    6.2    5.5    4.8    4.1

 17       4.5    3.6    3.2    3.0    2.8    2.7    2.5    2.4    2.2    2.0
          8.4    6.1    5.2    4.7    4.3    4.1    3.8    3.5    3.1    2.7
         15.7   10.7    8.7    7.7    7.0    6.6    6.0    5.3    4.6    3.8

 18       4.4    3.6    3.2    2.9    2.8    2.7    2.5    2.3    2.1    1.9
          8.3    6.0    5.1    4.6    4.2    4.0    3.7    3.4    3.0    2.6
         15.4   10.4    8.5    7.5    6.8    6.4    5.8    5.1    4.4    3.7

 19       4.4    3.5    3.1    2.9    2.7    2.6    2.5    2.3    2.1    1.9
          8.2    5.9    5.0    4.5    4.2    3.9    3.6    3.3    2.9    2.5
         15.1   10.2    8.3    7.3    6.6    6.2    5.6    5.0    4.3    3.5

 20       4.4    3.5    3.1    2.9    2.7    2.6    2.4    2.3    2.1    1.8
          8.1    5.8    4.9    4.4    4.1    3.9    3.6    3.2    2.9    2.4
         14.8   10.0    8.1    7.1    6.5    6.0    5.4    4.8    4.1    3.4

 22       4.3    3.4    3.0    2.8    2.7    2.5    2.4    2.2    2.0    1.8
          7.9    5.7    4.8    4.3    4.0    3.8    3.5    3.1    2.7    2.3
         14.4    9.6    7.8    6.8    6.2    5.8    5.2    4.6    3.9    3.2

 24       4.3    3.4    3.0    2.8    2.6    2.5    2.4    2.2    2.0    1.7
          7.8    5.6    4.7    4.2    3.9    3.7    3.4    3.0    2.7    2.2
         14.0    9.3    7.6    6.6    6.0    5.6    5.0    4.4    3.7    3.0

 26       4.2    3.4    3.0    2.7    2.6    2.5    2.3    2.1    1.9    1.7
          7.7    5.5    4.6    4.1    3.8    3.6    3.3    3.0    2.6    2.1
         13.7    9.1    7.4    6.4    5.8    5.4    4.8    4.2    3.6    2.8

 28       4.2    3.3    2.9    2.7    2.6    2.4    2.3    2.1    1.9    1.7
          7.6    5.5    4.6    4.1    3.8    3.5    3.2    2.9    2.5    2.1
         13.5    8.9    7.2    6.3    5.7    5.2    4.7    4.1    3.5    2.7

 30       4.2    3.3    2.9    2.7    2.5    2.4    2.3    2.1    1.9    1.6
          7.6    5.4    4.5    4.0    3.7    3.5    3.2    2.8    2.5    2.0
         13.3    8.8    7.0    6.1    5.5    5.1    4.6    4.0    3.4    2.6

 40       4.1    3.2    2.8    2.6    2.4    2.3    2.2    2.0    1.8    1.5
          7.3    5.2    4.3    3.8    3.5    3.3    3.0    2.7    2.3    1.8
         12.6    8.3    6.6    5.7    5.1    4.7    4.2    3.6    3.0    2.2

 60       4.0    3.2    2.8    2.5    2.4    2.3    2.1    1.9    1.7    1.4
          7.1    5.0    4.1    3.6    3.3    3.1    2.8    2.5    2.1    1.6
         12.0    7.8    6.2    5.3    4.8    4.4    3.9    3.3    2.7    1.9

120       3.9    3.1    2.7    2.4    2.3    2.2    2.0    1.8    1.6    1.3
          6.9    4.8    3.9    3.5    3.2    3.0    2.7    2.3    1.9    1.4
         11.4    7.3    5.8    4.9    4.4    4.0    3.6    3.0    2.4    1.6

  ∞       3.8    3.0    2.6    2.4    2.2    2.1    1.9    1.8    1.5    1.0
          6.6    4.6    3.8    3.3    3.0    2.8    2.5    2.2    1.8    1.0
         10.8    6.9    5.4    4.6    4.1    3.7    3.3    2.7    2.1    1.0
E.D.v.Rest/A.R.Colclough
