Version 2.0, updated 28 October 2011

8.6 Quality Control and Proficiency Testing in Chemical Laboratories

Internal monitoring, or quality control, comprises a systematic series of checks to monitor day-to-day and batch-to-batch variation in performance. It is best achieved by periodic re-measurement of check specimens which resemble as closely as possible real test pieces or samples. These specimens need to be stable and available in sufficient quantity for measurements over an extended period of time. Good practice normally dictates the use of several specimens, some giving a similar response to the test pieces and others with zero or very low response (blank determinations). Generally these QC samples will be prepared on a routine basis by the measurement staff in accordance with procedures documented in the quality system. This type of QC is most appropriate for monitoring the performance of a specific procedure or instrument. In some organisations "blind" samples (i.e. samples where the property to be measured is already known and which are included with other work without prior warning) are also used. The use of blind samples is sometimes controversial and may be difficult or expensive to achieve. When feasible, however, this approach provides a more realistic assessment of overall performance of the laboratory, including aspects such as sample reception, selection of appropriate procedures, and reporting.

Routine internal QC is an essential part of a quality assurance programme and its provision should be devised and documented with the same care as the measurement methods themselves. The level and type of QC must be agreed by the responsible scientist or engineer and be clearly defined in a quality manual or other documentation. Important considerations include the type of QC samples, how they will be sourced, the frequency of use, the results to be recorded, control limits, analysis and review of the QC data, and procedures for dealing with results which fall outside specification. Decisions on most of these aspects will depend on knowledge of the measurement procedures and their robustness, or lack of it, on customer requirements, and on the consequences of errors in the measurement of individual samples. QC samples should be as similar as possible to the real samples likely to be encountered during a run of measurements. If the real samples vary widely, several different types of QC material should be used to reflect the range likely to be encountered. With some types of measurement, for example trace chemical analysis, it is also necessary to include QC samples representative of a sample or reagent blank. QC samples may be obtained from previous sample batches or prepared in house using large batches of material similar to the anticipated samples. In either case, a key requirement is for a batch of QC samples, or a QC material, to be sufficiently homogeneous and stable over a reasonable period of time. This ensures consistent QC data which can be used to identify trends or long-term drift of measurements. Preparation and use of QC materials should be documented in the same way as the measurement procedures.
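As an illustration of one common convention for setting control limits, the following Python sketch (an assumption for this text, not drawn from the original) derives Shewhart-style warning and action limits from a set of baseline QC results, placing them at the mean plus or minus two and three standard deviations respectively.

    # Sketch: deriving Shewhart-style control limits from baseline QC results.
    # The mean +/- 2s (warning) and +/- 3s (action) convention is assumed here;
    # the limits actually used should follow the documented QC plan.
    from statistics import mean, stdev

    def control_limits(baseline):
        """Return (centre, warning half-width, action half-width)."""
        m = mean(baseline)
        s = stdev(baseline)              # sample standard deviation
        return m, 2 * s, 3 * s

    # Hypothetical baseline determinations of a QC material
    baseline = [10.2, 10.0, 9.9, 10.1, 10.3, 10.0, 9.8, 10.2, 10.1, 10.0]
    centre, warning, action = control_limits(baseline)
    print(f"centre {centre:.2f}, warning +/- {warning:.2f}, action +/- {action:.2f}")

Returning half-widths rather than absolute limits makes it easy to reuse the same values in the review step sketched after the next paragraph.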

The data obtained from QC specimens should be archived with the relevant test data and also subjected to appropriate statistical analysis. The latter, including the use of control charts, is designed to highlight deviations of the method from the specification. It is important that this is done promptly, so that the data can be used to identify problems with an individual sample run as well as any long-term trends. A wide variety of statistical and charting techniques are used to analyse QC data and present them with sufficient clarity to facilitate acting on the outcome where necessary. Some of the approaches are described in references cited in the bibliography, but the actual choice is less important than reviewing and acting on the QC data in a timely and appropriate way. An essential element of every QC regime is the existence of clear, mandatory instructions setting out the steps to be taken when data falling outside the required limits are detected. It is also important, regardless of the effort put into collecting and analysing QC data, to recognise the strengths and weaknesses of internal QC. It is most effective in identifying problems such as instrumental drift, calibration failure, reagent contamination, or consistent operator error. Problems less likely to be identified include measurement bias, incorrect estimation of measurement uncertainty, incorrect identification of samples, transcription errors in reporting results, or errors in handling individual samples.
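Building on the limits above, the sketch below (again a hypothetical illustration, not a prescribed procedure) screens a run of QC results against the warning and action limits and applies one simple trend rule, flagging a long run of consecutive results on the same side of the centre line as possible drift.

    # Sketch: screening QC results against control limits and a simple trend rule.
    # Flagging 7 consecutive results on one side of the centre line is one common
    # drift check; the rules in force should follow the laboratory's QC documentation.

    def review_qc(results, centre, warning, action, run_length=7):
        """Yield (index, result, status) for each QC result in turn."""
        side_run, last_side = 0, 0
        for i, x in enumerate(results):
            if abs(x - centre) > action:
                status = "outside action limits - stop and investigate"
            elif abs(x - centre) > warning:
                status = "outside warning limits - review"
            else:
                status = "in control"
            side = 1 if x > centre else (-1 if x < centre else 0)
            side_run = side_run + 1 if side != 0 and side == last_side else 1
            last_side = side
            if side_run >= run_length:
                status += "; possible drift (long run on one side of centre)"
            yield i, x, status

    # Hypothetical run, using the centre and half-widths from the previous sketch
    for i, x, status in review_qc([10.10, 10.38, 10.55, 9.58, 10.05], 10.06, 0.30, 0.45):
        print(i, x, status)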

Careful selection of the measurement method is essential if reliable data are to be obtained. A key factor in this process is determining whether the method is capable of providing a sufficiently reliable result, allowing for cost or other constraints such as time and safety considerations which may apply. There is no simple, established approach for doing this; it frequently comes down to the application of expert knowledge and judgement by the measurement scientist. The amount of work required to select the method varies widely. In some cases use of an existing, specific method may be dictated by company policy, by the customer, or by a regulatory body. At the other extreme an extensive experimental programme may be needed to develop a novel method. More generally, it will be necessary to seek guidance from previous work, from colleagues, or from the published literature and to carry out a modest experimental programme. In doing this it is advantageous to follow a few simple rules.

External monitoring is designed to address some of the weaknesses of internal QC. Generally the laboratory is presented with an unknown sample and must report its result to an independent organisation which can compare the result with those from other laboratories and/or an independent reference value. This may simply involve the inclusion of check samples by the originator of a batch of samples. A widely used approach is known as proficiency testing (PT). This is based on cooperative schemes which arrange for distribution of a test sample and subsequent collation and analysis of the data provided by participants. Such schemes almost invariably show much wider variability of results than would be found within a single organisation, even when a standard measurement method is used. As such they often provide a more realistic estimate of actual measurement uncertainty than that quoted by individual measurement scientists. Most PT schemes rank laboratories against a "true value", which may be obtained independently by the organisers or derived as a consensus of the participants' results. There are many statistical approaches which may be used to derive a ranking, but by far the most common is the so-called "z score". This quantity represents the distance between the participant's result and the assigned (reference or consensus) value in units of the standard deviation used for the assessment. The score is negative when the participant's result is below the assigned value, positive when above. The scheme organiser, or possibly some other authority such as a regulator or accreditation body, will set the value of an acceptable score. For example, an absolute z score of 2 or less may be designated as a pass, a score between 2 and 3 may invoke a warning, and a score of 3 or more is treated as a fail. As with internal QC, performance in proficiency tests or other external monitoring schemes should be reviewed and appropriate action taken when the laboratory fails to meet agreed performance standards.
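The z score described above is normally calculated as z = (x − X)/σ, where x is the participant's result, X the assigned value and σ the standard deviation used for the proficiency assessment. A minimal Python sketch of the calculation and the pass/warning/fail classification mentioned above is given below; the exact thresholds, and the way X and σ are set, vary from scheme to scheme.

    # Sketch: z score for a proficiency-testing result.
    # z = (x - X) / sigma, with x the participant's result, X the assigned value
    # and sigma the standard deviation for proficiency assessment.
    # The thresholds below follow the common |z| <= 2 pass, 2 < |z| < 3 warning,
    # |z| >= 3 fail convention; individual schemes may set different limits.

    def z_score(result, assigned_value, sigma):
        return (result - assigned_value) / sigma

    def classify(z):
        if abs(z) <= 2:
            return "pass"
        if abs(z) < 3:
            return "warning"
        return "fail"

    z = z_score(result=52.4, assigned_value=50.0, sigma=1.5)   # hypothetical values
    print(f"z = {z:.2f}: {classify(z)}")                       # z = 1.60: pass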

In the past, proficiency testing schemes were often relatively informal activities, organised on a voluntary basis by a group of laboratories with similar interests, but today PT schemes are widely recognised as a key tool for monitoring and improving measurement laboratory performance. This requires professionally organised schemes operating to internationally recognised standards. Large numbers of schemes in many application areas are now operated in accordance with ISO/IEC 17043:2010, which sets out general requirements for proficiency testing. In addition, accreditation bodies (see below) in a number of countries now have arrangements to accredit PT providers against this standard. Schemes operated in this way should not only monitor laboratory performance but also make provision to warn participants of problems and, where feasible, offer guidance on possible improvements. Many organisations, including regulatory or standards bodies, large companies and government authorities, now recognise the additional value of data from PT schemes for tasks such as identifying problems with widely used measurement techniques or calibrants and for validating industry-standard methods.

M. Sargent
