CHAPTER 4
TESTING, CALIBRATION AND INTERCOMPARISON
4.1                    General
One of the purposes of WMO, set forth in Article 2 (c) of the WMO Convention, is “to promote standardization of meteorological and related observations and to ensure the uniform publication of observations and statistics”. For this purpose, sets of standard procedures and recommended practices have been developed, and their essence is contained in this Guide.
Valid observational data can be obtained only when a comprehensive quality assurance programme is applied to the instruments and the network. Calibration and testing are inherent elements of a quality assurance programme. Other elements include clear definition of requirements, instrument selection deliberately based on the requirements, siting criteria, maintenance and logistics. These other elements must be considered when developing calibration and test plans. On an international scale, the extension of quality assurance programmes to include intercomparisons is important for the establishment of compatible data sets.
Because of the importance of standardization across national boundaries, several WMO regional associations have set up Regional Instrument Centres to organize and assist with standardization and calibration activities. Their terms of reference and locations are given in Part I, Chapter 1, Annex 1.A. Similarly, on the recommendation of JCOMM, a network of Regional Marine Instrument Centres has been set up to provide similar functions for marine meteorology and related oceanographic measurements. Their terms of reference and locations are given in Part II, Chapter 4, Annex 4.A.
National and international standards and guidelines exist for many aspects of testing and evaluation, and should be used where appropriate. Some of them are referred to in this chapter.
4.1.1                Definitions
Definitions of terms in metrology are given in the International vocabulary of metrology – Basic and general concepts and associated terms (VIM), issued by the Joint Committee for Guides in Metrology (JCGM 200:2012). Many of them are reproduced in Part I, Chapter 1, and some are repeated here for convenience. They are not universally used and differ in some respects from terminology commonly used in meteorological practice; however, the VIM definitions are recommended for use in meteorology. The JCGM document is a joint production with the International Bureau of Weights and Measures, the International Organization of Legal Metrology, the International Electrotechnical Commission and other similar international bodies.
The VIM terminology differs from common usage in the following respects in particular:
Accuracy (of a measurement) is the closeness of the agreement between a measured quantity value and a true quantity value of a measurand. The accuracy of a measurement is sometimes also understood as the closeness of agreement between measured quantity values that are being attributed to the measurand, and it is a qualitative term. It is possible to refer to an instrument or a measurement as having a high accuracy, but the quantitative measure of the accuracy is the uncertainty.
Uncertainty is expressed as a non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used.
The error of a measurement is the measured quantity value minus a reference quantity  value (the deviation has the other sign), and it is composed of the random and systematic errors (the term bias is commonly used for systematic error).
Repeatability is expressed as the closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under a set of repeatability conditions of measurement, which includes the same measurement procedure, the same operators, the same measuring system, the same operating conditions and the same location, with replicate measurements made over a short period of time.
Reproducibility is expressed as the closeness of agreement between indications or measured quantity values obtained by replicate measurements on the same or similar objects under a set of reproducibility conditions of measurement, which includes different locations, operators and measuring systems, and replicate measurements. VIM does define precision, but advises against the use of the term.
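To make these distinctions concrete, the following minimal sketch (in Python, with invented readings) computes the bias, the repeatability and the standard uncertainty of the mean from replicate readings taken against a reference; a full evaluation would follow the Guide to the Expression of Uncertainty in Measurement (JCGM 100:2008) and would also include the uncertainty of the reference itself.

    import statistics

    def summarize_replicates(indications, reference_value):
        """Summarize replicate readings of one measurand against a reference value."""
        mean_indication = statistics.fmean(indications)
        bias = mean_indication - reference_value          # systematic error (bias)
        repeatability = statistics.stdev(indications)     # spread under repeatability conditions
        u_mean = repeatability / len(indications) ** 0.5  # standard uncertainty of the mean
        return bias, repeatability, u_mean

    # Ten replicate thermometer readings (degC) against a 20.000 degC reference bath
    bias, s_r, u_mean = summarize_replicates(
        [20.11, 20.09, 20.12, 20.10, 20.08, 20.11, 20.13, 20.09, 20.10, 20.12], 20.000)
    print(f"bias = {bias:+.3f} degC, repeatability = {s_r:.3f} degC, u(mean) = {u_mean:.3f} degC")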
4.1.2                Testing and calibration programmes
Before using atmospheric measurements taken with a particular sensor for meteorological purposes, answers to the following questions are needed:
(a)     What is the sensor or system accuracy?
(b)     What is the variability of measurements in a network containing such systems or sensors?
(c)      What change, or bias, will there be in the data provided by the sensor or system if its siting location is changed?
(d)     What change or bias will there be in the data if it replaces a different sensor or system measuring the same weather element(s)?
To answer these questions and to assure the validity and relevance of the measurements produced by a meteorological sensor or system, some combination of calibration, laboratory testing and functional testing is needed.
Calibration and test programmes should be developed and standardized, based on the expected climatic variability, environmental and electromagnetic interference under which systems and sensors are expected to operate. For example, considered factors might include the expected range of temperature, humidity and wind speed; whether or not a sensor or system must operate in a marine environment, or in areas with blowing dust or sand; the expected variation in electrical voltage and phase, and signal and power line electrical transients; and the expected average and maximum electromagnetic interference. Meteorological Services may purchase calibration and test services from private laboratories and companies, or set up test organizations to provide those services.
It is most important that at least two like sensors or systems be subjected to each test in any test programme. This allows for the determination of the expected variability in the sensor or system, and also facilitates detecting problems.
4.2                    Testing
4.2.1                The purpose of testing
Sensors and systems are tested to develop information on their performance under specified conditions of use. Manufacturers typically test their sensors and systems and in some cases publish operational specifications based on their test results. However, it is extremely important for the user, typically a Meteorological Service, to develop and carry out its own test programme or to have access to an independent testing authority.
Testing can be broken down into environmental testing, electrical/electromagnetic interference testing and functional testing. A test programme may consist of one or more of these elements.
In general, a test programme is designed to ensure that a sensor or system will meet its specified performance, maintenance and mean-time-between-failure requirements under all expected operating, storage and transportation conditions. Test programmes are also designed to develop information on the variability that can be expected in a network of like sensors, in functional reproducibility, and in the comparability of measurements between different sensors or systems.
Knowledge of both functional reproducibility and comparability is very important to climatology, where a single long‑term database typically contains information from sensors and systems that through time use different sensors and technologies to measure the same meteorological variable. In fact, for practical applications, good operational comparability between instruments is a more valuable attribute than precise absolute calibration. This information is developed in functional testing.
Even when a sensor or system is delivered with a calibration report, environmental and possibly additional calibration testing should be performed. An example of this is a modern temperature measurement system, where at present the probe is likely to be a resistance temperature device. Typically, several resistance temperature devices are calibrated in a temperature bath by the manufacturer and a performance specification is provided based on the results of the calibration. However, the temperature system which produces the temperature value also includes power supplies and electronics, which can also be affected by temperature. Therefore, it is important to operate the electronics and probe as a system through the temperature range during the calibration. It is good practice also to replace the probe with a resistor with a known temperature coefficient, which will produce a known temperature output, and to operate the electronics through the entire temperature range of interest to ensure proper temperature compensation of the system electronics.
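As an illustration of this resistor-substitution check, the expected indication can be computed from the Callendar–Van Dusen relation for platinum resistance thermometers with the standard IEC 60751 coefficients. The sketch below (Python; the choice of check points is illustrative, not prescribed) gives the resistance an ideal Pt100 presents at a few temperatures, and hence the values a substituted precision resistor should emulate.

    # Expected resistance of an ideal Pt100 element (IEC 60751), for t >= 0 degC:
    # R(t) = R0 * (1 + A*t + B*t**2)  (Callendar-Van Dusen, simplified above 0 degC)
    R0 = 100.0      # ohm at 0 degC
    A = 3.9083e-3   # per degC
    B = -5.775e-7   # per degC squared

    def pt100_resistance(t_celsius):
        """Resistance (ohm) of an ideal IEC 60751 Pt100 at t_celsius >= 0."""
        return R0 * (1.0 + A * t_celsius + B * t_celsius ** 2)

    # A fixed precision resistor of about 107.79 ohm should therefore read close
    # to 20 degC; any deviation points at the system electronics, not the probe.
    for t in (0.0, 20.0, 50.0):
        print(f"{t:5.1f} degC -> {pt100_resistance(t):8.3f} ohm")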
Users should also have a programme for testing randomly selected production sensors and systems, even if pre‑production units have been tested, because even seemingly minor changes in material, configurations or manufacturing processes may affect the operating characteristics of sensors and systems.
The International Organization for Standardization has standards (ISO, 1989a, 1989b) which specify sampling plans and procedures for the inspection of lots of items.
4.2.2                Environmental testing
4.2.2.1             Definitions
The following definitions serve to introduce the qualities of an instrument system that should be the subject of operational testing:
Operational conditions: Those conditions or a set of conditions encountered or expected to be encountered during the time an item is performing its normal operational function in full compliance with its performance specification.
Withstanding conditions: Those conditions or a set of conditions outside the operational conditions which the instrument is expected to withstand. They may have only a small probability of occurrence during an item’s lifetime. The item is not expected to perform its operational function when these withstanding conditions exist. The item is, however, expected to be able to survive these conditions and return to normal performance when the operational conditions return.
Outdoor environment: Those conditions or a set of conditions encountered or expected to be encountered during the time that an item is performing its normal operational function in an unsheltered, uncontrolled natural environment.
Indoor environment: Those conditions or a set of conditions encountered or expected to be encountered during the time that an item is energized and performing its normal operational function within an enclosed operational structure. Consideration is given to both the uncontrolled indoor environment and the artificially controlled indoor environment.
Transportation environment: Those conditions or a set of conditions encountered or expected to be encountered during the transportation portion of an item’s life. Consideration is given to the major transportation modes – road, rail, ship and air transportation, and also to the complete range of environments encountered – before and during transportation, and during the unloading phase. The item is normally housed in its packaging/shipping container during exposure to the transportation environment.
Storage environment: Those conditions or a set of conditions encountered or expected to be encountered during the time an item is in its non‑operational storage mode. Consideration is given to all types of storage, from the open storage situation, in which an item is stored unprotected and outdoors, to the protected indoor storage situation. The item is normally housed in its packaging/shipping container during exposure to the storage environment.
The International Electrotechnical Commission also has standards (IEC, 1990) to classify environmental conditions which are more elaborate than the above. They define ranges of meteorological, physical and biological environments that may be encountered by products being transported, stored, installed and used, which are useful for equipment specification and for planning tests.
4.2.2.2             Environmental test programme
Environmental tests in the laboratory enable rapid testing over a wide range of conditions, and can accelerate certain effects such as those of a marine environment with high atmospheric salt loading. The advantage of environmental tests over field tests is that many tests can be accelerated in a well‑equipped laboratory, and equipment may be tested over a wide range of climatic variability. Environmental testing is important; it can give insight into potential problems and generate confidence to go ahead with field tests, but it cannot replace field testing.
An environmental test programme is usually designed around a subset of the following conditions: high temperature, low temperature, temperature shock, temperature cycling, humidity, wind, rain, freezing rain, dust, sunshine (insolation), low pressure, transportation vibration and transportation shock. The ranges, or test limits, of each test are determined by the expected environments (operational, withstanding, outdoor, indoor, transportation, storage) that are expected to be encountered.
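One convenient way of recording such limits is a criteria table keyed by condition and environment class, as in the following sketch (Python; every figure is a placeholder for illustration, not a WMO-prescribed limit).

    # Hypothetical environmental test criteria: (low, high) limits per condition,
    # one set for operational conditions and a wider set for withstanding
    # conditions. All figures are illustrative placeholders.
    CRITERIA = {
        "temperature (degC)":    {"operational": (-40.0, 50.0), "withstanding": (-50.0, 60.0)},
        "relative humidity (%)": {"operational": (5.0, 100.0),  "withstanding": (0.0, 100.0)},
        "wind speed (m/s)":      {"operational": (0.0, 75.0),   "withstanding": (0.0, 100.0)},
    }

    def classify(condition, value):
        """Return which environment class a given condition value falls into."""
        low, high = CRITERIA[condition]["operational"]
        if low <= value <= high:
            return "operational"
        low, high = CRITERIA[condition]["withstanding"]
        if low <= value <= high:
            return "withstanding"
        return "outside specification"

    print(classify("temperature (degC)", 55.0))   # -> withstanding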
The purpose of an environmental test programme document is to establish standard environmental test criteria and corresponding test procedures for the specification, procurement, design and testing of equipment. This document should be based on the expected environmental operating conditions and extremes.
For example, the United States prepared its National Weather Service standard environmental criteria and test procedures (NWS, 1984), based on a study which surveyed and reported the expected operational and extreme ranges of the various weather elements in the United States operational area, and presented proposed test criteria (NWS, 1980). These criteria and procedures consist of three parts:
(a)     Environmental test criteria and test limits for outdoor, indoor, and transportation/storage environments;
(b)     Test procedures for evaluating equipment against the environmental test criteria;
(c)      Rationale providing background information on the various environmental conditions to which equipment may be exposed, their potential effect(s) on the equipment, and the corresponding rationale for the recommended test criteria.
4.2.3                Electrical and electromagnetic interference testing
The prevalence of sensors and automated data collection and processing systems that contain electronic components makes it necessary, in many cases, to include in an overall test programme tests of performance in operational electrical environments and under electromagnetic interference.
An electrical/electromagnetic interference test programme document should be prepared. The purpose of the document is to establish standard electrical/electromagnetic interference test criteria and corresponding test procedures and to serve as a uniform guide in the specification of electrical/electromagnetic interference susceptibility requirements for the procurement and design of equipment.
The document should be based on a study that quantifies the expected power line and signal line transient levels and rise times caused by natural phenomena, such as thunderstorms. It should also include testing for expected power variations, both voltage and phase. If the equipment is expected to operate in an airport environment, or other environment with possible electromagnetic radiation interference, this should also be quantified and included in the standard. A purpose of the programme may also be to ensure that the equipment is not an electromagnetic radiation generator. Particular attention should be paid to equipment containing a microprocessor and, therefore, a crystal clock, which is critical for timing functions.
4.2.4                Functional testing
Calibration and environmental testing provide a necessary but not sufficient basis for defining the operational characteristics of a sensor or system, because calibration and laboratory testing cannot completely define how the sensor or system will operate in the field. It is impossible to simulate the synergistic effects of all the changing weather elements on an instrument in all of its required operating environments.
Functional testing is simply testing in the outdoor and natural environment where instruments are expected to operate over a wide variety of meteorological conditions and climatic regimes, and, in the case of surface instruments, over ground surfaces of widely varying albedo. Functional testing is required to determine the adequacy of a sensor or system while it is exposed to wide variations in wind, precipitation, temperature, humidity, and direct, diffuse and reflected solar radiation. Functional testing becomes more important as newer technology sensors, such as those using electro‑optic, piezoelectric and capacitive elements, are placed into operational use. The readings from these sensors may be affected by adventitious conditions such as insects, spiders and their webs, and the size distribution of particles in the atmosphere, all of which must be determined by functional tests.
For many applications, comparability must be tested in the field. This is done with side‑by‑side testing of like and different sensors or systems against a field reference standard. These concepts are presented in Hoehne (1971; 1972; 1977).
Functional testing may be planned and carried out by private laboratories or by the test department of the Meteorological Service or other user organization. For both the procurement and operation of equipment, the educational and skill level of the observers and technicians who will use the system must be considered. Use of the equipment by these staff members should be part of the test programme. The personnel who will install, use, maintain and repair the equipment should evaluate those portions of the sensor or system, including the adequacy of the instructions and manuals that they will use in their job. Their skill level should also be considered when preparing procurement specifications.
4.3                    Calibration
4.3.1                The purpose of calibration
Sensor or system calibration is the first step in defining data validity. In general, it involves comparison against a known standard to determine how closely instrument output matches the standard over the expected range of operation. Performing laboratory calibration carries the implicit assumption that the instrument’s characteristics are stable enough to retain the calibration in the field. A calibration history over successive calibrations should provide confidence in the instrument’s stability.
Specifically, calibration is the operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication. Calibration should define a sensor's or system's bias, or average deviation from the standard against which it is calibrated, its random errors, the range over which the calibration is valid, and the existence of any thresholds or non‑linear response regions. It should also define resolution and hysteresis. Hysteresis should be identified by cycling the sensor over its operating range during calibration. The result of a calibration is often expressed as a calibration factor or as a series of calibration factors in the form of a calibration table or calibration curve. The results of a calibration must be recorded in a document called a calibration certificate or a calibration report.
The calibration certificate or report should define any bias that can then be removed through mechanical, electrical or software adjustment. The remaining random error is not repeatable and cannot be removed, but can be statistically defined through a sufficient number of measurement repetitions during calibration.
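As a minimal sketch of these two steps, the following Python fragment fits a linear calibration function to paired readings of a measurement standard and an instrument, giving a correction curve whose residuals characterize the remaining random error. The pressure values are invented for illustration.

    # Least-squares fit of a linear calibration function (indication -> corrected
    # value) from paired readings of a measurement standard and the instrument.
    def fit_linear_calibration(standard, indication):
        n = len(standard)
        mean_x = sum(indication) / n
        mean_y = sum(standard) / n
        s_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(indication, standard))
        s_xx = sum((x - mean_x) ** 2 for x in indication)
        slope = s_xy / s_xx
        offset = mean_y - slope * mean_x
        return slope, offset

    # Five calibration points (hPa): reference barometer vs instrument indication
    ref = [950.00, 975.00, 1000.00, 1025.00, 1050.00]
    ind = [950.35, 975.32, 1000.41, 1025.37, 1050.44]
    slope, offset = fit_linear_calibration(ref, ind)
    residuals = [r - (slope * i + offset) for r, i in zip(ref, ind)]
    print(f"corrected = {slope:.6f} * indication {offset:+.3f}")
    print(f"largest residual: {max(abs(r) for r in residuals):.3f} hPa")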
4.3.2                Standards
The calibration of instruments or measurement systems is customarily carried out by comparing them against one or more measurement standards. These standards are classified according to their metrological quality. Their definitions (JCGM 200: 2012) are given in Part I, Chapter 1 and may be summarized as follows:
Primary standard: A measurement standard established using a primary reference measurement procedure, or created as an artefact, chosen by convention.
Note: When these standards are relevant to the calibration laboratories of NMHSs or to RICs, they should also be traceable to the SI.
Secondary standard: A measurement standard established through calibration with respect to a primary measurement standard for a quantity of the same kind.
International standard: A measurement standard recognized by signatories to an international agreement and intended to serve worldwide.
National standard: A measurement standard recognized by national authority to serve in a state or economy as the basis for assigning quantity values to other measurement standards for the kind of quantity concerned.
Reference standard: A measurement standard designated for the calibration of other measurement standards for quantities of a given kind in a given organization or at a given location.
Working standard: A measurement standard that is used routinely to calibrate or verify measuring instruments or measuring systems.
Transfer device: A device used as an intermediary to compare standards.
Travelling standard: A measurement standard, sometimes of special construction, intended for transport between different locations.
Primary standards reside within major international or national metrology institutes (NMIs). In pressure measurement (see Part I, Chapter 4), the term is used for instruments based on physical principles, such as mercury barometers or dead-weight testers, and not according to the calibration and measurement capability (CMC). Secondary standards often reside in major calibration laboratories and are usually not suitable for field use; according to ISO/IEC 17025, these standards are generally called "reference measurement standards". Working standards are usually laboratory instruments that have been calibrated against a secondary standard. Working standards that may be used in the field are known as travelling standards. Travelling standard instruments may also be used to compare instruments in a laboratory or in the field. All standards used for a meteorological purpose and relevant to the calibration laboratories of NMHSs or to RICs should be traceable to the SI.
4.3.3                Traceability
Traceability is defined by JCGM 200: 2012 as:
“property of a measurement result whereby the result can be related to a reference through a documented unbroken chain of calibrations, each contributing to the measurement uncertainty.”
In meteorology, it is common practice for pressure measurements to be traceable through travelling standards, working standards and secondary standards to national standards, and the accumulated uncertainties are therefore known (except for those that arise in the field, which have to be determined by field testing). Temperature measurements lend themselves to the same practice.
The same principle must be applied to the measurement of any quantity for which measurements of known uncertainty are required.
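Where the individual calibration uncertainties in such a chain can be taken as independent, they accumulate in quadrature, following the Guide to the Expression of Uncertainty in Measurement (JCGM 100:2008). The sketch below applies this to an invented pressure traceability chain.

    from math import sqrt

    # Standard uncertainties (hPa) contributed at each link of an invented
    # pressure traceability chain; assumed independent of one another.
    chain = {
        "national standard":   0.02,
        "secondary standard":  0.05,
        "working standard":    0.08,
        "travelling standard": 0.10,
    }
    u_combined = sqrt(sum(u * u for u in chain.values()))
    print(f"combined standard uncertainty: {u_combined:.3f} hPa")
    print(f"expanded uncertainty (k = 2):  {2 * u_combined:.3f} hPa")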
4.3.4                Calibration practices
The calibration of meteorological instruments is normally carried out in a laboratory where appropriate measurement standards and calibration devices are located. These may be Regional Instrument Centres, national laboratories, private laboratories, or laboratories established within the Meteorological Service or other user organization. A calibration laboratory is responsible for maintaining the necessary qualities of its measurement standards and for keeping records of their traceability. Such laboratories can also issue calibration certificates, which should contain an estimate of the uncertainty of calibration. In order to guarantee traceability, the calibration laboratory should be recognized and authorized by the appropriate national authorities.
Manufacturers of meteorological instruments should deliver their quality products, for example standard barometers or thermometers, with calibration certificates or calibration reports. These documents may or may not be included in the basic price of the instrument, but may be available as options. Calibration certificates given by authorized calibration laboratories may be more expensive than factory certificates. As discussed in the previous section, environmental, functional and possibly additional calibration testing should be performed.
Users may also purchase calibration devices or measurement standards for their own laboratories. A good calibration device should always be combined with a proper measurement standard, for example, a liquid bath temperature calibration chamber with a set of certified liquid‑in‑glass thermometers, and/or certified resistance thermometers. For the example above, further considerations, such as the use of non‑conductive silicone fluid, should be applied. Thus, if a temperature-measurement device is mounted on an electronic circuit board, the entire board may be immersed in the bath so that the device can be tested in its operating configuration. Not only must the calibration equipment and standards be of high quality, but the engineers and technicians of a calibration laboratory must be well trained in basic metrology and in the use of available calibration devices and measurement standards.
Once instruments have passed initial calibration and testing and are accepted by the user, a programme of regular calibration checks and calibrations should be instituted. Instruments, such as mercury barometers, are easily subject to breakage when transported to field sites. At distant stations, these instruments should be kept stationary as far as possible, and should be calibrated against more robust travelling standards that can be moved from one station to another by inspectors. Travelling standards must be compared frequently against a working standard or reference standard in the calibration laboratory, and before and after each inspection tour.
Details of laboratory calibration procedures of, for example, barometers, thermometers, hygrometers, anemometers and radiation instruments are given in the relevant chapters of this Guide or in specialized handbooks. These publications also contain information concerning recognized international standard instruments and calibration devices. Calibration procedures for automatic weather stations require particular attention, as discussed in Part II, Chapter 1.

4.3.5                Field inspection practices
Field inspection offers the user the ability to check an instrument in place. Leaving the instrument installed at a meteorological station eliminates the downtime that would be incurred in removing it from the field and reinstalling it. The inspection is made at one point against the working standard, by placing the working standard as close to the instrument under inspection (IUI) as possible. Sufficient stabilization time must be allowed for temperature equilibrium to be reached between the working standard and the IUI. Attention must be paid to the proximity of the working standard to the IUI, temperature gradients, airflow, pressure differences, and any other factors that could influence the inspection results. This one-point inspection is an effective way to verify instrument quality. Its most important disadvantage is that it is limited to a single point; a second disadvantage is that, if an error is found, the IUI should be removed and replaced by a newly calibrated sensor, and then calibrated, and adjusted if possible, in a laboratory. It should also be noted that a field inspection provides additional valuable information, as it tests the whole instrument set-up in the field, including cabling. When performing field inspections, it is important to record the metadata describing the conditions at the time of the inspection, including all details of any changes made to the instrument set-up (see also the additional details provided in Part II, Chapter 1, § 1.7).
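A minimal sketch of such a one-point check follows; the paired-sampling scheme and the tolerance are assumptions chosen for illustration, not prescribed values.

    import statistics

    def one_point_inspection(iui_readings, standard_readings, tolerance):
        """Compare mean IUI and working-standard readings taken after stabilization."""
        difference = statistics.fmean(iui_readings) - statistics.fmean(standard_readings)
        return difference, abs(difference) <= tolerance

    # Ten paired one-minute temperature means (degC) recorded after equilibrium
    diff, ok = one_point_inspection(
        [21.42, 21.45, 21.44, 21.43, 21.46, 21.44, 21.45, 21.43, 21.44, 21.45],
        [21.30, 21.32, 21.31, 21.30, 21.33, 21.31, 21.32, 21.31, 21.31, 21.32],
        tolerance=0.2)
    print(f"IUI - standard = {diff:+.3f} degC -> "
          f"{'pass' if ok else 'remove for laboratory calibration'}")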

4.3.6                Inter-laboratory comparisons
By definition, an inter-laboratory comparison (ILC) is the organization, performance and evaluation of calibration results for the same instrument by two or more laboratories in accordance with predetermined conditions. A laboratory's participation in an ILC enables it to assess and demonstrate the reliability of its measurement data by comparison with the results from the other participating laboratories. Each accredited laboratory is expected to participate in at least one proficiency test/inter-laboratory comparison for each major sub-discipline of its scope of accreditation at least every four years, and participation in at least one proficiency test/inter-laboratory comparison is required before accreditation is granted. As stated in the terms of reference of the RICs (Part I, Chapter 1), a RIC must participate in, or organize, inter-laboratory comparisons of standard calibration instruments and methods.

4.4                    Intercomparisons of instruments
Intercomparisons of instruments and observing systems, together with agreed quality-control procedures, are essential for the establishment of compatible data sets. All intercomparisons should be planned and carried out carefully in order to maintain an adequate and uniform quality level of measurements of each meteorological variable. Many meteorological quantities cannot be directly compared with metrological standards and hence to absolute references — for example, visibility, cloud‑base height and precipitation. For such quantities, intercomparisons are of primary value.
Comparisons or evaluations of instruments and observing systems may be organized and carried out at the following levels:
(a)     International comparisons, in which participants from all interested countries may attend in response to a general invitation;
(b)     Regional intercomparisons, in which participants from countries of a certain region (for example, WMO Regions) may attend in response to a general invitation;
(c)      Multilateral and bilateral intercomparisons, in which participants from two or more countries may agree to attend without a general invitation;
(d)     National intercomparisons, within a country.
Because of the importance of international comparability of measurements, WMO, through one of its constituent bodies, from time to time arranges for international and regional comparisons of instruments. Such intercomparisons or evaluations of instruments and observing systems may be very lengthy and expensive. Rules have therefore been established so that coordination will be effective and assured. These rules are reproduced in Annexes 4.A and 4.B. They contain general guidelines and should, when necessary, be supplemented by specific working rules for each intercomparison (see the relevant chapters of this Guide).
Reports of particular WMO international comparisons are referenced in other chapters of this Guide (see, for instance, Part I, Chapters 3, 4, 9, 12, 14 and 15). Annex 4.C provides a list of the international comparisons which have been supported by the Commission for Instruments and Methods of Observation and which have been published in the WMO technical document series.
Reports of comparisons at any level should be made known and available to the meteorological community at large.

Annex 4.A
Procedures of WMO global and regional intercomparisons of instruments

1.            A WMO intercomparison of instruments and methods of observation shall be agreed upon by the WMO constituent body concerned so that it is recognized as a WMO intercomparison.
2.            The Executive Council will consider the approval of the intercomparison and its inclusion in the programme and budget of WMO.
3.            When there is an urgent need to carry out a specific intercomparison that was not considered at the session of a constituent body, the president of the relevant body may submit a corresponding proposal to the President of WMO for approval.
4.            In good time before each intercomparison, the Secretary‑General, in cooperation with the president of CIMO and possibly with presidents of other technical commissions or regional associations, or heads of programmes concerned, should make inquiries as to the willingness of one or more Members to act as a host country and as to the interest of Members in participating in the intercomparison.
5.            When at least one Member has agreed to act as host country and a reasonable number of Members have expressed their interest in participating, an international organizing committee should be established by the president of CIMO in consultation with the heads of the constituent bodies concerned, if appropriate.
6.            Before the intercomparison begins, the organizing committee should agree on its organization, for example, at least on the main objectives, place, date and duration of the intercomparison, conditions for participation, data acquisition, processing and analysis methodology, plans for the publication of results, intercomparison rules, and the responsibilities of the host(s) and the participants.
7.            The host should nominate a project leader who will be responsible for the proper conduct of the intercomparison, the data analysis, and the preparation of a final report of the intercomparison as agreed upon by the organizing committee. The project leader will be a member ex officio of the organizing committee.
8.            When the organizing committee has decided to carry out the intercomparison at sites in different host countries, each of these countries should designate a site manager. The responsibilities of the site managers and the overall project management will be specified by the organizing committee.
9.            The Secretary‑General is invited to announce the planned intercomparison to Members as soon as possible after the establishment of the organizing committee. The invitation should include information on the organization and rules of the intercomparison as agreed upon by the organizing committee. Participating Members should observe these rules.
10.          All further communication between the host(s) and the participants concerning organizational matters will be handled by the project leader and possibly by the site managers unless other arrangements are specified by the organizing committee.
11.          Meetings of the organizing committee during the period of the intercomparison could be arranged, if necessary.
12.          After completion of the intercomparison, the organizing committee shall discuss and approve the main results of the data analysis of the intercomparison and shall make proposals for the utilization of the results within the meteorological community.
13.          The final report of the intercomparison, prepared by the project leader and approved by the organizing committee, should be published in the WMO Instruments and Observing Methods Report series.

Annex 4.B
Guidelines for organizing WMO intercomparisons of instruments
1.                      Introduction
1.1          These guidelines are complementary to the procedures of WMO global and regional intercomparisons of meteorological instruments. They assume that an international organizing committee has been set up for the intercomparison and provide guidance to the organizing committee for its conduct. In particular, see Part I, Chapter 12, Annex 12.C.
1.2          However, since all intercomparisons differ to some extent from each other, these guidelines should be considered as a generalized checklist of tasks. They should be modified as situations so warrant, keeping in mind the fact that fairness and scientific validity should be the criteria that govern the conduct of WMO intercomparisons and evaluations.
1.3          Final reports of other WMO intercomparisons and the reports of meetings of organizing committees may serve as examples of the conduct of intercomparisons. These are available from the World Weather Watch Department of the WMO Secretariat.
2.                      Objectives of the intercomparison
The organizing committee should examine the achievements to be expected from the intercomparison and identify the particular problems that may be expected. It should prepare a clear and detailed statement of the main objectives of the intercomparison and agree on any criteria to be used in the evaluation of results. The organizing committee should also investigate how best to guarantee the success of the intercomparison, making use of the accumulated experience of former intercomparisons, as appropriate.
3.                      Place, date and duration
3.1          The host country should be requested by the Secretariat to provide the organizing committee with a description of the proposed intercomparison site and facilities (location(s), environmental and climatological conditions, major topographic features, and so forth). It should also nominate a project leader.
3.2          The organizing committee should examine the suitability of the proposed site and facilities, propose any necessary changes, and agree on the site and facilities to be used. A full site and environmental description should then be prepared by the project leader. The organizing committee, in consultation with the project leader, should decide on the date for the start and the duration of the intercomparison.
3.3          The project leader should propose a date by which the site and its facilities will be available for the installation of equipment and its connection to the data-acquisition system. The schedule should include a period of time to check and test equipment and to familiarize operators with operational and routine procedures.
4.                      Participation in the intercomparison
4.1          The organizing committee should consider technical and operational aspects, desirable features and preferences, restrictions, priorities, and descriptions of different instrument types for the intercomparison.
4.2          Normally, only instruments in operational use or instruments that are considered for operational use in the near future by Members should be admitted. It is the responsibility of the participating Members to calibrate their instruments against recognized standards before shipment and to provide appropriate calibration certificates. Participants may be requested to provide two identical instruments of each type in order to achieve more confidence in the data. However, this should not be a condition for participation.
4.3          The organizing committee should draft a detailed questionnaire in order to obtain the required information on each instrument proposed for the intercomparison. The project leader shall provide further details and complete this questionnaire as soon as possible. Participants will be requested to specify very clearly the hardware connections and software characteristics in their reply and to supply adequate documentation (a questionnaire checklist is available from the WMO Secretariat).
4.4          The chairperson of the organizing committee should then request:
(a)     The Secretary‑General to invite officially Members (who have expressed an interest) to participate in the intercomparison. The invitation shall include all necessary information on the rules of the intercomparison as prepared by the organizing committee and the project leader;
(b)     The project leader to handle all further contact with participants.
5.                      Data acquisition
5.1                    Equipment set-up
5.1.1       The organizing committee should evaluate a proposed layout of the instrument installation prepared by the project leader and agree on a layout of instruments for the intercomparison. Special attention should be paid to fair and proper siting and exposure of instruments, taking into account criteria and standards of WMO and other international organizations. The adopted siting and exposure criteria shall be documented.
5.1.2       Specific requests made by participants for equipment installation should be considered and approved, if acceptable, by the project leader on behalf of the organizing committee.
5.2                    Standards and references
The host country should make every effort to include at least one reference instrument in the intercomparison. The calibration of this instrument should be traceable to national or international standards. A description and specification of the standard should be provided to the organizing committee. If no recognized standard or reference exists for the variable(s) to be measured, the organizing committee should agree on a method to determine a reference for the intercomparison.
5.3                    Related observations and measurements
The organizing committee should agree on a list of meteorological and environmental variables that should be measured or observed at the intercomparison site during the whole intercomparison period. It should prepare a measuring programme for these and request the host country to execute this programme. The results of this programme should be recorded in a format suitable for the intercomparison analysis.
5.4                    Data-acquisition system
5.4.1       Normally the host country should provide the necessary data-acquisition system capable of recording the required analogue, pulse and digital (serial and parallel) signals from all participating instruments. A description and a block diagram of the full measuring chain should be provided by the host country to the organizing committee. The organizing committee, in consultation with the project leader, should decide whether analogue chart records and visual readings from displays will be accepted in the intercomparison for analysis purposes or only for checking the operation.
5.4.2       The data-acquisition system hardware and software should be well tested before the comparison is started and measures should be taken to prevent gaps in the data record during the intercomparison period.
5.5                    Data-acquisition methodology
The organizing committee should agree on appropriate data-acquisition procedures, such as frequency of measurement, data sampling, averaging, data reduction, data formats, real‑time quality control, and so on. When data reports have to be made by participants during the time of the intercomparison or when data are available as chart records or visual observations, the organizing committee should agree on the responsibility for checking these data, on the period within which the data should be submitted to the project leader, and on the formats and media that would allow storage of these data in the database of the host. When possible, direct comparisons should be made against the reference instrument.
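The following sketch shows one possible such scheme (the sampling rate, range check and completeness threshold are assumptions for illustration, not WMO requirements): 1 Hz samples are screened by a real-time range check and reduced to one-minute averages.

    import statistics

    VALID_RANGE = (-80.0, 60.0)   # plausible air temperatures, degC

    def one_minute_average(samples_1hz):
        """Average sixty 1 Hz samples, discarding those that fail the range check."""
        good = [s for s in samples_1hz if VALID_RANGE[0] <= s <= VALID_RANGE[1]]
        if len(good) < 40:        # require two thirds of the samples to be valid
            return None           # flag the minute as missing
        return statistics.fmean(good)

    print(one_minute_average([21.3 + 0.01 * i for i in range(60)]))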
5.6                    Schedule of the intercomparison
The organizing committee should agree on an outline of a time schedule for the intercomparison, including normal and specific tasks, and prepare a time chart. Details should be further worked out by the project leader and the project staff.
6.                      Data processing and analysis
6.1                    Database and data availability
6.1.1       All essential data of the intercomparison, including related meteorological and environmental data, should be stored in a database for further analysis under the supervision of the project leader. The organizing committee, in collaboration with the project leader, should propose a common format for all data, including those reported by participants during the intercomparison. The organizing committee should agree on near‑real‑time monitoring and quality-control checks to ensure a valid database.
6.1.2       After completion of the intercomparison, the host country should, on request, provide each participating Member with a data set from its submitted instrument(s). This set should also contain related meteorological, environmental and reference data.
6.2                    Data analysis
6.2.1       The organizing committee should propose a framework for data analysis and processing and for the presentation of results. It should agree on data conversion, calibration and correction algorithms, and prepare a list of terms, definitions, abbreviations and relationships (where these differ from commonly accepted and documented practice). It should elaborate and prepare a comprehensive description of statistical methods to be used that correspond to the intercomparison objectives.
6.2.2       Whenever a direct, time‑synchronized, one‑on‑one comparison would be inappropriate (for example, in the case of spatial separation of the instruments under test), methods of analysis based on statistical distributions should be considered. Where no reference instrument exists (as for cloud base, meteorological optical range, and so on), instruments should be compared against a relative reference selected from the instruments under test, based on median or modal values, with care being taken to exclude unrepresentative values from the selected subset of data.
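The sketch below illustrates one such relative-reference scheme: for each synchronized sample, the median of the instruments under test serves as the reference, and each instrument's mean deviation from it is accumulated. It is illustrative only; an operational analysis would first screen out unrepresentative values, as noted above.

    import statistics

    def mean_deviations_from_median(samples):
        """samples: one list of simultaneous readings per instant, one value per instrument."""
        n_instruments = len(samples[0])
        deviations = [[] for _ in range(n_instruments)]
        for reading_set in samples:
            reference = statistics.median(reading_set)  # relative reference for this instant
            for i, value in enumerate(reading_set):
                deviations[i].append(value - reference)
        return [statistics.fmean(d) for d in deviations]

    # Three ceilometers, four synchronized cloud-base samples (m)
    print(mean_deviations_from_median([
        [450, 470, 455],
        [510, 535, 515],
        [380, 400, 385],
        [620, 650, 630],
    ]))   # -> [-6.25, 17.5, 0.0]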
6.2.3       Whenever a second intercomparison is established some time after the first, or in a subsequent phase of an ongoing intercomparison, the methods of analysis and the presentation should include those used in the original study. This should not preclude the addition of new methods.
6.2.4       Normally the project leader should be responsible for the data-processing and analysis. The project leader should, as early as possible, verify the appropriateness of the selected analysis procedures and, as necessary, prepare interim reports for comment by the members of the organizing committee. Changes should be considered, as necessary, on the basis of these reviews.
6.2.5       After completion of the intercomparison, the organizing committee should review the results and analysis prepared by the project leader. It should pay special attention to recommendations for the utilization of the intercomparison results and to the content of the final report.
7.                      Final report of the intercomparison
7.1          The organizing committee should draft an outline of the final report and request the project leader to prepare a provisional report based on it.
7.2          The final report of the intercomparison should contain, for each instrument, a summary of key performance characteristics and operational factors. Statistical analysis results should be presented in tables and graphs, as appropriate. Time‑series plots should be considered for selected periods containing events of particular significance. The host country should be invited to prepare a chapter describing the database and facilities used for data-processing, analysis and storage.
7.3          The organizing committee should agree on the procedures to be followed for approval of the final report, such as:
(a)     The draft final report will be prepared by the project leader and submitted to all organizing committee members and, if appropriate, also to participating Members;
(b)     Comments and amendments should be sent back to the project leader within a specified time limit, with a copy to the chairperson of the organizing committee;
(c)      When there are only minor amendments proposed, the report can be completed by the project leader and sent to the WMO Secretariat for publication;
(d)     In the case of major amendments or if serious problems arise that cannot be resolved by correspondence, an additional meeting of the organizing committee should be considered (the president of CIMO should be informed of this situation immediately).
7.4          The organizing committee may agree that intermediate and final results may be presented only by the project leader and the project staff at technical conferences.
8.                      Responsibilities
8.1                    Responsibilities of participants
8.1.1       Participants shall be fully responsible for the transportation of all submitted equipment, all import and export arrangements, and any costs arising from these. Correct import/export procedures shall be followed to ensure that no delays are attributable to this process.
8.1.2       Participants shall generally install and remove any equipment under the supervision of the project leader, unless the host country has agreed to do this.
8.1.3       Each participant shall provide all necessary accessories, mounting hardware, signal and power cables and connectors (compatible with the standards of the host country), spare parts and consumables for its equipment. Participants requiring a special or non‑standard power supply shall provide their own converter or adapter. Participants shall provide all detailed instructions and manuals needed for installation, operation, calibration and routine maintenance.
8.2                    Host country support
8.2.1       The host country should provide, if asked, the necessary information to participating Members on temporary and permanent (in the case of consumables) import and export procedures. It should assist with the unpacking and installation of the participants’ equipment and provide rooms or cabinets to house equipment that requires protection from the weather and for the storage of spare parts, manuals, consumables, and so forth.
8.2.2       A reasonable amount of auxiliary equipment or structures, such as towers, shelters, bases or foundations, should be provided by the host country.
8.2.3       The necessary electrical power for all instruments shall be provided. Participants should be informed of the network voltage and frequency and their stability. The connection of instruments to the data-acquisition system and the power supply will be carried out in collaboration with the participants. The project leader should agree with each participant on the provision, by the participant or the host country, of power and signal cables of adequate length (and with appropriate connectors).
8.2.4       The host country should be responsible for obtaining legal authorization related to measurements in the atmosphere, such as the use of frequencies, the transmission of laser radiation, compliance with civil and aeronautical laws, and so forth. Each participant shall submit the necessary documents at the request of the project leader.
8.2.5       The host country may provide information on accommodation, travel, local transport, daily logistic support, and so forth.
8.3                    Host country servicing
8.3.1       Routine operator servicing by the host country will be performed only for long‑term intercomparisons for which absence of participants or their representatives can be justified.
8.3.2       When responsible for operator servicing, the host country should:
(a)     Provide normal operator servicing for each instrument, such as cleaning, chart changing, and routine adjustments as specified in the participant’s operating instructions;
(b)     Check each instrument every day of the intercomparison and inform the nominated contact person representing the participant immediately of any fault that cannot be corrected by routine maintenance;
(c)      Do its utmost to carry out routine calibration checks according to the participant’s specific instructions.
8.3.3       The project leader should maintain in a log regular records of the performance of all equipment participating in the intercomparison. This log should contain notes on everything at the site that may have an effect on the intercomparison, all events concerning participating equipment, and all events concerning equipment and facilities provided by the host country.
9.                      Rules during the intercomparison
9.1          The project leader shall exercise general control of the intercomparison on behalf of the organizing committee.
9.2          No changes to the equipment hardware or software shall be permitted without the concurrence of the project leader.
9.3          Minor repairs, such as the replacement of fuses, will be allowed with the concurrence of the project leader.
9.4          Calibration checks and equipment servicing by participants, which require specialist knowledge or specific equipment, will be permitted according to predefined procedures.
9.5          Any problems that arise concerning the participants’ equipment shall be addressed to the project leader.
9.6          The project leader may select a period during the intercomparison in which equipment will be operated with extended intervals between normal routine maintenance in order to assess its susceptibility to environmental conditions. The same extended intervals will be applied to all equipment.

 

Annex 4.C
Reports of international comparisons conducted under the auspices of the Commission for Instruments and Methods of Observation

Topic | Instruments and Observing Methods Report No. | Title of report

Sunshine duration | 16 | Radiation and Sunshine Duration Measurements: Comparison of Pyranometers and Electronic Sunshine Duration Recorders of RA VI (Budapest, Hungary, July–December 1984), G. Major, WMO/TD‑No. 146 (1986).
Radiation (a) | 16 | Radiation and Sunshine Duration Measurements: Comparison of Pyranometers and Electronic Sunshine Duration Recorders of RA VI (Budapest, Hungary, July–December 1984), G. Major, WMO/TD‑No. 146 (1986).
Precipitation | 17 | International Comparison of National Precipitation Gauges with a Reference Pit Gauge (1984), B. Sevruk and W.R. Hamon, WMO/TD‑No. 38 (1984).
Radiosondes | 28 | WMO International Radiosonde Comparison, Phase I (Beaufort Park, United Kingdom, 1984), A.H. Hooper, WMO/TD‑No. 174 (1986).
Radiosondes | 29 | WMO International Radiosonde Intercomparison, Phase II (Wallops Island, United States, 4 February–15 March 1985), F.J. Schmidlin, WMO/TD‑No. 312 (1988).
Radiosondes | 30 | WMO International Radiosonde Comparison (United Kingdom, 1984/United States, 1985), J. Nash and F.J. Schmidlin, WMO/TD‑No. 195 (1987).
Cloud-base height | 32 | WMO International Ceilometer Intercomparison (United Kingdom, 1986), D.W. Jones, M. Ouldridge and D.J. Painting, WMO/TD‑No. 217 (1988).
Humidity | 34 | WMO Assmann Aspiration Psychrometer Intercomparison (Potsdam, German Democratic Republic, 1987), D. Sonntag, WMO/TD‑No. 289 (1989).
Humidity | 38 | WMO International Hygrometer Intercomparison (Oslo, Norway, 1989), J. Skaar, K. Hegg, T. Moe and K. Smedstud, WMO/TD‑No. 316 (1989).
Radiosondes | 40 | WMO International Radiosonde Comparison, Phase III (Dzhambul, USSR, 1989), A. Ivanov, A. Kats, S. Kurnosenko, J. Nash and N. Zaitseva, WMO/TD‑No. 451 (1991).
Visibility | 41 | The First WMO Intercomparison of Visibility Measurements (United Kingdom, 1988/1989), D.J. Griggs, D.W. Jones, M. Ouldridge and W.R. Sparks, WMO/TD‑No. 401 (1990).
Radiation (a) | 43 | First WMO Regional Pyrheliometer Comparison of RA II and RA V (Tokyo, Japan, 23 January–4 February 1989), Y. Sano, WMO/TD‑No. 308 (1989).
Radiation (a) | 44 | First WMO Regional Pyrheliometer Comparison of RA IV (Ensenada, Mexico, 20–27 April 1989), I. Galindo, WMO/TD‑No. 345 (1989).
Pressure | 46 | The WMO Automatic Digital Barometer Intercomparison (De Bilt, Netherlands, 1989–1991), J.P. van der Meulen, WMO/TD‑No. 474 (1992).
Radiation (a) | 53 | Segunda Comparación de la OMM de Pirheliómetros Patrones Nacionales AR III [Second WMO Comparison of RA III National Standard Pyrheliometers] (Buenos Aires, Argentina, 25 November–13 December 1991), M. Ginzburg, WMO/TD‑No. 572 (1992).
Radiosondes | 59 | WMO International Radiosonde Comparison, Phase IV (Tsukuba, Japan, 15 February–12 March 1993), S. Yagi, A. Mita and N. Inoue, WMO/TD‑No. 742 (1996).
Wind | 62 | WMO Wind Instrument Intercomparison (Mont Aigoual, France, 1992–1993), P. Gregoire and G. Oualid, WMO/TD‑No. 859 (1997).
Radiation (a) | 64 | Tercera Comparación Regional de la OMM de Pirheliómetros Patrones Nacionales AR III – Informe Final [Third WMO Regional Comparison of RA III National Standard Pyrheliometers – Final Report] (Santiago, Chile, 24 February–7 March 1997), M.V. Muñoz, WMO/TD‑No. 861 (1997).
Precipitation | 67 | WMO Solid Precipitation Measurement Intercomparison – Final Report, B.E. Goodison, P.Y.T. Louie and D. Yang, WMO/TD‑No. 872 (1998).
Present weather | 73 | WMO Intercomparison of Present Weather Sensors/Systems – Final Report (Canada and France, 1993–1995), M. Leroy, C. Bellevaux and J.P. Jacob, WMO/TD‑No. 887 (1998).
Radiosondes | 76 | Executive Summary of the WMO Intercomparison of GPS Radiosondes (Alcântara, Maranhão, Brazil, 20 May–10 June 2001), R.B. da Silveira, G. Fisch, L.A.T. Machado, A.M. Dall’Antonia Jr., L.F. Sapucci, D. Fernandes and J. Nash, WMO/TD‑No. 1153 (2003).
Radiosondes | 83 | WMO Intercomparison of Radiosonde Systems (Vacoas, Mauritius, 2–25 February 2005), J. Nash, R. Smout, T. Oakley, B. Pathack and S. Kurnosenko, WMO/TD‑No. 1303 (2006).
Rainfall intensity | 84 | WMO Laboratory Intercomparison of Rainfall Intensity Gauges – Final Report (France, Netherlands and Italy, September 2004–September 2005), L. Lanza, L. Stagi, M. Leroy, C. Alexandropoulos and W. Wauben, WMO/TD‑No. 1304 (2006).
Humidity | 85 | WMO Radiosonde Humidity Sensor Intercomparison – Final Report of Phase I (Russian Federation, 1995–1997) and Phase II (USA, 8–26 September 1995); Phase I: A. Balagurov, A. Kats and N. Krestyannikova; Phase II: F. Schmidlin; WMO/TD‑No. 1305 (2006).
Radiosondes | 90 | WMO Intercomparison of GPS Radiosondes – Final Report (Alcântara, Brazil, 20 May–10 June 2001), R. da Silveira, G. Fisch, L.A. Machado, A.M. Dall’Antonia Jr., L.F. Sapucci, D. Fernandes, R. Marques and J. Nash, WMO/TD‑No. 1314 (2006).
Pyrheliometers | 91 | International Pyrheliometer Comparison – Final Report (Davos, Switzerland, 26 September–14 October 2005), W. Finsterle, WMO/TD‑No. 1320 (2006).
Pyrheliometers | 97 | Second WMO Regional Pyrheliometer Comparison of RA II (Tokyo, Japan, 22 January–2 February 2007), H. Sasaki, WMO/TD‑No. 1494 (2009).
Pyranometers | 98 | Sub-Regional Pyranometer Intercomparison of the RA VI Members from South-Eastern Europe (Split, Croatia, 22 July–6 August 2007), K. Premec, WMO/TD‑No. 1501 (2009).
Rainfall intensity | 99 | WMO Field Intercomparison of Rainfall Intensity Gauges (Vigna di Valle, Italy, October 2007–April 2009), E. Vuerich, C. Monesi, L. Lanza, L. Stagi and E. Lanzinger, WMO/TD‑No. 1504 (2009).
Thermometer screens and humidity | 106 | WMO Field Intercomparison of Thermometer Screens/Shields and Humidity Measuring Instruments – Final Report (Ghardaïa, Algeria, November 2008–October 2009), M. Lacombe, D. Bousri, M. Leroy and M. Mezred, WMO/TD‑No. 1579 (2011).
Radiosondes | 107 | WMO Intercomparison of High Quality Radiosonde Systems (Yangjiang, China, 12 July–3 August 2010), J. Nash, T. Oakley, H. Vömel and LI Wei, WMO/TD‑No. 1580 (2011).

_______
(a) The reports of the WMO International Pyrheliometer Intercomparisons, conducted by the World Radiation Centre at Davos (Switzerland) and carried out at five‑yearly intervals, are also distributed by WMO.

 

References and further reading

Hoehne, W.E., 1971: Standardizing Functional Tests. NOAA Technical Memorandum NWS T&EL‑12, United States Department of Commerce, Sterling, Virginia.
Hoehne, W.E., 1972: Standardizing functional tests. Preprints of the Second Symposium on Meteorological Observations and Instrumentation, American Meteorological Society, pp. 161–165.
Hoehne, W.E., 1977: Progress and Results of Functional Testing. NOAA Technical Memorandum NWS T&EL‑15, United States Department of Commerce, Sterling, Virginia.
International Electrotechnical Commission, 1990: Classification of Environmental Conditions. IEC 721.
International Organization for Standardization, 1989a: Sampling Procedures for Inspection by Attributes – Part I: Sampling plans indexed by acceptable quality level (AQL) for lot-by-lot inspection. ISO 2859‑1:1989.
International Organization for Standardization, 1989b: Sampling Procedures and Charts for Inspection by Variables for Percent Nonconforming. ISO 3951:1989.
Joint Committee for Guides in Metrology, 2012: International vocabulary of metrology – Basic and general concepts and associated terms (VIM). JCGM 200:2012.
National Weather Service, 1980: Natural Environmental Testing Criteria and Recommended Test Methodologies for a Proposed Standard for National Weather Service Equipment. United States Department of Commerce, Sterling, Virginia.
National Weather Service, 1984: NWS Standard Environmental Criteria and Test Procedures. United States Department of Commerce, Sterling, Virginia.
World Meteorological Organization/International Council of Scientific Unions, 1986: Revised Instruction Manual on Radiation Instruments and Measurements (C. Fröhlich and J. London, eds.). World Climate Research Programme Publications Series No. 7, WMO/TD‑No. 149, Geneva.
World Meteorological Organization, 1989: Analysis of Instrument Calibration Methods Used by Members (H. Doering). Instruments and Observing Methods Report No. 45, WMO/TD‑No. 310, Geneva.

     Recommended by the Commission for Instruments and Methods of Observation at its ninth session (1985) through Recommendation 19 (CIMO-IX).

     Recommended by the Joint WMO/IOC Technical Commission for Oceanography and Marine Meteorology at its third session (2009) through Recommendation 1 (JCOMM-III).

     The definition of repeatability given here is compiled from several definitions and is not copied directly from the VIM.

     Recommendations adopted by the Commission for Instruments and Methods of Observation at its eleventh session (1994), through the annex to Recommendation 14 (CIMO-XI) and Annex IX.

     When more than one site is involved, site managers shall be appointed, as required. Some tasks of the project leader, as outlined in this annex, shall be delegated to the site managers.
