
Quality Assurance and Control for Meteorological Observations: Ensuring Accuracy from Measurement to Dataset

1. Introduction

The scientific usefulness of atmospheric measurements depends critically on their quality. Without robust mechanisms to assure and control data quality, observations may become unreliable, inconsistent, or even misleading. For meteorological and climate research, where long-term trends and inter-comparability across regions and instruments are essential, even small measurement biases can have significant implications.

It is important to note that data quality is not an absolute property. Because different applications have different requirements for uncertainty, coverage, or resolution, it is ultimately the data consumer who determines whether a dataset is “fit for use.” In this sense, high-quality data is data that is intrinsically sound, appropriate for the intended task, clearly documented, and accessible to the user [@Sturtevant2021]. Quality assurance and control (QA/QC) is therefore fundamental at all stages of the measurement process: from the selection of observation sites and the design of measurement systems, to the calibration, operation, and maintenance of instruments, and finally to the handling, processing, and dissemination of data. A comprehensive quality management program not only minimizes problems but also quantifies the uncertainty and reliability of data along the entire generation chain.

This article provides an overview of the principles and practices of data quality in meteorological measurements. It introduces the core concepts of QA/QC, outlines how quality assurance applies to different stages of measurement, reviews methods of post-field quality control, and concludes with a concrete example of QA/QC in the Integrated Carbon Observation System (ICOS).

2. Core principles of data quality in atmospheric measurements

In the context of atmospheric sciences, three interrelated concepts are central: data quality, quality assurance (QA), and quality control (QC).

  • Data quality refers to the degree to which data can be considered “fit for use.” Since applications vary in their requirements, the assignment of quality is always context-dependent. High-quality data can be described as intrinsically reliable, appropriate for the intended application, transparently represented, and accessible to its consumers [@Sturtevant2021].
  • Quality assurance (QA) denotes the systematic set of planned and documented activities within a quality management system. Its purpose is to provide confidence that all influencing factors in the measurement chain are being controlled to meet predefined quality requirements. QA thus addresses the planning, design, and organizational procedures that affect the quality of the final data product [@Sturtevant2021].
  • Quality control (QC) refers to the operational techniques and checks that are applied to ensure data quality requirements are actually met. This includes the examination of the measurement data itself, but also extends to the continuous monitoring of processes and intermediate products throughout the data generation chain. When quality requirements are not met, QC measures may include flagging data as suspect, applying corrections, or, in some cases, removing invalid data altogether [@Sturtevant2021].

Several key dimensions of data quality are commonly considered alongside these definitions:

  • Accuracy: the closeness of a measurement to the true or reference value.
  • Precision: the repeatability of results under unchanged conditions.
  • Reproducibility: the ability to obtain consistent results across different systems, sites, or operators.
  • Consistency: the stability of measurements over time, even under changing conditions.
  • Comparability: the possibility of using data from different instruments or networks jointly.

To systematically address these dimensions, many frameworks adopt a data generation chain perspective. This concept emphasizes that quality is not limited to the instrument itself but extends across the entire lifecycle of data: site selection, system design, instrumentation, calibration, operation, maintenance, data handling, and processing. Each link in this chain can contribute to, or degrade, overall data quality.
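To make the first two of these dimensions concrete, the short sketch below compares repeated readings of a single sensor against a known reference value: the mean offset quantifies accuracy, while the scatter quantifies precision. All numbers are purely illustrative.

```python
import numpy as np

# Repeated temperature readings (°C) from one sensor against a
# reference value, e.g. during a calibration-chamber comparison.
# The numbers are illustrative only.
reference = 20.00
readings = np.array([20.12, 20.08, 20.15, 20.10, 20.11, 20.09])

bias = readings.mean() - reference   # accuracy: closeness to the reference
spread = readings.std(ddof=1)        # precision: repeatability of the sensor

print(f"bias   = {bias:+.3f} °C")    # systematic offset (~+0.11 °C here)
print(f"spread = {spread:.3f} °C")   # random scatter (~0.025 °C here)
```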

3. Quality management across the data generation chain

Quality management in atmospheric measurements begins well before the first observation is made and extends across the entire data generation chain. Each stage contributes to ensuring that the resulting data are reliable, reproducible, and fit for purpose. The following elements are central [@Sturtevant2021].

The foundation of quality management is the planning stage. Here, measurement objectives and quality requirements are defined, and the measurement system is designed accordingly. This includes deciding on the variables to be observed, acceptable levels of uncertainty, and the temporal and spatial resolution required. A clear definition of objectives ensures that the subsequent steps can be aligned with the intended use of the data.

The physical installation of the measurement site and system must be configured to represent the desired environment and minimize local disturbances. Site selection involves assessing representativeness, while configuration requires standardized setups to enable comparability between sites and networks. Guidelines from the WMO Guide to Instruments and Methods of Observation (CIMO Guide) provide widely accepted recommendations for site setup.

Continuous inspection and preventive maintenance are necessary to avoid instrument drift, malfunctions, or data loss. Standard operating procedures (SOPs), routine checks, and thorough documentation help to sustain long-term data quality. Training and competence of personnel are critical, as human error remains a common source of uncertainty.

Quality management also extends into the digital domain. Data acquisition, storage, and management systems must ensure secure handling, backup, and detailed metadata documentation. Processing steps such as filtering, averaging, and correction should be transparent and reproducible. Post-field quality control is particularly important: this involves systematic screening of datasets to identify outliers, detect sensor malfunctions, and apply quality flags or corrections where necessary.
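As one way of keeping processing transparent and reproducible, each applied step can be recorded together with its parameters alongside the data. The pattern below is a minimal sketch; all function and field names are hypothetical, not part of any particular framework.

```python
import numpy as np

def apply_step(values, history, name, func, **params):
    """Apply one processing step and record it, with its parameters,
    in a human-readable processing history (illustrative pattern only)."""
    history.append({"step": name, "params": params})
    return func(values, **params)

def block_average(values, window):
    """Average consecutive blocks of `window` samples."""
    n = len(values) // window * window
    return values[:n].reshape(-1, window).mean(axis=1)

raw = np.random.default_rng(0).normal(15.0, 0.3, size=600)  # synthetic 1 Hz data
history = []
avg = apply_step(raw, history, "block_average", block_average, window=60)

print(history)  # [{'step': 'block_average', 'params': {'window': 60}}]
```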

Finally, quality management requires continuous evaluation of system performance. Regular audits, performance assessments, and inter-comparisons between sites or laboratories help to identify weaknesses and ensure compliance with established quality requirements. These activities provide confidence that the entire measurement chain is functioning as intended and that the resulting data meet the needs of the scientific community.

4. Methods of quality control

While quality assurance ensures that the measurement system is designed and operated to meet defined requirements, quality control (QC) provides the operational checks and techniques that verify whether those requirements are actually fulfilled. QC is applied both during and after data acquisition, and it encompasses not only the evaluation of the final dataset but also the monitoring of intermediate processes and products along the data generation chain [@Sturtevant2021].

Depending on the type of observation, different QC methods are applied:

4.1 Instrumented in-situ observations

In-situ measurements, such as temperature, humidity, wind, or greenhouse gas concentrations, are subject to several common QC procedures:

  • Range checks: Verifying that measurements fall within physically plausible limits (e.g. rejecting negative values for relative humidity).
  • Step and spike detection: Identifying abrupt, unrealistic changes caused by sensor errors or electronic noise.
  • Internal consistency checks: Comparing related variables for coherence (e.g. dew point cannot exceed air temperature).
  • Redundancy and cross-checks: Using parallel sensors or co-located instruments to validate measurements.
  • Statistical tests: Applying methods such as standard deviation checks, variance thresholds, or correlation analyses to identify anomalous data.

When QC identifies suspect data, corrective actions may include applying correction factors, flagging values with metadata descriptors, or, in severe cases, excluding invalid records.
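A minimal sketch of the first three checks, using illustrative thresholds rather than any network's actual requirements, might look as follows:

```python
import numpy as np
import pandas as pd

# Illustrative QC pass over a small in-situ record; thresholds and
# values are examples, not network-specific requirements.
df = pd.DataFrame({
    "t_air": [12.1, 12.2, 25.0, 12.3, 12.4],   # °C, one obvious spike
    "t_dew": [10.0, 10.1, 10.2, 13.0, 10.3],   # °C, one inconsistency
    "rh":    [85.0, 86.0, 84.0, -5.0, 85.5],   # %, one impossible value
})

flags = pd.DataFrame(False, index=df.index,
                     columns=["range", "spike", "consistency"])

# Range check: relative humidity must lie within 0-100 %.
flags["range"] = ~df["rh"].between(0.0, 100.0)

# Step/spike check: flag jumps in air temperature larger than 5 °C per step.
flags["spike"] = df["t_air"].diff().abs() > 5.0

# Internal consistency: dew point cannot exceed air temperature.
flags["consistency"] = df["t_dew"] > df["t_air"]

df["qc_fail"] = flags.any(axis=1)
print(df.assign(**flags))
```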

4.2 Visual observations

Visual observations (e.g. cloud type, visibility, snow cover) remain important in meteorology, despite the increasing reliance on automated systems. QC methods for such data include:

  • Observer training and certification: Ensuring that personnel apply standardized definitions and protocols.
  • Consistency checks: Comparing repeated or overlapping observations to detect discrepancies.
  • Inter-observer comparisons: Cross-validation between observers or between manual and automated systems.
  • Harmonization protocols: Applying international guidelines (e.g. WMO codes) to standardize reporting practices.

These measures aim to reduce subjectivity and improve comparability across observers and sites.
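Where categorical reports from two observers overlap, agreement can be quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below uses invented cloud-type reports; real reports would follow WMO code tables.

```python
from sklearn.metrics import cohen_kappa_score

# Cloud-type codes reported by two observers for the same ten scenes
# (illustrative categories only).
observer_a = ["Cu", "St", "Ci", "Cu", "Cb", "St", "Ci", "Cu", "St", "Cb"]
observer_b = ["Cu", "St", "Ci", "St", "Cb", "St", "Ci", "Cu", "Cu", "Cb"]

# Cohen's kappa measures agreement beyond what chance alone would give:
# 1.0 = perfect agreement, 0.0 = chance-level agreement.
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"inter-observer kappa = {kappa:.2f}")
```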

4.3 Remote sensing observations

Remote sensing systems, such as satellites, radar, and lidar, introduce specific QC challenges due to their reliance on indirect measurements and retrieval algorithms. Common QC approaches include:

  • Radiometric calibration: Correcting for sensor drift and instrument degradation over time.
  • Geometric calibration: Ensuring accurate spatial registration of observed features.
  • Cross-validation with in-situ data: Using ground-based reference measurements to evaluate and adjust remotely sensed products.
  • Algorithm performance checks: Testing retrieval algorithms against simulated or reference datasets to assess uncertainties.
  • Quality flagging: Assigning metadata to each observation (e.g. cloud contamination in satellite products) to inform users about confidence levels.

QC in remote sensing is particularly important because derived products are often used over long timescales and large regions, where small biases can accumulate into significant systematic errors.
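Quality flagging in satellite products is often implemented as per-pixel bit fields. The sketch below decodes a hypothetical 8-bit flag; the bit layout is invented for illustration, and real products document their own bit assignments in their user guides.

```python
import numpy as np

# Hypothetical quality flag for a satellite retrieval:
# bit 0 = cloud contamination, bit 1 = sensor saturation,
# bit 2 = retrieval did not converge.
qa = np.array([0b000, 0b001, 0b100, 0b011, 0b000], dtype=np.uint8)

cloudy    = (qa & 0b001) != 0
saturated = (qa & 0b010) != 0
failed    = (qa & 0b100) != 0

usable = ~(cloudy | saturated | failed)
print(usable)   # [ True False False False  True]
```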

4.4 Integrated QC strategies

In practice, QC is rarely limited to a single technique. Instead, measurement networks often adopt integrated QC strategies that combine automated screening, expert review, and standardized flagging systems. This layered approach provides both immediate detection of gross errors and more detailed post-field quality assessment. By systematically applying these methods across measurement types, QC ensures that atmospheric datasets meet the quality requirements defined at the planning stage, remain traceable throughout their processing, and are transparent to the end user.
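One common building block of such layered strategies is an ordinal flag that records the worst outcome of all individual checks. The three-level scheme below (0 = good, 1 = suspect, 2 = bad) is illustrative, not a standard.

```python
def aggregate_flags(*flags):
    """Return the worst (highest) flag assigned by any individual check."""
    return max(flags)

# Example: an automated spike check raised a 'suspect' flag,
# while the range check and expert review found nothing.
range_flag, spike_flag, expert_flag = 0, 1, 0
print(aggregate_flags(range_flag, spike_flag, expert_flag))  # 1 -> suspect
```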

5. Quality management frameworks and best practices

Quality management in atmospheric sciences does not occur in isolation. Instead, it builds on internationally recognized frameworks, community guidelines, and evolving best practices that provide both methodological rigor and comparability across networks. These frameworks help ensure that measurements are not only internally consistent but also interoperable at regional and global scales.

A Quality Management Program (QMP) provides the overarching structure for ensuring and documenting data quality. Such programs typically define:

  • Quality objectives (e.g. acceptable uncertainty levels, temporal resolution, or data availability).
  • Standard operating procedures (SOPs) for calibration, operation, and data handling.
  • Documentation protocols, including metadata standards and version control.
  • Continuous evaluation mechanisms such as audits and inter-comparisons.

QMPs are designed to cover the full data generation chain, ensuring that all potential sources of uncertainty are addressed systematically.

Several international organizations provide widely used standards for atmospheric measurements:

  • World Meteorological Organization (WMO): The Guide to Instruments and Methods of Observation (CIMO Guide) is the most comprehensive reference, covering site selection, instrument calibration, operation, and data processing. It sets global standards that national meteorological services and research infrastructures follow.
  • International Organization for Standardization (ISO): ISO standards provide general frameworks for quality management (e.g. ISO 9001) and calibration procedures traceable to the International System of Units (SI).
  • European Union and research infrastructures: Large-scale infrastructures such as ICOS or ACTRIS have developed detailed QA/QC protocols, often building upon WMO and ISO guidance while tailoring them to domain-specific needs (e.g. greenhouse gas fluxes, aerosols, or atmospheric composition).

With the increasing importance of open science, the FAIR principles (Findable, Accessible, Interoperable, Reusable) have become a central element of quality management. Ensuring that data are well-documented with standardized metadata, openly available where possible, and interoperable with other datasets enhances their long-term scientific value. Importantly, FAIR is not just about accessibility—it also requires that data are of verifiable quality, which links it directly to QA/QC practices.
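In practice, a first step toward FAIR data is embedding standardized, machine-readable metadata in the data files themselves. The sketch below uses xarray with CF-convention-style attribute names; the variable, values, and file name are illustrative.

```python
import numpy as np
import xarray as xr

# Attaching standardized metadata to the data itself makes the
# resulting file self-describing and interoperable. Attribute names
# follow the CF conventions; the values here are illustrative.
temp = xr.DataArray(
    np.array([12.1, 12.2, 12.3]),
    name="ta",
    dims=["time"],
    coords={"time": np.array(
        ["2024-01-01T00:00", "2024-01-01T00:30", "2024-01-01T01:00"],
        dtype="datetime64[m]")},
    attrs={
        "standard_name": "air_temperature",
        "units": "degC",
        "long_name": "2 m air temperature",
        "comment": "Level 2, quality controlled (illustrative)",
    },
)
temp.to_netcdf("air_temperature.nc")  # requires a netCDF backend
```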

Operational networks and research infrastructures have established a number of best practices for maintaining quality:

  • Centralized calibration laboratories to ensure traceability (e.g. ICOS Atmospheric Thematic Centre).
  • Standardized instruments and protocols across all sites to maximize comparability.
  • Tiered data review systems that combine automated screening, expert review, and periodic audits.
  • Transparent flagging systems that inform users about potential data issues without discarding valuable information.
  • Capacity building through training workshops, manuals, and intercomparison exercises for operators.

By adhering to these frameworks and best practices, atmospheric measurement systems achieve the dual goal of producing high-quality data for immediate scientific use and ensuring the long-term integrity and interoperability of climate records.

6. Example: QA/QC in ICOS

The Integrated Carbon Observation System (ICOS) is a European research infrastructure that provides high-quality, long-term observations of greenhouse gas fluxes in the atmosphere, ecosystems, and oceans. ICOS is an excellent example of how systematic quality management and QC practices are implemented in operational measurement networks.

ICOS data and associated metadata adhere to the FAIR principles. This ensures that data users can understand the context, limitations, and processing of the datasets both before and after downloading. By embedding metadata with each dataset, ICOS provides transparency regarding measurement methods, instrument calibration, data processing, and quality flags. These practices allow scientists to interpret, combine, and reuse data confidently.

To ensure consistency across sites and observables, ICOS follows international standards for data and metadata management:

  • INSPIRE standards (Infrastructure for Spatial Information in Europe) provide a harmonized framework for spatial data interoperability.
  • The ISO 19115 metadata standard underpins the INSPIRE framework, defining essential metadata elements such as spatial coverage, temporal resolution, quality indicators, and processing steps.

By aligning with these standards, ICOS ensures that its data can be integrated with other datasets, supports cross-site comparability, and meets international quality expectations.

ICOS organizes its data and data products into four levels, representing successive stages of processing:

  • Level 0 – Raw data: Direct observations from instruments.
  • Level 1 – Intermediate data: Basic processed observational data.
  • Level 2 – Final quality-controlled data: Observations that have undergone full QA/QC procedures.
  • Level 3 – Elaborated products: Derived or non-ICOS data contributed to the ICOS data portal.

The ICOS Thematic Centres manage and process observations following standardized procedures. At the Centres, data are checked, quality controlled, and, if necessary, gap-filled. Some data types, such as flux measurements, require extensive processing before they can be made available. Finally, processed data are aggregated into half-hourly or hourly averages to facilitate consistent analysis and use.
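The final averaging step can be illustrated generically; the sketch below is not ICOS's actual processing code, only an example of resampling high-frequency observations to half-hourly means with pandas.

```python
import numpy as np
import pandas as pd

# One hour of synthetic 1 Hz CO2 mole-fraction data (illustrative).
index = pd.date_range("2024-06-01 00:00", periods=3600, freq="s")
co2 = pd.Series(420.0 + np.random.default_rng(1).normal(0, 0.5, 3600),
                index=index, name="co2_ppm")

# Aggregate to half-hourly means for consistent analysis and use.
half_hourly = co2.resample("30min").mean()
print(half_hourly)
```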

The ICOS network implements QA/QC across the entire data chain:

  1. Standardized instrumentation at all sites to reduce variability and enhance comparability.
  2. Centralized calibration laboratories to maintain traceability to reference standards.
  3. Automated and manual QC procedures for both in-situ and derived data products, including flagging, correction, and validation against independent observations.
  4. Regular audits and performance monitoring to ensure compliance with quality requirements over time.

Together, these measures ensure that ICOS data are scientifically reliable, comparable across sites, and suitable for a wide range of climate and ecosystem studies.

7. Outlook and Challenges

The quality of atmospheric measurements remains a central concern in climate and environmental research. While established networks like ICOS demonstrate robust QA/QC practices, several challenges and opportunities are emerging in the context of modern observational systems.

The expansion of automated and high-frequency measurement systems, as well as the proliferation of remote sensing platforms, has led to a dramatic increase in data volume and complexity. Handling large datasets requires scalable data management solutions, automated quality control algorithms, and efficient storage and retrieval systems. Big data approaches, including machine learning, are increasingly being explored to detect anomalies, fill gaps, and enhance overall data quality.
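As a sketch of this direction, an off-the-shelf anomaly detector such as scikit-learn's IsolationForest can screen a univariate series for gross outliers. The data and parameters below are invented, and any operational use would require validation against expert-flagged records.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic CO2-like series with a few injected gross outliers.
rng = np.random.default_rng(2)
values = rng.normal(400.0, 1.0, size=1000)
values[::250] += 25.0                      # inject gross outliers

# Contamination sets the expected fraction of anomalies (assumed here).
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(values.reshape(-1, 1))  # -1 = anomaly, 1 = normal

print(np.where(labels == -1)[0])           # indices of suspected anomalies
```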

Modern atmospheric research often relies on the integration of in-situ, visual, and remote sensing data. Differences in temporal resolution, measurement units, and spatial coverage present challenges for quality control and harmonization. Ensuring consistent metadata, applying standardized processing pipelines, and following international frameworks (e.g., ISO, WMO, FAIR) are crucial for enabling interoperability and comparability across datasets.

As networks move toward near-real-time monitoring, automated QC procedures are becoming essential. Automated flagging, outlier detection, and diagnostic tools can rapidly identify potential instrument malfunctions or data inconsistencies. However, fully automated systems must be carefully validated to avoid propagating errors, and human oversight remains an important component of QA/QC.

High-quality, long-term observational records are critical for climate research, model validation, and policy support. Maintaining continuity over decades requires careful instrument maintenance, site management, and adherence to evolving standards. Changes in measurement techniques or instrumentation must be carefully documented and, where necessary, harmonized with historical data to avoid artificial biases.

Sustaining data quality requires well-trained personnel, clear documentation, and active knowledge transfer. Training programs, intercomparison exercises, and collaborative networks help ensure that best practices are shared and consistently applied. International collaboration is especially important for global monitoring initiatives and for the adoption of widely recognized QA/QC standards.

Looking forward, the integration of advanced analytics, standardized metadata, and FAIR-compliant infrastructures offers opportunities to improve both the accessibility and reliability of atmospheric datasets. Emerging technologies such as sensor networks, autonomous platforms, and cloud-based processing may further enhance real-time QA/QC capabilities. Nevertheless, the fundamental principles of systematic planning, rigorous calibration, continuous monitoring, and transparent documentation remain central to the scientific usefulness of atmospheric measurements.

Further Reading

  1. [@wmo-cimo-guide]
  2. [@iso9001-2015]
  3. [@iso19115-2003]
  4. [@icos-data-levels-quality]
  5. [@ec-inspire-2007]
  6. [@wilkinson2016fair]

References