
AMV intercomparison study

by Mary Forsythe

This study involves a number of AMV producers deriving AMVs from the same image sequence. The comparison will help to identify best practice in the derivation techniques, ultimately improving the quality and comparability of the approaches used at different centres.

CGMS-A39.31: The IWWG co-chairs and the rapporteur are requested to report to CGMS-40 on the second AMV intercomparison campaign. Deadline: CGMS-40.

IWW10.2: AMV producers to undertake a new AMV derivation intercomparison using the latest software and a new study period. Plan to repeat at intervals and report results at the IWWs.

IWW11.2: A second intercomparison project should be carried out and the results presented at IWW12 in 2014.

 

For further information about the datasets contact:

Manuel Carranza or Regis Borde

For further information about the analysis contact:

Javier Garcia-Pereda

For other enquiries contact the IWWG co-chairs.

 

The first phase of this study was completed in 2009.  The papers below provide further information:

Genkova et al., IWW9 and IWW10 papers (follow the links to the proceedings from the IWW web page)

CGMS-36 Report

At IWW10 several centres agreed to participate in a further study using the latest derivation software, a new study period and more in-depth analysis. It was agreed that this would be useful as a routine check, with results presented at the IWWs. A two-year interval was discussed, but every four years may be more manageable. Some other considerations:
•    The test case could be reused as part of routine testing of a derivation upgrade
•    We could provide the test case more widely and encourage research participation in algorithm development 
•    We could additionally use a simulated imagery case for an inter-comparison study

It was agreed at CGMS-39 and IWW11 to undertake a second AMV intercomparison study following the work plan outlined in CGMS-39 EUM-WP-28; the relevant details are also provided below. EUMETSAT prepared two datasets that were used by the participants in this study. Details on the datasets and how to access them are provided below.

 

 

Test period:

EUMETSAT has selected a triplet of Meteosat-9 images taken on 17 September 2012 at 12:00, 12:15 and 12:30 UTC.


Centres involved:

To be confirmed. Centres that expressed an interest at IWW10 include EUMETSAT, CIMSS/NESDIS, JMA, the NWC SAF and possibly CMA and BoM.

 

How to access the data:

The images, the corresponding forecast model data (profiles), and the SCE and CLA products are available from an anonymous FTP server: ftp://ftp.eumetsat.int/pub/EUM/out/MET/

Dataset 1: Full-disk MSG 10.8 image triplet, where each image in the triplet is the same image, artificially shifted by a known amount: 4 pixels west and 2 pixels south for image 2; 8 pixels west and 4 pixels south for image 3. The displacement between consecutive images is therefore the same for both image pairs.
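
As an illustration of the check each centre can perform for Dataset 1, here is a minimal Python sketch; the tracked displacements and the 0.5-pixel tolerance are hypothetical values, not part of the study specification:

  # Dataset 1 consistency check: compare tracked displacements with the
  # known artificial shift of 4 pixels west, 2 pixels south per image step.
  EXPECTED = (4.0, 2.0)   # (pixels west, pixels south) between consecutive images
  TOLERANCE = 0.5         # acceptable sub-pixel tracking error (assumption)

  tracked = [(3.8, 2.1), (4.2, 1.9)]  # hypothetical results for pairs 1->2 and 2->3

  for n, (west, south) in enumerate(tracked, start=1):
      error = ((west - EXPECTED[0]) ** 2 + (south - EXPECTED[1]) ** 2) ** 0.5
      status = "OK" if error <= TOLERANCE else "CHECK"
      print(f"pair {n}: {west:.1f} px west, {south:.1f} px south, "
            f"error = {error:.2f} px -> {status}")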

 

Dataset 2: Full-disk MSG 6.3, 7.2, 10.8, 12.0 and 13.4 image triplets, where each image in the triplet is a genuinely different image.

The following files are provided:

  • Word document describing the format of forecast files
  • Word document describing the format of the SceneTypeAndQuality files
  • Word document describing the format of the CLAIntm files

 

For each dataset there is a set of data comprising the following:

A tar and GZIPped file containing two forecast files:

- one forecast file for 17 September 2012, 12:00 UTC,

- one forecast file for 17 September 2012, 18:00 UTC.

 

A tar and GZIPped file containing three Scene Type and Quality data files:

- one SceneTypeAndQuality file for 17 September 2012, 12:00 UTC,

- one SceneTypeAndQuality file for 17 September 2012, 12:15 UTC,

- one SceneTypeAndQuality file for 17 September 2012, 12:30 UTC.

 

A tar and GZIPped file containing three CLA Intermediate Product files:

- one CLAIntm file for 17 September 2012, 12:00 UTC,

- one CLAIntm file for 17 September 2012, 12:15 UTC,

- one CLAIntm file for 17 September 2012, 12:30 UTC.

 

A tar and GZIPped file containing SEVIRI image files in the so-called native format:

- 17 September 2012, 12:00 UTC,

- 17 September 2012, 12:15 UTC,

- 17 September 2012, 12:30 UTC.

The MSG Level 1.5 Image Data Format Description document describes in full detail the contents of the SEVIRI image files provided for the 2nd Intercomparison Study.

 

Readers

Small programs that read the data have been put on the ftp server in the 'Readers' directory.

These programs decode:

- Forecast data

- Image data

- Scene Type and Quality data

- CLA Intermediate Product data 

All of them are written in IDL, but can easily be adapted to C or Fortran.
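
For centres preferring Python over IDL, the SEVIRI native-format images can also be read with the open-source Satpy package; this is an assumption about the participant's toolchain rather than part of the official study material, and the filename below is a placeholder:

  # Minimal sketch: reading one SEVIRI native-format image with Satpy
  # (pip install satpy). The filename is a placeholder.
  from satpy import Scene

  scn = Scene(reader="seviri_l1b_native",
              filenames=["MSG2-SEVI-MSG15-0100-NA-20120917120000.nat"])
  scn.load(["IR_108"])            # the 10.8 um channel used for tracking
  ir108 = scn["IR_108"].values    # brightness temperatures as a numpy array
  print(ir108.shape)              # SEVIRI full disk: 3712 x 3712 pixels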
 

Information for participating centres:

For dataset 1:

  1. Track using the image triplet provided. The displacement solution produced by each satellite operator can then be compared to the expected displacement.

For dataset 2:

  1. Track clouds using the 10.8 image triplet and assign heights to the AMVs using only the 10.8 channel (this enables a simpler comparison of target selection, feature tracking and QC). Satellite operators should use the ECMWF model and their standard configurations (target scene size, search scene size, etc.).
  2. As 1 above, but using a prescribed configuration (target scene size, search scene size, etc.) to ensure the same target scenes are used by all centres. This should enable the best possible apples-to-apples comparison of the target selection, feature tracking and quality control algorithms.
  3. As 2 above, but using the operational height assignment approach (CO2 slicing, H2O intercept, IR window). The intent of this test is to assess, as far as possible, the impact of the different height assignment methodologies used by the satellite operators.

 

Prescribed configuration for tests 2 and 3:

  • Target scene size: 24x24 pixels.
  • Search scene size: 80x80 pixels.
  • Grid step: 24 pixels.
  • The grid points used are of the form (i × grid_size, j × grid_size), where grid_size is 24 and i, j = 1, 2, 3, ...
  • Each target box is centred on a grid point. For instance, the first target box would therefore occupy pixels (13:36, 13:36).
  • The first target is in the bottom-right corner of the image; targets increase from right to left, then from bottom to top (see the sketch below).
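
As a minimal sketch of the grid arithmetic above (Python; the box extents follow directly from the prescribed grid definition):

  # Prescribed target grid: 24x24-pixel boxes centred on the grid points
  # (i * grid_size, j * grid_size), i, j = 1, 2, 3, ...
  GRID_SIZE = 24

  def target_box(i, j, grid=GRID_SIZE):
      """Pixel extent (first:last) of the target box centred on (i*grid, j*grid)."""
      cx, cy = i * grid, j * grid
      half = grid // 2
      # a 24-pixel box centred on pixel 24 spans pixels 13..36
      return (cx - half + 1, cx + half), (cy - half + 1, cy + half)

  print(target_box(1, 1))  # ((13, 36), (13, 36)), matching the example above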

 

For each case, the results should be provided in ASCII format to Javier Garcia-Pereda by 31 December 2012; he will be responsible for the analysis at the NWC SAF. The format is described below:

1)      integer*4      Target/AMV ID number
2)      real*4         AMV Latitude (deg)
3)      real*4         AMV Longitude (deg)
4)      integer*4      Target box size (pix)
5)      integer*4      Search box size (pix)
6)      real*4         AMV Speed (m/s)
7)      real*4         AMV Direction (deg) (0-360 degrees, 0 = due North, 90 = due East)
8)      real*4         AMV Height (hPa)
9)      integer*4      Low-level correction Method:
    0 = no correction, 1 = inversion height assignment, 2 = cloud base height assignment
10)     real*4         Model Guess Speed (m/s)
11)     real*4         Model Guess Direction (deg)
12)     real*4         0.8 µm Albedo at centre pixel of target box
13)     real*4         Maximum tracking correlation
14)     integer*4      Tracking Method:
    0 = Cross Correlation, 1 = Sum of squared differences (SSD), 2 = Other
15)     real*4         Height Error (hPa)
16)     integer*4      Height Assignment Method:
    0 = EBBT (i.e. IR channel method), 1 = CO2, 2 = STC (i.e. IR ratioing), 3 = CCC, 4 = other
17)     integer*4      Quality Indicator (0-100, forecast independent)
18)     integer*4      Quality Indicator (0-100, forecast dependent)
19)     integer*4      Horizontal displacement, 1st pair (in pixels) - dataset 1 only
20)     integer*4      Vertical displacement, 1st pair (in pixels) - dataset 1 only
21)     integer*4      Horizontal displacement, 2nd pair (in pixels) - dataset 1 only
22)     integer*4      Vertical displacement, 2nd pair (in pixels) - dataset 1 only
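
The exact column layout of the ASCII file is not prescribed above, so the following Python sketch simply writes the 22 fields in the listed order, one AMV per line and whitespace-separated; the layout and the example values are assumptions to be confirmed with the analysis centre:

  # Sketch: write one AMV record per line, fields in the order listed above.
  def write_amv_records(path, records):
      """records: iterable of 22-element tuples following the field list."""
      with open(path, "w") as f:
          for rec in records:
              assert len(rec) == 22, "each record must carry all 22 fields"
              f.write(" ".join(str(v) for v in rec) + "\n")

  # Dummy record (illustrative values only): ID, lat, lon, box sizes, speed,
  # direction, height, correction method, guess wind, albedo, correlation,
  # tracking method, height error, HA method, two QIs, four displacements.
  example = (1, -12.5, 30.2, 24, 80, 15.3, 270.0, 350.0, 0,
             14.8, 265.0, 0.12, 0.95, 0, 30.0, 1, 85, 82, 4, 2, 4, 2)
  write_amv_records("amv_results_dataset1.txt", [example])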
 

Analysis:

For each AMV producer and each experiment, the distributions of speed, direction, height level and quality index will be estimated carefully, together with the differences in AMV coverage, AMV counts, speeds, directions, height levels and quality indices. The collocation of AMV outputs from the different producers at similar locations will be used as a tool for detecting differences. Comparison of the AMV outputs with NWP model winds, and height assignment investigations using NWP model best-fit pressure and/or A-Train data, will also be used to verify the different AMV outputs. The study can include AMV analysis in defined geographic locations with specific atmospheric or surface conditions, where the various AMV algorithms are known to have difficulties retrieving a cloud displacement.
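
As one way of picturing the collocation step, a minimal Python sketch; the 150 km and 25 hPa thresholds are illustrative assumptions, not values prescribed by the study:

  # Sketch: pair AMVs from two producers that fall within assumed horizontal
  # and vertical collocation thresholds.
  import math

  def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
      p1, p2 = math.radians(lat1), math.radians(lat2)
      dlon = math.radians(lon2 - lon1)
      cos_c = (math.sin(p1) * math.sin(p2)
               + math.cos(p1) * math.cos(p2) * math.cos(dlon))
      return radius_km * math.acos(max(-1.0, min(1.0, cos_c)))

  def collocate(amvs_a, amvs_b, max_km=150.0, max_hpa=25.0):
      """amvs_*: lists of (lat, lon, pressure_hPa, speed, direction) tuples."""
      return [(a, b)
              for a in amvs_a for b in amvs_b
              if great_circle_km(a[0], a[1], b[0], b[1]) <= max_km
              and abs(a[2] - b[2]) <= max_hpa]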


Dataset 1:

Statistics: 

  1. Average and range for a number of variables, including AMV speed, direction and height

Plots: 

  1. AMVs plotted over imagery (colour-coded by height: low, middle, high)
  2. Histograms of retrieved speeds and directions

 

Dataset 2:  

Detailed analysis will be performed over various geographic locations capturing different atmospheric conditions (cloud type, optical depth and height; temperature; moisture) and surface conditions (land, ocean, surface temperature).
 

Statistics:

  1. Average and range for a number of variables, including AMV speed, direction and height
  2. Percentage of low, middle and high level winds
  3. Number of AMVs generated (QI 1-100, QI>50)
  4. Comparison statistics between derived AMVs and collocated analysis fields
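
A minimal Python sketch of the comparison statistics in point 4; speed bias and root-mean-square vector difference (RMSVD) are common AMV verification measures, although the study text does not prescribe them explicitly:

  # Sketch: speed bias and RMS vector difference between AMVs and collocated
  # model winds, with all winds given as (u, v) components in m/s.
  import math

  def amv_vs_model_stats(amv_uv, model_uv):
      """amv_uv, model_uv: equal-length lists of (u, v) pairs in m/s."""
      n = len(amv_uv)
      speed_bias = sum(math.hypot(ua, va) - math.hypot(um, vm)
                       for (ua, va), (um, vm) in zip(amv_uv, model_uv)) / n
      rmsvd = math.sqrt(sum((ua - um) ** 2 + (va - vm) ** 2
                            for (ua, va), (um, vm) in zip(amv_uv, model_uv)) / n)
      return speed_bias, rmsvd

  bias, rmsvd = amv_vs_model_stats([(10.0, 2.0), (5.0, -3.0)],
                                   [(9.0, 2.5), (6.0, -2.0)])
  print(f"speed bias = {bias:+.2f} m/s, RMSVD = {rmsvd:.2f} m/s")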

Plots:

  1. AMVs plotted over imagery (colour-coded by height: low, middle, high)
  2. Histograms of retrieved speeds, directions, heights, QI score

 

Additional ideas for dataset 2, test 3:

- Compare collocated winds from different centres: how well do they agree in speed, direction, pressure, QI values, etc.?

- In comparison to model winds, identify cases where different producers agree better/worse.

- Assess height assignment by comparison to model best-fit pressure, A-Train data, etc., and identify cases where different producers agree better or worse (a best-fit pressure sketch follows this list).

- Where possible, identify reasons for differences, e.g. by looking at the imagery, the height assignment methodology, etc., to identify patterns. Make recommendations for best practice.
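
As background for the best-fit pressure comparison mentioned above, a minimal Python sketch: the best-fit pressure is taken here as the model level at which the vector difference between the AMV and the model wind profile is smallest (operational implementations typically also interpolate between levels and apply constraints, which this sketch omits):

  # Sketch: model best-fit pressure for one AMV, i.e. the pressure level
  # minimising the vector difference to the model wind profile.
  import math

  def best_fit_pressure(amv_u, amv_v, profile):
      """profile: list of (pressure_hPa, u, v) model levels."""
      return min(profile,
                 key=lambda lev: math.hypot(amv_u - lev[1], amv_v - lev[2]))[0]

  profile = [(850.0, 8.0, 1.0), (700.0, 12.0, 3.0), (500.0, 20.0, 5.0)]
  print(best_fit_pressure(11.0, 2.5, profile))  # -> 700.0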

 

Results and conclusions:

See the final report linked under Status (conclusions on pages 151-153).

Ideas for further work and suggestions for future studies were put forward during discussion at IWW12.

 

 
