|Rita Glowienka-Hense, Sophie Stolzenberger, Christopher Frank, Andreas Hense|
|Meteorologisches Institut, Universität Bonn|
Version from March 16, 2015
This documentation gives a brief description of the VeCAP-Tool. VeCAP also provides a
stand-alone version of its evaluation package on the MiKlip server. The path to the directory
is /miklip/home/b/b305002/vecap. There you can find the latest VeCAP version.
If you encounter any bugs, please contact us!
With this tool it is possible to download the required data and, subsequently, to calculate the tests needed to assess predictability, such as ANOVA and analysis rank histograms (Talagrand diagrams), as well as a number of scores, skill scores and indices (Fig. 1). Section 2 describes how the hindcast and historical experiments can be obtained from the DKRZ archive, and how certain lead years or running averages can be selected. The statistical methods are explained in Section 3. Section 4 gives the technical details.
First, the original model data and verifying observations are obtained from the DKRZ archive. For this step it is important to select the desired baseline experiments (b0, b1, hist), model resolution (LR, MR) and subsystem (atmosphere or ocean). The analyses can be set up for horizontal fields at different vertical levels or for zonally averaged fields. The selected variables can also be detrended.
Furthermore, single lead years and multi-year averages can be selected. The scripts form time series from the model data that allow model predictions and analysis data to be compared over a series of prediction ensembles. Variables from the same lead year of consecutively started predictions are merged into a time series that can be compared with observations. For each start year an ensemble of predictions is provided: 3 runs for b0 experiments and 10 runs for b1 experiments. In the analyses, the predictions called run 1 are combined to form the run 1 ensemble member of the combined predictions. Combining different runs from different start-year ensembles would be just as meaningful, because each ensemble member of each start year is perturbed independently. Fig. 2 illustrates the data formation.
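The merging of the same lead year from consecutively started predictions can be sketched as follows. This is a minimal Python illustration with an assumed data layout and a hypothetical function name, not VeCAP code:

```python
# Sketch of the lead-year time-series construction described above.
# hindcasts[start_year][run] is assumed to be a list of annual values
# for that prediction (index 0 = lead year 1).

def lead_year_series(hindcasts, lead_year, nt=1):
    """For every run, build the time series obtained by taking the
    nt-year average starting at the chosen lead year from each
    consecutively started prediction."""
    start_years = sorted(hindcasts)
    nrun = len(hindcasts[start_years[0]])
    series = []
    for run in range(nrun):
        ts = []
        for sy in start_years:
            window = hindcasts[sy][run][lead_year - 1:lead_year - 1 + nt]
            ts.append(sum(window) / len(window))
        series.append(ts)
    return series
```

Each returned list is one ensemble member of the combined predictions, comparable point by point with the observational time series.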
The data are prepared for our statistical analyses and stored locally (output directory).
The analysis of variance measures the variance due to treatments relative to the total variance. This ratio is also called the coefficient of multiple determination. An unbiased estimator is used following von Storch and Zwiers (1999). The significance of this parameter is estimated under the assumption of an F-distribution. In the tool, the predicted lead years are considered as treatments. The data show potential predictability if the variability between the ensemble means of the predicted lead years makes up a major part of the total variability. For the bianova, the runs are treated as a second factor. Significant results would indicate marked differences between the models used for the different runs.
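The variance ratio behind this test can be sketched with a plain one-way ANOVA decomposition. This illustrates the idea only; it is not the unbiased estimator the tool uses, and the function name is hypothetical:

```python
# One-way ANOVA sketch: fraction of total variance explained by the
# differences between the yearly ensemble means (the "treatments").

def anova_ratio(groups):
    """groups: one list of ensemble-member values per predicted year.
    Returns SS_between / SS_total, i.e. the share of the total
    variability carried by the yearly ensemble means."""
    allv = [v for g in groups for v in g]
    grand = sum(allv) / len(allv)
    ss_total = sum((v - grand) ** 2 for v in allv)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2
                     for g in groups)
    return ss_between / ss_total
```

A ratio near 1 means the ensembles for different years are well separated (potential predictability); a ratio near 0 means the ensemble spread dominates.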
Under the title rms, the following indicators are calculated, whereby the RMS error is separated into two parts:
- Mean of the observations
- Standard deviation of the observations
- Mean of the model data
- Standard deviation of the model data
- Mean difference, model minus observation (the squared bias would lose the information on the sign). Confidence test: Student's t-test.
- Quotient of the standard deviations (ensemble mean) between the model and observational time series. Confidence test: Student's t-test.
- Correlation, which shows the optimal predictability of the observations by the model data. It can be achieved by using a regression model with a mean correction term and a regression coefficient.
- A mean square error skill score, MSESS = 1 - MSE_f / MSE_c, following Murphy and Epstein (1988). In their 1988 article the authors do not yet profit from ensemble calculations; their equations 1-4 are used, but here the averaging is over ensembles and not over grid points. The score compares the mean square difference between predicted values and observations with the mean square difference between the model climate and observations: MSESS = 1 for a perfect prediction; MSESS > 0 if the prediction errors are smaller than the error between model climate and observations; MSESS < 0 if the errors are larger when the model prediction is used instead of the climate. In the following, the subscripts f and c denote forecasted and climate values, respectively.
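The structure of this skill score can be sketched in a few lines. This is an illustration of MSESS = 1 - MSE_f / MSE_c with a hypothetical function name; the tool itself averages over ensembles:

```python
# Mean square error skill score sketch, following the standard
# 1 - MSE(forecast)/MSE(climatology) form used above.

def msess(forecasts, observations, climatology):
    """Positive values mean the prediction beats the (constant)
    model climate; 1 is a perfect prediction."""
    n = len(observations)
    mse_f = sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / n
    mse_c = sum((climatology - o) ** 2 for o in observations) / n
    return 1.0 - mse_f / mse_c
```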
The section titled index is a collection of SLP indices calculated from model and observational data. Similar to the grid-point analysis above, ANOVA and RMS statistics are performed; the results are shown as tables. The indices are selected following a list by Kaplan (2011). The intention is to analyse physical processes with respect to their potential predictability. The following indices are calculated:
- Southern Oscillation indices:
- Standardized SLP difference Tahiti minus Darwin
- Standardized SLP at Darwin
- Equatorial SOI
- NAO indices:
- Standardized SLP difference Lisbon minus Stykkisholmur
- Standardized SLP difference Gibraltar minus Reykjavik
- A PC-based index over the Atlantic sector (20N-80N).
Deviating from Hurrell's (1995) suggestion, the PCs are calculated from the correlation matrix and all months of the year are strung together. This reduces the noise for small data series and localizes the pattern.
- North Pacific Index (NPI): area-averaged SLP over the North Pacific sector
- Antarctic Oscillation (AAO) indices: two zonal-mean SLP differences between pairs of southern latitudes
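As an example of how such an index is formed, a standardized station-difference index like the Tahiti-minus-Darwin SOI can be sketched as follows. The standardization convention shown here (standardizing the difference series) is one common choice and may differ in detail from the tool; the function name is hypothetical:

```python
# Sketch of a standardized SLP-difference index (e.g. Tahiti - Darwin).
from statistics import mean, pstdev

def slp_diff_index(slp_a, slp_b):
    """Difference of two station/grid-point SLP series, standardized
    by the mean and (population) standard deviation of the difference."""
    diff = [a - b for a, b in zip(slp_a, slp_b)]
    m, s = mean(diff), pstdev(diff)
    return [(x - m) / s for x in diff]
```

The resulting index time series from model and observations are then fed into the same ANOVA and RMS statistics as the grid-point fields.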
In addition to the ANOVA, further necessary tests are the Talagrand diagrams (also known as analysis rank histograms). This is a graphical method to evaluate whether the spread of the ensemble is too small (U-shape of the histogram), too large (inverse U-shape) or just right (flat histogram). The PDF of a beta distribution can be fitted to the histograms, and from the fitted parameters of this distribution a beta-score can be calculated (Keller and Hense, 2011).
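The rank histogram itself is simple to construct: for every forecast, the verifying observation is ranked within the sorted ensemble. A minimal sketch (ties between members and observation ignored for simplicity; function name hypothetical):

```python
# Analysis rank histogram (Talagrand diagram) sketch.

def rank_histogram(ensembles, observations):
    """For each forecast, count how many ensemble members lie below
    the verifying observation; tally these ranks over all forecasts.
    An ensemble of m members yields m+1 rank bins; a flat histogram
    indicates a well calibrated ensemble spread."""
    nbins = len(ensembles[0]) + 1
    counts = [0] * nbins
    for ens, obs in zip(ensembles, observations):
        rank = sum(1 for member in ens if member < obs)
        counts[rank] += 1
    return counts
```

Observations that frequently fall outside the ensemble pile up in the outermost bins, producing the U-shape that signals an under-dispersive ensemble.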
The beta-scores (see Section 3.4) are calculated at every grid point and are displayed as maps and as a function of latitude. In this case the 5%- and 95%-quantiles are calculated with the bootstrap method.
The CRPS is defined as a distance between the predictive CDF and the CDF of the observation, which is a Heaviside function. Thus the CRPS is the integral of the Brier score over all possible threshold values (Hersbach, 2000). We use the analytical solution of the integral for a Gaussian predictive distribution after Gneiting et al. (2005). The CRPS is displayed at every grid point. A skill score is obtained when the CRPS is related to the climatological CRPS. These skill scores are displayed at every grid point and as a function of latitude. Additionally, the robustness of the CRPSS is assessed by bootstrapping (sample size 500). The p-value for a CRPSS of zero is estimated. If this p-value is smaller than 0.05 (larger than 0.95), we call the CRPSS robust in terms of its positive (negative) skill.
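The closed-form CRPS for a Gaussian predictive distribution from Gneiting et al. (2005) can be sketched directly; the function name is hypothetical, the formula is theirs:

```python
# Analytical CRPS for a Gaussian predictive distribution N(mu, sigma^2)
# and observation y (Gneiting et al., 2005):
#   CRPS = sigma * ( z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ),
# with z = (y - mu)/sigma, Phi/phi the standard normal CDF/PDF.
import math

def crps_gaussian(mu, sigma, y):
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf
                    - 1.0 / math.sqrt(math.pi))
```

The skill score then follows as CRPSS = 1 - CRPS / CRPS_climate, so that positive values indicate the prediction beats the climatological reference.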
Using the input parameters lona, lone, lata and late, a sector of the input data is selected. The model data for the different runs and the observational data are strung together into one data set; internally, the time axis is reset. The bias between model and observational data is corrected, and the time series are normalized by their common standard deviation. Then the eigenmodes of the correlation matrix are calculated for each month using cdo and stored in a scratch directory. In a further step, a time-dependent skill score is calculated from the mode amplitudes. The approach is the same as for the grid-point analysis described in the rms module, except for the time averaging. Again, positive values show that the model predictions are closer to the respective observation (analysis) than the model climate. This is especially helpful for observations which are equal to the climate mean: in that case the ensemble mean can be correct either because the model actually predicts this value, or because the model is no longer forced by external parameters or initial conditions, so that the average over a large ensemble is just the climate mean. In the latter case, however, the spread of the ensemble should be larger than for a real prediction. Tables show the squared correlation between the model and observed EOF amplitudes.
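The quantity reported in those tables, the squared correlation between two amplitude time series, can be sketched as follows (hypothetical function name, plain Pearson r squared):

```python
# Squared Pearson correlation between model and observed EOF amplitudes.

def squared_correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)
```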
Reliability diagrams describe the relation between forecast probabilities and observed relative frequencies (for further details see e.g. Bröcker (2009)). To calculate the observed relative frequencies, binary events have to be derived from the observations (the default setting is the exceedance of the median). The slope of the reliability diagram is classified into three categories: the forecasts are reliable, potentially reliable or unreliable. Additionally, the Brier skill score (reference forecast probability p_thr, default p_thr = 0.5) and the area under the ROC curve are calculated.
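The Brier skill score against the constant reference forecast p_thr can be sketched as follows. This is an illustration under the assumptions just stated (binary events already derived from the observations; function name hypothetical):

```python
# Brier skill score sketch: 1 - BS(forecast) / BS(constant p_thr).

def brier_skill_score(probs, events, p_thr=0.5):
    """probs: forecast probabilities from the ensemble (0..1);
    events: binary observed events (0/1).  The reference forecast
    always issues the constant probability p_thr (tool default 0.5)."""
    n = len(events)
    bs = sum((p - e) ** 2 for p, e in zip(probs, events)) / n
    bs_ref = sum((p_thr - e) ** 2 for e in events) / n
    return 1.0 - bs / bs_ref
```

A score of 1 means perfect probabilistic forecasts; 0 means no improvement over always forecasting p_thr.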
Tab. 1 shows the different settings used to get the data and prepare them for further analyses. The default settings are for b1-LR with 10 ensemble runs. The selected variable is the geopotential height (zg) at 850, 500 and 200 hPa for prediction years 2-5 (1982-2012). Z contains the trend. The ERA-Interim reanalysis is used for verification. The selection of other vertical levels is also possible; the unit is Pascal (Pa).
The data can be modified by choosing certain months and/or segments of the global maps (they will not be saved as extra files). The number of grid points of the original fields is required. If there are missing values, they also have to be filled in. Tab. 2 summarises these features.
Tab. 3 and 4 show the input section for the application of the statistical methods. Since all calculations are turned off by default, the user has to activate the tests of interest. For all tests a bias correction is performed (a default setting which can be turned off). It is possible to start several calculations at once.
In the first step, the evaluation system has to be loaded:
module load evaluation_system
In the second step, the VeCAP-Tool can be started:
analyse --tool vecap run_anova=yes run_rms=yes
In this case, the ANOVA and RMS statistics will be calculated for zg at 850, 500 and 200 hPa, the basis is b1-LR with 10 members, etc. (see default settings).
|b0orb1||choose baseline experiment, options: ’b0’, ’b1’|
|yz||vertical cross section or horizontal field; options: ’yes’, ’no’|
|hind_cast_or_hist||choose hindcasts or historical ensemble; options: ’hind_cast’, ’hist’|
|lmormr||choose model resolution; options: ’LR’, ’MR’|
|sphere||choose options: ’atmos’, ’ocean’|
|ja||choose start year of analysis, e.g. boundary condition for time period of ERA-Interim (1979-2012): min(ja)=1979+nt-1|
|je||choose end year of analysis, e.g. boundary condition for time period of ERA-Interim (1979-2012): max(je)=2012|
|leadyear||choose first predicted year to be selected; options:’1’,...,’10’|
|nt||number of years to be averaged starting with selected year|
|nrun||number of ensemble runs|
|sellev||select vertical levels (’yes’) or surface (’no’); options: ’yes’, ’no’|
|levels||choose levels (in Pa); example for vertical cross sections: 100000,92500,85000,70000,60000,50000,40000,30000,|
|detrend||choose ’yes’ or ’no’|
|observations||choose ’yes’ or ’no’ if observations are available|
|nx||number of gridpoints in x-direction; in case of yz=yes nx is forced to 1|
|ny||number of gridpoints in y-direction|
|lona||western longitude for selecting a field|
|lone||eastern longitude for selecting a field|
|lata||southern latitude for selecting a field|
|late||northern latitude for selecting a field|
|xmiss||value for missing data to fill|
|mona||start month, e.g. 1 = January|
|mone||end month, e.g. 12 = December; if mona = 13, the annual mean is also calculated|
|nbmon||number of months per year which are in the model data files and in the observational data file|
|xmin||minimum of values|
|xmax||maximum of values|
|run_anova||performs the analysis of variance (ANOVA)|
|run_yzanova||performs the analysis of variance (ANOVA) for height cross sections of zonally averaged variables (yz)|
|run_bianova||performs the bivariate analysis of variance|
|run_rms||performs the RMS statistics (mean square error skill score, correlations, ...)|
|run_yzrms||performs the RMS statistics (mean square error skill score, correlations, ...) for height cross sections of zonally averaged variables (yz)|
|run_slp_index||performs the SLP index tests; only possible for the variable sea level pressure (psl)|
|run_eof||performs EOF (empirical orthogonal function) analyses|
|run_yz_eof||performs EOF (empirical orthogonal function) analyses for height cross sections of zonally averaged variables (yz)|
|run_talagrand||performs Talagrand diagrams (i.e. analysis rank histograms)|
|bias_corr_talagrand||if Talagrand diagrams are performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|run_yztalagrand||performs Talagrand diagrams (i.e. analysis rank histograms) for height cross sections of zonally averaged variables (yz)|
|bias_corr_yztalagrand||if Talagrand diagrams (yz) are performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|run_betascore||performs beta-scores as a function of latitude and for every grid point|
|bias_corr_bs||if beta-scores are performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|run_yzbetascore||performs beta-scores for height cross sections of zonally averaged variables (yz)|
|bias_corr_yzbs||if beta-scores (yz) are performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|run_crpss||performs the continuous ranked probability score (CRPS) and its skill score (CRPSS) for every grid point and as a function of latitude|
|bias_corr_crps||if the CRPSS is performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|run_yzcrpss||performs the continuous ranked probability score (CRPS) and its skill score (CRPSS) for every grid point for height cross sections of zonally averaged variables (yz)|
|bias_corr_yzcrps||if the CRPSS (yz) is performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|run_ess||performs the ensemble spread score (ESS) for every grid point and as a function of latitude|
|bias_corr_ess||if the ESS is performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|run_yzess||performs the ensemble spread score (ESS) for every grid point for height cross sections of zonally averaged variables (yz)|
|bias_corr_yzess||if the ESS (yz) is performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|run_reliab||performs reliability classifications from the reliability diagram, the Brier skill score and the area under the ROC curve; analyses for every grid point|
|bias_corr_reliab||if reliab is performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|p_thr||if reliab is performed, a threshold value between 0 and 1 has to be defined to calculate forecast probabilities (from the ensemble) and binary events (from the observations)|
|run_yzreliab||performs reliability classifications from the reliability diagram, the Brier skill score and the area under the ROC curve for height cross sections of zonally averaged variables (yz); analyses for every grid point|
|bias_corr_yzreliab||if reliab (yz) is performed, the model data can be bias corrected beforehand; choose ’yes’ or ’no’|
|yz_p_thr||if reliab (yz) is performed, a threshold value between 0 and 1 has to be defined to calculate forecast probabilities (from the ensemble) and binary events (from the observations)|
Gneiting, T., Raftery, A. E., Westveld III, A. H., and Goldman, T.: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation, Monthly Weather Review, 133(5), 1098–1118, 2005.