Re: hc map question for Jim or Chris
Posted by Chris in Tampa on 8/5/2012, 1:23 am
While models can be excluded from appearing in the entire model system, I didn't include that feature for the best performing model map. The existing exclusion feature removes a model's data from everywhere in the system. The only model I do that for is the ZGFS; I forget why, but I think it didn't have any data. If a model is not excluded from the system itself, I think it should have the opportunity to be on the best performing model map. The map is simply the best performing models averaged over a short period of time. Jim's right that the XTRP will occasionally be the best. Most of the time it won't be. I've seen the five day XTRP ("A simple extrapolation using past 12-hr motion") take a storm to Seattle before, but since the best performing models are ranked on real time data specific to that storm, sometimes it will be right.

What follows is something I wrote up when I developed the best performing model map and was showing Jim how it works. The first part covers why the existing error tables in the model system do not match the best performing model map's error. The example at the end was for something active at the time.

----------------------------------------------------------------------------------------------

For the map, the average error table is not used. The average error
table is the average error over the entire course of the storm. Early
on, the two will match at times. Later in the storm, around 6 days in,
they will start to differ because the average error table will have
more cases than the model error calculation for the map. I'll explain
that in the chart below. To actually verify an average calculation for
the map you will need to look at between 2 and 4 single run position
error charts (making sure you can see every 6 hour interval of error).
You need to use the "Error Date" drop down box to go back between 1
and 3 runs to get the other values you will need to average. The main
thing to note is that nothing in the model error system will always
match your best performing models map. The best performing models map
is designed to pick up on how well models are doing recently, over a
small number of cases.
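
Roughly, the map's number for a model is just a mean over whatever
cases exist. A minimal Python sketch of that averaging (the numbers
are made up, and None stands in for a missing case):

def average_error(case_errors):
    # Keep only the cases that actually have error data.
    available = [e for e in case_errors if e is not None]
    if not available:
        return None  # no recent data at all; the model won't be drawn
    return sum(available) / len(available)

# Two 6 hour single run errors in nautical miles, one case missing:
print(average_error([18.4, None, 22.0]))  # -> 20.2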

And before I get to the chart, a note on models that are tied: there
is really no good way to eliminate a model. The SHIPS models are an
example. The track will be the same, but the intensity is different.
Which do you choose and which do you leave out? Sometimes the SHIPS
that stays over water is better than the SHIPS that interacts with
land, and I believe the LGEM is like that too sometimes. And I
believe, though I forget, that the SHIPS may use another model for
track, so you can have a lot of models with the same track. The only
reasonable way I could think to handle it was to not eliminate any
model. Intensity error is then used as a tie breaker: for each model,
take the absolute value of its single run intensity error for each
case and divide the total by the number of cases of error we are using
to get the average. The smallest value wins, the second smallest value
is ranked next, and so on. If there is still a tie at some point, the
system just picks one to go ahead of the other, which should be rare.
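
If it helps to see that spelled out, here's a little Python sketch of
the tie breaking (the model names and error values are invented for
illustration, not real output):

# Each entry: (model, average position error in nm,
#              average absolute intensity error in kt).
models = [
    ("SHIP", 85.0, 6.5),
    ("LGEM", 85.0, 5.0),  # same track error as SHIP, better intensity
    ("HWRF", 70.0, 8.0),
]

# Sort by position error first; intensity error breaks ties.
# Any tie left after that is broken by whatever order sorted() keeps.
for rank, (name, pos_err, int_err) in enumerate(
        sorted(models, key=lambda m: (m[1], m[2])), start=1):
    print(f"#{rank} {name}: {pos_err} nm (tiebreaker: {int_err} kt)")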

Any time there is no recent model error data available for a model,
the model will not appear on the map. However, the model is still
ranked, and another model will not appear in its place. It would be
hard to display this fact on the ArcGIS map; only someone who loads
the Google Earth file on the main best performing model map page will
be informed of it. Keep in mind that we can still rank a model's error
data even if the data is not recent, because the model forecast we are
comparing to the current position was from a previous run of the
model. The model forecast we actually want to display on the best
performing model map, however, is the latest available model data, not
a model forecast that is outdated.
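
In code terms the display rule might look like this (a rough Python
sketch; has_recent_forecast is a made-up stand-in for the real check):

def lines_to_draw(ranked, has_recent_forecast):
    # ranked is ordered best to worst. A model with no recent
    # forecast keeps its rank but is simply not drawn; nothing
    # gets promoted into its place.
    return [(rank, model)
            for rank, model in enumerate(ranked, start=1)
            if has_recent_forecast(model)]

recent = {"HWRF", "GFDL"}
print(lines_to_draw(["HWRF", "BAMM", "GFDL"], lambda m: m in recent))
# -> [(1, 'HWRF'), (3, 'GFDL')]  (BAMM stays ranked #2, isn't shown)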

Now for the chart...



0 hours since model data was released...
BAMM and LBAR are displayed


6 hours since model data was released...
BAMM and LBAR are displayed


12 hours since model data was released...
6 hour model error data displayed, averaged over 2 cases.
For example, 6 hour single run model error data (from hour 0 to hour
6) averaged with 6 hour single run model error data (from hour 6 to
hour 12). If there is no model error data available for one of those
cases, then there is just 1 case.


18 hours since model data was released...
12 hour model error data displayed, averaged over 2 cases.
For example, 12 hour single run model error data (from hour 0 to hour
12) averaged with 12 hour single run model error data (from hour 6 to
hour 18).


24 hours (1 day) since model data was released...
18 hour model error data displayed, averaged over 2 cases.


30 hours since model data was released...
24 hour model error data displayed, averaged over 2 cases.


36 hours since model data was released...
30 hour model error data displayed, averaged over 2 cases.


42 hours since model data was released...
36 hour model error data displayed, averaged over 2 cases.


48 hours (2 days) since model data was released...
42 hour model error data displayed, averaged over 2 cases.


54 hours since model data was released...
48 hour model error data displayed, averaged over 2 cases.


60 hours since model data was released...
54 hour model error data displayed, averaged over 2 cases.


66 hours since model data was released...
60 hour model error data displayed, averaged over 2 cases.


72 hours (3 days) since model data was released...
66 hour model error data displayed, averaged over 2 cases.


78 hours since model data was released...
72 hour model error data displayed, averaged over 2 cases.


84 hours since model data was released...
78 hour model error data displayed, averaged over 2 cases.


90 hours since model data was released...
84 hour model error data displayed, averaged over 2 cases.


96 hours (4 days) since model data was released...
90 hour model error data displayed, averaged over 2 cases.


102 hours since model data was released...
96 hour model error data displayed, averaged over 2 cases.


108 hours since model data was released...
102 hour model error data displayed, averaged over 2 cases.


114 hours since model data was released...
108 hour model error data displayed, averaged over 2 cases.


120 hours (5 days) since model data was released...
114 hour model error data displayed, averaged over 2 cases.


126 hours (5.25 days) since model data was released...
120 hour model error data displayed, averaged over 2 cases.


-------------
Change occurs...
-------------


132 hours (5.5 days) since model data was released...
120 hour model error data displayed, averaged over 3 cases.
For example,
120 hour single run model error data (from hour 0 to hour 120) averaged with
120 hour single run model error data (from hour 6 to hour 126) averaged with
120 hour single run model error data (from hour 12 to hour 132).
If there is no model error data available for one of those cases, then
there are just 2 cases. If there is no model error data available for
two of those cases, then there is just 1 case.


-------------
Change occurs...
-------------


138 hours (5.75 days) since model data was released...
120 hour model error data displayed, averaged over 4 cases, and this
continues for the rest of the storm.
For example,
120 hour single run model error data (from hour 0 to hour 120) averaged with
120 hour single run model error data (from hour 6 to hour 126) averaged with
120 hour single run model error data (from hour 12 to hour 132) averaged with
120 hour single run model error data (from hour 18 to hour 138).
If there is no model error data available for one of those cases, then
there are just 3 cases. If there is no model error data available for
two of those cases, then there are just 2 cases. If there is no model
error data available for three of those cases, then there is just 1
case.


144 hours (6 days) since model data was released...
120 hour model error data displayed, averaged over 4 cases.
Another example,
120 hour single run model error data (from hour 6 to hour 126) averaged with
120 hour single run model error data (from hour 12 to hour 132) averaged with
120 hour single run model error data (from hour 18 to hour 138) averaged with
120 hour single run model error data (from hour 24 to hour 144).


150 hours (6.25 days) since model data was released...
120 hour model error data displayed, averaged over 4 cases.
Another example,
120 hour single run model error data (from hour 12 to hour 132) averaged with
120 hour single run model error data (from hour 18 to hour 138) averaged with
120 hour single run model error data (from hour 24 to hour 144) averaged with
120 hour single run model error data (from hour 30 to hour 150).
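
To sum the chart up in code, here's my reconstruction in Python of how
the error interval and case windows follow from the hours since the
model data was released (a sketch of the pattern above, not the actual
site code; hours are assumed to come in 6 hour steps):

def error_window(hours_since_release):
    # Returns (error interval, list of (start, end) case windows
    # whose single run errors get averaged), newest case last.
    if hours_since_release < 12:
        return None, []  # no error data to rank yet
    interval = min(hours_since_release - 6, 120)
    # 2 cases through 126 hours, 3 at 132, 4 from 138 onward.
    if hours_since_release <= 126:
        n_cases = 2
    elif hours_since_release <= 132:
        n_cases = 3
    else:
        n_cases = 4
    return interval, [(hours_since_release - interval - 6 * k,
                       hours_since_release - 6 * k)
                      for k in reversed(range(n_cases))]

print(error_window(150))
# -> (120, [(12, 132), (18, 138), (24, 144), (30, 150)])

Cases with no model error data would then just be dropped from the
average, as described in the chart.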


------------------------------------------------------------------------------


And a description of how it works, on day 6.25 for example.

12 hours after model data was first released, a model ABCD has a 120
hour forecast that says the storm will be in a certain position. 120
hours later, at 132 hours, we compare the storm's position at 132
hours with the forecast model ABCD had. The difference, in nautical
miles, is the error. We do this for the other cases, up to four total
cases. We add up the error for each case and divide by the number of
cases to get the average error over this range of cases. But notice
that in the example for 6.25 days we left out some cases that may have
been available. What about these two:

120 hour single run model error data (from hour 0 to hour 120)
120 hour single run model error data (from hour 6 to hour 126)

The best performing model map wants the models that are performing
best most recently, so we don't include older cases. The average error
table, however, would include all of those older cases. That is why,
as you can see in the chart, at 6 days we start leaving out the older
cases, beginning with the:

120 hour single run model error data (from hour 0 to hour 120)
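
The error itself is just the great-circle distance between the
forecast point and the storm's actual position. A quick Python sketch,
with made-up coordinates:

import math

def position_error_nm(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in nautical miles.
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2)
         * math.sin((lon2 - lon1) / 2) ** 2)
    # Earth's mean radius is about 3440 nautical miles.
    return 2 * 3440.1 * math.asin(math.sqrt(a))

# Forecast position vs. where the storm actually ended up:
print(round(position_error_nm(25.0, -80.0, 26.5, -82.0), 1))  # ~140.7 nm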


It's a sort of complex little system. However, if you look at the
popup of a line on the model map, you will see some information that
will help you.

At the moment, the HWRF is the best model. When you click on the
line, which can be a little challenging to do sometimes, you get this
in the window:

------

Model:  HWRF

Description:    Hurricane Weather Research and Forecasting model
Type: Multi-layer regional dynamical
Timeliness: Late Cycle
Parameters forecast: Track and Intensity

Rank:   #1 out of 45 models with average position error for this interval

Initialized:    May 22, 2012 6:00 Z

Model Type:     Late Cycle (released 6 hours after initialization date)

Average position error at 72 hour error interval over last 2 available cases is:
105.9 nautical miles (122 miles | 196 kilometers)

------

That last part is important, and is different for each model. Some
models might have 1 case and others 2. Later in the storm, as you can
see from the chart above, you can have 3 cases and then finally up to
4 cases of model error data. The popup also tells you the hour error
interval that all the model error data ranked on the map was ranked
at. When we rank the 72 hour error, that means we are currently 78
hours into the storm. Since we want to smooth out somewhat the models
that guessed right once, we average model error data over two cases if
possible, so we are using the 72 hour model error data from hour 0 to
72 and from hour 6 to 78. Some models only come out every 12 hours, so
for those models we will only have 1 case. And some other models may
just happen to have only 1 case too.
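
As an aside, the mile and kilometer figures in the popup are fixed
conversions from the nautical mile value:

nm = 105.9  # average position error from the popup above
print(f"{nm} nautical miles "
      f"({nm * 1.15078:.0f} miles | {nm * 1.852:.0f} kilometers)")
# -> 105.9 nautical miles (122 miles | 196 kilometers)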

There were 45 models with 72 hour error data in the above ranking. If
a model did not have any 72 hour model error data considered in this
calculation, then it is not included in that number of 45. Some models
were not run early on and didn't have a 72 hour forecast, so they are
not included in that count.