Very good update. That is why I wrote the image tutorial yesterday: I was putting off revisiting the model error product description page, which I need to keep improving on my site over time. The error trend part of the model error feature is still too unstable to explain. I changed how the intensity error trend text works for the "Single Run" error type, and I may make more changes to that and other parts of the trend feature, so it's best not to use the error trend or the error trend heat map for now. This is a good view, the average of all runs to date:

http://www.tropicalatlantic.com/models/data.cgi?basin=al&year=2010&storm=06&display=modelerror&type=table&run=latest&errortype=average&interval=6&showbearing=&positionunit=nm&intensityunit=kts&showcases=&trend=&showtrendtext=&heatmap=1&showzerohour=1&hour=120

Once there are a lot of hours of data, the interval can be adjusted to every 12 hours or even every 24. You can see the GFDL is not doing that great on intensity so far. The ensemble members are all misleading when it comes to intensity, since they don't try to forecast the peak intensity and are not initialized with the current intensity (at least not when the storm is strong, I have noticed; perhaps the global models just don't have the resolution, I'm not sure of the actual reason). That is something I might try to do something about over time, such as an option to exclude models that are very far off on the initial conditions.

The other view is the "Single Run" error type, and I'm not sure what a good name for it would be. How model error is actually calculated is confusing once you consider early- and late-cycle models, but it is best seen on the chart on the product description page, which is a very rough draft:

http://www.tropicalatlantic.com/models/data.cgi?page=modelerror

(I would only look at the chart; the other text really needs many more drafts, and it bores even me to read it.)
The intensity on the "Single Run" error type lets you see whether the model thought the storm would be stronger or weaker. If the value is positive, the model thought the storm would be that much stronger, in the unit you chose (like knots), than it actually was. If it is negative, the model thought the storm would be that much weaker than it actually was. For the average error type, however, all values are positive because the absolute value is used. If a storm just recently formed and there are two single-run values available for a given model at a particular hour, say -5 for one and 5 for the other, the average I go with is 5. I don't know of a good way to express the negative sign in the average other than to simply use the absolute value: the model was 5 knots off the first time and 5 knots off the second time, so the average should be 5. The average error type is simply the combination of every "Single Run" error type.

I'm not sure which I like best. The average has much more data behind it, so you get a good idea over the life of the storm, but the single-run data tells you what is happening right now, without factoring in the early life of the storm when conditions might have been different. It is good to know how the model is doing in just the latest available data, but if the model has a bad few runs, the single-run type can give you a false impression of how good that model is. It's complex. I can see why this site, the only other one I know of that does model error, doesn't explain it. I'm not even sure what they consider when calculating the error trend on that site. A model could be doing better at some hours and worse at others, so how would you say how it is trending? That is why the trend feature on my site adds the trend into each square, but that just adds a lot more data, which confuses things. If anyone finds anything about the model error on my site that they think could be improved, let me know.
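To make the averaging described above concrete, here is a minimal sketch (my own illustration, not the site's actual code; the function names are made up) of signed single-run error versus the absolute-value average:

```python
def signed_intensity_error(forecast_kts, observed_kts):
    """Single-run style error: positive means the model thought the
    storm would be stronger than it actually was; negative means weaker."""
    return forecast_kts - observed_kts

def average_intensity_error(errors):
    """Average style error: mean of the absolute values, so a -5 kt run
    and a +5 kt run average to 5 kt of error, not 0."""
    return sum(abs(e) for e in errors) / len(errors)

# Model forecast 70 kt, storm was actually 65 kt: error is +5 (too strong).
print(signed_intensity_error(70, 65))   # 5

# 5 kt too weak on one run, 5 kt too strong on the next: average is 5.
print(average_intensity_error([-5, 5])) # 5.0
```

Using the absolute value here trades away the sign information (you can no longer tell whether a model runs hot or cold on average) in exchange for errors that never cancel out.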