
Why aren’t the weather models learning from their mistakes?

  • 03-07-2012 9:32am
    #1
    Registered Users, Registered Users 2 Posts: 28


    So far this summer has followed the same pattern as last summer in relation to the weather models, i.e. they are optimistic about decent weather / high pressure 10 days out, but that gradually breaks down as the date approaches, with poor weather / low pressure usually winning out.

    My question is basically: is anyone updating these computer models so that they don't keep making the same errors over and over again?

    Surely if the same pattern constantly repeats itself, the people who manage these models should be able to feed the actual observed weather back in, so as to improve the accuracy of the models in the future?

    Recent weather has basically consisted of a series of low pressure systems rolling in off the Atlantic, keeping high pressure to the south, yet the models insist on (often) predicting high pressure 10 days out, which rarely comes to fruition.

    It's getting very frustrating / depressing.


Comments

  • Registered Users, Registered Users 2 Posts: 5,068 ✭✭✭Iancar29


    It's not that simple... weather models have SO many variables in them that one can't just go "ah, that's it there... change that to this and that and hey presto!"

    Modelling is a whole area of research in itself, and ECMWF only had one of their main meetings on it two weeks ago.

    Changing even one variable slightly, and it being the wrong one, could ruin everything... for all we know a NEW variable altogether may have to be discovered and added in.


    It's over years that models get better and better; new additions take time and testing before they can be properly implemented. Just like the new model HARMONIE that's based off HIRLAM: those who were at the IMS conference in the Botanic Gardens this year got to see an example of it working, and let's just say we were all quite pleased with it in action! :)


    So yeah... the simple answer to your question is, they can't, otherwise we may be DOOMED by the T1 virus!! :rolleyes: ;)


  • Registered Users, Registered Users 2 Posts: 28 Munstermac


    In fairness, Iancar, I never indicated that it would be simple or that changing one variable would 'fix' the problem, but I have checked the models every day for the last few years and, as I said, this issue has been apparent both this summer and last.

    When the models keep repeating the same mistake again and again over a long period of time, I would have thought that analysis of what the weather actually turned out to be, and the consequent adjustment of the models, would be a given.

    So far this summer, for the ordinary punter like myself trying to plan work / leisure, the models have been practically useless from day 4 out.


  • Closed Accounts Posts: 6,388 ✭✭✭gbee


    Ian gave a great explanation.

    My way of looking at it is, if the model calls for bad weather, it will still be poor weather.

    Also, historic weather patterns tend to hold; therefore, as an example, if there is a great snow event predicted for Ireland, I know it has to be an extreme or rare event to give snow to my area.


  • Closed Accounts Posts: 3,548 ✭✭✭Harps


    As well as the ridiculous number of variables and equations that are in these models, you've got probably millions of input observations from weather stations, ships, planes etc. spread over the entire globe. It's inevitable that there'll be a few mistakes in the data, as well as some areas that are data sparse where the models just interpolate starting conditions. I'm sure you've heard the 'if a butterfly flaps its wings' saying. I did a module on CFD modelling at college, and having just one starting condition slightly off could lead to a very different solution by the end of the run.
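
    A toy illustration of that sensitivity (not a weather model, just the chaotic logistic map): two runs started one millionth apart agree for a while and then bear no resemblance to each other.

    ```python
    # Toy illustration of sensitivity to initial conditions (not a weather model).
    # Two runs of the chaotic logistic map, started one millionth apart,
    # diverge completely within a few dozen steps.

    def logistic_map(x0, r=3.9, steps=50):
        """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the whole trajectory."""
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))
        return xs

    run_a = logistic_map(0.500000)
    run_b = logistic_map(0.500001)   # the starting condition "slightly off"

    for n in (0, 10, 20, 30, 40, 50):
        print(f"step {n:2d}: {run_a[n]:.4f} vs {run_b[n]:.4f}   diff {abs(run_a[n] - run_b[n]):.4f}")
    ```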

    I get what you're saying in that they regularly default to high pressure, but then that's the norm for this time of year, so I'd imagine the further out you go, the more important the historic average becomes as a factor. There's also the fact that low pressure tends to be very shallow at this time of year, rarely below 1000 hPa, so with no strong signals a trough can just seemingly pop up from nowhere three or four days out on the models.


  • Registered Users, Registered Users 2 Posts: 992 ✭✭✭MrDerp


    I know nothing about weather, but I know a bit about machine learning algorithms.

    At the end of the day, a prediction will be some function of:
    - the data,
    - the interactions and dependencies between different pieces of data, based on various factors (e.g. hot air over cold water),
    - a weighting for each of these factors.

    A good predictive algorithm will try to learn by providing feedback between the actual results and the predicted results.

    With machine learning, you introduce a variable called the learning rate. This can be simplified as the weight you give to the feedback data (you must have some measure of how good or bad the result was versus the prediction). If you set the learning rate too high, you can get swings between under-predicting and over-predicting, as the system will oscillate (and possibly never converge). A lower learning rate is used to control the wild oscillations and help the system converge on good results. The learning rate here is probably tiny.
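
    To make that trade-off concrete, here is a minimal, made-up sketch (nothing to do with any real forecasting code): gradient descent on a single weight, where the prediction is just weight times input. With the learning rate too high, the weight swings between under- and over-predicting and never settles; with a small one, it creeps towards the right answer.

    ```python
    # Hypothetical one-weight example of the learning-rate trade-off described above.
    # We try to learn the weight w that maps an input x to a target y = 3 * x,
    # using gradient descent on squared error.

    def train(learning_rate, steps=8, x=2.0, y=6.0, w=0.0):
        history = []
        for _ in range(steps):
            pred = w * x                   # prediction = weight * data
            grad = 2 * (pred - y) * x      # gradient of (pred - y)^2 with respect to w
            w -= learning_rate * grad      # feedback: nudge the weight
            history.append(round(w, 3))
        return history

    print("lr = 0.25:", train(0.25))  # swings between 0 and 6 around the answer (w = 3), never converging
    print("lr = 0.05:", train(0.05))  # creeps up towards w = 3 without oscillating
    ```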

    Now, a weather model is (presumably) a massively massively complex algorithm with lots and lots of different data points, which means it has to model lots and lots of different weights. This is probably why (again, I know little about weather modelling) the system needs such awesome computational time and space.

    As such, you cannot allow the system to try different weights willy-nilly, as this will cause chaos in the data. Rather, you need to experiment with different factors in small numbers. So that means studying the data, examining the weights attached to different factors (e.g. the effect of low pressure moving over a land mass having come from the ocean) and trying to determine where to tinker.

    Then, and only then, do you allow the machine to learn from that feedback.

    The system is frankly too complex to allow feedback of the nature of "10-day predictions of warm/sunny have failed on the last 5 goes, therefore this current set of conditions probably means cold/wet". You can be confident that this is being fed back into the system, when allowed, but this learning model must take years to mature, as it is effectively running in real time, unlike the simulated time of the actual model runs.*

    Again, I know nothing of these models, but I presume some of the extra experimental runs of the models referenced on this board are copies of the model allowed to learn and/or test some new variables/weights.

    *Edited to say: of course, they probably do test the models and weights by running them on real historic data, to see whether they can predict, from previous weather, what was known to have followed.
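
    Purely as an illustration of that footnote, with everything below invented (this is not how ECMWF or anyone else actually structures it): hindcast verification boils down to taking archived starting conditions, running the model forward, and scoring the forecast against the weather that actually followed.

    ```python
    # Hypothetical sketch of hindcast verification. `run_model` and the archive
    # are invented for illustration; a real centre would verify many fields over
    # thousands of archived cases.

    def run_model(initial_conditions, lead_days):
        """Stand-in for a real forecast model (here just a naive persistence forecast)."""
        return initial_conditions["pressure_hpa"]   # carries the starting value forward unchanged

    # (starting conditions, observed pressure `lead_days` later) -- made-up numbers
    archive = [
        ({"pressure_hpa": 1002.0}, 998.0),
        ({"pressure_hpa": 1015.0}, 1004.0),
        ({"pressure_hpa": 1008.0}, 1001.0),
    ]

    errors = []
    for initial, observed in archive:
        forecast = run_model(initial, lead_days=10)
        errors.append(abs(forecast - observed))

    print("mean absolute error over the hindcast set:", round(sum(errors) / len(errors), 1), "hPa")
    ```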


  • Registered Users, Registered Users 2 Posts: 28 Munstermac


    Very interesting read MrDerp.

    Your last line especially was exactly what I was thinking.

    I think if I were involved in one of these models I would run a version of it on a completely separate system, solely with the purpose of tweaking it so that when you run it on a set of historical data it comes out largely as happened in reality.

    I'm not naive enough to think this would solve all problems but it would be a start.

    I can't help but wonder: even with the benefit of hindsight and up-to-date data, would the current models give an accurate representation of what actually happened?


  • Registered Users, Registered Users 2 Posts: 992 ✭✭✭MrDerp


    Munstermac wrote: »
    Very interesting read MrDerp.

    Your last line especially was exactly what I was thinking.

    I think if I were involved in one of these models I would run a version of it on a completely separate system, solely with the purpose of tweaking it so that when you run it on a set of historical data it comes out largely as happened in reality.

    I'm not naive enough to think this would solve all problems but it would be a start.

    I can't help but wonder: even with the benefit of hindsight and up-to-date data, would the current models give an accurate representation of what actually happened?

    It's possible that they would give a good result in historical scenarios. It's also entirely possible that it's an emergent bias, created by the very process of using historical data, that's causing the errors today!

    That chaos theory is a bit of a minefield :)
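
    A contrived example of the kind of bias meant above (invented numbers, not from any real model): fit a curve that matches a run of past summers exactly, and it will project the memorised pattern onto a summer that no longer follows it.

    ```python
    # Contrived illustration of bias learned from history (invented data).
    # A degree-4 polynomial fits five past values exactly, but its "forecast"
    # for the next two years goes badly wrong once the memorised pattern ends.

    import numpy as np

    past_years   = np.array([0, 1, 2, 3, 4])
    past_values  = np.array([14.8, 15.1, 15.0, 15.3, 15.6])   # made-up summer averages
    coeffs = np.polyfit(past_years, past_values, deg=4)        # fits the past exactly

    future_years  = np.array([5, 6])
    future_actual = np.array([14.9, 15.0])                      # the pattern breaks
    predicted = np.polyval(coeffs, future_years)

    print("predicted:", np.round(predicted, 1))   # roughly [14.3, 8.6] -- wildly off
    print("actual:   ", future_actual)
    ```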


  • Registered Users, Registered Users 2 Posts: 5,068 ✭✭✭Iancar29


    MrDerp wrote: »
    It's possible that they would give a good result in historical scenarios. It's also entirely possible that it's an emergent bias, created by the very process of using historical data, that's causing the errors today!

    That chaos theory is a bit of a minefield :)

    Exactly... but it's also fascinating all the same :)

    http://en.wikipedia.org/wiki/Chaos_theory

