
IBM

Comments

  • Registered Users Posts: 1,591 gabeeg


    Every hour?

    That's me sacked


  • Registered Users Posts: 1,376 ✭✭✭ Reckless Abandonment


    Get ready for the 11z, 14z, 17z.
    Just wouldn't be the same.
    But if it does what they say it will,
    it could be a game changer. Looks very interesting.


  • Registered Users Posts: 12,560 ✭✭✭✭ sryanbruen


    I hope it doesn’t become publicly accessible, purely at the thought of it coming into the hands of hypers on forums like Netweather. It’s bad enough as it is.

    Weather and climate site - https://www.ukandirelandclimate.com/ (advised to view on PC, not optimised for mobile)

    Photography site - https://www.sryanbruenphoto.com/



  • Registered Users Posts: 19,181 ✭✭✭✭ Akrasia


    That's really cool.

    What makes it even cooler is that the model uses quantum computing on the new IBM Q System One platform.
    It operates at 20 qubits. This is something that has been available as a cloud service from IBM since 2017, but now they're selling commercial units to private companies and universities. If the 20-qubit service can see improvements as dramatic as this, remember that the first quantum computers were running at 5 qubits a couple of years ago; they've already commercialised 20 qubits and are working on 50-qubit prototypes.

    This is another explosion in processing technology that will make ultra-fine-resolution weather models possible in the medium term, and with better resolution for real-time monitoring, that makes longer-range forecasting less inaccurate (although chaos still applies and small perturbations can still have exaggerated consequences)


  • Registered Users Posts: 8,201 ✭✭✭ Gaoth Laidir


    It seems like a bit of a marketing stunt by IBM, to be honest.

    I'd have major problems with including mass data from e.g. smartphones and private stations, as these would be useless in many cases. Take the case of a smartphone barometer. Assuming it is perfectly calibrated (I know mine is a couple of hPa off), the very act of driving in a car causes major fluctuations in the reading depending on the car's speed and whether the window's open or not. But even apart from that, driving along and rising even 4 metres (about the limit of the accuracy of the phone's GPS) will reduce pressure by around 0.5 hPa, therefore the potential error due to GPS altitude and barometer errors is huge compared to a standard station. We already have plenty of those on land, it's over the oceans and in more isolated locations that we need more. The only thing worse than no data is bad data. Garbage in, Garbage out. 4DVAR has greatly improved the assimilation process of the global models.
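The ~0.5 hPa figure for a 4-metre rise checks out with the hydrostatic approximation. A minimal sketch (the density and gravity constants are standard textbook values, not from the post):

```python
# Hydrostatic approximation: dp = -rho * g * dz, i.e. a small rise in
# altitude dz produces a proportional drop in pressure.
RHO = 1.225   # air density near sea level, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def pressure_drop_hpa(dz_m: float) -> float:
    """Approximate pressure decrease (in hPa) for a small altitude gain dz_m."""
    return RHO * G * dz_m / 100.0  # 1 hPa = 100 Pa

print(round(pressure_drop_hpa(4.0), 2))  # ~0.48 hPa, matching the ~0.5 hPa figure
```

So a GPS altitude error of a few metres really is on the same order as the signal a phone barometer would be contributing.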

    The fact that the resolution is non-uniform around the globe is nothing new. The ARPEGE is basically the ECMWF (~9 km) run with the highest resolution (~7.5 km) centred over France, reducing down to ~37 km on the opposite side of the globe (antipode). It's similar with this new model. Up to 3 km over land but much coarser over oceans. This limits the usefulness of its longer range forecasts, so a standard hi-res local model will do the same job for shorter-term forecasts of smaller systems.


  • Registered Users Posts: 19,181 ✭✭✭✭ Akrasia


    Gaoth Laidir wrote: »
    It seems like a bit of a marketing stunt by IBM, to be honest.

    I'd have major problems with including mass data from e.g. smartphones and private stations, as these would be useless in many cases. Take the case of a smartphone barometer. Assuming it is perfectly calibrated (I know mine is a couple of hPa off), the very act of driving in a car causes major fluctuations in the reading depending on the car's speed and whether the window's open or not. But even apart from that, driving along and rising even 4 metres (about the limit of the accuracy of the phone's GPS) will reduce pressure by around 0.5 hPa, therefore the potential error due to GPS altitude and barometer errors is huge compared to a standard station. We already have plenty of those on land, it's over the oceans and in more isolated locations that we need more. The only thing worse than no data is bad data. Garbage in, Garbage out. 4DVAR has greatly improved the assimilation process of the global models.

    The fact that the resolution is non-uniform around the globe is nothing new. The ARPEGE is basically the ECMWF (~9 km) run with the highest resolution (~7.5 km) centred over France, reducing down to ~37 km on the opposite side of the globe (antipode). It's similar with this new model. Up to 3 km over land but much coarser over oceans. This limits the usefulness of its longer range forecasts, so a standard hi-res local model will do the same job for shorter-term forecast of smaller systems.

    Millions of datapoints can be used by machine algorithms to generate a very 'accurate' picture despite inaccuracies on individual recording devices. Think of it like the human brain. Our brain can generate a stable image even when individual photons are phase shifted to different colours, or reflected, diffracted and diffused off different surfaces, and there are changing shadows and light pollution from different sources. Each of our eyes gets a different image which arrives upside down and has to be inverted and collated in our brain to generate a single image, and in that image, noise is filtered out and specific parts of the image get amplified based on internal rules that decide which information is more useful or important depending on the context. We do this about 30 times a second for our entire waking lives.

    The smartphone data could be used to compute anomalies per geographic datapoint linked to each individual smartphone, so if person X has a phone that consistently reads higher than usual at a certain location, and that user travels this route every day, the algorithm can assign that barometer reading an ID and report only the anomaly for that device in that location. Take the average anomaly for thousands of different devices, exclude the extreme outliers, give weightings to data sources with more consistent history, and you can get a picture out of the noise.
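The scheme described above can be sketched in a few lines. This is only an illustration of the idea, not anything IBM has published; the function names, the 3-reading minimum, and the 10 hPa outlier cut are all my own invented placeholders:

```python
# Per-(device, location) anomaly correction: keep a baseline per device per
# grid cell, report only the deviation from that baseline, drop extreme
# outliers, and weight devices that have contributed more history.
from collections import defaultdict
from statistics import mean

history = defaultdict(list)  # (device_id, cell) -> past readings in hPa

def record(device_id, cell, reading_hpa):
    history[(device_id, cell)].append(reading_hpa)

def cell_estimate(cell, reference_hpa):
    """Combine per-device anomalies in one grid cell into a single estimate."""
    anomalies, weights = [], []
    for (dev, c), readings in history.items():
        if c != cell or len(readings) < 3:
            continue  # need some history before trusting a device's baseline
        baseline = mean(readings)
        anomaly = readings[-1] - baseline
        if abs(anomaly) > 10:
            continue  # crude outlier cut, hPa
        anomalies.append(anomaly)
        weights.append(len(readings))  # more history -> more weight
    if not anomalies:
        return reference_hpa
    total = sum(weights)
    return reference_hpa + sum(a * w for a, w in zip(anomalies, weights)) / total
```

The point is that each device's absolute calibration error cancels out of its own anomaly, which is roughly why a fleet of badly calibrated sensors can still carry a usable pressure-tendency signal.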

    Of course, that takes enormous amounts of processing power, but that's the point of this machine. It can take trillions of pieces of data each day and use them in its model

    We have already seen how social media data analysts can take users personal data and build profiles of each user (billions of people) that can identify their personality and preferences better than people who they are in daily contact with can.

    With smartphone sensor data, it's taking only a couple of basic location and sensor datapoints and logging them. It's a much less complex problem than identifying what people are like based on which posts they share or pictures they like (although still very complex)

    Anyway, a system like this would use decentralised, unverified data to augment and support its service, which would be primarily based on verifiable station networks, rather than as a primary data source, at least until there are suitable studies to verify the accuracy of that data.


  • Moderators, Sports Moderators Posts: 40,053 Mod ✭✭✭✭ Sparks


    Akrasia wrote: »
    Millions of datapoints can be used by machine algorithms to generate a very 'accurate' picture despite inaccuracies on individual recording devices.
    But that's (a) not what the new quantum machine they're selling is designed for; (b) not usually true. The norm the vast, vast, vast majority of the time is "garbage in, garbage out".
    Think of it like the human brain. Our brain can generate a stable image even when individual photons are phase shifted to different colours, or reflected diffracted and diffused off different surfaces etc and there are changing shadows and light pollution from different sources, and each of our eyes gets a different image which arrives upside down and has to be inverted and collated in our brain to generate a single image, and in that image, noise is filtered out and specific parts of the image get amplified based on internal rules that decide which information is more useful or important depending on the context. We do this about 30 times a second for our entire waking lives.

    So, that's not how photons work, that's not how human eyes work, and that's not how the human visual system works insofar as we currently understand it (at all). We don't have a single static "refresh rate" on our visual system (it's more distributed than that), but we can detect things that are visible for only 16 ms, which would give an approximate rate of 60 Hz for detection. However, we can tell the difference between scenes changing at 60 Hz and scenes changing at 100 Hz and at higher frequencies, as well as doing a bunch of things that directly imply varying "refresh rates" for different parts of the system, so it's not really a useful number for characterising the human visual system.
    Of course, that takes enormous amounts of processing power, but that's the point of this machine. it can take trillions of pieces of data each day and use it in it's model
    That's not really the point of quantum computing. It's not so much about being able to handle large amounts of data, it's about being able to tackle specific kinds of calculations which with existing machines are prohibitive in cost (where cost is in terms of time rather than money). NP problems, basically. Going from "we can't calculate this within the lifetime of the known universe" to "we can probably calculate this within a finite time".

    In terms of handling large amounts of data, the approaches being developed for monitoring experiments at CERN or the Square Kilometre Array are a lot more advanced, because they're looking at more data (CERN generates about a petabyte of data per second during LHC runs, and the SKA will be around an exabyte of data per day).
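For scale, putting those two round figures side by side (simple arithmetic on the numbers quoted, nothing more; in practice CERN's trigger systems discard the vast bulk of that raw rate before storage):

```python
# Compare CERN's quoted raw rate (~1 PB/s during LHC runs) against the
# SKA's quoted daily volume (~1 EB/day), both as bytes per day.
PB = 10**15
EB = 10**18
SECONDS_PER_DAY = 86_400

cern_bytes_per_day = 1 * PB * SECONDS_PER_DAY  # 1 PB/s sustained for a day
ska_bytes_per_day = 1 * EB

print(cern_bytes_per_day / ska_bytes_per_day)  # 86.4
```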
    We have already seen how social media data analysts can take users personal data and build profiles of each user (billions of people) that can identify their personality and preferences better than people who they are in daily contact with can.
    That is not the same thing and is also not really true, though it was a very popular clickbait headline a while back.


  • Registered Users Posts: 1,591 gabeeg


    Sparks wrote: »
    That is not the same thing and is also not really true, though it was a very popular clickbait headline a while back.

    Eh Brexit


  • Registered Users Posts: 7,759 ✭✭✭ Calibos


    Sparks wrote: »
    But that's (a) not what the new quantum machine they're selling is designed for; (b) not usually true. The norm the vast, vast, vast majority of the time is "garbage in, garbage out".



    So, that's not how photons work, that's not how human eyes work, that's not how the human visual system works insofar as we currently understand it (at all) and we don't have a single static "refresh rate" on our visual system (it's more distributed than that) but we can detect things that are visible for only 16ms which would give an approximate rate of 60hz for detection (but we can tell the difference between scenes changing at 60hz and scenes changing at 100hz and at higher frequencies, as well as doing a bunch of things that directly imply varying "refresh rates" for different parts of the system, so it's not really a useful number for characterising the human visual system).


    That's not really the point of quantum computing. It's not so much about being able to handle large amounts of data, it's about being able to tackle specific kinds of calculations which with existing machines are prohibitive in cost (where cost is in terms of time rather than money). NP problems, basically. Going from "we can't calculate this within the lifetime of the known universe" to "we can probably calculate this within a finite time".

    In terms of handling large amounts of data, the approaches being developed for monitoring experiments in CERN or the Square Kilometer Array is a lot more advanced because they're looking at more data (CERN generates about a petabyte of data per second during LHC runs and the SKA will be around an exabyte of data per day).


    That is not the same thing and is also not really true, though it was a very popular clickbait headline a while back.

    What does literally everyone have now that doesn't move that is guaranteed to be connected to the internet??

    A Wifi Router.

    Get manufacturers to install the required hardware and have an opt out instead of opt in for sending the data back to base.


  • Registered Users Posts: 1,591 gabeeg


    Calibos wrote: »
    What does literally everyone have now that doesn't move that is guaranteed to be connected to the internet??

    A Wifi Router.

    Get manufacturers to install the required hardware and have an opt out instead of opt in for sending the data back to base.

    I don't have a wifi router. I've been using my phone as a hotspot for the last year or so.
    The reason being that I typically get 20 Mbps+, which is faster than your average wifi router.

    This is more of a money-saving tip than an argument against what you're suggesting


  • Moderators, Sports Moderators Posts: 40,053 Mod ✭✭✭✭ Sparks


    Calibos wrote: »
    What does literally everyone have now that doesn't move that is guaranteed to be connected to the internet??

    A Wifi Router.

    Get manufacturers to install the required hardware and have an opt out instead of opt in for sending the data back to base.

    That would be a tremendously terrible idea from the point of view of computer security. A single back door, in every single wifi router in the world (and I don't know how you'd even arrange that - just look at the current problems Huawei are having in the US in regard to security concerns), shipping data to a single central spot from your devices. That would be a nightmare to try to do securely, and I'm not sure the payoff is even well-defined, let alone proven.

    There are things that might look like this (like, say, LoRa sensor nets) but those are different in key aspects (like being opt-in, having dedicated sensors and devices and hubs, having dedicated channels and so on).


  • Registered Users Posts: 19,181 ✭✭✭✭ Akrasia


    Sparks wrote: »
    But that's (a) not what the new quantum machine they're selling is designed for; (b) not usually true. The norm the vast, vast, vast majority of the time is "garbage in, garbage out".



    So, that's not how photons work, that's not how human eyes work, that's not how the human visual system works insofar as we currently understand it (at all) and we don't have a single static "refresh rate" on our visual system (it's more distributed than that) but we can detect things that are visible for only 16ms which would give an approximate rate of 60hz for detection (but we can tell the difference between scenes changing at 60hz and scenes changing at 100hz and at higher frequencies, as well as doing a bunch of things that directly imply varying "refresh rates" for different parts of the system, so it's not really a useful number for characterising the human visual system).


    That's not really the point of quantum computing. It's not so much about being able to handle large amounts of data, it's about being able to tackle specific kinds of calculations which with existing machines are prohibitive in cost (where cost is in terms of time rather than money). NP problems, basically. Going from "we can't calculate this within the lifetime of the known universe" to "we can probably calculate this within a finite time".

    In terms of handling large amounts of data, the approaches being developed for monitoring experiments in CERN or the Square Kilometer Array is a lot more advanced because they're looking at more data (CERN generates about a petabyte of data per second during LHC runs and the SKA will be around an exabyte of data per day).


    That is not the same thing and is also not really true, though it was a very popular clickbait headline a while back.

    Well, I got served


  • Moderators, Science, Health & Environment Moderators Posts: 7,132 Mod ✭✭✭✭ pistolpetes11


    IBM had a stand at CES today, but upon asking a few very basic questions it was clear there were no answers to be had :eek:

    All looked great, will be interesting to see the results

    [photo attachment: 470087.JPG]


  • Registered Users Posts: 8,201 ✭✭✭ Gaoth Laidir


    pistolpetes11 wrote: »
    IBM had a stand at CES today, but upon asking a few very basic questions it was clear there were no answers to be had :eek:

    All looked great, will be interesting to see the results

    Without knowing if there were slides previous to that one, I see they conveniently left out the fact that the ECMWF is down to 9-km resolution. Not able to answer basic questions. It's an idea in its infancy, unproven as yet but still being marketed as the dog's ball locks. I see the same in my line of work. Unfounded claims of greatness.

    And for that reason, I'm OUT!


  • Registered Users Posts: 1,773 ✭✭✭ dacogawa


    It will be around 12-24 months+ before we remotely trust the GRAF, if ever. It needs to run through the seasons and be right a lot; we don't even trust the ECM & GFS properly, and they've output thousands of charts.

    It could be something amazing, and hopefully it will be (if we can even view it publicly), and many other new forecasts will come along.

    At the moment I'm a little bit sceptical, but I am hopeful too!


  • Closed Accounts Posts: 7,070 ✭✭✭ Franz Von Peppercorn


    sryanbruen wrote: »
    I hope it doesn’t become publicly accessible, purely at the thought of it coming into the hands of hypers on forums like Netweather. It’s bad enough as it is.

    They’d never sleep


  • Registered Users Posts: 5,617 ✭✭✭ circadian


    Many moons ago, I worked in the office above D-Wave in Vancouver. Any time I spoke to the lads they always said weather forecasting was the industry they saw the biggest potential for quantum computing.


  • Moderators, Science, Health & Environment Moderators Posts: 7,132 Mod ✭✭✭✭ pistolpetes11


    Gaoth Laidir wrote: »
    Without knowing if there were slides previous to that one, I see they conveniently left out the fact that the ECMWF is down to 9-km resolution. Not able to answer basic questions. It's an idea in its infancy stage, unproven as yet but still being marketed as the dog's ball locks. I see the same in my line of work. Unfounded claims of greatness.

    And for that reason, I'm OUT!

    Yeah, there were loads of slides/video. Like most of these shows, it seemed most on the front line were only really there to bat away a few questions; you could speak to some higher-ups if they thought you might be a customer.


  • Registered Users Posts: 983 ✭✭✭ MrDerp


    pistolpetes11 wrote: »
    Yeah there were loads of slides/ video , like most of these shows it seemed most on the front line were only really there to bat a few questions , you could speak to some higher ups if they thought you may be a customer.

    I was an IBMer in a past life. Anything shown at CES is just a brand/marketing exercise, and I wouldn't expect a theorist at these desks. If you want tech detail, pay 3.5k and go to the annual IBM conference. These services will be sold to big oil and (re)insurers via The Weather Company and a shedload of consultants.

    I happen to know a bit about this from before I left. The target is building level liability information and having near real-time risk information, presumably to trade in risk offset.

    They don’t care about the weather in your garden so much as getting more granular data for your area about how lower res patterns affect the gaffs, businesses and farms on your local road.

    Micro-level risk adjustment is the goal, along with understanding at longer ranges when to shut a production facility to avoid loss, etc.

