
Ulster Bank - No Disaster Recovery Plan???

  • 04-07-2012 12:21pm
    #1
    Registered Users, Registered Users 2 Posts: 521 ✭✭✭


    I really cannot understand how a bank does not have a DR plan in place to eliminate the risk of such an event.

    I thought banks were legally required to keep multiple copies of their data in multiple places, so that a situation like the current one would not impact their customers/end users.

    Surely their IT systems should be set up so that they can roll back to a point in time where the data was "clean", and therefore there would be no impact on the customer.
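
    To illustrate what I mean by rolling back to a "clean" point in time, here is a deliberately toy sketch in Python (everything in it is invented and has nothing to do with how the bank's systems actually work): if you can replay a transaction log, you can rebuild the state as it stood just before the bad change.

        # Purely illustrative: rebuild state by replaying a transaction log up to a
        # cutoff. All names are made up for the sake of the example.
        from dataclasses import dataclass
        from datetime import datetime
        from typing import Iterable

        @dataclass
        class Txn:
            when: datetime   # when the transaction was processed
            account: str
            amount: int      # credit (+) or debit (-) in cents

        def replay_until(log: Iterable[Txn], cutoff: datetime) -> dict:
            """Rebuild balances from the log, ignoring anything after the cutoff."""
            balances = {}
            for txn in log:
                if txn.when > cutoff:
                    continue  # discard everything after the chosen clean point
                balances[txn.account] = balances.get(txn.account, 0) + txn.amount
            return balances

        if __name__ == "__main__":
            log = [
                Txn(datetime(2012, 6, 18, 9, 0), "IE00-EXAMPLE", 100_00),
                Txn(datetime(2012, 6, 18, 17, 0), "IE00-EXAMPLE", -20_00),
                Txn(datetime(2012, 6, 19, 21, 0), "IE00-EXAMPLE", -999_99),  # suspect run
            ]
            clean = replay_until(log, cutoff=datetime(2012, 6, 18, 23, 59))
            print(clean)  # {'IE00-EXAMPLE': 8000} -- state as of the last clean day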

    I find it very hard to comprehend how this is not the case. Compliance requirements should dictate that these types of events are a non-starter.

    Companies such as EMC, IBM and HP excel in this area.


Comments

  • Registered Users, Registered Users 2 Posts: 11,205 ✭✭✭✭hmmm


    Disaster recovery plans are great where something huge goes wrong, e.g. your building burns down. They are not as good for dealing with things half going wrong, as in this case. Often it's a lot worse when something half works than when it fails completely, because in the former case you naturally try to fix the problem in situ, which in the case of a massive bank can lead to a backlog building up.

    You have to realise that financial IT systems are often massively complex. They often have code that is decades old, and thousands of small changes have been made to the original code. These systems process millions of transactions daily and take feeds from multiple locations. If you miss a few hours of processing time, you can't just reboot the bank and go back to a fresh install.
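
    To make the "can't just reboot the bank" point concrete, here's a deliberately over-simplified sketch (invented names, not how any real batch estate is built): treat each night's run as depending on the previous night's output, and see what one missed window does to the days behind it.

        # Toy model of why one missed overnight batch snowballs: each night's run
        # needs the previous night's output, so everything behind the gap queues up.
        from datetime import date, timedelta

        def simulate(start: date, days: int, failed_night: date) -> list:
            backlog = []
            previous_output_ok = True
            for i in range(days):
                day = start + timedelta(days=i)
                if day == failed_night:
                    previous_output_ok = False  # the night the run didn't happen
                if previous_output_ok:
                    continue                    # processed on time as normal
                backlog.append(day)             # queued behind the missed run
            return backlog

        if __name__ == "__main__":
            stuck = simulate(date(2012, 6, 18), days=7, failed_night=date(2012, 6, 19))
            print(len(stuck), "nights of processing waiting to be replayed:", stuck)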

    In an ideal world, the legacy systems would be replaced by a brand new, purpose-designed system, but very few banks will invest in something like that.

    In theory, testing of the change should have prevented the NatWest/RBS/Ulster meltdown - but from reports I've read, it is claimed that an inexperienced employee simply hit a wrong button. If that was the case, the big question is why someone inexperienced was in that position (was it really outsourcing, as some claim?).

    Many people would be amazed how often massive consequences can arise from simple IT mistakes.


  • Registered Users, Registered Users 2 Posts: 1,259 ✭✭✭él statutorio


    There was no actual loss of data in this case. From my understanding of it, it was a batch schedule that was deleted by mistake; I don't think the schedules themselves are backed up (open to correction though).
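
    Roughly speaking, a batch schedule is just metadata describing which jobs run and in what order, something like the toy sketch below (the job names are invented and this is not the syntax of any real scheduler). Deleting it doesn't lose any account data; it just means nothing gets kicked off until it is rebuilt.

        # Invented example of a batch "schedule": a map of job -> dependencies.
        # Wiping it loses no account data, but nothing gets triggered any more.
        SCHEDULE = {
            "load_feeds": [],
            "post_transactions": ["load_feeds"],
            "update_balances": ["post_transactions"],
            "produce_statements": ["update_balances"],
        }

        def run_order(schedule: dict) -> list:
            """Resolve dependencies into a run order (a tiny topological sort)."""
            done = []
            remaining = dict(schedule)
            while remaining:
                ready = [j for j, deps in remaining.items()
                         if all(d in done for d in deps)]
                if not ready:
                    raise RuntimeError("circular or missing dependency")
                for job in ready:
                    done.append(job)
                    del remaining[job]
            return done

        if __name__ == "__main__":
            print(run_order(SCHEDULE))  # the full overnight run, in order
            print(run_order({}))        # schedule wiped: nothing runs at all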

    Everything I've read seems to indicate that they're still trying to catch up on that original schedule.


  • Registered Users, Registered Users 2 Posts: 24,924 ✭✭✭✭BuffyBot


    We do not need another thread on this.

    Anyway, if you want to discuss the technical specifics or disaster recovery in general, there are more suitable fora for it on boards.ie (that's way outside the remit of this particular forum).


This discussion has been closed.