If your bank had a systems glitch that prevented you from making any transactions over its network, and I do mean ANY transaction, how would you feel? Let’s put this in some context. You would be unable to do any online banking via the internet or mobile, use an ATM, cash a check, make a mortgage repayment or even use your debit card or credit card. How long would you last before getting really annoyed – caught in the middle of shopping at the supermarket, filling your car with gas, paying a road toll – the list goes on? Would it be seven seconds, seven minutes – well, how about seven hours!
That’s exactly how long DBS Bank in Singapore was ‘off the air’ early last week. After experiencing its worst-ever systems-related failure, one that crippled over 1,000 ATMs and affected many thousands of its customers, it still can’t explain what happened. Back-up systems that should have kicked in failed to do so, which could put DBS at odds with Monetary Authority of Singapore guidelines that require banks to have recovery facilities able to correct faults within a very generous four hours.
After a similar experience in 2001 that brought down the network for only one hour, DBS decided to outsource most of its core IT operations to IBM, presumably to prevent another occurrence. At the time, DBS said the arrangement would shave 20 per cent off the cost of the services being outsourced, resulting in savings of about $50 million over the first three years of the agreement. Perhaps another glitch in October last year, when internet banking and branch terminals went offline, should have been a warning. Apparently not, but this time the finger was pointed, rather hurriedly, at IBM.
The technical meltdown has raised a number of questions about DBS’ strategy of outsourcing critical information technology functions. The critics were not slow in coming out. Nomura analyst Anand Pathmakanthan was reported as commenting, “Anything that’s not in your hands definitely becomes more risky because if it’s an external party, you have to trust the party to know what he has to do.”
The Outsource Blog was quick to point out that “a malfunction at the very heart of the bank’s technology infrastructure may have triggered the unprecedented systems failure that DBS Bank suffered… and the bank appears to have been hit by a mainframe glitch.”
Whatever the problem was, it has still to be disclosed, and the repercussions for a number of high-growth industries are being felt.
Outsourcing of key IT functions was the first to come under attack. Then questions arose about the business continuity management (BCM) plans and risk management policies in place that apparently failed to kick in. Some were quick to point out that if the failure had been network related, it would have serious repercussions for CSPs pushing their Cloud Computing capabilities.
Whichever way you look at these high-profile ‘outages’, they spell bad news for all associated industries, and the longer it takes to find the source of the problem, the less confidence the market will have in them. For those senior execs who have achieved major cost savings by outsourcing IT in toto, or by neglecting BCM, sleep deprivation may be their main problem right now.
Postscript: On July 14, Singapore’s Straits Times reported:
A week after a massive systems failure prevented its customers from using ATMs or accessing their accounts, DBS Bank has revealed the root of the problem.
It was a routine computer repair job at 3am on July 5 by IT vendor IBM that went horribly wrong, said chief executive officer Piyush Gupta in a personal letter to customers of DBS and POSB banks.
The IBM team used what he called an ‘outdated procedure’ – a fatal error that crashed the entire system.