U.K. IT professionals are adopting a “Titanic mindset,” a study has found, unable to foresee the upcoming “iceberg” of their insufficient data recovery solutions.
Only 54% expressed confidence in recovering their data and mitigating downtime in a future disaster — despite 78% of the professionals surveyed saying their organisation has lost data at some point over the last year, either due to system failure, human error, or a cyberattack.
Assurestor, a provider of recoverability solutions, surveyed over 250 senior-level IT professionals, including CIOs and CTOs, at U.K. organisations in July 2024. Those who had experienced data loss were asked about its impact on their organisation, with 35% citing financial loss as the biggest consequence.
The findings corroborate a June report by Splunk showing that the world’s biggest companies experienced about $9,000 lost for every minute of system failure or service degradation. Contributors included direct revenue loss, diminished shareholder value, stagnant productivity, and reputational damage.
SEE: 1/3 of Companies Suffered a SaaS Data Breach in Last Year
The other two most-cited impacts of data loss for the Assurestor report were customer service implications (30%) and operational downtime (28%). Chillingly, 16% of respondents said that a significant data loss event would likely force the closure of their business.
The proliferation of sensitive data has contributed to the rise in data breaches for businesses. An August report from Perforce found that 74% of those that handle sensitive data increased the volume kept in insecure environments, such as development, testing, analytics, and AI/ML, in the last year.
UK IT pros are not regularly testing their data recovery processes
Despite the well-known and feared risks, IT leaders in the U.K. do not appear to be taking the necessary steps to mitigate them, such as data recovery testing. Just 5% test monthly, while 20% test only once a year or less, according to the Assurestor report. Even among the more regular testers, 60% verify that their company's data is fully recoverable and usable only once every six months.
“What we are seeing is what we call a ‘Titanic mindset’ when it comes to data recovery,” Stephen Young, executive director at Assurestor, said in a press release. “Organisations are thinking they’re unsinkable — until they’re not.”
He provided the CrowdStrike and British Library incidents as examples of how much downtime can cost organisations and the risks of insufficient technology. The former cost Fortune 500 companies at least $5.4 billion in direct financial losses, while “legacy infrastructure contributed to the severity of the impact” of the latter.
SEE: Downtime costs the world’s largest companies $400 billion a year, according to Splunk
Young added: “The fact that only just over half of respondents think their data is recoverable is a concern; this figure should be much nearer to 100%. Otherwise, how can your ‘readiness for recoverability’ be reported confidently to the Board and senior stakeholders?
“Confidence comes from identifying a company’s realistic needs, without compromising on cost — and thoroughly testing, repeatedly.”
The biggest reason for the lack of data recoverability planning? No one else seems to care
The Assurestor report identifies a core reason why businesses are not prioritising their data recovery plans despite knowledge of the risks: lack of internal support.
Execs are simply not providing enough resources to their IT teams, with 39% of respondents citing a lack of in-house expertise and 29% citing a lack of financial investment. Another 28% identified a lack of senior support in this area.
“Lack of top-down support in the way of insufficient funding can foster a culture of complacency, even apathy,” the Assurestor experts said. “If those tasked with protecting the business in the event of a data issue, attack or human error do not feel that threats are taken seriously — or understood — enough, then their approach and attitude may well reflect this.”
5 tips for supporting your data recovery process
Assurestor offered several recommendations to help organisations avoid the steep consequences associated with failing to enhance their data recovery process:
- Ensure a recovery environment is in place that allows for regular recovery testing but does not disrupt day-to-day operations.
- Employ a chief recovery officer whose responsibilities include ensuring sufficient data recovery processes and technologies are in place, and reporting on the business's recoverability status.
- Redefine the business's view of "disaster" to include cyberattacks, so that a backup plan is prioritised.
- Test data recovery plans and backup technologies monthly or as regularly as possible, and adapt them appropriately afterwards.
- Calculate how much downtime would cost the business and what it can afford, then ensure the recovery plan offers enough protection.
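The last tip amounts to simple arithmetic: multiply an estimated per-minute cost of downtime by a plausible outage duration. A minimal sketch of that calculation is below; all figures are illustrative placeholders (the $9,000-per-minute rate is Splunk's estimate for the world's biggest companies, and any given organisation's rate will differ):

```python
# Hypothetical back-of-the-envelope downtime cost model.
# All inputs are illustrative placeholders, not survey data.

def downtime_cost(cost_per_minute: float, minutes_down: float) -> float:
    """Direct cost of an outage given a per-minute cost estimate."""
    return cost_per_minute * minutes_down

# Example: Splunk's ~$9,000/minute figure for the largest companies,
# applied to a hypothetical four-hour outage.
cost_per_minute = 9_000.0
outage_minutes = 4 * 60

loss = downtime_cost(cost_per_minute, outage_minutes)
print(f"Estimated direct loss: ${loss:,.0f}")  # Estimated direct loss: $2,160,000
```

Comparing that estimated loss against the cost of a recovery solution (and the downtime the business can actually afford) gives a concrete basis for sizing the recovery plan.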
“Absolute reliability in your systems and data recovery is non-negotiable,” Young urged. “If there is even an iota of doubt, it’s an open door for challenges. This uncertainty needs to be identified and addressed before disaster strikes.”