Preparing to deal with a data breach – the role of the DPO

This article appeared in the January 2019 edition of Privacy Laws & Business – www.privacylaws.com.

 

Planning for data breach – the role of the DPO

DPOs have a major role to play in preventing data breaches in their organisations and in minimising their impact.  This is true even if you don’t have a particularly strong IT background: you can still ask the awkward questions which lead to discussions, increased awareness and, with a bit of luck, improvement in the organisation’s breach-related capabilities.

The recent high-profile breaches have increased consumers’ and organisations’ awareness of breaches.  From May to December 2018, the UK ICO received more than 8,000 breach notifications.  The Ponemon Institute’s recent report (2018 Cost of a Data Breach Study: Global Overview) gives some interesting figures on data breaches:

 

  • 27.9% of organisations are likely to suffer a breach in the next 24 months.
  • In the UK, the average time to identify the breach was 163 days, and the average time to contain the breach was 64 days. Both the time to identify and the time to contain were highest for malicious and criminal attacks and much lower for data breaches caused by human error.
  • Companies that identified a breach in less than 100 days saved more than $1 million as compared to those that took more than 100 days. Similarly, companies that contained a breach in less than 30 days saved over $1 million as compared to those that took more than 30 days to resolve.

In other words, the longer the breach goes undetected, and the longer it takes to fix once detected, the worse the financial and reputational impact for the organisation (and the data subjects).  A sensible approach therefore puts as much emphasis on recovering from a breach as it does on preventing one.  I find it useful to split the work into three workstreams:

 

  • Workstream 1: Preventing a breach from occurring.
  • Workstream 2: Breach crisis management (ie. the days immediately following the discovery of the breach).
  • Workstream 3: Breach recovery (ie. the immediate crisis is over, and we are now into clearing up the mess, absorbing the lessons to be learnt, and making the required changes to the organisation).

 

However, bear in mind that there is considerable overlap between the three workstreams (for example, something that reduces the chance of a breach may also allow the organisation to recover from it faster).  The main function of splitting the work into three is to provide a conceptual approach which leads to greater operational success.  In other words, the workstreams are not the solution; they are just tools by which to reach a better solution.

 

On this note, a word of caution.  It is common to see vendors offering emergency response teams and similar services.  In my view, relying on these is a dangerous delusion.  When a serious breach hits, even the best-prepared organisations will be in a state of controlled panic.  Expecting a third party, with little knowledge of the organisation and little knowledge of its people, to arrive and fix everything is not realistic.  Every euro, pound or hour spent in intelligent preparation is worth a hundred spent after the breach.

 

Workstream 1: Preventing a breach from occurring

For these purposes, breaches break down into two main types: breaches caused by internal players and breaches caused by external players.

Breaches caused by internal players are usually the result of carelessness (a laptop left on a train) or malice (a disgruntled employee, as happened in the Morrisons case).  Both are preventable by relatively simple measures that most organisations should now have in place: technical restrictions on copying personal data to devices that can be taken off the premises, technical restrictions on transferring data off-premises (eg. by email or FTP), least-privilege access, default encryption, effective leaver policies and so on.

However, it is worth reviewing each of these and, in particular, stress-testing them.  User access policies are often under-monitored and under-controlled (ie. too many people have access to too much data), and leaver policies are often weakly implemented.  For those organisations that move large amounts of personal data in and out of the organisation (eg. outsourcing companies), it is also worth making sure that there are clear rules – and actual restrictions – around moving data in and out.  For example, most organisations cannot transfer money from their accounts without at least two authorisations: there is no reason for personal data to be treated any less stringently.
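To make the money-transfer analogy concrete, here is a minimal sketch, in Python, of what a dual-authorisation rule for bulk exports of personal data could look like.  All names, roles and thresholds are invented for illustration; they are not a reference to any particular product or to any specific organisation’s controls.

# Illustrative sketch only: a "two-person rule" for bulk exports of personal data.
from dataclasses import dataclass

@dataclass(frozen=True)
class Approval:
    approver: str   # user id of the person signing off
    role: str       # eg. "data owner" or "security"

def export_allowed(record_count, approvals, threshold=1000):
    """Allow small exports; above the threshold, require two distinct approvers."""
    if record_count <= threshold:
        return True
    return len({a.approver for a in approvals}) >= 2

# A 50,000-record export with a single sign-off is blocked...
print(export_allowed(50000, [Approval("alice", "data owner")]))       # False
# ...but can go ahead once a second, different person has approved it.
print(export_allowed(50000, [Approval("alice", "data owner"),
                             Approval("bob", "security")]))           # True

The point is less the code than the design choice: the control lives in the system, so it does not depend on individuals remembering (or choosing to follow) the rule.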

When thinking about internal rules and restrictions, it’s helpful to distinguish between behavioural approaches and system approaches.  A behavioural rule looks like this: don’t print personal data unless you have a good reason.  A system approach looks like this: you are not given access to printers, so the question of bad printing practice doesn’t arise.  Each approach has its own costs and benefits.

Breaches caused by external players (ie. hacks) are generally more damaging and generate more publicity.  Although provoked by external players, they usually rely on weaknesses in the organisation’s IT setup.  As mentioned above, the fact that a DPO does not have a strong IT background should not prevent the DPO asking the relevant questions.

Security is a subject in its own right.  A good starting point is the Open Web Application Security Project (OWASP) at owasp.org.  As its name suggests, OWASP looks mainly at security at the application level.  Different network architectures also have different security profiles (there is usually a trade-off between security and ease of administration), and it is worth becoming familiar with these.
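For example, injection flaws have long featured in the OWASP Top Ten.  The short sketch below (Python, using the standard sqlite3 module; the table and data are invented purely for illustration) shows why building a query by pasting user input into the SQL is dangerous, and how a parameterised query avoids the problem.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, name TEXT)")
conn.execute("INSERT INTO users VALUES ('anna@example.com', 'Anna')")

user_input = "nobody@example.com' OR '1'='1"   # attacker-supplied value

# Vulnerable: the input becomes part of the SQL, so the OR clause matches every row.
rows_vulnerable = conn.execute(
    "SELECT * FROM users WHERE email = '" + user_input + "'").fetchall()

# Safer: the input is bound as a parameter and treated as data, not as SQL.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_input,)).fetchall()

print(len(rows_vulnerable), len(rows_safe))   # prints: 1 0

A DPO does not need to write this sort of code, but knowing that the fix is usually this simple makes it much easier to ask the IT team whether it has been applied.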

 

Finally (for this section): insurance.  The costs (internal and external) of a data breach can be huge.  At the moment insurance cover is relatively cheap (expect this to change over the next few years), but does it cover all the costs you are likely to face?  Will it cover the fines from the Supervisory Authority?  And is the cover large enough to deal with class actions?  More on insurance in a separate article.

 

Workstream 2: Breach crisis management

There are two things that make breach crisis management particularly difficult.  The first is that the organisation will have to make a number of quick decisions based on insufficient information.  You will know that there has been a breach, but you may not know how many data subjects are affected, you may not know exactly what data has been taken, and you almost certainly will not know who took the data and what they have done with it.

Nevertheless, despite this lack of information, a number of decisions will often have to be taken, and taken quickly (of which the notification to the Supervisory Authority is likely to be the easiest).  IBM make the point that most C-level decision makers find this very difficult.  They are used to taking decisions based on lots of data, intelligently organised: taking important decisions based on a shortage of data is not something that comes naturally to them.

The second thing that makes breach crisis management particularly difficult is the number of players that need to work closely together if there is to be a successful response to the breach.  Internal players will typically be: the CEO, the CISO and various other C-levels, IT, Legal, the DPO, public relations/marketing, the chief customer officer (or equivalent), the vendor manager (if relevant), Compliance, HR/internal comms, and Board directors.  External players are likely to be: the police (a hack is a crime), an external PR agency (to manage perception of the breach), an external law firm (to manage your legal risk), IT forensics (to analyse the breach), and insurers.

If you are a controller and the hack affects data that you have outsourced, your processor will be fielding a similar line-up, so multiply the number of people involved by two.  If you are a processor and the affected data involves a number of customers, then multiple teams are likely to be involved: multiply the number of people involved by three or more.

 

There are only two ways to make this manageable.

 

First, you have to have a clear policy setting out who does what in the event of a breach.  It needs to list all the players (internal and external), their deputies, their phone numbers (mobile, home and work), the process for notifying everyone, and a clear indication of who is in charge of running the breach response (ie. identify the Breach Boss and their deputy).  The policy should also set out what the initial steps are for any breach: having an existing checklist for the first hours will be helpful in the crisis.
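By way of illustration only, the roster and first-hours checklist in such a policy might be captured along the following lines (the structure, field names and checklist items here are hypothetical, and the 72-hour point simply reflects the GDPR notification deadline):

# Hypothetical sketch of the roster and first-hours checklist a breach policy needs.
from dataclasses import dataclass, field

@dataclass
class Contact:
    role: str      # eg. "Breach Boss", "CISO", "External PR"
    name: str
    deputy: str
    mobile: str
    home: str
    work: str

@dataclass
class BreachPlan:
    breach_boss: Contact
    team: list = field(default_factory=list)          # all other internal and external players
    first_hours: list = field(default_factory=list)   # the initial checklist

plan = BreachPlan(
    breach_boss=Contact("Breach Boss", "<name>", "<deputy>", "<mobile>", "<home>", "<work>"),
    first_hours=[
        "Confirm the breach and preserve the evidence",
        "Notify the Breach Boss and assemble the team",
        "Open the incident log: times, facts known, decisions taken",
        "Check whether the 72-hour clock for notifying the Supervisory Authority has started",
    ],
)

Whether this lives in a document, a spreadsheet or a tool matters far less than that it exists, is kept current, and is available when the systems that normally hold it may themselves be down.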

If you are a controller with a major processor, or if you are a processor with a major customer, you will need to agree a common breach policy which allows both organisations to work effectively together.  A policy which ignores this dependency is not going to be realistic.

 

Second, the relevant people have to have training and, more importantly, regular rehearsals to make sure that the policy works as intended (and so that you can change the policy to the extent that it does not).  The more realistic the rehearsal (eg. a breach notified to the team at 10pm on a Friday night), the more relevant and useful it will be.  What is not realistic is to have no rehearsals and then expect the team to perform to an acceptable level.  The FCA was particularly scathing on this point when it fined Tesco Bank £16 million (reduced from £33 million) for a breach affecting only approximately 8,000 customers.  Here’s what the FCA said:

“Having well documented crisis management procedures is an essential element of a bank’s (or any financial institution’s) cyber-resilience procedures. It is equally important to ensure that the individuals responsible for implementing crisis management procedures understand the procedures and have the appropriate training to understand how to use the policies and procedures and that banks rehearse these procedures using a variety of scenarios.” [emphasis added]

By “a variety of scenarios”, I’m assuming that the FCA not only means different factual backgrounds, but also workshops, dry runs, war gaming and the like. If we assume that the breach, when it comes, will not come in the form we expect (and this is the only safe assumption), then the key skill is the ability to think quickly and on your feet.  This is the “no plan survives first contact with the enemy” point. Along the same lines is Eisenhower (who turned out to be pretty good at crisis management): “plans are useless but planning is indispensable”.

So, in my view, the main lesson from Tesco Bank is – don’t rely on the paper procedures. If you want to be ready, make sure you organise regular rehearsals, workshops, dry runs, war gaming and the like. Build some institutional savvy and resilience.

 

Workstream 3: Breach recovery

Most organisations focus on preventing breaches.  They do not put enough effort into thinking about and planning for recovery once a breach has happened (even though, as the Ponemon Institute study shows, a faster recovery saves large amounts of money).  Often, the functions of recovering the system and running the system are allocated to two different departments, with the result that organisational dysfunctions emerge precisely at the point where the organisation can least tolerate them.

For an account of how it feels to be an organisation in meltdown because of an aggressive hack, read this account of how Maersk was impacted by the NotPetya virus: https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/

 

I have set out below a checklist that a DPO can use to explore how well the organisation is set up for recovery.  Thanks to Felicity March of IBM for this list:

 

1. Question: Does your organisation have one defined person who is responsible for system resilience?
   Commentary: If not, then recovering from a breach is likely to be more difficult.  See the next question.

2. Question: Does your CISO work hand in hand with your disaster recovery manager and business continuity manager?
   Commentary: Even if there is no single person responsible for system resilience (ie. if the different elements of resilience are shared amongst different people), things are still workable if those people work together cohesively.

3. Question: Does your back-up strategy include point-in-time (PIT) copies, air-gapped and WORM storage, forensic analysis, and continuous switch-over testing?
   Commentary: PIT copies allow the organisation to go back to a pre-virus point.  Air-gapped copies are not connected to the system, and so are protected from a virus that travels through the network.  WORM (Write Once Read Many times) storage reduces the chance of propagating the virus, because the copy can only be written once.  A short sketch of the idea follows this checklist.

4. Question: Does your DR plan get tested regularly (including relevant parts of the supply chain)?
   Commentary: If it is not tested regularly (which means testing all of it, not just parts of it), including relevant suppliers, then there is a good chance that the system will not work as it should when needed.

5. Question: Do you run your production environment from your DR environment on a regular basis?
   Commentary: Most organisations are reluctant to DR-test the production environment, because it is the heart of the system.  But if you are reluctant to test production in a benign environment (ie. no breach detected), you cannot reasonably expect it to work in an environment that is not benign.

6. Question: Is resilience at the core of your network design?
   Commentary: Expect a breach, and design for it.  27% of companies are likely to suffer a breach in the next 24 months.

7. Question: Have you identified your crown jewels and put particular protection around them?
   Commentary: Not all information is created equal: some files and some data are more important than others.  It is important to work out which is which, and to protect it appropriately.  See the Maersk story above for how Maersk managed to recover the only remaining copy of the network’s domain controller.
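To make item 3 slightly more concrete, here is a minimal sketch in Python of the idea behind a point-in-time copy: take a copy at a known moment, make it hard to alter, and make it verifiable later.  The paths and function names are invented, and in practice PIT, WORM and air-gapping are provided by the storage layer (snapshots, object locks, offline media) rather than by application code.

# Illustrative sketch only: a crude point-in-time copy with a tamper-evidence manifest.
import hashlib, json, shutil, stat, time
from pathlib import Path

def point_in_time_copy(source, backup_root):
    """Copy source into a timestamped snapshot, record SHA-256 hashes, mark files read-only."""
    snapshot = Path(backup_root) / time.strftime("%Y%m%dT%H%M%S")
    shutil.copytree(source, snapshot / "data")
    manifest = {}
    for f in (snapshot / "data").rglob("*"):
        if f.is_file():
            manifest[str(f.relative_to(snapshot))] = hashlib.sha256(f.read_bytes()).hexdigest()
            f.chmod(stat.S_IREAD)   # best-effort "write once" at the file level
    (snapshot / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return snapshot

def verify(snapshot):
    """Re-hash the snapshot and compare against the manifest recorded at copy time."""
    manifest = json.loads((Path(snapshot) / "manifest.json").read_text())
    return all(hashlib.sha256((Path(snapshot) / name).read_bytes()).hexdigest() == digest
               for name, digest in manifest.items())

A genuine air gap means the copy is physically or logically disconnected from the network; no amount of application code can substitute for that.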