Five Lessons from Down Under's Data Disasters: Part One
The lessons from the Medibank and Optus data breaches are vast, varied and vital. The first in a five-part series argues that we lost our way figuring out why and when data loss matters
Maybe we’d forgotten about data breaches. Maybe we’d started to think of them as old-fashioned and passé.
In Western cyber security circles we (understandably) spent most of 2022 studying the digital dimensions of Russia’s murderous invasion of Ukraine (more on that in future posts). We (again, understandably) spent 2021 fretting about an explosion of disruptive ransomware when systems didn’t work, often with potentially dangerous consequences (think of the serious disruption to Irish healthcare, or the disorderly lines queuing for gas on the US eastern seaboard). We spent (yes, understandably) 2020 trying to figure out how to work remotely safely, and cope with all these new Covid scams, as well as how to prevent vaccine espionage and misinformation.
But data breaches? It’s been a while…
Recent events in Australia should remind us of the centrality of data protection to the health of the digital domain. The lucky country’s extended period of luck in cyberspace finally ran out. Two major breaches have not just shaken one of the world’s richest, happiest and most digitally advanced countries but have raised just about every major public policy issue relating to cyber security that there is. There are no easy answers, but we have to pay attention to the questions.
Taken together, the two incidents are among the most consequential in recent cyber security history. The lessons arising from them are vast, varied and vital.
This series identifies five of the most important. They are by no means exclusive to the incidents in Australia, but each features in the cyber security crisis there.
The series starts today with a look at how our failure to understand the differing severity of harm caused by different data breaches has hampered our ability to understand the problem and tackle it.
It will be followed by four further posts over the coming weeks on other crucial lessons of events on the other side of the world.
But first, given that the incidents have received surprisingly little attention in the northern hemisphere, even among cyber geeks, a quick recap is necessary:
on 22 September, Optus, one of Australia’s largest telcos, announced a data breach affecting up to 10 million customers (a very good early timeline of the Optus case is here). In keeping with tradition, the company called it “a sophisticated attack”. In a striking (and welcome) break with said tradition, Clare O’Neil, Australia’s new(ish) Minister for Home Affairs and Cyber Security (the last part of her title being completely new) pointedly rejected this characterisation, accusing the company of leaving the door open by way of a public API endpoint that required no authentication and was therefore open to anyone. Just over 10,000 records were published on the dark web and then taken down by an apparently penitent actor. The behaviour of whoever accessed the data was, on the whole, a bit weird: an apparent ransom demand for US$1m in crypto, in return for not publishing the information, was made and then dropped.
It later emerged that most of the exposed records were the classic basic trove of email addresses, phone numbers, dates of birth and not much more. For thousands of Australians, however, the breach involved the compromise of much more important identifiers, principally driving licenses and passport numbers. Under pressure from the Federal Government, Optus agreed to pay for the replacement of the passports of those affected. There has not, at the time of writing, been any further evidence of data leakage resulting from the Optus breach.
if Optus took the form of the classic commercial data breach, with a bit of passport data to make it more serious than most, then the Medibank affair was of a different order, quickly turning into one of the most serious and sickening data breaches in history (a good timeline of the early stages in the saga is here).
In mid October, Australia’s largest health insurer detected a significant breach. Throughout the rest of October the company seems to have gone through the classic data breach progression from denial to acceptance. Its early statements played down the incident, but by the week of 7 November the company had been contacted by the malicious actor, it had become clear that 9.7 million medical records had been stolen, and it was time to come clean about the severity of the incident.
The behaviour of the criminals behind this attack showed much more planning and foresight. Technically, this did at least have some indicators of a sophisticated operation. The hackers knew what they were doing and used conventional methods of demanding a ransom (pricing it at a dollar per customer, or $9.7m) in return for not publishing the information. On 7 November the company announced publicly both the extent of the breach and its refusal to pay the ransom. Two days later, the hackers began to release sensitive medical records relating to thousands of people onto the dark web. The first files apparently related to women’s reproductive health and patients with alcohol or mental health problems. Yes, it really was that bad.
As a result, the case became headline news in Australia (it is as rare for a cyber incident to lead the news in Australia as it is in the UK or US). On 11 November, the Australian Federal Police held a remarkable press conference at which they announced that the attackers were criminals based in Russia, whose identities were known to the Australian authorities but were not publicly disclosed. They announced a plan to pursue the criminals through the Russian authorities and Interpol. The Federal Government announced a new offensive cyber unit, to be staffed jointly by the AFP and the Australian Signals Directorate (think the NSA or GCHQ for Australia), to go after the criminals’ digital infrastructure. At the end of November, after further disclosures of apparently intelligible information, the hackers published an odd message saying “case closed” and purported to dump all the remaining data onto the dark web; however, the data was, in the seemingly accurate view of Medibank itself, “incomplete and hard to understand”.
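The failure the Minister described in the Optus case - an API endpoint that hands back customer data without checking who is asking - can be sketched in miniature. This is purely illustrative Python, not Optus’s actual code; all the names and data here are hypothetical:

```python
# Illustrative sketch of the "no authentication" anti-pattern, and the
# minimal fix. Hypothetical names and data throughout - this is not a
# reconstruction of any real system.

CUSTOMERS = {
    1001: {"name": "A. Customer", "dob": "1990-01-01"},
    1002: {"name": "B. Customer", "dob": "1985-06-15"},
}

def vulnerable_lookup(customer_id):
    """Anti-pattern: returns a record to anyone who asks.

    With sequential IDs, an attacker can simply enumerate
    customer_id values and scrape the entire dataset.
    """
    return CUSTOMERS.get(customer_id)

def safer_lookup(customer_id, auth_token, valid_tokens):
    """Same lookup, but the caller must first present a valid token."""
    if auth_token not in valid_tokens:
        raise PermissionError("authentication required")
    return CUSTOMERS.get(customer_id)
```

The point is not the token mechanism itself but the asymmetry: the vulnerable version lets anyone who can reach the endpoint walk the ID space, which is how a "not very sophisticated" attack can still yield millions of records.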
Why do these hacks matter? (beyond the obvious)
So for a period of nearly three months, and particularly in November, serious and scary data breaches have been at the top of the political agenda in Australia in a way not seen in other major Western economies for some years. The new Labor administration in Canberra has taken the incidents extraordinarily seriously. The Government seemingly plans a wide-ranging review of Australia’s cyber security laws and strategic posture.
Clearly, and for obvious reasons, these incidents matter to the millions of Australians affected. But their implications go well beyond Australian shores (there is absolutely no reason to believe that Australia, a wealthy Five Eyes country with excellent security services, is somehow uniquely bad at cyber security).
As well as reminding us that data breaches haven’t gone away, the incidents throw up some of the most important and unsolved challenges of doing cyber security in advanced digital economies. These incidents therefore have global resonance and importance.
The five lessons from Australia
Here are the five lessons from Australia’s recent experience that will be the focus of this series:
first, data breaches still matter. But some matter way more than others. Our failure to distinguish between incidents in terms of severity is helping fuel complacency about data security. That is the focus of this first post in the series.
second, Governments are going to be far more involved in cyber incidents and cyber regulation in most countries, whether the private sector likes it or not. The Australian Government has shown significant and innovative leadership in its response. It’s used its public platform well. And it seems to be embarking on a wide-ranging review of its posture. All that is likely to have resonance further afield.
third, we still really, really need to talk about ransoms. We need to talk about whether companies should pay, and, yes, if they should be allowed to. That discussion is paused, not finished.
fourth, and related, how we talk about cyber incidents, and how the media reports it, really matters. There’s a public interest in reporting serious problems with cyber security. There’s also a public interest in not spreading fear that fuels a pro-criminal business model. Australia has valuable lessons here in responsible public discussion.
finally, the safe haven problem is the biggest and hardest challenge we face in cyber security. The ability of criminals to hang out in various countries with impunity, especially Russia, is cyber security’s least technical but most pressing and yet intractable problem.
Data breaches still matter, but some way more than others
…and the headline number of records is a terrible and damagingly misleading measure.
This week’s inaugural post focuses on how we think about data breaches and why that matters, and in what ways we need to get better.
When we talk about data loss there is still a tendency to think of all data breaches as being more or less the same, with the differentiator being how many records have been stolen. This is particularly true of broader public discourse, though it still happens a bit in specialist cyber security circles too.
That’s wrong, and it’s a problem.
Many data breaches lead to no direct harm, and the indirect harm they do cause is hard to prove or, sometimes, even to observe. So many data breaches aren’t particularly scary, and it’s right that we don’t scare people about them.
But some data breaches are extremely serious and it’s vital we’re able to distinguish between those that are and those that aren’t so bad.
But we’re not very good at doing so.
How data breaches all came to look the same
A decade or so ago, data breaches with very large and scary headline numbers were the dominant theme in cyber security stories that hit the media and therefore registered in the public consciousness.
The world’s professional class got a jolt as early as 2012 when LinkedIn got popped, losing credentials that eventually turned out to cover more than 160 million users (ten years on, cyber security trainers still tell students to put their email into Troy Hunt’s epically useful data breach notification service haveibeenpwned.com, and many of the positives come from this breach). A year later, Target lost 70 million customer records and 40 million (at least partial) credit and debit card records. Yahoo! trumped even that, with three billion (three billion!) accounts compromised in the same year. Equifax (2017) carried a headline of 147 million affected Americans and more beyond. The dataset scraped from Facebook and leaked in 2021 carried a headline figure of 533 million users. And so on.
But we struggled to relate this to actual harm resulting from these breaches. The ‘technical’ reason for this - if it can be called that - is that in most of these breaches the number of human beings involved in the dataset was huge, but the actual content and therefore value of the data involved about each person was often relatively modest. Put bluntly, your email and home address and date of birth isn’t worth that much to criminals if that’s all they get, even if they get hundreds of millions of such records. They can’t do that much ‘harm’ - directly at least - with this information.
Therefore, ‘headline’ numbers of ‘victims’ were, and remain, a terrible way of talking about data breaches.
It gives no indication of the severity of the harm done, or likely to ensue, to the people whose data was on the stolen list. If it was just a name, email address, physical address and last four digits of a bank card then the risk to the victim is not negligible but it is low, and nothing more than a bit of online financial vigilance is required. Add in a password and you’ve got a wider fraud or impersonation risk given the inevitable reuse of passwords and you need to do something (at least change the password you’ve been using anywhere else you use it). Add in a passport and you’re getting into serious risk of identity fraud and/or the onward sale of high value personal data. Add in a dataset involving myriad sensitive personal details - think mortgage application form or security clearance details, or, of course, your medical history - and some serious damage has been done.
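The escalating tiers just described can be made concrete with a rough sketch. This is a hypothetical illustration, not an established standard or regulatory scale; the tier names and field names are my own:

```python
# Hypothetical severity tiers for a data breach, based on what kinds of
# data were exposed rather than how many records were stolen. The tiers
# and field names are illustrative only, not any official taxonomy.

def breach_severity(fields):
    """Map the kinds of exposed data to a rough severity tier.

    `fields` is a set of strings naming what the stolen records contained.
    Later checks override earlier ones, so the worst item present wins.
    """
    severity = "low"  # name, email, address, last four card digits: vigilance only
    if "password" in fields:
        severity = "moderate"  # reuse risk: change that password everywhere
    if fields & {"passport", "driving_license"}:
        severity = "high"  # identity fraud and onward sale of high-value data
    if fields & {"medical_history", "security_clearance", "mortgage_application"}:
        severity = "severe"  # serious, potentially lasting damage already done
    return severity
```

A scheme like this would still need careful calibration (a password plus a passport is worse than either alone, for instance), but even a crude ordering beats counting records.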
But the way we’ve talked about data breaches doesn’t capture this nuance. In many of the big headline cases (Equifax is a great example) the severity of the breach in terms of the actual data stolen was at the milder end of the scale.
The consequence of this is that people got used to being notified that their data had been lost in a breach. But they then often did not suffer any obvious detriment in the aftermath. Contrary to some opinion in cyber security, people are generally smart and rational, and they reacted accordingly. So many people stopped being all that worried about data breaches.
In a seminal moment in the UK in October 2015, the country went into a blind panic for a weekend about what appeared to be a major breach at the telco TalkTalk, before realising that it wasn’t actually very serious at all, affecting around 157,000 quite old and limited records, not the full, up-to-date accounts of the company’s four million customers. The company was fined nearly the maximum under the pre-GDPR data laws, but that was because of negligent security practices; there was no evidence customers actually suffered any harm.
As a result, public perceptions of data breaches began slowly to change. In many cases, they became no big deal. Despite the constant headlines of spectacular data breaches, for the most part we learned to live with the risk and move on.
Consider this official UK Government advice from 2018 following a breach at Ticketmaster, the online concert and sports tickets service. I declare an interest as both the head of the organisation that produced the guidance at the time, and as a Ticketmaster customer who got a notification. It’s cited here to illustrate a typical, late-2010s UK Government response to a data breach. In essence it says: if you’re affected, it’s not great, but don’t panic. The type of data stored can’t really be used directly to harm or embarrass you, so you needn’t do much (except, as in this case, change your password for this site and anywhere else you use the same one) beyond watching out for suspicious activity.
Or take this live example from the UK. It’s a low-key news story about the breach of a water company in Staffordshire. The company has taken steps to prove that the provision of safe water is unaffected, which is true, and to be expected. Then, on the data breach itself, the company is saying that it’s complicated to assess the damage done by the criminals when they accessed customer data. There is no sense of panic or of any massive harm done to customers.
The subliminal message in both cases is: get on with your life and don’t worry too much.
And this is a good thing. We have enough FUD - fear, uncertainty and doubt - in cyber security without terrifying people even further and undermining trust in our vital digital economy. If people aren’t really at risk, then we should reassure them.
But a downside of our - rightly - grown-up discussion about the damage of data loss is that we’ve lost sight of when it gets really bad. Ten years or so ago we scared people about the risk of very big data losses. We then told people that, most of the time, they weren’t that serious. But we lost track of which ones mattered, and of how to talk about that in a way that helps people and organisations manage risk.
The really bad cases were long ago, or far away
For an example of a data breach with really bad consequences, we can go back to the hack of the Office of Personnel Management in Washington in 2015. The admission by the US Government that the Chinese state had copied the data of more than 20 million Americans who had sought a security clearance in the first fourteen years of this century was - justifiably - a spine-chilling moment for that nation because a security clearance form requires telling the Government pretty much everything about you. The headline number for the Equifax breach may have been more than seven times that of the OPM but the level of detail in each OPM dataset was far, far greater, and it had gone to a hostile state. It remains, in my view, the most strategically significant digital data breach in known history.
In very different examples, the hacks of the hook-up services Ashley Madison (2015) and Adult Friend Finder (2016) caused great distress to many on the list just by virtue of the embarrassment of being on it (and, sadly, at least one suicide has been at least speculatively linked to the Ashley Madison hack).
For a harbinger of the horrors of Medibank, we can look to the Vastaamo compromise in Finland in 2020. Vastaamo was a private provider of psychotherapy. A cyber criminal stole the records of, it seems, some 30,000 patients. As well as demanding payment from the company, the attackers also sent demands to thousands of individual patients, threatening to disclose details of their consultations with their doctors about their mental health problems.
But these cases seemed to be exceptions. The American examples were dated. Events in Finland tend not to get a great deal of attention in places like the US or UK. So, even as Europe, including the UK, hardened regulation of data security with the advent of the General Data Protection Regulation (GDPR), ‘traditional’ data breaches stopped dominating concerns about cyber security.
Even GDPR didn’t really help us understand data harm
Indeed, the problem of assessing harm permeated the new regulatory environment. Before the pandemic, the UK’s Information Commissioner (the country’s data protection regulator) proposed swingeing fines of £183m and £99m on British Airways and Marriott Hotel Group respectively under new GDPR powers. Both incidents had what might be called ‘aggravating factors’ that took them beyond the ‘normal’ data breach: Marriott had retained millions of passport numbers, while in BA’s case 77,000 of the 380,000 records included full bank card details, including the CVV number, which, most unusually in a data breach, made them fully usable by the criminals. They could spend actual money on the cards, not just sell them to other criminals on the dark web or use them as the basis for identity fraud. That’s relatively rare in data breaches, and obviously serious.
Even in the BA case, however, it was hard to find examples of real, direct harm caused by the breach. It was harder still in the Marriott case, even though passport numbers, a valuable criminal commodity, were included in the dataset. Eventually, the fines were reduced to £20m and £18m respectively, but this was explicitly done as a ‘mercy’ decision to help two companies in sectors almost wholly shut down by Covid-19. Had the cases been litigated, which appeared likely before the pandemic, they would no doubt have centred on the companies demanding that the regulator justify the severity of the punishments in terms of actual harm caused.
That would have required a British court to assess the harm done by a data breach. And that would have been a tough question. With no case taking place, we do not have a legal precedent on which to base future planning about how British courts view data harm.
In the absence of any such ruling, how we assess and price data harm is clearly something governments, regulators, the cyber security industry and large data holders need to think about.
Ideally, we’d have spent more time recently thinking through these problems and coming up with solutions. But, set against the ransomware horrors of 2021, with a major pipeline shut in the US, hospital administrative systems in chaos in Ireland, France and elsewhere, fresh food being thrown away in Sweden because it couldn’t be sold, and so many other deeply damaging ransomware cases, all of a sudden data dumps didn’t seem to be so big a deal. And as the West began to worry about the potential - and thankfully this remains potential - uptick in disruptive cyber aggression from Russia in the context of the invasion of Ukraine, data security slipped further down the consciousness of many parts of cyber security’s body politic. In many ways it was rational to begin to downplay data in the hierarchy of cyber risks.
Why what happened in Australia matters
But Australia’s painful experience reminds us that data security as a problem hasn’t gone away. It can cause real damage to an advanced digital society, and as a result needs the attention of the cyber security community and of governments.
Initially, the Optus case looked like it could have been just another data breach with one of those very large headline numbers of victims (estimated at 9.7 million), with data of limited value. But this swiftly changed. What became known as the ‘crucial subset’ - those records with more important identifiers like passports and driving licenses - took longer to work out. Eventually the company disclosed that some 1.2 million valid records of this type, and a further 900,000 expired ones, had been stolen. Such a large-scale breach of important personal identifiers is what turned Optus into more than the usual run-of-the-mill data breach.
Medibank, a few weeks later, required little explanation of its complexity to hit home with the general public. The problem with the theft of nearly ten million medical records being out in the wild was obvious: this was the Vastaamo crisis in Finland at population level scale.
Sadly, comparing Optus and Medibank provides a really good way of thinking about how we assess harm because the headline numbers involved in the dataset are almost identical at 9.7 million each. But the entirety of the Medibank dataset involves personal medical histories, so its impact is much, much worse. One only has to look at the Australian media when the details emerged to see the sort of impact that had in the general population. Many more instances of this and we could have a serious crisis of confidence in the digital economy.
That said, the Australian public reacted calmly and the company put in place what has been mostly regarded as a helpful set of communications to concerned customers about what may or may not be happening to their data. Moreover, and as we shall discuss further in the fourth post of this series, the way in which the breach has been reported and discussed in Australia and beyond has generally been responsible, which has helped limit the damage. But the Medibank affair shows us that data breaches can cause national scale concern and even fear in some parts in the population of a wealthy, highly digitised country.
Where do we go from here?
If Australia’s experience forces us to reprioritise basic data security, what should we do?
The most important thing we can do is really quite a geeky, technocratic thing.
We need to find ways of measuring harm that helps us distinguish between different levels of severity in data breaches.
This matters in getting regulation right.
It matters for the fledgling but important cyber insurance market.
It matters for corporate risk management. Companies need ways of calculating what they need to protect most.
Most of all it matters profoundly for the public understanding of cyber risk. People manage risk better, and react more rationally when problems occur, when they understand what’s going on.
So all of this should matter to those looking after our cyber security.
Where we need to get to is a situation - not just a regulatory regime but an accepted understanding in society at large - where the relative value of different datasets, and therefore the difference in harm done by their compromise, is widely understood.
A situation where we automatically recognise that the compromise of healthcare records is demonstrably more serious than that of a basic customer dataset. Where we take cyber security into account when designing data storage policies and practices, with all these nuances and subtleties in mind.
Ultimately, the destination is one where our reaction to a data breach isn’t determined by the headline number of records compromised. Instead, there’s a general understanding of what happens when data disappears. How often there’s not much to worry about, but that sometimes it’s more serious. And even then we understand that it’s not clear-cut: that even in the worst case scenario it’s unlikely that health records will end up on an easily available website, but that we do need to take steps to make sure we’re not susceptible to extortion, identity fraud and other forms of harm.
This involves a lot of hard, unglamorous but vital work. And it’s another example of how solving cyber security’s major problems sometimes involves doing things that aren’t really about the technical aspects of computer network security.
In short, we need better ways of understanding the price of data loss. A lot of progress could flow from that. And serious data breaches which damage public confidence in the digital economy still matter, even with all the other problems in cyber security.
Next up: Governments and cyber incidents after the events in Australia
Perhaps inevitably, given the obvious public concerns about passport data and especially healthcare records, the Australian Government’s role in the response to both Optus and Medibank has been notably activist. But it’s also been distinctive and innovative in some areas. That is likely to have wider ramifications for how Governments approach not just major cyber incidents, but cyber security strategy as a whole.
That issue - the probable increasing role of Governments in cyber security in the light of Australia’s experience - is the focus of the next post.