Lessons from Down Under's Data Disasters, Pt 2
Many of the problems in cyber security are economic and social, not technical. And digitisation privatises public risk like never before. That's why Governments are increasingly required to step in
Recap
This is the second in a five part series looking at the vast and varied implications of two major data breaches in Australia in the period September to November 2022.
The first post, which can be found here, covered the background to the two breaches at Optus, a major telecoms company, and Medibank, a health insurer which covers nearly 10 million of Australia’s 26 million population. It also discussed how our failure to distinguish between the severity of different types of data breach has bred a degree of complacency about data security.
The remaining posts in this series will cover:
the payment of ransoms in cyber incidents (Part 3);
how we discuss and report cyber harms, and why that matters (Part 4);
the safe haven problem in cyber crime (Part 5).
Part 2: Governments and the cyber security problem
This post covers the role of Government in cyber security.
Over the past decade, Governments have become increasingly active in this space. State operational capabilities have been improved in many countries. Critical infrastructure is increasingly regulated for cyber safety and resilience. Data protection laws abound.
But there’s a lot more to cyber security than this.
Many of the deeply entrenched problems in cyber security are rooted in economic, social and behavioural factors as well as technical and legal ones. Governments have, by and large, steered clear of many of these challenges or confined their interventions to the occasional toe dipped in the water.
A key underlying tension has been where public and privately owned risk collide, whether before or after an incident. This is where recent events in Australia are important.
In dealing with the Optus and Medibank breaches in late 2022, the Australian Government has been strikingly active. It has since signalled its desire for far reaching reform of the way the country does cyber security.
This potentially accelerates a trend of increasingly interventionist Governmental behaviour across the West in recent years. That trend is uneven (and the United States, with its size, constitutional framework and the luxury of having most of its own tech run from its own jurisdiction, will always have its own, often different story).
But the wider trend towards Government activism in cyber security is unmistakable and important. And Australia is going to be a country to watch because of the apparent breadth of its ambitions, and its status as a key Five Eyes cyber power.
The strikingly activist response of the Australian Government
First, it is worth taking a look at what this activist response has involved so far.
It is an age old ritual, as old as cyber security itself and much mocked in the cyber security community, for a victim organisation to describe itself as the target of “an unprecedentedly sophisticated attack”, or variants on those words. True enough, Optus duly used this line at press events on 22 and 23 September.
An essential part of the ritual is that it is not contradicted by Governments or the mainstream press. Cyber security professionals might poke fun at the ‘sophisticated’ narrative on social media, but, by and large, it goes uncontested, feeding a sense that organisations are powerless when faced with hostile cyber activity.
This is where Australia broke with tradition.
On 26 September, Clare O’Neil, Australia’s Minister for Home Affairs and Cyber Security (one of the most senior positions in the Federal Government, and one to which the phrase ‘Cyber Security’ was added for the first time upon Ms O’Neil’s appointment in June) made a direct, forceful and accurate intervention. She said bluntly that the operation was not sophisticated, adding that Australia “should not have a telecommunications provider in this country that has effectively left the window open for data of this nature to be stolen.”
Australia’s Minister for Home Affairs and Cyber Security, Clare O’Neil
Moreover, as the incident developed, Optus came under increasing pressure to organise and pay for replacement passports for those affected by the compromise, some of the pressure coming publicly from the Federal Government. Canberra had no power to compel Optus to do this, but the company eventually announced it would, and agreed a process with the Government (as the issuer of passport documents) to arrange it.
Subsequently, Medibank presented a different challenge. The operation against the company appears to have been much more sophisticated, and the company made no questionable claims about it. There was no issue comparable to replacing passports. But given the extent of public concern about health records being publicised, the Government, and Ms O’Neil specifically, took a very active role in communicating with Australians about who was responsible, what course the incident was likely to take, what risks people were exposed to, and what the Government would do in response (some of these points are explored later in this series).
Finally, taking both cases together, Ms O’Neil and her colleagues have signalled a willingness to revamp Australia’s national management of cyber security and develop a new strategy to deal with the problem. On 8 December, she announced a process to develop a new cyber security strategy (full disclosure: I have accepted an invitation to be part of a group of unpaid international advisers supporting an Australian-led expert panel in developing this process. The Australian Government does not, and does not seek to, approve or influence anything I write on this or any other topic).
The terms of reference of this strategic review are, as is the way with these things, reasonably short and therefore open to interpretation. However, to date they imply a far reaching review that goes well beyond just ‘fighting the last war’ by confining the process to a review of data laws.
The announcement speaks - intriguingly - of “increasing whole-of-nation cyber security efforts to protect Australians and our economy”.
This leaves in scope pretty much anything. But the direct reference to economic interests implies it will go beyond a narrow interpretation of national security or public protection interests.
Why and how Governments are getting involved
Why has it come to this?
In short, because we’ve struggled, mostly in vain, to delineate private from public risk in cyber security. And incentives are working against good cyber security in too many ways. And we’re suffering as a result.
In the middle part of the last decade, when setting up and then running the UK’s National Cyber Security Strategy, my team and I wrestled for months - years - with the question of who we should help, and when. Part of the challenge was to avoid, in the words of a senior Conservative Minister to me privately, “nationalising cyber security”. So we started a big programme of work mapping out when we’d intervene, based on separating private and public risk.
Eventually we gave up.
It was just too hard. In a sophisticated digital economy, the interplay between privately owned damage and public harm was ubiquitous. Should we offer to help a company if it supplied a key public service, but not if it didn’t? If passports were lost, because they are government documents, but not if bank cards were? If the attacker was a nation state, but not a criminal? If the victim organisation’s cyber security was strong, but not if it was faulty? We couldn’t answer any of these questions conclusively.
An illustration in the UK from earlier this year proves the point. In August, a private sector software company suffered a crippling ransomware attack. Among its customers was the publicly run National Health Service (details are here). As a result, the health service’s helpline, and some mental health services, were badly affected. The National Cyber Security Centre confirmed it was assisting. When I was asked live on radio by a former senior politician why the state should help a private company in this sort of situation, my reply was simple: public health provision was being disrupted.
This might not be a neat delineation of risk. It might even be effectively subsidising the consequences of weak cyber security practice in the company. But it is what the British public would expect and what the health service needed at that moment in time.
Therefore, at the NCSC, our triage efforts rationed our interventions based on an assessment of the harm to the nation, whether it originated in Government or the private sector.
This is very difficult to judge. But, in our view, it made for a more sensible set of interventions than trying to work out whether or not the Government should get involved based on who owned or operated the network.
It’s the consequences that matter.
The Optus and Medibank breaches show the same point.
Neither is a first order national security crisis. Reasonable people can debate and differ on how damaging each is.
But both involve clear public risk and potential harm in some way.
Medibank is the more obvious and serious example. Medical records are deeply sensitive, and the company held records covering not far off two fifths of the national population. In such a scenario it is inevitable that the population will look to its government for a narrative about what is likely to happen, including what risks they face. They will look to Government to do what it can to prevent wholesale disclosure of the dataset. And they will look to Government for the pursuit of the guilty and the prevention of a recurrence of such a data disaster on a national scale.
Optus is a more subtle example. There is a debate to be had about whether Governments should weigh in publicly on whether or not an attack was sophisticated (though for what it’s worth, I would argue that Canberra’s leaders were right and courageous to do so in this case).
But the public interest is obviously engaged in the part of the dataset that involved passports and driving licences. These are two of the primary identifiers issued by the state. When, as the previous post in this series set out, 1.2 million of the 9.7 million compromised records included current identity documents (and a further 900,000 expired ones), it falls to the state to assess the risk of harm arising from this, whether the documents are still usable, and, if not, whether any special process needs to be established to replace them. Some might take the view that this is not a particularly serious issue, and that might be correct. But it still falls to the state to make the assessment.
Therefore, in both the Optus and Medibank cases, privately held risk materialised in a way that engaged the wider public interest. This is increasingly inevitable in advanced digital economies. And that in turn leads to a requirement for the state in many cases to take a leading role in incident response.
One of the reasons the UK’s National Cyber Security Centre was set up was the breach at the telco TalkTalk in October 2015. This was the first time a cyber breach had led the British national news. The case is instructive because, over time, it emerged that the breach was considerably less serious than initially thought. The early narrative around the case assumed, as always, a sophisticated attack that left the company’s four million customers at direct risk of serious financial fraud. In fact, a basic hack had accessed the dated and limited records of around 150,000 customers. But it took several days for this to emerge, and the Government stayed mostly silent amidst the (unnecessary) panic.
Though no national security risks arose in the TalkTalk case, the Government at the time later concluded that the state needed a national cyber security incident management function akin to that for terrorism, floods or public health outbreaks. What was needed, they argued, was a system in which someone in authority would give an official assessment of what had happened, who was at risk, from what, and how they might go about managing that risk. This became one of the core functions of the NCSC.
The UK’s more activist approach to managing incidents was one of a number of significant developments in state involvement in cyber security in the West over the past decade or so. Others have broadly fallen into three categories.
The first, and most extensive, is regulation of critical national infrastructure (CNI). This is unsurprising given what is at stake. Moreover, it is where the privatisation of serious national security risk is most acute (for more on that, see here). Australia is among several countries to have acquired regulatory powers over cyber security in CNI. The UK has recently taken powers for telecommunications akin to those already in place for financial services. The EU’s two Network Information Systems (NIS) Directives of 2016 and 2022 set out sweeping requirements for a range of critical sectors. The problem here is that critical infrastructure is very difficult to define, and what we regard as critical changes all the time.
A second, and growing, area is in the safety of technology itself. The EU (through its 2019 Cybersecurity Act) and the UK (through the just enacted Product Security and Telecommunications Infrastructure Act) have followed Singapore in introducing detailed regulation of Internet of Things (IoT) products. This takes advantage of the change in the internet economy represented by IoT: instead of web-based services where the price of entry is data, these are products and services that can be more easily evaluated and, therefore, where security standards can be more easily specified. These reforms are hugely important, but they do not address long-standing defects in network security.
The third is the trickiest and most underdeveloped but seeks to address the limitations of the existing approaches: trying to use the power and influence of Government to improve general network security across all sectors of the economy.
That is where the reference to the ‘whole of nation’ approach in Australia’s strategic review is so intriguing, and it’s one that presents a plethora of opportunities and risks.
Governments and cyber security: opportunities and risks
The great conundrum of Western cyber security over the years is why so much money has been spent on a clearly recognised problem, and yet so many areas of fundamental difficulty remain.
It is hardly new to talk of market failure in cyber security, but it’s a huge part of a difficult story. There are some areas where the ordinary functioning of market economics has led to the development of powerful capabilities to combat cyber security threats - threat intelligence is an obvious example. But there are many other examples of weak security practices and enduring digital pollutants which illustrate the problem.
Here are some of the ways in which the state can shape a nation’s cyber security posture and address market failures. They are not specific to Australia or its current strategy development process, nor to the UK or anywhere else. They are instead possible areas in which any Government might consider what its appropriate policy is:
First, setting the national risk appetite. In less technocratic language, this means deciding what a country, its companies and its citizens need to care about. Key to this is understanding harm. How much should a country care about data breaches? Which ones? Which threat actors matter more than others? Which aspects of the problem can only Government address?
Second is the cyber security environment for businesses. What risks can businesses reasonably be expected to manage? Once that’s decided, how are the rules to be implemented? Does Government have a role in nudging corporate governance rules towards incentivising better cyber security? Why hasn’t cyber insurance - insurance being a regulated industry - worked to anything like its full potential? Can the Government use its convening power to help fix these problems? And if not, does further regulation become the default?
Third, the regulatory framework. If the answer, at least in part, is more law, then the key is a balanced set of regulations that seek to take care of the totality of the problem. Two examples: first, what we think of as critical infrastructure changes all the time; and second, if data protection is the only legal obligation, organisations won’t be incentivised to protect against more disruptive threats.
Fourth, the Government as an active defender. The Government can harden its own networks by direct action, not just by persuasion and/or law. That not only shows leadership by example, but also takes away a large part of the national attack surface, given the size of the state in modern economies.
Fifth, direct intervention in areas of clear market failure. If no-one has a commercial incentive to take down malicious websites, the Government can try. If there are ways to block malicious activity automatically, it might be within the Government’s power to do so. (Some initiatives of this type have been tried by the UK in the NCSC’s Active Cyber Defence programme.)
Finally, the deterrence and pursuit of wrongdoers. A key role of the state is to prevent and deter harm against its citizens and punish those who inflict it. This becomes harder with cyber attacks because, for the first time in human history, large-scale, persistent harm can be initiated without the adversary ever setting foot on national territory. In a domain where impunity is easier than before, to what extent can the state reasonably be expected to thwart malevolent actors? To what extent can taxpaying businesses and citizens rely on such protections?
Inherent in these questions are many of the risks and challenges for Governments:
The UK Minister who worried about ‘nationalising’ cyber security might not find himself reassured by this list. Governments need to navigate the obvious moral hazard associated with a more activist approach. But, arguably, inactivity is worse.
Getting the regulatory posture right is hard, and the consequences of getting it wrong can be severe. One striking feature of the Irish healthcare crisis of 2021, when most of the state’s healthcare commissioning system was taken offline with severe consequences for patient care, was that the main law incentivising the Health Service Executive to take cyber security seriously was the General Data Protection Regulation (GDPR). This had skewed incentives before the crisis: however serious a loss of health data is, it is not as serious as losing access to the system that schedules cancer consultations. But the regulations incentivised data protection, not service continuity.
Bringing the business community along with Government reforms is essential. It can be done - the new UK telecommunications security act, for example, was brought in with industry encouragement and input. However, GDPR provides an opposite case study. In the UK, it led to a temporary chilling effect in Government cooperation with business over data breaches: the Chief Information Security Officer (CISO) was replaced by the General Counsel in key operational meetings with Government agencies. As a result, the NCSC came together with the data protection regulator, publishing a memorandum designed to provide regulatory incentives to victim organisations to work with the Government to help manage the problem.
Finally, greater Government involvement increases some of the risks to the Government itself. If stiff penalties are inflicted on the private sector for cyber security failures, it is reasonable to expect commensurate accountability in Government, and the challenges in protecting networks are no less acute in the public sector. Moreover, some of the tensions in public policy become more acute; for example around Governments’ real and perceived stance around end-to-end encryption.
At the heart of realising the opportunities and managing the risks is changing the way in which incentives work. Incentives have been broken in cyber security for some time. Hope for the best has been a perfectly rational strategy for many organisations. That has to change.
Here, a key choice for countries is the balance between compulsion at one extreme and exhortation at the other. This will vary according to local circumstance, governing systems, and the political preferences of those in power. The United States is likely to remain at the more exhortatory end of the spectrum for all sorts of reasons: the cultural and constitutional reluctance to compel businesses; the difficulty of passing effective regulation through Congress; the location of many tech and cyber security giants within the United States; and the sheer power and scale of America’s state cyber security agencies.
Yet, all that said, the underlying diagnosis in this article of more Government activism does not, to me anyway, seem out of step with some of the thinking currently driving US policy. One only has to look at the brilliant article by President Biden’s National Cyber Director, Chris Inglis, in February of this year arguing for a new Cyber Social Contract to see how the difficulty in delineating risk between the private and public spheres is at the heart of the Administration’s considerable efforts to ramp up partnership between government and the private sector in cyber security.
In any case, most countries, even the US’s close Five Eyes partners like the UK and Australia, do not have any of the specific factors driving the US’s less directly interventionist approach. So it will not be surprising if most of the US’s allies end up taking a more direct role, including through legislation, than America does. Whilst it will be some time before Australia’s policy review process is completed, it could be an early indicator of an important trend.
Conclusion
Modern highly digitised societies share cyber security risk between the private and public sectors. It is often impossible to work out exactly where private risk ends and public risk starts.
Capable Governments can develop their own tools to take care of their side of the bargain. But a modern Government also has a key role in shaping the incentives of the private sector to do its part.
All this is taking the state beyond its traditional areas of intervention in cyber security - like critical infrastructure, data protection and product regulation - and into a more complex area of incentives, economics, general corporate regulation and so on.
And that leads to some very important public policy challenges.
Coming up next in the series
One very specific policy challenge is the vexed question of the payment of ransoms. This has been a feature of many of the most serious cyber security incidents of the 2020s, and has prompted some of the most heated debates within the cyber security community of recent times.
This question will be examined in Part 3 of this series. It will be published on Tuesday 17 January, following the Christmas and New Year break.
In the meantime, there may be one or two other posts on other subjects on this Substack page.