Ultimate Risk Question

William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu

Readers in this space are well aware of cybersecurity threats and of the need for affirmative security activities to prevent threats from becoming crises. Risk assessment is part of this agenda because without analysis you cannot know exactly what you are trying to protect yourself from, nor can you rationally prioritize your protective efforts. The two classic elements of risk assessment are severity and probability. Severity asks how bad it will be if a specific thing happens. Probability asks how likely that thing is to occur.

Of the two, probability may be the more challenging, since considering it implies that we can do less, or nothing, to protect against at least some low-probability events. This controverts the "if we can save just one…" type of argument, which in effect suggests that unlimited resources can be applied to every risk, or to whatever your favorite risk is, or to whatever you are being told is the specific risk you must prevent, often by people who are selling prevention. As an extreme example of the role of probability, we do not build our homes to be meteor-proof even though a meteor strike would be catastrophic. The reason is in part that being struck by a meteor is very unlikely. In addition, a meteor-proof home would be prohibitively expensive to build and probably not very pleasant to live in. Similarly, we do not post a police officer to direct traffic at every intersection even though vehicle/pedestrian accidents kill people. And we do not have a nurse at every bedside, or an ambulance waiting on every block. I won't make any comparison here to the hype associated with allegedly hackable medical devices and which, if any, present a real risk that requires real action.
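The severity-times-probability framing can be made concrete in a few lines of code. The sketch below ranks threats by expected impact (severity × probability); the threat names and scores are invented purely for illustration, not actual assessments. Note that the catastrophic-but-improbable meteor lands at the bottom of the list, which is exactly the prioritization argument made above.

```python
# Minimal sketch of severity-x-probability risk scoring.
# The threats and their scores are purely illustrative, not real data.
threats = [
    {"name": "ransomware on file server",    "severity": 9,  "probability": 0.30},
    {"name": "meteor strike on data center", "severity": 10, "probability": 0.000001},
    {"name": "phishing credential theft",    "severity": 6,  "probability": 0.60},
]

# Expected impact = severity x probability; rank highest first.
for t in threats:
    t["risk"] = t["severity"] * t["probability"]
ranked = sorted(threats, key=lambda t: t["risk"], reverse=True)

for t in ranked:
    print(f"{t['name']}: expected impact {t['risk']:.6f}")
```

A real assessment would of course use calibrated scales rather than invented numbers, but the mechanics of the prioritization are the same.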

While various computer network risks are much discussed, the focus is often on protecting the system. What I do not see discussed as much is what we are going to do when the system, in whole or in part, becomes unavailable. A possible exception is the advice to back up data on a regular if not continuous basis, but even here there is a tacit assumption that the backed-up data will be accessible when the primary data is not, using the same system we have always used to view it.
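One way to stress-test that tacit assumption is to verify backups on a path that does not touch the primary system at all. The sketch below (file and directory names are hypothetical placeholders) records a checksum in a manifest at backup time, then later verifies the backup using only the backup copy and the manifest, simulating a restore drill performed while the primary is unavailable.

```python
# Sketch: record a checksum at backup time, then verify the backup
# later using ONLY the backup and its manifest, never the primary.
# File names are hypothetical placeholders.
import hashlib
import json
import pathlib
import shutil
import tempfile

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    primary = root / "records.db"
    primary.write_bytes(b"example record data")

    # Backup step: copy the file and save its checksum in a manifest.
    backup_dir = root / "backup"
    backup_dir.mkdir()
    shutil.copy2(primary, backup_dir / "records.db")
    (backup_dir / "manifest.json").write_text(
        json.dumps({"records.db": sha256(primary)}))

    # Restore drill: pretend the primary is gone and check the
    # backup against the manifest alone.
    manifest = json.loads((backup_dir / "manifest.json").read_text())
    backup_ok = sha256(backup_dir / "records.db") == manifest["records.db"]

print("backup independently verifiable:", backup_ok)
```

The design point is that the verification step reads nothing from the primary, so it still works, and still means something, on the day the primary does not.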

My question, then, is: can your organization function if the network goes down, or if some components become unavailable? More precisely, which of your functions (if any) will remain usable and which will not, and most importantly, what will you do when it happens other than locking the door (assuming you can) and going home? A related question is when and how you will get back to functioning after the system fails. This is disaster planning. For natural disasters, we generally do not rely on preventing the underlying event; instead we work on mitigating the consequences, both in advance and after the disaster occurs.

In the medical device arena, there are devices that can run in standalone mode and others that cannot. Of the latter, some may need only the local network, while others might need web connectivity to function in whole or in part. What will we do when some of these devices cannot be used? And do you know which devices these are? What will you do without access to records, or without the ability to create new records? Can you call or page people? Will alarms be received at remote locations? There may be a generational divide in appreciating these questions. Some will remember when various devices were self-contained, or had their own dedicated server that did not depend on the internet (which did not yet exist as a public resource). That lack of connectivity had its advantages, which are now being undone by the Internet of Things, in which many things are network-dependent and there is the potential for everything to stop working at once. Note that "the cloud" is not a universal answer, because the cloud also relies on computers, servers, and connectivity. Standalone devices and systems also sharply reduce hacking risk, provided you can educate users not to compromise them.
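One concrete way to answer "do you know which devices these are?" is to keep an inventory tagged by connectivity dependence and query it per outage scenario. The sketch below uses invented device names and classifications; the point is the structure of the inventory, not the specific entries.

```python
# Sketch: classify devices by what they need in order to function,
# then answer "what stops working?" for a given outage scope.
# Device names and classifications are hypothetical examples.
STANDALONE, LOCAL_NET, INTERNET = "standalone", "local_network", "internet"

inventory = {
    "infusion pump (standalone mode)": STANDALONE,
    "central telemetry monitor":       LOCAL_NET,
    "cloud-hosted dosing library":     INTERNET,
    "networked alarm notification":    LOCAL_NET,
}

def unusable(outage: str) -> list[str]:
    """Devices that cannot be used under the given outage scope."""
    lost = {
        "internet": {INTERNET},                  # web link down, LAN still up
        "local_network": {INTERNET, LOCAL_NET},  # whole network down
    }[outage]
    return sorted(name for name, need in inventory.items() if need in lost)

print(unusable("internet"))
print(unusable("local_network"))
```

Even a spreadsheet with the same three columns would serve; what matters is being able to produce the "unusable" list before the outage rather than discovering it during one.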

Knowing what will happen is an essential part of creating a contingency plan. I also suggest that there be a specific policy covering extraordinary situations such as unavailability of the computer network. One element of that policy should be that all normal operational policies are void during a crisis. While this may seem self-evident, it serves the purpose of enabling people, within policy, to react to crises without fear of retroactive assertions that they did not follow policy.