Data breaches grab headlines on a daily basis and arise from any number of scenarios. One question that is not necessarily examined closely (at least in news articles) is whether encryption was in place and, if it was, why the encryption did not prevent the breach. That question also does not get into the findings in a number of resolutions from the HHS Office for Civil Rights (OCR) in which a lack of appropriately implemented encryption was part of the basis for a penalty.
Some HIPAA Definitions
Before diving into encryption specifically, it is helpful to remember how a breach is defined by HIPAA. Under the breach notification rule (45 CFR 164.402), a breach is:
the acquisition, access, use, or disclosure of protected health information in a manner not permitted under subpart E [the Privacy Rule] of this part which compromises the security or privacy of the protected health information.
The rule goes on to state that a breach of any unsecured protected health information is subject to notification. What is unsecured protected health information (PHI) though? The regulation again offers a definition, stating that unsecured PHI is:
protected health information that is not rendered unusable, unreadable, or indecipherable to unauthorized persons through the use of a technology or methodology specified by the Secretary in the guidance issued under section 13402(h)(2) of Public Law 111-5.
Implicitly, that means unsecured PHI can be viewed, accessed, used, or otherwise interacted with by an unauthorized individual. That is where encryption comes into the picture. Encryption is effectively the only tool that can render information unusable, unreadable, and indecipherable, at least so long as an unauthorized individual does not have the key and cannot get into a system at a point when the system has decrypted the data. The explanations included with the 2013 HIPAA Omnibus Rule that finalized the breach notification rule reference destruction as the only other means of rendering PHI unusable, unreadable, and indecipherable.
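To make the "unusable, unreadable, or indecipherable" idea concrete, the toy sketch below encrypts a hypothetical record with a keystream derived from a secret key. This is an illustration only; real systems should use vetted ciphers such as AES through a maintained library, not this construction, and the record contents are made up:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: hash the key together with a block counter.
    # Illustrative only -- not a production cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)  # held only by the covered entity
phi = b"hypothetical patient record"
ciphertext = xor_cipher(key, phi)

# Without the key, the stored bytes are unusable, unreadable, indecipherable.
assert ciphertext != phi
# With the key, the original PHI is recoverable.
assert xor_cipher(key, ciphertext) == phi
```

The sketch mirrors the regulatory logic: whoever holds only the ciphertext holds nothing useful, while whoever holds the key (or a system that decrypts on their behalf) holds the PHI itself.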
The HIPAA security rule includes encryption as an addressable element, which means that implementation is not necessarily required exactly as set out in the rule. While encryption may have been somewhat difficult or expensive when the security rule initially went into effect, that is no longer the case. In fact, many popular computers, smartphones, and other mobile devices now encrypt data automatically.
How does the encryption work though? While encryption can be a default setting (an iPhone, for example, has built-in encryption for its storage), the scope and timing of the encryption can change. Specifically, many devices decrypt data once the device has been unlocked with an appropriate password or other login credential and is in active use. While technology exists to keep data encrypted even while the data are in use, it is not yet widespread or easily adopted.
Another facet of encryption is that an authorized user, or someone pretending to be an authorized user, can validly disable it. That could be because the user holds the key, or simply because logging in as an authorized user grants use of the entire system or network.
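The device behavior described above can be sketched as a toy model: data are stored only as ciphertext, and the device decrypts transparently while an authorized user is logged in. The `Device` class, its key handling, and the XOR keystream cipher are all simplified assumptions for illustration, not how any real operating system implements disk encryption:

```python
import hashlib
import secrets

def _stream(key: bytes, n: bytes) -> bytes:
    # Toy keystream derived from the key plus a counter (illustrative only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def _xor(key: bytes, data: bytes) -> bytes:
    # Symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, _stream(key, len(data))))

class Device:
    """Hypothetical device that stores data encrypted at rest and
    transparently decrypts only while an authorized user is logged in."""

    def __init__(self, phi: bytes, key: bytes):
        self._key = key
        self._at_rest = _xor(key, phi)  # stored only as ciphertext
        self.logged_in = False

    def login(self, key: bytes) -> bool:
        self.logged_in = key == self._key
        return self.logged_in

    def read(self) -> bytes:
        if self.logged_in:
            # In active use, the device decrypts on the fly, so anyone
            # holding the device in this state sees plaintext PHI.
            return _xor(self._key, self._at_rest)
        return self._at_rest  # locked: only ciphertext is exposed

key = secrets.token_bytes(32)
dev = Device(b"hypothetical PHI record", key)
assert dev.read() != b"hypothetical PHI record"  # stolen while locked: secured
dev.login(key)
assert dev.read() == b"hypothetical PHI record"  # stolen while unlocked: exposed
```

The two assertions at the end foreshadow the first two OCR scenarios below: the same device with the same encryption yields opposite breach outcomes depending on its state when an unauthorized individual gets hold of it.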
Now that some ground rules have been established around the high-level definitions of a breach and of encryption, some scenarios can flesh out the nuances of whether a breach occurred. Several of the scenarios come from a ransomware fact sheet prepared by the Office for Civil Rights as a reminder that encryption is not necessarily the end of the story when determining whether a breach occurred.
- Encrypted Device Remains Encrypted – As identified by OCR, sometimes a device may be encrypted when in a powered down mode or otherwise not logged in. If an unauthorized individual obtains the device in this state, then the data are likely encrypted with no easy means of breaking the encryption. In those circumstances, the PHI is secured and no breach occurred.
- Encrypted Device Is On and Logged In – Some devices automatically decrypt data while a user is logged in and the device is powered on. If an unauthorized user takes the device in that state, then the PHI is unsecured because it is readily accessible and can be taken in any number of ways. This scenario presents a fairly easy determination of a breach.
- Encrypted Device Infected With Ransomware – A device could be well encrypted, but ransomware or other malware is introduced while the device is in use. This scenario is where things begin to get interesting. As described above, if the device decrypts data when in use, then the infection could enable access to PHI at a point in time when the PHI is not secure. If the PHI is not secure when accessed or potentially accessible, then it becomes more likely that a breach occurred. The key is to determine the status of the PHI at the moment the unauthorized individual gains access.
- Encrypted Data are Further Encrypted – Taking a twist on the earlier scenarios, it is possible for already-encrypted data to be encrypted a second time by ransomware, resulting in an inability to access the PHI. Under the guidance from OCR, the government presumes that a breach occurred in this instance because it views the ransomware encryption as the unauthorized individual taking control of the PHI, which is in turn interpreted as having acquired the PHI. The presumption of a breach can be rebutted through a risk assessment though. If the PHI is double encrypted, with the first layer being the encryption applied by the entity subject to HIPAA, then determining a low probability of compromise is a potentially fair outcome. The specific facts of each scenario will dictate the exact answer though.
How Much Encryption is Needed?
A recent appellate decision about a HIPAA fine introduces a new question about encryption: how in-depth and comprehensive do organizations need to be in putting encryption into place? In the decision involving MD Anderson Cancer Center, the Fifth Circuit Court of Appeals determined that the HIPAA security rule calls only for a mechanism for encryption. The court said that does not mean having the best encryption or ensuring that every device is actively using the encryption. From that perspective, the court seemed to call for reasonable efforts.
If only reasonable efforts are needed, then that could provide a new basis for organizations to push back during an investigation following a breach notification. Time will tell whether the argument will hold weight or whether OCR will modify the requirement, but relying on the interpretation could be a risky proposition.
What Next for Encryption?
Given the clear benefits of using encryption, the question becomes how it can be better implemented. Answering that question may not be easy, and it may not even be the right question to ask. As the scenarios demonstrate, encryption can be sound while the surrounding practices and processes are not. From that perspective, encryption alone will not solve every problem. Instead, encryption should be approached holistically as one part of overall security.
This article was originally published on The Pulse blog and is republished here with permission.