Cybersecurity and Privacy in the Age of COVID: The More Things Change, the More They Stay the Same
This Featured Article is co-published with The Governance Institute.
- January 22, 2021
- Nathan A. Kottkamp, Williams Mullen

The advent of COVID has resulted in myriad revolutionary shifts in health care technology. Telehealth is exploding. Connected devices are widely available and are rapidly expanding in their range of applications. More people are working from home. Data is moving more and more, between providers and into the “cloud.” And artificial intelligence is improving by the moment. Layered on top of these considerations is the fundamental issue that health information remains highly valuable. Therefore, health information will always be a target of bad actors, but it is also at increasing risk simply as a function of its wider use and all the implications of having more devices with more data in more places doing more things for more patients. Responding to the threats and inherent risks of this environment will almost certainly be a team effort.
Despite the expansion in use and scope of health care technology, in many respects the challenges of cybersecurity have not fundamentally changed. Instead, they are being applied to a new utilization paradigm with a dramatically greater volume of data at issue. As a result, a key response to this evolution is to remain ever mindful of the core foundations of privacy and security. To support this generally, and with respect to governance specifically, it is essential that everyone in organizational leadership be aware of these considerations. This is true, and perhaps even more important, when leadership does not know much about technology. Indeed, the governing body of any organization need not be involved in the operational aspects of cybersecurity, but it should be certain that the issues are being well addressed.
Federal Regulation and Guidance That Increase Utilization, Access, and Risk
The 21st Century Cures Act[1] introduced Information Blocking and Interoperability Rules that are intended to enhance the flow of health information from provider to provider. The benefits of such information sharing are obvious, but so are the risks. Among other things, from a cybersecurity perspective, greater data flow creates a risk of misdirected information, expanded hacking vulnerability, and expanded repercussions of file corruption. With respect to interoperability, there are inherent risks with application programming interfaces (APIs) that enable health information systems to “talk” with one another. Beyond the provider-to-provider risks, there are trickle-down security and privacy implications as more health information is on more devices held by more people.
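To make the interoperability risk concrete, consider a minimal sketch, in Python, of the kind of standards-based API call the Cures Act rules contemplate. The endpoint, token, and helper function here are hypothetical illustrations (the request pattern follows the public FHIR specification), not any particular vendor’s system; the point is how little stands between a bearer token and a patient record.

```python
import requests

# Hypothetical FHIR server and OAuth bearer token -- illustrative only.
FHIR_BASE = "https://ehr.example-health.org/fhir"
TOKEN = "REPLACE_WITH_REAL_TOKEN"  # a leaked token grants its full scope

def fetch_patient(patient_id: str) -> dict:
    """Pull a single Patient resource over a standards-based API.

    Every app holding a token like this represents a new copy of the
    data and a new potential point of compromise.
    """
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=10,  # fail fast rather than hang on a misdirected endpoint
    )
    resp.raise_for_status()  # surface rejected or misrouted requests
    return resp.json()
```

Each such integration multiplies the places where token scope, token lifetime, and transport security must be verified, which is why API inventories belong in the Security Rule Risk Assessment discussed below.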
The U.S. Department of Health and Human Services Office for Civil Rights’ (OCR) recent Notice of Proposed Rulemaking[2] (NPRM), which proposes a variety of revisions to the Health Insurance Portability and Accountability Act (HIPAA) regulations, would support the 21st Century Cures Act initiatives. Furthermore, in support of the OCR’s vigorously enforced Patient Right of Access initiative,[3] the NPRM recommends additional access benefits for patients. Once again, broader access results in more risk on numerous fronts.
Separately, the U.S. Food and Drug Administration (FDA) recently released an “Artificial Intelligence/Machine Learning Action Plan.”[4] Among its five action items, the FDA included “fostering a patient-centered approach.” Therefore, aside from the inherent security risks of more health information in larger data pools, as is necessary for artificial intelligence, system designers will need to consider both privacy and accessibility of health information in the design of their systems. This guidance is separate from existing initiatives in support of drug and device tracking, both of which present security risks.[5]
Remarkably, despite all the advancements in technology over the last two decades, the HIPAA Security Rule Risk Assessment regulations have remained untouched since their inception in 2003.[6] In many regards, this is not a surprise given that the Security Rule Risk Assessment framework is expressly designed to be flexible[7] and sets forth a broad array of security standards, but it generally does not require any specific method of compliance. This approach sits in contrast to the rest of the HIPAA regulations, specifically the Privacy and Breach Notification Rules, which are generally very prescriptive. Unfortunately, the lack of regulatory revisions and the absence of a specific update to timing requirements has resulted in many Covered Entities and Business Associates operating with outdated and otherwise deficient Risk Assessments. It is axiomatic, of course, that if an entity does not know the types and locations of its health information, the entity cannot appropriately protect that information. As a result, an entity is not only at risk of a HIPAA violation but also of a real-world data disaster.
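The axiom that an entity cannot protect what it has not inventoried lends itself to a simple illustration. The Python sketch below shows one hypothetical shape a PHI inventory record might take as the raw input to a Risk Assessment; the fields and entries are assumptions for illustration, not a regulatory requirement.

```python
from dataclasses import dataclass

@dataclass
class PhiAsset:
    """One entry in a PHI inventory -- the raw input to a Risk Assessment."""
    system: str             # e.g., "EHR", "billing portal", "fax-to-email"
    location: str           # e.g., "on-prem server", "vendor cloud", "laptops"
    data_types: list[str]   # e.g., ["demographics", "clinical notes"]
    encrypted_at_rest: bool
    encrypted_in_transit: bool

inventory = [
    PhiAsset("EHR", "vendor cloud", ["demographics", "clinical notes"],
             encrypted_at_rest=True, encrypted_in_transit=True),
    PhiAsset("fax-to-email", "staff inboxes", ["referrals"],
             encrypted_at_rest=False, encrypted_in_transit=False),
]

# First-pass risk screen: any unencrypted store of PHI is a finding.
findings = [asset for asset in inventory
            if not (asset.encrypted_at_rest and asset.encrypted_in_transit)]
```

An entity that cannot populate a list like this does not know where its health information lives, which is precisely the deficiency described above.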
People Are as Much a Threat as Any Technology
Regardless of the quality of technical cybersecurity features, people pose one of the largest threats to security. Although malicious insider threats abound and major breach events tend to dominate the media’s attention, unintentional actions by workforce members are likely to be far more common and can have catastrophic effects. Indeed, carelessness, laziness, and human error are huge and persistent threats, with limited technological remedies. Therefore, technology developers and organizations need to weigh and balance things like password protocols (e.g., single sign-on is easy, but the implications of a compromise can be enormous, whereas multiple passwords increase security but may create operational friction by increasing the complexity of the system) and whether email systems have screening tools to reduce the risk that protected or otherwise sensitive information is externally disclosed without being encrypted.
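As a rough illustration of the email-screening idea, the following Python sketch flags outbound external messages that appear to contain sensitive identifiers so they can be encrypted or held for review. The patterns, domain, and policy are illustrative assumptions; a real data-loss-prevention tool would use a far richer rule set.

```python
import re

# Illustrative patterns only -- real screening tools also inspect
# attachments, names, diagnosis codes, and more.
SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def requires_encryption(body: str, recipient_domain: str,
                        internal_domain: str = "example-health.org") -> bool:
    """Flag external messages that appear to contain sensitive identifiers."""
    if recipient_domain.lower() == internal_domain:
        return False  # internal mail is governed by other controls
    return any(p.search(body) for p in SENSITIVE_PATTERNS.values())

# Usage: encrypt or quarantine instead of sending in the clear.
if requires_encryption("Patient SSN 123-45-6789, see chart.", "gmail.com"):
    print("Route through encrypted channel or hold for review.")
```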
Another of the pervasive human threats is hubris. In two recent examples observed by the author, hubris was to blame for cybersecurity incidents. In one case, the organization’s Chief Information Officer was extremely skilled with yesterday’s cybersecurity practices, and the failure to update the organization’s protocols resulted in a nearly predictable breach. Specifically, the organization never updated its forgotten-password feature from the classic “security” measure that relies on things like mother’s maiden name, pet’s name, and other “challenge” questions. Not long ago, those measures were adequate and appropriate, but now bad actors need only do some quick searching on social media and genealogy websites to defeat those outdated safeguards and obtain access to the account. In the other case, a client was surprised to learn that it had suffered three separate breaches in a six-month period; its outsourced tech support team did not report the breaches because it was convinced it could “fix” them before management found out, which, of course, it could not.
As patients access more of their information from a broader array of devices, the risks of compromised data inherently increase. Even though these patient-side risks are largely out of providers’ hands, providers and organizations should consider ways to encourage patients to be mindful of how they access their information and where they subsequently store it. In addition to increasing security overall, these efforts can have the secondary benefit of bolstering patient relationships and reducing the risk that providers will be blamed for patient errors.
Ultimately, the best way to manage all of these issues is to create and nurture a culture of compliance. A solid cybersecurity program needs to eliminate weak links, anticipate fundamental human behaviors/errors, and be firmly supported by the leadership of the organization.
Telehealth—Compressing Years of Organic Growth into a Few Months of Trial by Fire
Without a doubt, the COVID pandemic triggered extraordinary growth in the use of telehealth. It has been said, for example, that 2020 compressed several years of telehealth growth into just a few weeks. Undoubtedly, telehealth applications were already on the rise, but as a result of the pivoting required by the pandemic, it is as if telehealth utilization went from 10x to 100x with unprecedented speed.
The OCR’s exercise of enforcement discretion regarding popular telecommunication applications that are not (necessarily) HIPAA compliant, such as Zoom, Skype, and Teams, has furthered telehealth growth.[8] Specifically, the OCR announced early in the COVID pandemic that it would not pursue enforcement actions against entities that used these platforms. It remains to be seen whether OCR will make its current position permanent, but it seems reasonable to assume that if industry-wide experience since the beginning of the pandemic reveals no major security issues with these widely available applications, we may see continued acceptance of these platforms by OCR. In the meantime, providers should explore contingency plans because OCR’s enforcement discretion could be rescinded at any point. Regardless of what particular platform(s) are used for telehealth, providers should consider these applications from a HIPAA Security Rule Risk Assessment perspective and account for the risks associated with them.
Separate from the technical issues with cybersecurity for telehealth, providers need to avoid the temptation to over-rely on telehealth technology while ignoring other aspects of risk management and regulatory compliance. For example, providers need to be mindful of the limitations of the cameras used by patients and the risk of missing movements that fall outside the viewing frame (e.g., involuntary tics or anxious fidgeting). Furthermore, under current regulatory structures, each state maintains its own telehealth rules. Therefore, expanded telehealth practices may increase regulatory risks as well (e.g., a physician may unwittingly violate a telehealth law by providing telehealth services while traveling in a state in which he/she does not have a license).
Connected Devices—The Mixed Blessings of the Internet of Things
There has been extraordinary growth in connected consumer devices, particularly “smart watches” that can monitor pulse, sleep patterns, falls, and more. While many of these features are simply for personal use, providers may also use them for clinical purposes. Additionally, there is an ever-growing array of true “medical devices,” including tools that remotely measure blood sugar levels and deliver insulin, connected pacemakers that provide cardiac measurements and corrections, and ingestible sensors, with more of these types of devices being introduced all the time. While these technological advancements can be highly beneficial, particularly when it comes to treatment adherence and with patients who have challenges accessing health care services in person, they present some of the classic risks of all things digital: privacy and security.
On the privacy front, there are significant issues of who will have access to the data collected from devices and what they may do with it. On the security front, there are issues of both accessibility and functionality. Specifically, a key challenge for any of these devices is ensuring that only the patient and authorized providers have access to the device. Additionally, the more vital the device, the more critical the risk. To illustrate this point, consider the implications of a compromised pacemaker: “I’ve hacked your heart, and I want 100 bitcoin not to shut it down.” Obviously, this type of breach will produce a much greater response than discovering that someone’s step-count or pulse monitor has been hacked.
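To sketch what restricting device access can look like at the protocol level, the following Python example has a device act only on commands carrying a valid message-authentication tag. It is a deliberate simplification under stated assumptions (a single shared key and a hypothetical command name); real devices would use provisioned per-device keys and a vetted protocol.

```python
import hashlib
import hmac

# Illustrative shared secret -- never a hard-coded constant in practice.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def sign_command(command: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Compute the authentication tag an authorized controller attaches."""
    return hmac.new(key, command, hashlib.sha256).digest()

def execute_if_authentic(command: bytes, tag: bytes) -> bool:
    """Act only on commands bearing a valid tag.

    compare_digest avoids timing side channels; an unauthenticated
    b"set_pacing_rate:90" is simply dropped.
    """
    if not hmac.compare_digest(sign_command(command), tag):
        return False  # reject a possibly spoofed or tampered command
    # ... actuate the device here ...
    return True
```

The design point is that authentication sits in front of actuation, not behind it; a device that checks authorization only after acting has already lost.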
The timing and nature of security within the design process can also have significant implications. The most common design methods entail cybersecurity “by design” and its opposite, cybersecurity as an “add-on” feature. With the latter method, there is a risk that a particular technology will be developed with a myopic focus on functionality, with cybersecurity features folded in as a last step rather than being part of the fundamental architecture of the devices. The chances of security failures with the add-on approach are almost certain to be greater.
Finally, consideration needs to be given to failsafe features, which often revolve around the software platforms used to operate the device. Using open source software (OSS) enables swifter development and the prospect of fewer glitches, because OSS is supported by a large community of similarly situated developers with a shared interest in glitch-free operation and high levels of security. Aside from being free, the use of OSS also affords the possibility of drawing on broad resources to correct problems. Specifically, benevolent hackers have come to the rescue of other programmers to help fix glitches that could have major health implications if they went unfixed. Yet a key downside of using OSS is that if a vulnerability is discovered, all devices using that software are at risk; a relatively minor security issue could affect a huge array of devices. By contrast, the use of proprietary software reduces the risk of being a hacking target (i.e., it is not worth the bother of trying to hack a one-off), but it can prevent or dramatically impede the ability to harness the technology community in an emergency (i.e., there may be no way to enlist beneficial hackers because the medical device is developed on a unique platform).
Remote Work Arrangements—Great for Convenience, Not So Great for Data Governance
As with telehealth, the COVID pandemic has triggered an extremely swift shift to remote working arrangements. It is not clear, however, that cybersecurity practices have kept up with the associated changes. For example, the proliferation of smartphone apps and the use of home computers for remote work substantially increase the challenges of basic data management. Furthermore, the proliferation of data on various devices, including personal devices, inherently complicates data retention practices, as well as the protocols necessary for terminating access and returning or deleting information in the event of separation for any reason. These issues demand a feedback loop that enables organizations to respond swiftly to changes in the market and in the practical application of technology. In other words, today’s rapidly evolving technology environment demands constant vigilance and a rejection of any notion of “set it and forget it.”
Faxing—The Health Care Industry’s Fantasy Land
The health care industry has remained one of the last holdouts in the use of fax machines. Among other things, a key reason for this is that HIPAA provided the equivalent of an exception from the Security Rule for fax machines, and the rule has yet to be updated. The fax “rule” derives from the previous paradigm in which faxes were conveyed over landlines and the transmission data disappeared into the ether upon delivery. Now, however, faxes are increasingly transmitted online, and copies are stored both at the point of sending and the point of receipt. Nevertheless, this author’s experience has shown that many providers have not incorporated the new era of fax technology into their Security Rule Risk Assessments because they continue to hold on to the fantasy notion that faxing is exempt from HIPAA. Obviously, this is not the case, and security plans need to reflect the current reality in which fax information resides with the sender and recipient at a minimum, and often in email systems as well, when a copy of a fax is automatically delivered to an individual’s inbox.
Conclusion
Although it is hard to predict the speed or specific direction of tomorrow’s technology changes, the challenge of maintaining privacy and security without compromising functionality will be perpetual. Among other things, an effective compliance plan requires staying up to date, if not ahead of the curve, on security systems. It also requires ensuring that cybersecurity is an initial consideration in the design not only of devices and systems but also of organizations and their associated cultures. Finally, it is vital to remember that one of the greatest threats to information security is human fallibility. Of course, none of these considerations is new. Instead, the more things change—and they are certainly changing a lot—the more that privacy and security practices, at their core, remain the same.
About the Author
Nathan A. Kottkamp, CIPP, is a Partner on Waller's health care industry team. Nathan provides counsel on compliance with federal and state health care regulations and day-to-day operational issues. His experience includes hospitals and health systems, academic medical centers, behavioral health care services providers, specialty physician practices, post-acute and long-term care providers, and higher education institutions. Additionally, Nathan represents health care providers before professional and licensing boards, and he has successfully obtained certificates of public need for dozens of critical projects ranging from service expansions to the development of new hospitals and ambulatory surgery centers. Nathan has earned the CIPP/US designation as a Certified Information Privacy Professional from the International Association of Privacy Professionals, and clients rely on his insight and experience with HIPAA and other data privacy and security matters.
[1] See https://www.fda.gov/regulatory-information/selected-amendments-fdc-act/21st-century-cures-act (visited Jan. 11, 2021).
[2] See https://www.hhs.gov/about/news/2020/12/10/hhs-proposes-modifications-hipaa-privacy-rule-empower-patients-improve-coordinated-care-reduce-regulatory-burdens.html (visited Jan. 11, 2021).
[3] See, e.g., https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/agreements/elite-primary-care/index.html (visited Jan. 11, 2021) and https://www.hhs.gov/hipaa/for-professionals/compliance-enforcement/agreements/banner/index.html (visited Jan. 14, 2021).
[4] See https://www.fda.gov/news-events/press-announcements/fda-releases-artificial-intelligencemachine-learning-action-plan (visited Jan. 14, 2021).
[5] See https://www.fda.gov/drugs/drug-supply-chain-security-act-dscsa/drug-supply-chain-security-act-law-and-policies (visited Jan. 14, 2021) and https://www.fda.gov/medical-devices/postmarket-requirements-devices/medical-device-tracking (visited Jan. 14, 2021).
[6] See https://www.hhs.gov/hipaa/for-professionals/security/index.html (visited Jan. 11, 2021).
[7] See 45 C.F.R. § 164.306(b).