Cybersecurity Awareness Month Week 3: AI-Enabled Threats and Responsible AI

October is Cybersecurity Awareness Month

Each week this month we will take on a new cybersecurity subject and ask our experts in the healthcare industry to weigh in.

Week 3: AI-Enabled Threats and Responsible AI in Healthcare
According to CyberArk’s 2023 Identity Security Threat Landscape Report, more than 9 out of 10 security professionals surveyed expect AI-enabled threats to affect their organizations in 2023.

David Finn, Vice President, College of Healthcare Information Management Executives (CHIME)
X: @DavidSFinn

The hysteria and much of the security reaction to AI has been fascinating to watch. This is a new technology, and we introduce new technology every day; we also use existing technology in new ways. Yes, it presents risk, but if you have an enterprise risk management program that includes experts from across the organization and addresses all the risks around data ownership and use, technology, security, patient care, cost, privacy, and so on, then it is just another tool, product, or service that needs to be assessed and evaluated. The only rule should be: do it right. Assuming you’ve built the processes, have the right people involved, and don’t make “emotional” decisions, AI can be introduced into a production environment. Doing it right, with AI or anything else, just means you understand the problem, you recognize the value chain you’re delivering and impacting, and the solution is designed with a deep understanding of the issues you are addressing, and introducing. And finally, you should have identified the metrics and rubrics for measuring performance, safety, and bias.

Kyle Neuman, Director of Trust Framework Development, DirectTrust
X: @DirectTrustorg

In the context of cybersecurity, AI is like a hammer. It can be used to construct better defenses, and it can be used to destroy existing defenses. In the digital identity landscape, AI can be used to better detect fraudulent identity evidence and prevent bad actors from obtaining legitimate, trusted digital identity credentials. On the other hand, it can also be used to generate more convincing deepfakes and fraudulent identity evidence at larger scales than were previously possible. On both the cybersecurity front and the related digital identity front, constructive AI will most likely be used to combat destructive AI, and anyone who’s late to the party will become an easy target.

Shannon Hastings, Chief Technology Officer, Project Ronin
X: @project_ronin

Protecting sensitive patient data is of the utmost importance for every healthcare stakeholder, particularly those embracing AI technology to enhance patient outcomes, reduce costs, and improve the healthcare experience. Organizations must therefore remain committed to continuously enhancing their security practices by routinely reviewing and updating security controls, policies, and procedures to adapt to evolving threats, industry best practices, and changing regulatory requirements. Additionally, they must continually educate and train their workforce to stay abreast of changing threats and vulnerabilities, and empower employees to take an active role in mitigating security threats. Implementing the SOC 2 Type 2 framework provides an essential level of compliance required to safeguard patient data, signaling to all stakeholders that stringent privacy and security procedures are in place. It’s crucial for organizations to leverage both internal and external audits to assess their security, availability, processing integrity, confidentiality, and privacy controls while ensuring compliance with HIPAA regulations. In an era where malicious actors increasingly target healthcare data, organizations must demonstrate commitment and heightened vigilance, especially when implementing emerging technologies. This commitment is upheld through comprehensive risk assessments, robust security controls, continuous monitoring, and ongoing improvement efforts, all while maintaining the confidentiality, integrity, and availability of electronic protected health information.

Blair Cohen, Founder & President, AuthenticID
X: @AuthenticID

AI and cybersecurity go hand-in-hand when it comes to the ongoing fight against cyberattacks. AI bolsters cybersecurity by offering advanced threat detection and rapid response capabilities, enabling organizations to identify and mitigate threats at exceptional speeds. With the number of cybersecurity threats growing daily, organizations cannot afford to let their guard down – especially considering the rapid progression of technology. Bad actors continue to find new ways to use AI in their attacks, but cybersecurity professionals can also unlock the speed and precision AI offers to safeguard their businesses from threats.

Lesley Berkeyheiser, CCSFP, CHQP, Senior Assessor, DirectTrust
X: @DirectTrustorg

It seems it was just yesterday that our industry was struggling with the compliance challenges and threats related to adopting cloud computing and blockchain technologies. Today the hot topic is artificial intelligence. Whatever the topic du jour, it is vital that our healthcare organizations are ready with a solid, repeatable enterprise risk management process to identify and handle potential cybersecurity and data-related threats to sensitive information. One way to make sure your organization is ready is to undergo an independent third-party review of your privacy, security, and cybersecurity controls, such as the 20+ accreditation programs offered by DirectTrust and tailored to your business model.

Heather Randall, Chief Compliance Officer, Sphere
X: @SphereCommerce

I see AI as a two-vector threat. The first is the use of generative AI to create targeted threats against organizations. Using sophisticated data analysis tools, threat actors can more precisely target an organization’s vulnerabilities, and AI is increasingly used both to develop more sophisticated malware and to target organizations more effectively. Organizations can best address this by continuing to evolve their own security posture through testing and monitoring, risk assessments, and the adoption of new protective technologies. The second threat vector is less malicious but no less dangerous: the use of AI by service providers. Many service providers are incorporating AI into their software and services to provide a better product. It is vital to both security and privacy to understand how AI will be used within your environment and what data it will access and analyze. All organizations should audit their service providers to understand whether and how they are using AI; organizations may unknowingly be putting sensitive data at risk through AI-enabled services that were not properly vetted.

Mohan Badkundri, Vice President of Development, HSBlox

In the era of rapid digitization and adoption of cloud technologies, AI-enabled cyberattacks are on the rise, and organizations are struggling to deal with the security challenges brought on by these sophisticated attacks. Generative AI isn’t just boosting hackers’ speed; it’s also expanding their reach. The best way to adapt to new and emerging cyberthreats is to adhere to industry-standard best practices while also layering in new technologies and responsible AI strategies to fortify defenses and build proactive elements into enterprise security.

Russell Teague, Vice President, Advisory Services & Threat Operations, Fortified Health Security
X: @FortifiedHITSec

AI is a next-generation innovation that many consider the most disruptive technology we have seen. With rapid advancements in AI, its full impact on sectors like healthcare, life sciences, and medicine is not well understood at this time. There are both benefits and risks to this technology, and as organizations start to leverage AI in new ways, we’ll develop a better understanding of what’s possible as well as what’s safe. The AI evolution also applies to threat groups and attackers. We have already seen early indications that threat actors are using AI to create more sophisticated attacks, ones that are more dynamic and can morph or change attack parameters based on the situation or what the malware encounters within a victim’s environment.

Daniel Clayton, VP of Security Operations, Expel
X: @ExpelSecurity

It has always been true that our greatest technological innovations renew the old tension between risk and reward, progress and protection, and innovation and injury. AI, like many technological advancements before it, will make our lives easier and more convenient. It will make our businesses more efficient, more effective, and ultimately more profitable. But inherent in any new capability is the risk that the very same innovation will be used against us.

We are already seeing this today: deepfakes are now being used routinely in cyberattacks, and seemingly perfect spear-phishing attacks are being deployed with the help of AI-driven social engineering. AI makes common password crackers vastly more effective, and it drives the evolution of malware to stay ahead of the tools we deploy to protect ourselves. In addition, with so many companies seeking to take advantage of the many generative AI apps available today, the risk of oversharing sensitive intellectual property or other sensitive information is significant.

Nonetheless, just as this new capability supercharges cyber adversaries, it also gives security teams more tools in the fight. AI is helping us close the skills gap, it powers up our ability to identify, prioritize and patch vulnerabilities and it can help us predict how bad actors will evolve their tools and their tactics against us. Technology is never simply good or bad, but every innovation brings a set of new and unknown risks; if we are to realize the potential benefits of AI within our businesses, those risks (like any other risks) need to be quickly understood, measured and managed.

Zandy McAllister, Virtual Chief Information Security Officer, Anatomy IT

The biggest risk from generative AI’s ability to create more legitimate-looking phishing emails is that it increases the likelihood of human error. Everyone in healthcare is so busy and working so many hours that their guard may be lowered, and they may end up inadvertently opening an attachment or link that contains malware. That is completely understandable given the number of messages they receive and the demand to respond in a timely manner, which is why it is crucial that everyone continues security awareness training, no matter what role they hold in a healthcare organization.

Rodman Ramezanian, Global Cloud Threat Lead, Skyhigh Security
X: @SkyhighSecurity

We’ve all seen very fast growth rates for services like social media, streaming, and cloud storage platforms, but in the case of artificial intelligence services alone, we’ve seen usage increase by over 400% in just the past six months. It is clearly a domain where we continue to see rapid expansion and adoption. However, adoption of these powerful AI services will largely depend on how well organizations can minimize the associated risks for their own use cases. Some may take the more heavy-handed approach of “block all AI,” while others may decide to coach and guide users with guardrails and compensating security controls.

Ultimately, I think this realm of Generative AI will continue to grow and advance to offer richer services and features to users and businesses. The question will then become how mature and well-equipped these organizations are to embrace and benefit from these AI advancements without introducing risks in the form of data loss or misuse. For many organizations in the healthcare industry, the nature of their work and sensitivity of their data may mean that AI services just aren’t suitable or appropriate. However, with the right protections in place, AI can bring about incredible efficiencies and benefits to help all organizations grow and thrive.

Toby Eadelman, Chief Technology Officer, AvaSure
X: @AvaSure

AI can be used by malicious attackers to develop more sophisticated schemes or to evade detection. The very tools used to protect our systems are the root of the weapons formed to infiltrate or destroy them. Healthcare should continue to invest in cybersecurity infrastructure and training to instill a security-first view of data and technology.

Privacy, data security, and the ethical use of technology are of the utmost importance. AI can be a valuable tool to prevent, detect, and respond to cyberattacks. However, we run the risk of becoming too passive and reliant on AI cybersecurity measures. That is why AvaSure will never remove human decision-making from its cybersecurity processes. Having a human in the loop ensures that the data used for AI and machine learning is accurate and unbiased.

Jim Hundemer, CISO, Kalderos
X: @KalderosInc

It’s a given that security and privacy professionals should be concerned with external actors using AI to generate new threat vectors. A less obvious, but equally concerning, threat is posed by training large language models (LLMs) on internal corporate information and data. This creates the risk that confidential corporate data, such as employee salaries and bonus amounts, could unintentionally be exposed to the entire corporate workforce.
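To illustrate the kind of safeguard this implies, here is a minimal sketch of a pre-ingestion filter that screens documents before they enter an internal fine-tuning corpus. The patterns, directory labels, and file names below are hypothetical illustrations, not any vendor’s actual controls, and a production system would rely on proper data classification rather than regexes alone.

```python
import re

# Hypothetical pre-ingestion filter for an internal LLM fine-tuning corpus.
# The patterns and blocked source prefixes are illustrative examples only.

SENSITIVE_PATTERNS = [
    re.compile(r"\b(salary|salaries|bonus|compensation)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # US SSN-like pattern
    re.compile(r"\$\s?\d{1,3}(,\d{3})+(\.\d{2})?"),   # large dollar amounts
]

BLOCKED_SOURCES = ("hr/", "payroll/", "finance/")  # assumed repository labels

def is_safe_for_training(doc_path: str, text: str) -> bool:
    """Reject documents from restricted sources or containing
    patterns that suggest confidential data."""
    if doc_path.startswith(BLOCKED_SOURCES):
        return False
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)

documents = [
    ("wiki/onboarding.md", "Welcome to the team! Badge pickup is on day one."),
    ("hr/salaries_2023.csv", "name,salary\nJane Doe,$120,000"),
]

training_corpus = [text for path, text in documents
                   if is_safe_for_training(path, text)]
# Only the onboarding page reaches the corpus; the salary file is excluded.
```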

Rick Passero, Chief Information Security Officer, Anatomy IT

Along with generating convincing content for phishing emails, AI is also able to generate code for malware that could hold a hospital’s network hostage. The malware created so far has not been very sophisticated, but that could change over time. It’s important to remember, though, that while AI is enabling threat actors to scale out their operations in this way, it likewise helps us as defenders scale our ability to capture and analyze data so we can identify suspicious activity earlier and with greater precision.

Mark W. Dill, Chief Information Security Officer, MedAllies
X: @MedAllies

Organizations need to adopt a privacy-first, security-first perspective to ensure that the way they want to use AI aligns with their business strategies. It starts with a high-level policy that puts in writing what constitutes acceptable use of all forms of AI. Companies need to determine the rules of the road for employees who may want to use AI-based services such as ChatGPT, workers who may acquire technology with embedded AI, and software developers who may want to use AI to generate code.

Wes Wright, Chief Healthcare Officer, Ordr
X: @ordrofthings

Cybersecurity is as much about speed as it is about numbers, and we are long past the point of expecting human beings to keep pace with the volume and velocity of attacks. According to Accenture, the average enterprise is attacked 270 times each year; even the most prepared organizations suffer a breach 17% of the time, while the least prepared are breached 54% of the time. AI technologies make it even easier for attackers to operate faster and at larger scale.

So let’s use it too, to automate our defenses as much as possible. There is a clear need to embrace automation as a key element of your defenses; without automated security tools, organizations simply won’t have a chance of keeping up.
Automation tools can help security teams both prepare for and respond to attacks. After all, time to response is one of the biggest determinants of how successful you’ll be in defending your organization. Almost everything in cyber defense comes down to time, and that’s where AI and automation should be your best friend. Manually investigating what is susceptible to a newly found vulnerability or zero-day attack takes too long. Intelligent automated security tools can help teams quickly identify vulnerable areas and apply security policies, such as segmentation, to limit the impact an attack has on the organization as a whole and to move faster to mitigate an issue when one happens.
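To make that pattern concrete, here is a minimal sketch of advisory-driven triage feeding a segmentation action: match a new vulnerability advisory against an asset inventory, then quarantine anything still running a vulnerable version. The inventory structure, function names, and version values are hypothetical illustrations, not any particular vendor’s API.

```python
from dataclasses import dataclass

# Hypothetical sketch: automated vulnerability triage plus segmentation.
# Asset, the inventory, and apply_segmentation_policy() are illustrative
# stand-ins for whatever inventory and network tooling an organization runs.

@dataclass
class Asset:
    name: str
    software: dict   # package name -> installed version
    vlan: str

def affected_assets(inventory, package, fixed_version):
    """Flag assets still running a version below the patched release.
    (Naive string comparison for illustration; real tooling would
    parse and compare version numbers properly.)"""
    return [a for a in inventory
            if package in a.software and a.software[package] < fixed_version]

def apply_segmentation_policy(asset, quarantine_vlan="vlan-quarantine"):
    """Placeholder for the API call that would move the asset into a
    restricted network segment until it can be patched."""
    print(f"segmenting {asset.name}: {asset.vlan} -> {quarantine_vlan}")
    asset.vlan = quarantine_vlan

inventory = [
    Asset("infusion-pump-12", {"openssl": "1.1.1t"}, "vlan-clinical"),
    Asset("imaging-ws-03", {"openssl": "3.0.9"}, "vlan-imaging"),
]

# A new advisory lands: vulnerability fixed in openssl 3.0.9 (illustrative).
for asset in affected_assets(inventory, "openssl", "3.0.9"):
    apply_segmentation_policy(asset)   # only the unpatched pump is quarantined
```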

‘Good’ AI with automation can be the difference-maker in the fight against modern AI-powered attacks.

AJ Nash, VP of Intelligence, ZeroFox
X: @ZeroFox

The healthcare industry was hit hard this past year, with breaches constantly making headlines across the United States. In fact, our research team assesses that ransomware and digital extortion attacks targeting the healthcare sector are likely on an upward trajectory – with attacks in Q2 2023 at their highest since 2021. The value of patient data continues to drive attacks on the healthcare sector, and recent advancements in AI have enabled faster deployments for threat actors keen on expediting their attacks. With the rise of generative AI tools like ChatGPT, even less sophisticated cybercriminals are able to mimic voices, create deepfake videos, write more sophisticated phishing emails, and disseminate malware with greater ease. All of this puts the healthcare industry at a higher risk of harm from successful attacks. This rising risk requires an equal rise in the use of intelligence – and ongoing cybersecurity awareness and training – to identify and disrupt AI-enabled attacks before they cause harm to patients and information systems, or result in more data breaches.