How Hospitals Can Prepare for AI-Generated Phishing

By Zac Amos, Features Editor, ReHack

Hospitals run on email, shared portals and ticket queues. Attackers now use generative artificial intelligence to send convincing messages that mimic colleagues, patients, payers or IT and pressure staff into rushed logins. Language errors used to give such scams away, but AI reduces those tells and scales personalization across thousands of targets. The result is a polished con aimed squarely at credentials and clinical workflows.

AI Phishing vs. Traditional Phishing

Phishing takes many forms. Email is the most common channel because it is inexpensive and simple to blast. Deceitful text messages and phone calls also impersonate real companies or people. It pays to know the difference between AI-generated and traditional phishing schemes.

  • Personalization at scale: Traditional phishing relies on generic templates, while AI uses public and breached data to tailor names, roles, projects and timing so messages feel expected. Generative models can raise believability across text, voice and even cloned sites.
  • Cleaner language: Old phishing scams often exposed themselves with grammar slips. Modern models produce fluent and on-brand copy across languages, erasing common red flags.
  • Faster campaign assembly: Old-school criminals once stitched kits together by hand, but AI now drafts emails, builds lookalike pages and spins SMS scripts all at once.
  • Multichannel consistency: Attackers coordinate email, voice and SMS with the same AI-crafted narrative. This way, a user who ignores one prompt may hear a follow-up call that repeats the story convincingly.
  • Defense evasion: AI generates novel phrasing and fresh URLs that slip past signature-based filters.

How Is AI Changing Phishing Attacks?

Scammers can create very personal communications that seem like they come from a bank, an employer or a friend. Some even contain links meant to steal vital information. Hospitals, in particular, face a planning gap: 60% of healthcare organizations haven’t set clear security goals or built a plan to address them. This leaves teams reactive when attack patterns change.

Scale matters. In late 2024, a study reported a 703% surge in credential-phishing incidents tied to generative AI, which can spin up convincing emails and fake login pages. A spike that sharp overwhelms basic filters, threatens clinical apps and exposes financial operations.

How Hospitals Can Fight AI-Generated Phishing

Hospitals that map where AI phishing causes the most harm, such as access, identity and payments, can target solutions and prove resilience in drills and real incidents.

  1. Raise Identity Assurance
    Adopt phishing-resistant multifactor authentication (MFA) for staff and vendors using FIDO2 (fast identity online 2) or personal identity verification (PIV) authenticators. Require step-up verification for privileged actions, like electronic health record admin changes or wire approvals. Guidance from the National Institute of Standards and Technology explains why cryptographic factors block relay kits and MFA-fatigue attacks better than one-time codes and push approvals.
  2. Harden Email and Web Trust
    Set email authentication methods, including SPF, DKIM and DMARC, to reject spoofed mail. Use a secure email gateway with behavioral analysis, link isolation and attachment sandboxing. Rewrite links and scan first-seen domains before delivery, then monitor lookalike domains and set up fast takedowns for brand abuse.
  3. Test People and Processes With Purpose
    Run phishing simulations that mirror AI-polished lures across email, SMS and voice. Measure click rates, credential capture attempts and reports to the abuse mailbox, and fix weak handoffs between security teams. A 2025 Reuters investigation with a Harvard researcher revealed how mainstream chatbots helped design a full phishing ruse that performed alarmingly well in a real-world test.
  4. Close the Help Desk Gap
    When anyone asks to enroll a new MFA device or change payment details, call back on a known number and verify with two different factors from separate systems. Log the verification steps in the ticket. For vendor changes, require a second person to approve. The U.S. Department of Health and Human Services maintains public advisories that describe current social-engineering patterns and mitigation steps hospitals can adopt.
  5. Educate Using Realistic Cues
    Ditch the “spot the typo” posters and train on messages that spoof real workflows, like payer notices, missed faxes or DocuSign links. Teach the habit of reporting early: make the report button obvious and route it to tooling that clusters similar lures and blocks them for everyone.
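The email-authentication setup in step 2 ultimately comes down to publishing a few DNS TXT records. A minimal sketch, with hypothetical domain, selector and mailbox names standing in for a hospital's real ones:

```
example-hospital.org.                       TXT  "v=spf1 include:_spf.mailprovider.example ~all"
selector1._domainkey.example-hospital.org.  TXT  "v=DKIM1; k=rsa; p=<base64-public-key>"
_dmarc.example-hospital.org.                TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example-hospital.org"
```

A common rollout path is to start DMARC at p=none to collect aggregate reports, confirm every legitimate sender passes SPF or DKIM alignment, then tighten the policy to quarantine and finally reject.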
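The measurements in step 3 reduce to a few simple ratios over simulation results. A minimal sketch in Python, using made-up tallies for illustration:

```python
# Hypothetical tallies from one phishing-simulation campaign (illustrative numbers only).
results = {"delivered": 1200, "clicked": 96, "submitted_creds": 31, "reported": 410}

click_rate = results["clicked"] / results["delivered"]            # opened the lure and clicked
capture_rate = results["submitted_creds"] / results["delivered"]  # went on to enter credentials
report_rate = results["reported"] / results["delivered"]          # flagged it to the abuse mailbox

print(f"Click rate: {click_rate:.1%}, "
      f"credential capture: {capture_rate:.1%}, "
      f"reported: {report_rate:.1%}")
```

Tracking report rate alongside click rate matters: a team whose clicks fall but whose reports also fall may simply be ignoring mail rather than recognizing lures.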

Closing the Gap Against AI Phishing

Imagine a nurse approving a fake MFA prompt while rushing to clear a bed, or a finance lead changing a vendor’s banking details after a convincing voice call. Neither is careless, but both are following processes designed for a world that no longer exists. The question isn’t whether AI will craft the next lure; it’s whether leadership will change the systems and behaviors it targets. If every hospital enhanced its security by just one step today, tomorrow’s phishing attempts would land with far less force.