By Jim Younkin, MBA, Senior Director, Audacious Inquiry, a PointClickCare company
While Americans have generally embraced consumer technologies like smartphones and social media, healthcare AI is proving to be different.
Consumer technologies typically gain widespread adoption despite initial concerns; healthcare AI, by contrast, faces deeper skepticism from both providers and patients. That caution reflects the unique challenges and higher stakes involved when AI intersects with patient care and sensitive health data, and it is well documented in recent research.
In a 2024 American Medical Association survey, 41% of physicians said they were equally excited and concerned about potential uses of AI. Their top concerns were the impact on the patient-physician relationship and patient privacy.
In a separate University of Minnesota survey, 66% of patients reported low trust in their healthcare system to use AI responsibly, and 58% reported low trust in its assurances that an AI tool would not harm them.
That’s a high level of ambivalence about a technology that promises to reshape healthcare. And that’s a good thing. Because of its reliance on personal data, the complexity of its operations, and its influence on care decisions, healthcare AI should be approached with caution.
This healthy skepticism compels us to approach healthcare AI solutions with the rigor and responsibility they demand, ensuring that their development and deployment are anchored in ethical considerations, data security, and patient safety from the outset. The stakes are simply too high to get it wrong. That’s why much of the early adoption of AI has been in relatively low-risk areas like passive listening and documentation analysis.
Despite growing awareness of these challenges, implementation remains complex. Recent federal guidance, such as the Office of Management and Budget’s AI memorandums, provides direction, but organizations must still navigate an evolving landscape. As each hospital system and organization implements its own approach to AI, its use must be guided by a sense of responsibility toward patients, providers, and healthcare itself.
The five pillars of responsible AI use
While AI capabilities evolve rapidly, with today’s applications quickly surpassed by tomorrow’s innovations, certain foundational principles remain constant. These five pillars provide a stable framework for responsible AI technology adoption:
- Transparency – Healthcare organizations must provide clear explanations of how their AI systems work, the data they use, and the methods they employ to reach conclusions. Transparency requires providing appropriate explanations tailored to different audiences: detailed technical information for IT teams and regulators, practical operational insights for clinicians, and clear, understandable information for patients about how AI affects their care. Organizations should document and share information about data sources, system limitations, performance metrics, and the role of AI in clinical workflows. This openness builds trust among clinicians and patients, allows effective oversight, and enables continuous improvement of AI systems.
- Accountability – Organizations must establish clear accountability structures for their AI implementations. That includes designating leaders responsible for AI outcomes, maintaining transparent documentation of AI decisions, conducting regular performance evaluations across diverse patient populations, implementing rapid-response protocols for correcting errors, and establishing accessible channels for clinician and patient feedback.
- Human oversight – AI systems must augment and enhance human decision-making rather than substitute for clinical judgment. Healthcare professionals must be able to review, validate, and override AI recommendations based on their clinical expertise and understanding of individual patient contexts.
- Privacy and security – AI systems create unique privacy and security challenges that extend beyond conventional healthcare data protection requirements. These systems require much larger datasets to function effectively, increasing both the scope of data collection and potential exposure risk. AI can unintentionally reveal patterns in data that might compromise patient privacy in ways traditional systems don’t. Additionally, the same pattern-recognition capabilities that make AI valuable can, if not properly secured, create new ways for protected information to be exposed or pieced together from seemingly anonymous data. Organizations must implement specialized safeguards and governance protocols specifically designed for their AI systems to ensure patient information remains protected throughout the entire process, from system development through daily use.
- Fairness – AI systems must be designed to deliver equitable care and outcomes for all patients, regardless of race, gender, age, socioeconomic status, or geography. Achieving fairness requires proactively examining how historical healthcare disparities may influence training data, continuously testing AI performance across diverse populations, and making necessary adjustments when disparities emerge. Organizations should establish clear fairness metrics and review processes to ensure AI systems reduce rather than reinforce healthcare inequities; a minimal sketch of such a subgroup check follows this list.
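To make that last point concrete, here is one way a team might check whether a risk model performs comparably across patient subgroups. This is a minimal, illustrative sketch: the evaluation records, group labels, and the 5% tolerance are all hypothetical assumptions, not a validated clinical methodology.

```python
# Illustrative subgroup fairness check for a binary risk model.
# All records, group names, and thresholds below are hypothetical.
from collections import defaultdict

def recall_by_group(records):
    """Compute sensitivity (true-positive rate) per demographic group.

    Each record is (group, actually_readmitted, flagged_high_risk).
    """
    tp = defaultdict(int)  # readmissions the model correctly flagged, per group
    fn = defaultdict(int)  # readmissions the model missed, per group
    for group, actual, flagged in records:
        if actual and flagged:
            tp[group] += 1
        elif actual:
            fn[group] += 1
    groups = tp.keys() | fn.keys()
    return {g: tp[g] / (tp[g] + fn[g]) for g in groups}

# Hypothetical evaluation records: (group, was_readmitted, model_flagged_high_risk)
evaluation = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, True), ("group_b", False, True),
]

rates = recall_by_group(evaluation)
for group, rate in sorted(rates.items()):
    print(f"{group}: sensitivity = {rate:.0%}")

gap = max(rates.values()) - min(rates.values())
if gap > 0.05:  # illustrative tolerance; a real threshold belongs in governance policy
    print(f"Sensitivity gap of {gap:.0%} across groups warrants review before deployment")
```

A real fairness program would evaluate multiple metrics (sensitivity, specificity, calibration) on much larger samples and route any gaps through the governance and accountability structures described above.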
Applying AI responsibly
To see how these principles work in practice, consider a common healthcare challenge: preventing unnecessary hospital readmissions of patients discharged to skilled nursing facilities (SNFs). Hospital readmissions cost the U.S. healthcare system approximately $26 billion annually. They not only drive up costs but also lead to poorer patient outcomes and lower quality ratings for SNFs.
AI technologies address this challenge by analyzing patient data in real time to identify which newly admitted SNF patients face the highest readmission or mortality risks, along with the contributing factors. This allows care teams to intervene early with targeted support, improving patient outcomes and reducing avoidable readmissions.
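As a simplified illustration of how a model can surface contributing factors, consider an additive risk score where each factor’s contribution is computed separately and shown to the care team. The factor names and weights below are hypothetical placeholders, not a validated clinical model:

```python
# Illustrative sketch of a transparent readmission-risk score for newly
# admitted SNF patients. Factors and weights are hypothetical; a production
# system would be trained on real outcomes data and reviewed by clinicians.

# Hypothetical risk factors with per-unit weights (additive contributions)
RISK_WEIGHTS = {
    "prior_admissions_12mo": 0.35,   # per prior hospital admission
    "active_diagnoses": 0.10,        # per active chronic diagnosis
    "high_risk_medications": 0.25,   # per flagged medication class
    "adl_dependence_score": 0.15,    # functional dependence, 0-6 scale
}

def score_patient(features: dict) -> tuple[float, list[tuple[str, float]]]:
    """Return a total risk score plus each factor's contribution,
    sorted so care teams can see why a patient was flagged."""
    contributions = [(name, RISK_WEIGHTS[name] * features.get(name, 0))
                     for name in RISK_WEIGHTS]
    contributions.sort(key=lambda c: c[1], reverse=True)
    return sum(c for _, c in contributions), contributions

total, factors = score_patient({
    "prior_admissions_12mo": 3,
    "active_diagnoses": 5,
    "high_risk_medications": 2,
    "adl_dependence_score": 4,
})
print(f"Risk score: {total:.2f}")
for name, contribution in factors:
    print(f"  {name}: +{contribution:.2f}")
```

Keeping each factor’s contribution inspectable supports the transparency and human-oversight pillars: a nurse or physician can see why a patient was flagged and judge whether that reasoning fits the individual case.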
This application demonstrates all five principles: the AI system operates transparently, maintains accountability through clear performance metrics, preserves human oversight of care decisions, protects patient privacy through secure data handling, and promotes fairness by identifying risks across diverse populations.
These same principles apply to other AI applications in post-acute care, such as improving clinical documentation. Medical records, particularly discharge summaries, are typically lengthy, poorly structured documents that challenge healthcare providers trying to quickly locate critical information. When patients are transferred from hospitals to SNFs, these summaries are often printed and handed off, leaving SNF staff to manually sift through dozens of pages to find crucial details about diagnoses, medications, and care plans.
AI technologies transform this workflow by automatically extracting and summarizing essential information, enabling caregivers to quickly understand patient needs and provide more timely, targeted care. Like AI-powered readmission prediction, this is another example of using AI responsibly to enhance healthcare delivery.
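For a sense of the mechanics, here is a deliberately simple, rule-based sketch of pulling target sections out of a plain-text discharge summary. The section headings and sample text are assumptions for illustration; real summaries vary widely, and production systems typically combine this kind of structural parsing with trained language models:

```python
# Minimal sketch of rule-based section extraction from a plain-text
# discharge summary. Headings and sample content are hypothetical.
import re

# Sections we want to surface for SNF staff (assumed "HEADING:" format)
TARGET_SECTIONS = ("DISCHARGE DIAGNOSES", "DISCHARGE MEDICATIONS", "CARE PLAN")

def extract_sections(summary: str) -> dict[str, str]:
    """Split a summary on ALL-CAPS headings and keep the target sections."""
    # Split the document at lines that look like section headings
    parts = re.split(r"\n(?=[A-Z][A-Z /]+:)", summary)
    sections = {}
    for part in parts:
        heading, _, body = part.partition(":")
        if heading.strip() in TARGET_SECTIONS:
            sections[heading.strip()] = body.strip()
    return sections

sample = """ADMISSION NOTE: 72-year-old admitted for CHF exacerbation.
DISCHARGE DIAGNOSES: Congestive heart failure; type 2 diabetes.
DISCHARGE MEDICATIONS: Furosemide 40mg daily; metformin 500mg BID.
CARE PLAN: Daily weights; low-sodium diet; follow-up in 7 days."""

for heading, body in extract_sections(sample).items():
    print(f"{heading}: {body}")
```

Even in this toy form, the goal is the same as in the full-scale systems described above: surface diagnoses, medications, and the care plan up front so caregivers are not hunting through dozens of pages during a handoff.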
Adopting AI with caution and purpose
Healthcare leaders ready to embrace AI responsibly should start by conducting an organizational readiness assessment, identifying 2-3 high-impact pilot opportunities, and establishing governance structures that embed these five principles from day one. While implementing all five pillars may seem daunting, organizations can begin with transparency and accountability – the foundation upon which successful AI programs are built. The healthcare organizations that master responsible AI implementation today will be the ones that transform patient care and operational efficiency tomorrow – safely, effectively, and equitably.