Advancing Algorithms: The AMA’s Augmented Intelligence Policy

By Michael Maylahn, Co-founder & President, Stasis Labs
Twitter: @stasislabs

Someone, pop the bubbly. After fifteen years, the American Medical Association (AMA) has released an official statement on AI, a technology that has been gaining steam since 2005. Healthcare AI, once dismissed as a collection of hyper-glamorized algorithms, has won the AMA's hard-won recognition. But before tossing confetti, health-techies quick to err on the side of "machine-everything" should read the AMA's statement carefully. While addressing everything from patient outcomes to legal implications, the document comes down to a fundamental point: if AI is here to stay, so are the physicians who helped train it.

The statement first distinguishes between competing definitions. There are two popular terms for advanced algorithms: artificial and augmented intelligence. While both refer to the same technology, they carry dramatically different connotations. Media sensationalism has given artificial intelligence a bad rep, making it synonymous with a dystopian Robot Revolution. Headlines like "The Machine Will See You Now" and "The Robot Is In" don't help, fueling the mass hysteria surrounding artificial intelligence. To avoid fanning the flames, the AMA sidesteps the term altogether. If artificial and augmented intelligence are twins, the AMA has picked a favorite child.

Augmented intelligence is meant to serve "irreplaceable human clinicians" without replacing them. As defined by the AMA, this breed of AI is designed with physicians at its center. By pairing machine accuracy with human empathy, augmented intelligence can advance clinical care. A black box, however, cannot ask how a patient is feeling and care about the answer. Lacking this empathy, the AMA asserts, AI is a clinical tool, not a caregiver.

After defining augmented intelligence, the AMA debunks the myth of plodding, tech-averse "Physician Fossils." Research shows that physicians are acutely receptive to new technology, provided it meets their clinical needs. However, healthtech is rarely designed with end-user input, resulting in unintuitive, complicated systems. For a timely example of the severe consequences of poor clinical design, the AMA points to the EHR nightmare. Once hailed as an interoperability tool, the EHR and its time-sucking checklists have become the leading cause of physician burnout. In short, health tech's golden child has devolved into a digital Grim Reaper.

The broken engine behind the EHR's clinical crash-landing is stilted physician-developer dialogue. Poor clinical communication breeds a "flash-over-functionality" mindset. Too often, the result is high-tech glitter: technology that is cool, distracting, and not particularly useful. While developers love "aesthetic platforms" and "just-in-case alerts," these features complicate care delivery and bog down workflow. Physicians need functional tools capable of streamlining care, not biweekly software updates. If AI wants to avoid an EHR fate, clinical input must drive product design.

In addition to product design, the AMA's statement addresses smart integration. The EHR proved that an automation-for-automation's-sake mindset can be disastrous in healthcare. AI should not be treated as a cure-all, capable of improving care in a pixelated puff of smoke. Rather, AI should be integrated to alleviate specific hospital pain points, such as identifying at-risk patients and reducing alarm fatigue. Developers should rigorously validate AI's clinical performance and use the resulting feedback to improve product design.

At its best, healthtech maximizes quality of care while minimizing stray wires. To transform the AMA's recognition into approval, AI developers would be wise to internalize this care-centric mindset, prioritizing patients over packaging.

This article was originally published on Stasis Labs and is republished here with permission.