By Quentin Chu, Healthcare Executive, Investor, Leader
LinkedIn: Quentin Chu
OpenAI’s entry into healthcare is good news, unambiguously.
When the company behind ChatGPT announces a dedicated health product, it validates what many of us have known: AI has crossed the threshold of genuine clinical utility. Patient expectations are shifting. Health systems are finally ready to engage seriously with AI. The consumer health moment is real and worth celebrating.
But the headlines about AI transforming healthcare often miss what transformation actually requires. Consider the specialist access crisis: Patients now wait an average of 31 days to see a specialist, the longest on record. Gastroenterology averages 40 days. Rheumatology can stretch to six months. The Association of American Medical Colleges projects a shortfall of up to 86,000 physicians by 2036.
ChatGPT can already help a patient describe their symptoms more clearly. However, helping that patient see a rheumatologist before their joints deteriorate requires something more.
Let’s be clear about AI’s genuine capabilities. Modern AI can coordinate complex processes, follow up persistently until tasks are complete, structure clinical information for specialist review, and match expert-level accuracy on narrow diagnostic tasks. Agentic AI, meaning systems that take actions rather than just answering questions, is real and powerful.
Some companies are already using these capabilities to build transformative tools. AI embedded in clinical workflows can effectively stretch specialist capacity, enabling one expert to guide many more cases than traditional models allow. A specialist reviewing a well-prepared case asynchronously can provide guidance in 15 minutes that would otherwise require a 45-minute visit scheduled three months out.
The question isn’t whether AI can improve healthcare; it’s what AI must be paired with, in workflow and incentive redesign, to actually move the curve.
AI can help one specialist cover far more patients, but only if workflows exist to prepare cases properly, validate AI recommendations with appropriate clinical oversight, and track whether those recommendations actually worked. The technology is necessary, but not sufficient. We also need redesigned care pathways and payment models that compensate specialists for guidance, not just visits.
Then there are incentives: Fee-for-service rewards volume. Prior authorization exists because friction saves payers money. AI doesn’t change who gets paid for what. But AI can be the lever that makes incentive change viable, by proving ROI, reducing coordination costs, and generating the outcome data that value-based contracts demand. The infrastructure for new payment models is being built by companies that track what happens after a recommendation, not just companies that deliver the recommendation.
No one owns the patient between hospital discharge and the primary care (PCP) follow-up. No one owns the diagnostic journey when symptoms take months to resolve. Abnormal results sit in inboxes because no individual’s job depends on acting on them. AI can flag problems relentlessly. Agentic AI can even create de facto accountability through persistent follow-up.
But formal accountability, where compensation and performance depend on resolution, requires organizational change, not just better software. This brings us to a distinction that matters more than most coverage acknowledges: consumer AI versus workflow AI.
Consumer AI helps patients navigate a broken system. It answers questions, improves health literacy, and helps people articulate concerns before appointments. This is valuable, and OpenAI will likely do it well.
But workflow AI helps fix the system itself. It embeds in clinical operations rather than sitting on top of them. It tracks outcomes so it learns what actually works. It creates feedback loops between generalists and specialists. It captures data that no internet scrape can replicate: What was recommended, what the specialist thought, what happened to the patient.
Both matter. Only one changes the curve.
The companies building workflow AI aren’t just deploying sophisticated models; they’re redesigning how care gets delivered. They’re proving which protocols work and which don’t. They’re building the data infrastructure that makes value-based care viable. They’re creating the “unsexy” plumbing that makes the whole system function differently.
OpenAI’s healthcare announcement raises the tide for everyone building in this space. The conversation in health system boardrooms shifts from “should we use AI?” to “which AI, deployed where, to solve which problem?”
The answer that excites me most isn’t the AI patients interact with directly; it’s the infrastructure underneath. The companies that will actually solve the specialist crisis are building workflows that let experts guide 10 cases in the time one visit takes, outcome tracking that proves what works, and data assets that make both better AI and new payment models possible.
ChatGPT Health is a welcome addition to healthcare. But the builders to watch most closely are those quietly fixing the plumbing while everyone else admires the faucet.