Tread Carefully with New Free Technology

By Matt Fisher, General Counsel, Carium
Twitter: @matt_r_fisher
Twitter: @cariumcares
Host of Healthcare de Jure (#HCdeJure)

When a new form of technology hits the market, many people will rush to use it in an attempt to tap into the perceived new worlds being opened. Healthcare has experienced that rush many times, especially in the current age of technology explosion. However, rushing into the use of new technology does not come without risk. The high stakes of healthcare, whether considering patient impact or regulatory compliance, call for assessing how to most appropriately use the technology before rolling it out.

The New Kid on the Block

ChatGPT, an artificial intelligence-based system, has garnered the most attention on the technology front in recent months. ChatGPT produces content that is easy to understand and very closely mimics what an actual individual could produce. Further, ChatGPT responds to conversational language, which means no coding knowledge, or really any deep technical skill, is required.

The model works by processing a vast amount of existing content that informs what ChatGPT will produce. An interesting, high-level overview of what ChatGPT is by Stephen Wolfram (high-level is said somewhat tongue in cheek, as the discussion still gets fairly technical) breaks it down as the system going word by word, determining which word best fits next based upon what is already there, and then continuing to iterate using different models. Arguably, individuals do not write in that manner, though the approach poses a curious question. Is it something our brains do automatically, intuitively building sentences and overall works by subconsciously processing what is already there? It is a potentially endless thought loop and one that can be distracting while actively trying to write.

As ChatGPT goes through all of that seemingly random analysis, it likely uses a temperature parameter to control how much randomness goes into choosing what should or could come next as it builds the final content. That is part of the reason why the same input can generate different responses, rather than ChatGPT merely reproducing the same thing time after time.
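To make that idea a bit more concrete, here is a minimal, illustrative Python sketch of temperature-based sampling. The candidate words and scores are invented, and this is not how ChatGPT is actually implemented; it simply shows why a higher temperature makes the same prompt produce different output on different runs.

```python
import math
import random

def sample_next_word(candidates, temperature=0.8):
    """Pick the next word from a list of (word, score) pairs.

    Lower temperature sharpens the distribution (more predictable output);
    higher temperature flattens it (more variety). The words and scores
    below are made up purely for illustration.
    """
    words, scores = zip(*candidates)
    # Softmax with temperature: convert raw scores into probabilities.
    scaled = [s / temperature for s in scores]
    max_s = max(scaled)
    exps = [math.exp(s - max_s) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(words, weights=probs, k=1)[0]

# Hypothetical continuations for "Take the medication ..."
candidates = [("daily", 2.0), ("twice", 1.2), ("with", 0.8), ("immediately", 0.3)]
print(sample_next_word(candidates, temperature=0.5))  # usually picks "daily"
print(sample_next_word(candidates, temperature=1.5))  # picks vary more from run to run
```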

Leaving the technical details to the side, mostly because those details are admittedly over my head, the results from ChatGPT are stunning. The content produced could, for the most part, fool a reader into believing that it was created by a human. Further, the content fits into almost every field and every scenario.

The flexibility of the content being created is where both the imagination and the possible danger lie. Users could produce content in situations where information is shared inappropriately or used to cut corners.

Enter Healthcare

In light of the easy-to-produce, ready-to-go content from ChatGPT, it was only a matter of time before use cases in healthcare were identified. A couple of the quickest uses were creating prior authorization requests and appeal letters that were relatively convincing. The content gained a lot of attention on social media and sparked interest in experimenting with the possibilities.

But wait, is the content accurate? Can patient information be entered? How do the creators of ChatGPT feel about delving into healthcare? Those are only a few of the questions, with a whole host waiting in the wings behind them.

Privacy and ChatGPT

Before flooding ChatGPT with information, healthcare users should know that the operator of ChatGPT acknowledges that personal data can be processed through the tool, in which case it would be necessary to execute appropriate agreements. Executing agreements means entering into an arrangement with the operator, which would go beyond any potential free use of the tool. A review of the terms of use finds references only to the GDPR and the California Consumer Privacy Act, not HIPAA. That implies the operator of ChatGPT does not have measures in place to ensure the protection of data as required in the healthcare setting.

If HIPAA compliance is not addressed, then patient information should not be entered into ChatGPT. That means healthcare users cannot create personalized content, because entering any patient-specific information would run afoul of the privacy requirements imposed by HIPAA.

Enter tools purporting to layer HIPAA compliance onto the use of ChatGPT. One such tool was announced by Doximity, with an assertion that the content is housed within a HIPAA-compliant space operated by Doximity. It is possible that the assertion is true, which could be the case if data are entered only in the secure space operated by Doximity. That could mean basic prompts that pull in ChatGPT-created content, which is then editable only within the Doximity platform. That scenario could be a two-step process, which would call for restricting the ability to input patient data that would flow into ChatGPT.

Even if a query to ChatGPT can be protected in a way that meets HIPAA’s requirements, it should still be used carefully. Individuals working for an organization most likely cannot enter into an agreement by themselves that binds the larger organization. Put more plainly, an employed physician in a large group could not sign up for a service as an individual and put patient information into the service. Why not? Because in most scenarios the individual physician does not have the authority to create a legal obligation on behalf of the employer and, while employed, the patient information is subject to the employer’s compliance with HIPAA. As always, it is necessary to consider all of the layers of compliance.

Given the sensitivity of healthcare information, being very clear on the privacy ramifications is essential. Giving away information without appropriate protections is a recipe for future problems.

Accuracy of Information

Another potential complication for healthcare is ensuring that ChatGPT-produced content is actually accurate. Spreading misinformation because the tool presents a response with confidence is problematic. An appropriate professional should carefully review any content generated through ChatGPT, because simple wording changes can have a big impact.

Paying attention to the details is very important, since one proposed use of ChatGPT is to create arguably easier-to-understand patient instructions, discharge papers, or other patient-facing materials. If those materials lead a patient down the wrong path, liability will quickly follow. Assuming that any tool, but especially a new one still being proven out, can be fully trusted will lead to trouble.

Promise Ahead

Despite the caution about rushing into use of a tool like ChatGPT, this is not an argument to avoid such use. Instead, the creation of these tools and their ongoing refinement should be seen as promising. It is impossible to fully know what the future will hold, but it will clearly be exciting and filled with the unexpected.

This article was originally published on The Pulse blog and is republished here with permission.