What Are the Ethical Considerations of AI Use in Clinical Trials?

By Emily Newton, Editor-in-Chief, Revolutionized
Twitter: @ReadRevMag

Like many other industries, the medical sector is ramping up its artificial intelligence (AI) adoption. This technology has shown significant potential in many use cases, particularly clinical trials, but some concerns remain. As more organizations use AI in medical research, more ethical questions emerge.

AI can identify ideal test groups, streamline reporting and automate routine tasks to make clinical trials faster, more accurate and cheaper. However, it’s easy to overlook its ethical implications in the excitement over this potential. Before medical researchers embrace AI, they should consider the following issues.

Data Privacy

AI models require vast amounts of data to work reliably. Training health care AI may involve feeding it terabytes of medical records, and deploying a model in a clinical trial could consolidate thousands of participants’ information in a single database for analysis.

Having this much sensitive data in one place poses significant cybersecurity and privacy risks. Health care is already the most targeted industry for ransomware, and these vast databases are valuable targets for cybercriminals.
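
One way to shrink that attack surface is to keep direct identifiers out of the shared analysis database. The following Python sketch shows the basic idea; the keyed-hash approach, the secret-key handling and the field names are illustrative assumptions, not a complete de-identification scheme.

```python
# Minimal sketch: pseudonymizing direct identifiers before records enter a shared
# analysis database, so a breach exposes less. The secret key and field names are
# hypothetical; real key management would be handled by a secrets service.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical; never hard-code in practice

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible key derived from a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

raw_record = {"name": "Jane Doe", "mrn": "123456", "systolic_bp": 128}
shared_record = {
    "participant_key": pseudonymize(raw_record["mrn"]),  # links records without exposing the MRN
    "systolic_bp": raw_record["systolic_bp"],             # only the analysis data travels
}
print(shared_record)
```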

Many organizations also use “black box” AI, which makes it unclear how the model uses data. That means businesses running clinical trials may be unable to tell participants how they’ll use their information, creating an obstacle to informed consent.

Bias in Training Data

AI can be a promising tool for clinical trials because it could theoretically reduce bias by removing the human element. However, in some cases, AI can amplify human prejudices rather than avoid them.

Machine learning models learn from data, and many records contain or reflect implicit, societally ingrained bias. Consequently, an AI analyzing those records can pick up on biased patterns and train itself to act on them, exaggerating the prejudices in its data. This phenomenon has produced recidivism-prediction tools that falsely flag Black defendants as high risk at nearly twice the rate of white defendants and hiring models that rank women’s resumes lower.

This bias problem can be difficult to catch because AI picks up on subtle patterns humans may not immediately recognize in the data. Developers and end users must ensure they train clinical trial models on diverse data sets and actively monitor for and correct biased tendencies.
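
Monitoring can start with something as simple as comparing how often a model selects participants from each demographic group. The sketch below is illustrative only; the record fields, groups and the 0.8 rule-of-thumb threshold are assumptions, not a prescription for any particular trial.

```python
# Minimal sketch: auditing a hypothetical trial-eligibility model for group-level
# disparities in its positive ("eligible") predictions.
from collections import defaultdict

def selection_rates(records, predictions, group_key="ethnicity"):
    """Share of positive predictions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record, predicted_eligible in zip(records, predictions):
        group = record[group_key]
        totals[group] += 1
        positives[group] += int(predicted_eligible)
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate; below ~0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

# Toy example with made-up data
records = [
    {"ethnicity": "group_a"}, {"ethnicity": "group_a"},
    {"ethnicity": "group_b"}, {"ethnicity": "group_b"},
]
predictions = [True, True, True, False]  # the model's eligibility calls

rates = selection_rates(records, predictions)
print(rates)                    # {'group_a': 1.0, 'group_b': 0.5}
print(disparate_impact(rates))  # 0.5 -> warrants investigation
```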

Lack of Transparency

It’s not always clear how AI models work, and clinical trials using this technology may struggle to remain transparent. That opacity can infringe on participants’ right to information and expose sponsors to hefty regulatory fines, an expensive prospect when developing a new drug already costs an estimated $2.6 billion on average.

Using black-box AI in clinical trials makes it difficult to show how the model reached its conclusions. Even if a device or drug clears regulatory hurdles despite that opacity, an ethical concern remains: Should an organization sell a medical treatment based on results it cannot fully explain?
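
One partial remedy is to pair even an opaque model with post-hoc explanations that can be shared with reviewers and participants. As a minimal sketch, assuming a scikit-learn model, synthetic data and hypothetical feature names, permutation importance reports how much each input drives the model’s conclusions:

```python
# Minimal sketch: documenting which inputs drive a model's conclusions.
# The synthetic data, feature names and logistic-regression model are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "biomarker_level", "prior_treatments"]  # hypothetical
X = rng.normal(size=(200, 3))
y = (X[:, 1] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# How much does shuffling each feature degrade the model's accuracy?
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```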

Trial participants should also be able to request and access information about what data of theirs the study collects and how it uses that information. Businesses may be unable to provide that transparency with some AI models.
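
There is no single standard for providing that visibility, but a study can at least keep a machine-readable log of what it collects from each participant and why, so access requests can be answered. A minimal sketch, with a hypothetical record structure and field names:

```python
# Minimal sketch: a per-participant log of what data a study collects and how it
# is used, so access requests can be answered. Field names are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DataUseEntry:
    data_category: str   # e.g., "blood pressure readings"
    purpose: str         # e.g., "adverse-event monitoring model"
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class ParticipantRecord:
    participant_id: str
    entries: list = field(default_factory=list)

    def log_use(self, data_category: str, purpose: str) -> None:
        self.entries.append(DataUseEntry(data_category, purpose))

    def access_report(self) -> str:
        """What a participant would receive in response to an access request."""
        return json.dumps(asdict(self), indent=2)

record = ParticipantRecord("P-0001")
record.log_use("blood pressure readings", "adverse-event monitoring model")
print(record.access_report())
```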

Unclear Liability

It’s not always clear who’s responsible for what happens during a trial that uses AI. The technology is so new that relevant legal guidance is relatively sparse, leaving the matter of liability up in the air.

Building a proprietary AI model can cost millions of dollars, so many organizations opt for off-the-shelf or custom solutions from third-party vendors. As a result, the company using the AI isn’t always the one that built it. Introducing more parties into the equation raises more liability questions.

If something goes wrong during a clinical trial that relies heavily on AI, who’s responsible for the failure and the resulting damage? Is it the team that built the model, the model itself or the company that deployed it? The lack of clear answers poses ethical risks to the participants whom these mistakes would affect.

Clinical Trial AI Is Promising but Poses Ethical Questions

These ethical considerations don’t necessarily mean AI is too risky to use in clinical trials, but organizations should approach it carefully. Before rushing into AI implementation, medical researchers should consider how to account for these issues.

Clinical trial AI requires considerable oversight and slow, limited adoption to avoid ethical problems. Optimal paths will emerge as more businesses ask these questions, but that will take time. Until then, medical companies should temper their expectations and take a slower, more careful approach.