How far do we need to go and what can we do to accelerate progress?

Healthcare organizations of all types are increasingly signaling their full commitment to generative artificial intelligence (GenAI), the kind of AI that can quickly classify data, summarize information, and create new audio, image, and text content. While there are hundreds of exciting use cases for generative AI across the payer, provider, and patient communities, there is still much work to be done before stakeholders have broad access to safe, trusted, and effective AI tools at scale.

As we navigate the challenges of this critical problem-solving phase in the evolution of AI, leaders in the field must work closely with each other and regulators to create appropriate guardrails, set expectations, and drive innovation.

As Chief Technology Officer, I firmly believe that a collaborative approach and a focus on patient-centric outcomes are essential to supporting more effective care through AI, especially in chronic disease-related services. Recently, I had the opportunity to reflect on the impact generative AI is already having, and on what the industry needs to do to move closer to the promise of an AI-driven healthcare ecosystem.

How mature is AI in terms of care support decisions?

There is no doubt that AI for care support decisions is becoming more sophisticated by the day, especially in the area of generative AI. We are beginning to better understand how AI tools can identify patterns and help us use those insights to make decisions in everything from radiology reports to sepsis detection to chronic disease management.

However, none of these algorithms are mature or reliable enough to be used entirely on their own. These tools support clinicians’ decision-making, but do not replace it. Keeping a human in the loop remains a must as we work through the teething issues of AI, including the possibility of bias, hallucinations, and inaccuracies.

I’m optimistic that we will resolve these issues quickly, but we are still at the very beginning of the maturation phase. For now, people still need to make the final decisions about care, so that we can be sure we are providing the best possible treatment to the people in our charge.

Where do you see the greatest short-term potential for AI to disrupt current approaches to chronic disease treatment?

What excites me most is the potential of AI to support long-term treatment adherence, particularly adherence to continuous glucose monitoring (CGM) in people with diabetes. AI has already demonstrated its power in predicting treatment abandonment based on clinical and administrative data. But adding new data sets, such as patient behavioral data and socioeconomic data, will allow us to truly understand how to personalize interventions based on risk classifications and provide patient-specific education and support at precise points in their self-care journey, up to several months earlier than we can currently.

This is critical for adherence to CGM therapy, which requires people with diabetes to review their own data several times a day and use that information to make decisions about diet, exercise, and other lifestyle factors.

If someone is showing signs that they are about to fall off therapy, and we can use AI to intervene proactively to keep them on track, that fundamentally changes the traditional model of chronic disease management, which is often extremely reactive rather than proactive. We know we can improve outcomes and experiences with this approach, and we’re already proving we can save money too: up to $2,200 per patient per year through improved CGM adherence and better glycemic control.
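To make the idea of risk-based triage concrete, here is a minimal sketch of what an adherence-risk stratifier could look like. Every feature name, weight, threshold, and intervention tier below is a hypothetical illustration for explanatory purposes, not CCS’s actual model; a production system would learn these from clinical, behavioral, and socioeconomic data rather than hard-coding rules.

```python
# Illustrative sketch of CGM adherence-risk triage.
# All features, weights, and thresholds are hypothetical, not a real clinical model.
from dataclasses import dataclass


@dataclass
class PatientSignals:
    days_since_last_upload: int  # gap since the last CGM data upload
    scans_per_day_avg: float     # how often the patient reviews readings
    refills_missed_90d: int      # missed sensor refills in the last 90 days


def adherence_risk_score(p: PatientSignals) -> float:
    """Return a 0..1 score; higher means more likely to abandon therapy."""
    score = 0.0
    if p.days_since_last_upload > 7:   # week-long data gap is a warning sign
        score += 0.4
    if p.scans_per_day_avg < 3:        # CGM therapy expects several checks daily
        score += 0.3
    score += min(p.refills_missed_90d, 3) * 0.1
    return min(score, 1.0)


def triage(p: PatientSignals) -> str:
    """Map the risk score to a proactive intervention tier."""
    s = adherence_risk_score(p)
    if s >= 0.7:
        return "outreach-call"   # proactive clinician contact
    if s >= 0.4:
        return "nudge-message"   # automated education or reminder
    return "monitor"
```

The point of the sketch is the workflow, not the weights: scoring runs continuously on incoming device and claims data, so an at-risk patient surfaces for outreach months before a lapsed refill would otherwise be noticed.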

In your opinion, what positive impact will AI have on diabetes patients and the doctors who care for them?

AI will help us accelerate the transition from “disease care” to more proactive, personalized, and preventative care. AI will help physicians “get ahead of the curve” by providing predictive capabilities based on data sets that are far too large and complex for a human brain to process alone. Knowing more, and knowing sooner, will allow physicians to truly begin working toward the long-standing Quadruple Aim goals: better experiences (for physicians and patients), lower costs, and better outcomes—which has been an extremely challenging endeavor in the diabetes space. The key to success will be creating a seamless, interoperable, and reliable data ecosystem to inform our AI tools and equitably distribute access to high-quality outcomes across populations so that all patients with diabetes have access to personalized, preventative, and holistic chronic disease management.

In your opinion, what positive impact could AI have on the specialists who care for diabetics?

The shortage of care providers is hitting the diabetes world particularly hard. There are not enough endocrinologists to offer specialty support and nowhere near enough primary care physicians to fill these gaps for the 38.4 million people with diabetes, not to mention the 97.6 million with prediabetes. It will become imperative to use AI to augment the capacity of our human care providers so they can practice confidently and meet the needs of this growing population.

When patients win, specialists and physicians of all kinds win, so better outcomes and lower costs are mutually beneficial. More specifically, we have an opportunity to use AI as a workflow enhancer and intelligent assistant to help overburdened providers identify potential problems earlier and more frequently before they become full-blown crisis events.

As a CTO, what concerns do you have about bias towards AI in healthcare?

We need to be aware that our systems and data sets will carry some bias, depending on how the data is collected and managed. I think the most important thing is to develop a test-and-learn approach as an organization, so that the impact of bias on a model’s recommendations can be identified early and corrected.

Regulators, nonprofits, and industry consortia are currently issuing specific regulations for AI in healthcare. Are there any areas where you feel these groups are “missing the forest for the trees”?

The field of AI in healthcare is evolving rapidly, so it is critical for regulators to provide a comprehensive perspective on how AI can and should be used safely and effectively in healthcare. It is vital that regulations move beyond their current state, particularly in relation to privacy, security, bias, and transparency.

In the United States, AI adoption has been more measured, with a focus on guidelines and frameworks that lay the groundwork for future regulation. In contrast, the European Union (EU) has been quicker to introduce regulatory guardrails, which may prove difficult or burdensome to implement.

The protection and security of personal information are of paramount importance in healthcare, and it will become important to use generative tools in a way that preserves privacy. The key here is to establish basic, pragmatic guidelines that specify where and how generative AI is used for the benefit of the patient, along with requirements for transparency and bias mitigation in the use of AI tools. Given the speed at which some of these tools are being deployed, it is important that regulation in the United States is accelerated for the benefit of the patient, without necessarily mirroring the EU’s approach, which has the potential to stifle rather than encourage AI innovation.

By staying united on regulatory issues while ensuring effective collaboration between developers and users, we can create an environment that balances safety, security and accessibility so that everyone has the opportunity to reap the benefits of AI.

Richard Mackey is Chief Technology Officer at CCS, a company revolutionizing chronic care management by combining medical devices and assistive technology with comprehensive patient education and counseling on a single platform.
