Artificial intelligence (AI) in healthcare is a rapidly growing field that has the potential to revolutionize the way medical professionals diagnose and treat patients. The use of AI algorithms and machine learning techniques can help healthcare providers to analyze large amounts of patient data and identify patterns that may not be immediately apparent to human clinicians. 

This can lead to more accurate diagnoses, personalized treatment plans and improved patient outcomes. However, the implementation of AI in healthcare also raises important ethical and regulatory considerations that must be carefully addressed to ensure patient safety and privacy. 

I didn’t write that paragraph. I asked an AI paragraph-generator to write it. Pretty good, right? And I agree with it. Those of us working in healthcare do the best we can with our education, research and practical experience, but a program that eliminates human error might be better for clinicians and patients. 

AI is not some omnipresent, omnipotent Skynet-style Terminator overlord designed to rule and supplant humans. That's dystopian sci-fi popular culture. As currently used, AI simply takes a problem, scours the internet (that is, all of our uploaded human knowledge) and filters it to find the right answer and the best practice. It doesn't think for itself (yet); it just takes what we know, compiles it and delivers it.

My husband was racking his brain to recall the name of a book he read in his youth. He gave AI the basics of the book, and it gave him the title, the author and the plot. All of it was wrong. AI made it all up and smugly delivered it as though it were fact. It will not always give the answers we're looking for.

Last fall, CNN commentator Jake Tapper's daughter came to the ER and was diagnosed (by humans) with a viral infection. The diagnosis was wrong: She actually had life-threatening appendicitis. She ended up in the ICU with hypovolemic shock and sepsis before finally getting the treatment that saved her life. Using AI to analyze her symptoms could have saved precious time. Had they sent her home with that "viral infection" diagnosis, this story would have ended differently.

Why are we afraid of AI? We are already using it. When we plug our vacation destination into our smartphone maps, the app doesn't just plan our route; it checks for construction and accidents and changes the route accordingly. When you ask Google a complex, multi-layered question, like "It's Tuesday. Where can I get the best tacos near my hotel?" the AI gets your location, finds taco restaurants nearby, filters through reviews and menus, finds your tacos and gives you directions. When you get a complicated patient admitted on a Friday night, with orders for treatments and medications your staff hasn't seen since school, you can bet they're on the computer looking for the best options and best practices. While that may not be the purest definition of AI, the algorithm that got them to the answer is.

According to the Royal College of Physicians, a number of research studies suggest that AI can perform better than humans at key tasks like diagnosis, and that it is outperforming radiologists at interpreting tests and finding malignancies. But the human element of diagnosing, treating and practicing medicine more broadly will always be required, not just to interpret AI's findings, but to separate the possibly erroneous ones from the truth. Our doctors, nurses, therapists, dietitians and every other clinician will use AI as an adjunct to practice, but they must be discerning enough to know what to use and what to discard. While AI can see a fracture on an x-ray, it is pulling that ability from all the x-rays and all the human radiologists who have read x-rays correctly before.

As clinicians and caregivers, we pride ourselves on our ability to determine the best course of action for our clients and patients. We used to be wary of telemedicine because of the purported impersonal nature of delivering care from a screen. No more. I love getting blood work done close to home, and seeing my Zoom doctor a week later without having to drive anywhere, park, wait in a cold treatment room, and then drive home. 

Going forward, AI is likely to be our practice companion, our assistant when we need one, and our reference for the treatments that deliver the best outcomes. Looking to the past as "the good old days" has never helped anyone, and with the leaps being made in the sciences and technology, our best practices lie in the future.

Jean Wendland Porter, PT, CCI, WCC, CKTP, CDP, TWD, is the regional director of therapy operations at Diversified Health Partners in Ohio.

The opinions expressed in McKnight’s Long-Term Care News guest submissions are the author’s and are not necessarily those of McKnight’s Long-Term Care News or its editors.