Artificial intelligence, or AI, is having a moment. By now you've surely heard something about how AI is going to revolutionize this or that. Maybe you're excited about that, maybe you're apprehensive, or maybe you think it's all hype.

My background is engineering, but I've worked in healthcare for many years now. One thing I can say, with love and respect, is that healthcare workers are, on average, under-equipped to understand AI: what it is, what it can do, how it works. That's understandable. However, that's not going to stop AI from affecting you, your patients and the way you work.

Let me offer a few things I think are useful to understand about AI. Consider this the 2-minute survival guide for healthcare workers.

  1. AI really boils down to a math problem. 

The math might be complex, but the principle is simple: We provide examples to a network to "train" it, adjusting the network's parameters until it does an acceptable job of making predictions without simply memorizing the examples. (Fun fact: Some types of AI are excellent at memorization. Also, memorization is bad in this case.)
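For the technically curious, here's what that train-and-check loop can look like, sketched in Python with scikit-learn. Everything here is illustrative: the dataset is synthetic, standing in for real clinical examples, and the model settings are arbitrary. The point is only the pattern of holding examples back to catch memorization.

```python
# A toy version of the train-and-check loop. The synthetic dataset stands
# in for real clinical examples; nothing here is a production setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 2,000 made-up "examples," each with 20 features and a yes/no label.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Hold a quarter of the examples back; the network never trains on them.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# "Training" adjusts the network's parameters to fit the examples it sees.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# If the first number is high and the second is low, the network memorized
# the examples instead of learning the underlying pattern.
print("accuracy on seen examples:  ", model.score(X_train, y_train))
print("accuracy on unseen examples:", model.score(X_test, y_test))
```

That gap between performance on seen and unseen examples is the whole game; everything else is refinement.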

  2. AI can be trained to do nearly anything we ask, as long as we can provide enough examples. 

Reading X-rays, CT scans or medical documents to look for conditions that are active and relevant? Yes, absolutely, and happening today. Looking at patient documentation to make determinations about levels of care? Yes, there was an article here just recently describing that very situation. If you can provide examples, AI will mimic what you do, only faster and with more consistency. In the time it takes you to read this sentence, AI can read hundreds or even thousands of documents simultaneously and make recommendations.

  3. When I say “enough examples,” I mean a huge number of examples — millions, often more. 

This is the barrier to entry for most organizations. Years ago, when someone said, “Data is the new oil,” this is what was meant. If you don't have a large amount of data to train your artificial intelligence, you aren't getting very far. It's worse than that, though: all your data must be correctly classified before you can even start. Ask any of my coworkers; this labeling task is a huge, tedious process. (They've done it several times and most of them are still very friendly with me at company gatherings. They really are great people.)

  4. Nearly all AI is going to be proprietary, for now, for at least two reasons. 

First, AI is still expensive to develop, so most organizations need to keep the technology private to recoup the cost. Second, it's often not exactly clear how an AI arrived at its decision, which makes some people uncomfortable. We call this “the black box”: we pass in the input information and get an answer out, but it's not always clear how. The black box just works. (I'm oversimplifying here. There are things we can do to better understand what the model considers important.)

AI is not appropriate for every problem. Recently my team asked if we could build a neural network (a type of AI) to scan our documentation for a particular type of problem. After careful analysis, we determined the problem occurred so infrequently that we'd need to manually score thousands of additional examples to have any chance of classifying it accurately. In this case, the cost probably outweighed the benefit. Lots of problems in healthcare look like this: any problem that occurs very rarely is difficult for AI.
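A quick back-of-the-envelope calculation shows why rarity is such a trap. The document counts and rate below are invented for illustration, not figures from our project.

```python
# Back-of-the-envelope math on a rare problem. All numbers are invented.
n_documents = 100_000
rate = 1 / 1000                       # the problem shows up once per 1,000 docs
positives = int(n_documents * rate)   # only 100 real examples to learn from

# A "model" that always answers "no" is wrong only on the rare positives.
always_no_accuracy = (n_documents - positives) / n_documents
print(f"true cases available to learn from: {positives}")
print(f"accuracy of always saying 'no': {always_no_accuracy:.1%}")  # 99.9%
```

A lazy model that never flags anything looks 99.9% accurate while catching zero real cases, and with only 100 true examples in 100,000 documents, there's very little for a network to actually learn from.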

Bias is another problem. There are plenty of opportunities for bias to find its way into AI. The data can be biased by over- or under-representing some class. We have ways of compensating, but they aren't perfect. Your data source might not represent the population you will ultimately use your AI with. Even the process of generating training data can itself be biased.
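To make “ways of compensating” concrete, here's one common technique, reweighting the under-represented class during training, sketched with scikit-learn on synthetic data. The 5% minority rate is an assumption for illustration, and as noted above, this compensates rather than cures.

```python
# One common (and imperfect) compensation: weight the rare class more
# heavily during training. Synthetic data; the 5% minority rate is assumed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_tr, y_tr)

# Recall = the share of true rare cases the model actually catches.
# Weighting usually raises it, at the cost of more false alarms.
print("rare cases caught, unweighted:", recall_score(y_te, plain.predict(X_te)))
print("rare cases caught, weighted:  ", recall_score(y_te, weighted.predict(X_te)))
```

Notice the tradeoff in the comments: catching more of the rare class usually means more false alarms, which is exactly why these fixes are compensation, not a cure.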

Despite these problems, the benefits of AI have so far outweighed the concerns. You are going to experience AI much more in the workplace as time goes on. Some questions you should be asking when you hear AI is going to assist with some task:

  • What data was used for training?
  • How were the targets defined? How did we decide which samples were “good”? Is there consensus on that training data?
  • How accurate is this AI? Is it biased more towards “no” than “yes”? Speaking from experience, you can build two AI systems using the same training data and get very different types of biases. (The sketch after this list shows one simple way that happens.) 
  • Did this AI replace human decision making? Does a human review the results? How often does the human override the AI decision?
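On that yes/no bias: one mundane source is the decision threshold chosen at deployment. This sketch (synthetic data, made-up thresholds) shows the same trained model leaning toward “yes” or toward “no” depending on a single number someone picked.

```python
# The same trained model can lean "no" or lean "yes" depending on one
# deployment choice: the decision threshold. Synthetic data, made-up thresholds.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
yes_confidence = model.predict_proba(X_te)[:, 1]  # model's confidence in "yes"

for threshold in (0.3, 0.5, 0.7):
    yes_rate = (yes_confidence >= threshold).mean()
    print(f"threshold {threshold}: answers 'yes' {yes_rate:.0%} of the time")
```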

That last question, about human review, is important. I would be uncomfortable deploying any modern artificial intelligence system into a healthcare environment without a human reviewing all the results. It's not that AI isn't good; I'm still amazed at how good it can be. My concern is that the stakes are so high. Fortunately, I have influence over that and require all our AI results to be reviewed by clinicians. 

There is and will continue to be great temptation to abdicate decision making in healthcare to algorithms. Some will. It’s cheaper and faster. But that’s always been the case. We’re just using new algorithms now.

These new artificial intelligence tools, at their best, are just that — tools. Tools to assist experts in making better decisions. It should be about productivity and accuracy. It should be about missing fewer details. At its worst, AI is used as cover for decisions someone isn’t proud of. Remember this: AI reflects the values of its creators. Our goal as builders of AI should be to empower clinicians to be better. 

As a fan of irony, I was tempted to use ChatGPT to write this article. Out of respect, I didn't. All the words here are my opinion, not that of my employer or the kind people who are allowing me to use this space.

Joe Eaton is an electrical engineer by training and has spent the last 20 years in healthcare developing software. He's the only RAC-certified electrical engineer in the world. (Yes, he asked.) The past eight years have been largely focused on using artificial intelligence to improve compliance and accuracy. He's also a huge fan of probabilistic modeling. Some might say insufferable, but Joe thinks that's harsh. He works for Broad River Rehab. You can email him.

The opinions expressed in McKnight’s Long-Term Care News guest submissions are the author’s and are not necessarily those of McKnight’s Long-Term Care News or its editors.