Monday, July 27, 2020

Full benefits of AI in radiology require evaluating human impacts

Elizabeth Krupinski, PhD, is Professor and Vice Chair for Research in the Department of Radiology and Imaging Sciences at Emory University School of Medicine. She is a member of the Academy for Radiology & Biomedical Imaging Research. Prior to joining Emory, Dr. Krupinski was a Professor at the University of Arizona in the Departments of Radiology, Psychology and Public Health, and was also Vice Chair of Research in Radiology. She is past chair of the SPIE Medical Imaging Conference, past president of the American Telemedicine Association, and past chair of the Society for Imaging Informatics in Medicine. Dr. Krupinski's research interests are in medical image perception, observer performance, medical decision making, and human factors as they pertain to radiology and telemedicine. She received the Academy of Radiology Research Distinguished Investigator Award in 2014 and the ATA President's Award for Individual Leadership in 2017. She serves on a number of editorial boards for both radiology and telemedicine journals and is the Co-Editor of the Journal of Telemedicine & Telecare. She serves regularly as a grant reviewer for the NIH, DoD, TATRC, and other federal, state, and international funding agencies and has served as a member of a number of FDA review panels. She is a frequent and sought-after speaker at medical imaging conferences. PARCA-eNews spoke to her by phone about her perceptions and observations about artificial intelligence (AI) implementations in radiology.

Q. I wanted to start with an "AI in radiology" status update. Where are we with AI? How is it being implemented in radiology, and what are some examples of good or positive impacts?

In terms of implementation I would say sporadic and scattered. It is important to realize there are a lot of different types of AI applications out there. I mean, it's not just image analysis, image segmentation, and identification of lesions. There are algorithms for reducing dose, for better positioning of the patient being imaged, for improving the workflow through departments, for helping with billing and scheduling, and all sorts of other things, so AI is rather pervasive in our healthcare system already. It's not always called AI.

As for the sexier AI implementations that grab people's attention, the algorithms that do image analysis and image segmentation, those fall into the sporadic category I mentioned. The other implementations are probably far more embedded in our healthcare system than we are aware of, more so than those specifically devoted to what the radiologist does. I think a lot of that is because many of these schemes are out there and a lot of companies are developing them, but they're not approved yet. They haven't been through the types of validation studies I was talking about, which ask whether they actually impact clinical decision-making performance.

There are a lot of studies out there showing that this or that algorithm achieved an area under the ROC curve of 0.86, or whatever, and concluding it would therefore be great in the clinic, but they don't go beyond that. Achieving a certain level of stand-alone performance may be enough for some level of approval, but there's far more to it than that. From my perspective, it is that final stage of research that is really required for these schemes to have a true impact.
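
To make the distinction concrete, the sketch below shows the kind of stand-alone evaluation being described: computing an ROC AUC for a hypothetical detector. The labels and scores are synthetic placeholders, not real study data, and a real validation would still have to go on to an observer study measuring the effect on radiologists' decisions.

```python
# A minimal sketch of a stand-alone evaluation: scoring a detector's
# output against ground truth with an ROC AUC. The labels and scores
# below are synthetic placeholders, not real study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical ground truth (1 = lesion present) and detector confidence scores.
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(0.35 * y_true + rng.normal(0.4, 0.25, size=200), 0.0, 1.0)

print(f"Stand-alone AUC: {roc_auc_score(y_true, y_score):.2f}")
# A figure like 0.86 here says nothing about whether the tool changes a
# radiologist's decisions; that requires a separate observer study.
```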

In that respect I would say it is probably more the academic medical centers and large healthcare systems that have some AI implementation. There are programs out there that are certainly helping radiologists with measurements, with highlighting features, with finding some lesions and pointing them out to the radiologist, but it is certainly not at the point that I would call pervasive at all.

Q. In an article in Aunt Minnie I gather that you're raising concerns about whether the research that's going into developing the algorithms is asking all the right questions in terms of the impacts that AI has on patients and radiologists. Can you give me some examples of what researchers are missing?

There's a lot of research in the development phases, which is critically important because we've got to have good algorithms in the first place. The studies that demonstrate they can achieve a particular area under the ROC curve are essential, but that gets us only so far through the process of what I would call the complete imaging chain, the decision chain.

It is not that they're really missing anything per se. It's just that the people who develop these algorithms typically are not the ones who have the experience or the skill sets to take it to the step I am recommending, which is to see how it impacts performance once implemented.

You need very different types of investigators to do that. And so the question becomes, with these algorithms that are being developed, these AI tools, at what point do they reach the level of acceptability where an algorithm can be handed off to somebody else for a true evaluation? Those types of studies take a lot of work. They also take investment, so the size of the company and how much it has invested in getting its product to market determine whether it will actually do that next stage.

There are some in the AI space that develop an algorithm, run it on a set of images to get a publication or win an AI challenge, but care little about whether it's commercialized and used clinically in the long run. A lot of these studies are done simply because they can be done. It is a smaller set of investigators running the studies that will take it that next step.

What is missing is the interface or mechanism to help those who are developing the algorithms, typically engineers, computer scientists, and imaging informaticists, to then take it and say, okay, that was really great, it looks like it's going to have an impact, let's prove it. That's a different set of people and a different agenda, and I think it's that connection that is really missing.

Q. For the prove-it part, are you basically talking about the bottom line: does AI actually result in better patient outcomes? Is that what you're talking about?

Kind of, yes. Once it is implemented clinically, the main question is: is it worth implementing clinically? It's one thing to develop tools and show that they can independently achieve some level of accuracy on a set of images. It's another thing to see if it truly has an impact: does it somehow change the decision-making process of the radiologist by providing better or more relevant information than they have now, and does that actually impact patient care?

You've got the algorithm development, and then you've got whether it impacts the way radiologists work. That's a whole different type of study. Then the third area is whether it actually impacts patient care and outcomes. That's a totally different area of investigation, and it gets even harder to do because measuring outcomes is even more difficult. So you really have three main buckets that need to be involved to see if all of this truly has an impact on care.

Q. It sounds like we're in the post-hype stage that follows any new technology. The initial expectations were really high, but now we're in the proving-the-concept stage, and it's going to be a lot more complicated to actually prove that this is valuable and useful in healthcare. Would you agree?

Yes, it is something that isn't going to happen overnight. I would say it is going to take five to ten years to move beyond bucket number one. I think we've started to get increasingly involved in bucket number two; there are more studies being done in this second bucket of evaluating AI's impact on the diagnostic process, the diagnostic workflow, and so on. That third bucket, however, is a ways off yet. I would say the second bucket is going to happen within the next five years, and people are starting to go there, but getting to that third bucket is going to take the next five to ten years, really.

Q. In terms of that workflow bucket, one of the things that seems like a challenge is the IT infrastructure and the impact of AI on workflow. I read an article where some of the early implementations are running into traffic jams, with images, annotations, reports, and other data being routed to an EMR, PACS, and other systems. Has that been a problem, and how is it being addressed?

I think right now a lot of the problems are in bucket one, trying to get enough images and data to really train these algorithms, and that involves getting data from one institution to another. Right now there's a lot of effort going into that at the national level. There are people submitting grants for centers of excellence that would help with this. They would collect images, curate them, and do all the other things required for good AI development, and then validate the algorithms on larger data sets.

There are people paying attention to that and there's funding being allocated, so that's good. But in that second phase, where it's actually being implemented clinically, there are instances where the existing infrastructure does not ideally match what has to be done.

I mean, ideally some of these algorithms being developed are pulling information from images, from the electronic health record, and from other sources, and trying to combine all of it. Depending on the institution, the level of sophistication of its IT systems, how well they're integrated, and so on, it can be incredibly difficult for a real-time scheme to do what it was intended to do: pull information in real time from the PACS, the RIS, the hospital information system, and the quality information system, analyze all of that data in real time, and pump out the answer. That's not quite there yet.
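
As a rough illustration of that integration problem, here is a hypothetical sketch of what such a real-time scheme has to do: query several hospital systems concurrently and give up gracefully when one of them cannot answer in time. Every interface name below is invented for illustration; a real deployment would sit behind DICOM, HL7, or FHIR gateways.

```python
# Hypothetical sketch: gather inputs for a "real-time" AI scheme from
# several hospital systems at once. All function names are invented.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

def fetch_pacs_images(study_id): ...    # e.g., a DICOM C-MOVE or WADO-RS call
def fetch_ris_report(study_id): ...     # e.g., an HL7 ORU query
def fetch_ehr_history(patient_id): ...  # e.g., a FHIR Observation search

def gather_inputs(study_id, patient_id, timeout_s=5.0):
    """Pull from the PACS, RIS, and EHR concurrently; fail fast when any
    source is too slow for a real-time scheme to wait on."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {
            "images": pool.submit(fetch_pacs_images, study_id),
            "report": pool.submit(fetch_ris_report, study_id),
            "history": pool.submit(fetch_ehr_history, patient_id),
        }
        try:
            return {name: f.result(timeout=timeout_s) for name, f in futures.items()}
        except FuturesTimeout:
            return None  # degrade gracefully, e.g., fall back to an image-only model
```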

Again, that's down the road. Right now a lot of the schemes being developed are "uni-taskers": the scheme will find the pneumothorax, or the pulmonary nodule, or XYZ. The problem is that in real life you need to find everything from A all the way through Z; you've got to find all the things that a radiologist looks for to make it really useful. So I think the ultimate goal, once we get to the implementation phase, is going to take a whole different set of tools that we're going to have to develop to make it happen.

Q. In a blog post, PACS expert Herman Oosterwijk suggests that there's going to be a need for another kind of software, a so-called "AI conductor," that will orchestrate the flow of images, and that's just going to be another layer of data instructions impacting workflow. He said that your typical DICOM router is simply not up to that task. Is that sort of thing also under development?

I think people are starting to. It gets at what I was talking about: people are realizing that to really make this happen takes much more than one little scheme or tool that can find a nodule. It is far more complicated than that if we are really going to unleash the massive potential of AI on a large scale. There is going to be a period between now and seeing AI's full capacity where we progressively add on: now we can do this, and now we can do this and this.

At some point in the future we'll have AI that will really do what everybody thinks it can do, which is pull information from all these different sources and analyze it for every possible disease, using any possible information from that patient's life history, but we're nowhere near that. In the long run it will happen, but it is going to take a very different set of tools that deal more with data integration and data workflow than with the AI schemes themselves. A big part of AI is going to deal with just this problem, a very different type of AI than analyzing the image; there's more to it than that. And yes, I think there are people starting to look at that.
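
For illustration, here is a toy sketch of what such an "AI conductor" might look like: where a DICOM router applies static destination rules, a conductor inspects each study and decides which algorithms to invoke and in what order. All rules and service names below are hypothetical.

```python
# Toy "AI conductor": map a study's modality and body part to an ordered
# plan of AI services. A production conductor would also handle
# priorities, retries, and merging results back into the PACS and
# reporting systems. All service names are hypothetical.
ROUTING_RULES = {
    # (modality, body part) -> ordered list of AI services to invoke
    ("CR", "CHEST"):  ["pneumothorax-detector", "nodule-detector"],
    ("CT", "HEAD"):   ["hemorrhage-triage"],
    ("MG", "BREAST"): ["density-scorer", "mass-detector"],
}

def plan_pipeline(modality: str, body_part: str) -> list:
    """Return the ordered AI services to run for a study; an empty plan
    means the study is simply forwarded to the PACS as usual."""
    return ROUTING_RULES.get((modality, body_part), [])

# A chest radiograph fans out to two single-task tools; the conductor
# would merge their results before the PACS or reporting system sees them.
print(plan_pipeline("CR", "CHEST"))
```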

Q. A colleague of yours from the Center for Ethics at Emory, John Banja, was saying that the level of multitasking a typical radiologist does on a daily basis is so far above where we are with computing technologies that he thinks getting to where AI is going will require replacing silicon-based transistors with organic biochips, carbon nanotubes, or other advanced technologies that are just beginning to be developed. Would you agree with that?

Absolutely, yes. I mean, if you just take a simple chest image, which most people are familiar with, there are well over 250 possible diagnoses if you really go through and look at everything you can look for. A lot of times it's obvious: the patient has pneumonia. But it is far more complicated to give the AI tool an image and expect it to find everything. A piecemeal approach is not sufficient.

I agree that we're going to need advances not only in the AI but in the underlying technology itself. I agree with John 100 percent.

Q. That point about making it happen in a reasonable amount of time brings me to my questions about AI's impact on PACS administrators and their jobs, in terms of keeping their PACS working smoothly for the radiologists. What do you think are some of the most significant impacts of AI on PACS administrators?

I don't have a lot of experience in that, other than to know that when AI is done right, I think it's going to make their jobs easier. The tools for properly positioning the patient, getting the proper dose, helping determine the best workflow through the imaging rooms, properly routing the images, and so on: I think AI is going to have a huge impact there. So from the PACS administrator's perspective, some of these tools being developed will absolutely help with the efficiency of what they do, because there are going to be enterprise-wide tools that can help them with data flow.

Q. What will PACS administrators need to do to keep up with the developments, prepare for AI, and evolve their skills for the future?

I think it's exactly what you said. It's keeping up with things, which is incredibly difficult sometimes. There are professional societies, the Radiological Society of North America (RSNA), the Association of University Radiologists, and the Society for Imaging Informatics in Medicine (SIIM) in particular, that PACS administrators could go to; I think the ARRT is one as well. These meetings have updates every year, training sessions, and workshops where people can go to find out what's happening in the field and how to stay on top of things.

It's not just going and listening to the scientific talks. Those are interesting, but it is really the workshops and the more tutorial types of sessions. SIIM in particular is very good at that, as is RSNA, in having talks directed at the broader audience, not just radiologists.

Things are changing so quickly that it really does take attending the annual meetings on a regular basis, but also trying to keep up with the literature through blogs, webinars, journals, and so on, and working through some of the basic and advanced tutorials.

I don't expect PACS administrators to go out and learn how to develop an AI scheme; that would be silly, and it's not their job. But they should have an understanding of what these tools are all about and how they can have an impact, and more importantly, somebody in their position should be able to evaluate whether or not a tool is going to help them and their team in the radiology enterprise.
