Carl Aschkenasi

Duct Tape and Dreams: The Reality of Deploying AI “In the Wild” 

Full disclosure: I know next to nothing about software engineering. Or data security. And I understand the basic math behind AI only because a very smart dude sat me down one day in front of a whiteboard and broke it down for me. That was six years ago. To be honest, like most radiologists, I work hard, do my CME and read the occasional journal article that interests me. Sometimes I go to a conference. Despite working at one of the leading medical AI companies in the world, I have come to terms with the reality that I will never fully comprehend what it is all these smart young people do at Aidoc.

Which positions me perfectly to explain it to you. Because I am unencumbered by a detailed understanding of all that goes into algorithm development, and because I routinely view the process from 30,000 feet, I think I may be able to break it down for a garden-variety 21st-century radiologist like myself.

The part I want to discuss here is actually the final “step” in algorithm development: releasing it into the wild. I am proud to say I’ve seen this happen a few times with the many algorithms Aidoc has produced over the years. I can tell you, candidly, it’s not always pretty.

When developers at an academic institution wish to develop an image-analysis AI algorithm for, let’s say, predicting which renal lesions at CT are likely to be renal cell carcinoma, they are by now able to collect and organize the massive amount of data required, design the algorithm, recursively train and test it, and get it to the point where, with some reasonable level of sensitivity and specificity, it can do its job. But when these researchers then triumphantly march down the road to the University Medical Center and attempt to deploy this solution in the radiology department’s PACS, they are met with a cold-water bath. Assuming they can even get permission to fiddle with the PACS servers and/or software, they are likely to encounter a system built in the early 2000s or even the 1990s; massive hub-and-spoke medical systems can’t afford to swap out their PACS software at the Moore’s-law pace of technological advancement. There is no peripheral port on a scanner or PACS server labeled “AI input.”

What results is a very idiosyncratic, custom-built solution to get their renal cell carcinoma detector to operate on the relevant studies, process them and display the results in a usable fashion, all without crashing the PACS or slowing it down, and with a turnaround time that keeps its output relevant.
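For the technically curious, the “glue” in these one-off deployments is often little more than a DICOM listener bolted onto the hospital network, catching studies as they arrive and queuing them for the model. Here is a minimal sketch in Python using the open-source pynetdicom library; the AE title, port and queue-to-disk approach are illustrative assumptions on my part, not a description of any particular site’s integration:

```python
# A bare-bones DICOM receiver: the "duct tape" that catches CT images
# off the PACS network and queues them for an AI model to pick up.
# Illustrative sketch only -- real integrations also have to route
# results back into the PACS, which is where things get ugly.
from pathlib import Path

from pynetdicom import AE, evt, StoragePresentationContexts

QUEUE_DIR = Path("incoming")  # hypothetical folder a model process watches
QUEUE_DIR.mkdir(exist_ok=True)

def handle_store(event):
    """Save each received CT image to the inference queue."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    if getattr(ds, "Modality", "") == "CT":  # crude study filtering
        ds.save_as(QUEUE_DIR / f"{ds.SOPInstanceUID}.dcm")
    return 0x0000  # DICOM status: Success

ae = AE(ae_title="AI_GLUE")  # hypothetical AE title
ae.supported_contexts = StoragePresentationContexts
# Listen for C-STORE requests forwarded by the PACS or scanner.
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```

Everything after that point, i.e. running the inference, formatting the findings, and pushing them back into the PACS or worklist so a radiologist actually sees them in time, is exactly the custom, site-by-site work described above.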

Now, again, I have no idea how this works in terms of lines of code, neither when it works well nor when it doesn’t. But I used to repair old motorcycles, back when I was a young man who wanted transportation and lacked the money for a car. I can tell you this: in many cases, when it comes to deploying an AI algorithm in a medical setting, the typical AI developers are not using the proverbial OEM parts. They’re not even using aftermarket parts. They are using the equivalent of duct tape, Bondo body filler and spot welding. That works for a 19-year-old’s street bike, but it’s suboptimal for a PACS or EHR.

Of course, we now live in a world where university researchers are no longer the only ones developing AI algorithms. There are many companies, like Aidoc, that develop, sell and successfully deploy whole sets of algorithms across a variety of hospital systems. I can’t speak for other companies, but I know that Aidoc has the most FDA-cleared solutions deployed across what is arguably the industry’s widest diversity of medical settings around the globe. Our success is built on a lot of late nights, sweat, tears and innumerable pizza deliveries, consumed by some of the brightest people I have ever met. Their early experience releasing our algorithms into the wild, which, admittedly, more closely resembled the duct-tape and spot-welding model described above, has been leveraged into a sophisticated algorithm-delivery platform that can be rapidly and securely deployed and supported in nearly any setting we’ve encountered. And what impresses me most about this platform is not even that it approaches true system agnosticism, but rather that its developers now have a proper toolbox for the installations that don’t fit the mold. No duct tape.

And this platform approach, which was so critical to Aidoc’s early growth, pays off in two directions. Not only does it mesh well with large, sometimes clunky hospital systems, but it also allows the (nearly) effortless insertion of an ever-expanding array of new algorithms into the system. This is important because, as my radiology colleagues will attest, the current commercial offerings in radiology algorithms necessarily address the “low-hanging fruit” issues in radiology diagnosis, management and throughput. For example: intracranial hemorrhage. A tiny bit of white on a head CT could mean a bleed, and missing it could be devastating. It’s an extremely common indication on a very common study, and the finding is sometimes very subtle. Excellent substrate for an image-analysis algorithm. But renal cell carcinoma? Differentiating ground-glass opacity (GGO) patterns in the chest? Analyzing patterns in MRI of brain tumors? These are difficult assignments, for rarer conditions, and are unlikely to be readily taken up commercially, at least in 2024.

These are nonetheless important indications. Algorithms of this sort are, in a sense, the computing equivalent of orphan drugs, and patients deserve to benefit from them. Most algorithms published today still come from single-algorithm outfits at academic medical centers. If and when they are ready for release, a platform that can support that deployment is the surest way to get them into practice, and it allows those “low-prevalence” algorithms to improve more rapidly by training on more cases.

Click here to learn more about Aidoc’s aiOS™.
