In the nine years since AlexNet spawned the era of deep learning, artificial intelligence (AI) has made significant technological advances in medical imaging, with more than 80 deep learning algorithms approved by the US FDA since 2012 for clinical applications in image detection and measurement. A 2020 survey found that over 82% of imaging vendors believe AI will improve diagnostic imaging over the next 10 years, and the medical imaging AI market is expected to grow 10-fold over the same period.

Despite these optimistic prospects, AI is still not widely adopted in radiology. A 2020 survey by the American College of Radiology (ACR) found that only about a third of radiologists use AI, mainly to improve image detection and interpretation; of the two-thirds who did not use AI, the majority said they saw no benefit to it. In fact, most radiologists would say that AI has not transformed how they read images or improved their practices.

Why is there such a big gap between the theoretical usefulness of AI and its actual use in radiology? Why hasn’t AI kept its promises in radiology? Why are we not “there” yet?

The reason is not that companies haven’t tried to innovate. It is that they tried to automate the radiologist’s job – and failed, burning many investors and leaving them reluctant to fund further efforts to translate the theoretical usefulness of AI into real-world use.

AI companies seem to have misread Charles Friedman’s fundamental theorem of biomedical informatics: the point is not that a computer can accomplish more than a human; it is that a human working with a computer can accomplish more than a human working alone. Creating this human-machine symbiosis in radiology will require AI companies to understand:

  • The radiologist’s clinical context, and build algorithms that give the computer that context
  • Discrete workflow tasks, and build tools that automate rote or tedious work
  • The user experience, and build an intuitive interface

Together, these capabilities, delivered as a unified cloud-based solution, would simplify and optimize the radiology workflow while augmenting the radiologist’s intelligence.

History class

Modern deep learning was born in 2012, when AlexNet won the ImageNet challenge and sparked the resurgence of AI as we understand it today. With the problem of image classification sufficiently solved, AI companies set out to apply their algorithms to the images with the greatest impact on human health: x-rays. These post-AlexNet companies can be grouped into three generations.

The first generation entered the field assuming that AI know-how was sufficient for business success, and therefore focused on building early teams around algorithmic expertise. However, this group significantly underestimated the difficulty of acquiring and labeling medical imaging datasets large enough to train these models. Without sufficient data, these first-generation companies either failed or had to move away from radiology.

The second generation corrected the failures of its predecessors by launching with data partnerships in hand, whether with academic medical centers or large private health systems. However, these startups faced the dual challenge of integrating their tools into the radiology workflow and building a business model around them. As a result, they ended up with working features but no commercial traction.

The third generation of radiology AI companies realized that success required an understanding of the radiology workflow, in addition to algorithms and data. These companies have largely converged on the same use case: triage. Their tools categorize images according to their urgency for the patient, shaping how work is routed to the radiologist without interfering in how that work is performed.

Third-generation solutions for the radiology workflow are a positive advancement that demonstrates there is a path to adoption, but there is still much more AI could do beyond triage and reorganization of work lists. So where should the next wave of AI in radiology go?

Go with the flow

To date, AI has demonstrated its value in handling asynchronous tasks such as triage and image detection. What is even more interesting is the possibility of improving image interpretation in real time by giving the computer the context that allows it to work alongside the radiologist.

There are many aspects of the radiologist’s workflow that radiologists want improved and that AI-based context could optimize and streamline. These include, but are certainly not limited to: setting up the radiologist’s preferred hanging protocols; automatically selecting the appropriate report template for the case; ensuring that the radiologist’s dictation lands in the correct section of the report; and eliminating the need to re-enter image measurements in the report.
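
To make this concrete, here is a minimal sketch of one such micro-optimization: automatically selecting a report template from study metadata. The Study fields, template IDs, and matching rules below are illustrative assumptions, not any vendor’s actual schema or API.

```python
# Hypothetical sketch: pre-load a report template based on study metadata.
# Field names, template IDs, and matching rules are illustrative only.
from dataclasses import dataclass


@dataclass
class Study:
    modality: str         # e.g. "CT", "MR", "CR"
    body_part: str        # e.g. "CHEST", "HEAD"
    reason_for_exam: str  # free-text indication from the order


# Illustrative mapping from (modality, body part) to a report template ID.
TEMPLATE_MAP = {
    ("CT", "CHEST"): "ct-chest-standard",
    ("CT", "HEAD"): "ct-head-standard",
    ("MR", "HEAD"): "mr-brain-standard",
}
DEFAULT_TEMPLATE = "general-diagnostic"


def select_report_template(study: Study) -> str:
    """Return the report template the radiologist most likely wants pre-loaded."""
    key = (study.modality.upper(), study.body_part.upper())
    template = TEMPLATE_MAP.get(key, DEFAULT_TEMPLATE)
    # A refinement could also read the free-text indication, e.g. routing
    # "rule out PE" chest CTs to a pulmonary-embolism-specific template.
    # (A real system would need far more robust matching than a substring test.)
    if key == ("CT", "CHEST") and "pe" in study.reason_for_exam.lower():
        template = "ct-chest-pe-protocol"
    return template


if __name__ == "__main__":
    study = Study(modality="CT", body_part="CHEST", reason_for_exam="Rule out PE")
    print(select_report_template(study))  # -> ct-chest-pe-protocol
```

On its own, a shortcut like this saves a few clicks per case; the argument of this piece is that the value comes from stacking many such shortcuts across the workflow.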

Individually, a shortcut that optimizes one of these workflow steps – a micro-optimization – would have little impact on the overall workflow. But the collective impact of an entire collection of these micro-optimizations on the radiologist’s workflow would be quite significant.

Beyond its impact on the radiology workflow, this concept of a “compendium of micro-optimizations” also enables a feasible and sustainable business, whereas it would be difficult, if not impossible, to build a company around a tool that optimizes only one of these steps.

Tools for thought in radiology

In other areas of software development, we are seeing a resurgence of “tools for thought” – technology that extends the human mind – and in those areas, building a product that improves decision-making and user experience is table stakes. Adoption of this idea has been slower in healthcare, where computers and technology have failed to improve usability and workflow and remain poorly integrated.

The number and complexity of medical images continue to increase as new imaging applications for screening and diagnosis emerge, but the total number of radiologists is not growing at the same rate. The continued expansion of medical imaging therefore requires better tools for thought. Without them, we will eventually reach a breaking point where we cannot read all of the images generated, and patient care will suffer.

The next wave of AI needs to solve the real-time interpretation workflow in radiology, and we need to embrace this technology when it arrives. No single feature will solve this problem. Only a collection of micro-optimizations, delivered continuously and at high speed via the cloud, will solve it.

Photo: metamorworks, Getty Images
