humans can do things AI cannot. AI is designed to perform
specific recognition tasks, such as detecting bleeding in the
brain or finding nodules on a pelvic x-ray. AI will not be able
to consult with other doctors regarding diagnosis and treatment,
and it will not be able to perform procedures, such as local
ablative therapies or image-guided interventions, that are
unique to the patient. Radiologists will continue to discuss
findings with patients, compare findings from past procedures,
and define the technical parameters needed to obtain the best
diagnostic images for the patient.
The AI needed to replace even some of the tasks radiologists
perform is a long way from routine use in the daily practice
of radiology. The American College of Radiology found that
different vendors' imaging algorithms focus on different
aspects of the patient's case. For example, the FDA has
approved the use of some deep-learning nodule detectors, but
each detector has a different goal: some are programmed to
determine the probability of a tumor or lesion, the probability
of cancer, or a tumor or lesion's location and unique qualities.
These differing aims would make it very difficult to use
deep-learning systems in clinical practice. Accordingly, the
FDA is beginning to specify the inputs and outputs for
deep-learning software. The FDA requires methodologies to
determine the efficacy and value of these algorithms, and the
ACR provides them. The ACR is currently compiling use cases,
organized by factors such as body part and disease type, to
provide continuity among clinical-process findings, image
requirements, and output explanations, in order to aid current
and future clinical practices. Gathering these use cases will
be a lengthy and complicated