Daniel Chow’s Interview: A Radiologist’s Journey from UCLA to AI Leadership

Daniel Chow Interview

Daniel Chow is a UCI Health radiologist who specializes in neuroradiology and diagnostic radiology. After obtaining his medical degree at the David Geffen School of Medicine at UCLA, he worked in the Department of Radiology at Columbia University Medical Center. Today he is the co-director of the Center for Artificial Intelligence in Diagnostic Medicine, Radiological Sciences. During the ISC (International Stroke Conference) 2020 seminar we had the opportunity to interview Daniel Chow and discuss AI in the daily routine of clinical practice.

What do you think AI will bring to your clinical practice?

The question for me would be what AI is going to bring us in the short term, and then in the medium and long term. In the short term, I think AI is going to help me as a radiologist minimize the tedious day-to-day tasks. We've already been using simple forms of machine learning; for example, when I pull up an MRI study, my automated hanging protocol makes sure the application has the correct template up for me. But what do I find tedious, or what can I diagnose in a split second without even thinking about it? For example, I think of the things I would have to do if I had a tumor to measure, or if I have a trauma patient with a hemorrhage that I'm following. Determining the size and producing those quantifiable metrics would help right now.

Very often, when you follow patients and have to revisit a lesion, the work is long and tedious. These are very easy things that AI could solve in the short term.

And again, this is clinical routine; I am not talking about any fancy reconstruction, where we are more in the long term. At that point we will get to all the really fun things we hope AI will do, such as combining EMR data or pathology, performing actual diagnosis, or prognostication and prediction.

In our generation, in the short term, AI is going to be fully in our workplace. We're going to be using it, and it's going to make us more accurate, more objective, and more reproducible.

When you talk about the short term, do you think the current products are seamlessly integrated into the workflow?

Currently, no, at least for everything I've seen so far. I have a joke that if I have to click more than two buttons, I'm not going to use it. Often I have to push the study to a separate station, open it, and initialize it. In a research setting that works fine, but in the day-to-day framework, where I deal with 50 to 80 cases a day, we don't really have the time.

Right now, the integration is lacking. For viability, I'm looking for a tool that involves minimal clicks or menus. However, we are going in the right direction; companies such as Avicenna have understood this, and they provide reliable, easy-to-use solutions.

Which points do you evaluate in an AI product?

It's hard to say, because it depends on the AI product and what it's used for. For example, if it's a triage tool or a screening tool, I'm going to evaluate it based on how many false positives there are, how many false negatives, and the turnaround time. I don't want to be called for every single false positive; that takes up too much time. For that use, I wouldn't mind having some false negatives, because I'm weighing the risk, but I would want to minimize the false positives.

For a measurement tool, I would look at how well the mask from the AI tool measures compared to my own measurement. So it depends on the kind of tool. The ultimate question, which a lot of people are not asking, is whether it is actually going to improve outcomes.
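The interview does not name a specific metric for comparing an AI mask against a radiologist's own measurement; one common illustrative choice is the Dice overlap coefficient. A minimal sketch, with masks represented as sets of hypothetical voxel indices:

```python
def dice(a: set, b: set) -> float:
    """Dice coefficient between two masks: 2|A∩B| / (|A| + |B|).

    Masks are given as sets of voxel indices; 1.0 means perfect overlap.
    """
    denom = len(a) + len(b)
    # Two empty masks agree trivially, so treat that case as perfect overlap.
    return 2 * len(a & b) / denom if denom else 1.0

# Toy example: voxels flagged by the AI tool vs. the radiologist's annotation.
ai_mask = {2, 3, 4}
radiologist_mask = {3, 4, 5}
print(round(dice(ai_mask, radiologist_mask), 2))  # 0.67
```

A score near 1.0 would suggest the tool's segmentation closely tracks the radiologist's; a low score flags the kind of disagreement Dr. Chow says he would check for.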

What is the benefit to overall survival ? 

 “ The goal is to help patients. ”

There is no difference in overall survival. Let's say I have a tool that makes me faster, or a tool that measures better. If there is no improvement in survival, disability, or outcomes, then it really is only a convenience. The idea is really to improve those, because ultimately the goal is to help patients.

Dr. Jennifer Soun: Neuroscience to Radiology – Advancing AI in Clinical Practice

Jennifer Soun Interview

Assistant Professor of Radiology at UC Irvine School of Medicine, Jennifer Soun graduated in 2008 in psychology and neuroscience from Princeton University. Dr. Soun is a board-certified UCI Health diagnostic radiologist who specializes in neuroradiology. Her clinical interests include stroke and vascular imaging. She earned her medical degree at Wake Forest School of Medicine in Winston-Salem, NC. She completed a residency in diagnostic radiology at New York Presbyterian-Columbia University Medical Center in New York City, followed by a fellowship in neuroradiology at Massachusetts General Hospital in Boston. During the ISC (International Stroke Conference) 2020 seminar we had the opportunity to meet her and discuss AI in clinical practice.

What are the advantages you see in AI in your clinical practice?

AI in my clinical practice is very helpful and has a lot of potential for different things. One major one is triaging patients, for example, detecting hemorrhage and being able to put those cases higher up on the list of studies that we read.

 “ I don’t see AI replacing radiologists, I see it as a very helpful assistant to the radiologist. ”

AI can also help in providing more objective data, like measurements and how they change over time. I don't see it replacing radiologists; I see it as a very helpful assistant to the radiologist.

Do you think that specificity or sensitivity should be taken more into account in AI products, such as false negative and positive detection?

These measures definitely should be considered carefully when evaluating an AI tool. False negatives can be dangerous. For example, missing a large vessel occlusion is worse than overcalling it because an LVO is a treatable lesion. If left untreated, the patient may have significant morbidity. 

However, there are more nuanced situations where a false negative may not matter as much. For example, if a tiny intracranial hemorrhage is missed, that may be within acceptable limits, since a subtle ICH may not be as clinically significant. It's important to have a balance between sensitivity and specificity.

What is the impact of false positives on workflow?

Having too many false positives can be a problem because then you can't trust the AI tool to work effectively, and the radiologist is less likely to even use it. Too many false positives may also increase the time the radiologist spends on a study, which would defeat the purpose of using AI as a fast triage tool. Regardless, the radiologist's interpretation would still need to confirm the final decision. Despite these challenges, we remain optimistic as more and more companies bring new solutions combining optimal specificity and sensitivity that could significantly improve radiologists' workflow and triage.