This week’s Chicago Med featured an AI-powered surgical theatre. The storyline was that the AI had invented some artifacts in an MRI image, the surgeon had acted on them, and the patient subsequently died.
Given that it was only a TV show, I guess I shouldn’t care - BUT I couldn’t help thinking how badly written and inaccurate it was. If you get a chance to watch it, please share your thoughts; I’m interested to know what others think.
I haven’t seen it, but there was a news story a few weeks back about AI MRI scans showing “hallucinations”.
Hmm, this is very interesting:
This is because medical imaging devices do not record images directly. Instead, the raw data collected by the devices is analyzed by a computer, and machine-learning algorithms are used to reconstruct the images that doctors and radiologists use for diagnosing a health complication. Image reconstruction is done based on the known physics of the imaging device, along with a set of assumptions about how the final image should appear.
“However, if certain assumptions are wrong during image reconstruction, false structures may be introduced into the final image,” explains Mark Anastasio, a professor of bioengineering at the University of Illinois at Urbana–Champaign.
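To see what “wrong assumptions introduce false structures” can mean in practice, here is a toy illustration (my own sketch, not the clinical pipeline or the study’s method): MRI raw data lives in k-space (spatial frequencies), and if the reconstruction assumes the data is complete when it is actually undersampled, aliasing ghosts appear in the image that were never in the patient.

```python
import numpy as np

# Hypothetical "true" anatomy: a simple square phantom
truth = np.zeros((128, 128))
truth[48:80, 48:80] = 1.0

# Forward model: the scanner effectively records the 2-D Fourier transform
kspace = np.fft.fft2(truth)

# Wrong assumption: keep only every 4th line of k-space, but reconstruct
# as if the data were fully sampled
undersampled = np.zeros_like(kspace)
undersampled[::4, :] = kspace[::4, :]

recon = np.abs(np.fft.ifft2(undersampled))

# The reconstruction now contains replicated "ghost" copies of the square,
# i.e. structures introduced by the reconstruction, not by the anatomy
print("max intensity outside the true square:", recon[truth == 0].max())
```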
That is from a study published last year, which argues that AI used in MRI and CT technology needs to come with “hallucination” maps.
The researchers have developed a model that analyzes the raw data, flags discrepancies in the AI/ML reconstruction, and points out where the hallucinations are.
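Conceptually (and this is only my sketch of the general idea, not the published method), one way to build such a discrepancy map is a data-consistency check: push the AI reconstruction back through the known physics of the scanner and see where it disagrees with the raw measurements. The inputs `ml_recon`, `measured_kspace` and `sample_mask` below are assumed to be given.

```python
import numpy as np

def hallucination_map(ml_recon, measured_kspace, sample_mask):
    """Per-pixel map of disagreement between an ML reconstruction
    and the raw k-space data it was derived from."""
    # Re-simulate the measurement from the ML image (forward model)
    predicted_kspace = np.fft.fft2(ml_recon)
    # Compare only at k-space locations that were actually measured
    residual_kspace = sample_mask * (predicted_kspace - measured_kspace)
    # Map the disagreement back to image space so it can be overlaid on the scan
    return np.abs(np.fft.ifft2(residual_kspace))
```

Regions with large values in the returned map are places where the reconstruction is not supported by the raw data, which is roughly what a “hallucination map” would highlight for the radiologist.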
Then it sounds like they took real-world problems with existing devices and extrapolated them into a near-future scenario. There are MRIs that can produce images in near-real-time (20 ms), although I have no idea whether they are still experimental or in actual clinical use.