How Medical Image Analysis Will Benefit Patients and Physicians

Automated medical image analysis is set to change the way physicians work and bring benefits to medical organizations and their patients.

Images are the largest source of data in healthcare and, at the same time, one of the most difficult sources to analyze. Clinicians today must rely largely on medical image analysis performed by overworked radiologists and sometimes analyze scans themselves.

This situation is set to change, though, as pioneers in medical technology apply artificial intelligence to image analysis. Computer vision software based on the latest deep learning algorithms is already enabling automated analysis that delivers accurate results far faster than manual processes can achieve.

As these automated systems become pervasive in the healthcare industry, they may bring about radical changes in the way radiologists, clinicians, and even patients use imaging technology to monitor treatment and improve outcomes.

Who Will Benefit from Automated Medical Image Analysis?

In this article, we want to emphasize how advanced medical image analysis will benefit specific stakeholders, so we’ll limit the description of general AI benefits to the graph below, which represents the opinions of technology providers in the healthcare industry.

As the graph indicates, executives in this sales niche believe the most compelling AI (including imaging technologies) benefits to be the improvement of outcomes and healthcare quality.

Benefits of Healthcare AI Products/Services

As the examples of healthcare app development in this article will show, applying machine-learning AI to the analysis of medical X-ray images and scans will bring a number of these benefits to the following three key groups of stakeholders:

  • Radiologists
  • Non-Radiologist Clinicians
  • Patients

Radiologists

In simple terms, there are currently not enough radiologists to cope with the ever-growing volumes of data captured by X-rays, MRI, PET, CT, and ultrasound. This is clearly highlighted in the graph below, which shows the mismatch between the demand and supply of radiologists in the United States.

Demand and Supply of Radiologists VS CT & MRI Tests

Personnel Today reports that in the United Kingdom, the National Health Service has seen radiologists’ workloads increase by 30% over a five-year period, while the workforce has increased by only 15%, and warns that radiologist services may collapse if the human resource shortage is not alleviated.

Automated image analysis will ease the burden on radiologists everywhere, by eliminating the need for them to scrutinize every image in the search for anomalies. Instead, clinicians will only need to focus on images that deep learning algorithms flag for their attention.

AI may even present radiologists with suggestions as to the nature of detected abnormalities. In the fight against cancer, for instance, algorithms might indicate the likelihood of a tumor being benign or malignant. This will help doctors focus on the patients who need attention, easing their diagnostic workload and supporting them in making appropriate decisions.
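The flagging workflow described above can be sketched as a simple triage step. The scores, image IDs, and threshold below are invented for illustration; in a real system, the scores would come from a trained deep learning classifier:

```python
# Illustrative triage step: route images by a hypothetical model's anomaly score.
# Scores and threshold are made up for demonstration purposes.

REVIEW_THRESHOLD = 0.5  # images at or above this score go to a radiologist


def triage(scored_images):
    """Split images into those flagged for review and those auto-cleared."""
    flagged, cleared = [], []
    for image_id, score in scored_images:
        if score >= REVIEW_THRESHOLD:
            flagged.append((image_id, score))
        else:
            cleared.append((image_id, score))
    # Present the most suspicious cases to the radiologist first
    flagged.sort(key=lambda pair: pair[1], reverse=True)
    return flagged, cleared


scores = [("xray-001", 0.12), ("xray-002", 0.91), ("xray-003", 0.64)]
flagged, cleared = triage(scores)
print([i for i, _ in flagged])  # ['xray-002', 'xray-003']
```

The point of the sketch is the division of labor: the algorithm screens everything, and the clinician only sees the ordered shortlist.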

Non-Radiologist Clinicians

AI-driven image analysis software will bring about a change in the roles of radiologists and other clinicians alike. Radiologists will be able to spend less time screening images and concentrate on diagnosis and decision-making. The same technology will provide non-radiologist physicians with digital assistance to interpret medical images, making them less reliant on hospital radiology departments.

For example, even without extensive sonography or radiology training, clinicians are typically able to make some straightforward diagnoses by examining ultrasound images. Intelligence provided by automated image analysis will extend their capabilities, enabling all doctors and even paramedics to interpret images from portable ultrasound scanners.

Patients

The third group to benefit from advanced medical image analysis will be those whom healthcare exists to serve—patients. They will receive timelier and more accurate diagnoses, and will no longer have to wait weeks for results of X-ray studies.

The range of applications for self-monitoring will increase, including wearable self-scanning solutions. In the hospital setting, patients will be subject to fewer invasive procedures and will have less need to endure the introduction of toxic or radioactive tracer drugs into their bodies. Radiation doses from CT scans and X-rays will be reduced, and fewer scans will be necessary to diagnose or monitor each patient’s condition.

How Will Enhanced Image Analysis Deliver These Benefits?

The best way to illustrate what automated medical image analysis can do for patients, radiologists, and other clinicians is to show some examples. The innovations discussed in the following sections of this article comprise findings from recent research, solutions in development, and products undergoing commercialization or already in commercial use.

Automated Medical Image Analysis in CT Scanning

The use of convolutional neural networks to analyze CT scans has seen much progress and growth in the last couple of years, but has mainly involved 2D slices from a patient’s chest, abdomen, or brain. Yet breakthroughs are on the way, as innovators have improved the performance of deep learning solutions that analyze the entire 3D image series from a CT scan.
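The gap between slice-wise and full-volume analysis can be illustrated with a toy aggregation sketch. The scoring function below is a made-up stand-in for a trained 2D network (here it simply counts bright pixels); a true 3D solution would instead convolve over the whole stack, capturing context between adjacent slices that slice-wise scoring ignores:

```python
# Illustrative contrast between slice-wise and volume-level CT analysis.
# score_slice is a toy stand-in for a trained 2D network.

def score_slice(slice_2d):
    """Toy anomaly score: fraction of bright pixels in one 2D slice."""
    flat = [px for row in slice_2d for px in row]
    return sum(1 for px in flat if px > 0.8) / len(flat)


def score_series_2d(volume):
    """Slice-wise approach: score each 2D slice independently, keep the max."""
    return max(score_slice(s) for s in volume)


# A CT series represented as a stack of 2D slices (depth x height x width)
volume = [
    [[0.1, 0.2], [0.3, 0.1]],   # unremarkable slice
    [[0.9, 0.95], [0.2, 0.1]],  # slice with a bright (suspicious) region
]
print(score_series_2d(volume))  # 0.5
```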

One company specializing in deep learning technology for the medical field, Aidoc, recently launched the first full-body solution for CT analysis. The workflow-integrated application enables radiologists to analyze scans of the chest, c-spine, abdomen, and head without switching between discrete image analysis applications.

Boosting the Speed, Power, and Comfort of MRI

Like CT scanning, Magnetic Resonance Imaging (MRI) is a non-invasive method of examining the internal workings of the body. Unlike CT scanning, MRI presents less risk to patients because it does not use ionizing radiation to capture images. Its main drawback, however, is the long examination time. For instance, a cardiac MRI can take more than an hour to perform.

By applying machine-learning-based image analysis to this problem, San Francisco company Arterys has developed a solution that not only cuts the time needed for cardiac MRI examinations but also increases the quantity and quality of the data provided. Better still, Arterys' ViosWorks application eliminates another MRI issue: the need for patients to hold their breath during certain sequences of the examination.

ViosWorks, Advanced App For Cardiac MRI Exams

ViosWorks enhances images from MRI scanners, delivering a 3D view of the heart with the addition of visualized and quantified blood-flow data. According to Imaging Technology News, ViosWorks enables the capture of 20 gigabytes of data in a fraction of the time required for conventional MRI technology to acquire just 200 megabytes.

This enables a patient to breathe freely throughout the examination. During a conventional cardiac MRI assessment, by contrast, a patient may be asked to remain perfectly still, without breathing, as many as 14 times.

Greater Safety and Accuracy for PET Scans

In addition to diagnosis, medical imaging techniques, such as PET scanning, are becoming increasingly useful in evaluating patients’ response to treatment, particularly for cancer. Early and frequent response evaluation is essential, for example, when using chemo and radiation therapy to treat lung cancer.

When physicians can assess patient response in the first week or two of treatment, they can adapt dosages, either by reducing them to alleviate toxicity in non-diseased tissue or by increasing dosage for patients whose tumors are not responding positively.

The Pitfalls of PET Imaging

PET scanning enables early response evaluation and is also a non-invasive alternative to biopsy, but it requires patients to receive an internal dose of a radioactive drug known as a “tracer.” This drug enables the PET scanning equipment to capture images of the organ or area of interest in the patient’s body.

The need to use what is essentially a toxic substance is one of the drawbacks of PET scanning. Another is the possibility of smaller lesions—or lesions absorbing only a small quantity of the tracer—being missed by the scan. There is also a risk of photon misidentification, which can lead to losses in PET image intensity and contrast.

The Addition of Algorithms to PET Scanning Solutions

Research has shown that machine learning can improve the effectiveness of PET medical image analysis. Algorithms can be developed and trained to remove image noise, improve quality, and gather image data in greater quantities and at a faster rate than standard PET equipment can. Consequently, the quantities of radioactive tracer needed to capture reliable images may be reduced, which, of course, is good news for patients who must undergo PET scans.
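The noise-removal idea can be illustrated with a classical filter. The 3x3 median filter below is only a toy stand-in for the learned denoising described above (real solutions train neural networks for this), but it shows the principle of suppressing isolated noise while preserving surrounding structure:

```python
# Toy 3x3 median filter as a stand-in for learned PET denoising.

def median_filter(image):
    """Replace each interior pixel with the median of its 3x3 neighbourhood."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy; borders are left untouched
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(
                image[yy][xx]
                for yy in (y - 1, y, y + 1)
                for xx in (x - 1, x, x + 1)
            )
            out[y][x] = window[4]  # median of the 9 values
    return out


noisy = [
    [1, 1, 1],
    [1, 9, 1],  # single noisy spike in the centre
    [1, 1, 1],
]
print(median_filter(noisy)[1][1])  # 9 -> 1: the spike is suppressed
```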

The reduction of toxicity is not the only benefit for cancer patients. Integration of machine learning into PET scanning and medical image analysis offers the following advantages over conventional technology:

  • Improved image quality relieves the need for follow-up scans, thereby reducing patients’ overall exposure to the tracer drug.
  • Instant high-quality imaging allows physicians to make decisions much earlier, even during the scanning process, hence speeding up and improving the accuracy of treatment.
  • Tumors can be monitored frequently and non-invasively to match chemo and radiotherapy doses to treatment response, thus improving prognosis and survival rates for lung cancer patients.

Machine learning algorithms can even be trained to classify tumors in PET images, for example, as either being responsive or non-responsive to treatment. This alleviates the workload for radiologists and increases productivity, so more patients can benefit from prompt decisions and appropriate treatment protocols.
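A responsive/non-responsive classification of the kind mentioned above could, in its simplest form, compare tracer uptake between a baseline and a follow-up scan. The function and the 30% decline threshold below are arbitrary illustrations, not a clinical criterion; a trained model would learn its decision boundary from labelled scans:

```python
# Illustrative response classification from tracer-uptake change between scans.
# The threshold is an arbitrary example, not a clinical guideline.

def classify_response(baseline_uptake, followup_uptake, decline_threshold=0.30):
    """Label a tumor responsive if uptake fell by at least the threshold."""
    change = (baseline_uptake - followup_uptake) / baseline_uptake
    return "responsive" if change >= decline_threshold else "non-responsive"


print(classify_response(10.0, 6.0))   # 40% decline -> responsive
print(classify_response(10.0, 9.5))   # 5% decline -> non-responsive
```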

Making Ultrasound More User-Friendly

While all the aforementioned machine-learning-driven advances in medical image analysis offer great promise, some of the most exciting developments are taking place in the ultrasound-imaging domain.

In a 2017 Medium article, radiologist Kevin Seals boldly suggests that the marriage of new semiconductor-powered probes built as smartphone peripherals with image analysis software may soon allow patients to scan themselves and capture ultrasound data for use in their treatment or condition monitoring.

Ultrasound on a Chip

Kevin Seals’ predictions are not merely speculative. One new solution, dubbed ultrasound on a chip, combines a sophisticated probe with machine-learning software and has already received FDA clearance that Seals describes as “robust.”

Butterfly iQ Plugged Into a Smartphone

Furthermore, the system is expected to cost less than $2,000, which positions it as a viable replacement for the ubiquitous stethoscope. This is an incredible leap for ultrasound technology, which until now has involved the use of multiple probes, each with a very limited breadth of application, operated by sonographers extensively trained to make sense of the images they produce.

Direct patient ultrasound might still be some way off, but direct access and interpretation of ultrasound imaging for all clinicians, not just radiologists, would appear to be just around the corner.

Enhancing the Effectiveness of X-rays

The sheer quantity of X-ray images captured daily presents a huge problem for clinicians around the world. An Imaging Technology News article, for example, puts the number of diagnostic X-ray images captured annually by the UK’s National Health Service at over 22 million. With too few radiologists to analyze such a vast quantity, more than 200,000 patients had to wait a month or more for their X-ray results, as reported by the Express news website.

By making it possible to automate the initial screening of X-rays, image analysis software can help radiologists keep up with their workload. By screening every image using trained algorithms, computers can classify the content of X-rays and raise alerts for those requiring detailed scrutiny by a skilled human clinician.

Automating X-ray Analysis

One such example of automated screening is a system being used in developing countries to detect signs of tuberculosis visible in chest X-rays. The solution, developed by a subsidiary of Canon, uses machine learning to detect abnormalities with more accuracy than human screening staff, although it hasn’t yet proven as accurate as physicians who specialize in TB diagnosis and treatment.

Given the shortages of radiologists in developing countries, though, an automated solution with a better-than-average degree of accuracy will doubtlessly help many patients receive early diagnosis and treatment, and therefore decrease mortality rates.

Elsewhere, artificial neural networks are removing subjectivity from the assessment of skeletal tumor burden in prostate cancer patients. As this form of cancer can spread from the prostate into a patient’s bones, physicians use X-rays to identify when this happens and to assess how much of the skeletal structure is affected. A new machine-learning solution can read and interpret X-rays and, by measuring bone density, objectively quantify the extent of tumor growth.
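The quantification idea can be sketched as a simple ratio: of the pixels identified as bone, what fraction look like lesion? The intensity values, mask, and lesion threshold below are invented for illustration; a real system would derive both the bone segmentation and the lesion criterion from trained models:

```python
# Toy quantification of skeletal tumor burden: the fraction of bone pixels
# whose intensity exceeds a lesion threshold. All values are invented.

def tumor_burden(intensities, bone_mask, lesion_threshold=200):
    """Return the fraction of bone pixels classed as lesion (0.0 to 1.0)."""
    bone_pixels = [v for v, is_bone in zip(intensities, bone_mask) if is_bone]
    if not bone_pixels:
        return 0.0
    lesions = sum(1 for v in bone_pixels if v > lesion_threshold)
    return lesions / len(bone_pixels)


# 4 bone pixels, 1 of them above the lesion threshold -> burden 0.25
values = [50, 210, 120, 180, 90, 30]
mask = [True, True, True, True, False, False]
print(tumor_burden(values, mask))  # 0.25
```

An objective score like this is repeatable across readers and visits, which is the subjectivity problem the text describes.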

Bringing it All Together: Multimodal Medical Image Analysis

Few clinicians, especially radiologists, would deny the value of automated assistance in the detection of medical anomalies from CT, MRI, X-ray, ultrasound, or PET images. Solutions for classifying and analyzing images from each specific imaging mode will undoubtedly help clinicians treat more patients, with greater accuracy, and in a timelier manner than they can manage using manual methods alone.

These machine learning solutions are only the beginning, though, with artificial intelligence systems such as IBM’s Watson already being developed to interpret patients’ entire medical histories, including analysis of visual data from multiple imaging modalities.

Training Doctor Watson

In other words, the time may soon be upon us when a clinician can call up a patient’s latest chest X-ray for example, and receive not only a suggested diagnosis of any abnormalities in that X-ray, but also a concise yet detailed study of historic visual and other data from the patient’s records.

The study might contain analyses of all prior X-rays or images captured using other methods (such as CT, MRI, ultrasound, and PET), enabling the clinician to make a fast, accurate diagnosis and prepare an appropriate program of treatment.

Again, this is not a speculative opinion, but rather a reflection of what is already being achieved by companies such as Agfa, using the Watson system, which was trained for medical image analysis following IBM’s 2015 acquisition of Merge Healthcare.

Medical Image Analysis: Making the Data Work

IBM researchers claim that 90% of medical data is sourced from imaging solutions, but with painstaking manual scrutiny as the primary means of medical image analysis, it’s little wonder clinicians find the process akin to drinking water from a fire hose.

Machine learning for image analysis will put data to better use, improving the way physicians allocate their time and supporting them in delivering better outcomes, and in so doing, will deliver important benefits to the stakeholders who matter most—patients, who depend on medical imaging for their wellness, health, and survival.
