Detecting, diagnosing and treating cancer requires having access to the right tools.

Consider lung cancer, which causes one in five cancer-related deaths worldwide - about 1.6 million lives a year - and whose mortality rate remains frustratingly high. In England, to cite one example, more than one-third of cases are diagnosed only after presenting as an emergency, by which time the vast majority are already at a late stage.

To diagnose the disease, physicians rely on segmenting lesions in the lungs using a combination of PET and CT scans. Together, these reveal both the functional properties of a lesion and its anatomical structure and characteristics.

Poland-based Future Processing, a member of NVIDIA's Inception program, is working to simplify the use of these tools, making the diagnosis process more affordable, accessible and accurate.

Its medical imaging solutions business segment works closely with medical imaging experts, research institutions and clinics around the world to develop software that can make better sense of images.

One area of focus is dynamic contrast-enhanced imaging and the analysis of computed tomography (CT) images. The company's research in this area could increase the utility of CT scans in the detection and diagnosis of lung cancer.

In a potential advance in the fight against lung cancer, Future Processing is working on a solution that would eliminate the need to combine PET and CT scans. Instead, doctors would be able to make diagnoses based exclusively on CT scans.

Using convolutional neural networks, the team has shown that diagnoses from CT scans alone can be made efficient and accurate.

'Before, the segmentation of active lesions required co-registering PET and CT sequences in a time-consuming procedure,' explains Dr. Jakub Nalepa, senior research scientist at Future Processing. 'In fact, we have just presented a paper where, using CNNs with exclusively CT scans, we demonstrated segmentation of a single image within minutes - and this can be accelerated further.'

This acceleration in segmentation speed is powered by NVIDIA Tesla GPU accelerators and could make a huge difference for both doctors and patients. Automatic segmentation of lesions saves radiologists precious time and makes it easier to track how a lesion progresses.
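
For readers who want to picture what such a GPU-trained segmentation model looks like in code, here is a minimal sketch of a convolutional encoder-decoder that maps a CT slice to a per-pixel lesion mask, written in PyTorch. The architecture, input size and training setup are illustrative assumptions, not the network described in the team's paper.

```python
# Illustrative sketch only: a tiny fully convolutional network for segmenting
# lesions on 2D CT slices. Architecture, shapes and training details are
# assumptions for demonstration, not Future Processing's published model.
import torch
import torch.nn as nn

class TinyLesionSegmenter(nn.Module):
    """Encoder-decoder CNN mapping a CT slice to per-pixel lesion logits."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 512 -> 256
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 256 -> 128
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 8, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=1),              # logits per pixel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train on the GPU when one is available, as in the setup described above.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyLesionSegmenter().to(device)

# Dummy batch: 4 single-channel 512x512 CT slices with sparse lesion masks.
ct_slices = torch.randn(4, 1, 512, 512, device=device)
lesion_masks = (torch.rand(4, 1, 512, 512, device=device) > 0.99).float()

# Per-pixel binary cross-entropy between predicted logits and ground truth.
loss = nn.functional.binary_cross_entropy_with_logits(model(ct_slices), lesion_masks)
loss.backward()
print(f"loss: {loss.item():.4f}")
```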

A CT-only workflow would also be a boon for medical sites without access to PET scanners, as they could care for their patients directly using only a CT scanner. It is more cost-effective for medical sites, with a CT scan costing between $1,200 and $3,200, whereas a PET scan costs $3,000 to $6,000, on average. It also provides a better experience for the patient, with only one scan to prep for and endure.

As for accuracy, Nalepa and his team have shown that their approach reduces the false positive rate on lung studies without active lesions from 90.14 percent to 6.6 percent.
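
As a rough illustration of what that metric measures, the snippet below computes a per-study false positive rate on lesion-free scans. The function name, the per-study flagging criterion and the counts are hypothetical; the paper's exact evaluation protocol may differ.

```python
# Hedged sketch: computing a false positive rate over lesion-free studies.
def false_positive_rate(flags_per_study):
    """flags_per_study: list of booleans, True if the model flagged at least
    one lesion in a study that actually contains no active lesions."""
    flagged = sum(flags_per_study)
    return 100.0 * flagged / len(flags_per_study)

# Example: 3 of 45 lesion-free studies incorrectly flagged -> ~6.7 percent.
print(false_positive_rate([True] * 3 + [False] * 42))
```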

Going forward, the team hopes to develop its solution further and also apply it to other forms of cancer.

Krzysztof Pawelczyk, Michal Kawulok, Jakub Nalepa, Michael P. Hayball, Sarah J. McQuaid, Vineet Prakash and Balaji Ganeshan, 'Towards Detecting High-Uptake Lesions from Lung CT Scans Using Deep Learning,' in S. Battiato et al. (eds.), Proc. ICIAP 2017, Part II, LNCS 10485, pp. 1-11, Springer, 2017.


Original document: https://blogs.nvidia.com/blog/2017/11/16/detect-cancer-from-ct-scans/
