TWIMLfest 2020

Session

Deep Learning in Medical Imaging

Over the last few years, deep learning technologies have made a tremendous impact on medical imaging. In this session, participants will hear about some of the latest advances in medical image segmentation, classification, and object detection. These advances will be reviewed through a detailed analysis of research papers from prestigious journals such as Nature Methods.

In-Depth Summary Details:

Title: “Federated Learning: A privacy-preserving method for training deep learning models using healthcare data.”
Speaker: Dr. Anthony Reina

Federated learning is a distributed machine learning approach that enables organizations to collaborate without sharing sensitive data, such as patient records, financial data, or classified secrets (McMahan, 2016; Sheller, Reina, Edwards, Martin, & Bakas, 2019; Yang, Liu, Chen, & Tong, 2019; Sheller et al., 2020). The basic premise behind federated learning is that the model moves to meet the data rather than the data moving to meet the model. Dr. Reina will discuss Intel’s work with the Federated Tumor Segmentation Initiative (FeTS), which uses Intel’s open-source federated learning framework across data from different hospitals to train an AI model for tumor segmentation from MRI scans of the brain.
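To make the "model moves to the data" idea concrete, here is a minimal sketch of federated averaging (FedAvg) on synthetic data. It is not the FeTS or Intel implementation; the site setup, function names, and linear-regression model are hypothetical. Each simulated site updates the model on its own local data, and an aggregator combines the updates weighted by local dataset size, so raw data never leaves any site.

import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    # One client's round: gradient steps on its own (private) data only.
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # MSE gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    # Server step: size-weighted average of the clients' updated weights.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical "hospitals", each holding private data from the same underlying model.
true_w = np.array([2.0, -1.0])
sites = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                              # federated rounds
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])

print(global_w)   # approaches true_w without any site sharing its raw data

Only the aggregated weights travel between sites and the server, which is what makes the approach privacy-preserving relative to pooling the raw records centrally.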

Title: “Deep neural ensembles for improved pulmonary abnormality detection in chest radiographs”
Speaker: Sivaramakrishnan Rajaraman

Cardiopulmonary diseases account for a significant proportion of deaths and disabilities across the world. Chest X-rays are a common diagnostic imaging modality for confirming intra-thoracic cardiopulmonary abnormalities. However, there remains an acute shortage of expert radiologists, particularly in under-resourced settings, which results in interpretation delays and has global health consequences. These issues can be mitigated by an artificial intelligence (AI)-powered computer-aided diagnostic (CADx) system. Such a system could help supplement decision-making and improve throughput while preserving, and possibly improving, the standard of care. Most AI-based diagnostic tools at present use data-driven deep learning (DL) models that perform automated feature extraction and classification. Convolutional neural networks (CNNs), a class of DL models, have gained significant research prominence in image classification, detection, and localization tasks. The literature shows that they deliver promising results that scale impressively with the number of training samples and available computational resources. However, these techniques can be adversely affected by their sensitivity to high variance or fluctuations in the training data. Ensemble learning helps mitigate these issues by combining predictions and blending intelligence from multiple learning algorithms. The complex non-linear functions constructed within ensembles improve robustness and generalization, and empirical results have demonstrated superiority over the conventional approach of stand-alone CNN models. In this talk, I will describe example work at the NLM that uses model ensembles to improve pulmonary abnormality detection in chest radiographs.
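As a minimal sketch of the general ensembling idea (not the specific NLM models), the snippet below blends class-probability outputs from several hypothetical CNNs with a weighted average, which typically reduces the variance of any single model's predictions. The function name, weights, and toy numbers are illustrative only.

import numpy as np

def ensemble_predict(member_probs, weights=None):
    # member_probs: array of shape (n_models, n_samples, n_classes),
    # e.g. softmax outputs of independently trained CNNs.
    # weights: optional per-model weights (e.g. derived from validation accuracy).
    member_probs = np.asarray(member_probs, dtype=float)
    if weights is None:
        weights = np.ones(member_probs.shape[0])
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    # Weighted average over the model axis; argmax gives the ensemble class.
    avg = np.tensordot(weights, member_probs, axes=1)
    return avg, avg.argmax(axis=-1)

# Toy example: three models scoring two chest X-rays as normal/abnormal.
probs = [
    [[0.7, 0.3], [0.4, 0.6]],   # model 1
    [[0.6, 0.4], [0.2, 0.8]],   # model 2
    [[0.8, 0.2], [0.5, 0.5]],   # model 3
]
avg_probs, labels = ensemble_predict(probs, weights=[0.9, 0.8, 0.85])
print(avg_probs, labels)

Probability averaging is only one blending strategy; majority voting or stacking a meta-learner on the member outputs are common alternatives.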

Session Speakers

Interventional Radiologist, University of Central Florida
Sivaramakrishnan Rajaraman, Research Scientist, National Library of Medicine (NLM), NIH
Principal Architect, Bioclinica
Dr. Anthony Reina, Chief AI Architect for Health and Life Sciences, Intel
