Ultrasound Research and Applications at MICCAI 2018

October 30, 2018

Below are my notes on the papers that prominently featured ultrasound at this year’s Medical Image Computing and Computer Assisted Intervention (MICCAI 2018) Conference.  The conference proceedings are available online at https://www.miccai2018.org/en/MICCAI-2018-PROCEEDINGS.html

I’ve also included notes on the papers presented at the Point-of-Care Ultrasound (POCUS) workshop that I organized the day prior to the MICCAI main conference.   The POCUS workshop proceedings are available online at https://www.miccai2018.org/en/MICCAI-2018-WORKSHOP-PROCEEDINGS.html

Special thanks go to Computer Vision News for publishing an article on the MICCAI 2018 POCUS workshop. Thanks also go to NIH NIBIB/NIGMS for support (R01EB021396). I’ve previously published notes on ultrasound research and applications at ISBI 2018 and MICCAI 2017.

 

International Workshop on Point-of-Care Ultrasound (POCUS) at MICCAI 2018

Workshop Proceedings: “Simulation, Image Processing, and Ultrasound Systems for Assisted Diagnosis and Navigation”, LNCS 11042, Springer, 2018

Premise of the workshop: For the full potential of point-of-care ultrasound (POCUS) to be realized, POCUS systems must be approached as if they were new diagnostic modalities, not simply inexpensive, portable ultrasound imaging systems.

  • Robust Photoacoustic Beamforming Using Dense Convolutional Neural Networks
    • Workshop Proceedings, p. 3
    • Uses a deep CNN for beamforming of RF data from photoacoustic ultrasound acquisitions. The network should overcome the need to assume a constant speed of sound in tissue, as traditional reconstruction methods do. The network uses dense convolutions that aggregate features at each level, and dilated convolutions to preserve resolution (see the dense-block sketch after this list). The network is trained and tested using simulated data.
  • A Training Tool for Ultrasound-Guided Central Line Insertion with Webcam-Based Position Tracking
    • Workshop Proceedings, p. 12
    • Open-source (3D Slicer, SlicerIGT, and PLUS), inexpensive training system for clinicians. The system uses a laptop, a webcam, and a Telemed USB-based US probe. Uses ArUco patterned markers for video-based tracking, providing position and orientation (see the ArUco sketch after this list). The paper evaluates ease and reproducibility of calibration and shows that the system provides sufficient accuracy for clinical performance evaluation and training, evaluated using a clear phantom.
  • GLUENet: Ultrasound Elastography Using Convolutional Neural Networks
    • Workshop Proceedings, p. 21
    • Conventional elastography tracking algorithms (e.g., involving dynamic programming to measure displacement) suffer from decorrelation noise. The proposed CNN estimates the time delay between two RF data acquisitions given a coarse deformation field (from FlowNet2).
  • CUST: CNN for Ultrasound Thermal Image Reconstruction Using Sparse Time-of-Flight Information
    • Workshop Proceedings, p. 29
    • Uses changes in the speed of sound in tissue to estimate thermal changes/maps. The signal is recorded by a sensor positioned on the opposite anatomic side from the emitter (mounted on the RF ablation probe). A neural network is trained to estimate speed-of-sound changes due to thermal ablation between the source and sensor. Trained using simulated data; tested using porcine livers.
  • Quality Assessment of Fetal Head Ultrasound Images Based on Faster R-CNN
    • Workshop Proceedings, p. 38
    • Presents an automated method for determining whether a prenatal ultrasound image represents a standard plane. Uses a “faster region-based CNN” (based on VGG-16) together with a region proposal network (RPN); the Faster R-CNN localizes landmarks within the RPN-defined region. A numeric score based on the detected landmarks determines whether the image is a standard plane.
  • Recent Advances in Point-of-Care Ultrasound using the ImFusion Suite for Real-Time Image Analysis
    • Workshop Proceedings, p. 47
    • Presents features recently added to the proprietary ImFusion software for ultrasound. The added features build on prior publications (many from MICCAI 2017) covering 3D reconstruction from freehand 2D ultrasound for specific body parts using a trained neural network, image noise filtering using neural networks, bone boundary estimation and registration using neural networks, and others.
  • Markerless Inside-Out Tracking for 3D Ultrasound Compounding
    • Workshop Proceedings, p. 56
    • Mounts an Intel RealSense device with mono, stereo, and active-depth capabilities on a probe and uses visual SLAM with an adaptive map of the room to localize the probe. Uses the publicly available ORB-SLAM2 (mono camera) and Direct Sparse Odometry (stereo camera) algorithm implementations. Evaluated using a robotically controlled probe. Also demonstrates the system for 3D transrectal ultrasound acquisition by clinicians.
  • Ultrasound-Based Detection of Lung Abnormalities Using Single Shot Detection Convolutional Neural Networks
    • Workshop Proceedings, p. 65
    • Trained a CNN to recognize five features used in the diagnosis of lung pathologies: b-lines, merged b-lines, consolidation, and pleural effusion from B-mode video, and lack of lung sliding from M-mode. For B-mode video, used six single-class Single Shot Detection (SSD) networks, which combine region proposal and object classification into a single pass by evaluating pre-defined region sizes at each location. For M-mode, used an Inception V3 network. Evaluated using a swine model: 3 seconds of acquisition at 20 fps from each of 8 lung regions.
  • Quantitative Echocardiography: Real-Time Quality Estimation and View Classification Implemented on a Mobile Android Device
    • Workshop Proceedings, p. 74
    • Uses a NN running on an Android phone to provide real-time feedback on which standard view is being acquired and the quality of that acquisition. Uses a mobile version of TensorFlow. Data is streamed to the Android device (in the paper using a frame grabber; at the live demo using a Clarius probe streaming to an iPhone, which then pushed images to the Android). A bar across the top indicates view quality; text indicates view classification (which of the 14 standard views is depicted). The network applies DenseNet and LSTM layers per frame and combines outputs from multiple frames to make the final decision (see the CNN+LSTM sketch after this list). Achieves 30 FPS and under 0.4 s latency.
  • Single-Element Needle-Based Ultrasound Imaging of the Spine: An In Vivo Feasibility Study
    • Workshop Proceedings, p. 82
    • A single ultrasound element in the tip of a needle, controlled by a robotic system (an adapted 3D printer). B-mode images are acquired by rotating the element in a plane about the tip, with synthetic aperture focusing. Very convincing ex vivo and in vivo studies were conducted.
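
A few of the methods above lend themselves to short code sketches. For the photoacoustic beamforming paper, the core building block is a densely connected stack of dilated convolutions. Below is a minimal PyTorch sketch of such a block; the layer count, channel widths, and dilation schedule are my assumptions, not the paper’s.

```python
import torch
import torch.nn as nn

class DilatedDenseBlock(nn.Module):
    """Dense block: each layer sees the concatenation of all earlier
    feature maps; dilation grows per layer to widen the receptive field
    without pooling or striding, so resolution is preserved."""

    def __init__(self, in_ch=1, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for i in range(n_layers):
            dilation = 2 ** i  # 1, 2, 4, 8, ...
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3,
                          padding=dilation, dilation=dilation),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True)))
            ch += growth  # dense concatenation grows the input width

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

# e.g., a single-channel RF frame: 64 transducer channels x 1024 samples
out = DilatedDenseBlock()(torch.randn(1, 1, 64, 1024))
```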
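
For the webcam-tracked central line trainer, ArUco marker pose estimation is available off the shelf in OpenCV’s contrib aruco module (pre-4.7 API shown here). A minimal sketch; the marker size, dictionary, and camera intrinsics are placeholders, and a real system needs a proper camera calibration (e.g., via cv2.calibrateCamera).

```python
import cv2
import numpy as np

# Intrinsics must come from calibration; these values are placeholders.
camera_matrix = np.array([[800., 0., 320.],
                          [0., 800., 240.],
                          [0.,   0.,   1.]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

cap = cv2.VideoCapture(0)  # webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is not None:
        # 0.03 m marker side length; adjust to the printed size
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, 0.03, camera_matrix, dist_coeffs)
        for rvec, tvec in zip(rvecs, tvecs):  # 6-DOF pose per marker
            cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs,
                              rvec, tvec, 0.02)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```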
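
The mobile echocardiography paper’s per-frame DenseNet + LSTM pattern is a common recipe for streaming video classification: a 2D CNN embeds each frame, and a recurrent layer accumulates evidence across frames. A generic PyTorch sketch; a tiny CNN stands in for DenseNet, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class FrameLSTMClassifier(nn.Module):
    """Per-frame CNN features -> LSTM over time -> view class.
    Only the final hidden state feeds the classifier, mirroring
    'combine outputs from multiple frames to make the final decision'."""

    def __init__(self, n_views=14, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(  # stand-in for DenseNet
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, n_views)

    def forward(self, clip):  # clip: (B, T, 1, H, W)
        B, T = clip.shape[:2]
        feats = self.cnn(clip.flatten(0, 1))  # (B*T, feat_dim)
        _, (h, _) = self.lstm(feats.view(B, T, -1))
        return self.head(h[-1])  # (B, n_views) view logits

logits = FrameLSTMClassifier()(torch.randn(2, 8, 1, 64, 64))
```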

 

International Workshop on Correction of Brainshift with Intra-Operative Ultrasound (CuRIOUS) at MICCAI 2018

Workshop Proceedings: “Simulation, Image Processing, and Ultrasound Systems for Assisted Diagnosis and Navigation”, LNCS 11042, Springer, 2018

This challenge sought to evaluate methods for tracking brain shift during brain tumor resection procedures. Brain shift may result from gravity, drug administration, intracranial pressure change, and tissue removal. Shift can be tracked by observing the midline, adjacent vessels, the surgical target, or other anatomic structures using intra-operative ultrasound. The challenge provided intra-operative ultrasound images from clinical cases that needed to be registered with pre-operative MRI. Participants provided model-, transform-, and deep-learning-based registration algorithms for comparison. Select methods are featured in the Workshop Proceedings, pages 127-184.

 

MICCAI 2018 Main Conference

  • Evaluation of Adjoint Methods in Photoacoustic Tomography with Under-Sampled Sensors
    • Conference Proceedings Vol. 1, p. 73
    • Compares Time-Reversal (TR) and Back-Projection (BP) for localizing sound sources (i.e., optical absorbers) in tissue. They show that, given sparse sensors around the tissue, TR produces better signal contrast while BP produces better signal resolution. They propose a new Truncated Back-Projection method that offers improvements.
  • High Frame-Rate Cardiac Ultrasound Imaging with Deep Learning
    • Conference Proceedings Vol. 1, p. 126
    • Uses deep learning to improve the quality of fast multi-line acquisitions by training the network to make multi-line acquisitions look like traditional single-line acquisitions.
  • Real Time RNN Based 3D Ultrasound Scan Adequacy for Developmental Dysplasia of the Hip
    • Conference Proceedings Vol. 1, p. 365
    • Goal is to enable novice users to acquire sufficient hip scans of pediatric patients. Uses a CNN with recurrent layers to assess a time series of B-mode images as they are being acquired as part of a 3D volumetric scan. The network is trained to determine if key anatomic features will be present in the reconstructed 3D scan.
    • Note: See related work by Kwitt et al. in MedIA and at SPIE MI.
  • Direct Reconstruction of Ultrasound Elastography Using an End-to-End Deep Neural Network
    • Conference Proceedings Vol. 1, p. 374
    • Deep learning to generate elastography (displacement and strain) images from RF data. Trained using simulated data. Compared with normalized cross-correlation.
  • 3D Fetal Skull Reconstruction from 2DUS via Deep Conditional Generative Networks
    • Conference Proceedings Vol. 1, p. 383
    • An adaptation of conditional variational autoencoders (CVAE) for 3D reconstruction of skull shape from standard 2D ultrasound fetal views. A hierarchical application of CVAE allows for reconstruction even if some of the scans are missing.
    • Excellent paper: Novel network architecture for approximating a 3D structure from 3 (or fewer) slice views.
  • Standard Plane Detection in 3D Fetal Ultrasound Using an Iterative Transformation Network
    • Conference Proceedings Vol. 1, p. 392
    • Network learns how to adjust the parameters of a 2D plane to better fit it to a standard anatomic view from a 3D acquisition. Applied iteratively to find the best fit. Also produces confidence estimates.
  • Fast Multiple Landmark Localisation Using a Patch-Based Iterative Network
    • Conference Proceedings Vol. 1, p. 563
    • Estimates how to adjust (direction and magnitude) the parameters that sample a (2.5D) patch from a volume so as to better capture a desired landmark in that patch. Also estimates the confidence of movement in each direction. Extended to localize multiple landmarks simultaneously in multiple patches by using PCA to project all landmarks into a lower-dimensional space and solving for their locations there.
  • Adversarial Deformation Regularization for Training Image Registration Neural Networks
    • Conference Proceedings Vol. 1, p. 774
    • Uses a GAN to regularize deformable registration transforms based on expected anatomy movement. The discriminator is trained to distinguish FEM-generated deformation fields (from anatomic models) from registration deformation fields. The generator tries to align labeled volumes (with simulated deformations during training). No labels are needed during testing/inference.
  • Initialize Globally Before Acting Locally: Enabling Landmark-Free 3D US to MRI Registration
    • Conference Proceedings Vol. 1, p. 827
    • Uses low-resolution coarse segmentations (which inform multiple label-specific distance maps) to initialize the global rigid position of a US volume in an MRI volume (brain tumor scans). Matching of the distance maps is optimized simultaneously (see the distance-map sketch after this list). Provides a large capture range and works when matching small US volumes to large MRI volumes.
  • Multi-task SonoEyeNet: Detection of Fetal Standardized Planes Assisted by Generated Sonographer Attention Maps
    • Conference Proceedings Vol. 1, p. 870
    • Uses sonographer gaze tracking (attention maps) to assess video frames and detect the standard abdominal circumference plane. A modified GAN generates the attention maps, so gaze-tracker data isn’t needed at inference.
  • Less is More: Simultaneous View Classification and Landmark Detection for Abdominal Ultrasound Images
    • Conference Proceedings Vol. 2, p. 711
    • One network performs view classification and landmark detection simultaneously for 10 standard abdominal views and 14 landmarks. Landmarks are trained as Gaussian (distance) heat maps, learned by one network (see the heat-map sketch after this list). View classification uses a ResNet50 architecture.
  • Integrate Domain Knowledge in Training CNN for Ultrasonography Breast Cancer Diagnosis
    • Conference Proceedings Vol. 2, p. 868
    • CNN (VGG16 with ImageNet weights) to classify images and predict tumor malignancy; incorporates BI-RADS categories as domain knowledge via additional inputs to the network. Trained two networks: one for BI-RADS and one for malignancy. Visualizes class activation maps (CAM, computed from the last convolution layer’s output) to highlight suspicious regions (see the CAM sketch after this list).
  • AutoDVT: Joint Real-Time Classification for Vein Compressibility Analysis in Deep Vein Thrombosis Ultrasound Diagnostics
    • Conference Proceedings Vol. 2, p. 903
    • DVT is diagnosed by assessing vascular compressibility at anatomic landmarks identified using ultrasound. Trained a CNN to confirm that a vein can be fully compressed; the same network is trained for two landmark positions. Compared multiple network architectures that use time as a 3rd dimension, i.e., apply 3D convolutions to time-series B-mode images. Visualized saliency (activation) maps, which nicely highlighted the vessels.
  • Automatic Lacunae Localization in Placental Ultrasound Images via Layer Aggregation
    • Conference Proceedings Vol. 2, p. 921
    • Lacunae are lesions on the placenta associated with a life-threatening condition known as abnormally invasive placenta (AIP). Presents a method that avoids manual lesion delineation for training (instead using seed points, region growing via superpixels, and shape priors to approximate the confidence maps that are learned). Compares various network architectures for lesion localization. Uses side outputs per layer (aggregated to provide classification estimates and used to speed training).
  • Multi-modal Synthesis of ASL-MRI Features with KPLS Regression on Heterogeneous Data
    • Conference Proceedings Vol. 3, p. 473
    • Deals with missing data (small ROIs and missing modalities per patient) in multi-modality classification. The goal is to synthesize arterial spin labelling (ASL) MRI cerebral blood flow (CBF) maps (regional CBF values) from T1 MRI and carotid flow ultrasound, using kernel partial least squares (KPLS) regression that accepts partial inputs (small ROIs in MRI and ultrasound) and can generate partial (small ROI) outputs.
  • Dilatation of Lateral Ventricles with Brain Volumes in Infants with 3D Transfontanelle US
    • Conference Proceedings Vol. 3, p. 557
    • Transfontanelle 3D ultrasound in infants to estimate the ratio of lateral ventricle volume to brain volume. Ventricle segmentation is achieved using multi-atlas deformable registration to initialize a deformable mesh segmentation method. Uses a local linear correlation metric to match MRI to ultrasound. Brain volume is estimated using 3D ellipsoid modeling.
  • Learning from Noisy Label Statistics: Detecting High Grade Prostate Cancer in Ultrasound Guided Biopsy
    • Conference Proceedings Vol. 4, p. 21
    • Incorporates biopsy findings as “soft labels” to overcome their inherent label “noise” (sparse and inaccurate), so that they can be used as training data. Trains a Bayesian probabilistic network (recurrent) to segment (highlight) temporal ultrasound for biopsy guidance (RF analysis of insonification over 5 seconds / 100 frames has been shown to improve tissue characterization using ultrasound).
  • A Feature-Driven Active Framework for Ultrasound-Based Brain Shift Compensation
    • Conference Proceedings Vol. 4, p. 29
    • Provides ultrasound-to-ultrasound registration to account for intra-operative brain shift. Estimates landmark movement and interpolates a deformation field using a Gaussian process model; uses variograms to characterize the spatial dependencies of a stochastic process (binning deformation vectors by distance and direction). Estimates uncertainty and allows a user to add landmarks to reduce it (see the Gaussian-process sketch after this list).
    • See also: https://www.ncbi.nlm.nih.gov/pubmed/23899632
  • Respiratory Motion Modelling Using cGANs
    • Conference Proceedings Vol. 4, p. 81
    • Trains conditional GANs (cGANs) to learn the relationship between simultaneously acquired time-series ultrasound and 4D MR images. The cGAN uses 2D ultrasound to estimate a 3D MRI deformation field and supports small patient-specific training sets. The cGAN takes input at time t and outputs the deformation field at time t+1; deformation fields at t-1 and t+1 are combined to estimate the deformation at time t.
  • Physics-Based Simulation to Enable Ultrasound Monitoring of HIFU Ablation: An MRI Validation
    • Conference Proceedings Vol. 4, p. 89
    • Estimates temperature maps induced by HIFU by combining intra-operatively acquired ultrasound with physics-based simulation. Time-of-flight ultrasound (a by-product of HIFU pressure waves, recorded by distal ultrasound sensors) captures changes in temperature as changes in wave propagation speed. Physical simulation extends the 2D recordings to 3D temperature maps.
  • Simultaneous Segmentation and Classification of Bone Surfaces from Ultrasound Using a Multi-feature Guided CNN
    • Conference Proceedings Vol. 4, p. 134
    • A U-Net with an added classification output and with inputs of the B-mode image, a local phase tensor image (first- and second-derivative information), a local phase bone image (phase energy and variants), and a bone shadow-enhanced image (a measure of attenuation). Inputs are also weighted by distance from the transducer to eliminate probe-tissue interference/echoes.
  • Deep Adversarial Context-Aware Landmark Detection for Ultrasound Imaging
    • Conference Proceedings Vol. 4, p. 151
    • A CNN-based GAN for localization of prostate landmarks and the prostate contour; includes an adversarial component to ensure reasonable solutions are achieved. Landmarks and contours are encoded as distance maps.
  • Towards a Fast and Safe LED-Based Photoacoustic Imaging Using Deep Convolutional Neural Network
    • Conference Proceedings Vol. 4, p. 159
    • Traditionally, lasers are used to induce the acoustic response for photoacoustic imaging. Herein, neural nets are used to overcome the noise inherent in LED-based photoacoustics. Training is improved by providing target images at each network layer; the layer target images are generated by sample averaging during acquisition, with each subsequent layer’s training image having undergone more sample averaging.
  • Framework for Fusion of Data- and Model-Based Approaches for Ultrasound Simulation
    • Conference Proceedings Vol. 4, p. 332
    • Proposes a VR system for ultrasound training that integrates pre-recorded and simulated ultrasound; the paper focuses on simulated ultrasound for regions of clinical interest, with pre-recorded data used for “background” regions and stitching used to integrate them. The application is transvaginal ultrasound. The simulation, texture, and stitching methods were previously published. Deformation is application-specific (inserting the fetus into the sac). Nice results.
  • Generalizing Deep Models for Ultrasound Image Segmentation
    • Conference Proceedings Vol. 4, p. 497
    • Seeks to address the need for retraining when images have different appearances/acquisition parameters, via “appearance conversion”. Online adversarial networks convert a test image and its segmentation into results that fool image and segmentation discriminators. Additionally, a conditional GAN verifies the pairing of the converted image and its segmentation.
  • Deep Attentional Features for Prostate Segmentation in Ultrasound
    • Conference Proceedings Vol. 4, p. 523
    • Presents a new network architecture that uses intermediate-layer training to overcome intensity inhomogeneities and capture features around the object of interest (the prostate) in an image (“deep attentional features”). One component of intermediate-layer training trains a network to process the output of each layer to generate the segmentation; another combines the outputs of all intermediate layers to train a network that produces the segmentation.
  • Accurate and Robust Segmentation of the Clinical Target Volume for Prostate Brachytherapy
    • Conference Proceedings Vol. 4, p. 531
    • Goal is to segment the prostate from transrectal ultrasound for brachytherapy, addressing “difficult cases” and segmentation at the base and apex. Uses an autoencoder (with a sparsity constraint) to cluster cases and then adaptively selects training data from the “difficult” clusters. Uses multiple CNNs to measure agreement and identify difficult cases based on poor agreement. Each CNN has its input layer connected directly to multiple conv layers with different kernel sizes and strides, rather than relying only on network depth to achieve scaled features. In testing, if agreement is weak, the networks’ outputs are integrated using a statistical shape model and the networks’ estimates of local boundary uncertainty.
  • Domain and Geometry Agnostic CNNs for Left Atrium Segmentation in 3D Ultrasound
    • Conference Proceedings Vol. 4, p. 630
    • Includes a shape prior with a CNN. The CNN is a modified U-Net (a “V-Net” for 3D images) for initial segmentation. The initial segmentation is passed through an autoencoder (trained using true segmentations) to impart a statistical shape prior on the segmentation (see the shape-prior sketch after this list). The ability of a CNN trained on the autoencoder’s latent representation to distinguish between training and testing images determines whether domain adaptation is needed for the current set of testing images.
  • Densely Deep Supervised Networks with Threshold Loss for Cancer Detection in Automated Breast Ultrasound
    • Conference Proceedings Vol. 4, p. 641
    • Uses a modified U-Net to detect cancers in 3D breast ultrasound. The network includes a layer for local adaptive thresholding to separate cancer from non-cancer voxels. The network also pools the output of the compressing layers to define an additional loss function that incorporates class balancing (since cancers are rare in images) and overlap minimization (emphasizing that cancer and non-cancer regions have minimal overlap).
  • Fast Vessel Segmentation and Tracking in Ultra High-Frequency Ultrasound Images
    • Conference Proceedings Vol. 4, p. 746
    • Combines phase analysis, distance-regularized level sets, and Kalman filtering to track the movement and deformation of small vessels (see the Kalman-filter sketch below). The method uses GPU hardware for computation. Vessels are identified via seed points. Analyzes local (patch-based) intensity variance to highlight small and large vessel extent. Vessel boundaries are further highlighted using Cauchy filtering (in Fourier space), initially segmented along radial lines from the seed points, and then refined using level sets. Time-series data is processed by propagating seed points using heuristics.
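
As with the workshop notes, a few of the main conference methods are easy to sketch in code. The landmark-free US-to-MRI initialization above builds label-specific distance maps; these can be computed with a Euclidean distance transform. A minimal scipy sketch; the random labels and the squared-difference matching cost are my stand-ins for the paper’s actual segmentations and metric.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def label_distance_maps(seg, labels):
    """One map per label: each voxel holds the distance to the nearest
    voxel carrying that label."""
    return np.stack([distance_transform_edt(seg != lab) for lab in labels])

def map_match_cost(maps_fixed, maps_moving):
    """Toy similarity: mean squared difference over all label maps,
    to be minimized simultaneously over a rigid transform."""
    return float(np.mean((maps_fixed - maps_moving) ** 2))

seg_mri = np.random.randint(0, 3, (32, 32, 32))  # coarse MRI labels
seg_us = np.random.randint(0, 3, (32, 32, 32))   # coarse US labels
cost = map_match_cost(label_distance_maps(seg_mri, [1, 2]),
                      label_distance_maps(seg_us, [1, 2]))
```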
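
Encoding landmarks as Gaussian heat maps, as in the abdominal view-plus-landmark paper, is simple to reproduce. A numpy sketch of target-map generation; sigma is a free parameter.

```python
import numpy as np

def gaussian_heatmap(shape, center, sigma=5.0):
    """Target map for one landmark: a 2D Gaussian bump centered on the
    landmark; the network regresses these maps rather than raw (x, y)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# one channel per landmark, e.g. 14 landmarks on a 256 x 256 image
landmarks = [(40, 60), (200, 128)]  # (row, col) examples
target = np.stack([gaussian_heatmap((256, 256), c) for c in landmarks])

# at inference, the landmark estimate is the per-channel argmax
est = [np.unravel_index(ch.argmax(), ch.shape) for ch in target]
```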
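
The breast cancer paper’s class activation maps follow the standard CAM recipe (Zhou et al., CVPR 2016): weight the last convolutional feature maps by the final linear layer’s weights for the predicted class. A PyTorch sketch, assuming the usual global-average-pool-then-linear head:

```python
import torch

def class_activation_map(feat, fc_weight, class_idx):
    """feat: (C, H, W) last-conv features for one image.
    fc_weight: (n_classes, C) weights of the linear layer that follows
    global average pooling. Returns an (H, W) evidence map."""
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], feat)
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1]

# toy example: 32 feature maps, 2 classes (benign / malignant)
feat = torch.randn(32, 14, 14)
fc_w = torch.randn(2, 32)
cam = class_activation_map(feat, fc_w, class_idx=1)  # malignant evidence
```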
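
The brain shift compensation paper interpolates sparse landmark displacements into a dense deformation field with a Gaussian process, with uncertainty guiding where to add landmarks. A simplified scikit-learn sketch; the RBF kernel is my substitution for the paper’s variogram-based covariance, and the coordinates are made up.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# sparse matched landmarks: positions and their displacement vectors
pos = np.array([[10., 12.], [40., 8.], [25., 30.], [5., 35.]])
disp = np.array([[1.0, 0.2], [0.8, -0.1], [1.2, 0.4], [0.9, 0.0]])

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=15.0) + WhiteKernel(noise_level=1e-2),
    normalize_y=True)
gp.fit(pos, disp)  # one GP, two output dimensions (dx, dy)

# dense deformation field plus per-point uncertainty on a grid
gy, gx = np.mgrid[0:50:5, 0:50:5]
grid = np.stack([gy.ravel(), gx.ravel()], axis=1).astype(float)
field, std = gp.predict(grid, return_std=True)
# high 'std' flags regions where adding a landmark would help most
```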
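
The left atrium paper’s shape prior, passing a raw CNN segmentation through an autoencoder trained only on true segmentations, effectively projects the mask onto a learned shape manifold. A PyTorch sketch with made-up layer sizes:

```python
import torch
import torch.nn as nn

class ShapePriorAE(nn.Module):
    """Trained only on ground-truth masks; at test time a noisy CNN
    mask is encoded and decoded, pulling it toward plausible shapes."""

    def __init__(self, latent=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 16 * 16, latent))
        self.dec = nn.Sequential(
            nn.Linear(latent, 16 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
            nn.Sigmoid())

    def forward(self, mask):  # mask: (B, 1, 64, 64)
        return self.dec(self.enc(mask))

raw_mask = (torch.rand(1, 1, 64, 64) > 0.5).float()  # noisy CNN output
regularized = ShapePriorAE()(raw_mask)  # shape-plausible mask
```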
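
Finally, for the vessel tracking paper, a Kalman filter propagates each vessel’s state across frames. A bare-bones constant-velocity filter in numpy; the state layout and noise levels are illustrative, not the paper’s.

```python
import numpy as np

# state x = [cx, cy, vx, vy]: vessel center plus its velocity
F = np.array([[1., 0., 1., 0.],   # position += velocity each frame
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
H = np.array([[1., 0., 0., 0.],   # only the center is measured
              [0., 1., 0., 0.]])
Q = np.eye(4) * 1e-2              # process noise
R = np.eye(2) * 1.0               # measurement (segmentation) noise

x = np.array([50., 50., 0., 0.])  # initial center, at rest
P = np.eye(4)                     # initial state covariance

def kalman_step(x, P, z):
    """One predict + update cycle; z is the vessel center located by
    the segmentation in the current frame."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([51.0, 50.5]), np.array([52.1, 51.0])]:
    x, P = kalman_step(x, P, z)
```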
