MICCAI 2019: The 22nd International Conference on Medical Image Computing and Computer Assisted Intervention
October 13–17, All Day

Event Details

Kitware’s Director of Medical Computing Andinet Enquobahrie and Senior Director of Strategic Initiatives Stephen Aylward will present at the International Conference on Medical Image Computing and Computer Assisted Intervention.

Development and Face Validation of Ultrasound-Guided Renal Biopsy Virtual Trainer

To be presented at the joint MIAR, AE-CAI and CARE workshop:

The 7th International Conference on Medical Imaging and Augmented Reality (MIAR),
The 13th Augmented Environments for Computer Assisted Interventions (AE-CAI),
and The 6th Computer Assisted and Robotic Endoscopy (CARE)

https://workshops.ap-lab.ca/aecai-care-miar-2019/

Abstract: The overall prevalence of chronic kidney disease (CKD) in the general population is approximately 14 percent, with more than 661,000 Americans living with kidney failure. Ultrasound-guided renal biopsy is a critically important tool in the evaluation and management of renal pathologies, and it requires competent biopsy technique and skill to consistently obtain high-yield biopsy samples. This paper presents KBVTrainer, a virtual simulator we developed to train clinicians and improve procedural skill competence in ultrasound-guided renal biopsy. The simulator was built using low-cost hardware components and open source software libraries. We conducted a face validation study with five experts who were either adult/pediatric nephrologists or interventional/diagnostic radiologists. All of the experts had considerable experience performing US-guided needle biopsies within their specialty, ranging from 3 to 23 years. The trainer was rated very highly (>4.4) for the usefulness of the real US images (highest at 4.8), its potential usefulness in training for needle visualization, tracking, steadiness and hand-eye coordination, and its overall promise as a tool for training US-guided needle biopsies. The lowest score, 2.4, was given for the look and feel of the US probe and needle compared to clinical practice; the force feedback received a moderate score of 3.0. The clinical experts provided abundant verbal and written subjective feedback and were highly enthusiastic about using the trainer as a valuable tool for future trainees. As part of future work, we will improve the technology based on clinical and user feedback, develop automated skill assessment and tracking modules deployable in a web-based application, and conduct a clinical validation study.


Active Learning Technique for Multimodal Brain Tumor Segmentation using Limited

To be presented at Medical Image Learning with Less Labels and Imperfect Data

https://www.hvnguyen.com/lesslabelsimperfectdataml

Abstract: Image segmentation is an essential step in biomedical image analysis. In recent years, deep learning models have achieved significant success in segmentation. However, deep learning requires large annotated datasets to train these models, which can be challenging to obtain in the biomedical imaging domain. In this paper, we aim to accomplish biomedical image segmentation with limited labeled data using active learning. We present a deep active learning framework that selects additional data points to be annotated by combining U-Net with an efficient and effective query strategy that captures the most uncertain and representative points. The algorithm decouples the representativeness criterion by first finding the core points in the unlabeled pool and then selecting the most uncertain points from this reduced pool, which differ from the labeled pool. In our experiment, only 13% of the dataset was required with active learning to outperform the model trained on the entire 2018 MICCAI Brain Tumor Segmentation (BraTS) dataset. Thus, active learning reduced the amount of labeled data required for image segmentation without a significant loss in accuracy.
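The two-step query strategy the abstract describes (first reduce the unlabeled pool to representative core points, then pick the most uncertain of those) can be sketched roughly as follows. This is an illustrative NumPy mock-up, not the authors' implementation: the function names are ours, and the choice of predictive entropy as the uncertainty measure and greedy k-center as the core-point selector are assumptions.

```python
import numpy as np

def entropy(probs):
    # Predictive entropy per sample; probs has shape (n_samples, n_classes).
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def greedy_k_center(features, labeled_features, n_core):
    # Greedy k-center: repeatedly pick the unlabeled point farthest from
    # everything already selected or already labeled.
    dists = np.min(
        np.linalg.norm(
            features[:, None, :] - labeled_features[None, :, :], axis=2
        ),
        axis=1,
    )
    chosen = []
    for _ in range(n_core):
        idx = int(np.argmax(dists))
        chosen.append(idx)
        # Selecting idx drives its own distance to zero, so it is not re-picked.
        dists = np.minimum(dists, np.linalg.norm(features - features[idx], axis=1))
    return chosen

def select_queries(probs, features, labeled_features, n_core, n_query):
    # Step 1: reduce the unlabeled pool to representative core points.
    core = greedy_k_center(features, labeled_features, n_core)
    # Step 2: from the core pool, query the most uncertain points.
    order = np.argsort(-entropy(probs[core]))[:n_query]
    return [core[i] for i in order]
```

In the paper's setting the features and class probabilities would come from the U-Net's embeddings and softmax outputs; here they are stand-in arrays.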

Intramodality Domain Adaptation using Self Ensembling and Adversarial Training

To be presented at Domain Adaptation and Representation Transfer (DART) workshop

https://sites.google.com/view/dart2019/

Abstract: Advances in deep learning techniques have led to compelling achievements in medical image analysis. However, the performance of neural network models degrades drastically when the test data come from a domain different from the training data. In this paper, we present and evaluate a novel unsupervised domain adaptation (DA) framework for semantic segmentation that uses self-ensembling and adversarial training to effectively tackle domain shift between MR images. We evaluate our method on two publicly available MRI datasets to address two different types of domain shift: on the BraTS dataset to mitigate the shift between high-grade and low-grade gliomas, and on the SCGM dataset [13] to tackle cross-institutional domain shift. Through extensive evaluation, we show that our method achieves favorable results on both datasets.
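Self-ensembling for domain adaptation is commonly realized as a mean-teacher scheme: the teacher's weights are an exponential moving average (EMA) of the student's, and a consistency loss pulls the student's predictions on target-domain inputs toward the teacher's. The minimal NumPy sketch below illustrates just those two ingredients; it is our assumption of the general technique, not the paper's code, and the adversarial component is omitted.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    # Teacher weights track an exponential moving average of the student's;
    # alpha controls how slowly the teacher follows.
    return {k: alpha * teacher_w[k] + (1 - alpha) * student_w[k]
            for k in teacher_w}

def consistency_loss(student_probs, teacher_probs):
    # Mean squared difference between student and teacher predictions
    # on (differently augmented) unlabeled target-domain inputs.
    return float(np.mean((student_probs - teacher_probs) ** 2))
```

During training, the total objective would combine a supervised segmentation loss on source-domain labels with this consistency term on unlabeled target-domain images, calling `ema_update` after each student optimizer step.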

“Proposal Writing 101” at the MICCAI “Tutorial On Publishing, GRant writing And career Development” (TOPGRAD) workshop

Presented by Kitware’s Senior Director of Strategic Initiatives Stephen Aylward on October 13th

Abstract: The TOPGRAD tutorial provides advice on publishing, grant writing, and career trajectories in the MICCAI field. The tutorial consists of a number of talks and discussions on the review process, publishing strategies (including open access), and grant-writing requirements. In addition, we hold an extensive Q&A session between early-career researchers and experts in the relevant fields.

Time

October 13 (Sunday) – 17 (Thursday)

Location

InterContinental Shenzhen

Overseas Chinese City, Nanshan, Shenzhen, China

Questions or comments are always welcome!
