
ASCB-EMBO 2019 Poster: Deep Learning Minimizes the Impact of Fundamental Microscopy Limitations

Updated: Dec 6, 2019

Presentation time: Sunday Dec 8th at 12:00 PM

Poster number: P56/B57

Abstract

Fluorescence microscopy has contributed to numerous major discoveries in the life sciences, despite the inherent limitations of all imaging systems. Typically, a light microscope excels at only one of the three core factors that govern image quality and sample viability: spatial resolution, temporal resolution, and light exposure. To minimize the impact of these trade-offs, deep learning (DL) enabled microscopy image restoration is starting to be adopted (Content-Aware Image Restoration (CARE) [1] and our own work [2]). Here we illustrate how a customized Residual Channel Attention Network (RCAN) [3] can be used to restore images acquired on either an instant structured illumination microscope (iSIM) or a point-scanning confocal microscope with a resonant scanner.

3D live-cell imaging of mitochondria and lysosomes using iSIM: training used 20 3D image pairs depicting cells with Tom20-labeled mitochondria and 20 3D image pairs with Lamp1-labeled lysosomes. All 3D images were 1.9K x 1.5K x 14 voxels. The raw input images (RII) were captured at 3.4 W/cm2 laser intensity, and the ground-truth (GT) data were acquired at 0.37 kW/cm2. Routine live-cell iSIM imaging is done at ~100 W/cm2, which significantly limits the length of recordings. The trained model was applied to a 3D+time series (500 time points) acquired at low laser power (3.4 W/cm2) that was not used for training. Applying the trained model to new RII significantly improved the signal-to-noise ratio (SNR), spatial resolution, and overall appearance relative to the RII.

Fast image acquisition of cleared 3D samples using a confocal system in resonant mode: training images were created with a confocal microscope with a 40x 1.3 NA objective lens and a resonant scanner in single-line mode (RII) or 64-line-average mode (GT). Acquiring the GT data took >3x longer than acquiring the equivalent raw images. The RII had low SNR and included major pixel-shift artifacts, as is typical for resonant scanning. The GT had high SNR and no noticeable artifacts. Training used 8 3D image pairs depicting fluorescently labeled neurons. Applying the trained DL model to new RII yielded images with high SNR and spatial resolution.

We demonstrate that DL can greatly reduce the amount of light the sample is exposed to, allowing a dramatic extension of experiment duration while retaining the capability to perform super-resolution imaging. We demonstrate this capability by studying the dynamic interaction of mitochondria and lysosomal vesicles using iSIM. In addition, we discuss how the same approach can significantly reduce the time needed to image optically cleared brain samples on a point-scanning confocal system equipped with a resonant scanner, while maintaining spatial resolution.
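The channel attention mechanism at the heart of RCAN [3] reweights feature-map channels with a squeeze-and-excitation style gate, so the network can emphasize informative channels and suppress noisy ones. The NumPy sketch below is purely illustrative: the function name, weight shapes, and reduction ratio are our own assumptions, not the customized implementation from the poster.

```python
import numpy as np

def channel_attention(feat, w1, b1, w2, b2):
    """Illustrative channel attention (squeeze-and-excitation style).

    feat: (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) -- bottleneck MLP weights, reduction r.
    """
    # Squeeze: global average pool collapses each channel to one statistic
    pooled = feat.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in [0, 1]
    hidden = np.maximum(0.0, w1 @ pooled + b1)           # shape (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))     # shape (C,)
    # Rescale: multiply each channel by its learned attention weight
    return feat * gate[:, None, None]
```

In the full RCAN, a gate like this sits inside each residual channel attention block, applied after two convolutions and before the skip connection.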
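The low-power versus high-power acquisition scheme described above amounts to collecting paired images whose main difference is photon shot noise. As a hedged sketch of the idea (the photon budgets, function name, and Poisson noise model are illustrative assumptions, not the poster's acquisition protocol), such pairs can be simulated from a clean image:

```python
import numpy as np

def simulate_pair(clean, low_photons=30, high_photons=3000, seed=0):
    """Make a (noisy RII, high-SNR GT) pair from a clean image in [0, 1].

    Shot noise scales with the photon budget, so a small budget mimics
    low-intensity excitation and a large budget mimics the GT acquisition.
    """
    rng = np.random.default_rng(seed)
    # Draw Poisson photon counts at each budget, then rescale to [0, 1]
    rii = rng.poisson(clean * low_photons) / low_photons
    gt = rng.poisson(clean * high_photons) / high_photons
    return rii.astype(np.float32), gt.astype(np.float32)
```

Synthetic pairs like these are useful for sanity-checking a restoration pipeline before committing real low-power and high-power acquisitions to training.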



Authors

H. Sasaki, J. Chen, H. Lai, Y. Yi, Y. Su, C. Chou Huang, S.J. Lee, H. Zhao, H. Shroff, L.A.G. Lucas

  • Y. Yi and H. Zhao are part of the Department of Restorative Sciences, School of Dentistry, Texas A&M University, Dallas, TX

  • J. Chen and H. Shroff are part of the Advanced Imaging and Microscopy Resource, National Institutes of Health, Bethesda, MD

  • H. Shroff is also part of the Section on High Resolution Optical Imaging, National Institute of Biomedical Imaging and Bioengineering, Bethesda, MD

  • Y. Su is part of the National Institute of Biomedical Imaging and Bioengineering, NIH, Bethesda, MD

  • Other authors are part of DRVision Technologies LLC, Bellevue, WA


References

  1. Weigert M, et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat Methods 15, 1090-1097 (2018)

  2. Sasaki H, et al. Deep learning enables long term gentle super resolution imaging. Poster presented at ASCB-EMBO 2018, San Diego CA

  3. Zhang Y, et al. Image super-resolution using very deep residual channel attention networks. ECCV 2018, 294-310