{"id":10,"date":"2017-11-17T17:47:26","date_gmt":"2017-11-17T17:47:26","guid":{"rendered":"https:\/\/my.dev.vanderbilt.edu\/bagl\/?page_id=10"},"modified":"2024-09-06T16:16:33","modified_gmt":"2024-09-06T16:16:33","slug":"research","status":"publish","type":"page","link":"https:\/\/my.dev.vanderbilt.edu\/bagl\/research\/","title":{"rendered":"Research"},"content":{"rendered":"<h3><strong>Lab Publications:<\/strong><\/h3>\n<p>Lab publications can be found in Professor Noble&#8217;s NCBI <a href=\"https:\/\/www.ncbi.nlm.nih.gov\/myncbi\/jack.noble.1\/bibliography\/public\/\">Bibliography<\/a>.<\/p>\n<p>Datasets from lab publications are being made available upon request for NIH-funded studies conducted by the BAGL lab.\u00a0Please contact Professor Noble by email with such requests. You will be provided with a Box link to download the data spreadsheet. By downloading the data you agree to (1) only use the data for educational or research purposes and not for commercial purposes; (2) allow the BAGL lab to keep a record of your download of the data for grant reporting purposes, with potential public release of information regarding the institution(s) where you are affiliated; (3) not share the datasets directly with any other individual, but rather direct others to this website to request their own download link; and (4) acknowledge the grants that supported creation of the datasets when the data is used in a publication or presentation. 
The grant numbers that need to be cited are listed in the downloaded data spreadsheet.<\/p>\n<h3><strong>Research Projects:<\/strong><\/h3>\n<p><a href=\"#DigitalTwins\"><strong>Digital twins for cochlear implants:<\/strong>\u00a0Image processing, deep learning, and bio-modeling methods to construct patient-specific cochlear implant electro-anatomical models for neural activation simulation, treatment planning, and customized processor programming<\/a><\/p>\n<p><a href=\"#InsertionGuidance\"><strong>Optimized cochlear implant placement<\/strong>: Image processing, deep learning, modeling, and image guidance methods to plan, guide, and improve surgical placement of cochlear implants.<\/a><\/p>\n<p><a href=\"#EarSegmentation\"><strong>Image segmentation for ear anatomy:<\/strong> Automatic deep learning, model-based, and traditional image processing algorithms for accurately and robustly localizing critical ear anatomy.<\/a><\/p>\n<p><a href=\"#TumorResection\"><strong>Image-guided brain tumor resection surgery:<\/strong> Deep learning methods to localize brain tumor tissue and register the surgical field to segmented pre-operative MR images using intra-operative ultrasound imaging.<\/a><\/p>\n<hr \/>\n<h3><a name=\"DigitalTwins\"><\/a><br \/>\n<strong>Digital twins for cochlear implants<\/strong><\/h3>\n<p><em>Funded by grant\u00a0R01DC014037 from the NIDCD.<\/em><\/p>\n<p>We are developing comprehensive patient-specific cochlear implant electro-anatomical models. 
The models could permit neural activation simulation, treatment planning, and customized processor programming optimization strategies that have never before been possible.<\/p>\n<p>Our processing pipeline (shown below) involves: (1) localizing the patient&#8217;s cochlear anatomy and electrodes in CT, (2) estimating patient-specific tissue resistivity and voltage maps using electric field models, and (3) estimating neural fiber health and activation patterns using neural activation models.<\/p>\n<p><a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/DigitalTwin3.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-176\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/DigitalTwin3.png\" alt=\"DigitalTwin3\" width=\"755\" height=\"546\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/DigitalTwin3.png 755w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/DigitalTwin3-300x217.png 300w\" sizes=\"auto, (max-width: 755px) 100vw, 755px\" \/><\/a><\/p>\n<p>An example of neural activation patterns that we estimate for a CI patient is shown in the following video.<\/p>\n<div style=\"width: 788px;\" class=\"wp-video\"><!--[if lt IE 9]><script>document.createElement('video');<\/script><![endif]-->\n<video class=\"wp-video-shortcode\" id=\"video-10-1\" width=\"788\" height=\"728\" poster=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/stim_poster.png\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/Stim-1.mp4?_=1\" \/><a 
href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/Stim-1.mp4\">https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/Stim-1.mp4<\/a><\/video><\/div>\n<p>&nbsp;<\/p>\n<p><strong>Patient-specific high resolution tissue resistivity maps for cochlear implant electric field modeling<\/strong><\/p>\n<p>We are currently developing and validating multiple approaches for constructing high resolution electric field models for patient-specific cochlear implant simulation. For example, we are evaluating custom deep learning architectures based on cycle-consistent generative adversarial networks (CycleGAN) to estimate electric fields from tissue label maps.\u00a0Our network is trained with weakly supervised learning using custom, physics-based loss terms. Our proposed method generates high-quality predictions while substantially reducing the time needed to construct models.<\/p>\n<p><a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/cycleGANEP2.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-182\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/cycleGANEP2.png\" alt=\"cycleGANEP2\" width=\"820\" height=\"285\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/cycleGANEP2.png 820w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/cycleGANEP2-300x104.png 300w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/cycleGANEP2-768x267.png 768w\" sizes=\"auto, (max-width: 820px) 100vw, 820px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p><em>Ziteng Liu and Jack H. Noble, \u201cPatient-specific electro-anatomical modeling of cochlear implants using deep neural networks,\u201d Proceedings of the SPIE Conference on Medical Imaging, vol. 12034, pp. 
12034-14, 2022.<\/em><\/p>\n<p>We are also developing conditional generative adversarial network (cGAN) deep learning techniques to synthesize high resolution \u03bcCT from CT\u00a0images to create high resolution tissue class maps.\u00a0\u00b5CT images offer significantly more detail than CT images, with ~1000x better volume resolution. However, \u00b5CT images cannot be acquired <em>in vivo<\/em>.\u00a0In this work, we aim to investigate whether a cGAN approach can learn how to produce a very detailed\u00a0synthetic\u00a0\u00b5CT\u00a0image using a clinical resolution patient CT scan as input.<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-39 \" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/cgan-1024x403.png\" width=\"613\" height=\"246\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/atlas_based_model.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-183\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/atlas_based_model.png\" alt=\"atlas_based_model\" width=\"282\" height=\"147\" \/><\/a>We have also developed traditional atlas-based methods to create patient customized stimulation\u00a0models. 
In this approach, we use high resolution\u00a0\u03bcCTs of cochlear specimens that are non-rigidly registered to patient CT images\u00a0to facilitate atlas-based tissue resistivity map estimation.<\/p>\n<p><em>Cakir A., Dawant B.M., Noble J.H., \u201cDevelopment of a microCT-based patient-specific model of the electrically stimulated cochlea,\u201d Lecture Notes in Computer Science \u2013 Proceedings of MICCAI, 2017.<\/em><\/p>\n<p><em>Cakir A., Dawant B.M., Noble J.H., \u201cEvaluation of a \u00b5CT-based electro-anatomical cochlear implant model,\u201d Proceedings of the 2016 SPIE Conf. on Medical Imaging, vol. 9786, pp. 97860M, 2016.<\/em><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><strong><a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/NeuralHealth.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-184\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/NeuralHealth.png\" alt=\"NeuralHealth\" width=\"232\" height=\"167\" \/><\/a>Localization and activation simulation of auditory nerve fibers<\/strong><\/p>\n<p>We have developed a method to estimate the neural health of individual auditory nerve fiber bundles by optimizing health parameters of our neural models so that model-simulated physiological measurements match those acquired from the patient&#8217;s CI.<\/p>\n<p><a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/AGF.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-186\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/AGF.png\" alt=\"AGF\" width=\"535\" height=\"129\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/AGF.png 535w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/AGF-300x72.png 300w\" sizes=\"auto, 
(max-width: 535px) 100vw, 535px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>These methods provide an unprecedented window into the health of the inner ear, opening the door for studying population variability and intra-subject neural health dynamics.<\/p>\n<p><em>Z Liu, A Cakir, JH Noble, \u201cAuditory Nerve Fiber Health Estimation Using Patient Specific Cochlear Implant Stimulation Models,\u201d Lecture Notes in Computer Science \u2013 Proceedings of the International Workshop on Simulation and Synthesis in Medical Imaging, vol. 12417, pp 184-194, 2020.<\/em><\/p>\n<hr \/>\n<h3><\/h3>\n<p>&nbsp;<\/p>\n<p><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-medium wp-image-40\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/p11-300x276.png\" alt=\"p11\" width=\"300\" height=\"276\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/p11-300x276.png 300w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/p11.png 383w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/>Image-Guided Cochlear Implant Programming<\/strong><\/p>\n<p>Cochlear Implants (CIs) restore hearing using an electrode array that is surgically implanted into the cochlea. In recent work, a multidisciplinary team of researchers at Vanderbilt, Drs. Jack Noble, Benoit Dawant, Ren\u00e9 Gifford, and Robert Labadie, has developed image processing techniques that have permitted accurate detection of the post-operative position of CI electrodes relative to the auditory nerve cells they stimulate (<a title=\"video\" href=\"https:\/\/my.dev.vanderbilt.edu\/jacknoble\/wp-content\/uploads\/sites\/2067\/2016\/03\/v5.mp4\">watch video of cochlear implant in CT image<\/a>). 
Using these image processing techniques, they have developed the first Image-Guided CI Programming (IGCIP) strategy. The IGCIP strategy is to deactivate CI electrodes that the imaging information suggests create overlapping stimulation patterns (<a href=\"https:\/\/my.dev.vanderbilt.edu\/jacknoble\/wp-content\/uploads\/sites\/2067\/2016\/03\/IGCIP.mp4\">steps of IGCIP system shown in this video<\/a>). Current studies show that IGCIP leads to significantly better hearing outcomes with CIs. The current goals of this project are to continue IGCIP validation studies and to develop new patient-customized IGCIP strategies that could provide more objective information to the CI programming process and lead to CI programs that better approximate natural hearing performance.<\/p>\n<p><em>Noble JH, Labadie RF, Gifford RH, Dawant BM, \u201cImage-guidance enables new methods for customizing cochlear implant stimulation strategies,\u201d IEEE Trans Neural Syst Rehabil Eng. vol. 21(5):820-9, 2013.<\/em><\/p>\n<p><em>Noble JH, Gifford RH, Hedley-Williams AJ, Dawant BM, and Labadie RF, \u201cClinical evaluation of an image-guided cochlear implant programming strategy,\u201d\u00a0Audiology &amp; Neurotology, vol. 19, pp. 400-11, 2014.<\/em><\/p>\n<p><em>Noble J.H., Hedley-Williams A.J., Sunderhaus L.W., Dawant B.M., Labadie R.F., Camarata S.M., Gifford R.H., \u201cInitial results with image-guided cochlear implant programming in children,\u201d\u00a0Otology &amp; Neurotology\u00a037(2), pp. 69-9, 2016.<\/em><\/p>\n<hr \/>\n<p><a name=\"InsertionGuidance\"><\/a><\/p>\n<p>&nbsp;<\/p>\n<h3><strong>Optimized cochlear implant placement<\/strong><\/h3>\n<p>We are developing methods to aid surgeons in optimizing surgical placement of cochlear implant electrode arrays. 
Optimized array placement leads to less surgical trauma and has been shown to be associated with improved hearing outcomes.<\/p>\n<p><strong>Image-guided Cochlear Implant Insertion Techniques<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-100\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/igi-1.png\" alt=\"igi\" width=\"546\" height=\"364\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/igi-1.png 546w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/igi-1-300x200.png 300w\" sizes=\"auto, (max-width: 546px) 100vw, 546px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>Cochlear implant (CI) electrode arrays are treated as one size fits most, despite the fact that sub-optimal placement of the array is the norm. In this study, we\u00a0are investigating the use of patient-customized insertion plans, created by analyzing pre-operative CTs. Temporal bone studies show patient customized cochlear implant insertion techniques achieve better positioning of electrode arrays. We are currently evaluating whether these techniques improve electrode positioning in patients.<\/p>\n<p><em>Jack Noble and Robert Labadie, \u201cPreliminary results with image-guided cochlear implant insertion techniques,\u201d Otology &amp; Neurotology, vol. 39(7), pp. 922-928, 2018.<\/em><\/p>\n<hr \/>\n<p><strong>Self-supervised segmentation of cochlear implant insertion tools in surgical videos<\/strong><\/p>\n<p>Semantic segmentation of tools in surgical microscope video can permit real-time feedback to surgeons regarding adherence to the surgical plan. 
As manual segmentation of many video frames would be time-consuming, we have developed a semi-supervised deep learning method to segment cochlear implant insertion tools using the contrastive learning framework shown below.<\/p>\n<p><strong><a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/minmaxsimilaritynetwork-1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-199\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/minmaxsimilaritynetwork-1.png\" alt=\"minmaxsimilaritynetwork\" width=\"639\" height=\"354\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/minmaxsimilaritynetwork-1.png 639w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/minmaxsimilaritynetwork-1-300x166.png 300w\" sizes=\"auto, (max-width: 639px) 100vw, 639px\" \/><\/a><\/strong><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>This contrastive learning framework, which we call &#8220;Min-Max Similarity,&#8221; enables accurate real-time segmentation of the CI insertion sheath and requires only a small number of labelled video frames for training.<\/p>\n<p><a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/minmaxsimilarityresults-1.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-198\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/minmaxsimilarityresults-1.png\" alt=\"minmaxsimilarityresults\" width=\"712\" height=\"609\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/minmaxsimilarityresults-1.png 712w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/minmaxsimilarityresults-1-300x257.png 300w\" sizes=\"auto, (max-width: 712px) 100vw, 712px\" \/><\/a><a 
href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/minmaxsimilarityresults.png\"><br \/>\n<\/a><em>A Lou, K Tawfik, X Yao, Z Liu, J Noble, \u201cMin-Max Similarity: A Contrastive Semi-Supervised Deep Learning Network for Surgical Tools Segmentation,\u201d IEEE Trans. on Medical Imaging, vol. 42(10), pp. 2832-2841, 2023<\/em><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-medium wp-image-136\" src=\"https:\/\/my.dev.vanderbilt.edu\/jacknoble\/wp-content\/uploads\/sites\/2067\/2016\/03\/simulator-300x103.png\" alt=\"\" width=\"300\" height=\"103\" \/>Cochlear Implant Electrode Array Placement Simulator<\/strong><\/p>\n<p>The primary goals when placing the CI electrode array are to fully insert the array into the cochlea while minimizing trauma to the cochlea. Studying the relationship between surgical outcome and various surgical techniques has been difficult since trauma and electrode placement are generally unknown without histology. Our group has created a CI placement simulator that combines an interactive 3D visualization environment with a haptic-feedback-enabled controller. Surgical techniques and patient anatomy can be varied between simulations so that outcomes can be studied under varied conditions. With this system, we envision that through numerous trials we will be able to statistically analyze how outcomes relate to surgical techniques.<\/p>\n<p><em>Turok RL, Labadie RF, Wanna GB, Dawant BM, Noble JH, \u201cCochlear implant simulator for surgical technique analysis,\u201d Proceedings of the SPIE Conf. 
on Medical Imaging, 9036, 903619, 2014.<\/em><\/p>\n<hr \/>\n<h3><a name=\"EarSegmentation\"><\/a><\/h3>\n<p>&nbsp;<\/p>\n<h3><strong>Image segmentation for ear anatomy<\/strong><\/h3>\n<p><strong>Self-supervised Registration and Segmentation of the Ossicles using only one labelled dataset<\/strong><\/p>\n<p><a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/AtlasBasedUNet.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-204\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/AtlasBasedUNet.png\" alt=\"AtlasBasedUNet\" width=\"453\" height=\"236\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/AtlasBasedUNet.png 453w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/AtlasBasedUNet-300x156.png 300w\" sizes=\"auto, (max-width: 453px) 100vw, 453px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>Recently published image segmentation methods that leverage\u00a0deep learning usually rely on a large number of manually defined ground truth labels for training. However, preparing such a dataset is a laborious and time-consuming task. We propose\u00a0a novel technique using a self-supervised 3D-UNet that produces a dense deformation field between an atlas and a target image, which can be used for atlas-based segmentation of the ossicles. Our results show that our method outperforms traditional image segmentation methods, generating a more accurate boundary around the ossicles.<\/p>\n<p><em>Yike Zhang, Jack H. Noble, \u201cSelf-supervised registration and segmentation on ossicles with a single ground truth label,\u201d Proceedings of the SPIE Conference on Medical Imaging, vol. 12466, pp. 
12466-80, 2023 (in press)<\/em><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-medium wp-image-129\" src=\"https:\/\/my.dev.vanderbilt.edu\/jacknoble\/wp-content\/uploads\/sites\/2067\/2016\/03\/uCT-300x267.png\" alt=\"\" width=\"300\" height=\"267\" \/><strong>Statistical Shape Model-Based Automatic Segmentation of Intra-Cochlear Anatomy<\/strong><\/p>\n<p>Segmentation of intracochlear structures (Scala Tympani, Scala Vestibuli and Media, and Modiolus) would aid surgical guidance and post-surgery analysis of electrode positioning, which could help improve placement outcomes and lead to better hearing outcomes. However, intracochlear structures are too small to be seen in conventional in vivo imaging, thus traditional segmentation techniques are inadequate. In this work, we circumvent this problem by creating a weighted active shape model with micro CT (\u03bcCT) scans of the cochlea acquired ex-vivo (<a href=\"https:\/\/my.dev.vanderbilt.edu\/jacknoble\/wp-content\/uploads\/sites\/2067\/2016\/03\/uct.mp4\">see video of Scala Tympani=Red, Scala Vestibuli=Blue, and Modiolus=green manually segmented in \u03bcCT<\/a>). We then use this model to segment conventional CT scans. The model is fit to the partial information available in the conventional scans and used to estimate the position of structures not visible in these images. Quantitative evaluation of our method, made possible by the set of \u03bcCTs, results in Dice similarity coefficients averaging 0.77 and surface errors of 0.15 mm.\u00a0<a href=\"https:\/\/my.dev.vanderbilt.edu\/jacknoble\/wp-content\/uploads\/sites\/2067\/2016\/03\/cochleassm.mp4\">A video of the shapes in the model is shown here (Red=ST, Blue=SV\/SM, Green=Promontory)<\/a>.<\/p>\n<p><em>Noble, J.H., Labadie, R.F., Majdani, O., Dawant, B.M., \u201cAutomatic segmentation of intra-cochlear anatomy in conventional CT\u201d,\u00a0IEEE Trans. on Biomedical. Eng., Vol. 58, No. 9, pp. 
2625-32, 2011.<\/em><\/p>\n<p><em>Noble, J.H., Gifford, R.H., Labadie, R.F., Dawant, B.M., \u201cStatistical Shape Model Segmentation and Frequency Mapping of Cochlear Implant Stimulation Targets in CT,\u201d N. Ayache et al. (Eds.): MICCAI 2012, Part II, LNCS 7511, pp. 421-428, 2012.<\/em><\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p><strong><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-medium wp-image-126\" src=\"https:\/\/my.dev.vanderbilt.edu\/jacknoble\/wp-content\/uploads\/sites\/2067\/2016\/03\/electrodelocalization2-300x258.png\" alt=\"\" width=\"300\" height=\"258\" \/>Automatic graph-based localization of cochlear implant electrodes in CT<\/strong><\/p>\n<p>To facilitate clinical translation of IGCIP, we are developing fully automated image analysis algorithms that permit accurately locating CI electrodes in post-implantation CT images (<a href=\"https:\/\/my.dev.vanderbilt.edu\/jacknoble\/wp-content\/uploads\/sites\/2067\/2016\/03\/CIinCT5.mp4\">video of CI electrodes in CT<\/a>). We have developed a novel\u00a0graph-based method for localizing electrode arrays in CTs that is effective for various implant models. It relies on an algorithm for finding a path of fixed length in a graph that optimizes an intensity- and shape-based cost function and achieves maximum localization errors that are sub-voxel (<a href=\"https:\/\/my.dev.vanderbilt.edu\/jacknoble\/wp-content\/uploads\/sites\/2067\/2016\/03\/ElectrodeLocalization2.mp4\">video of electrode localization procedure<\/a>). These results indicate that our methods could be used in a clinical IGCIP system.<\/p>\n<p><em>Y Zhao, S Chakravorti, RF Labadie, BM Dawant, JH Noble, \u201cAutomatic graph-based method for localization of cochlear implant electrode arrays in clinical CT with sub-voxel accuracy,\u201d Medical Image Analysis, vol. 52, pp. 
1-12, 2019.<\/em><\/p>\n<p><em>Yiyuan Zhao, Robert Labadie, Benoit Dawant, Jack Noble, \u201cValidation of cochlear implant electrode localization techniques using \u00b5CTs,\u201d Journal of Medical Imaging, vol. 5(3), pp. 035001, 2018.<\/em><\/p>\n<p><em>Yiyuan Zhao, Benoit Dawant, and Jack Noble, \u201cAutomatic localization of closely-spaced cochlear implant electrode arrays in clinical CTs,\u201d Med. Phys., vol. 45(11), pp. 5030-5040, 2018.<\/em><\/p>\n<hr \/>\n<h3><a name=\"TumorResection\"><\/a><\/h3>\n<p>&nbsp;<\/p>\n<h3><strong>Automatic segmentation of brain tumor resections in intraoperative\u00a0ultrasound images using U-Net<img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-94\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/usunet-2.png\" alt=\"usunet\" width=\"602\" height=\"298\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/usunet-2.png 602w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2689\/2017\/11\/usunet-2-300x149.png 300w\" sizes=\"auto, (max-width: 602px) 100vw, 602px\" \/><\/strong><\/h3>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>To compensate for intraoperative brain tissue deformation, computer-assisted intervention methods\u00a0have been used to register preoperative Magnetic Resonance (MR) images with intraoperative images. In order to\u00a0model the deformation due to tissue resection, the resection cavity needs to be segmented in intraoperative images. In a collaborative study with Prof. Matthieu Chabanas from the Univ. of Grenoble, we are developing an automatic method to segment the resection cavity in intraoperative ultrasound (iUS)\u00a0images using\u00a0U-Net-based deep neural networks.<\/p>\n<p><em>Fran\u00e7ois-Xavier Carton, Matthieu Chabanas, Florian Le Lann, Jack H. 
Noble \u201cAutomatic segmentation of brain tumor resections in intraoperative ultrasound images,\u201d Journal of Medical Imaging, vol. 7(3), pp. 031503, 2020.<\/em><\/p>\n<p><em>Fran\u00e7ois-Xavier Carton, Jack H. Noble, Florian Le Lann, Bodil K. R. Munkvold, Ingerid Reinertsen, Matthieu Chabanas, \u201cMulticlass segmentation of brain intraoperative ultrasound images with limited data,\u201d Proceedings of the SPIE Conf. on Medical Imaging, 11598-19, 2021.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Lab Publications: Lab publications can be found at Professor Noble&#8217;s NCBI Biblography. Datasets from lab publications are being made available upon request for NIH-funded studies conducted by the BAGL lab.\u00a0 Please contact Professor Noble by email with such requests. You will be provided with a Box link to download the data spreadsheet. By downloading the&#8230;<\/p>\n","protected":false},"author":2760,"featured_media":0,"parent":0,"menu_order":2,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"tags":[],"class_list":["post-10","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/my.dev.vanderbilt.edu\/bagl\/wp-json\/wp\/v2\/pages\/10","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/my.dev.vanderbilt.edu\/bagl\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/my.dev.vanderbilt.edu\/bagl\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/my.dev.vanderbilt.edu\/bagl\/wp-json\/wp\/v2\/users\/2760"}],"replies":[{"embeddable":true,"href":"https:\/\/my.dev.vanderbilt.edu\/bagl\/wp-json\/wp\/v2\/comments?post=10"}],"version-history":[{"count":45,"href":"https:\/\/my.dev.vanderbilt.edu\/bagl\/wp-json\/wp\/v2\/pages\/10\/revisions"}],"predecessor-version":[{"id":226,"href":"https:\/\/my.dev.vanderbilt.edu\/bagl\/wp-json\/wp\/v2\/pages\/10\/revisions\/226"}],"wp:attachment":[{"href":"https:\/\/my.dev.vanderbilt.edu\/bagl\/wp-json\/wp
\/v2\/media?parent=10"}],"wp:term":[{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/my.dev.vanderbilt.edu\/bagl\/wp-json\/wp\/v2\/tags?post=10"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}