Final result of the IKOSA AI deep learning algorithm training for lipid droplets in TEM images

Case study: TEM Myocardium Assay

This case study describes the application of a myocardium tissue segmentation algorithm trained with IKOSA AI, KML Vision's AI-powered microscopy image analysis software.

For this purpose, we propose a deep learning-based method to assess the ultrastructure of myocardial tissue using images obtained from a transmission electron microscopy (TEM) device as input.

The proposed method uses a deep neural network to automatically detect and segment sub-cellular structures of myocardial tissue.

Deep learning methods for tissue segmentation offer many advantages over traditional analysis techniques. These state-of-the-art methods enable faster, more robust, and more scalable quantification, and have recently gained in popularity.

Being able to accurately interpret the morphological ultrastructure of cardiac tissue is crucial to understanding the onset of cardiovascular diseases. Cardiovascular disease is reportedly associated with changes in the subcellular properties of myocardial tissue (Krahmer et al. 2011; Griffiths 2012; Goldberg et al. 2017; Bonora et al. 2019).

Dysfunctions in the storage of lipid droplets, in particular, have been discussed in the scientific literature as a marker of reduced cardiac function. This process is often paired with reduced mitochondrial energy production.

For this reason, we are interested in developing a robust method for the automatic segmentation of lipid droplets and mitochondria in myocardial tissue sections of mice, while ignoring the remaining structures (e.g. sarcomeres, Z-discs, etc.).

Read on to get an impression of the few vital steps involved in training the algorithm and conducting the myocardium tissue segmentation analysis, and to see how little effort it took us.

In the course of this case study we describe the standard process of training our segmentation model using IKOSA AI. Further, we provide an overview and evaluation of the results of our model training. Next, we offer a discussion of the constraints of the method based on our findings. Finally, we offer suggestions for further improvement and validation of the algorithmic model.

Background information

Amount of data used (recommended minimum: 5-10 images)

We decided to put the software to the test by using a small set of cardiac tissue images as input. Our minimal training dataset consisted of only 4 images.

Training a segmentation model using IKOSA AI is possible, even if you only have access to a small set of input images.

For more information on dataset size requirements please refer to our FAQ section. 

Preparation before the algorithm training

To obtain the most accurate and robust model, we carefully prepared our dataset before conducting the actual training. When preparing the data, the images relevant for analysis have to be selected and configured based on the given experimental conditions.

High-resolution 2D images of mouse myocardial tissue, obtained with a transmission electron microscope (TEM), were used for our model development.

For the purposes of algorithm training and evaluation, we selected images from the total dataset that contained the morphological structures to be measured.

Since using a rather balanced dataset in terms of the distribution of morphological structures facilitates the development of a more robust model, we selected those images containing a similar total number of lipid droplets and mitochondria. This way we could more easily see how well the algorithm performs for each object category.

We annotated and labelled regions with lipid droplets and mitochondria in all 4 cardiac tissue images. A total of 106 lipid droplets and 126 mitochondria were manually annotated. This was done by delineating each region using the annotation feature available on the IKOSA Platform.

These manual annotations serve as the “ground truth” for our supervised algorithm training. The algorithm learns what lipid droplets and mitochondria look like from the annotated objects, without explicitly specified color or shape parameters.

Algorithm training time

The time to train deep learning algorithms with IKOSA AI varies depending on the complexity of your task and the specifics of your dataset. In the case of our model, it took us about 1 hour altogether to prepare the image dataset (40 mins) and run the training itself (20 mins).

To allow validation of the generalization performance after training, about 20% of the input images are held out and not used to build the model. With our dataset, training was thus executed on three images, while the fourth was used for validation.
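IKOSA AI performs this hold-out split automatically during training. Purely as an illustration of the logic (the filenames below are hypothetical placeholders, and this is not IKOSA AI code), a hold-out split could be sketched as:

```python
import random

def train_val_split(image_names, val_fraction=0.2, seed=0):
    """Hold out roughly val_fraction of the images for validation."""
    names = list(image_names)
    random.Random(seed).shuffle(names)  # deterministic shuffle for reproducibility
    n_val = max(1, round(len(names) * val_fraction))  # always keep at least one validation image
    return names[n_val:], names[:n_val]

# With 4 images and a 20% hold-out, 1 image goes to validation and 3 to training.
images = ["tem-myocardium-1.tif", "tem-myocardium-2.tif",
          "tem-myocardium-3.tif", "tem-myocardium-4.tif"]
train, val = train_val_split(images)
```

Note that with very small datasets the rounded hold-out fraction can deviate from the nominal 20%; here one image out of four corresponds to 25%.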

  • A description of how to interpret the results is provided in the FAQ.

Quantitative results

Based on our results, the segmentation method still has some limitations after the first training round, but it definitely looks promising. In the following, we evaluate our model’s performance through an in-depth discussion of the quantitative results.

The Dice coefficient for the lipid droplet category is 0.91. This metric ranges from 0 to 1, where 0 represents no overlap and 1 denotes full overlap between the ground truth and the predicted area.

The value yielded by our algorithm indicates a high degree of overlap between the “ground truth” used as the training target and the actual predictions of our segmentation model.

The segmentation algorithm has achieved a precision of 87.97% for the lipid droplets label category. This value suggests that roughly 88% of the predicted area in our image data overlaps with the “ground truth.” 

The recall value of approximately 94% suggests that our automated model has successfully predicted a relatively high percentage of the actual “ground truth.”   

The Dice coefficient for mitochondria is slightly lower at 0.87, but still quite promising. Our algorithm yielded a precision of roughly 86% for the mitochondria category.

Generally, the quantitative results of our myocardium segmentation analysis are favorable and can be improved even further by re-training the algorithm on additional input data.

Label - Lipid Droplets

Lipid droplets quantitative results

Label - Mitochondria

Mitochondria quantitative results
  • An explanation of the Dice coefficient, precision, and recall is provided in the Appendix.
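All three metrics reported above can be derived from the pixel-level overlap between the ground-truth and predicted masks. As a minimal illustrative sketch (toy masks, not the case-study data or IKOSA AI code):

```python
def segmentation_metrics(ground_truth, prediction):
    """Compute Dice coefficient, precision, and recall for two binary masks.

    Both masks are flat sequences of 0/1 pixel labels of equal length.
    """
    tp = sum(1 for g, p in zip(ground_truth, prediction) if g and p)      # overlap
    fp = sum(1 for g, p in zip(ground_truth, prediction) if not g and p)  # predicted, not annotated
    fn = sum(1 for g, p in zip(ground_truth, prediction) if g and not p)  # annotated, missed
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall

# Toy 6-pixel example: 4 annotated pixels, 5 predicted pixels, 4 overlapping
dice, precision, recall = segmentation_metrics([1, 1, 1, 1, 0, 0],
                                               [1, 1, 1, 1, 1, 0])
# dice ≈ 0.889, precision = 0.8, recall = 1.0
```

The toy example mirrors the pattern seen in our results: a recall of 1.0 with a lower precision means everything annotated was found, plus some extra predicted area.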

Qualitative results

Description of visualizations for qualitative results of all validation images

Further, we discuss the outcomes with regard to their qualitative dimensions. The output of our myocardium tissue segmentation analysis suggests that the trained algorithm performed rather well in the automatic identification of lipid droplets.

Based on the qualitative results, we can see that the algorithm segmented two false positives (shown in red, bottom right) which had not been included during manual annotation.

On closer inspection, it appears that the algorithm actually did a better job than the human annotator to some extent, as these structures were segmented as lipid droplets with higher precision.

In general, the quantitative performance metrics of the segmentation algorithm could also be slightly improved by making the annotations more precise before starting the training. Some marginal areas surrounding the lipid droplets in the myocardial tissue images were classified as false positive or false negative because the annotations of some regions were rather loose.

However, we can clearly see that the segmentation model successfully learned what a lipid droplet looks like and is able to propose an even more precise segmentation of the objects.

Label - Lipid Droplets

Image: tem-myocardium-3.tif - Lipid Droplets

Visualisation of input image and prediction of deep learning algorithm on lipid droplets
Visualisation of manual annotations on the input image and prediction of deep learning algorithm on lipid droplets

We have made similar observations when looking at our second label, the mitochondria. The algorithm also performed well for the automatic segmentation of these structures.

The three large false positive regions have most likely been missed during annotation. However, they have been included in the predictions by our trained algorithm.

As already discussed for the lipid droplets, the overall performance metrics of our myocardium tissue segmentation model could be improved by outlining the annotations more thoroughly and giving the contours a more precise shape.  

Label - Mitochondria

Image: tem-myocardium-3.tif - Mitochondria

Visualisation of input image and prediction of deep learning algorithm on mitochondria
Visualisation of manual annotations on the input image and prediction of deep learning algorithm on mitochondria

Evaluation of the TEM Algorithm Training

After checking the training report, it becomes clear that we are on the right track. The proposed automated method for the segmentation of cardiac tissue yields a high level of accuracy for both label categories examined. 

The algorithm is also significantly faster than any manual outlining process. While annotating a single object (without morphometry) takes about 4 seconds on average, the algorithm segments and measures all objects from both label categories in an entire image within the same amount of time.

In particular, considering that a typical image from our labeled dataset contains 40 lipid droplets and 41 mitochondria, the achieved speed-up compared to manual work is about 80x. 
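This speed-up estimate follows directly from the figures above; as a quick back-of-the-envelope check:

```python
objects_per_image = 40 + 41             # lipid droplets + mitochondria in a typical image
manual_seconds = objects_per_image * 4  # ~4 s to outline each object by hand
auto_seconds = 4                        # the algorithm handles the whole image in ~4 s
speedup = manual_seconds / auto_seconds
# 324 s of manual work vs. 4 s automated, i.e. ~81x, roughly the stated 80x
```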

This means we are now ready to assess the performance of the myocardium tissue segmentation algorithm using a larger dataset of 20 to 30 images that were not part of the training.

We want to find out whether we have missed crucial areas or images that should also be added to the training set. Since these may contain representative objects that are currently predicted with insufficient accuracy, adding them will make our model more robust.

Train your own deep learning algorithms for microscopy image segmentation with IKOSA AI  

In this case study you have learnt the basic steps of creating an automatic algorithm for myocardial tissue segmentation using IKOSA AI. Our deep learning software solution allows you to train your own tissue segmentation algorithms and run advanced analytical methods on your image datasets without any coding skills.  

Using IKOSA AI you can develop algorithms suited for many different types of segmentation analyses: from single-cell segmentation to more complex myocardial and neural tissue segmentation methods. Take a look at our Case Study collection to learn about all the exciting applications of IKOSA AI.

References

Bonora, Massimo, et al. "Targeting mitochondria for cardiovascular disorders: therapeutic potential and obstacles." Nature Reviews Cardiology 16.1 (2019): 33-55. 

Goldberg, Ira J., et al. "Deciphering the role of lipid droplets in cardiovascular disease: A report from the 2017 National Heart, Lung, and Blood Institute Workshop." Circulation 138.3 (2017): 305-315.

Griffiths, Elinor J. "Mitochondria and heart disease." Advances in Mitochondrial Medicine (2012): 249-267.

Krahmer, Natalie, et al. "Phosphatidylcholine synthesis for lipid droplet expansion is mediated by localized activation of CTP:phosphocholine cytidylyltransferase." Cell Metabolism 14.4 (2011): 504-515.

Appendix

An explanation of the Dice coefficient, precision and recall

More case studies

Want to learn more about IKOSA AI?

Book a demo

Simply contact us via email ikosa@kmlvision.com or phone +43 680 156 7596 to discuss your needs.