
Organelle-specific phase contrast microscopy enables gentle monitoring and analysis of mitochondrial network dynamics

Open Access

Abstract

Mitochondria are delicate organelles that play a key role in cell fate. Current research methods rely on fluorescence labeling that introduces stress due to photobleaching and phototoxicity. Here we propose a new, gentle method to study mitochondrial dynamics, where organelle-specific three-dimensional information is obtained in a label-free manner at high resolution, high specificity, and without the detrimental effects associated with staining. A mitochondria cleavage experiment demonstrates that not only do the label-free mitochondria-specific images have the required resolution and precision, but they also fairly include all cells and mitochondria in downstream morphological analysis, whereas fluorescence images omit dim cells and mitochondria. The robustness of the method was tested on samples of different cell lines and on data collected from multiple systems. Thus, we have demonstrated that our method is an attractive alternative for studying mitochondrial dynamics, connecting behavior and function in a simpler and more robust way than traditional fluorescence imaging.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Mitochondria are not only the power stations of a cell but also deal with oxidative stress and DNA damage. They connect or separate from each other through constant fusion and fission. In living cells, the network structures of mitochondria are perpetually evolving in order to exchange metabolic materials and engage in complex signaling [1]. Because of their critical role in cell fate, mitochondria have been a key organelle target of study by biologists and medical experts. For example, research has found that neurodegenerative diseases such as Alzheimer’s disease and Parkinson’s syndrome are linked to both structural and functional changes in mitochondria, where these changes may be critical to producing the neuropathologies associated with these diseases [2]. High levels of mitochondrial fission activity are essential to the proliferation and invasion of cancer cells [3,4], as well as the self-renewal and resistance to differentiation of certain stem cells [5,6]. As a regulator of the cell’s stress response, mitochondria are also sensitive to oxidation and other local changes to the cell [7] that can be brought about by experimental observation, and thus studying their dynamics requires gentle methods. The current primary method to visualize those dynamics is fluorescence microscopy. Compared to electron microscopy, which requires a fixed sample, fluorescence-based methods can be used to image live cells and have yielded many insights about the functional roles of mitochondria in development and disease [8]. However, fluorescence methods typically require laser excitation when high imaging speed and contrast are desired [9,10]. Although less damaging than electron-based methods, laser irradiation of living cells can still cause damage, since the excited fluorophores produce cytotoxic substances such as reactive oxygen species. Meanwhile, fluorophores photobleach, limiting the observation time and making it difficult to observe mitochondrial dynamics for long periods without interference. To make things worse, mitochondria and their interconnected network are distributed in three dimensions. Thus, axial scanning is required to resolve these structures, which in turn causes more damage to the cell, and the mitochondria could behave differently due to that damage.

Label-free microscopy, such as phase-based techniques, offers an attractive alternative for studying the long-term behavior of mitochondrial networks. Traditional phase contrast microscopy does not have the resolution and image contrast to visualize mitochondria. Fortunately, recent developments in phase contrast microscopy methods and hardware, including our own work [11–14], enable one to see individual mitochondria clearly in live cells without labeling. Among these, diffraction phase tomography (DPT) stands out as providing exquisite 3D resolution of the cell, but at the price of a relatively slow time resolution. Our recently developed Ultra-Oblique Quantitative Phase Microscopy (UO-QPM) system provides high spatial resolution with a concomitantly high temporal resolution, >100-fold faster than DPT methods. Further, the raw data are directly usable, which improves the ability to position and alter cells using the high resolution image as real-time feedback; this is challenging for DPT methods, which require lengthy numerical reconstructions to obtain meaningful images. However, a common problem among all of these phase microscopy systems is the lack of specificity of the label-free images.
In addition to mitochondria, other membrane-bound organelles and structures such as the nucleus, lipid droplets, and lipid membranes are also present in the image. The phase image is thus not specific to individual organelles, making post-acquisition data analysis substantially more complex than in the fluorescence case. This poses a serious problem for analyzing the dynamics of mitochondrial networks due to the heterogeneous structures of the mitochondria and the complex background. In the past few years, deep learning techniques have seen great success in tackling such problems and are increasingly being used in microscopic imaging applications [15] such as super-resolution [16,17], holographic image reconstruction [18,19], and extended depth of field [19,20]. Researchers have now demonstrated the possibility of generating fluorescence-like images from unlabeled images, such as Google’s "in silico" labeling [21] and the Allen Institute’s results [22]. However, the results are not sufficient for analyzing mitochondrial morphology and distribution, largely due to the poor image quality in the raw data. Even in more recent works where the image quality is higher, the focus is on cell classification [23] or the dry mass movements inside a cell [24]. Accurately analyzing detailed mitochondrial morphology and its evolution over time based on label-free images remains to be demonstrated.

Here, we propose a new paradigm to study the dynamics of mitochondria: first obtain raw data with a high resolution phase microscope, then use a trained neural network to obtain the mitochondria-specific image, and finally send that image to existing mitochondria analysis software developed for fluorescence images for morphology analysis. In this way, quantitative information about the mitochondrial networks can be extracted from unlabeled cells with minimal interference, using prior-validated analysis pipelines, and with the ability to image long-term without photobleaching. In our work, we have used our UO-QPM system mentioned above to acquire the phase images [13]. The spatial resolution and imaging speed are about 270 nm and 250 fps, respectively, which is enough to visualize the morphology and motion of many organelles such as mitochondria, the ER, and vesicles. We hypothesize that with our high spatial resolution and fast phase images, deep learning should be able to recognize individual mitochondria and map out the mitochondrial network structures easily. Further, since the image quality of the raw data is high, the CNN relies less on cell-type-dependent information, such as the spatial distribution of mitochondria within a cell, to perform its task, and thus its performance is not tied to the specific cell type on which the training is performed. To test this hypothesis, we constructed a CNN trained using images collected only from COS-7 cells, yet used that network to infer mitochondrial morphology in diverse cell lines. Here, we test it on data from two additional cell types (the carcinoma cell lines PLC/PRF/5 and Hep3B), and also from a second microscope constructed by a different person (such that alignment, detector position, and other experimental factors were all changed, additionally challenging the network’s robustness). In all cases, the neural network successfully predicts the mitochondria from the phase images with high accuracy. In order to validate the utility of the mitochondria-specific images obtained this way, we performed morphology analysis using the validated analysis pipeline MitoGraph [25,26] and compared the results with those from the fluorescence-based images. We found that the software segmentation of each individual mitochondrion is consistent between the two imaging modes, and consequently the morphology parameters such as length, nodes, and connectivity are also similar. This validates that phase-based mitochondria-specific images can be used for downstream morphology analysis. Interestingly, we also found that when the fluorescence signal is low or when there is heterogeneous uptake of the labeling agent, the dim cells or dim mitochondria are left out by the analysis software when analyzing the fluorescence images, while they are included in the phase-based image analysis. Thus, we believe our new paradigm is more suitable for studying the dynamics of mitochondria compared to fluorescence-based methods, as experimental errors such as heterogeneous or inconsistent staining are completely avoided.

2. Results

2.1 OS-PCM method

The organelle-specific phase contrast microscopy (OS-PCM) methodology is outlined in Fig. 1. Raw images of unlabeled living cells are acquired by a high contrast phase microscope where individual organelles are visible to the naked eye. Then, a trained CNN is used to infer the three-dimensional structure of individual mitochondria and the mitochondrial network. With the mitochondria-specific images, morphology analysis using previously validated software such as MitoGraph can be performed to obtain parameters such as the connectivity, length, and nodes of the mitochondrial network. Since the cell is unlabeled and imaged with a low photon flux from LEDs in a wavelength regime without high absorption, dynamics of the mitochondrial network can be studied over a long time period without inducing photobleaching. The photon flux from the LED source is 100-fold lower than the laser flux used for fluorescence excitation (0.8 nW/$\mathrm {\mu}$m$^2$ vs. 80 nW/$\mathrm {\mu}$m$^2$, respectively). More importantly, mitochondria studied this way do not experience the oxidative stress caused by fluorescence excitation, and their behavior is more likely to match that of a theoretical unobserved cell [27]. Thus OS-PCM is a gentle method to study mitochondria compared to fluorescence.


Fig. 1. Process for three-dimensional, bleach-free imaging and analysis of mitochondrial network dynamics through OS-PCM. Three-dimensional raw data are collected with a home-built two-channel microscope through axial scanning. In the phase channel, ultra-oblique illumination with an LED ring is used to obtain high resolution. Four controlled phase shifts are applied to the unscattered light to obtain four frames for phase-shifting-based reconstruction of the phase image. In the fluorescence channel, a laser is used for illumination; the 3D image can be acquired simultaneously with the phase channel and deconvolved to boost the contrast. For training the CNN, paired phase and fluorescence images, where organelles such as mitochondria are labeled, are used as inputs. After training, the network can infer mitochondria-specific information from the nonspecific phase image with sufficient resolution and precision for accurate downstream morphology analysis of the mitochondrial network.


In order to validate the performance of OS-PCM, we first tested the precision and robustness of the mitochondria inference from the unlabeled images, and then tested the utility of the mitochondria-specific images in downstream morphology analysis.

2.2 3D prediction of mitochondria from unlabeled cell images

A U-Net convolutional neural network to perform 3D prediction of mitochondria was trained on ~200 pairs of fluorescence and phase images from COS-7 samples. Training data were selected to ensure high SNR (~8 dB) and highly uniform staining within the cells. Below, we highlight our method’s performance when MitoTracker staining is not uniform, due to cell health or FCCP treatment; in the training data, however, cells were selected where the mitochondria are well labeled. Prior to the training, the fluorescence images were also normalized to minimize the influence of the illumination. Test data comprised images from 43 COS-7 cells, 41 PLC cells and 28 Hep3B cells. As the annotated data were acquired by a wide field fluorescence microscope, their contrast is lower than that of a confocal microscope and was boosted by deconvolution (more details can be found in Methods) prior to being submitted to the training network. The output is a fluorescence-like image whose values signal the presence of mitochondria. In this work, morphological features such as size and connectivity are of interest; the exact intensity value is not important as long as it differentiates the mitochondria from the background. Figure 2(a) shows the paired original raw data and the fluorescence image in the top row, and 2D projections of the 3D data for the deconvolved fluorescence and the digital labeling in the lower row, where the image is color-coded based on the z-position of the maximum intensity. Zoomed-in ROIs from the 3D rendering (marked by boxes in Fig. 2(a)) are shown in 2(b) and 2(c), where individual mitochondria and the network structure can be seen clearly. One can see that the prediction results are similar to the fluorescence result. Careful examination reveals that the deconvolved fluorescence result has higher resolution and contrast compared to the phase prediction, due to the slightly higher resolution of the fluorescence channel (~210 nm vs. 270 nm), and due to the fact that the dye is localized to the mitochondrial membrane and cristae. Thus, the fluorescent mitochondria tend to have more internal structure visible. However, this slight difference in mitochondrial shape does not affect morphology analysis such as the length and connectivity of the mitochondrial network, as shown below. Further, some of the intensity fluctuations are spurious, caused by unbound dye and heterogeneous uptake of the dye, with consequently uneven signal levels (for example, see the structures indicated by the arrow in the green box in Fig. 2(b) and 2(c)). Thus, when the fluorescence signal from the mitochondria is low or uneven, individual mitochondria appear fragmented and the morphology analysis will find smaller edge lengths and reduced connectivity. However, phase-based images do not suffer from the issue of uneven or nonspecific staining, and thus more reliable morphology analysis can be performed, as we show below. More examples from the COS-7 cell line data, where the sizes and shapes of mitochondria are quite heterogeneous, can be found in Fig. S1.
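The text does not specify how the fluorescence targets were normalized beyond removing illumination differences. One plausible choice, shown below purely as an illustrative assumption (the function name, percentile cut-offs, and scaling are ours, not the authors'), is percentile-based rescaling of each stack.

```python
import numpy as np

def normalize_stack(stack, low=0.1, high=99.9):
    """Rescale a fluorescence z-stack to [0, 1] using percentile clipping.

    The exact normalization used in the paper is not stated; this
    percentile-based rescaling is one plausible way to suppress
    illumination differences between fields of view.
    """
    lo, hi = np.percentile(stack, [low, high])
    stack = np.clip(stack.astype(np.float64), lo, hi)
    return (stack - lo) / (hi - lo + 1e-12)
```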


Fig. 2. Three-dimensional mitochondrial prediction results for COS-7 cells. (a) Top row: paired raw non-specific phase images and the fluorescence images (one z-slice shown); bottom row: depth-encoded projections of the deconvolved fluorescence and 3D OS-PCM z-stacks. (b) Zoomed-in views of the four boxed regions shown in (a). (c) 3D renderings of two examples from (b), where one can see that individual mitochondrial structures appear smooth and complete in the OS-PCM result while the fluorescence results are more fragmented. Spurious structures due to signals from other z-slices or unbound dye also appear in the fluorescence-based result (identified by the green arrow in (b) and (c)) while absent in the OS-PCM result. Scale bars: (a), 20 $\mathrm {\mu}$m; (c), 1 $\mathrm {\mu}$m.


Next, we tested the robustness of the neural network with images from cell lines other than COS-7, on which the network was trained. To further probe the robustness, we also tested the network on data from an independent microscope of the same design, such that the precise alignment, pixel-level correspondence between the fluorescence and phase channels, and other factors differ between the training and test data. The collected raw phase images from unlabeled PLC and Hep3B cells were processed by the neural network, with the prediction results shown in Fig. 3(a) and 3(b) for PLC and Hep3B, respectively. For comparison, we also show the paired fluorescence images. Examples of the 3D rendering of the phase and fluorescence data are shown in Fig. 3(c). Once again, one can see that overall the mitochondrial network structures are similar between the labeled and label-free modalities, despite the fact that the mitochondrial shape and structure are quite different from those of the COS-7 cell line. This indicates that our network is indeed correctly inferring the characteristics of the mitochondria from the training data, rather than making guesses based on, for example, the location of a given structure within the cell. For individual mitochondria, the structure appears to be more continuous in the inferred results than in the fluorescence case, which would be beneficial in computing the length of individual mitochondria and their connectivity. Taken together, Fig. 3 thus confirms the robustness of the network and demonstrates that it has learned to recognize mitochondria even in cases where the spatial distribution, mean sizes of mitochondria, and relative abundance of linear versus round mitochondria differ substantially from the training data. Small discrepancies can be seen: for example, comparing the cyan boxes in Fig. 3(a) and 3(b), the inferred result is not as continuous as might be expected from visual inspection of the raw phase image. However, as we will show below, these errors do not impact the overall morphological parameters calculated from the full cell data.


Fig. 3. Three-dimensional mitochondrial prediction results for the PLC and Hep3B cell lines with a CNN trained on the COS-7 cell line. (a) Paired, depth-encoded projections of the fluorescence and OS-PCM z-stacks. (b) Zoomed-in views of the four boxed regions in (a). (c) 3D renderings of two examples from (b). Scale bars: (a), 20 $\mathrm {\mu}$m; (c), 1 $\mathrm {\mu}$m.


To quantitatively analyze the prediction results, we computed several similarity metrics, such as the mean square error, the structural similarity metric (SSIM) [28] and multi-scale SSIM (MS-SSIM) [29], over test data collected from 43 COS-7 cells, 41 Hep3B cells and 28 PLC cells. The results are shown in Fig. 4: the mean values of SSIM are 0.8706, 0.8652 and 0.8192, and the mean values of MS-SSIM are 0.9357, 0.9331 and 0.87, respectively, for those three cell lines, which largely recapitulates the visual results presented in Fig. 2 and Fig. 3. Note that the images are aligned prior to the similarity computation; more details about the alignment can be found in Section 3.2. We also tested data collected from a similar system constructed by a different person using different detectors, optics and alignment methods. The SSIM and MS-SSIM results are shown in Fig. S2. One can see that the performance on data collected from the two systems is similar. Thus, together with the image comparisons shown in Fig. 2 and Fig. 3, we conclude that our CNN performance is not tied to a specific cell line or precise hardware alignment. The underlying reason is that mitochondria in all these cells share similar morphologies; what differs is their distribution along the spectra of size and shape and their spatial distribution within the cell. When supplied with high resolution images where mitochondria are clearly present in the raw image, the neural network has learned the features of the mitochondria themselves, not their spatial position or other irrelevant but correlated information that would change from cell line to cell line. This underscores the value of utilizing high quality, high resolution input data such as that obtained by the UO-QPM microscope.


Fig. 4. Quantitative evaluations of the prediction for the three cell lines, shown as violin plots. (a) Mean square error (MSE). (b) Structural similarity metric (SSIM). (c) Multi-scale structural similarity metric (MS-SSIM).


2.3 Morphological analysis of the mitochondria network

In order to evaluate the utility of the mitochondria-specific images, we performed morphology analysis to find the length of individual mitochondria and the connectivity of the network using the previously validated image analysis software MitoGraph [25,26], which was designed to take fluorescence-labeled images as input. Other software, such as MiNA [30], Mytoe [31], Momito [32], or Mitochondria Analyzer [33], could also be used [34]. We emphasize that in the following analysis the digitally stained phase images are submitted to MitoGraph without any changes to the MitoGraph software, demonstrating that our organelle-specific contrast enables the direct use of prior-validated processing pipelines. A quantitative evaluation of the results was performed by comparing the morphology parameters obtained from the phase-based method and the fluorescence-based method.
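Because the digitally stained stacks are submitted to MitoGraph unchanged, the analysis step reduces to exporting each predicted stack as a TIFF and pointing MitoGraph at that folder. The wrapper below is only a sketch: the flag names (-xy, -z, -path) and the binary name follow our reading of MitoGraph's public command-line documentation and should be checked against the version cited in [25,26]; the function, arguments, and file layout are hypothetical, not the authors' pipeline.

```python
import subprocess
from pathlib import Path

import tifffile  # assumes predictions are held as NumPy z-stacks

def run_mitograph(stacks, out_dir, pixel_xy_um, step_z_um=0.3,
                  mitograph_bin="MitoGraph"):
    """Export predicted mitochondria stacks as TIFFs and run MitoGraph.

    `stacks` is a dict of {name: (Z, Y, X) array}. The 0.3 um z-step
    matches the acquisition described in the paper; pixel size must be
    supplied by the user. Flag names are assumptions based on MitoGraph's
    public documentation.
    """
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for name, stack in stacks.items():
        tifffile.imwrite(out_dir / f"{name}.tif", stack)
    subprocess.run(
        [mitograph_bin, "-xy", str(pixel_xy_um), "-z", str(step_z_um),
         "-path", str(out_dir)],
        check=True,
    )
```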

As is well known, morphology analysis depends largely on the SNR of the image. For fluorescence images, depending on the heterogeneous uptake of the dye, different cells or mitochondria may exhibit different brightness or SNR; high SNR makes the subsequent morphology analysis more reliable. Meanwhile, for the phase-based image, the signal is intrinsic, and the SNR or brightness does not depend on uptake of a labeling agent. In conducting the comparison between the fluorescence and phase methods, we further explore how the different SNR behaviors of these two methods affect the morphology analysis. First, we choose an image pair where the fluorescence signal is high. Figure 5 shows the morphology results for a cell exhibiting mitochondria with heterogeneous morphology. Figure 5(a) shows the segmented results of the individual mitochondria from the fluorescence and phase-based data. Zoom-ins of a few individual mitochondria are shown in Fig. 5(b). Visually, we can see that the results from both datasets are consistent with each other. The quantitative morphological parameters output by MitoGraph, such as the length of the mitochondria and their connectivity, are shown in Fig. 5(c). One can see that the parameters computed from these two modalities are similar to each other, validating the performance of the phase-based paradigm. Note that deep learning is used here to recognize mitochondria for morphology analysis, where the size, nodes and connectivity are of importance. With deep learning, the resolution may also be improved to reveal more details inside the mitochondria [17]; however, this is beyond the scope of this study.

To evaluate how the fluorescence staining can affect the morphology analysis, we next show two cells within the same field of view that have vastly different fluorescence signal strength due to heterogeneous uptake of the labeling dye, shown in Fig. 6. One z-slice of the mitochondria images from both the fluorescence and phase-based methods is shown in Fig. 6(a). Note that the signal in the phase-based method is more consistent between these two cells. Visual inspection of the images confirms that the mitochondria are still clearly visible with a continuous body structure in the phase image while appearing granular in the fluorescence image. Segmented results for those two cells are shown in Fig. 6(b). Clearly, one can see that in the fluorescence case, the noise in the low-signal cell image leads to severe, spurious fragmentation of the mitochondria in the segmentation, while the phase-based image yields the expected results based on a visual analysis of the data. The computed morphology parameters for the fluorescence and phase-based data are shown in Fig. 6(c). For the cell that appears bright in the fluorescence mode, the morphology parameters computed from OS-PCM are similar to those from the fluorescence-based method. However, when the cell appears dim in the fluorescence mode, the morphology results from OS-PCM and fluorescence are quite different. As the two cells are similar in appearance and are located next to each other on the same plate, one should expect their morphology parameters to be similar. As shown in Fig. 6(c), our OS-PCM-based analysis confirms this. However, due to the heterogeneous labeling uptake and resultant poor segmentation in the fluorescence mode, the extracted morphological parameters are quite different when using fluorescence images.
This demonstrates that the phase-based method yields more reliable and consistent results. Even in the "bright" cell, there are some dim mitochondria, highlighting heterogeneous uptake even within a single cell. Thus, in Fig. 6(d) we show a few examples of this heterogeneous uptake from both the bright and dim cells. Intensity profiles of these ROIs for both the fluorescence and OS-PCM results are shown in Fig. 6(e), allowing a more quantitative visualization of the heterogeneous signal strength between the two modalities. During the analysis of the fluorescence image, dim structures are "missed" in the segmentation and analysis process, yet are recognized with phase-based segmentation. Further, the low signals also cause spurious background structures due to unbound dye to be included in the analysis (as shown in Fig. 6(d) and indicated by a green arrow). Unbound MitoTracker and poor labeling of individual mitochondria arise from variations in the membrane potential of each individual mitochondrion, which is related to cell and mitochondrial health. With careful culture protocols and when excessive imaging times are avoided, the mitochondria of most cells were labeled largely uniformly and brightly. However, poor staining does occur, and this becomes obvious when the paired phase image is available. Note that careful visual examination reveals that the mitochondria identified only by the phase-based method (i.e., with poor staining, shown in Fig. 6(d)) retain typical morphologies and thus can be successfully recognized by the neural network. Thus, using label-free methods, a more complete and unbiased picture of the sample can be obtained, without artifacts introduced by labeling efficiency or other issues related to the staining process. Here, we compared the morphology analysis across different cells; a comparison between the two methods on the same cell can be found in Fig. S3.


Fig. 5. Morphological analysis results for the mitochondrial network. (a) 3D mitochondrial surfaces computed by MitoGraph from the deconvolved fluorescence (fluo-decon) and OS-PCM data. (b) Zoomed-in views of the boxed regions shown in (a). (c) Main indicators of the MitoGraph morphological analysis show that the results of the fluorescence-based method and OS-PCM are similar.



Fig. 6. Morphological analysis for two cells within the same field of view, where one cell appears "bright" and the other "dim" in the fluorescence mode. (a) Paired fluorescence and OS-PCM images. (b) 3D mitochondrial surfaces analyzed by MitoGraph. In the fluorescence-based method, the mito-surface appears more fragmented in the dim cell than in the bright cell, while for the OS-PCM method, the results are consistent between the two cells. (c) Extracted morphology parameters for these two cells. PHI represents the size of the largest connected component relative to the total mitochondrial size. Avg Edge is calculated by dividing the total mitochondrial length by the total number of edges. Total Edge represents the total number of edges, Total Nodes the total number of nodes, and Total Components the total number of mitochondria. More information about these parameters can be found in [35]. (d) Zoomed-in views of a few representative mitochondria from both the bright and dim cells. (e) Intensity line profiles of the mitochondria studied in (d), each normalized to its maximum along the whole line.


Thus, we have demonstrated that phase-based images not only can be used to analyze the morphology of mitochondria but also ensure unbiased analysis, while the fluorescence-based method biases the results towards the cells and mitochondria with high dye uptake, not to mention other detrimental factors such as photobleaching and toxicity inherent in the fluorescence imaging process. Note that the mitochondria picked out only by the phase-based method still maintain typical mitochondrial morphology and thus can be recognized by the neural network.

2.4 Dynamic study of mitochondria cleavage with FCCP

Aside from experimental simplicity, a key advantage of phase-based microscopy is that the signal and image contrast are maintained over time, since the intrinsic signal never bleaches. This means that we can study the evolution of the mitochondrial network over long time scales and at high time resolution. As one example, we study the mitochondrial cleavage process by adding FCCP to the culture medium. FCCP is a commonly used mitochondrial oxidative phosphorylation uncoupling agent that leads to mitochondrial cleavage [36], where the percentage of round and small mitochondria in the morphology distribution increases. A total of 120 time points were collected over a span of 40 minutes with our phase microscope and then digitally stained. Figure 7(a-b) shows the imaging results and the downstream morphological analysis. One can see from the images that mitochondria become smaller and rounder, clearly a result of the cleavage. From the morphology analysis one can see that over time the length of mitochondria tends to become shorter, and the connectivity score goes down with time following the FCCP-induced cleavage. This experiment validates that the phase-based mitochondria-specific images are sufficient to study the morphology dynamics of mitochondria.


Fig. 7. Morphological analysis results for the mitochondria cleavage process. (a) Two examples of the OS-PCM images from the mitochondria cleavage experiment (see Visualization 1). One z-slice is shown here. (b) Time lapse of the mitochondrial network parameters, including the total nodes, total edges, average edge length and connectivity score, all of which show a downward trend during the experiment. (c) Two cells in the same field of view (see Visualization 2). (d) Time lapse data of the mitochondrial network parameters of these two cells, showing slightly different time courses. (e) An example visual time course of a fragmenting mitochondrion (see Visualization 3), from the green box in (c). Scale bars in (a), (c) and (e) represent 10, 20, and 5 $\mathrm {\mu}$m, respectively.


Due to the heterogeneity of cells, one may also expect that different cells undergo different time courses when confronted with the same stimulus. Figure 7(c) shows multiple cells within the same field of view. Two cells marked by dashed boxes are analyzed, and the average edge length versus time results are shown in Fig. 7(d). One can see that even though both cells exhibit the same cleavage trend, their speed and time course are not exactly aligned. One cell shows some resistance at the beginning, where the average length goes up, but eventually succumbs to the FCCP. The different reaction to the FCCP stimulus might be due to a different state within the cell cycle or other factors. Nevertheless, this observation underscores the importance of the dynamic study of mitochondrial networks, and the ability of label-free methods to provide high quality temporal data without experimental complexity or experiment-induced bias.

For the detailed morphology change during the cleavage process, time lapse images of one mitochondrion, shown in the green box in Fig. 7(c), are presented in Fig. 7(e). The top row shows the nonspecific phase images and the bottom row shows the mitochondria-specific images obtained from OS-PCM. Here one can see that cleavage happens through a mitochondrion first wrapping itself into a donut shape with a tail and then breaking at the tail. Other morphology changes, such as the shrinkage of an individual mitochondrion, can be found in Fig. S4. Videos of the mitochondria cleavage with FCCP can be found in Visualization 1, Visualization 2, Visualization 3 and Visualization 4. Since it is difficult to demonstrate both the three-dimensional structure of the mitochondrial network and its temporal dynamics simultaneously, we have performed a maximum intensity projection along the z axis for each time point in the video.
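Time courses such as those in Fig. 7(b) and 7(d) amount to collecting the per-time-point morphology parameters into a table and plotting them. The snippet below illustrates this bookkeeping under the assumption that the parameters have already been exported to a CSV with hypothetical column names; it is not the authors' analysis script.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical table: one row per time point with MitoGraph-derived
# parameters (column names are placeholders, not MitoGraph's output format).
df = pd.read_csv("mitograph_timelapse.csv")  # columns: time_min, total_nodes,
                                             # total_edges, avg_edge_um, phi

fig, axes = plt.subplots(2, 2, figsize=(8, 6), sharex=True)
for ax, col, label in zip(
    axes.ravel(),
    ["total_nodes", "total_edges", "avg_edge_um", "phi"],
    ["Total nodes", "Total edges", "Average edge length (um)", "Connectivity (PHI)"],
):
    ax.plot(df["time_min"], df[col])
    ax.set_ylabel(label)
    ax.set_xlabel("Time after FCCP addition (min)")
fig.tight_layout()
plt.show()
```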

3. Methods and materials

3.1 Sample preparation and imaging

3.1.1 Cell handling and staining process

COS-7, PLC and Hep3B cells were purchased from the Cell Bank of the Committee on Typical Culture Collection, Chinese Academy of Sciences. COS-7 cells were grown in DMEM medium (GIBCO) supplemented with 10% fetal bovine serum (GIBCO) and 1% penicillin/streptomycin at 37°C and 5% $\rm {CO_2}$ until approximately 60 to 80% confluence was reached. PLC and Hep3B cells were grown in MCMM medium (GIBCO) with 10% FBS (GIBCO) and 1% penicillin/streptomycin added, at 37°C and 5% $\rm {CO_2}$, until approximately 60 to 80% confluence was reached. For QPM and fluorescence data collection experiments, cells were seeded on a 50 mm glass bottom dish (WPI FluoroDish) and incubated in a cell culture incubator for more than 24 hours before staining. For labeling of mitochondria, cells were incubated with 200 nM MitoTracker Red CMXRos (Thermo Fisher Scientific, M7512) in high glucose DMEM for 20 min, then washed and covered with a Hanks balanced salt solution containing Ca and Mg ions but no phenol red (HBSS, Thermo Fisher Scientific) for imaging. For labeling via transfected genes, CellLight Mitochondria-RFP (Thermo Fisher Scientific) was used according to the manufacturer’s protocol, incubated overnight in a cell culture incubator maintained at 37°C and 5% $\rm {CO_2}$, and then imaged.

For the mitochondria cleavage experiment, 2 mL of 5 $\mathrm {\mu}$M FCCP (MedChemExpress, HY-100410) was added to the cell culture dish, with images being collected immediately after addition.

3.1.2 High resolution multimodal microscopy

Quantitative phase images of the samples were acquired using a home-built microscope based on the spatial light interference method. More details can be found in Reference [13]. Briefly, an array of light-emitting diodes (LEDs) in a ring shape was used to illuminate the unlabeled cells, with some portion of the light being scattered by the cells, and the rest passing undisturbed through the sample. After passing through the sample, both scattered and unscattered light are collected by an objective (Nikon Plan Apo 60X / NA 1.4). The phase of the unscattered light was modulated in the pupil plane by a spatial light modulator (Meadowlark Optics, ODPDM512-0532-P8) before entering the camera. The amplitude of the unscattered light was reduced using a ring-shaped mask fabricated from an OD1 neutral density filter such that its intensity was more similar to the intensity of the scattered light, which improves the contrast of the final image. A fluorescence channel is added to the phase microscope through a dichroic mirror (Semrock, FF560). The excitation source was a 577 nm laser (Changchun New Industries, MSL-F-577).

The magnifications of both channels are set at 120x and the fields of view are identical. All z-stacks consist of 16 z-slices with an interval of 300 nm. While our theoretical z-resolution is ~500 nm, and thus would require 250 nm steps for Nyquist sampling, when imaging mitochondria, whose tubular width is about 1 micron, a z step size of 300 nm balances resolution and speed and is furthermore the step size recommended by the MitoGraph analysis software. All imaging hardware was automatically controlled by a LabVIEW (National Instruments, Austin, TX, USA) control program.

3.2 Data processing and image alignment

For the acquired raw fluorescence images, we performed 3D deconvolution using the open source Fiji [37] plug-in DeconvolutionLab2 [38] from the Biomedical Imaging Group (BIG) at EPFL. We used the Richardson-Lucy Total Variation algorithm with 200 iterations and the regularization parameter $\lambda$ set to 1 $\times$ 10$^{-12}$. The PSF was calculated using a theoretical Born & Wolf model [39,40]. To increase the processing speed, the JTransforms open source multi-threaded FFT library was used to accelerate the FFT operation.
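The deconvolution itself was run in DeconvolutionLab2 (Java/Fiji). For readers who prefer a scriptable starting point, scikit-image provides a plain Richardson-Lucy routine; the sketch below uses that simpler variant, without the total-variation regularization employed here, and assumes a precomputed 3D PSF (e.g., a Born & Wolf model generated with a separate tool) is supplied.

```python
import numpy as np
from skimage import restoration

def deconvolve_stack(stack, psf, num_iter=200):
    """Richardson-Lucy deconvolution of a 3D fluorescence stack.

    Simplified stand-in for the Richardson-Lucy TV algorithm run in
    DeconvolutionLab2: no total-variation term, and `psf` must be a
    3D array matching the dimensionality of `stack`.
    """
    stack = stack.astype(np.float64)
    stack /= stack.max()  # RL assumes non-negative, scaled data
    return restoration.richardson_lucy(stack, psf, num_iter)
```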

For the quantitative phase images, the phase shifting method was used to recover the phase information from four intensity images when phase values of 0, $\pi /2$, $\pi$, $3\pi /2$ are applied to the unscattered light through a spatial light modulator, as follows,

$$\varphi\left(\vec{r}\right) = \arctan\left[\frac{I\left(\vec{r}, 3\pi/2\right) - I\left(\vec{r}, \pi/2\right)}{I\left(\vec{r}, 0\right) - I\left(\vec{r}, \pi\right)}\right]$$
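For illustration, the four-frame reconstruction above maps onto a few lines of NumPy; the function and array names are our own, and arctan2 is used in place of arctan to retain the full phase range, so this is a sketch rather than the authors' reconstruction code.

```python
import numpy as np

def reconstruct_phase(I_0, I_half_pi, I_pi, I_three_half_pi):
    """Recover the phase map from four phase-shifted intensity frames.

    Each argument is a 2D array recorded with the stated phase shift
    (0, pi/2, pi, 3*pi/2) applied to the unscattered light.
    """
    numerator = I_three_half_pi - I_half_pi
    denominator = I_0 - I_pi
    return np.arctan2(numerator, denominator)

# Example with synthetic frames of the expected camera size:
frames = [np.random.rand(1024, 1024) for _ in range(4)]
phase = reconstruct_phase(*frames)
```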
Co-registration between the paired fluorescence and phase images is performed through a simple spatial offset, since the magnifications and pixel sizes in both channels are the same. The spatial offset was obtained through measurement of fluorescent beads using a home-written MATLAB script. The structural similarity metric, i.e., the SSIM index, is based on the computation of three terms, namely the luminance term, the contrast term and the structural term, and is computed as follows,
$$\mathrm{SSIM}\left( {x,y} \right) = \frac{\left( 2\mu_x \mu_y + c_1 \right)\left( 2\sigma_{xy} + c_2 \right)}{\left( \mu_x^2 + \mu_y^2 + c_1 \right)\left( \sigma_x^2 + \sigma_y^2 + c_2 \right)}$$
$$\mathrm{MS\text{-}SSIM}\left( {x,y} \right) = \left[ l_M\left( x,y \right) \right]^{\alpha_M} \cdot \prod_{j = 1}^M \left[ c_j\left( x,y \right) \right]^{\beta_j} \left[ s_j\left( x,y \right) \right]^{\gamma_j}$$
where $x$ and $y$ are the numerically labeled output and the corresponding fluorescently stained reference images, and $\mu$ and $\sigma$ are the mean and standard deviation, respectively. The overall index is a multiplicative combination of the three terms. The multi-scale structural similarity (MS-SSIM) index is more robust than SSIM [29]. We used MATLAB (MathWorks) to perform a maximum intensity projection (MIP) of each 3D stack and then calculated the SSIM and MS-SSIM indices based on the MIP image. Due to the slight rotational and displacement differences between the fluorescence and phase channel images, there is a slight mismatch between the digital labeling images and the fluorescence images, which requires accurate image alignment before the SSIM index is calculated. The alignment parameters are found using native image registration functions in the MATLAB image toolbox.
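As a reference point, the MIP-based comparison described here can be reproduced in Python with scikit-image for MSE and single-scale SSIM; MS-SSIM would require an additional implementation and is omitted. The variable names and the assumption of pre-registered, [0, 1]-normalized stacks are ours.

```python
import numpy as np
from skimage.metrics import structural_similarity

def compare_stacks(pred_stack, fluo_stack):
    """Compare a predicted and a reference z-stack (Z, Y, X arrays)
    via maximum intensity projection, as done for the Fig. 4 metrics.

    Assumes both stacks are already co-registered and scaled to [0, 1].
    Only MSE and single-scale SSIM are computed here.
    """
    mip_pred = pred_stack.max(axis=0)
    mip_fluo = fluo_stack.max(axis=0)
    mse = float(np.mean((mip_pred - mip_fluo) ** 2))
    ssim = structural_similarity(mip_pred, mip_fluo, data_range=1.0)
    return mse, ssim
```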

3.3 CNN network

The CNN architecture for digital labeling is based on the U-Net architecture commonly used in biomedical imaging tasks [41] and is shown in Fig. S5. Each layer performs a convolution followed by batch normalization and ReLU activation. Batch normalization has strong regularization properties and makes the training faster and more stable [42,43]. Convolution is performed with a 3$\times$3 pixel kernel, and zero padding is used to ensure that the output is the same size as the input. Downsampling is applied with a 2$\times$2 pixel convolution with a stride of 2 pixels that halves the spatial dimensions of the output; at each downsampling step the number of feature channels is doubled. Upsampling is applied with a 2$\times$2 pixel transposed convolution with a stride of 2 pixels that doubles the spatial dimensions of the output; at each upsampling step the number of feature channels is halved. The last layer of the network does not perform normalization or ReLU operations. Due to memory limitations associated with GPU calculations, we cropped images into 3D patches of 256 $\times$ 256 $\times$ 16 pixels (XYZ). These data were randomly shuffled before being fed into the neural network. For training, we collected 214 pairs of fluorescence and phase images from COS-7 samples; they were randomly divided into training and test pools according to a ratio of 80:20 (171 pairs as training data and 43 pairs as test data). Each 3D image has dimensions of 1024 $\times$ 1024 $\times$ 16 pixels. The training was performed in a typical forward and backward propagation mode, with model parameters updated by stochastic gradient descent (backward propagation) to minimize the loss function (MSE)

$$MSE=\frac{1}{n}\sum_{i=1}^{n} \left(y_i - f\left(x_i\right)\right)^2$$
between the output image and the target image. The network was trained using the Adam optimizer with $\beta$ values of 0.9 and 0.999 and a learning rate of 0.001. An L2 penalty was applied within the Adam optimizer as regularization against overfitting [44]. The convolutional kernels were randomly initialized using a truncated normal distribution with a mean of zero and a standard deviation of 0.02, and all network biases were initialized to zero. We used a mini-batch size of 14, and all input images were randomly subsampled uniformly both across all training images and spatially within an image. After a total of 300,000 mini-batch iterations, the loss function no longer decreased and the network training was stopped. The convergence of the network training is shown in Fig. S6. Note that we applied typical strategies such as batch normalization, regularization in the Adam optimizer and early stopping in order to avoid overfitting [42–44]. Finally, we visually inspected the midpoint prediction results and compared them with both the fluorescence image and the original phase image, since individual mitochondria are resolved in both images.
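The building blocks described above (3×3 convolution with zero padding followed by batch normalization and ReLU, 2×2 strided convolutions for downsampling, and 2×2 transposed convolutions for upsampling) map directly onto standard PyTorch modules. The sketch below encodes them in 2D; whether the 16 z-slices enter the network as channels or via 3D convolutions is not stated in the text, so the layout here is an assumption rather than the authors' exact architecture.

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 convolution with zero padding, then batch norm and ReLU,
    # so the spatial size of the output matches the input.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def downsample(in_ch, out_ch):
    # 2x2 convolution with stride 2 halves each spatial dimension
    # while the number of feature channels is doubled.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=2, stride=2),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

def upsample(in_ch, out_ch):
    # 2x2 transposed convolution with stride 2 doubles each spatial
    # dimension while the number of feature channels is halved.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```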

The network was implemented using Python version 3.7.0 and Pytorch framework version 1.4.0. We implemented the software on a desktop computer with a 3.6 GHz Core i9-9900K CPU (Intel Corporation, Santa Clara, CA, USA) and 64 GB of RAM, running the Ubuntu operating system. The final output image of the network was 1024 x 1024 pixels. Network training and testing was performed using dual NVIDIA TITAN RTX GPUs (NVidia, Santa Clara, CA, USA).
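Putting the stated hyperparameters together (MSE loss; Adam with β values of 0.9 and 0.999 and a learning rate of 0.001; an L2 penalty; mini-batches of 14 randomly cropped 256 × 256 × 16 patches), a training skeleton might look like the following. The model and dataloader objects are placeholders, and the weight-decay value is an assumption, since its magnitude is not given in the text.

```python
import torch
import torch.nn as nn

def train(model, dataloader, n_iterations=300_000, device="cuda"):
    """Minimal training skeleton for the phase-to-mitochondria network.

    `model` maps a phase patch to a fluorescence-like patch;
    `dataloader` yields (phase_patch, fluo_patch) mini-batches of 14
    randomly cropped 256x256x16 patches. Both are placeholders.
    """
    model = model.to(device)
    criterion = nn.MSELoss()
    # weight_decay provides the L2 penalty mentioned in the text; its
    # value here (1e-4) is an assumption, not taken from the paper.
    optimizer = torch.optim.Adam(
        model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=1e-4
    )
    it = 0
    while it < n_iterations:
        for phase_patch, fluo_patch in dataloader:
            phase_patch = phase_patch.to(device)
            fluo_patch = fluo_patch.to(device)
            optimizer.zero_grad()
            loss = criterion(model(phase_patch), fluo_patch)
            loss.backward()
            optimizer.step()
            it += 1
            if it >= n_iterations:
                break
    return model
```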

4. Discussion

In this work, we have presented a new imaging and analysis paradigm where the mitochondria dynamics can be studied in a gentle and unbiased way. We use a high resolution phase contrast microscope to collect cell images where unlabeled organelles can be seen clearly and mitochondria would not experience photo-induced stress caused by the label and strong fluorescent excitation light. A CNN network is used to pick out only the mitochondria from the panoramic view of the cell image. Due to the high resolution in both the lateral and axial direction, 3D structures of individual mitochondria and the complex network can be resolved and the image quality is sufficient for the following morphology analysis. Since the phase microscope can perform long-term continuous imaging, one can track each individual mitochondrion’s movement and the evolution of the whole network. In order to lay a strong foundation for this paradigm, we have tested our system systematically, including probing the robustness of the CNN network to cell lines distinct from the training data, and to experimental factors such as system alignment and construction.

In our results, we show several quantitative metrics that are consistent between the phase data and the ground truth, despite the different cell lines and different experimental systems. The fundamental reason for the network robustness, particularly considering the relative paucity of training data (only 171 image pairs are used in the training step), is that due to the high contrast in the raw data, where individual mitochondria can be easily identified by the naked eye, the neural network can train on the mitochondrial features themselves. This is in contrast to prior work, where low-resolution phase or bright field images are used and essentially no information about the mitochondria is present in the raw data, forcing the CNN to perform more "guesswork" about where the mitochondria are and what their morphology might be. In order to avoid overfitting, we have also employed typical strategies such as batch normalization, regularization in the Adam optimizer and early stopping in the training. With high contrast phase images, we can also visually compare the results from the network and the fluorescence as a final safety check. Thus, we have compared the paired fluorescence image and the midpoint result from the neural network in the training data. If overfitting were present, it would manifest as a network with poor robustness. Yet by challenging the network with data from other cell lines and from a similar system constructed by a different person, we find that the network consistently yields predictions that are highly consistent with paired fluorescence images (cf. Figs. 2, 3, and 6). However, we note that the results are still not perfect, as evidenced by visual inspection of the data presented in Fig. 2 and Fig. 3, as well as the quantitative metrics still showing some variability across cell lines. Thus, while the network performance is sufficient for downstream batch analysis of mitochondrial morphology, the digital labeling performance has room to improve, for example by expanding the training data to include multiple cell lines, and by deconvolving the phase data to improve the sectioning and resolution to be more similar to the deconvolved fluorescence data.

As OS-PCM relies on intrinsic signals that do not bleach under long term observation, our method can achieve observation times beyond traditional fluorescence methods. Therefore, to promote future analytical developments using phase based data, we have also collected a long-term 5000 frame set of an unlabeled cell and used the neural network to predict the 2D mitochondria. These data can be found at Dataset 1 [45].

In the utility test and validation experiment, we used the fluorescence based analysis as a reference to evaluate the morphology parameters computed from OS-PCM. We found that when the fluorescence image has a high SNR, the parameters are similar to each other. However low SNR fluorescence images are not suitable for downstream morphology analysis, as the low signal leads to excessive spurious fragmentation in the segmented image. Thus the fluorescence-based method favors bright cells or bright mitochondria, potentially biasing results. Interestingly, for phase-based methods, there is no such preference. Cells and mitochondria that appear to be dim in the fluorescence channel are imaged with similar brightness to their peers in the phase contrast image and thus included in the downstream analysis in an unbiased manner.

In mitochondria cleavage experiments, we found that addition of FCCP causes MitoTracker, a commonly used mitochondrial labeling dye, to diffuse into the cytoplasm, nucleus and other parts of the cell. The redistribution of the dye seriously interferes with the fluorescence imaging (see example images in Fig. S7). Even with more complex methods such as viral gene transfection, the signals of the fluorescence images are not consistent, depending largely on the cell status and the production lots of the virus. However, with our label-free quantitative phase imaging, the signal is consistent (paired images are shown in Fig. S8). Further, even when the labeling is successful, the heterogeneous uptake of cells and mitochondria could lead to artificial selection of samples based on brightness, either by users or by the algorithm. Thus, we believe that using high resolution phase microscopy to collect cell images and performing digital segmentation will reduce random experimental factors and thus lead to more reliable analysis.

Finally, since other organelles such as the ER, vesicles, and nuclei are also clearly visible in our phase contrast microscope, we can obtain their organelle-specific images through deep learning in the same way, which is the direction of our future work. In this way, our method holds promise for future label-free visualization of pan-organelle dynamics, including their mutual interactions, without phototoxicity, photobleaching, or any other staining-induced experimental variability.

Funding

National Key Research and Development Program of China (2017YFA0505300); Anhui Province Key R & D Project (202003a07020020).

Acknowledgments

The authors thank Junru Feng and Junying Li of the School of Life Sciences of the University of Science and Technology of China for providing cells.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Dataset 1 [45].

Supplemental document

See Supplement 1 for supporting content.

References

1. G. W. Dorn, “Evolving concepts of mitochondrial dynamics,” Annu. Rev. Physiol. 81(1), 1–17 (2019). [CrossRef]  

2. R. H. Swerdlow, “Mitochondria and mitochondrial cascades in Alzheimer’s disease,” J. Alzheimers Dis. 62(3), 1403–1416 (2018). [CrossRef]  

3. F. Bonnay, A. Veloso, V. Steinmann, T. Kocher, M. D. Abdusselamoglu, S. Bajaj, E. Rivelles, L. Landskron, H. Esterbauer, R. P. Zinzen, and J. A. Knoblich, “Oxidative metabolism drives immortalization of neural stem cells during tumorigenesis,” Cell 182(6), 1490–1507.e19 (2020). [CrossRef]  

4. A. L. Smith, J. C. Whitehall, C. Bradshaw, D. Gay, F. Robertson, A. P. Blain, G. Hudson, A. Pyle, D. Houghton, and M. Hunt, “Age-associated mitochondrial dna mutations cause metabolic remodeling that contributes to accelerated intestinal tumorigenesis,” Nat. Cancer 1(10), 976–989 (2020). [CrossRef]  

5. R. S. Demarco, B. S. Uyemura, C. D’Alterio, and D. L. Jones, “Mitochondrial fusion regulates lipid homeostasis and stem cell maintenance in the drosophila testis,” Nat. Cell Biol. 21(6), 710–720 (2019). [CrossRef]  

6. E. Mansell, V. Sigurdsson, E. Deltcheva, J. Brown, C. James, K. Miharada, S. Soneji, J. Larsson, and T. Enver, “Mitochondrial potentiation ameliorates age-related heterogeneity in hematopoietic stem cell function,” Cell Stem Cell 28(2), 241–256.e6 (2021). [CrossRef]  

7. V. Eisner, M. Picard, and G. Hajnoczky, “Mitochondrial dynamics in adaptive and maladaptive cellular stress responses,” Nat. Cell Biol. 20(7), 755–765 (2018). [CrossRef]  

8. D. C. Chan, “Mitochondrial dynamics and its involvement in disease,” Annu. Rev. Pathol. Mech. Dis. 15(1), 235–259 (2020). [CrossRef]  

9. B. Glancy, “Visualizing mitochondrial form and function within the cell,” Trends Mol. Med. 26(1), 58–70 (2020). [CrossRef]  

10. S. Skylaki, O. Hilsenbeck, and T. Schroeder, “Challenges in long-term imaging and quantification of single-cell dynamics,” Nat. Biotechnol. 34(11), 1137–1144 (2016). [CrossRef]  

11. Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (slim),” Opt. Express 19(2), 1016–1026 (2011). [CrossRef]  

12. Y. Cotte, F. Toy, P. Jourdain, N. Pavillon, D. Boss, P. Magistretti, P. Marquet, and C. Depeursinge, “Marker-free phase nanoscopy,” Nat. Photonics 7(2), 113–117 (2013). [CrossRef]  

13. Y. Ma, S. Guo, Y. Pan, R. Fan, Z. J. Smith, S. Lane, and K. Chu, “Quantitative phase microscopy with enhanced contrast and improved resolution through ultra-oblique illumination (uo-qpm),” J. Biophotonics 12(10), e201900011 (2019). [CrossRef]  

14. Y. Zhang, K. de Haan, Y. Rivenson, J. Li, A. Delis, and A. Ozcan, “Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue,” Light: Sci. Appl. 9(1), 78 (2020). [CrossRef]  

15. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

16. C. Qiao, D. Li, Y. Guo, C. Liu, T. Jiang, Q. Dai, and D. Li, “Evaluation and development of deep neural networks for image super-resolution in optical microscopy,” Nat. Methods 18(2), 194–202 (2021). [CrossRef]  

17. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019). [CrossRef]  

18. Y. Rivenson, Y. Wu, and A. Ozcan, “Deep learning in holography and coherent imaging,” Light: Sci. Appl. 8(1), 85 (2019). [CrossRef]  

19. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018). [CrossRef]  

20. Y. Wu, Y. Rivenson, H. Wang, Y. Luo, E. Ben-David, L. A. Bentolila, C. Pritz, and A. Ozcan, “Three-dimensional virtual refocusing of fluorescence microscopy images using deep learning,” Nat. Methods 16(12), 1323–1331 (2019). [CrossRef]  

21. E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In silico labeling: predicting fluorescent labels in unlabeled images,” Cell 173(3), 792–803.e19 (2018). [CrossRef]  

22. C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018). [CrossRef]  

23. Y. N. Nygate, M. Levi, S. K. Mirsky, N. A. Turko, M. Rubin, I. Barnea, G. Dardikman-Yoffe, M. Haifler, A. Shalev, and N. T. Shaked, “Holographic virtual staining of individual biological cells,” Proc. Natl. Acad. Sci. 117(17), 9223–9231 (2020). [CrossRef]  

24. M. E. Kandel, Y. R. He, Y. J. Lee, T. H.-Y. Chen, K. M. Sullivan, O. Aydin, M. T. A. Saif, H. Kong, N. Sobh, and G. Popescu, “Phase imaging with computational specificity (pics) for measuring dry mass changes in sub-cellular compartments,” Nat. Commun. 11(1), 6256 (2020). [CrossRef]  

25. M. P. Viana, S. Lim, and S. M. Rafelski, “Quantifying mitochondrial content in living cells,” Methods Cell Biol. 125, 77–93 (2015). [CrossRef]  

26. M. P. Viana, A. I. Brown, I. A. Mueller, C. Goul, E. F. Koslover, and S. M. Rafelski, “Mitochondrial fission and fusion dynamics generate efficient, robust, and evenly distributed network topologies in budding yeast cells,” Cell Syst. 10(3), 287–297.e5 (2020). [CrossRef]  

27. H. M. Moreno, “Phototoxic effects of epifluorescence or tomographic phase microscopies on mammalian organelles,” Thesis, EPFL (2019).

28. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

29. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, vol. 2 (IEEE, 2003), pp. 1398–1402.

30. A. J. Valente, L. A. Maddalena, E. L. Robb, F. Moradi, and J. A. Stuart, “A simple imagej macro tool for analyzing mitochondrial network morphology in mammalian cell culture,” Acta Histochem. 119(3), 315–326 (2017). [CrossRef]  

31. E. Lihavainen, J. Makela, J. N. Spelbrink, and A. S. Ribeiro, “Mytoe: automatic analysis of mitochondrial dynamics,” Bioinformatics 28(7), 1050–1051 (2012). [CrossRef]  

32. M. Ouellet, G. Guillebaud, V. Gervais, D. Lupien St-Pierre, and M. Germain, “A novel algorithm identifies stress-induced alterations in mitochondrial connectivity and inner membrane structure from confocal images,” PLoS Comput. Biol. 13(6), e1005612 (2017). [CrossRef]  

33. A. Chaudhry, R. Shi, and D. S. Luciani, “A pipeline for multidimensional confocal analysis of mitochondrial morphology, function, and dynamics in pancreatic beta-cells,” Am. J. Physiol. Endocrinol. Metab. 318(2), E87–E101 (2020). [CrossRef]  

34. E. F. Iannetti, J. A. Smeitink, J. Beyrath, P. H. Willems, and W. J. Koopman, “Multiplexed high-content analysis of mitochondrial morphofunction using live-cell microscopy,” Nat. Protoc. 11(9), 1693–1710 (2016). [CrossRef]  

35. M. C. Harwig, M. P. Viana, J. M. Egner, J. J. Harwig, M. E. Widlansky, S. M. Rafelski, and R. B. Hill, “Methods for imaging mammalian mitochondrial morphology: a prospective on mitograph,” Anal. Biochem. 552, 81–99 (2018). [CrossRef]  

36. H. Terada, “The interaction of highly active uncouplers with mitochondria,” Biochim. Biophys. Acta 639(3-4), 225–242 (1981). [CrossRef]  

37. J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, J. Y. Tinevez, D. J. White, V. Hartenstein, K. Eliceiri, P. Tomancak, and A. Cardona, “Fiji: an open-source platform for biological-image analysis,” Nat. Methods 9(7), 676–682 (2012). [CrossRef]  

38. D. Sage, L. Donati, F. Soulez, D. Fortun, G. Schmit, A. Seitz, R. Guiet, C. Vonesch, and M. Unser, “DeconvolutionLab2: an open-source software for deconvolution microscopy,” Methods 115, 28–41 (2017). [CrossRef]

39. M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light (Elsevier, 2013).

40. H. Kirshner, F. Aguet, D. Sage, and M. Unser, “3-D PSF fitting for fluorescence microscopy: implementation and localization application,” J. Microsc. 249(1), 13–25 (2013). [CrossRef]

41. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-assisted Intervention (Springer, 2015), pp. 234–241.

42. E. Moen, D. Bannon, T. Kudo, W. Graf, M. Covert, and D. Van Valen, “Deep learning for cellular image analysis,” Nat. Methods 16(12), 1233–1246 (2019). [CrossRef]  

43. N. Bjorck, C. P. Gomes, B. Selman, and K. Q. Weinberger, “Understanding batch normalization,” in NeurIPS (2018), pp. 7705–7716.

44. I. Loshchilov and F. Hutter, “Decoupled weight decay regularization,” in International Conference on Learning Representations (2019).

45. S. Guo, Y. Ma, Y. Pan, Z. Smith, and K. Chu, “Data file,” figshare (2021), https://doi.org/10.6084/m9.figshare.14823138.

Supplementary Material (6)

Name       Description
Dataset 1       Data File 1 for Organelle-specific phase contrast microscopy enables gentle monitoring and analysis of mitochondrial network dynamics
Supplement 1       Supplement Figures
Visualization 1       Visualization 1
Visualization 2       Visualization 2
Visualization 3       Visualization 3
Visualization 4       Visualization 4

Data availability

Data underlying the results presented in this paper are available in Dataset 1 [45].



Figures (7)

Fig. 1. Process for three-dimensional, bleach-free imaging and analysis of mitochondrial network dynamics through OS-PCM. Three dimensional raw data are collected with a home-built two-channel microscope through axial scanning. In the phase channel, ultra-oblique illumination with a LED ring is used to obtain high resolution. Four controlled phase shifts are applied to the unscattered light to obtain four frames for shifting-based reconstruction of the phase image. In the fluorescence channel, a laser is used for illumination and the 3D image can be acquired simultaneously with the phase channel, and deconvolved to boost the contrast. In the training of the CNN network, paired phase and fluorescence images, where organelles such as mitochondria are labeled, are used as inputs. After the training, the network can infer mitochondria-specific information from the nonspecific phase image with sufficient resolution and precision for accurate downstream morphology analysis of the mitochondria network.
Fig. 2. Three-dimensional mitochondrial prediction results for COS-7 cells. (a) Top row: paired raw non-specific phase images and fluorescence images (one z-slice shown); bottom row: depth-encoded projections of the deconvolved fluorescence and 3D OS-PCM z-stacks. (b) Zoomed-in views of the four boxed regions shown in (a). (c) 3D renderings of two examples from (b), where individual mitochondrial structures appear smooth and complete in the OS-PCM result while the fluorescence results are more fragmented. Spurious structures due to signals from other z-slices or unbound dye appear in the fluorescence-based result (identified by the green arrow in (b) and (c)) but are absent in the OS-PCM result. Scale bars: (a), 20 $\mathrm {\mu}$m; (c), 1 $\mathrm {\mu}$m.
Fig. 3. Three-dimensional mitochondrial prediction results for PLC and Hep3 cell lines with a CNN network trained on COS-7 cell line. (a) Paired, depth-encoded projections of the fluorescence and OS-PCM z-stacks. (b) Zoomed-in views of the four boxed regions in (a). (c) 3D rendering of two examples from (b). Scale bars: (a), 20 $\mathrm {\mu}$m; (c), 1 $\mathrm {\mu}$m.
Fig. 4. Quantitative evaluations of the predictions for the three cell lines, shown as violin plots. (a) Mean squared error metric (MSE). (b) Structural similarity metric (SSIM). (c) Multi-scale structural similarity metric (MS-SSIM).
Fig. 5. Morphological analysis results of the mitochondria network. (a) 3D mitochondria surfaces of fluo-decon and OS-PCM from MitoGraph. (b) Zoomed-in views of the boxed regions shown in (a). (c) Main indicators of the MitoGraph morphological analysis show that the results of the fluorescence-based method and OS-PCM are similar.
Fig. 6. Morphological analysis for two cells within the same field of view, where one cell appears "bright" in the fluorescence mode. (a) Paired fluorescence and OS-PCM images. (b) 3D mitochondria surfaces analyzed by MitoGraph. In the fluorescence-based method, the mito-surface appears more fragmented in the dim cell than in the bright cell, while for the OS-PCM method the results are consistent between the two cells. (c) Extracted morphology parameters for these two cells. PHI represents the relative size of the largest connected component to the total mitochondrial size. Avg Edge is the total mitochondrial length divided by the total edge number. Total Edge represents the total edge number. Total Nodes represents the total number of nodes. Finally, Total Components represents the total number of mitochondria. More information about these parameters can be found in [35]. (d) Zoomed-in views of a few representative mitochondria from both the bright and dim cells. (e) Intensity line profiles of the mitochondria studied in (d), each normalized to its maximum along the whole line.
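For reference, parameters of this kind can be computed from a graph representation of the mitochondrial network. The sketch below is a minimal illustration in Python, not MitoGraph code; it assumes an undirected networkx graph whose edges carry a hypothetical 'length' attribute holding the length of each mitochondrial segment.

import networkx as nx

def network_summary(G):
    # Summary parameters mirroring those listed in Fig. 6(c); a sketch, not MitoGraph code.
    total_length = sum(d["length"] for _, _, d in G.edges(data=True))
    components = list(nx.connected_components(G))
    # Size of each connected component, measured as the summed length of its edges.
    comp_lengths = [
        sum(d["length"] for _, _, d in G.subgraph(c).edges(data=True))
        for c in components
    ]
    return {
        "Total Nodes": G.number_of_nodes(),
        "Total Edge": G.number_of_edges(),
        "Total Components": len(components),
        "Avg Edge": total_length / max(G.number_of_edges(), 1),
        "PHI": max(comp_lengths) / total_length if total_length > 0 else 0.0,
    }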
Fig. 7. Morphological analysis results of the mitochondria cleavage process. (a) Two examples of the OS-PCM images from the mitochondria cleavage experiment (see Visualization 1). One z-slice is shown here. (b) Time lapse of the mitochondria network parameters, where the total nodes, total edges, average edge length, and connectivity score all show a downward trend during the experiment. (c) Two cells in the same field of view (see Visualization 2). (d) Time-lapse data of the mitochondria network parameters of these two cells, showing slightly different time courses. (e) An example visual time course of a fragmenting mitochondrion (see Visualization 3), from the green box in (c). Scale bars in (a), (c), and (e) represent 10, 20, and 5 $\mathrm {\mu}$m, respectively.

Equations (4)


$$\varphi(\mathbf{r}) = \arctan\left[\frac{I(\mathbf{r}, 1.5\pi) - I(\mathbf{r}, 0.5\pi)}{I(\mathbf{r}, 0) - I(\mathbf{r}, \pi)}\right]$$

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$$

$$\mathrm{MS\text{-}SSIM}(x, y) = \left[l_M(x, y)\right]^{\alpha_M}\prod_{j=1}^{M}\left[c_j(x, y)\right]^{\beta_j}\left[s_j(x, y)\right]^{\gamma_j}$$

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - f(x_i)\right)^2$$
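For completeness, the MSE and single-scale SSIM metrics above can be evaluated with scikit-image; the sketch below is a minimal illustration on a hypothetical prediction/target image pair (scikit-image does not provide MS-SSIM, so only the first two metrics are shown).

import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

def evaluate_prediction(pred, target):
    # MSE and single-scale SSIM between a predicted and a reference image (sketch).
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    data_range = target.max() - target.min()
    return {
        "MSE": mean_squared_error(target, pred),
        "SSIM": structural_similarity(target, pred, data_range=data_range),
    }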