Optica Publishing Group

ICF-PR-Net: a deep phase retrieval neural network for X-ray phase contrast imaging of inertial confinement fusion capsules

Open Access

Abstract

X-ray phase contrast imaging (XPCI) has demonstrated capability to characterize inertial confinement fusion (ICF) capsules, and phase retrieval can reconstruct phase information from intensity images. This study introduces ICF-PR-Net, a novel deep learning-based phase retrieval method for ICF-XPCI. We numerically constructed datasets based on ICF capsule shape features, and proposed an object–image loss function to add image formation physics to network training. ICF-PR-Net outperformed traditional methods as it exhibited satisfactory robustness against strong noise and nonuniform background and was well-suited for ICF-XPCI’s constrained experimental conditions and single exposure limit. Numerical and experimental results showed that ICF-PR-Net accurately retrieved the phase and absorption while maintaining retrieval quality in different situations. Overall, the ICF-PR-Net enables the diagnosis of the inner interface and electron density of capsules to address ignition-preventing problems, such as hydrodynamic instability growth.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

X-ray phase contrast imaging (XPCI) is a powerful imaging technology. It offers greater sensitivity to low-density materials than absorption-based X-ray imaging and is widely used in various fields, including medicine, biology, and materials science [1,2]. Inertial confinement fusion (ICF) experiments require accurate diagnosis of capsules to avoid hydrodynamic instability growth that hinders successful ignition. The ablator and fuel layers of a capsule are both made of low-density materials, so absorption-based imaging of the capsule has a poor signal-to-noise ratio (SNR). Therefore, the potential of XPCI for effective ICF diagnosis has been explored [3–7].

As most existing detectors can only record intensity information, phase information is lost. Phase retrieval aims at reconstructing the phase from phase contrast images for use in analyzing the inner interface and 2-D electron density of ICF capsules. Traditional phase retrieval methods can be divided into deterministic and iterative methods. Deterministic methods find the phase solution of a diffraction equation under certain approximations; examples include transport of intensity equation (TIE) methods [8–13], contrast transfer function (CTF) methods [14], and phase-attenuation duality (PAD) methods [15]. Recent research has improved traditional deterministic methods through various regularization schemes; for example, the NLTikh method [16] is a nonlinear generalization of the traditional CTF method via Tikhonov regularization and is no longer limited to CTF's weak absorption and phase shift restrictions. Iterative methods retrieve the phase either by performing numerical diffraction alternately between the object and image planes under constraints [17–20] or by solving a nonconvex optimization problem with algorithms such as the alternating direction method of multipliers (ADMM) [21–24]. However, applying traditional methods to ICF-XPCI remains difficult. Deterministic methods only work under specific conditions that are difficult to meet in ICF-XPCI (e.g., TIE methods usually require two [25] or more [26] images to estimate the paraxial differentiation of intensity, and CTF methods only apply to samples with weak phase shift and absorption [27]), and they have poor robustness against image degradations and parameter errors. Iterative methods require multiple images and appropriate regularizations to ensure convergence, but the current ICF-XPCI system can only perform a single exposure per experiment, and obtaining a priori knowledge for regularization remains difficult. Therefore, it is necessary to develop a new, effective phase retrieval method.

Recently, deep learning (DL) has shown potential to solve inverse problems such as optical tomography [28], magnetic resonance imaging [29], super-resolution imaging [30], and digital holography [31]. Phase retrieval methods based on DL have also been proposed [32–38]; they are independent of experimental and sample constraints and offer stronger robustness than deterministic methods. DL methods often require only one image to retrieve the phase without a priori knowledge and are time-effective compared with iterative methods. However, current DL methods have poor phase retrieval accuracy for ICF capsule images owing to the relatively weak phase contrast signals at the inner interfaces. Some current methods numerically back-propagate the image onto the object plane and input it into the network to add image formation physics [34–36]; this is effective for holograms with strong diffraction fringes but unsuitable for ICF-XPCI images, whose fringes are relatively weak due to experimental constraints. Finally, ICF-XPCI presents severe image degradations, especially strong noise from the high-power laser experiment under extreme environments and a nonuniform background from uneven source focal spot intensity. Current DL methods require improvement to cope with these issues.

This study introduces ICF-PR-Net, a novel DL-based phase retrieval method for the XPCI of ICF capsules. For the datasets, the phase and absorption datasets were built numerically with reference to the ICF capsule shapes in previous ICF experiments, and the intensity dataset was built by diffraction simulation. For the network, we separated phase and absorption retrieval into two networks to avoid mutual interference, thereby enabling the retrieval of the weak phase signals at the capsule's inner interfaces. We then obtained the initial phase/absorption by analytical operators and inputted it into the retrieval network together with the intensity to add image formation physics to the network, an approach that is more effective under ICF-XPCI's weak diffraction conditions than the back-propagation methods mentioned above. Finally, we set flat-field correcting and denoising networks before retrieval to enhance the network's ability to cope with nonuniform background and strong noise. For training, we added the initial phase loss and multiparameter intensity loss to form loss functions comprising object and image losses, adding physical prior knowledge to training. ICF-PR-Net is based on a generative adversarial network (GAN); to balance local and global structures in retrieval, the generator is based on the multiscale network M-Net, and the discriminator is based on PatchGAN.

The remainder of this paper is structured as follows. First, we introduce the principles of phase contrast imaging and phase retrieval. Second, we describe the network architecture, datasets, and training of our method. Third, we present the numerical validation of our method, including a comparison with other typical phase retrieval methods and testing it under different conditions. Fourth, we apply our method to experimental X-ray phase contrast images of different capsules. Finally, we discuss future applications of our method in real ICF experiments.

2. Methods

2.1 X-ray phase contrast imaging and phase retrieval

Figure 1 shows the X-ray inline phase contrast imaging and phase retrieval processes. The X-rays emitted from a ∼5–10 µm diameter point source transmit through an ICF capsule, which changes their complex amplitude; thus, the capsule's internal structure can be characterized by the phase ${\varphi _o}$ and absorption ${\mu _o}$ on the object plane. Subsequently, the transmitted X-rays propagate and diffract in free space, and a detector captures the intensity ${I_i}$ of the X-rays and outputs the phase contrast image.


Fig. 1. Schematic of X-ray inline phase contrast imaging and phase retrieval of the ICF capsule.


The complex index of refraction can be written as $n = 1 + \delta - i\beta $, where $\delta $ is the refraction decrement that accounts for phase, and $\beta $ is the attenuation decrement that accounts for absorption. Consider a coherent plane wave with wavelength $\lambda \; $ propagating along the z-axis. The phase ${\varphi _o}$ and absorption ${\mu _o}$ of the complex wave transmitted through the capsule are as follows:

$${\varphi _o}(x,y) = -\frac{2\pi}{\lambda}\int \delta(x,y,z)\,dz$$
$${\mu _o}(x,y) = -\frac{2\pi}{\lambda}\int \beta(x,y,z)\,dz$$

The transmitted complex wave at the object plane can be written as ${E_o}(x,y) = \exp [ - {\mu _o}(x,y) + i{\varphi _o}(x,y)]$. According to the scalar diffraction integral in the paraxial limit, the complex wave at the image plane $z = R$ can be written as follows:

$${E_i}({x,y} )= \int\!\!\!\int {{E_o}({x^{\prime},y^{\prime}} )h({x - x^{\prime},y - y^{\prime};R} )dx^{\prime}dy^{\prime}} $$
where $h(x,y;R)$ is the Fresnel propagation kernel (impulse response):
$$h({x,y;R} )= \frac{1}{{i\lambda R}}\exp \left[ { - \frac{{i\pi }}{{\lambda R}}({{x^2} + {y^2}} )} \right]$$

In the Fourier frequency domain, Eq. (3) becomes:

$$\mathrm{{\cal F}}\{{{E_i}({x,y} )} \}= H({u,v;R} )\cdot \mathrm{{\cal F}}\{{{E_o}({x,y} )} \}$$
where $\mathrm{{\cal F}}$ represents the 2-D Fourier transform, $(u,v)$ are the spatial frequency coordinates, and the transfer function $H({u,v;R} )$ can be expressed as follows:
$$H({u,v;R} )= \exp [{i\pi \lambda R({{u^2} + {v^2}} )} ]$$

Thus, the image plane intensity can be calculated as follows:

$${I_i}({x,y} )= {|{{\mathrm{{\cal F}}^{ - 1}}\{{H({u,v} )\cdot \mathrm{{\cal F}}\{{{E_o}({x,y} )} \}} \}} |^2}$$
where ${\mathrm{{\cal F}}^{ - 1}}$ represents the 2-D Fourier inverse transform.
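As a minimal numerical sketch, the Fourier-domain propagation above can be written in a few lines of NumPy. This is an illustration, not the authors' code; the grid size, the 4 keV wavelength of ≈0.31 nm, and the 10 µm pixel are assumed example values:

```python
import numpy as np

def fresnel_propagate(E_o, wavelength, R, pixel):
    """Plane-wave Fresnel propagation: I_i = |F^{-1}{ H(u,v;R) . F{E_o} }|^2,
    with H(u,v;R) = exp[i*pi*lambda*R*(u^2 + v^2)] as in the text."""
    ny, nx = E_o.shape
    u = np.fft.fftfreq(nx, d=pixel)   # spatial frequencies along x
    v = np.fft.fftfreq(ny, d=pixel)   # spatial frequencies along y
    U, V = np.meshgrid(u, v)
    H = np.exp(1j * np.pi * wavelength * R * (U**2 + V**2))
    E_i = np.fft.ifft2(H * np.fft.fft2(E_o))
    return np.abs(E_i)**2

# Example: a weak pure-phase disc imaged at R = 20 cm with 4 keV X-rays
x = (np.arange(256) - 128) * 10e-6
X, Y = np.meshgrid(x, x)
phi = 0.5 * (X**2 + Y**2 < (0.5e-3)**2)    # 0.5 rad phase disc, 1 mm diameter
I = fresnel_propagate(np.exp(1j * phi), 3.1e-10, 0.2, 10e-6)
```

Because $H$ is unimodular, propagation conserves total energy; diffraction fringes appear at the disc edge even though the object is pure phase, which is the essence of phase contrast.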

For ICF-XPCI using a point X-ray source, the geometric magnification must be sufficiently large to ensure diagnostic resolution; therefore, the incident wave must be regarded as a spherical wave. Moreover, spatial partial coherence cannot be neglected owing to the relatively large size of the ICF-XPCI X-ray source. Therefore, Eq. (7) needs to be corrected and is given as follows:

$${I_i}({x,y} )= {\left|{{\mathrm{{\cal F}}^{ - 1}}\left\{ {H\left[ {Mu,Mv;\frac{{{R_2}}}{M}} \right] \cdot \mathrm{{\cal F}}\left\{ {{E_o}\left( {\frac{x}{M},\frac{y}{M}} \right)} \right\}} \right\}} \right|^2}\mathrm{\ast }{I_s}\left( {\frac{{{R_1}}}{{{R_2}}}x,\frac{{{R_1}}}{{{R_2}}}y} \right)$$
where ${R_1}$ is the source-to-object distance, ${R_2}$ is the object-to-image distance, $M = ({R_1} + {R_2})/{R_1}$ is the geometric magnification, ${I_s}$ is the X-ray source intensity, and $\mathrm{\ast }$ is convolution. Compared to Eq. (7), Eq. (8) transforms the coordinates and performs convolution with the light source intensity. For simplicity, we summarize Eq. (8) as follows:
$${I_i}({x,y} )= {\mathbf {\cal H}}\{{{E_o}({x,y} )} \}$$
where ${\mathbf {\cal H}}$ is called the diffraction operator. Therefore, the phase contrast imaging calculation is reduced to finding the intensity ${I_i}$ at the image plane with known complex amplitude ${E_o}$ at the object plane using the diffraction operator ${\mathbf {\cal H}}$.
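The point-source correction in Eq. (8) can be sketched with the standard effective-distance trick: propagate over ${R_2}/M$ in object-plane coordinates, then convolve with the projected source profile. The Gaussian source model and parameter names below are assumptions made for this illustration, not the authors' implementation:

```python
import numpy as np

def point_source_image(E_o, wavelength, R1, R2, pixel, source_sigma):
    """Sketch of Eq. (8): spherical-wave (point-source) imaging reduces to
    plane-wave Fresnel propagation over the effective distance R2/M, with
    M = (R1 + R2)/R1, followed by convolution with the projected source
    intensity. Everything is expressed in object-plane coordinates, and the
    source is assumed Gaussian with standard deviation source_sigma."""
    M = (R1 + R2) / R1
    ny, nx = E_o.shape
    U, V = np.meshgrid(np.fft.fftfreq(nx, d=pixel), np.fft.fftfreq(ny, d=pixel))
    H = np.exp(1j * np.pi * wavelength * (R2 / M) * (U**2 + V**2))
    I = np.abs(np.fft.ifft2(H * np.fft.fft2(E_o)))**2
    # Source blur referred to the object plane: projected width s * R2 / (R1 + R2)
    s_eff = source_sigma * R2 / (R1 + R2)
    G = np.exp(-2 * np.pi**2 * s_eff**2 * (U**2 + V**2))  # Fourier transform of the Gaussian
    return np.fft.ifft2(G * np.fft.fft2(I)).real
```

The convolution models the partial spatial coherence of the finite source: larger sources wash out the fine diffraction fringes, as discussed in Section 3.2.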

In contrast, phase retrieval is solving the complex amplitude ${E_o}$ at the object plane with known intensity ${I_i}$ at the image plane, thus obtaining the phase ${\varphi _o}$ and absorption ${\mu _o}$ at the object plane. Therefore, phase retrieval can be seen as the inverse operator ${{\mathbf {\cal H}}^{ - 1}}$ of the diffraction operator ${\mathbf {\cal H}}$, so:

$${E_o}({x,y} )= {{\mathbf {\cal H}}^{ - 1}}\{{{I_i}({x,y} )} \}$$

However, owing to the loss of phase information when recording intensity ${I_i}$ by the detector, phase retrieval ${{\mathbf {\cal H}}^{ - 1}}$ is usually ill-posed. While the deterministic method introduces approximations and attempts to obtain an analytical formula for ${{\mathbf {\cal H}}^{ - 1}}$ by solving the diffraction equation, the iterative method views ${{\mathbf {\cal H}}^{ - 1}}$ as a minimization problem:

$$E_o^\mathrm{\ast } = \mathop {\textrm{argmin}}\limits_{{E_o}} \{{{{|{{\mathbf {\cal H}}\{{{E_o}} \}- {I_i}} |}^2} + R({{E_o}} )} \}$$
where $R({E_o})$ is a regularization term that includes a priori knowledge of the sample.

Unlike traditional methods, the DL-based phase retrieval method uses a trained deep network to fit ${{\mathbf {\cal H}}^{ - 1}}$. First, a deep neural network ${N_\theta }$ defined by a set of weights and biases $\theta $ is built. Second, a training dataset ${T}$ is built containing a large amount of labeled data comprising the image plane intensity I and corresponding object plane absorption ${\mu _o}$ and phase ${\varphi _o}$, i.e., ${T} = \{ (I_i^n,\mu _o^n,\varphi _o^n),n = 1, \ldots ,k\} $. Finally, the following optimization problem is solved by, for example, the gradient descent method:

$$N_\theta ^\mathrm{\ast } = \mathop {\textrm{argmin}}\limits_\theta L[{{N_\theta }({I_i^n} ),({\mu_o^n,\varphi_o^n} )} ],\forall ({I_i^n,\mu_o^n,\varphi_o^n} )\in {T}$$
where L is the loss function, which represents the distance between the network outputs ${N_\theta }(I_i^n)$ and their corresponding ground truth $(\mu _o^n,\varphi _o^n)$.
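In miniature, this training problem is ordinary empirical-risk minimization. The toy below, an illustration only, replaces the diffraction operator with a small random linear map and the network with a single linear layer, and fits the inverse operator by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in forward operator (a small, well-conditioned random matrix playing
# the role of the diffraction operator in this toy example)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))

# Labeled training set T: columns are pairs (I^n = A mu^n, mu^n)
mu = rng.standard_normal((4, 256))
I = A @ mu

# "Network": a single linear layer W, trained by full-batch gradient descent
# on the mean squared loss L = mean ||W I^n - mu^n||^2
W = np.zeros((4, 4))
lr = 0.05
for _ in range(3000):
    residual = W @ I - mu                        # network output minus ground truth
    W -= lr * 2.0 * (residual @ I.T) / I.shape[1]
```

At convergence W recovers the inverse operator, W ≈ A⁻¹; a deep network plays the same role for the nonlinear, ill-posed operator ${\mathbf {\cal H}}$.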

The network architecture, datasets, and loss function determine the network outputs. The proposed ICF-PR-Net is specially designed in these three respects to improve the retrieval quality for ICF capsules, as elaborated in the following sections.

2.2 Network architecture

Figure 2 shows the ICF-PR-Net network architecture. First, the poor intensity I is inputted into the flat-field correcting network ${G_{IC}}$ to obtain the corrected intensity ${I_c}$. Second, ${I_c}$ is inputted into the denoising network ${G_{DN}}$ to obtain the denoised intensity ${I_n}$. Third, the initial phase ${\varphi _i}$ is computed by the analytic phase retrieval operator $APR$, and the initial absorption ${\mu _i}$ is computed by the absorption guessing operator $AG$. Fourth, the initial phase ${\varphi _i}$, concatenated channel-wise with ${I_n}$, is inputted into the phase retrieval network ${G_{PR}}$ to obtain the output phase ${\varphi _o}$, and the initial absorption ${\mu _i}$, concatenated channel-wise with ${I_n}$, is inputted into the absorption retrieval network ${G_{AR}}$ to obtain the output absorption ${\mu _o}$.


Fig. 2. Schematic of the ICF-PR-Net network architecture. The red element represents the deep network. The blue element represents the analytic operator.


The analytic phase retrieval operator $APR$ is based on the modified Bronnikov algorithm (MBA) [39]. Under ICF-XPCI parameters, MBA can retrieve the approximate phase distribution for use as an initial input that provides physical prior knowledge to the phase retrieval network ${G_{PR}}$. To provide an initial input to the absorption retrieval network ${G_{AR}}$, we propose the absorption guessing operator $AG$, which estimates the absorption distribution from the initial phase ${\varphi _i}$ and denoised intensity ${I_n}$. Assuming the absorption is zero, we first calculate the intensity of the pure-phase sample using ${I_\varphi } = {\mathbf {\cal H}}\{ \exp (i{\varphi _i})\}$. Next, we calculate the intensity of the pure-absorption sample by ${I_\mu } = {I_i} - \alpha {I_\varphi }$, where $\alpha \in [0,1]$ is the correction factor. Finally, by Beer's law, we obtain the initial absorption as ${\mu _i} ={-} \log ({I_\mu } + \varepsilon )$, where $\varepsilon $ is a regularization parameter.
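The AG operator translates directly into code. The sketch below is an illustration under stated assumptions: a plane-wave propagator stands in for ${\mathbf {\cal H}}$, and the clamp before the logarithm is an implementation choice of ours to guard against negative values from the subtraction (the default values of `alpha` and `eps` are likewise placeholders):

```python
import numpy as np

def propagate_intensity(E_o, wavelength, R, pixel):
    """Plane-wave Fresnel propagation, I = |F^{-1}{H . F{E}}|^2 (Eq. (7))."""
    ny, nx = E_o.shape
    U, V = np.meshgrid(np.fft.fftfreq(nx, d=pixel), np.fft.fftfreq(ny, d=pixel))
    H = np.exp(1j * np.pi * wavelength * R * (U**2 + V**2))
    return np.abs(np.fft.ifft2(H * np.fft.fft2(E_o)))**2

def absorption_guess(I_i, phi_i, wavelength, R, pixel, alpha=0.5, eps=1e-6):
    """Sketch of the AG operator: estimate the initial absorption mu_i
    from the measured intensity I_i and the initial phase phi_i."""
    # Intensity the sample would produce if it were pure phase
    I_phi = propagate_intensity(np.exp(1j * phi_i), wavelength, R, pixel)
    I_mu = I_i - alpha * I_phi          # subtract (part of) the phase-contrast signal
    # Clamping at zero before the log is an implementation choice to guard
    # against negative values produced by the subtraction.
    return -np.log(np.maximum(I_mu, 0.0) + eps)   # Beer's law
```

In the contact limit (R → 0) with zero phase and α = 0, this reduces exactly to Beer's law applied to the measured intensity.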

2.3 Datasets

The ICF-PR-Net datasets should have a large sample size (>1000) to ensure successful network training. However, preparing thousands of ICF capsules with different materials, structures, and shapes is expensive and time-consuming. Moreover, even if massive numbers of capsules were successfully prepared and their phase contrast images captured, it would be difficult to obtain their real phase and absorption images on the object plane as the ground truth for training. Therefore, the ICF-PR-Net datasets are generated by numerical methods.

The ICF-PR-Net datasets comprise object and intensity datasets. The object datasets contain the phase ${\varphi _o}$ and absorption ${\mu _o}$, with the generation process shown in Fig. 3(a). First, a layered spherical shell is constructed to simulate the ideal ICF capsules, and the complex index of refraction of each layer varies randomly around different benchmark values, which are derived from the data of previous ICF capsules [40]. The spherical shell diameter varies randomly from 0.1 mm to 1 mm to simulate different compression rates. Second, three-dimensional random distortions with different wavelengths and amplitudes are applied to the spherical shell to simulate implosion asymmetry and hydrodynamic instability, and random blurring is applied to simulate interface mixing. Third, the phase ${\varphi _o}$ and absorption ${\mu _o}$ at the object plane are calculated by Eq. (1) and Eq. (2).


Fig. 3. Schematic of the generation of (a) phase and absorption datasets and (b) intensity datasets.


The intensity datasets comprise poor, ideal, and pure-noise intensities, with the generation process shown in Fig. 3(b). First, we calculate the ideal complex amplitude at the object plane ${E_o} = \exp ( - {\mu _o} + i{\varphi _o})$, calculate the ideal intensity by diffraction simulation, and add noise to the ideal intensity to obtain the pure-noise intensity. Subsequently, we model the backlight intensity ${I_{bg}}$ with several random Gaussian distributions to calculate the nonuniform complex amplitude at the object plane ${E_o}^{\prime} = {I_{bg}}^{1/2}\exp ( - {\mu _o} + i{\varphi _o})$, and perform diffraction simulation and add noise to obtain the poor intensity. The noise added to the intensity images is a mixture of Gaussian and Poisson noise, with a randomly varying relative ratio.
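The background and noise models described above can be sketched as follows. Blob counts, amplitude ranges, and photon counts are illustrative assumptions, not the paper's dataset parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def nonuniform_background(shape, n_blobs=3, rng=rng):
    """Model the backlight I_bg as a superposition of random Gaussian blobs,
    normalized to unit mean (blob count and amplitudes are example choices)."""
    ny, nx = shape
    y, x = np.mgrid[0:ny, 0:nx]
    bg = np.ones(shape)
    for _ in range(n_blobs):
        cx, cy = rng.uniform(0, nx), rng.uniform(0, ny)
        s = rng.uniform(nx / 8, nx / 2)
        bg += rng.uniform(0.1, 0.5) * np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * s**2))
    return bg / bg.mean()

def mixed_noise(I, gauss_sigma=0.02, photons=2000.0, ratio=0.5, rng=rng):
    """Blend Poisson (photon-counting) and additive Gaussian noise;
    `ratio` sets the relative weight of the Poisson component."""
    poisson = rng.poisson(I * photons) / photons
    gauss = I + gauss_sigma * rng.standard_normal(I.shape)
    return ratio * poisson + (1 - ratio) * gauss
```

Randomizing the mixing ratio and blob layout per sample, as the paper describes, exposes the flat-field and denoising networks to the full range of degradations expected in experiments.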

2.4 Training

The loss function L of ICF-PR-Net comprises the object loss ${L_o}$, i.e., the loss of phase/absorption on the object plane, and the image loss ${L_i}$, i.e., the loss of intensity on the image plane, both of which contain a supervisory loss ${L_{sup}}$ and an adversarial loss ${L_{gan}}$. The supervisory loss ${L_{sup}}$ denotes the distance between the output value and the corresponding ground truth, which in this paper is the ${L_2}$ multiscale loss of M-Net described below. The adversarial loss ${L_{gan}}$ comes from the GAN framework [41], which comprises a generator G and a discriminator D. The generator is used to generate new data, and the discriminator D is used to differentiate between generated and real data. When training the generator, an image z is inputted to the generator, and the output data $G(z)$ is expected to be good enough to fool the discriminator. Therefore, the generator loss can be written as ${L_G} = 0.5 \cdot {[D(G(z)) - 1]^2}$, which is the adversarial loss ${L_{gan}}$ mentioned above.
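The least-squares generator loss above, and its usual discriminator counterpart, are one-liners. The discriminator form shown is the standard least-squares GAN objective, assumed here rather than quoted from the paper:

```python
import numpy as np

def lsgan_generator_loss(d_of_gz):
    """Generator loss L_G = 0.5 * (D(G(z)) - 1)^2, averaged over a batch."""
    return 0.5 * np.mean((np.asarray(d_of_gz) - 1.0)**2)

def lsgan_discriminator_loss(d_real, d_fake):
    """Standard least-squares discriminator counterpart (assumed form):
    real scores are pushed toward 1, fake scores toward 0."""
    return (0.5 * np.mean((np.asarray(d_real) - 1.0)**2)
            + 0.5 * np.mean(np.asarray(d_fake)**2))
```

The generator loss vanishes exactly when the discriminator scores the generated images as real (output 1), which is the fooling condition described in the text.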

Figure 4(a) depicts the training of the flat-field correcting network ${G_{IC}}$. We input the poor intensity I into ${G_{IC}}$ and get the corrected intensity ${I_c}$. In the image space, we compute the supervisory loss ${L_{sup\; i}}$ between ${I_c}$ and the pure noise intensity $I_c^\ast $, input ${I_c}$ into the image discriminator ${D_i}$ to compute the adversarial loss ${L_{gan\; i}}$, and the image loss is ${L_i} = {\alpha _{sup}}\; {L_{sup\; i}} + {\alpha _{gan}}{L_{gan\; i}}$, where $\alpha $ is the loss function weight. In the object space, we input ${I_c},I_c^\ast $ into the analytic phase retrieval operator $APR$ respectively to get the retrieved phases ${\varphi _c},\varphi _c^\ast $ and compute the supervisory loss ${L_{sup\; o}}$ between them, input ${\varphi _c}$ into the object discriminator ${D_o}$ to compute the adversarial loss ${L_{gan\; o\; }}$, and the object loss is ${L_o} = {\alpha _{sup}}\; {L_{sup\; o}} + {\alpha _{gan}}{L_{gan\; o}}$. The total loss is then $L = 0.5 \cdot ({L_o} + {L_i})$. Figure 4(b) depicts the training of the denoising network ${G_{DN}}$, which is the same as the flat-field correcting network ${G_{IC}}$, where ${I_n}$ is the denoised image, $I_n^\ast $ is the ideal image, and ${\varphi _n},\varphi _n^\ast $ are the retrieved phases obtained by inputting ${I_n},I_n^\ast $ to $APR$.


Fig. 4. Schematic of the training of (a) the flat-field correction network and (b) denoising network. The red element represents the generator to be trained. The blue element represents the analytic operator. The green element represents the discriminator and the adversarial loss. The pink element represents the supervisory loss.


Figure 5 depicts the cooperative training of the phase retrieval network ${G_{PR}}$ and the absorption retrieval network ${G_{AR}}$. We obtain the initial phase ${\varphi _i}$ by the analytic phase retrieval operator $APR$ and the initial absorption ${\mu _i}$ by the absorption guessing operator $AG$. Both ${\varphi _i}$ and ${\mu _i}$ are concatenated channel-wise with ${I_n}$ and inputted into ${G_{PR}}$ and ${G_{AR}}$, respectively, to obtain the retrieved phase ${\varphi _o}$ and absorption ${\mu _o}$. In the object space, we compute the supervisory loss ${L_{sup\; \varphi }}$ between ${\varphi _o}$ and the true phase $\varphi _o^\ast $, input ${\varphi _o}$ into the phase discriminator ${D_\varphi }$ to obtain the adversarial loss ${L_{gan\; \varphi }}$, and form the phase loss ${L_\varphi } = {\alpha _{sup}}{L_{sup\; \varphi }} + {\alpha _{gan}}{L_{gan\; \varphi }}$. The absorption loss ${L_\mu }$ is calculated in the same way, and the object loss is then ${L_o} = 0.5 \cdot ({L_\varphi } + {L_\mu })$. In the image space, we obtain the retrieved intensities ${I_{o1}},{I_{o2}},{I_{o3}}$ by inputting ${\varphi _o}$ and ${\mu _o}$ into three diffraction operators with different parameters $\{ {{\mathbf {\cal H}}_1},{{\mathbf {\cal H}}_2},{{\mathbf {\cal H}}_3}\} $, calculate the supervisory loss of each retrieved intensity against its corresponding true intensity, and take their weighted sum to obtain ${L_{sup\; i}}$. ${{\mathbf {\cal H}}_1}$ is matched to the experimental parameters of the image and has the greatest weight (0.7 in this paper); the remaining two weights are 0.15 each. We input ${I_{o1}}$ into the image discriminator ${D_i}$ to obtain the adversarial loss ${L_{gan\; i}}$, and the image loss is ${L_i} = {\alpha _{sup}}{L_{sup\; i}} + {\alpha _{gan}}{L_{gan\; i}}$. The total loss is then $L = 0.5 \cdot ({L_o} + {L_i})$.
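The weighted multiparameter intensity loss can be sketched directly from the description: 0.7 for the experiment-matched operator and 0.15 for each of the other two. This is a minimal sketch, not the authors' PyTorch code:

```python
import numpy as np

def multiparameter_intensity_loss(retrieved, truths, weights=(0.7, 0.15, 0.15)):
    """Weighted sum of mean-squared intensity losses over the three diffraction
    operators H_1, H_2, H_3; H_1 (matched to the experiment) carries weight 0.7."""
    return sum(w * np.mean((r - t)**2)
               for w, r, t in zip(weights, retrieved, truths))
```

Penalizing the re-diffracted intensities under several parameter sets discourages retrievals that only reproduce the measurement at the nominal geometry, adding robustness to parameter errors.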


Fig. 5. Schematic of cooperative training of the phase and absorption retrieval networks. The red element represents the generator to be trained. The blue element represents the analytic operator. The green element represents the discriminator and the adversarial loss. The pink element represents the supervisory loss.


The ICF-PR-Net training is innovative in that it incorporates a phase loss in the flat-field correcting/denoising networks and a multiparameter intensity loss in the phase/absorption retrieval networks. This ensures a consistent loss function composition across all networks in ICF-PR-Net, i.e., object loss + image loss, adding adequate physical prior knowledge to network training.

2.5 Generator and discriminator

The generators (${G_{IC}}$, ${G_{DN}}$, ${G_{AR}}$ and ${G_{PR}}$) are based on M-Net [42] with some adjustments and consist of an encoder and a decoder. The encoder is used for image feature extraction, and the decoder for feature fusion and image reconstruction. The M-Net has the following characteristics: (1) Multi-scale input, i.e., downsampling the input image and inputting it into different layers of the encoder, which forms receptive fields at different scales so that the encoder can better notice image features at different scales. (2) Multi-scale output, i.e., upsampling the outputs of different decoder layers to obtain multiple outputs ${I^{(m)}}$, which enables the decoder to balance the overall and detailed quality of the output image. (3) Multi-scale loss function, i.e., training the network with the multi-scale loss ${L_{ms}}$, which can effectively alleviate the vanishing gradient problem and improve the network's performance.

$${L_{ms}} = \mathop \sum \limits_{m = 1}^M {\alpha _m}{({{I^{(m )}} - {I_0}} )^2}$$
where ${I_0}$ is the ground truth and ${\alpha _m}$ is the multiscale weight. The above input–output structure and loss function design give M-Net better multi-scale performance in phase retrieval than traditional convolutional neural networks such as U-Net, increasing the retrieval accuracy of the overall phase distribution as well as of the detailed features at the inner interfaces of the ICF capsules. The structure of the M-Net is shown in Fig. 6, where the gray arrow indicates the basic convolution unit (3 × 3 convolution + batch normalization + ReLU); the green arrow represents the residual unit (1 × 1 convolution + copy and add); the red arrow indicates max pooling; the orange arrow indicates the upsampling convolution unit (upsampling + basic convolution unit); the pink arrow indicates copy + channel-wise concatenation; and the purple arrow represents 1 × 1 convolution.
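The multi-scale loss translates directly into code. The sketch below assumes the multi-scale outputs have already been upsampled to the ground-truth resolution, with the squared difference averaged per pixel:

```python
import numpy as np

def multiscale_loss(outputs, target, weights):
    """L_ms = sum_m alpha_m * mean((I^(m) - I_0)^2), summed over the M
    decoder scales; outputs are assumed upsampled to the target resolution."""
    return sum(a * np.mean((I_m - target)**2)
               for a, I_m in zip(weights, outputs))
```

Supervising every decoder scale gives each intermediate layer a direct gradient signal, which is how the multi-scale loss alleviates vanishing gradients.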


Fig. 6. Schematic of the generator adjusted by M-Net.


The discriminators (${D_i}$, ${D_o}$, ${D_\varphi }$ and ${D_\mu }$) are PatchGAN discriminators [43], with the structure shown in Fig. 7. A traditional GAN discriminator outputs only one value, an overall evaluation of the input image. In the PatchGAN discriminator, by contrast, the image goes through a series of convolution layers and outputs a matrix, each element of which scores a patch of the input image. Evaluating an image with a matrix rather than a single value offers a more accurate and comprehensive evaluation, which helps the generator output higher-quality images.
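The size of the patch-score matrix follows from the strided-convolution output formula. The layer counts and kernel sizes below are the common pix2pix PatchGAN defaults, assumed here for illustration rather than taken from the paper:

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Output length of one conv layer: floor((size + 2*pad - kernel)/stride) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def patchgan_output_shape(size):
    """Side length of the PatchGAN score matrix for a square input, assuming
    the common pix2pix layout: three stride-2 convolutions followed by two
    stride-1 convolutions, all with 4 x 4 kernels and padding 1."""
    for _ in range(3):
        size = conv_out(size, 4, 2, 1)   # stride-2 downsampling stages
    for _ in range(2):
        size = conv_out(size, 4, 1, 1)   # stride-1 stages (last maps to 1 channel)
    return size
```

Under these assumptions a 256 × 256 input yields a 30 × 30 score matrix, so each element judges one overlapping patch of the input rather than the image as a whole.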


Fig. 7. Schematic of the PatchGAN discriminator.


3. Validation results

3.1 Comparison with other methods

In this section, we compare the performance of ICF-PR-Net with that of other typical phase retrieval methods via numerical experiments. For the datasets, the sample sizes of the training and validation sets were 6400 and 1600, respectively, and all images were 256 × 256 pixels. For the diffraction simulation, we assumed an X-ray source with an energy of 4 keV and a size of 5 µm; the propagation distance $R = {R_2}/M$ was 20 cm, and the pixel size was 10 µm. For the training, we set the loss weights ${\alpha _{sup}} = 0.95$ and ${\alpha _{gan}} = 0.05$, and we trained all networks for 40 epochs using the Adam optimizer with a batch size of six. The initial learning rates of the generator and discriminator were 0.0001 and 0.0004, respectively, and were halved every 5 epochs. Our method was implemented with PyTorch in Python on a machine with an Intel i7-12700H CPU, 16.0 GB RAM, and an Nvidia RTX 3070 Ti GPU. Training was performed on the GPU; each epoch took about 30 min for the phase/absorption retrieval networks and about 24 min for the other networks.

The phase contrast image of a simulated ICF capsule was obtained by numerical diffraction (Fig. 8(a)). Phase and absorption retrieval were performed using different methods: the Paganin-TIE method [44] (Fig. 8(b)), a deterministic method based on a linear formulation obtained by solving the TIE under the assumption of a homogeneous sample; the NLTikh method mentioned in Section 1 (Fig. 8(c)); the end-to-end U-Net method [45] (Fig. 8(d)), the simplest deep phase retrieval approach, which inputs the intensity into a U-Net and outputs the retrieved phase and absorption; and the U-Net-BP method (Fig. 8(e)), which differs from the U-Net method (see Fig. 9) in that it numerically back-propagates the intensity to the object plane to obtain phase and absorption estimates and takes them as network input. This input structure can provide the network with physical prior knowledge of image formation to improve retrieval accuracy and is popular in many current DL phase retrieval methods, as mentioned in Section 1. Our proposed ICF-PR-Net results are shown in Fig. 8(f), with a zoom-in of the red box region in Fig. 8(h). Figure 8(g) shows the ground truth of the phase and absorption. The Paganin-TIE and NLTikh methods were implemented using the HoloTomo Toolbox [46] in Matlab. The U-Net and U-Net-BP methods were based on PyTorch; the dataset, network size, training parameters, and training device were the same as for ICF-PR-Net, and each epoch took about 24 min.


Fig. 8. Comparison of ICF-PR-Net with other typical phase retrieval methods. (a) The simulated phase contrast intensity of an ICF capsule. (b–f) Phases and absorptions retrieved by different methods including our ICF-PR-Net. (g) Ground truth. (h) Zoom-in area of the red box region of the intensity and retrieved phase by our method. Below each image is the 1-D distribution along the blue dashed line in (a).



Fig. 9. Schematic of the difference between the U-Net and U-Net-BP


Although the Paganin-TIE results can roughly show the phase distribution, the nonuniform background and noise in the input image induce serious gradient background errors and cloud-like artifacts in the retrieved phase. The NLTikh results present a relatively weak background error owing to appropriate regularization but suffer from worse cloud-like artifacts. Both the U-Net and U-Net-BP results show white spot-like artifacts in the center and halo-like artifacts at the edges, and the internal details of the capsule are barely visible. The quality of U-Net-BP improves only marginally over U-Net, highlighting the insignificant role of back-propagation inputs in ICF-XPCI. ICF-PR-Net markedly outperforms the others: it accurately retrieves the overall phase and absorption distributions while precisely reconstructing the internal interfaces of the ICF capsule.

We chose the root mean square error (RMSE), structural similarity index (SSIM) [47], and universal quality index (UQI) [48] as the quantitative evaluation metrics of retrieval quality. The ICF-PR-Net markedly outperforms the other methods in all evaluation metrics (Table 1).
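RMSE and UQI have simple closed forms; the sketch below implements both in NumPy (the UQI expression is Wang and Bovik's definition; SSIM is omitted since it is typically computed with a windowed implementation such as scikit-image's):

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((x - y)**2)))

def uqi(x, y):
    """Universal quality index (Wang & Bovik): the product of correlation,
    luminance, and contrast terms; equals 1 only for identical images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2)))
```

Lower RMSE and higher SSIM/UQI indicate better agreement with the ground truth, which is the sense in which Table 1 is read.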


Table 1. Quantitative Evaluation of ICF-PR-Net and Other Typical Phase Retrieval Methods

3.2 Performance under different conditions

In this section, we examine the performance of ICF-PR-Net under different conditions to analyze its stability and robustness. First, we changed the X-ray source size: increasing the source size worsens spatial coherence, resulting in thicker and weaker diffraction fringes. Second, we changed the image SNR: decreasing the SNR yields stronger image noise, resulting in a weaker effective phase contrast signal. Third, we changed the background value, defined from the image-plane intensity I when imaging without a sample as $({I_{max}} - {I_{min}})/\overline I $, where ${I_{max}}$ is the maximum value, ${I_{min}}$ is the minimum value, and $\overline I $ is the mean value. At large background values, the nonuniform background markedly affects the effective phase contrast signal.

Note that we retrained the network when the source size was increased owing to the change in the diffraction parameters and datasets. In addition, to minimize the analysis error, the networks were trained five times, and then ten different simulated ICF capsule phase contrast images were inputted into the trained networks. We calculated the RMSE and SSIM of all the 5 × 10 outputs, with their average values serving as the final evaluation metrics. To save time, the networks were trained for just 20 epochs, and other parameters remained unchanged. Figure 10 shows part of the typical retrieval results, while Fig. 11 depicts the complete quantitative analysis.


Fig. 10. Some typical retrieval results by the ICF-PR-Net in (a)(b)(c) different source sizes, (d)(e)(f) different image SNRs, and (g)(h)(i) different background values compared with (j) ground truth.



Fig. 11. Quantitative analysis of retrieval quality in (a) different source sizes, (b) different image SNRs, and (c) different background values. The red line represents the absorption error, the blue line represents the phase error, the solid line with circle points represents the RMSE, and the dashed line with triangle points represents the SSIM.


In Fig. 10(a)–(c) and Fig. 11(a), the light source intensity is kept constant and the noise intensity is adjusted in the diffraction simulation so that the image SNR stays fixed; this isolates the effects of image blurring and of the relative change between phase and absorption contrast as the source size increases. In real experiments, however, increasing the source size at constant source power decreases the incident light intensity and reduces the overall image contrast; if the detector exposure time is unchanged, the image SNR drops rapidly.

From Fig. 10 and Fig. 11 we draw the following conclusions. First, the absorption error decreases and the phase error increases as the source size grows. We speculate that, at constant light source intensity, a larger source increases the absorption contrast more than the phase contrast, facilitating absorption retrieval, while blurring the phase contrast fringes and thus reducing the phase retrieval resolution. Second, both a decreased SNR and an increased background value raise the phase and absorption errors: the effective signal is overwhelmed by noise when the SNR decreases and corrupted by the nonuniform background when the background value increases. Overall, to improve the retrieval quality of ICF-PR-Net, we should reduce the source size to less than 10 µm, improve the image SNR, and reduce the background value.

3.3 Difference between retrieved phase and absorption

To show the difference between phase and absorption retrieval in ICF capsule diagnosis, we simulated the X-ray phase contrast image of a static cryogenic capsule before an ICF implosion experiment (Fig. 12(g)) and retrieved the phase and absorption images using different methods (Fig. 12(a–d)) and our ICF-PR-Net, with the same simulation and retrieval parameters as in Fig. 8. In the retrieved phase, the fuel ice layer inside the ablator layer of the capsule is clearly visible, whereas it is invisible in the retrieved absorption. The reason is that the fuel ice layer is made of low-atomic-number materials such as deuterium or tritium, so it absorbs X-rays very weakly but modulates their phase much more strongly. The 1-D distribution curves along the red dashed line show that the phase retrieved by ICF-PR-Net is close to the ground truth and exhibits an abrupt change at the interface between the fuel ice and gas, whereas the retrieved absorption does not. The other phase retrieval methods deviate more from the ground truth and have poor contrast at this interface.


Fig. 12. Retrieved phase and absorption of a local region of a simulated static cryogenic ICF capsule by (a–d) different phase retrieval methods and (e) the ICF-PR-Net, compared with (f) the ground truth, including the 1-D distribution along the red dashed line. (g) The numerically generated X-ray phase contrast intensity.


4. Experimental results

We applied ICF-PR-Net to experimental X-ray phase contrast images of different static ICF capsules recorded by the Micro-XCT device. Figure 13 shows the imaging system. The X-ray source was a microfocus tube with a W anode operated at 40 kV, with a focal spot size of 5 µm. The detector was a charge-coupled device (CCD) with 2048 × 2048 pixels, and we downsampled the captured raw images to 256 × 256 pixels before inputting them into ICF-PR-Net. The source-to-object distance was 127 cm and the object-to-image distance 25 cm, giving an effective propagation distance of approximately 20.88 cm.
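The quoted propagation distance follows from standard cone-beam geometry, as this short check shows (the formulas are the usual magnification relations, consistent with Eq. (8)):

```python
# Cone-beam geometry of the Micro-XCT setup: with source-to-object distance R1
# and object-to-image distance R2, the magnification is M = (R1 + R2) / R1 and
# the effective propagation distance is R_eff = R2 / M.
R1 = 127.0   # cm, source to object
R2 = 25.0    # cm, object to image
M = (R1 + R2) / R1      # ~1.197
R_eff = R2 / M          # ~20.89 cm, matching the stated ~20.88 cm to rounding
assert abs(M - 1.197) < 0.001
assert abs(R_eff - 20.89) < 0.01
```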


Fig. 13. Schematic of the X-ray phase contrast imaging system.


Three types of capsules were imaged, and their specifications are illustrated in Fig. 14, including (a) capsule A, a single-layer spherical shell capsule made of hydrocarbon polymer (CH) with a diameter of 310 µm and thickness of 20 µm; (b) capsule B, a single-layer spherical shell capsule made of CH with a diameter of 430 µm and thickness of 7 µm; (c) capsule C, a triple-layer spherical shell capsule with a diameter of 370 µm, which has an outer layer made of CH with a thickness of 6 µm, a middle layer made of polyvinyl alcohol (PVA) with a thickness of 7 µm, and an inner layer made of polystyrene (PS) with a thickness of 6 µm.


Fig. 14. Schematic of the structure of three types of capsules being imaged.


Figure 15 shows the experimental phase contrast images and the absorption/phase retrieval results. The image sizes of Fig. 15(a) and (b–c) are 573.44 and 593.92 µm, respectively. As Fig. 15(a–c) shows, ICF-PR-Net retrieves the phases and absorptions of all three capsules: the retrieval quality near the spherical shell is good, whereas inside the shell the retrieved phase and absorption may show weak ring-like or cloud-like artifacts. For capsule C, the three-layer shell structure is not visible in the retrieved phase or absorption owing to the insufficient resolution of the 256 × 256 pixel image, which prevents the network from retrieving the tiny multilayer structure. We therefore cropped out the upper left 1/8 region of capsule C's original image, downsampled it to 256 × 256 pixels, and input it into the retrained network. In the results, shown in Fig. 15(d), the three-layer structure is clearly visible in both the retrieved phase and absorption. Figure 15(e) further compares the 1-D distributions of the original intensity and the retrieved phase in Fig. 15(d). The capsule interfaces are easy to locate in the retrieved phase but difficult to locate in the original intensity owing to signal fluctuations caused by strong noise and bright–dark diffraction fringes. This demonstrates the necessity of ICF-PR-Net's phase retrieval in the XPCI diagnosis of ICF capsules.


Fig. 15. Retrieval results by ICF-PR-Net of the capsules’ experimental images, including (a) capsule A, (b) capsule B, (c) capsule C, and (d) upper left 1/8 region of capsule C. Below each image is the 1-D distribution along the blue dashed line in the intensity images. (e) Comparison of the 1-D distribution of the intensity and phase of (d).


Figure 16 compares ICF-PR-Net with other phase retrieval methods on the experimental images of capsule C (Fig. 15(c)) and the upper left 1/8 region of capsule C (Fig. 15(d)), showing the 2-D retrieved phase images and the 1-D distribution along the red dashed line. Both the 1-D and 2-D data show that the phase retrieved by ICF-PR-Net is superior to that of the other methods: it depicts the capsule structure more accurately, the interfaces of the capsule's shell are crisp and clear, and the result is almost free of noise, artifacts, and nonuniformity.


Fig. 16. Comparison of the phase retrieval results of ICF-PR-Net with other phase retrieval methods for the experimental image of (a) capsule C and (b) upper left 1/8 region of capsule C, including the retrieved phase images and the 1-D distributions along the red dashed line.


To quantitatively compare the retrieval accuracy of ICF-PR-Net with that of other phase retrieval methods on experimental images lacking ground-truth phase and absorption, we simulated intensity images from the retrieved phases and absorptions and calculated the errors between the simulated and experimental images (Table 2). Because of the uneven backlight, noise, and artifacts in the experimental intensity images, these intensity errors cannot directly represent the retrieval accuracy, but they can be used to compare the accuracy of different methods; by this measure, ICF-PR-Net is the most accurate. Note that the Paganin-TIE and NLTikh methods cannot retrieve the absorption and thus cannot be used to simulate the intensity, so their intensity errors cannot be calculated.


Table 2. Quantitative Evaluation of ICF-PR-Net and Other Methods for the Experimental Image
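The re-simulation step above can be sketched as follows. This is a minimal monochromatic, plane-wave sketch that propagates an exit wave with the Fresnel transfer function; the cone-beam magnification and source blurring of Eq. (8) are omitted, the convention $E_o = \exp(-\mu_o + i\varphi_o)$ and the numeric parameters are our illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def fresnel_intensity(phase, absorption, wavelength, distance, pixel_size):
    """Propagate the exit wave E_o = exp(-mu + i*phi) over `distance`
    using the Fresnel transfer function and return the intensity |E_i|^2."""
    ny, nx = phase.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    E_o = np.exp(-absorption + 1j * phase)
    E_i = np.fft.ifft2(H * np.fft.fft2(E_o))
    return np.abs(E_i) ** 2

# Sanity check with illustrative parameters: a flat (empty) object propagates
# to a flat unit intensity.
I_sim = fresnel_intensity(np.zeros((64, 64)), np.zeros((64, 64)),
                          wavelength=3.1e-11, distance=0.209, pixel_size=2.3e-6)
assert np.allclose(I_sim, 1.0)
```

The intensity error reported in Table 2 would then be a pixel-wise error (e.g., RMSE) between `I_sim` and the normalized experimental image.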

Based on the above absorption and phase retrieval results, we roughly characterize the thickness and diameter of the three capsules and compare them with the design values. From Fig. 15(a–c), we extract the 1-D absorption/phase curves through the capsule diameter in the horizontal, vertical, and 45° directions and locate the interfaces at the abrupt-change points to infer the diameter and thickness. Because the bottom of each capsule was connected to the sample holder, we could not determine the exact location of its bottom outer boundary. Therefore, for each capsule, we measured the diameter in three directions, excluding the vertical (Table 3), and the thickness at seven positions, excluding the bottom (Fig. 17). From Fig. 15(d), we extract the 1-D absorption/phase curves in the 45° direction and measure the thicknesses of the three layers of capsule C (Table 4).
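The abrupt-change criterion for locating interfaces can be sketched as below; the steepest-rise/steepest-fall rule and the synthetic step profile are our illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def interface_indices(profile):
    """Locate two shell interfaces along a 1-D phase profile as the pixels of
    steepest increase and steepest decrease (the abrupt-change points)."""
    d = np.diff(profile)
    return int(np.argmax(d)), int(np.argmin(d))

# Synthetic shell cross-section: phase jumps up at pixel 40 and down at pixel 90.
profile = np.zeros(128)
profile[40:90] = 1.0
rise, fall = interface_indices(profile)
pixel = 2.24                          # µm per pixel, as in Fig. 15(a)
thickness_um = (fall - rise) * pixel  # distance between the two interfaces
assert (rise, fall) == (39, 89)
assert abs(thickness_um - 112.0) < 1e-9
```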


Fig. 17. Thickness distribution of the three capsules measured by the retrieved absorption/phase in Fig. 15(a–c), including (a) capsule A, (b) capsule B, and (c) capsule C. The red line represents the thickness measured by the retrieved absorption, the blue line represents the thickness measured by the retrieved phase, and the black dashed line represents the design values.



Table 3. Diameter of the Three Capsules Measured by the Retrieved Absorption/Phase Compared with the Design Values


Table 4. Thickness of Each Layer of Capsule C Measured by the Retrieved Absorption/Phase

The measured values roughly agree with the design values, with the values obtained from absorption and phase being equal in many cases. Because the layer regions span only about 2–10 pixels in the absorption and phase images, many identical values occur. In addition, a non-negligible error exists between the measured and design values, especially at small scales (e.g., the shell thickness of capsule B and the individual layers of capsule C). The reasons are as follows. First, imperfect capsule preparation causes the real thickness to deviate from the design values and to vary with location. Second, the relatively large pixel size of the absorption/phase images introduces a large error: for 256 × 256 pixel images, a ±1 pixel uncertainty in locating the capsule interfaces implies errors of ±2.24 µm in the horizontal/vertical directions and ±3.17 µm in the 45° direction for Fig. 15(a), ±2.31 µm and ±3.28 µm respectively for Fig. 15(b–c), and ±0.82 µm in the 45° direction for Fig. 15(d). Hence, precise measurement of layer thicknesses and other small scales in ICF capsules from the retrieved absorption/phase requires extending our method to images with higher pixel counts, up to the limiting resolution of the images.
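The quoted position uncertainties follow directly from the stated image sizes; a quick check (function name ours, Fig. 15(a) and (b–c) only):

```python
import math

def pixel_errors(field_of_view_um, n_pixels=256):
    """Length error from a ±1 pixel interface-location uncertainty:
    one pixel horizontally/vertically, sqrt(2) pixels along the 45° diagonal."""
    p = field_of_view_um / n_pixels
    return p, p * math.sqrt(2)

h_a, d_a = pixel_errors(573.44)    # Fig. 15(a), 573.44 µm field of view
h_bc, d_bc = pixel_errors(593.92)  # Fig. 15(b-c), 593.92 µm field of view
# Agrees with the quoted ±2.24/±3.17 µm and ±2.31/±3.28 µm to within rounding.
assert abs(h_a - 2.24) < 0.01 and abs(d_a - 3.17) < 0.01
assert abs(h_bc - 2.31) < 0.02 and abs(d_bc - 3.28) < 0.01
```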

5. Conclusion

We introduced ICF-PR-Net, a deep learning-based phase retrieval method for the XPCI of ICF capsules. The phase contrast image first passes through the flat-field correction and denoising networks, after which initial phase and absorption images are obtained by analytic operators. These images, together with the intensity, are input into the phase and absorption retrieval networks to obtain the retrieved phase and absorption. This architecture reduces the negative influence of nonuniform background and noise on retrieval and provides physical prior knowledge to the network under the weak diffraction conditions of ICF-XPCI. We built the object and intensity datasets based on the shape features of ICF capsules and introduced a phase loss into the training of the flat-field correction/denoising networks and a multiparameter intensity loss into the training of the phase/absorption retrieval networks, thereby equipping all networks with a loss function comprising object and image losses that adds image formation physics to the training. Numerical and experimental results showed that ICF-PR-Net accurately retrieves the phase and absorption of an ICF capsule from a single inline X-ray phase contrast image, especially the weak phase information of the inner interface, outperforming other phase retrieval methods, and maintains high retrieval ability under different conditions.

In the future, we will quantitatively analyze the accuracy of ICF-PR-Net under different experimental parameters, image qualities, and capsule shapes in more detail to further verify its robustness. We will improve ICF-PR-Net, for example by introducing image stitching to handle high-pixel-count images. We will also perform a simple 3-D electron density analysis of the capsules from the 2-D retrieved phase using methods such as the inverse Abel transform, and apply ICF-PR-Net to the characterization of the solid deuterium layer of cryogenic ICF capsules. Finally, we will conduct XPCI experiments on real imploding ICF capsules at the Shenguang Series Laser Facility and use ICF-PR-Net to retrieve the phase for analysis of the electron density distribution and internal interface structure of the capsules. We believe ICF-PR-Net offers a new perspective for high-quality diagnosis of imploding capsules, helping to address problems that hinder successful ignition, such as hydrodynamic instability growth.

Funding

National Key Research and Development Program of China (2023YFA1608400); Foundation of Science and Technology on Near-Surface Detection Laboratory (6142414220607); National Natural Science Foundation of China (12075221, 12235014).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. C. Mayo, A. W. Stevenson, and S. W. Wilkins, “In-Line Phase-Contrast X-ray Imaging and Tomography for Materials Science,” Materials 5(12), 937–965 (2012). [CrossRef]  

2. A. Bravin, P. Coan, and P. Suortti, “X-ray phase-contrast imaging: from pre-clinical applications towards clinics,” Phys. Med. Biol. 58(1), R1–R35 (2013). [CrossRef]  

3. D. S. Montgomery, “Invited article: X-ray phase contrast imaging in inertial confinement fusion and high energy density research,” Rev. Sci. Instrum. 94(2), 021103 (2023). [CrossRef]  

4. K. Wang, F. Dai, W. Lin, et al., “Characterization of the solid deuterium layer in the inertial confinement fusion cryogenic target without the requirement of cryocooler deactivation,” Fusion Eng. Des. 180, 113160 (2022). [CrossRef]  

5. E. L. Dewald, O. L. Landen, L. Masse, et al., “X-ray streaked refraction enhanced radiography for inferring inflight density gradients in ICF capsule implosions,” Rev. Sci. Instrum. 89(10), 10G108 (2018). [CrossRef]  

6. E. L. Dewald, O. L. Landen, D. Ho, et al., “Direct observation of density gradients in ICF capsule implosions via streaked Refraction Enhanced Radiography (RER),” High Energy Density Phys. 36, 100795 (2020). [CrossRef]  

7. A. Do, C. R. Weber, E. L. Dewald, et al., “Direct Measurement of Ice-Ablator Interface Motion for Instability Mitigation in Indirect Drive ICF Implosions,” Phys. Rev. Lett. 129(21), 215003 (2022). [CrossRef]  

8. M. R. Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. 73(11), 1434 (1983). [CrossRef]  

9. T. E. Gureyev, A. Roberts, and K. A. Nugent, “Phase retrieval with the transport-of-intensity equation: matrix solution with use of Zernike polynomials,” J. Opt. Soc. Am. A 12(9), 1932 (1995). [CrossRef]  

10. T. E. Gureyev and K. A. Nugent, “Rapid quantitative phase imaging using the transport of intensity equation,” Opt. Commun. 133(1-6), 339–346 (1997). [CrossRef]  

11. D. Paganin and K. A. Nugent, “Noninterferometric Phase Imaging with Partially Coherent Light,” Phys. Rev. Lett. 80(12), 2586–2589 (1998). [CrossRef]  

12. J. Sun, C. Zuo, and Q. Chen, “Iterative optimum frequency combination method for high efficiency phase imaging of absorptive objects based on phase transfer function,” Opt. Express 23(21), 28031 (2015). [CrossRef]  

13. J. Zhang, Q. Chen, J. Sun, et al., “On a universal solution to the transport-of-intensity equation,” Opt. Lett. 45(13), 3649 (2020). [CrossRef]  

14. P. Cloetens, W. Ludwig, J. Baruchel, et al., “Holotomography: Quantitative phase tomography with micrometer resolution using hard synchrotron radiation x rays,” Appl. Phys. Lett. 75(19), 2912–2914 (1999). [CrossRef]  

15. X. Wu, H. Liu, and A. Yan, “X-ray phase-attenuation duality and phase retrieval,” Opt. Lett. 30(4), 379–381 (2005). [CrossRef]  

16. S. Huhn, L. M. Lohse, J. Lucht, et al., “Fast algorithms for nonlinear and constrained phase retrieval in near-field X-ray holography based on Tikhonov regularization,” Opt. Express 30(18), 32871 (2022). [CrossRef]  

17. R. W. Gerchberg and W. O. Saxton, “A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures,” Optik 35, 237–250 (1972).

18. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758 (1982). [CrossRef]  

19. D. R. Luke, “Relaxed Averaged Alternating Reflections for Diffraction Imaging,” Inverse Problems 21(1), 37–50 (2005). [CrossRef]  

20. J. A. Rodriguez, R. Xu, C.-C. Chen, et al., “Oversampling smoothness: an effective algorithm for phase retrieval of noisy diffraction intensities,” J Appl Crystallogr 46(2), 312–318 (2013). [CrossRef]  

21. C. A. Metzler, A. Maleki, and R. G. Baraniuk, “BM3D-PRGAMP: Compressive phase retrieval based on BM3D denoising,” in 2016 IEEE International Conference on Image Processing (ICIP) (IEEE, 2016), pp. 2504–2508.

22. I. Waldspurger, A. d’Aspremont, and S. Mallat, “Phase recovery, MaxCut and complex semidefinite programming,” Math. Program. 149(1-2), 47–81 (2015). [CrossRef]  

23. E. Candes, X. Li, and M. Soltanolkotabi, “Phase Retrieval via Wirtinger Flow: Theory and Algorithms,” IEEE Trans. Inform. Theory 61(4), 1985–2007 (2015). [CrossRef]  

24. V. Katkovnik, “Sparse phase retrieval from noisy data: variational formulation and algorithms,” in Imaging and Applied Optics 2016 (OSA, 2016), p. JT3A.42.

25. M. Beleggia, M. A. Schofield, V. V. Volkov, et al., “On the transport of intensity technique for phase retrieval,” Ultramicroscopy 102(1), 37–49 (2004). [CrossRef]  

26. K. Ishizuka and B. Allman, “Phase measurement of atomic resolution image using transport of intensity equation,” Microscopy 54(3), 191–197 (2005). [CrossRef]  

27. A. Pogany, D. Gao, and S. W. Wilkins, “Contrast and resolution in imaging with a microfocus x-ray source,” Rev. Sci. Instrum. 68(7), 2774–2782 (1997). [CrossRef]  

28. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, et al., “Learning approach to optical tomography,” Optica 2(6), 517 (2015). [CrossRef]  

29. B. Zhu, J. Z. Liu, S. F. Cauley, et al., “Image reconstruction by domain-transform manifold learning,” Nature 555(7697), 487–492 (2018). [CrossRef]  

30. T. Liu, K. De Haan, Y. Rivenson, et al., “Deep learning-based super-resolution in coherent imaging systems,” Sci. Rep. 9(1), 3926 (2019). [CrossRef]  

31. J. Wu, K. Liu, X. Sui, et al., “High-speed computer-generated holography using an autoencoder-based deep neural network,” Opt. Lett. 46(12), 2908 (2021). [CrossRef]  

32. Y. Rivenson, Y. Wu, and A. Ozcan, “Deep learning in holography and coherent imaging,” Light: Sci. Appl. 8(1), 85 (2019). [CrossRef]  

33. A. Sinha, J. Lee, S. Li, et al., “Lensless computational imaging through deep learning,” Optica 4(9), 1117 (2017). [CrossRef]  

34. Y. Rivenson, Y. Zhang, H. Günaydın, et al., “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2017). [CrossRef]  

35. A. Goy, K. Arthur, S. Li, et al., “Low Photon Count Phase Retrieval Using Deep Learning,” Phys. Rev. Lett. 121(24), 243902 (2018). [CrossRef]  

36. Y. Wu, Y. Rivenson, Y. Zhang, et al., “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704 (2018). [CrossRef]  

37. F. Wang, Y. Bian, H. Wang, et al., “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]  

38. Y. Zhang, M. A. Noack, P. Vagovic, et al., “PhaseGAN: A deep-learning phase-retrieval approach for unpaired datasets,” Opt. Express 29(13), 19593 (2021). [CrossRef]  

39. A. Groso, R. Abela, and M. Stampanoni, “Implementation of a fast method for high resolution phase contrast tomography,” Opt. Express 14(18), 8103 (2006). [CrossRef]  

40. B. J. Kozioziemski, J. A. Koch, A. Barty, et al., “Quantitative characterization of inertial confinement fusion capsules using phase contrast enhanced x-ray imaging,” J. Appl. Phys. 97(6), 063103 (2005). [CrossRef]  

41. I. Goodfellow, J. Pouget-Abadie, M. Mirza, et al., “Generative adversarial networks,” Commun. ACM 63(11), 139–144 (2020). [CrossRef]  

42. H. Fu, J. Cheng, Y. Xu, et al., “Joint Optic Disc and Cup Segmentation Based on Multi-Label Deep Network and Polar Transformation,” IEEE Trans. Med. Imaging 37(7), 1597–1605 (2018). [CrossRef]  

43. J. Y. Zhu, T. Park, P. Isola, et al., “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks,” in 2017 IEEE International Conference on Computer Vision (ICCV) (IEEE, 2017), pp. 2242–2251.

44. D. Paganin, S. C. Mayo, T. E. Gureyev, et al., “Simultaneous phase and amplitude extraction from a single defocused image of a homogeneous object,” J. Microsc. (Oxford, U. K.) 206(1), 33–40 (2002). [CrossRef]  

45. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Lecture Notes in Computer Science (Springer International Publishing, 2015), 9351, pp. 234–241.

46. L. M. Lohse, A.-L. Robisch, M. Töpperwien, et al., “A phase-retrieval toolbox for X-ray holography and tomography,” J. Synchrotron Radiat. 27(3), 852–859 (2020). [CrossRef]  

47. Z. Wang, A. C. Bovik, H. R. Sheikh, et al., “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

48. Z. Wang and A. C. Bovik, “A universal image quality index,” IEEE Signal Process. Lett. 9(3), 81–84 (2002). [CrossRef]  






Equations (13)

$$\varphi_o(x,y) = \frac{2\pi}{\lambda}\int \delta(x,y,z)\,dz \tag{1}$$

$$\mu_o(x,y) = \frac{2\pi}{\lambda}\int \beta(x,y,z)\,dz \tag{2}$$

$$E_i(x,y) = \iint E_o(x',y')\,h(x - x', y - y'; R)\,dx'\,dy' \tag{3}$$

$$h(x,y;R) = \frac{1}{i\lambda R}\exp\!\left[\frac{i\pi}{\lambda R}\left(x^2 + y^2\right)\right] \tag{4}$$

$$\mathcal{F}\{E_i(x,y)\} = H(u,v;R)\,\mathcal{F}\{E_o(x,y)\} \tag{5}$$

$$H(u,v;R) = \exp\!\left[-i\pi\lambda R\left(u^2 + v^2\right)\right] \tag{6}$$

$$I_i(x,y) = \left|\mathcal{F}^{-1}\{H(u,v)\,\mathcal{F}\{E_o(x,y)\}\}\right|^2 \tag{7}$$

$$I_i(x,y) = \left|\mathcal{F}^{-1}\!\left\{H\!\left(Mu, Mv; \frac{R_2}{M}\right)\mathcal{F}\!\left\{E_o\!\left(\frac{x}{M}, \frac{y}{M}\right)\right\}\right\}\right|^2 \ast I_s\!\left(\frac{R_1}{R_2}x, \frac{R_1}{R_2}y\right) \tag{8}$$

$$I_i(x,y) = \mathcal{H}\{E_o(x,y)\} \tag{9}$$

$$E_o(x,y) = \mathcal{H}^{-1}\{I_i(x,y)\} \tag{10}$$

$$E_o = \mathop{\mathrm{argmin}}_{E_o}\left\{\left\|\mathcal{H}\{E_o\} - I_i\right\|^2 + R(E_o)\right\} \tag{11}$$

$$N_\theta = \mathop{\mathrm{argmin}}_{\theta} L\!\left[N_\theta(I_i^n), (\mu_o^n, \varphi_o^n)\right], \quad (I_i^n, \mu_o^n, \varphi_o^n) \in T \tag{12}$$

$$L_{ms} = \sum_{m=1}^{M} \alpha_m \left\|I^{(m)} - I_0\right\|^2 \tag{13}$$