Optica Publishing Group

Three-dimensional laser damage positioning by a deep-learning method

Open Access

Abstract

A holographic and deep-learning-based method is presented for three-dimensional laser damage localization. The axial damage position is obtained by numerically focusing the diffraction ring onto the conjugate position. A neural network, Diffraction-Net, is proposed to distinguish diffraction rings from different surfaces and positions and to obtain the lateral position. Diffraction-Net, trained entirely on simulated data, can distinguish diffraction rings with an overlap rate greater than 61%, the best result reported to date. In experiments, the proposed method achieves, for the first time, damage positioning on each surface of cascaded slabs using diffraction rings, with a smallest inspected damage size of 8µm. High-precision results, with a lateral positioning error below 38.5µm and an axial positioning error below 2.85mm, illustrate the practicality of the method for locating damage sites in online damage inspection.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A high-power laser facility contains hundreds of large-aperture optical elements [1–3], and laser damage is inevitable under high energy densities [4,5]. Online damage inspection and positioning is an important way to assess the safety of high-power laser operation and to increase the lifetime of optical components when combined with optics recycling strategies [6] such as CO2 laser ablation [7] and programmable spatial-shaper blocking [8]. The National Ignition Facility designed the Final Optics Damage Inspection (FODI) system [9], which uses an optical telescope at the center of the target chamber to inspect the optics of each beamline after each laser shot. FODI can inspect damage sites down to 50µm, but damage between 20µm and 50µm also has a high growth probability and requires off-line methods to inspect [10,11].

Traditional online damage-detection schemes directly detect and characterize damage by imaging the optical components. Owing to limits of optical resolution, noise, shadows, and reflections, small damage sites cannot be inspected accurately. Barry Y. [12] proposed a method of indirectly locating a damage site using the diffraction ring generated during propagation from the damage site, using Gradient Direction Matching (GDM) to detect the diffraction ring. The damage site is then obtained by fitting the detected rings to the point-diffraction formula. However, this method can only recover the position of amplitude-type damage and is not appropriate for phase-type damage. Besides, the GDM-based diffraction-ring detection needs different templates to detect diffraction rings of different sizes, which is unacceptable in online automatic detection [13].

In this paper, a three-dimensional damage localization method that is insensitive to the type of damage is proposed to solve the localization problem of complex-amplitude damage. The diffraction rings from a complex-amplitude damage site are used to convert the diffraction intensity into a holographic phase. The axial distance between the damage site and the imaging system is obtained numerically by the principle of holographic focusing, which focuses at the conjugate position. Since diffraction rings originate from elements at different distances and may overlap, each diffraction ring must be distinguished to accurately determine the location of each damage site. Since deep learning has been applied successfully in high-power laser systems [14,15], Diffraction-Net is designed to effectively obtain the bounding box of each diffraction ring. After Diffraction-Net is trained by minimizing the associated loss function with an optimization algorithm, the model can compute the bounding box of each diffraction ring from the gradient direction of the diffraction intensity and realize lateral positioning. Diffraction-Net is trained only on datasets generated analytically, using computational Fresnel diffraction to obtain the diffraction field at specified positions. This removes the need for large amounts of labeled experimental data to train the network weights. The practicality of the proposed method is demonstrated theoretically and experimentally. To our knowledge, detecting damage in cascaded media from overlapping diffraction rings has not been reported before.

2. Principle

Conventional damage detection requires complex microscopic imaging and multiple measurements; it is a “direct” diagnostic method. In a high-power laser system, however, the damage diffraction ring, which carries all the information about the damage, is essential for locating damage sites, especially small ones. Through this “indirect” processing of damage diffraction rings, the three-dimensional position of damage sites can be obtained accurately. It is therefore crucial to extract axial information and develop a localization method based on diffraction rings.

2.1 Damage axial positioning based on holographic focusing

In high-power laser systems, there is a common holographic focusing phenomenon called the hot image [16–18], caused by the optical Kerr effect [19], which can induce damage on downstream optical components. When the input intensity is high enough, the nonlinear refractive index in the slabs varies with intensity. This gives rise to an effective “Fresnel-like” lens that focuses at the downstream position conjugate to the record plane. Since the focus position and the damage position are symmetric about the nonlinear medium, the damage position can be obtained by measuring the focus position. Figure 1 shows the formation mechanism of holographic-focusing damage location, which can be divided into three steps: linear free-space propagation from the damage plane to the record plane, the nonlinear holographic phase transformation in the virtual nonlinear medium, and free-space propagation to the focus plane. The distance between the damage plane and the record plane is ${L_0}$, and that between the record plane and the focus plane is ${L_1}$.

Fig. 1. Schematic diagram of holographic focusing damage positioning.

The input plane wave, of complex amplitude, is modulated by a circular obscuration in the damage plane; the transmittance of the damage plane is described by Eq. (1):

$${t_0}({x_1},{y_1}) = \left\{ {\begin{array}{{c}} {\tau {e^{i\theta }},\textrm{ }r \le {r_0}}\\ {1\textrm{ },\textrm{ }r > {r_0}} \end{array}} \right.,$$
where $\tau ,\theta$ are the amplitude and phase of the damage region and ${r_0}$ is the radius of the damage point. After free-space propagation over the distance ${L_0}$, the reference background wave ${U_R}$ interferes with the object wave ${U_o}$ in the record plane, as described by the Fresnel diffraction integral in Eq. (2):
$$U({x_2},{y_2}) = \frac{A}{{i\lambda {L_0}}}\exp (ik{L_0})\left\{ {\begin{array}{{c}} {\iint {\exp \left\{ {\frac{{ik}}{{2{L_0}}}[{{({x_2} - {x_1})}^2} + {{({y_2} - {y_1})}^2}]} \right\}d{x_1}d{y_1}} }\\ { - \iint {t({x_1},{y_1})\exp \left\{ {\frac{{ik}}{{2{L_0}}}[{{({x_2} - {x_1})}^2} + {{({y_2} - {y_1})}^2}]} \right\}d{x_1}d{y_1}} } \end{array}} \right\},$$
where $\lambda $ is the wavelength, $k$ is the wave number, $({x_1},{y_1})$ are the coordinates of the damage plane, and $({x_2},{y_2})$ are the coordinates of the record plane. Under high-power laser irradiation, the amplitude distribution changes the refractive-index distribution of the nonlinear crystal through the optical Kerr effect. The output wave can be described by Eq. (3):
$${U_{out}}({x_2},{y_2}) = U({x_2},{y_2}){e^{i\phi }},\textrm{ }\phi = M|U({x_2},{y_2}){|^2},$$
where $M = k\frac{{{n_2}}}{{2{n_0}}}d$ is the focal factor, ${e^{i\phi }}$ is the holographic phase, $d$ is the crystal length, and ${n_0},{n_2}$ are the linear and nonlinear refractive indices of the nonlinear crystal arising from the optical Kerr effect. From our earlier analysis [20], the axial intensity on both sides of the focus plane after the nonlinear crystal can be described by Eq. (4):
$$\begin{array}{c} I = {I_0}[1 + 2{\eta ^2} - 2\eta \cos \psi - 2\eta \sqrt {1 + {\eta ^2} + 2\eta \cos \psi } \;\cos (2C + \Phi )],\\ \eta = \sqrt {1 + {B^2}(1 + {\tau ^2} - 2\tau \cos \theta ) + 2B\tau \sin \theta } ,\\ \tan \psi = \frac{{B(\tau \cos \theta - 1)}}{{1 + B\tau \sin \theta }}, \end{array}$$
where $\sin \Phi = \sin \psi /\sqrt {1 + {\eta ^2} - 2\eta \cos \psi }$, $C = k{r_0}^2/4\Delta z$, $\Delta z$ is the distance from the conjugate plane, and $B$ is the B-integral in the nonlinear crystal. The wave carrying the holographic phase converges at the conjugate plane and forms a focal point. That is, the holographic phase arising from the optical Kerr effect acts like the “Fresnel-like” lens [18] focused at the focus plane, with focal length ${L_1}$ and ${L_0} = {L_1}$.

From the principle of on-axis holography, the nonlinear crystal corresponds to a holographic plate with transmittance ${e^{i\phi }}$. The same phenomenon can be reproduced purely numerically by choosing an appropriate $M$ and recording the diffraction field virtually in the computer. By propagating the holographic phase backwards numerically and recording the maximum intensity at each diffraction position, a Peak Distribution Line (PDL) is obtained. The position of the focus plane is the diffraction position corresponding to the maximum of the PDL. Since ${L_0} = {L_1}$, the axial position of the damage point follows directly.

With the help of the PDL, the axial distance of the damage point can be obtained directly and quickly by numerical calculation instead of complex imaging measurements, and the method is insensitive to whether the damage is amplitude type or phase type.

2.2 Damage lateral positioning based on deep learning

Unlike small damage sites themselves, diffraction rings in the record plane are easy to locate. Equation (2) reveals that the center of the diffraction ring is coaxial with the center of the damage point, so the lateral position can be obtained from the center of the diffraction ring. Traditional diffraction-ring center detection methods extract diffraction-image features and then compute the similarity with an ideal diffraction ring [12]. However, these methods are not effective for detecting multiple diffraction rings with large size differences or high overlap rates. Besides, axial positioning requires each diffraction ring to be processed separately; otherwise the superposed PDLs of multiple damage points would confound the maximum of the diffraction intensity. So the bounding range of each diffraction ring is also required.

Inspired by the object-detection framework Faster RCNN [21], which uses a neural network to classify objects and regress their locations, we establish a diffraction-ring detection neural network called Diffraction-Net, whose structure is shown schematically in Fig. 2. The input of Diffraction-Net is the intensity of the record plane, and the output is the bounding box of each diffraction ring.

Fig. 2. The schematic of Diffraction-Net.

In traditional deep-learning-based object detection, images are used directly to train the network parameters. However, to enhance the generalization ability of the model, datasets of different targets under different backgrounds are required, which makes dataset generation complicated and increases training time. Because the diffraction ring is a single oscillating structure, its characteristics can be extracted first and then fed into the network. Using the gradient direction distinguishes well between background illumination and diffraction because the background light is stationary; it makes detection of the diffraction ring independent of illumination of different intensities and wavelengths.
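As a minimal NumPy sketch (not the authors' exact preprocessing code), the gradient-direction feature and its illumination invariance can be illustrated as follows; the function name is ours:

```python
import numpy as np

def gradient_direction(intensity):
    """Direction (angle) of the local intensity gradient, in radians."""
    gy, gx = np.gradient(intensity.astype(float))
    return np.arctan2(gy, gx)

# A constant background offset or a global gain changes gx and gy only by a
# common factor, so the gradient direction is unchanged.
img = np.random.default_rng(0).random((16, 16))
```

This is why the feature is stationary under background light of different intensities: `gradient_direction(img + c)` and `gradient_direction(a * img)` (for `a > 0`) equal `gradient_direction(img)`.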

The gradient direction of the diffraction intensity on the record plane, with a size of 800×800 pixels, is used instead of the raw diffraction intensity as the network input, to reduce the influence of the background and improve detection accuracy. After preprocessing, the gradient-direction feature is sent to the convolutional feature-extraction module Resnet50, which consists of one input layer and residual blocks R1-R5. Each residual block contains several convolutional layers, max-pooling layers, and activation functions. Resnet50 effectively extracts multidimensional features from the gradient direction of the diffraction ring; it also overcomes the problem that learning efficiency drops and accuracy saturates as the network deepens. The classical Faster RCNN and Yolov2 structures directly use the feature map of the last layer of the backbone as the input of the bounding-box module, which simplifies computation but yields a higher error rate for targets with large size changes.

To better represent the information in each dimension of the input gradient direction, we use the FPN framework [22] to exploit the features of different scales extracted by Resnet50. First, a convolution layer with a $1 \times 1$ kernel reduces the dimension of the features in R3-R5; the results are marked C3-C5. The reconstructed C4 is obtained by adding a $2 \times$ upsampling of C5 to C4, and C3 is reconstructed in the same way. Then C5 and the reconstructed C4 and C3 each pass through a $3 \times 3$ kernel convolution layer to reduce the aliasing caused by upsampling, giving P3-P5. P6 and P7 are computed via a $3 \times 3$ kernel convolution layer from the output features of the previous layer. All weights are randomly initialized with He-normal, and the biases are initialized to 0.
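The top-down merge of the pyramid can be sketched in plain NumPy (channel widths and spatial sizes are assumed for an 800×800 input, and the final 3×3 smoothing convolutions are omitted); this illustrates the FPN lateral connections, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1x1(feat, w):
    """A 1x1 convolution is a per-pixel linear map over channels (HWC layout)."""
    return feat @ w

def upsample2x(feat):
    """Nearest-neighbour 2x spatial upsampling."""
    return feat.repeat(2, axis=0).repeat(2, axis=1)

# Backbone outputs R3-R5 at strides 8/16/32 (shapes assumed for illustration).
R3 = rng.random((100, 100, 512))
R4 = rng.random((50, 50, 1024))
R5 = rng.random((25, 25, 2048))

d = 256  # shared pyramid channel width
W3, W4, W5 = (0.01 * rng.normal(size=(c, d)) for c in (512, 1024, 2048))

C5 = conv1x1(R5, W5)
C4 = conv1x1(R4, W4) + upsample2x(C5)  # reconstructed C4: lateral + top-down
C3 = conv1x1(R3, W3) + upsample2x(C4)  # reconstructed C3
```

All three merged maps share the same channel width `d`, which is what lets a single class/box subnet be applied to every pyramid level.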

After FPN multi-scale sampling, each pyramid layer (P3-P7) is sent to a class subnet and a box subnet. Each class subnet consists of four convolution layers and one sigmoid activation, and each box subnet consists of four convolution layers. All class subnets and box subnets are connected separately to form the classification module and the regression module, which determine the existence of a diffraction ring in the current region and perform bounding-box regression; the training loss is computed by the loss function.

To improve training accuracy and reduce training time, the loss function used for network training is divided into two parts [23], a classification loss and a regression loss, as described by Eq. (5):

$$Loss = \frac{1}{N}[{FL({y_{pre}},{y_{true}}) + \mu \textrm{ }{L_{reg}}({t_{pre}},{t_{true}})} ],$$
where ${y_{pre}},{t_{pre}}$ are the predicted classification probabilities and bounding boxes of the diffraction rings, ${y_{true}},{t_{true}}$ are the ground-truth labels and locations, $N$ is the number of diffraction rings in the record plane, and $\mu $ is a weight factor. The classification loss $FL$, called focal loss, is a dynamically scaled cross-entropy loss that avoids overwhelming the detector with a large number of easy negative samples during training. The regression loss ${L_{reg}}$, called smooth L1, prevents gradient explosions and reduces the effect of extreme values.
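A NumPy sketch of the two loss terms and their combination in Eq. (5) (the $\gamma$ and $\alpha$ values are the common focal-loss defaults, not taken from the paper):

```python
import numpy as np

def focal_loss(y_pred, y_true, gamma=2.0, alpha=0.25, eps=1e-7):
    """Focal loss: cross-entropy rescaled to down-weight easy examples."""
    p = np.clip(y_pred, eps, 1 - eps)
    pt = np.where(y_true == 1, p, 1 - p)          # prob. of the true class
    at = np.where(y_true == 1, alpha, 1 - alpha)  # class balancing weight
    return np.sum(-at * (1 - pt) ** gamma * np.log(pt))

def smooth_l1(t_pred, t_true):
    """Smooth L1: quadratic near zero, linear for large residuals."""
    d = np.abs(t_pred - t_true)
    return np.sum(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5))

def detection_loss(y_pred, y_true, t_pred, t_true, n_rings, mu=1.0):
    """Eq. (5): (focal classification loss + mu * box regression loss) / N."""
    return (focal_loss(y_pred, y_true) + mu * smooth_l1(t_pred, t_true)) / n_rings
```

For a confident correct classification the focal term is strongly suppressed by the $(1-p_t)^\gamma$ factor, so easy negatives contribute almost nothing to the gradient.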

To train a neural network, a large amount of labeled data is needed, especially for object detection. For diffraction-ring detection, this would require many plane optical elements with damage points of different sizes. Unlike conventional objects, however, the distribution of the diffraction rings generated by a damage point is well defined by Eq. (2), so the training datasets of Diffraction-Net can be obtained directly by numerical simulation.

On the other hand, the ground-truth bounding boxes of the diffraction rings would also need to be judged and selected manually, which is inefficient and inaccurate. To obtain labeled bounding data faster, we propose a way to calculate the bounding-box range directly from the diffraction ring, described by Eq. (6):

$$\coprod \left[{{{({R_{center}} - {\rho_{center}})}^2} \otimes {{({R_{center}} - {\rho_{center}})}^2}} \right]> T,$$
where ${R_{center}}$ is the line intensity through the center of the diffraction ring, ${\rho _{center}}$ is the mean value of ${R_{center}}$, ${\otimes} $ is the correlation operation, $\coprod $ is the normalization operation, and $T$ is a threshold. The boundary range is determined by the region where the normalized autocorrelation of the squared difference between ${R_{center}}$ and ${\rho _{center}}$ exceeds the threshold. To balance computation against diffraction-ring detail, we choose $T = 0.1$. As shown in Fig. 3, this method detects the main peaks of ${R_{center}}$ and ensures that the main region of the diffraction ring lies inside the bounding box, while reducing the size of the computed region.
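A sketch of this labeling rule on synthetic centre-line profiles (the test signals and function name are ours; a ring cross-section is modelled as an oscillation under a Gaussian envelope):

```python
import numpy as np

def bounding_halfwidth(center_line, threshold=0.1):
    """Eq. (6) sketch: largest lag at which the normalized autocorrelation of
    the squared, mean-subtracted centre-line intensity exceeds the threshold."""
    s = (center_line - center_line.mean()) ** 2
    acf = np.correlate(s, s, mode="full")
    acf = acf / acf.max()
    lags = np.arange(-len(s) + 1, len(s))
    return lags[acf > threshold].max()

# Synthetic ring cross-sections: oscillation under narrow/wide Gaussian envelopes.
x = np.linspace(-1.0, 1.0, 401)
narrow = 1.0 + np.exp(-((x / 0.2) ** 2)) * np.cos(40 * x)
wide = 1.0 + np.exp(-((x / 0.5) ** 2)) * np.cos(40 * x)
```

A wider ring pattern keeps its autocorrelation above the threshold out to larger lags, so the computed bounding half-width grows with the ring extent, as intended.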

Fig. 3. The bounding of diffraction rings for different damage sizes and diffraction distances. (a) ${r_0} = 50\mu m,{L_0} = 60mm$; (b) ${r_0} = 50\mu m,{L_0} = 300mm$; (c) ${r_0} = 15\mu m,{L_0} = 60mm$; (d) ${r_0} = 100\mu m,{L_0} = 300mm$.

The distribution of the diffraction rings generated by a damage point is defined analytically, so the training datasets of Diffraction-Net can be obtained numerically. Since the diffraction rings to be detected have different sizes, positions, and overlaps, the radii in the training dataset range from 10µm to 150µm and the diffraction distances from 50mm to 300mm. There are 4000 training diffraction images in total, each covering a $6.6mm \times 6.6mm$ region with $800 \times 800$ pixels, with the overlap between two diffraction rings ranging from 10% to 100%. Each damage point is placed randomly in the damage plane and the overlap between diffraction rings is also random, to ensure robust ring detection.
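The random scene sampling described above can be sketched as follows (function and field names are illustrative, not from the paper's code; each sampled scene would then be rendered by Fresnel diffraction):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_training_scene(max_rings=3, region_mm=6.6):
    """One random training scene: damage radii (10-150 um), diffraction
    distances (50-300 mm), and random lateral centres in the 6.6 mm window."""
    n = int(rng.integers(1, max_rings + 1))
    return {
        "radius_um": rng.uniform(10.0, 150.0, n),
        "distance_mm": rng.uniform(50.0, 300.0, n),
        "center_mm": rng.uniform(0.0, region_mm, (n, 2)),
    }

scene = sample_training_scene()
```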

To accelerate training, we use Resnet50 as the feature-extraction module, with weights pre-trained on ImageNet. The drop probability of the dropout layer was set to 0.2. Adaptive Moment Estimation (Adam) was adopted to optimize the network weights, with a learning rate of $3 \times {10^{ - 4}}$. The program was implemented in Python 3.6 using Keras and run on a workstation with two graphics cards (NVIDIA RTX2080TI) and 64GB of memory. After 120 epochs, about 20 hours of iteration, the loss decreased to 0.001. Diffraction-Net achieves a high speed of 20 frames per second (FPS) for diffraction-ring detection, and the mean Average Precision (mAP) converges to 94%.

Since small damage also modulates the input wave, differing from large damage only in oscillation frequency, our method is appropriate for small-size damage detection provided the modulation produces a measurable CCD response. The different types of damage, amplitude type, phase type, and complex-amplitude type, produce similar diffraction rings, and Diffraction-Net detects the diffraction rings from all of them efficiently.

3. Results and discussions

3.1 Numerical simulation

3.1.1 Axial positioning

Here we use Fresnel diffraction to obtain the amplitude distribution of a damage point of a specific size at ${L_0}$. After converting the amplitude to phase, we multiply by the focal factor $M$ to form the holographic phase; here $M = 2\pi $. The PDL is obtained by propagating the holographic phase backward over the range 10mm-600mm, recording the maximum backpropagated diffraction intensity every 1mm.
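This pipeline can be sketched numerically as below. The grid size, pitch, step spacing, and the normalization inside the holographic phase are assumptions of the sketch (coarser than the paper's 800×800 grid and 1 mm steps, for speed), and angular-spectrum propagation stands in for the Fresnel integral:

```python
import numpy as np

WAVELENGTH = 632.8e-9   # HeNe wavelength used later in the experiment
PITCH = 16.5e-6         # grid pitch (assumed)
N = 256

def propagate(field, dz):
    """Free-space angular-spectrum propagation of a complex field over dz."""
    fx = np.fft.fftfreq(N, d=PITCH)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 - (WAVELENGTH * FX) ** 2 - (WAVELENGTH * FY) ** 2, 0.0)
    kz = 2.0 * np.pi / WAVELENGTH * np.sqrt(arg)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

# Plane wave through an opaque (amplitude-type) damage disk, as in Eq. (1).
coords = (np.arange(N) - N / 2) * PITCH
X, Y = np.meshgrid(coords, coords)
u0 = np.where(np.hypot(X, Y) <= 50e-6, 0.0, 1.0).astype(complex)

L0 = 0.12  # damage-to-record-plane distance
intensity = np.abs(propagate(u0, L0)) ** 2

# Holographic phase of Eq. (3) with focal factor M = 2*pi (mean-normalized here).
holo = np.exp(2j * np.pi * intensity / intensity.mean())

# Peak Distribution Line: peak intensity at each trial propagation distance.
z_trials = np.arange(0.01, 0.30, 0.005)
pdl = np.array([np.max(np.abs(propagate(holo, z)) ** 2) for z in z_trials])
```

In the full method, the maximum of the (filtered) PDL estimates the conjugate distance, i.e. ${L_0}$.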

As the radius of the damage point increases, the light intensity at the conjugate surface is no longer the maximum of the PDL [20]. The positions of the maxima of the PDL can be expressed as Eq. (7):

$$\Delta z = \frac{{{r^2}}}{{\lambda (2q + 1 - \Phi /\pi )}}\textrm{ (}\;q = 0, \pm 1, \pm 2\ldots \textrm{)}\textrm{.}$$
This means the maxima lie on both sides of the conjugate surface and are distributed symmetrically. As shown in Fig. 4, the PDL obtained from the diffraction intensity of a damage point with ${r_0} = 30\mu m$ at ${L_0} = 120mm$ has a local minimum at 120mm. Using the maximum of the PDL directly as the conjugate distance is therefore inaccurate when the damage point is large. However, since the conjugate position lies midway between the maxima, the envelope of the PDL can be acquired simply with a low-pass filter [24]; the maximum of the envelope is the conjugate position. In Fig. 4, the local minimum at 120mm becomes the global maximum after low-pass filtering, so taking the maximum of the filtered PDL curve accurately locates the conjugate position.
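The filtering step can be illustrated with a simple moving average on a synthetic double-peaked PDL (the peak positions and widths below are invented for illustration; the paper's filter choice may differ):

```python
import numpy as np

def pdl_envelope(pdl, width=21):
    """Low-pass filter the PDL with a moving average so the two symmetric
    side peaks around the conjugate plane merge into a single maximum."""
    kernel = np.ones(width) / width
    return np.convolve(pdl, kernel, mode="same")

# Synthetic PDL: a dip at the conjugate position (120 mm) flanked by two peaks.
z = np.arange(10, 601)  # trial distances in mm
pdl = np.exp(-((z - 114) / 6.0) ** 2) + np.exp(-((z - 126) / 6.0) ** 2)

z_conjugate = z[np.argmax(pdl_envelope(pdl))]
```

The raw argmax would land on one of the side peaks (114 mm or 126 mm); after smoothing, the global maximum moves to the midpoint, recovering the conjugate distance.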

Fig. 4. The envelope of PDL using low-pass filter.

To illustrate the accuracy of the PDL at different distances and damage sizes, we use amplitude-type damage with $\tau = 0,\;\theta = 0$ and phase-type damage with $\tau = 1,\;\theta = \pi $, and obtain PDL curves for ${r_0} = 15,\;30,\;45,\;60\mu m$ and ${L_0} = 60,\;120,\;180,\;240mm$, as shown in Fig. 5.

Fig. 5. PDL result of (a) amplitude-type damage with ${r_0} = 30\mu m$ radius at ${L_0} = 60,\;120,\;180,\;240\;mm$; (b) amplitude-type damage with ${r_0} = 15,\;30,\;45,\;60\;\mu m$ at ${L_0} = 180mm$; (c) phase-type damage with ${r_0} = 30\mu m$ radius at ${L_0} = 60,\;120,\;180,\;240\;mm$; (d) phase-type damage with ${r_0} = 15\mu m,\;30\mu m,\;45\mu m,\;60\mu m$ at ${L_0} = 180mm$.

By comparing the simulated peak position of the PDL with the true value of ${L_0}$, the average axial positioning error for the amplitude-damage simulations in Figs. 5(a) and 5(b) is 1.5mm, and for the phase-damage simulations in Figs. 5(c) and 5(d) it is 1.4mm. This demonstrates that the holographic focusing method is robust for axial positioning of both amplitude-type and phase-type damage. The main sources of positioning error are the coarse sampling of the PDL and the position offset caused by low-pass filtering. As a trade-off between computational complexity and positioning accuracy, this error is negligible in high-power laser systems with large, widely spaced optical elements.

3.1.2 Lateral positioning

The conventional approach for automatically detecting diffraction rings in downstream optic images caused by upstream damage sites uses GDM [13]. But GDM has a high false-alarm rate for overlapping diffraction rings, and it requires a specified matching-template size, making it difficult to balance small-size detection against the false-alarm rate.

To quantify the algorithm, the accuracy of the center position and the correctness of the identified count must be weighed equally. The evaluation metric for the center error $\Gamma $ is therefore given by Eq. (8):

$$\Gamma = \frac{1}{{2{N_p}}}{||{{C_p} - {C_t}} ||_2} + \frac{1}{2}P,\;\;\;{\kern 1pt} P = \left\{ {\begin{array}{{c}} {0\;\;\;{\kern 1pt} \textrm{if}\;{N_p} = {N_t}}\\ {1\;\;\;{\kern 1pt} {\kern 1pt} \textrm{if}\;{N_p} \ne {N_t}} \end{array}} \right.,$$
where ${N_p},\;{N_t}$ are the numbers of predicted and true diffraction rings, and ${C_p},\;{C_t}$ are the positions of the predicted and true center points.
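Under the assumption that predicted and true centers are listed in matched order (the pairing rule is not specified in the text), Eq. (8) can be sketched as:

```python
import numpy as np

def center_error(pred_centers, true_centers):
    """Eq. (8) sketch: mean Euclidean centre error plus a count-mismatch
    penalty, assuming predictions and ground truth are in matched order."""
    pred = np.asarray(pred_centers, dtype=float)
    true = np.asarray(true_centers, dtype=float)
    if len(pred) != len(true):
        return 0.5  # P = 1: the wrong ring count dominates the metric
    dist = np.linalg.norm(pred - true, axis=1).sum()
    return dist / (2 * len(pred))  # P = 0 when the counts agree
```

A perfect detection scores 0; a miscounted detection scores at least 0.5 regardless of how close the detected centers are, which is how the metric treats the count and the position equally.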

To demonstrate the superiority of Diffraction-Net in diffraction-ring detection, we generate two phase-type damage points with a radius of 30µm at different spacings, such that the overlap rate of the diffraction rings at 120mm ranges from 10% to 90%. The overlap rate is calculated from the overlap of the bounding boxes defined in Eq. (6). The input illumination is a first-order Gaussian beam.

As shown in Fig. 6, the yellow boxes in Figs. 6(a)–6(c) are the results from the box subnet of Diffraction-Net, and the scores are the confidences from the class subnet. The average running time of each detection is 40ms, more than two times faster than the 100ms of GDM, which uses massive convolution operations.

Fig. 6. Diffraction-ring detection results. (a)-(c) Diffraction-Net results; the yellow box indicates the bounding box and the confidence of classification; (d)-(f) GDM results; the red asterisk is the detected ring center. The overlap of (a), (d) is 50%; of (b), (e) is 70%; of (c), (f) is 90%. (g) Center-error comparison of GDM and Diffraction-Net at different overlaps. The scale bar is 800µm.

Comparing the test results of Diffraction-Net with GDM in Fig. 6(g), the GDM center error $\Gamma $ becomes much larger than that of Diffraction-Net when the overlap rate reaches 50%, and as shown in Fig. 6(d) such a center error is unacceptable. When the overlap rate is 70%, GDM fails to distinguish the two diffraction rings. Diffraction-Net, in contrast, achieves accurate detection with a center error below 0.005 for overlap rates up to 80%.

Based on deep learning, Diffraction-Net not only locates the diffraction ring but also obtains its size, making it better suited to diffraction-ring detection under complex conditions.

3.2 Experiment

To illustrate the practicality of the proposed method, we used a phase-type Spatial Light Modulator (SLM) to test the diffraction-ring detection capability under different ring overlaps, and two slabs of different materials with damage on their surfaces to assess the axial location precision and the damage diagnostic ability for cascaded media.

Figure 7 shows the optical layout. The laser (Thorlabs HNL020LB, 632.8nm) is attenuated by P and collimated into an appropriate spot size by O, I, and L. The SLM (Holoeye PLUTO) is used to generate phase-type damage points of different radii. G1 is a fused-silica slab of size $10cm \times 5cm \times 10mm$, and G2 is a Nd:Glass slab [25,26] of size $10cm \times 10cm \times 12mm$. The surfaces of G1 and G2 carry several different damage sites. After damage modulation, the plane wave from BS propagates to the CCD (Allied Vision GT2300).

Fig. 7. The experiment layout. P, polarizer; O, objective lens; I, iris; L, lens; BS, beam splitter; SLM, Spatial Light Modulator; G1, fused silica; G2, Nd:Glass.

3.2.1 Lateral positioning for overlap diffraction rings

First, we remove G1 and G2 from the light path and use the SLM to generate two phase-type damage points with different spacings. The damage radius is 24µm, since each SLM pixel is 8µm, and the diffraction distance is 100mm. The detection results are shown in Fig. 8.

Fig. 8. Detection result of different rings spacing. (a) 608µm; (b) 480µm; (c) 352µm; (d) 224µm; (e) 96µm; (f) 48µm. The scale bar is 800µm.

Each detection takes 40ms. Down to a damage spacing of 224µm, corresponding to an overlap rate of 61%, the diffraction rings can still be distinguished by Diffraction-Net; this threshold is lower than in the simulation because of image noise. In Figs. 8(e) and 8(f), where the overlap rates are 74% and 92%, Diffraction-Net misinterprets the diffraction field as a single diffraction ring.

Using Diffraction-Net for lateral positioning solves the problem that the traditional method merges diffraction rings when the overlap rate exceeds 50%, and achieves exact lateral pointing with an average center positioning error of 12µm. For single-damage-site detection, Diffraction-Net can detect the diffraction ring from phase-type damage as small as 8µm (one SLM pixel), which cannot be inspected by the FODI system. This also demonstrates the advantage of the diffraction-ring-based method for small-size damage diagnostics.

3.2.2 Axial positioning on the surface of single medium

To demonstrate the accuracy of axial and lateral positioning, we first used an ultrafast laser to generate two damage points, with radii of about 50µm, on the front and posterior surfaces of G1. The SLM and G2 are removed from the optical path, and the diffraction field generated by G1 is recorded directly by the CCD. The gradient direction of the diffraction intensity is used for bounding-box calculation in Diffraction-Net. The lateral positioning result is shown in Fig. 9(a): the main area of each diffraction ring is detected with high confidence. The upper-left diffraction ring is caused by the damage point on the front surface, and the lower-right ring by the damage point on the posterior surface.

Fig. 9. Single glass surface positioning (a) lateral positioning result; (b) axial positioning result. The scale bar is 800µm.

After the amplitude is converted into phase and the holographic phase is formed with the focal factor, the PDLs calculated for the diffraction rings inside their bounding boxes are shown in Fig. 9(b). As in the simulation, the original PDL shows a dip near the conjugate point. The spacing between the peaks of the two filtered PDL curves is 7mm. Fused silica has a refractive index of 1.45 at the 632nm wavelength [27], so the 7mm free-space diffraction distance converts to a glass spacing of 10.15mm.

It should be noted that the PDL directly yields the free-space propagation distance, which must be combined with the actual optical-element layout to determine the damage location. Moreover, this method can determine damage sites not only on the surfaces of the slab but also, accurately, inside it.
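The conversion from a free-space PDL distance to a physical spacing inside a slab is a one-line scaling by the refractive index; using the paper's own numbers for fused silica and Nd:Glass:

```python
def glass_spacing_mm(free_space_mm, refractive_index):
    """Physical spacing inside a slab equivalent to a free-space PDL distance."""
    return free_space_mm * refractive_index
```

For the 7 mm peak spacing measured above, fused silica (n = 1.45) gives 10.15 mm, and Nd:Glass (n = 1.52) gives 10.64 mm.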

3.2.3 Axial positioning on cascade medium

In a high-power laser system there are many amplifying media and nonlinear crystals of different materials. After damage sites on different optical elements propagate over some distance, their diffraction rings may be detected by the CCD in the same plane. The problem is therefore truly solved in practice only when the diffraction rings in the same imaging plane can be accurately attributed to and distinguished among cascaded devices.

To demonstrate the accuracy of axial positioning in cascaded media, G2 is inserted into the light path, and G1 and G2 are moved into appropriate positions so that the diffraction rings generated by the damage on both glasses can be captured by the CCD. Two damage points with radii of about 200µm and 1mm were made on G2 by an ultrafast laser. As shown in Fig. 10(a), the upper-left diffraction ring is generated by the damage point of radius 200µm on the posterior surface of G2. The middle diffraction ring is generated by the damage point of radius 50µm on the front surface of G1. The lower-right diffraction ring comes from dust scattering on the front surface of L.

Fig. 10. Cascade medium positioning results. (a) Lateral result and (b) axial PDL result with the posterior-surface damage on G2; (c) lateral result and (d) axial result with the front-surface damage and fringe illumination on G2. The scale bar is 800µm.

Each diffraction ring from the cascaded media is located accurately by Diffraction-Net, as shown in Fig. 10(a). The average lateral positioning error is 33µm (6 pixels), obtained by comparing the relative spacing between the bounding-box centers of the detected diffraction rings with the actual damage-point spacing.

The axial positioning results are shown in Fig. 10(b). The three PDLs are obtained by calculating the peak diffraction intensity over 0-600mm within the bounding boxes computed by Diffraction-Net. Since the PDL gives the propagation distance in free space, it must be converted to the actual damage distribution according to the position, thickness, and refractive index of the real slabs in the optical path. After conversion, the maximum of the red PDL, computed from the upper-left bounding box, gives the position of the G2 posterior surface; the maximum of the blue PDL, from the middle bounding box, gives the position of the G1 front surface; and the maximum of the green PDL gives the position of L. Comparing the maximum of each PDL with the true position, the average axial positioning error is 1.13mm.

To demonstrate the robustness of the method, diffraction rings of different sizes under different illumination were tested. The damage site with a radius of 1mm on the front surface of G2 was moved into the field of view, and G2 was rotated to produce interference fringes.

The positioning results obtained under these conditions are shown in Figs. 10(c) and 10(d). Comparing the true center spacing with the bounding-box center spacing gives an average lateral positioning error of 38.5µm, i.e., 7 pixels. After converting the free-space PDL distances to the real damage distribution, the maximum of the red PDL line in Fig. 10(d) corresponds to the front-surface position of G2. The red PDL maximum lies at 129mm in Fig. 10(b) and at 136mm in Fig. 10(d), a free-space spacing of 7mm. Since the refractive index of Nd:glass at a 632nm wavelength is 1.52 [28], the real spacing between the two red PDL maxima in the slab is 10.64mm. This is smaller than the real thickness of G2, 12mm, because the damage occurred not only on the surface of the slab but also extended to a depth of 1mm inside it. This also shows that the proposed method can accurately locate damage occurring inside the slabs.
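The quoted numbers can be checked directly: inside a slab of index n, a free-space PDL spacing maps back to n times itself, so the 7mm spacing corresponds to 7 × 1.52 = 10.64mm of glass. A minimal worked check:

```python
# Free-space spacing between the two red PDL maxima (Figs. 10(b) and 10(d)):
dz_free_mm = 136 - 129        # 7 mm
n_nd_glass = 1.52             # refractive index of Nd:glass at 632 nm [28]

# A slab of physical thickness d occupies only d/n of free-space
# propagation distance, so a spacing measured inside the slab maps
# back to n times the free-space value:
dz_real_mm = dz_free_mm * n_nd_glass
print(round(dz_real_mm, 2))   # → 10.64, less than the 12 mm slab thickness
```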

The above experimental results illustrate that the proposed method can be applied to damage positioning in cascaded media with different materials and illumination. Only one image is needed to calculate the damage locations on all elements in the recording area. At the same time, the gradient-direction features ensure that Diffraction-Net can detect diffraction-ring distributions under complex illumination even though it is trained on simple datasets.

4. Conclusion

In this article, a three-dimensional laser damage positioning method based on holographic focusing and deep learning is proposed. The diffraction rings from damage in the laser system are used to compute the axial position by the PDL and the lateral position by Diffraction-Net. Since the PDL is independent of the type of damage, using the nonlinear hot-image focus point achieves accurate axial positioning. For lateral positioning, Diffraction-Net uses the gradient direction as its input, which greatly reduces the training time and improves the adaptability of the network to complex conditions. The training dataset is generated entirely by simulation, which greatly improves the practicality of the presented network. Experimental results show that the method achieves accurate three-dimensional positioning in a cascaded medium. It should be emphasized that the proposed method solves practical inspection problems in complex optical environments with a single intensity recording and provides a new approach to online damage location in high-power laser systems. Combined with optics-recycle strategies, it will benefit laser damage control in the future.

Funding

Shanghai Sailing Program (18YF1425900); National Natural Science Foundation of China (11774364); Bureau of International Cooperation, Chinese Academy of Sciences (181231KYSB20170022); Chinese Academy of Sciences (XDA25020302).

Disclosures

The authors declare no conflicts of interest.

References

1. J. Q. Zhu, “Review of special issue on high power facility and technical development at the NLHPLP,” High Power Laser Sci. Eng. 7(1), e12 (2019). [CrossRef]  

2. C. N. Danson, C. Haefner, J. Bromage, T. Butcher, J.-C. F. Chanteloup, E. A. Chowdhury, A. Galvanauskas, L. A. Gizzi, J. Hein, D. I. Hillier, N. W. Hopps, Y. Kato, E. A. Khazanov, R. Kodama, G. Korn, R. Li, Y. Li, J. Limpert, J. Ma, C. H. Nam, D. Neely, D. Papadopoulos, R. R. Penman, L. Qian, J. J. Rocca, A. A. Shaykin, C. W. Siders, C. Spindloe, S. Szatmari, R. M. G. M. Trines, J. Zhu, P. Zhu, and J. D. Zuegel, “Petawatt and exawatt class lasers worldwide,” High Power Laser Sci. Eng. 7(3), e54 (2019). [CrossRef]  

3. G. Xu, T. Wang, Z. Li, Y. Dai, Z. Lin, Y. Gu, and J. Zhu, “1 kJ Petawatt Laser System for SG-II-U Program,” Rev. Laser Eng. 36(APLS), 1172–1175 (2008). [CrossRef]  

4. P. A. Baisden, L. J. Atherton, R. A. Hawley, T. A. Land, J. A. Menapace, P. E. Miller, M. J. Runkel, M. L. Spaeth, C. J. Stolz, T. I. Suratwala, P. J. Wegner, and L. L. Wong, “Large Optics for the National Ignition Facility,” Fusion Sci. Technol. 69(1), 295–351 (2016). [CrossRef]  

5. D. F. Zhao, M. Y. Sun, R. Wu, X. Q. Lu, Z. Q. Lin, and J. Q. Zhu, “Laser-induced damage of fused silica on high power laser: beam intensity modulation, optics defect, contamination,” Proc. SPIE 9632, 96320G (2015). [CrossRef]  

6. S. Trummer, G. Larkin, L. Kegelmeyer, M. Nostrand, C. Karkazis, D. Martin, R. Aboud, and T. Suratwala, “Automated repair of laser damage on National Ignition Facility optics using machine learning,” Proc. SPIE 10805 (2018).

7. E. Mendez, K. M. Nowak, H. J. Baker, F. J. Villarreal, and D. R. Hall, “Localized CO2 laser damage repair of fused silica optics,” Appl. Opt. 45(21), 5358–5367 (2006). [CrossRef]  

8. G. Xia, W. Fan, D. Huang, H. Cheng, J. Guo, and X. Wang, “High damage threshold liquid crystal binary mask for laser beam shaping,” High Power Laser Sci. Eng. 7(1), e9 (2019). [CrossRef]  

9. A. Conder, J. Chang, L. Kegelmeyer, M. Spaeth, and P. Whitman, “Final Optics Damage Inspection (FODI) for the National Ignition Facility,” Proc. SPIE 7797, 77970P (2010). [CrossRef]  

10. F. Ravizza, M. Nostrand, L. Kegelmeyer, R. Hawley, and M. Johnson, “Process for rapid detection of fratricidal defects on optics using linescan phase-differential imaging,” Proc. SPIE 7054, 75041B (2009). [CrossRef]  

11. C. Miller, L. Kegelmeyer, M. Nostrand, R. Raman, D. Cross, Z. Liao, R. Garcha, and C. Carr, “Characterization and repair of small damage sites and their impact on the lifetime of fused silica optics on the National Ignition Facility,” Proc. SPIE 10805, 47 (2018). [CrossRef]  

12. B. Y. Chen, L. M. Kegelmeyer, J. A. Liebman, J. T. Salmon, J. Tzeng, and D. W. Paglieroni, “Detection of laser optic defects using gradient direction matching,” Proc. SPIE 6101, 61011L (2006). [CrossRef]  

13. A. H. Yang, Z. Li, D. Liu, J. Miao, and J. Q. Zhu, “Direct prejudgement of hot images with detected diffraction rings in high power laser system,” High Power Laser Sci. Eng. 6(3), e52 (2018). [CrossRef]  

14. T. Galvin, S. Herriot, B. Ng, W. Williams, S. Talathi, T. Spinka, E. Sistrunk, C. Siders, and C. Haefner, Proc. SPIE 10751 (2018).

15. H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018). [CrossRef]  

16. J. T. Hunt, K. R. Manes, and P. A. Renard, “Hot images from obscurations,” Appl. Opt. 32(30), 5973–5982 (1993). [CrossRef]  

17. H. Jia, L. Zhou, and F. Wang, “Dark spot downstream from nonlinear hot image,” Appl. Opt. 51(19), 4285–4290 (2012). [CrossRef]  

18. K. R. Manes, M. L. Spaeth, J. J. Adams, M. W. Bowers, J. D. Bude, C. W. Carr, A. D. Conder, D. A. Cross, S. G. Demos, J. M. G. Di Nicola, S. N. Dixit, E. Feigenbaum, R. G. Finucane, G. M. Guss, M. A. Henesian, J. Honig, D. H. Kalantar, L. M. Kegelmeyer, Z. M. Liao, B. J. MacGowan, M. J. Matthews, K. P. McCandless, N. C. Mehta, P. E. Miller, R. A. Negres, M. A. Norton, M. C. Nostrand, C. D. Orth, R. A. Sacks, M. J. Shaw, L. R. Siegel, C. J. Stolz, T. I. Suratwala, J. B. Trenholme, P. J. Wegner, P. K. Whitman, C. C. Widmayer, and S. T. Yang, “Damage mechanisms avoided or managed for NIF large optics,” Fusion Sci. Technol. 69(1), 146–249 (2016). [CrossRef]  

19. Y. R. Shen, “Electrostriction, optical Kerr effect and self-focusing of laser beams,” Phys. Lett. 20(4), 378–380 (1966). [CrossRef]  

20. K. You, Y. Zhang, X. Zhang, M. Sun, and J. Zhu, “Structural evolution of axial intensity distribution during hot image formation,” Appl. Opt. 56(16), 4835–4842 (2017). [CrossRef]  

21. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137–1149 (2017). [CrossRef]  

22. T. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature Pyramid Networks for Object Detection,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 936–944.

23. T. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal Loss for Dense Object Detection,” in 2017 IEEE International Conference on Computer Vision (ICCV) (2017), pp. 2999–3007.

24. D. Wei and C. Meiyun, “Reconstruction of partial envelope of interference pattern based on chirp Z-transform,” Opt. Express 27(10), 13803–13808 (2019). [CrossRef]  

25. K. Yao, X. Xie, J. Tang, C. Fan, S. Gao, Z. Lu, Z. Chen, Q. Xue, K. Zheng, and Q. Zhu, “Diode-side-pumped joule-level square-rod Nd:glass amplifier with 1 Hz repetition rate and ultrahigh gain,” Opt. Express 27(23), 32912–32923 (2019). [CrossRef]  

26. J. Guo, J. Wang, X. Pan, X. Lu, G. Xia, X. Wang, S. Zhang, W. Fan, and X. Li, “Suppression of FM-to-AM conversion in a broadband Nd:glass regenerative amplifier with an intracavity birefringent filter,” Appl. Opt. 58(5), 1261–1270 (2019). [CrossRef]  

27. I. H. Malitson, “Interspecimen Comparison of the Refractive Index of Fused Silica,” J. Opt. Soc. Am. 55(10), 1205–1209 (1965). [CrossRef]  

28. M. J. Weber, R. A. Saroyan, and R. C. Ropp, “Optical properties of Nd3+ in metaphosphate glasses,” J. Non-Cryst. Solids 44(1), 137–148 (1981). [CrossRef]  

