
Front and back surface measurement of the transparent planar element based on multi-frequency fringe deflectometry

Open Access

Abstract

As a highly accurate metrology, phase measuring deflectometry (PMD) can be used for in-situ surface shape measurement. However, because of the reflection off the back surface, PMD cannot measure the front and back surfaces of a transparent planar element simultaneously. Therefore, this paper proposes a method for measuring the front and back surfaces of the transparent planar element. The phase distributions corresponding to the front and back surfaces are first acquired by multi-frequency fringe deflectometry. Then, the front and back surface shapes are obtained by inverse ray tracing and nonlinear optimization. Numerical simulation and experiment verify the proposed method. The surface shape of window glass with a thickness of 10 mm is measured in the experiment; the surface shape error is around 50 nm root mean square over a 51 mm diameter aperture.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The surface shape measurement of optical elements is becoming increasingly important in the laser inertial confinement fusion (ICF) facility, in which many laser beams are generated, amplified, frequency-doubled, and finally focused onto one target. The final optics assembly (FOA) in the ICF facility, as the output of the high-power laser system, reflects the comprehensive performance of the whole system and directly determines whether the physical experiment succeeds. The FOA contains many transparent planar elements, including optical flat windows, frequency-doubling crystal plates, neodymium plates, etc. [1,2]. Stringent requirements on the surface shape error are crucial to realizing laser fusion successfully. In particular, both surfaces of the transparent planar element need to be measured accurately in situ. The most commonly used optical metrology is interferometry [3,4]; the Intellium H2000 interferometer has already been used for the in-situ measurement of the Giant Magellan Telescope (GMT) [5,6]. Although interferometry is highly accurate, it is extremely sensitive to environmental disturbance, so an in-situ testing method with a simple system structure and strong robustness is needed.

Phase measuring deflectometry (PMD), an in-situ surface metrology [7,8], consists of a liquid crystal display (LCD) screen, the surface under test (SUT), and a camera. The LCD screen displays fringe patterns, and the camera captures the distorted fringe patterns reflected by the SUT. The surface slope of the SUT is then calculated with a phase extraction method, and finally the surface shape of the SUT is reconstructed. Unfortunately, when traditional PMD is used to measure a transparent planar element, the fringe patterns reflected off the front and back surfaces are superimposed on the camera sensor plane [9]. The superimposed fringe patterns prevent traditional PMD from obtaining the surface shape accurately. Binary pattern deflectometry [10] has been proposed to measure the front surface shape of a transparent element; however, it relies on setting a suitable threshold to distinguish the regions reflected from the back surface.

Another approach is the multi-frequency method, which can obtain the phase distributions corresponding to the front and back surfaces [11]. The multi-frequency method treats the double-surface reflection as the superposition of two sinusoidal signals. Tao [12] and Leung [13] proposed envelope-curve algorithms based on the multi-frequency method, in which the first zero point of the envelope curve is taken as the initial value of a least-squares iteration to obtain the phase distributions of the front and back surfaces. Ye [14,15] applied a phase iteration algorithm based on the multi-frequency method to acquire the phase distributions of the front and back surfaces and to reconstruct the surface shapes; the selection of the initial phase values in the phase iteration algorithm is critical to guarantee correct convergence of the solution. Wang [16] proposed multi-frequency fringe deflectometry (MFD) to untangle the double-surface reflections using power spectrum estimation. MFD requires many fringe patterns sampled at a set of fringe spatial frequencies.

Despite using many fringe patterns, MFD is still appealing since it requires neither additional special hardware nor a change in the existing PMD setup. Therefore, this paper proposes a method for measuring the front and back surface shapes of the transparent planar element. The MFD is used to decouple the double surface reflections and obtain the phase distribution of the front and back surfaces. Based on this, the front and back surface shapes can be acquired using inverse ray-tracing and nonlinear optimization.

2. Measurement of the transparent planar element

2.1 Principle of PMD

PMD consists of an LCD screen, the SUT and a camera, as shown in Fig. 1. The sinusoidal fringe patterns are displayed on the LCD screen. The reflected fringe pattern from the SUT is deformed and captured by the camera. According to the inverse Hartmann principle, the ray emitted from the projection center of the camera (point C) is reflected by the SUT at point M and then intersects the LCD screen at point S.

Fig. 1. The schematic of the PMD testing system.

The slope of the SUT can be calculated using the law of reflection [17], as shown in Eq. (1):

$$tan{\alpha _x} = \frac{{\frac{{{x_s} - {x_m}}}{{{d_{m2s}}}} + \frac{{{x_c} - {x_m}}}{{{d_{m2c}}}}}}{{\frac{{{z_{m2s}} - W}}{{{d_{m2s}}}} + \frac{{{z_{m2c}} - W}}{{{d_{m2c}}}}}},tan{\alpha _y} = \frac{{\frac{{{y_s} - {y_m}}}{{{d_{m2s}}}} + \frac{{{y_c} - {y_m}}}{{{d_{m2c}}}}}}{{\frac{{{z_{m2s}} - W}}{{{d_{m2s}}}} + \frac{{{z_{m2c}} - W}}{{{d_{m2c}}}}}},$$
where ${\alpha _x}$ and ${\alpha _y}$ represent the angles between the surface normal and the z-axis in the x- and y-directions; the LCD screen coordinates $({x_s},{y_s})$ are obtained using the phase-shifting algorithm [18]; the coordinates of the SUT and of point C $({x_c},{y_c})$ are obtained through the calibration process [19]; ${z_{m2c}}$ and ${z_{m2s}}$ denote the z-distances between the SUT and the camera and between the SUT and the LCD screen; ${d_{m2c}}$ and ${d_{m2s}}$ represent the corresponding full distances; and W denotes the surface shape of the SUT. After the x- and y-slopes are calculated with Eq. (1), the surface shape of the SUT can be reconstructed by polynomial fitting, for example with Zernike polynomials [20].
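For clarity, Eq. (1) can also be evaluated as a short numerical routine. The following Python sketch is an illustration only (the variable names are ours, not the authors'); all inputs are assumed to be expressed in the same world coordinate system, and NumPy arrays with one entry per camera pixel can be passed in place of scalars.

```python
import numpy as np

def surface_slopes(xs, ys, xm, ym, xc, yc, z_m2s, z_m2c, W=0.0):
    """x- and y-slopes of the SUT from Eq. (1).

    (xs, ys): screen point S, (xm, ym): reflection point M on the SUT,
    (xc, yc): camera pinhole C, z_m2s / z_m2c: z-distances from the SUT
    to the screen and to the pinhole, W: current estimate of the surface sag.
    """
    d_m2s = np.sqrt((xs - xm) ** 2 + (ys - ym) ** 2 + (z_m2s - W) ** 2)
    d_m2c = np.sqrt((xc - xm) ** 2 + (yc - ym) ** 2 + (z_m2c - W) ** 2)
    denom = (z_m2s - W) / d_m2s + (z_m2c - W) / d_m2c
    tan_ax = ((xs - xm) / d_m2s + (xc - xm) / d_m2c) / denom
    tan_ay = ((ys - ym) / d_m2s + (yc - ym) / d_m2c) / denom
    return tan_ax, tan_ay
```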

2.2 Determining the initial coordinate of the reflection point

As shown in Fig. 2, the reference flat and the LCD screen are placed perpendicular to the z-axis, and the pinhole of the camera is regarded as a point source. According to the law of reflection, the incident angle equals the reflection angle, which yields Eq. (2):

$$tan{\theta _x} = \frac{{{x_{mr}} - {x_c}}}{{z_{m2c}^{}}} = \frac{{{x_{sr}} - {x_{mr}}}}{{z_{m2s}^{}}}, tan{\theta _y} = \frac{{{y_{mr}} - {y_c}}}{{z_{m2c}^{}}} = \frac{{{y_{sr}} - {y_{mr}}}}{{z_{m2s}^{}}}.$$

Fig. 2. The calibration of the testing system.

Fig. 3. The transparent planar element's front and back surface reflections.

Equation (2) can be simplified as:

$${x_{mr}} = \frac{{{x_{sr}}\cdot z_{m2c}^{} + {x_c}\cdot z_{m2s}^{}}}{{z_{m2c}^{} + z_{m2s}^{}}}, {y_{mr}} = \frac{{{y_{sr}}\cdot z_{m2c}^{} + {y_c}\cdot z_{m2s}^{}}}{{z_{m2c}^{} + z_{m2s}^{}}},$$
where $({{x_{mr}},{y_{mr}}} )$ is the initial coordinate of the reflection point; $({{x_{sr}},{y_{sr}}} )$ is the coordinate of the LCD screen; ${z_{m2s}}$ and ${z_{m2c}}$ are the z-distances between the LCD and reference flat and between the pinhole and reference flat. After the initial coordinate of the reflection point is obtained, the next step is to place the front surface of the transparent element at the position of the reference flat.
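As a minimal sketch, Eq. (3) reduces to a weighted average of the screen coordinate and the pinhole coordinate (the function name below is illustrative):

```python
def initial_reflection_point(xsr, ysr, xc, yc, z_m2s, z_m2c):
    """Initial reflection-point coordinates on the reference flat, Eq. (3)."""
    xmr = (xsr * z_m2c + xc * z_m2s) / (z_m2c + z_m2s)
    ymr = (ysr * z_m2c + yc * z_m2s) / (z_m2c + z_m2s)
    return xmr, ymr
```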

2.3 Reconstruction of the front surface shape

For the transparent planar element, the ray emitted from point C is reflected off the front surface at point ${M_1}$ and then intersects the LCD screen at point ${S_1}$. At the same time, the ray refracted at point ${M_1}$ intersects the back surface at point ${M_2}$ and the front surface at point ${M_3}$, and finally strikes the LCD screen at point ${S_2}$, as shown in Fig. 3. Therefore, the intensity of a single pixel can be expressed as:

$$I(f) = A + {B_1}\cdot cos(2\pi \cdot {x_{s1}}\cdot f) + {B_2}\cdot cos(2\pi \cdot {x_{s2}}\cdot f), $$
where $A$ denotes the background intensity; ${x_{s1}}$ and ${x_{s2}}$ represent the coordinates of points ${S_1}$ and ${S_2}$; ${B_1}$ and ${B_2}$ denote the modulations of the front- and back-surface reflections; and $f$ represents the frequency of the fringe patterns displayed on the LCD screen. In practice, the modulation decreases as the fringe frequency increases [14], so a Gaussian low-pass model is adopted and Eq. (4) can be rewritten as:
$$I(f) = A + {B_1}\cdot {e^{ - {{(\frac{f}{{{f_{\max }}}})}^2}}}\cdot cos(2\pi \cdot {x_{s1}}\cdot f) + {B_2}\cdot {e^{ - {{(\frac{f}{{{f_{\max }}}})}^2}}}\cdot cos(2\pi \cdot {x_{s2}}\cdot f), $$
where ${f_{max}}$ denotes the maximum frequency.

The normalized screen coordinates ${\mu _1}$, ${\mu _2}$ and the number of fringe periods $\mathrm{\tau }$ can be defined as:

$${\mu _1} = \frac{{{x_{s1}}}}{L}, {\mu _2} = \frac{{{x_{s2}}}}{L}, \tau = L\cdot f,$$
where L denotes the screen width. Equation (5) can be expressed as Eq. (7):
$$I(\tau ) = A + {B_1}\cdot {e^{ - {{(\frac{\tau }{{{\tau _{\max }}}})}^2}}}\cdot cos(2\pi \cdot {\mu _1}\cdot \tau ) + {B_2}\cdot {e^{ - {{(\frac{\tau }{{{\tau _{\max }}}})}^2}}}\cdot cos(2\pi \cdot {\mu _2}\cdot \tau ). $$
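The single-pixel intensity sequence of Eq. (7) can be generated directly. The sketch below uses illustrative parameter values only (A, B1, B2, µ1, µ2 and the τ range follow the orders of magnitude used later in Section 3, not measured data):

```python
import numpy as np

def intensity_sequence(mu1, mu2, tau, A=0.5, B1=0.2, B2=0.19, tau_max=200.0):
    """Single-pixel intensity versus the number of fringe periods tau, Eq. (7)."""
    envelope = np.exp(-(tau / tau_max) ** 2)      # Gaussian contrast fall-off
    return (A
            + B1 * envelope * np.cos(2 * np.pi * mu1 * tau)
            + B2 * envelope * np.cos(2 * np.pi * mu2 * tau))

tau = np.arange(0.2, 200.0 + 1e-9, 0.2)           # tau sampled as in Section 3
I = intensity_sequence(mu1=0.21, mu2=0.24, tau=tau)
```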

In order to obtain ${\mu _1}$ and ${\mu _2}$, the power spectrum distribution $G(\mu )$ of $I(\tau )$ can be calculated using the Fourier transform, as shown in Eq. (8):

$$\left\{ {\begin{array}{c} {\mathrm{{\cal F}}(\mu ) = \int\limits_{ - \infty }^{ + \infty } {I(\tau )\cdot } w(\tau )\cdot {e^{( - i2\pi \mu \cdot \tau )}}d\tau }\\ {G(\mu ) = {{|{\mathrm{{\cal F}}(\mu )} |}^2}} \end{array}} \right., $$
where $\mathrm{{\cal F}}(\mu )$ denotes the Fourier transform of $I(\tau )$ and $w(\tau )$ represents the rectangular window function. The coordinates of points ${S_1}$ and ${S_2}$ can be calculated by recognizing the peak positions of the spectral lines in the power spectrum distribution.
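A discrete version of Eq. (8) can be evaluated with an FFT, after which ${\mu _1}$ and ${\mu _2}$ are read off as the positions of the two dominant peaks. The sketch below continues the simulated sequence above; the rectangular window is implicit in the finite record, and the zero-padding factor is only an assumption used to refine the peak grid.

```python
import numpy as np
from scipy.signal import find_peaks

def power_spectrum_peaks(I, d_tau, n_pad=8):
    """Estimate (mu1, mu2) as the two strongest peaks of the power spectrum G(mu)."""
    I0 = I - I.mean()                        # remove the background (DC) term
    n = len(I0) * n_pad                      # zero padding refines the peak grid
    G = np.abs(np.fft.rfft(I0, n=n)) ** 2    # power spectrum, Eq. (8)
    mu = np.fft.rfftfreq(n, d=d_tau)         # normalized screen-coordinate axis
    peaks, props = find_peaks(G, height=0.0)
    top2 = peaks[np.argsort(props["peak_heights"])[-2:]]
    return np.sort(mu[top2])

# continuing the simulated I(tau) from the previous sketch:
# mu1_est, mu2_est = power_spectrum_peaks(I, d_tau=0.2)
```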

Since the power spectrum estimation is a biased estimate for a random sequence [21], a nonlinear optimization algorithm is used to correct the effect of spectrum leakage. The normalized screen coordinates ${\mu _1}$ and ${\mu _2}$ are therefore used as the initial values of the nonlinear optimization. The optimization process is described as follows and shown in Fig. 4 (a code sketch is given after the list):

  • 1. The normalized coordinate ${\mu _1}$ and ${\mu _2}$ can be obtained by power spectrum estimation;
  • 2. The $\textrm{A}$, ${B_1}$ and $\,{B_2}$ can be calculated by non-negative least squares;
  • 3. The model value I can be obtained by substituting $\textrm{A}$, ${B_1}$, $\,{B_2}$, ${\mu _1}$ and ${\mu _2}$ into Eq. (7);
  • 4. The difference between the measured value ${I_{test}}$ and the model value I is minimized by optimizing ${\mu _1}$ and ${\mu _2}$;
  • 5. The optimized normalized screen coordinates ${\mu _1}$ and ${\mu _2}$ are output, then the screen coordinates ${x_{\textrm{s}1}}$ and ${x_{\textrm{s}2}}$ can be acquired.
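Steps 1-5 can be sketched with standard least-squares tools under the Gaussian-envelope model of Eq. (7); the choice of SciPy routines below is ours and is only meant to illustrate the structure of the optimization, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import nnls, least_squares

def refine_mu(I_test, tau, mu_init, tau_max=200.0):
    """Refine (mu1, mu2) for one pixel by minimizing the misfit of Eq. (7)."""
    env = np.exp(-(tau / tau_max) ** 2)

    def design(mu):
        # columns: background A, front-surface term B1, back-surface term B2
        return np.column_stack([np.ones_like(tau),
                                env * np.cos(2 * np.pi * mu[0] * tau),
                                env * np.cos(2 * np.pi * mu[1] * tau)])

    def residual(mu):
        coeffs, _ = nnls(design(mu), I_test)   # step 2: A, B1, B2 >= 0
        return design(mu) @ coeffs - I_test    # steps 3-4: model minus data

    result = least_squares(residual, x0=np.asarray(mu_init), bounds=(0.0, 1.0))
    return result.x                            # step 5: optimized (mu1, mu2)
```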

Fig. 4. The optimization process of the screen coordinate.

Fig. 5. The ray tracing of the back surface.

The coordinates of the points ${S_1}({{x_{s1}},{y_{s1}}} )$ and ${S_2}({{x_{s2}},{y_{s2}}} )$ are obtained using power spectrum estimation. To obtain the coordinates and surface shape of the SUT, an iterative reconstruction strategy is built from Eq. (1), Eq. (9) and Eq. (10), in which the coordinates of the SUT are repeatedly corrected; the coordinates and surface shape of the SUT are then obtained [22].

$${x_m} = {x_{mr}} - W\cdot \frac{{{x_{mr}} - {x_c}}}{{{z_{m2c}}}},{y_m} = {y_{mr}} - W\cdot \frac{{{y_{mr}} - {y_c}}}{{{z_{m2c}}}},$$
$$W = \int_{} {(tan{\alpha _x}dx + tan{\alpha _y}} dy). $$
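The integration of Eq. (10) is usually carried out modally: the measured slopes are fitted with the x- and y-gradients of the Zernike polynomials, and the fitted coefficients are then applied to the polynomials themselves [20]. The sketch below assumes that the gradient matrices Zx, Zy and the polynomial matrix Z have already been sampled at the valid pixels; how these matrices are generated is left open.

```python
import numpy as np

def integrate_slopes(tan_ax, tan_ay, Zx, Zy, Z):
    """Modal slope integration for Eq. (10).

    tan_ax, tan_ay : measured x- and y-slopes, flattened over valid pixels
    Zx, Zy         : x- and y-gradients of the Zernike terms (pixels x terms)
    Z              : Zernike polynomials sampled at the same pixels
    Returns the reconstructed surface shape W at those pixels.
    """
    A = np.vstack([Zx, Zy])                         # both slope equations at once
    b = np.concatenate([tan_ax, tan_ay])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares Zernike fit
    return Z @ coeffs
```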

2.4 Reconstruction of the back surface shape

The back surface shape can be reconstructed using the inverse ray-tracing method and nonlinear optimization in three steps as shown in Fig. 5:

  • Step 1: The refraction at the front surface.

    According to the law of refraction, the direction cosine of the refracted ray $\overrightarrow {{t_1}} = ({{t_{1x}},{t_{1y}},{t_{1z}}} )$ can be determined by substituting the incident ray $\overrightarrow i $, the normal of the front surface $\overrightarrow {{n_1}} $ and the refractive index ${n_0}$ into Eq. (11):

    $$\overrightarrow {{t_1}} = \frac{1}{{{n_0}}}\left\{ {\overrightarrow i + [\sqrt {n_0^2 - 1 + {{(\overrightarrow i \cdot \overrightarrow {{n_1}} )}^2}} - \overrightarrow i \cdot \overrightarrow {{n_1}} ]\cdot \overrightarrow {{n_1}} } \right\}, $$
    where the normal of the front surface can be expressed as $\overrightarrow {{N_1}} = \left( {\frac{{\partial {Z_{M1}}}}{{\partial {X_{M1}}}},\frac{{\partial {Z_{M1}}}}{{\partial {Y_{M1}}}}, - 1} \right)$ and $\overrightarrow {{n_1}} = {{\overrightarrow {{N_1}} } \mathord{/ {\vphantom {{\overrightarrow {{N_1}} } {|{\overrightarrow {{N_1}} } |}}} } {|{\overrightarrow {{N_1}} } |}}$ is the unit normal of the front surface [23]. The direction cosine of the incident ray $\overrightarrow i $ is determined by the calibration process.

    It is assumed that the back surface shape can be expressed as ${Z_{M2}} = \widehat {{C_2}} \cdot Zernpoly({{X_{M2}},{Y_{M2}}} )$, where $Zernpoly({{X_{M2}},{Y_{M2}}} )$ denotes the Zernike polynomial basis [24] and $\widehat {{C_2}}$ is the vector of Zernike coefficients. The x- and y-coordinates of point ${M_2}$ then need to be determined.

    The reference plane is defined as a virtual plane perpendicular to the z-axis of the world coordinate system, as shown in Fig. 6(a). The reference plane is regarded as an initial guess of coordinate iteration [22], $Z = {Z_{m2r}}$. Direction cosine $\overrightarrow {{t_1}} $ and the coordinate of point ${M_1}$ are substituted into Eq. (12) to calculate the intersection of $\overrightarrow {{t_1}} $ and the reference plane:

    $$\left\{ \begin{array}{l} {X_{M2r}} = {X_{M1}} + \frac{{{t_{1x}}}}{{{t_{1z}}}}\cdot ({Z - {Z_{M1}}} )\\ {Y_{M2r}} = {Y_{M1}} + \frac{{{t_{1y}}}}{{{t_{1z}}}}\cdot ({Z - {Z_{M1}}} )\end{array} \right.. $$

    The x- and y-coordinates of point ${\textrm{M}_{2r}}$ are substituted into ${Z_{M2}} = \widehat {{C_2}} \cdot Zernpoly({{X_{M2r}},{Y_{M2r}}} )$ to obtain the back surface shape. Once the back surface is reconstructed, the x- and y-coordinates of point ${\textrm{M}_2}$ are corrected with Eq. (13):

    $$\left\{ \begin{array}{l} {X_{M2}} = {X_{M1}} + \frac{{{t_{1x}}}}{{{t_{1z}}}}\cdot ({{Z_{M2}} - {Z_{M1}}} )\\ {Y_{M2}} = {Y_{M1}} + \frac{{{t_{1y}}}}{{{t_{1z}}}}\cdot ({{Z_{M2}} - {Z_{M1}}} )\end{array} \right.. $$

    Then the coordinate of the corrected point ${M_2}$ is substituted back into ${Z_{M2}} = \widehat {{C_2}} \cdot Zernpoly({{X_{M2}},{Y_{M2}}} )$ to obtain the new back surface height. The flowchart of the coordinate iteration is shown in Fig. 6(b). The output is the coordinate of point ${M_2} = ({{X_{M2}},{Y_{M2}},{Z_{M2}}} )$ when:

    $$|{Z_{M2}^{j + 1} - Z_{M2}^j} |< {\varepsilon _1}, $$
    where ${\varepsilon _1}$ is the convergence threshold and j is the iteration index.

  • Step 2: The reflection at the back surface.

    After the coordinate of point ${M_2}$ is determined, the reflected ray direction cosine $\overrightarrow {{r_2}} = ({{r_{2x}},{r_{2y}},{r_{2z}}} )$ needs to be calculated. The $\overrightarrow {{t_1}} $ and the normal of the back surface $\overrightarrow {{n_2}} $ are substituted into Eq. (15) to calculate $\overrightarrow {{r_2}} $:

    $$\overrightarrow {{r_2}} = \overrightarrow {{t_1}} - 2(\overrightarrow {{t_1}} \cdot \overrightarrow {{n_2}} )\overrightarrow {{n_2}}, $$
    where the normal of the back surface can be expressed as $\overrightarrow {{N_2}} = \left( {\frac{{\partial {Z_{M2}}}}{{\partial {X_{M2}}}},\frac{{\partial {Z_{M2}}}}{{\partial {Y_{M2}}}}, - 1} \right)$ and $\overrightarrow {{n_2}} = {{\overrightarrow {{N_2}} } \mathord{/ {\vphantom {{\overrightarrow {{N_2}} } {|{\overrightarrow {{N_2}} } |}}} } {|{\overrightarrow {{N_2}} } |}}$ is the unit normal of the back surface.

    Since ${M_1}$ and ${M_3}$ both lie on the front surface, the z-coordinate of point ${M_3}$ can be expressed as ${Z_{M3}} = {C_1} \cdot Zernpoly({{X_{M3}},{Y_{M3}}} )$. The x- and y-coordinates of point ${M_3}$ can be calculated in the same way as those of point ${M_2}$ using the coordinate iteration algorithm.

  • Step 3: The determination of the screen coordinate.

    After the ray is refracted at point ${M_3}$, the direction cosine of the refracted ray $\overrightarrow {{t_2}} = ({{t_{2x}},{t_{2y}},{t_{2z}}} )$ needs to be determined. The normal of the front surface $\overrightarrow {{n_3}} $, the refractive index ${n_0}$ and $\overrightarrow {{r_2}} $ are substituted into Eq. (16) to calculate $\overrightarrow {{t_2}} $:

    $$\overrightarrow {{t_\textrm{2}}} = \left\{ {{n_0} \bullet \overrightarrow {{r_2}} - [\sqrt {1 - n_0^2 + {{(\overrightarrow {{r_2}} \cdot \overrightarrow {{n_3}} \cdot {n_0})}^2}} + \overrightarrow {{r_2}} \cdot \overrightarrow {{n_3}} \cdot {n_0}]\cdot \overrightarrow {{n_3}} } \right\}, $$
    where the normal of the front surface can be expressed as $\overrightarrow {{N_3}} = \left( {\frac{{\partial {Z_{M3}}}}{{\partial {X_{M3}}}},\frac{{\partial {Z_{M3}}}}{{\partial {Y_{M3}}}}, - 1} \right)$ and $\overrightarrow {{n_3}} = {{\overrightarrow {{N_3}} } \mathord{/ {\vphantom {{\overrightarrow {{N_3}} } {|{\overrightarrow {{N_3}} } |}}} } {|{\overrightarrow {{N_3}} } |}}$ is the unit normal of the front surface.

    After $\overrightarrow {{t_2}} $ is obtained, the coordinate of point ${M_3}$ is substituted into Eq. (17) to obtain the LCD screen coordinate $\widehat {{S_2}}({\widehat {{x_{s2}}},\widehat {{y_{s2}}}} )$ :

    $$\left\{ {\begin{array}{c} {\widehat {{x_{s2}}} = {X_{M3}} + \frac{{{t_{2x}}}}{{{t_{2z}}}}\cdot ({{Z_{s2}} - {Z_{M3}}} )}\\ {\widehat {{y_{s2}}} = {Y_{M3}} + \frac{{{t_{2y}}}}{{{t_{2z}}}}\cdot ({{Z_{s2}} - {Z_{M3}}} )} \end{array}} \right., $$
    where ${Z_{s2}}$ is equal to ${Z_{s1}}$. From Eqs. (11)-(17), the screen coordinate $\widehat {{S_2}}({\widehat {{x_{s2}}},\widehat {{y_{s2}}}} )$ is a function of $\widehat {{C_2}}$. The objective function is minimized by nonlinear optimization to obtain the back-surface coefficients $\widehat {{C_2}}$, after which the back surface shape can be reconstructed. The objective function is shown in Eq. (18):
    $$[{\widehat C_2}] = min|{\widehat S_2}(\widehat {{x_{s2}}},\widehat {{y_{s2}}}) - {S_2}({{x_{s2}},{y_{s2}}} )|. $$

This nonlinear least-squares problem can be solved with the Levenberg-Marquardt algorithm. The back-surface coefficients $\widehat {{C_2}}$ are output when:

$$min|{\widehat S_2}(\widehat {{x_{s2}}},\widehat {{y_{s2}}}) - {S_2}({{x_{s2}},{y_{s2}}} )| < {\varepsilon _2}, $$
where ${\varepsilon _2}$ is the condition of convergence.
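The three steps above define a forward model that maps a trial set of back-surface coefficients $\widehat {{C_2}}$ to predicted screen points $\widehat {{S_2}}$, and Eq. (18) is the least-squares misfit of this model. The sketch below shows only the skeleton of this procedure: the vector refraction/reflection helpers correspond to Eqs. (11), (15) and (16) in an equivalent standard form (the sign convention assumes unit normals pointing toward the incident medium and no total internal reflection), while the full ray trace with the coordinate iteration of Eqs. (12)-(14) is left to a user-supplied `predict_S2` callable.

```python
import numpy as np
from scipy.optimize import least_squares

def refract(d, n, mu):
    """Vector Snell refraction of unit ray d at unit normal n, mu = n_in / n_out
    (mu = 1/n0 when entering the glass, Eq. (11); mu = n0 when leaving it, Eq. (16))."""
    cos_i = -np.sum(d * n, axis=-1, keepdims=True)
    k = 1.0 - mu ** 2 * (1.0 - cos_i ** 2)          # assumes k > 0 (no TIR)
    return mu * d + (mu * cos_i - np.sqrt(k)) * n

def reflect(d, n):
    """Law of reflection at the back surface, Eq. (15)."""
    return d - 2.0 * np.sum(d * n, axis=-1, keepdims=True) * n

def fit_back_surface(c2_init, predict_S2, S2_measured):
    """Eq. (18): find the back-surface Zernike coefficients that make the
    ray-traced screen points predict_S2(c2) match the measured points S2."""
    def residual(c2):
        return (predict_S2(c2) - S2_measured).ravel()
    result = least_squares(residual, c2_init, method="lm")  # Levenberg-Marquardt
    return result.x
```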

Fig. 6. The process of the coordinate iteration algorithm: (a) the diagram of ray-tracing, (b) the flowchart of coordinate iteration algorithm.

3. Numerical simulation

The proposed method is verified by numerical simulation. An LCD screen with a resolution of 1600 × 1200 pixels is used to display the fringe patterns, and a camera with a focal length of 16 mm and a resolution of 1296 × 966 pixels is used to capture them, as shown in Fig. 7(a). In the simulation, the camera is treated as a point source and the LCD screen is regarded as an ideal plane perpendicular to the z-axis. The z-distances between the transparent element and the camera and between the transparent element and the LCD screen are both set to 825 mm. The refractive index of the transparent element, which has a thickness of 10 mm and a diameter of 51 mm, is assumed to be 1.5. The maximum of $\tau $ is limited by the falling contrast of the multi-frequency fringe patterns captured by the camera due to the modulation transfer function (MTF), and the sampling interval of $\tau $ can be determined from the Nyquist sampling theorem:

$$\frac{1}{{\Delta \tau }} \ge 2{\mu _{max}}, $$
where ${\mu _{max}}$ is the maximum of the normalized screen coordinate, which could be assumed to be 1. In this simulation, the number of fringe periods $\tau $ increases linearly from 0.2 to 200 with an interval of 0.2.
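As a quick check of Eq. (20): with ${\mu _{max}}$ taken as 1, the sampling interval must satisfy $\Delta \tau \le 0.5$, so the interval of 0.2 used here meets the criterion and corresponds to 1000 values of $\tau $ per fringe direction.

```python
mu_max = 1.0                        # assumed maximum normalized screen coordinate
d_tau_limit = 1.0 / (2 * mu_max)    # Eq. (20): largest admissible interval, 0.5
d_tau = 0.2                         # interval actually used (0.2 <= 0.5, so valid)
n_patterns = int(round((200.0 - 0.2) / d_tau)) + 1   # 1000 values of tau
```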

Fig. 7. The simulated process: (a) The simulated testing setup, (b) The superimposed fringe patterns in two directions, (c) The temporal intensity sequence of pixel (101,101), (d) The power spectrum distribution of (c).

Assuming that the front and back surface shapes are known in advance, the ray emitted from the pinhole of the camera is reflected off the front surface and then intersects the LCD screen at point ${S_1}$; at the same time, the ray refracted at the front surface intersects the back surface and finally strikes the LCD screen at point ${S_2}$ [25]. Two of the superimposed fringe patterns are shown in Fig. 7(b). To demonstrate the calculation process, pixel (101,101) in the simulation is taken as an example. The temporal intensity sequence of a single pixel is acquired by extracting its value from all superimposed fringe patterns with different $\tau $, as shown in Fig. 7(c). The power spectrum distribution of the temporal intensity sequence is calculated using the Fourier transform, as shown in Fig. 7(d). The temporal intensities and power spectrum distributions of the other pixels are acquired in the same way, pixel by pixel.

The spectral positions corresponding to the front and back surfaces lie close together near 0.2 and correspond to the two adjacent high peaks in the power spectrum distribution, as marked in Fig. 7(d). The normalized screen coordinates of points ${S_1}$ and ${S_2}$ are obtained by recognizing the peak positions of the spectral lines in Fig. 7(d). The coordinate of point ${S_1}$ is substituted into Eq. (1) to calculate the slope of the front surface, and 150-term Zernike polynomials are then used to fit the slope and reconstruct the front surface shape, as shown in Fig. 8. Figure 8(a) shows the front surface shape obtained by our proposal, and Fig. 8(b) shows the true surface shape. Figure 8(c) shows that the front surface shape error is 1.2 nm in root mean square (RMS).

Fig. 8. The front surface shape: (a) the front surface shape obtained by our proposal, (b) the true surface shape, (c) the surface error between (a) and (b).

In the next step, the back surface shape is reconstructed. The back surface is expressed as ${Z_{M2}} = \widehat {{C_2}} \cdot Zernpoly({{X_{M2}},{Y_{M2}}} )$, where $\widehat {{C_2}}$ is the unknown parameter and 150 Zernike terms are used. Figure 9 indicates that the value of the objective function declines along the iterations. When the objective function is less than the set threshold ${\varepsilon _2}$, the Zernike coefficients $\widehat {{C_2}}$ are output.

Fig. 9. Iterative optimization of the back surface.

The value of the objective function is plotted on a logarithmic scale in Fig. 9. The Zernike coefficients of the back surface are output after seventy optimization iterations (approximately 20 minutes). The result is shown in Fig. 10: Fig. 10(a) shows the back surface shape obtained by our proposal, Fig. 10(b) shows the true back surface shape, and Fig. 10(c) shows that the back surface shape error is 5.2 nm in RMS.

Fig. 10. The back surface shape: (a) the back surface shape obtained by our proposal, (b) the true surface shape, (c) the surface error between (a) and (b).

4. Experiment

The proposed method is further verified by experiment. As shown in Fig. 11, the experimental setup consists of a pinhole camera, the transparent element, and an LCD screen. A camera (Baumer TXG12) with a 16 mm focal-length lens and a resolution of 1296 × 966 pixels records the fringe patterns, and an LCD screen (MTIPH E-2M21GM) with a resolution of 1600 × 1200 pixels displays them. An optical platform is used for vibration isolation; the LCD screen, camera, and SUT are fixed on it. The laboratory is kept at a constant temperature and the humidity is below 50%. The entire experiment takes around 1.5 hours.

Fig. 11. Experimental setup of the testing system.

The window glass has a thickness of 10.159 mm, measured with the spherometer of a multi-function lens test station (LensMT-300). The measured aperture is around 51 mm in diameter, and the refractive index, obtained with an Abbe refractometer, is 1.5192.

4.1 Determining initial coordinate of the reflection point

The geometrical relationship of the testing system needs to be determined before the measurement. A point source microscope (PSM) mounted on an x-y-z stage is used to calibrate the testing system, as shown in Fig. 12(a). By moving the PSM along the x- and y-axes with the stage, the path of the objective focus defines a virtual plane, which serves as a reference plane for aligning the reference flat and the LCD screen: both are adjusted to coincide with this virtual plane. The PSM and the x-y-z stage are also used to determine the coordinates of the camera pinhole [26] and the z-distances between the camera pinhole and the reference flat and between the LCD screen and the reference flat.

Fig. 12. (a) The PSM and the x-y-z stage, (b) The setup of the testing system.

A high-quality flat (flatness better than 1/10 wavelength) without back-surface reflection is used as the reference flat. The phase-shifting fringe patterns reflected off the reference flat are captured by the camera, and the coordinates on the LCD screen are obtained with the phase-shifting algorithm. The coordinates of the camera pinhole and of the LCD screen are then substituted into Eq. (3) to obtain the initial coordinate of the reflection point.

4.2 Testing a high-quality flat to demonstrate the accuracy

The geometrical relationship of the testing system is established in Section 4.1. Another high-quality flat without back-surface reflection is measured to demonstrate the accuracy. The multi-frequency fringe patterns displayed on the LCD screen are captured by the camera, and the temporal intensity sequence is obtained by extracting all fringe patterns, as shown in Fig. 13(a). The power spectrum distribution of the temporal intensity sequence is calculated using the Fourier transform, as shown in Fig. 13(b). The normalized LCD screen coordinates are obtained by recognizing the highest peak position of the spectral line. The LCD screen coordinate and the initial coordinate of the reflection point are substituted into the iterative reconstruction strategy to obtain the coordinates and the surface shape of the optical flat.

Fig. 13. Power spectrum estimation: (a) Temporal intensity sequence of a single pixel, (b) Power spectrum distribution of (a) (pixel (101,101) in the experiment is taken as an example to demonstrate the calculation process).

Piston and tilt terms are removed to show the measured result of our proposal in Fig. 14(a). As a comparison, the result measured by a Fizeau interferometer is shown in Fig. 14(b), and the Zernike coefficients are compared in Fig. 14(c). The measurement accuracy of our proposal experimentally reaches 24.8 nm RMS. To further improve the measurement accuracy, issues such as the non-ideal camera model and the imperfect screen (non-planarity and refraction of the cover glass) should be addressed in future work.

Fig. 14. Results of high-quality flat: (a) The surface shape measured using our proposal, (b) The surface shape measured using Fizeau interferometer, (c) Comparison of Zernike coefficients of our proposal and Fizeau interferometer (Piston and tilt in all surface maps are removed).

4.3 Reconstruction of the front and back surface

a) The reconstruction of the front surface

For the measurement of the window glass, the PSM is used to place the front surface of the window glass at the same position as the reference flat. ${\pm} {45^\circ }$ diagonal fringe patterns are displayed on the screen, and $\tau $ is increased from 0.2 to 200 with an interval of 0.2. The multi-frequency fringe patterns displayed on the LCD screen are captured by the camera, as shown in Fig. 15.

Fig. 15. The superimposed patterns captured by camera.

Then the temporal intensity sequence can be obtained by extracting all superimposed fringe patterns, as shown in Fig. 16(a). The power spectrum distribution of the temporal intensity sequence can be calculated using Fourier transform, as shown in Fig. 16(b).

Fig. 16. Power spectrum estimation: (a) Temporal intensity sequence of a single pixel, (b) Power spectrum distribution of (a) (pixel (101,101) in the experiment is taken as an example to demonstrate the calculation process).

The spectral positions corresponding to the front and back surfaces lie close together near 0.2 and correspond to the two adjacent high peaks in the power spectrum distribution, as marked in Fig. 16(b). The normalized coordinates of points ${S_1}$ and ${S_2}$ are obtained by recognizing the peak positions of the spectral lines. The coordinate of point ${S_1}$ and the initial coordinate of the reflection point are substituted into the iterative reconstruction strategy to obtain the front surface shape.

Figure 17 shows the front surface shape with the piston and tilt terms removed. Figure 17(a) shows the front surface shape acquired by our proposal, and Fig. 17(b) shows the front surface shape acquired by the Fizeau interferometer. Figure 17(c) shows that the surface shape error is 51.8 nm in RMS. The comparison of the Zernike coefficients between Fig. 17(a) and Fig. 17(b) is given in Fig. 18. Compared with the Fizeau interferometer, the front surface shape errors are mainly defocus and astigmatism terms [27]. The surface RMS obtained by our proposal includes not only the surface shape but also systematic errors, so the accuracy can be improved by reducing the uncertainty of the coordinate measurement.

Fig. 17. The front surface shape: (a) the front surface shape obtained by our proposal, (b) the front surface shape obtained by the Fizeau interferometer, (c) the front surface error between (a) and (b) (Piston and tilt terms in all surface maps are removed).

Fig. 18. The comparison of the Zernike coefficients between our proposal and interferometer.

b) The reconstruction of the back surface

In the next step, the coefficients of the back surface shape are calculated using our proposal. It is assumed that the back surface can be represented by a 150-term Zernike polynomial. The value of the objective function is plotted on a logarithmic scale in Fig. 19. The coefficients of the back surface are output after ninety optimization iterations (approximately 20 minutes).

Fig. 19. Iterative optimization of the back surface.

Figure 20 shows the back surface shape with the piston and tilt terms removed. Figure 20(a) shows the back surface shape reconstructed by our proposal, and Fig. 20(b) shows the back surface shape obtained by the Fizeau interferometer. Figure 20(c) shows that the back surface shape error is 67 nm in RMS. The comparison of the Zernike coefficients between Fig. 20(a) and Fig. 20(b) is given in Fig. 21. The inaccuracy of the positional and structural parameters not only affects the result of the front surface, but also introduces additional trefoil and primary spherical terms into the result of the back surface.

Fig. 20. The back surface shape: (a) the back surface shape obtained by our proposal, (b) the back surface shape obtained by the Fizeau interferometer, (c) the back surface error between (a) and (b) (Piston and tilt terms in all surface maps are removed).

Fig. 21. The comparison of the Zernike coefficients of our proposal and interferometer.

5. Discussion

5.1 Error budget

It can be seen from Eq. (1) that the precision of deflectometry depends on the coordinates of the pinhole, the SUT, and the LCD screen [28]. All the degrees of freedom (DOF) in the PMD system therefore need to be investigated. There are three DOF (${x_c},{y_c},{z_c}$) for the pinhole camera. In the coordinate system shown in Fig. 7(a), a rotation of the SUT around the z-axis does not introduce an error into the measurement result; the result only needs to be rotated accordingly. Thus, the SUT has five DOF: three translations $({{x_{m1}},{y_{m1}},{z_{m1}}} )$ and two rotations (mirror tilt x, tilt y). The LCD screen has six DOF: three translations $({{x_s},{y_s},{z_s}} )$ and three rotations (screen tilt x, tilt y, tilt z). Together with the imaging noise, the multi-reflection off the back surface, and the thickness and refractive index of the transparent element, 18 parameters need to be analyzed. In this section, we discuss these factors in turn.

a) The error from the imaging noise

5% Gaussian random noise is added to the gray levels of the generated fringe patterns; the SNR of the noisy multi-frequency fringe patterns is 30 dB. The proposed method is then used to calculate the front and back surface shapes. Figure 22 shows that the RMS of the surface error is 1.2 nm for the front surface and 5.1 nm for the back surface. The simulated results demonstrate that the proposed method is robust to noise.
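A minimal way to reproduce this test is to perturb the simulated intensity sequences before the power spectrum estimation. In the sketch below the 5% level is referenced to the fringe modulation, which is our assumption about how the noise was scaled.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(I, level=0.05):
    """Add zero-mean Gaussian noise with std = level * fringe modulation; return
    the noisy sequence and the resulting SNR in dB."""
    modulation = 0.5 * (I.max() - I.min())
    noise_std = level * modulation
    noisy = I + rng.normal(0.0, noise_std, size=I.shape)
    snr_db = 10.0 * np.log10(np.mean((I - I.mean()) ** 2) / noise_std ** 2)
    return noisy, snr_db
```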

Fig. 22. The surface errors: (a) the front surface error, (b) the back surface error.

b) The error from the multi-reflection off the back surface

According to the Fresnel formulas, the reflectivity of a ray incident on the surface of the element at a given angle can be calculated. As shown in Fig. 23, the incident ray is reflected directly off the front surface of the transparent element (${r_1}$); at the same time, the ray refracted at the front surface undergoes one internal reflection (${r_2}$) or two internal reflections (${r_3}$) within the element. The relative intensities of these rays are ${r_1}$ = 4%, ${r_2}$ = 3.8%, and ${r_3}$ = 0.06%. The influence of the multi-reflection on the measured results is therefore analyzed by numerical simulation. The surface reconstruction results are shown in Fig. 24; the surface errors are 1.6 nm and 5.1 nm. Compared with Fig. 8(c) and Fig. 10(c), the error from the multi-reflection off the back surface can be ignored.
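The relative intensities of the rays in Fig. 23 follow from the Fresnel reflectance. The sketch below uses the normal-incidence approximation with n = 1.5, which reproduces the order of magnitude of the quoted values; the exact numbers depend on the actual incidence angle and polarization.

```python
n = 1.5
R = ((n - 1) / (n + 1)) ** 2   # normal-incidence Fresnel reflectance: 0.04 (4%)
T = 1 - R                      # transmittance of a single pass through a surface

r1 = R                         # direct reflection off the front surface: 4.0%
r2 = T * R * T                 # in -> back-surface reflection -> out: ~3.7%
# every further internal round trip is attenuated by another factor of R**2
# (~0.16%), so higher-order rays such as r3 carry a negligible share of the light
```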

Fig. 23. The intensity distribution of the reflected rays.

Fig. 24. The surface errors: (a) the front surface error, (b) the back surface error.

c) The error from the positional parameters

To determine the surface errors induced by the inaccuracy of each positional parameter, an uncertainty is added to each positional parameter [29,30]. The values obtained by ray tracing are taken as the true values. The slope data of the front surface are then calculated using Eq. (1) and integrated using the Zernike polynomials to obtain the front surface. Finally, the differences between the Zernike coefficients of the true front surface and those of the simulated results with different uncertainties are obtained.

Once each uncertainty and its resulting change are computed, the root square sum (RSS) is used to estimate the combined effect of the positional errors on the surface error [31], as listed in Table 1. When each positional parameter of the testing system is perturbed, the RSS shows that the surface errors appear mainly in the 4th-6th terms of the Zernike polynomials. With all the uncertainties considered, the RSS value of the front surface error is 42.2 nm, including defocus (Z4, 36.1 nm) and astigmatism (Z5, 19 nm; Z6, 10.8 nm).

When these surface errors are introduced into the front surface, the back surface is reconstructed using our proposal. Following the same analysis as for the front surface, the differences between the Zernike coefficients of the true back surface and those of the reconstructed results are obtained, as shown in Table 2. Compared with Table 1, the back surface errors are more complicated. With all the uncertainties considered, the RSS value of the back surface error is 55.1 nm. The RSS shows that the uncertainty of the positional parameters introduces additional defocus (Z4, 40.8 nm), astigmatism (Z5, 21.7 nm; Z6, 13.3 nm), and coma (Z7, 10 nm; Z8, 9.5 nm) terms into the back surface, as well as additional trefoil (Z9, 11.7 nm; Z10, 4.6 nm) and primary spherical (Z11, 19.2 nm) terms. The surface shape error from translational perturbation is smaller than that from rotational perturbation, so the rotational parameters of the system should be determined more accurately.
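The RSS combination used above is simply a quadrature sum of the Zernike-coefficient changes produced by the individual perturbations; for example, combining the quoted front-surface terms gives $\sqrt{36.1^2 + 19^2 + 10.8^2} \approx 42.2$ nm. A generic sketch:

```python
import numpy as np

def rss_budget(coefficient_changes):
    """Quadrature combination of sensitivity-analysis results.

    coefficient_changes: array of shape (n_parameters, n_zernike_terms) holding
    the change of each Zernike coefficient when one parameter is perturbed.
    Returns the per-term RSS and the overall RSS surface error."""
    changes = np.asarray(coefficient_changes, dtype=float)
    per_term = np.sqrt(np.sum(changes ** 2, axis=0))
    return per_term, np.sqrt(np.sum(per_term ** 2))
```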

Table 1. Sensitivity analysis for the front surface. Unit: nm.

Table 2. Sensitivity analysis for the back surface. Unit: nm.

d) The error from the structural parameters

When the proposed method is used to reconstruct the back surface, the refractive index and thickness are not included in the nonlinear optimization; they are obtained with an Abbe refractometer and a spherometer. The influence of the uncertainty of these structural parameters on the measured result therefore needs to be discussed. Uncertainties in the thickness and the refractive index are introduced artificially, and the reconstructed back surface is shown in Table 3. The back surface error is around 4 nm in RMS. Therefore, the thickness and refractive index can be measured multiple times and the averaged values substituted into the reconstruction to obtain the measured result.

Table 3. The back surface error about the structural parameter. Unit: nm.

e) The error budget

The error budget of our proposal is summarized in Table 4. With all the factors considered, the RSS of the front surface error is 42.2 nm and that of the back surface error is 55.4 nm.

Table 4. The error budget of our proposal. Unit: nm.

5.2 Reasons for choosing diagonal fringe patterns

In both simulation and experiment, the optical axis of the camera is aligned to be nearly perpendicular to the y-axis, and the angle between the optical axis and the z-axis is 15° in the simulation. Thus, the difference along the y-axis between the screen coordinates of points ${S_1}$ and ${S_2}$ is small. When vertical multi-frequency patterns are displayed, the power spectrum estimation fails to decouple the phase distributions of the superimposed fringe patterns. Figure 25 shows such a pair of screen coordinates. By transforming the horizontal (x) and vertical (y) directions into the +45° diagonal (${y_d}$) and -45° diagonal directions, the problem can be solved.
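One common way to express this transform is a 45° rotation of the screen coordinates; assuming this convention (the paper does not state it explicitly), and writing $x_d$ for the -45° diagonal coordinate, a label introduced here only for illustration,

$$y_d = \frac{x + y}{\sqrt 2},\qquad x_d = \frac{x - y}{\sqrt 2},$$

so that the separation between ${S_1}$ and ${S_2}$, which lies almost entirely along x, contributes equally to both diagonal coordinates and the two spectral peaks remain separable in either fringe direction.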

Fig. 25. The screen coordinate transformation (The red point represents the LCD coordinates of ${S_1}$ and the blue point represents the LCD coordinates of ${S_2}$).

5.3 Related measurement techniques

Several measurement techniques are related to the testing of transparent elements; their differences from the proposed method are discussed in this part.

a). Comparison with the interferometric testing.

Multi-surface interference is a problem when traditional phase-shifting interferometry is used to measure such optical elements. To avoid the influence of the superimposed fringe patterns on the measurement results, Deck [9] proposed Fourier-transform phase-shifting interferometry, which applies a Fourier transform to the fringe patterns to obtain the shapes of the front and back surfaces. Another commonly used method is wavelength-tuning interferometry [32]; it does not need to move any mechanical part of the system, which greatly simplifies the measurement device, and it also removes the influence of multi-surface interference. Compared with interferometry, our proposed method meets the requirement of sub-micron precision and has strong robustness to environmental disturbance.

b). Comparison with the deflectometric testing.

Double-surface reflection is also a problem when deflectometry is used to measure a transparent element, and the multi-frequency approach can decouple the superimposed patterns. The authors of Ref. [15] proposed a phase iteration algorithm to reconstruct the front and back surfaces of the transparent element; the selection of the initial values is critical to the reconstruction result.

6. Conclusion

This paper proposes a method for measuring the front and back surfaces of the transparent planar element simultaneously. The phase distributions of the superimposed fringe patterns are decoupled by MFD, the coordinates of the front and back surfaces are obtained by inverse ray tracing, and the Zernike coefficients of the back surface are acquired by nonlinear optimization. We verify the proposed method with simulation and experimental measurement. The simulation shows that the proposed method is theoretically feasible, and the effects of imaging noise and of the positional and structural parameters on the proposed method are analyzed. The measurement of window glass with a thickness of 10 mm is demonstrated: the error of the front surface shape is 51 nm in RMS and that of the back surface shape is 67 nm in RMS. Further investigation will address the in-situ measurement of the frequency-doubling crystal plate.

Funding

National Natural Science Foundation of China (U20A20215, 61875142); Sichuan University (2020SCUNG205).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. E. Bonanno, “Assembling and installing line-replaceable units for the National Ignition Facility,” Opt. Eng. 43(12), 2866–2872 (2004). [CrossRef]  

2. M. A. Lane, D. W. Larson, and C. R. Wuest, “NIF laser line-replaceable units (LRUs),” in Optical Engineering at the Lawrence Livermore National Laboratory II: The National Ignition Facility (2004).

3. M. C. Knauer, C. Richter, O. Hybl, J. Kaminski, C. Faber, and G. Hausler, “Deflectometry Rivals Interferometry,” Tm-Tech Mess 76(4), 175–181 (2009). [CrossRef]  

4. T. Guo, G. Zhao, D. Tang, Q. weng, C. Sun, F. Gao, and X. Jiang, “High-accuracy simultaneous measurement of surface profile and film thickness using line-field white-light dispersive interferometer,” Opt. Lasers Eng. 137, 106388 (2021). [CrossRef]  

5. E. Atad-Ettedgui, J. H. Burge, D. Lemke, W. Davison, H. M. Martin, and C. Zhao, “Development of surface metrology for the Giant Magellan Telescope primary mirror,” Advanced Optical and Mechanical Technologies in Telescopes and Instrumentation (2008).

6. E. Atad-Ettedgui, J. H. Burge, J. Antebi, L. B. Kot, H. M. Martin, D. Lemke, R. Zehnder, and C. Zhao, “Design and analysis for interferometric measurements of the GMT primary mirror segments,” in Optomechanical Technologies for Astronomy (2006).

7. P. H. Lehmann, G. Häusler, W. Osten, C. Faber, E. Olesch, A. Albertazzi, and S. Ettl, “Deflectometry vs. interferometry,” in Optical Measurement Systems for Industrial Inspection VIII (2013).

8. M. C. Knauer, J. Kaminski, and G. Hausler, “Phase Measuring Deflectometry: a new approach to measure specular free-form surfaces,” in Optical Metrology in Production Engineering, pp. 366–376 (2004).

9. L. L. Deck, “Fourier-transform phase-shifting interferometry,” Appl. Opt. 42, 2354–2365 (2003).

10. R. Y. Wang, D. H. Li, K. Y. Xu, X. W. Zhang, and P. Luo, “Parasitic reflection elimination using binary pattern in phase measuring deflectometry,” Opt. Commun. 451, 67–73 (2019). [CrossRef]  

11. M. C. K. Christian Faber and Gerd Häusler, “Can Deflectometry Work in Presence of Parasitic Reflections?” Proc. DGaO (2009).

12. T. Siwei, Y. Huimin, C. Hongli, W. Tianhe, C. Jiawei, W. Yuxiang, and L. Yong, “Elimination of parasitic reflections for objects with high transparency in phase measuring deflectometry,” Results Phys. 15, 102734 (2019). [CrossRef]  

13. Y.-C. Leung and L. Cai, “Untangling parasitic reflection in phase measuring deflectometry by multi-frequency phase-shifting,” Appl. Opt. 61, 208–222 (2022).

14. J. Q. Ye, Z. Q. Niu, X. C. Zhang, W. Wang, and M. Xu, “Simultaneous measurement of double surfaces of transparent lenses with phase measuring deflectometry,” Opt. Lasers Eng. 137, 106356 (2021). [CrossRef]  

15. J. Q. Ye, Z. Q. Niu, X. C. Zhang, W. Wang, and M. Xu, “In-situ deflectometic measurement of transparent optics in precision robotic polishing,” Precis. Eng. 64, 63–69 (2020). [CrossRef]  

16. R. Y. Wang, D. H. Li, L. Li, K. Y. Xu, L. Tang, P. Y. Chen, and Q. H. Wang, “Surface shape measurement of transparent planar elements with phase measuring deflectometry,” Opt. Eng. 57, 104104 (2018). [CrossRef]  

17. P. Su, R. E. Parks, L. Wang, R. P. Angel, and J. H. Burge, “Software configurable optical test system: a computerized reverse Hartmann test,” Appl. Opt. 49(23), 4404–4412 (2010). [CrossRef]  

18. C. Zuo, S. J. Feng, L. Huang, T. Y. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

19. K. E. D. Li, L. Yang, G. Guo, M. Li, X. Wang, T. Zhang, and Z. Xiong, “Novel method for high accuracy figure measurement of optical flat,” Opt. Lasers Eng. 88, 162–166 (2017). [CrossRef]  

20. C. Zhao and J. H. Burge, “Orthonormal vector polynomials in a unit circle, Part I: Basis set derived from gradients of Zernike polynomials,” Opt. Express 15(26), 18014–18024 (2007). [CrossRef]  

21. S. M. Kay and S. L. Marple, “Spectrum analysis—a modern perspective,” Proc. IEEE 69(11), 1380–1419 (1981).

22. R. Y. Wang, D. H. Li, and X. W. Zhang, “Systematic error control for deflectometry with iterative reconstruction,” Measurement 168, 108393 (2021). [CrossRef]  

23. L. Huang, J. Xue, B. Gao, C. McPherson, J. Beverage, and M. Idir, “Modal phase measuring deflectometry,” Opt. Express 24(21), 24649–24664 (2016). [CrossRef]  

24. R. J. Noll, “Zernike polynomials and atmospheric turbulence*,” J. Opt. Soc. Am. 66(3), 207–211 (1976). [CrossRef]  

25. D. Korsch and W. R. Hunter, “Reflective optics,” in Proceedings of SPIE - The International Society for Optical Engineering (1991).

26. P. Su, M. A. H. Khreishi, T. Su, R. Huang, M. Z. Dominguez, A. Maldonado, G. Butel, Y. Wang, R. E. Parks, and J. H. Burge, “Aspheric and freeform surfaces metrology with software configurable optical test system: a computerized reverse Hartmann test,” Opt. Eng. 53(3), 031305 (2013). [CrossRef]  

27. Y. Xu, F. Gao, and X. Jiang, “A brief review of the technological advancements of phase measuring deflectometry,” PhotoniX 1, 14 (2020).

28. P. Y. Chen, D. H. Li, Q. H. Wang, L. Li, K. Y. Xu, J. G. Zhao, and R. Y. Wang, “A method of sub-aperture slope stitching for testing flat element based on phase measuring deflectometry,” Opt. Lasers Eng. 110, 392–400 (2018). [CrossRef]  

29. R. Huang, P. Su, J. H. Burge, L. Huang, and M. Idir, “High-accuracy aspheric x-ray mirror metrology using Software Configurable Optical Test System/deflectometry,” Opt. Eng. 54(8), 084103 (2015). [CrossRef]  

30. R. Huang, P. Su, T. Horne, G. Brusa, and J. H. Burge, “Optical metrology of a large deformable aspherical mirror using software configurable optical test system,” Opt. Eng. 53(8), 085106 (2014). [CrossRef]  

31. R. Huang, “High precision optical surface metrology using deflectometry,” Ph.D. dissertation (The University of Arizona, 2015).

32. Y. Kim, K. Hibino, and M. Mitsuishi, “Interferometric profile measurement of optical-thickness by wavelength tuning with suppression of spatially uniform error,” Opt. Express 26(8), 10870–10878 (2018). [CrossRef]  
