
Imaging quality characterization and optimization of segmented planar imaging system based on honeycomb dense azimuth sampling lens array


Abstract

Segmented planar photoelectric imaging is an advanced computational imaging technology that utilizes photonic integrated circuits (PICs) to achieve the miniaturization of imaging systems. The original radial-spoke lens array has dense radial sampling but coarse azimuthal sampling. The sparsity and inhomogeneity of spatial frequency sampling lead to the loss of spatial frequency information and blurred reconstructed images. In this paper, a honeycomb dense azimuth sampling lens array is proposed, and three baseline pairing methods are designed, which realize dense azimuth sampling, effectively increase spatial frequency sampling, and improve the imaging quality. The signal transmission model of the segmented planar imaging system is established and the imaging process is simulated and analyzed. The simulation results show that the honeycomb lens array improves the azimuth sampling density and spatial frequency coverage, and its imaging quality is significantly better than that of the hexagonal lens array and the radial-spoke lens array. Furthermore, the optimal baseline pairing method and the allowable error range of the fill factor are also given in this paper. The results indicate that the mixed pairing method first ensures dense low and medium-frequency sampling and then increases high-frequency sampling, which makes its imaging results better than those of the other two baseline pairing methods in terms of image contour, contrast, and image detail information. The sampling density of the spatial frequency and the imaging quality can be improved by increasing the fill factor. In the actual manufacturing process, the allowable error of the fill factor of the lens array is within 5%. These results will provide theoretical support for the design and development of segmented planar imaging systems.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The quest for higher angular resolution in astronomy inevitably requires larger aperture telescopes, which further increases the size, weight, power consumption, and manufacturing difficulty of telescopes. Consisting of large optics, supporting structures, and precise thermal controls, conventional optical telescopes can be bulky, heavy, and power-consuming [1]. The diameter of the primary mirror of a space telescope is limited by the scaling law of manufacturing cost [2]. Radio astronomers have proposed synthetic aperture imaging technology based on interferometric imaging theory, which breaks through the limitation of the aperture size of the optical system and further improves the resolution. Examples include the CHARA array [3], the Very Large Telescope Interferometer [4], the Navy Precision Optical Interferometer [5], and the Very Long Baseline Interferometer [6]. The Segmented Planar Imaging Detector for Electro-optical Reconnaissance (SPIDER) is a typical ultra-thin computational photoelectric imaging system. SPIDER replaces the traditional optical telescope and digital focal plane detector array with a compact and dense interferometer imaging array, which can provide a large effective aperture for fine-resolution imaging while minimizing the system volume, mass, and cost [7–13]. SPIDER utilizes multiple baselines to sample the target visibility function in the spatial frequency domain and then digitally reconstructs the target image [14]. Therefore, the UV spatial frequency distribution can be optimized by optimizing the arrangement and pairing method of the lens array, thereby improving the imaging quality of the system.

Much related research has been carried out in recent years. Chu et al. proposed the concept of an adjustable baseline pairing method and established the corresponding algorithm [15]. Yu et al. designed a “checkerboard” imaging system with better imaging quality than the SPIDER imaging system [16], but uniform sampling of spatial frequencies can only be achieved in the horizontal and vertical directions. Lv et al. proposed an optimization scheme for the baseline pairing method [17]. Gao et al. proposed a hierarchical multistage sampling lens array and an inhomogeneous multistage sampling lens array, which can effectively improve the sampling of low and medium-frequency information [18,19]. However, an even number of short and middle radial interference arms leads to coincident spatial frequency sampling directions, so the sampled information is redundant. Ding et al. designed a densely arranged hexagonal lens array [20], but there are still gaps between adjacent lenses, and the aperture space is not fully utilized. Hu et al. proposed a segmented planar imager based on a dense azimuthal sampling lens array, which can effectively alleviate image artifacts and improve image quality [21]. However, an even number of interference arms leads to coincident sampling directions, which does not increase azimuth sampling, and the spatial frequency sampling is redundant. Although the arrangement of the lens array and the baseline pairing method have been optimized to some extent, the problem of sparse azimuth sampling of the spatial frequency still exists. The reconstructed image suffers from poor contrast, blurred detail information, low resolution, and image artifacts.

To address the above problems of the segmented planar imaging system, a honeycomb dense azimuth sampling lens array and its baseline pairing methods are proposed, which can increase the azimuth sampling density and the amount of sampled spatial frequency information, thereby improving the imaging quality. Based on the imaging principle, the signal transmission model of the segmented planar imaging system is established and the imaging process is numerically simulated. Meanwhile, the effects of different baseline pairing methods and of the fill factor of the lens array on the imaging quality are also investigated. The research results will provide a theoretical basis for improving and optimizing segmented planar imaging systems.

2. Structure of segmented planar imaging system based on honeycomb dense azimuth sampling lens array

2.1 Structure and imaging process of segmented planar imaging system

The structure and working principle of the segmented planar imaging system based on the honeycomb sampling lens array are shown in Fig. 1. The arrangement of the honeycomb lens array is difficult to map to 2D PIC, so the imaging system uses a combination of 3D PIC and 2D PIC, and the overall structure is a regular hexagonal pattern, as shown in Fig. 1(a). The imaging system consists of the sensor array and the information processing module. The sensor array is composed of the lens array, the waveguide array, the arrayed waveguide grating (AWG), the phase shifter and the 90-degree optical hybrid, where the optical waveguide array is integrated into the 3D PIC, and the latter three devices are integrated into the 2D PIC [22]. The information processing module includes the balanced quadrature detector and the digital signal processing part.

Fig. 1. Structure and working principle diagram of the segmented planar imaging system based on honeycomb sampling lens array.

The light field signals from a distant target are received and converged by the lens array, and coupled into the waveguide array in the 3D PIC. In the complex lens array arrangement and baseline pairing scheme, the problem of optical waveguide crossing occurs in PIC chips. Therefore, using 3D PIC for transmission matching of optical waveguides can avoid the problems caused by the crossing loss between optical waveguides and the crossing of interference baselines. The optical waveguide path in 3D PIC is specially designed, and computer-aided design can be used to achieve precise optical path matching. 3D PICs can be fabricated using methods such as ultrafast laser inscription (ULI), multilayer stacking, and planar PIC coupling [22]. Then the optical signal is transmitted through the waveguide and coupled into the corresponding 2D PIC. The broad spectral signal is divided into multiple narrowband spectral channels by the AWG. The phase shifter adjusts the phase of the light field signals output by the corresponding spectral channels of the two paired lenses to meet the interference conditions, and then transmits them to the 90-degree optical hybrid and the balanced quadrature detector for interference and signal measurement. UV spatial frequency coverage is obtained by digital signal processing. By the Van Cittert-Zernike theorem, the inverse Fourier transform is performed on the UV spatial frequency distribution to obtain the light intensity distribution of the target.

2.2 Honeycomb dense azimuth sampling lens array

Aiming at the problems of the low sampling rate of medium and high-frequency information and the sparse azimuthal sampling of existing lens arrays, a novel honeycomb dense azimuth sampling lens array is proposed, as shown in Fig. 2(a). The honeycomb lens array is composed of closely arranged regular hexagonal lenses, with neither gaps nor overlaps between adjacent lenses. Lenses with the same color are defined as the same layer, and the layer number N increases from the inside to the outside. The number of lenses in the Nth layer is 6N, so for a lens array with N layers the total number of lenses is L = 3N × (N + 1) + 1.
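
The lens-count relation can be checked with a short script. The following Python/NumPy sketch (the simulations in this paper use MATLAB; Python is used here purely for illustration, and the pitch value and lattice orientation are placeholder assumptions) generates the lens-center coordinates ring by ring and verifies L = 3N × (N + 1) + 1.

```python
import numpy as np

def honeycomb_centers(num_layers, pitch):
    """Centers of a hexagonally tiled (honeycomb) lens array.

    Layer 0 is the single central lens; layer n contributes 6*n lenses,
    so the total is 3*N*(N + 1) + 1 for N layers, as in the text.
    `pitch` is the center-to-center distance D between adjacent lenses.
    """
    # Unit vectors of the hexagonal lattice (orientation is an assumption).
    a1 = pitch * np.array([1.0, 0.0])
    a2 = pitch * np.array([0.5, np.sqrt(3.0) / 2.0])
    centers = [np.zeros(2)]                      # central lens
    for n in range(1, num_layers + 1):
        # Walk the ring of "radius" n: start at n*a1, then take n steps
        # along each of the six hexagonal directions.
        directions = [a2 - a1, -a1, -a2, a1 - a2, a1, a2]
        p = n * a1
        for d in directions:
            for _ in range(n):
                centers.append(p.copy())
                p = p + d
    return np.array(centers)

if __name__ == "__main__":
    N, D = 3, 1.0e-2                             # 3 layers, 10 mm pitch (illustrative)
    c = honeycomb_centers(N, D)
    assert len(c) == 3 * N * (N + 1) + 1         # 37 lenses for N = 3
    print(len(c), "lens centers generated")
```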

Fig. 2. Three different arrangements of sampling lens arrays. (a) The honeycomb lens array. (b) The hexagonal lens array. (c) The radial-spoke lens array.

Assuming that the distance between adjacent lenses is D, and the distance between the opposite sides of the regular hexagonal lens is l, as shown in Fig. 3, the fill factor of the lens array is defined as: FF = l/D.

Fig. 3. The fill factor of the honeycomb lens array.

The honeycomb lens array can be densely tiled without gaps between each lens, so the fill factor is 1. As shown in Fig. 2, under the same system aperture, the honeycomb lens array is more compact than the radial-spoke lens array and the hexagonal lens array, so the aperture space of the imaging system can be effectively utilized, and the larger aperture area can sample more spatial frequency information.

2.3 Baseline pairing method for lens array

The paired lenses form an interference baseline to sample the spatial frequency, so the imaging quality can be improved by optimizing the baseline pairing method to increase the spatial frequency sampling. Figure 4 shows three baseline pairing methods, in which the central lens is not paired with other lenses and mainly collects zero-frequency information of the target.

  • (1) Rotational pairing method

    As shown in Fig. 4(a), the lenses of the same layer in the lens array are paired in a rotating manner to form an interference baseline. In Fig. 5(a), the length of the baseline obtained by this pairing method is shorter, and the longest baseline is about half of the system aperture. It has advantages in sampling low and medium-frequency information.

  • (2) Centrosymmetric pairing method

    As shown in Fig. 4(b), in the same layer of lenses, two lenses that are centrosymmetric about the origin (0, 0) are paired to form an interference baseline. In Fig. 5(b), the baseline pairing method can obtain sparse short baselines, dense medium and long baselines, and can obtain the longest baseline in the lens array.

  • (3) Mixed pairing method

    As shown in Fig. 4(c), the 1st, 3rd, 5th, and other odd-numbered layers in the lens array adopt the rotational pairing method, and the 2nd, 4th, 6th, and other even-numbered layers adopt the centrosymmetric pairing method. In Fig. 5(c), this pairing method can obtain dense short and medium baselines and sparse long baselines, and it can obtain the longest baseline in the lens array (a minimal index-bookkeeping sketch of the three pairing rules follows this list).
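
As a rough illustration of the bookkeeping behind the three rules, the sketch below pairs lenses layer by layer. The exact rotation step of the rotational method is not specified above, so pairing each lens with the lens rotated by 60° (n positions along a ring of 6n lenses) is assumed here, and the centrosymmetric rule pairs diametrically opposite lenses. The resulting baseline count does not depend on that assumption: with 30 layers the sketch reproduces the numbers quoted in Fig. 5(c), 2791 lenses and 1395 baselines.

```python
import numpy as np

def ring_indices(n):
    """Global indices of the 6*n lenses in layer n (the central lens is
    index 0 and is never paired)."""
    start = 1 + 3 * (n - 1) * n                  # 1 + sum of 6k for k < n
    return np.arange(start, start + 6 * n)

def pair_layer(n, method):
    """Disjoint baseline pairs inside layer n for one pairing rule.

    Assumption: 'rotational' pairs each lens with the lens n positions
    further along the ring (a 60-degree rotation); 'centrosymmetric' pairs
    lenses 3n positions apart (diametrically opposite).
    """
    idx, m = ring_indices(n), 6 * n
    if method == "rotational":
        shift = n
        keep = [k for k in range(m) if (k // n) % 2 == 0]
    elif method == "centrosymmetric":
        shift = 3 * n
        keep = range(3 * n)
    else:
        raise ValueError(method)
    return [(idx[k], idx[(k + shift) % m]) for k in keep]

def pair_array(num_layers, method):
    """Baseline pairs for the whole array under one of the three methods."""
    pairs = []
    for n in range(1, num_layers + 1):
        rule = method
        if method == "mixed":                    # odd layers rotational,
            rule = "rotational" if n % 2 else "centrosymmetric"  # even centrosymmetric
        pairs += pair_layer(n, rule)
    return pairs

print(len(pair_array(30, "mixed")))              # -> 1395 baselines, as in Fig. 5(c)
```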

Fig. 4. Three baseline pairing methods for honeycomb lens array. (a) Rotational pairing method. (b) Centrosymmetric pairing method. (c) Mixed pairing method.

Fig. 5. The length distributions of the baselines corresponding to the three pairing methods. (a) Rotational pairing method. (b) Centrosymmetric pairing method. (c) Mixed pairing method (30 layers; 2791 lenses; 1395 baselines).

3. Signal transmission model of segmented planar imaging system

The detection target of the segmented planar imaging system is generally a far-field target, which is an incoherent light source with limited spectral width and limited expansion in space. The basic detection theory is the Van Cittert-Zernike theorem, which describes the mutual intensity distribution of the far-field incoherent imaging target at different observation points in the near-field.

The signal transmission process of the segmented planar imaging system is shown in Fig. 6. The detection target σ is an extended incoherent light source with limited spectral width, which can be regarded as a collection of many point light sources. We take the point light source O(ξ, η) with a spectral width of Δλ on the detection target σ as an example for analysis. The polychromatic light emitted by O(ξ, η) can be regarded as a combination of monochromatic lights with different wavelengths, and the propagation process from the object plane to the lens array plane is far-field diffraction.

Fig. 6. Schematic diagram of signal transmission of the segmented planar imaging system.

The diffraction light field distribution Uλ(x, y) of a monochromatic point light source with a wavelength of λ on the lens array plane can be expressed by the Fraunhofer diffraction formula as:

$${U_\lambda }({x,y} )= \frac{{{e^{jkz}} \cdot {e^{j\frac{k}{{2z}}({{x^2} + {y^2}} )}}}}{{j\lambda z}}\int {\int_{ - \infty }^{ + \infty } {{O_\lambda }({\xi ,\eta } ){e^{ - j\frac{k}{z}({x\xi + y\eta } )}}} } d\xi d\eta$$

Therefore, for a polychromatic point light source O(ξ, η) with a spectral width of Δλ, the light field distribution U(x, y) on the lens array plane can be expressed as the incoherent superposition of each monochromatic diffracted light field:

$$U({x,y} )= \sum\limits_\lambda {{U_\lambda }({x,y} )} = \sum\limits_\lambda {\left\{ {\frac{{{e^{jkz}} \cdot {e^{j\frac{k}{{2z}}({{x^2} + {y^2}} )}}}}{{j\lambda z}}\int {\int_{ - \infty }^{ + \infty } {{O_\lambda }({\xi ,\eta } ){e^{ - j\frac{k}{z}({x\xi + y\eta } )}}} } d\xi d\eta } \right\}}$$
where Σ represents the incoherent superposition, λ is the wavelength of the incident light, z is the distance between the object plane and lens array plane, and k is the wave number.
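
Numerically, Eq. (1) is typically evaluated with a single FFT, since the integral is a Fourier transform of the source evaluated at the spatial frequencies x/(λz) and y/(λz). The sketch below (Python/NumPy, with an arbitrary sampling grid; not the authors' MATLAB implementation) propagates one monochromatic source plane to the lens array plane; for a polychromatic source, the wavelength loop of Eq. (2) is added around it.

```python
import numpy as np

def fraunhofer_field(obj, wavelength, z, dxi):
    """FFT evaluation of Eq. (1): far-field diffraction of a monochromatic
    source plane onto the lens-array plane.

    `obj` is the sampled source O(xi, eta) on a square grid of spacing `dxi`.
    The returned field is sampled at x = wavelength * z * fx, where fx are
    the DFT frequencies, so the grids of different wavelengths differ.
    """
    k = 2 * np.pi / wavelength
    n = obj.shape[0]
    # The DFT approximates the Fourier integral over (xi, eta).
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(obj))) * dxi**2
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=dxi))
    x, y = np.meshgrid(wavelength * z * fx, wavelength * z * fx)
    prefactor = np.exp(1j * k * z) * np.exp(1j * k * (x**2 + y**2) / (2 * z))
    # For a polychromatic source, accumulate over wavelength channels (Eq. (2)).
    return prefactor / (1j * wavelength * z) * spectrum, x, y
```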

The lens array performs phase modulation on the incident wavefront. Ignoring the constant phase delay caused by the thickness of the lens, the transmittance of the mth lens in the lens array can be expressed as:

$${t_m}({x,y} )= {e^{ - j\frac{\pi }{{\lambda f}}[{{{({x - {x_m}} )}^2} + {{({y - {y_m}} )}^2}} ]}} \cdot P({x - {x_m},y - {y_m}} )$$
where f is the focal length of the lens, (xm, ym) is the central coordinate of the mth lens. The first term is the phase factor of the lens, which indicates the phase modulation of the lens to the incident wavefront, and P(x-xm, y-ym) is the pupil function of the lens, indicating the limitation of the lens on the size range of the incident wavefront.

After the phase modulation of the lens array, the light field distribution U'm(x, y) on the rear surface of the lens is:

$$U_m^{\prime}({x,y} )= U({x,y} )\cdot {t_m}({x,y} )$$

The propagation process from the rear surface of the lens to the focal plane of the lens is near-field diffraction. According to the Fresnel diffraction formula, the light field distribution Rm(u, v) at the focal plane of the lens can be expressed as:

$${R_m}({u,v} )= \frac{{{e^{jkf}} \cdot {e^{j\frac{k}{{2f}}({{u^2} + {v^2}} )}}}}{{j\lambda f}}\int {\int_{ - \infty }^{ + \infty } {{U_m}^{\prime}({x,y} )} } \cdot {e^{j\frac{k}{{2f}}({{x^2} + {y^2}} )}}{e^{ - j\frac{k}{f}({ux + vy} )}}dxdy$$

The waveguide array spatially samples the light field at the focal plane of the lens, as shown in Fig. 7, and the light field coupled to the waveguide numbered (i, j) can be expressed as:

$${R_{ij}}({u,v} )= {\eta _{ij}} \cdot {E_{ij}}({u + i{b_1},v + j{h_1}} )$$
where ηij represents the coupling coefficient between the focal-plane light field of the lens and the waveguide numbered (i, j), Eij(u + ib1, v + jh1) represents the fundamental mode field of the rectangular waveguide numbered (i, j), and ib1 and jh1 are the offsets relative to the central rectangular waveguide. According to the mode overlap integral, the coupling coefficient can be expressed as:
$${\eta _{ij}} = \frac{{\int\!\!\!\int {{R_m}({u,v} )} E_{ij}^\ast ({u + i{b_1},v + j{h_1}} )dudv}}{{\int\!\!\!\int {{{|{{E_{ij}}({u + i{b_1},v + j{h_1}} )} |}^2}dudv} }}$$
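
Equation (7) is a standard overlap integral and can be evaluated directly on the sampled focal-plane field. The sketch below uses a Gaussian approximation to the waveguide fundamental mode (an assumption made for brevity; the text uses the rectangular-waveguide mode) and checks the trivial case of identical, centered fields, for which the coupling coefficient is 1.

```python
import numpy as np

def coupling_coefficient(focal_field, mode_field, du, dv):
    """Overlap-integral coupling coefficient of Eq. (7).

    `focal_field` is the lens focal-plane field R_m sampled on a grid and
    `mode_field` is the waveguide mode E_ij, already shifted to the waveguide
    position (u + i*b1, v + j*h1), sampled on the same grid.
    """
    num = np.sum(focal_field * np.conj(mode_field)) * du * dv
    den = np.sum(np.abs(mode_field) ** 2) * du * dv
    return num / den

# Illustrative check with a Gaussian mode of hypothetical radius w:
u = np.linspace(-5e-6, 5e-6, 256)
du = u[1] - u[0]
U, V = np.meshgrid(u, u)
w = 1.5e-6
mode = np.exp(-(U**2 + V**2) / w**2)
print(abs(coupling_coefficient(mode, mode, du, du)))   # identical fields -> 1.0
```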

Fig. 7. Schematic diagram of waveguide array.

The polychromatic light field is transmitted to the AWG through the optical waveguide, and the incident light field is divided into multiple narrow spectral channels by the AWG. The focal field distribution at the receiving end of the output waveguide of the AWG can be expressed as:

$$E({u^{\prime},v^{\prime}} )\propto \frac{{{e^{jk\left\{ {{R_l} - \frac{{{{({u^{\prime}} )}^2} + {{({v^{\prime}} )}^2}}}{{2{R_l}}}} \right\}}}}}{{j\lambda {R_l}}}\mathrm{{\cal F}}{[{{E_{AW}}({{x_1},{y_1}} )} ]_{{f_x} = \frac{{u^{\prime}}}{{\lambda {R_l}}},{f_y} = \frac{{v^{\prime}}}{{\lambda {R_l}}}}} \cdot \sum\limits_{L = 1}^{{N_{AW}}} {\left\{ {{\eta_L} \cdot {e^{\frac{{ - j2\pi Ldu^{\prime}}}{{\lambda {R_l}}}}}} \right\}}$$
where Rl is the diameter of the Rowland circle, ηL is the coupling coefficient of the Lth arrayed waveguide, NAW is the number of arrayed waveguides, d is the spacing between adjacent arrayed waveguides, and EAW(x1, y1) is the fundamental mode field of the arrayed waveguide.

The light field coupled to the Nth output waveguide can be expressed as:

$${E_{ow,N}}({u^{\prime},v^{\prime}} )= {\eta _{ow,N}} \cdot {E_o}({u^{\prime} - N{d_o},v^{\prime}} )$$
where ηow,N is the coupling coefficient of the Nth output waveguide, do is the spacing between adjacent output waveguides, and Eo(u′-Ndo,v′) is the fundamental mode field of the Nth output waveguide.

After the light field signals are output from the output waveguide of the AWG, the phase shifter adjusts the phases of the light field signals output by the corresponding spectral channels of the two paired lenses, so that the two light field signals meet the interference conditions. The two light field signals are transmitted to the 90-degree optical hybrid and the balanced quadrature detector for interference and signal measurement. The input light fields ES and ER of the balanced quadrature detector can be expressed as:

$$\begin{aligned} &{E_S} = {E_s} \cdot {e^{j({{\omega_s}t + {\varphi_s}} )}}\\ &{E_R} = {E_r} \cdot {e^{j({{\omega_r}t + {\varphi_r}} )}} \end{aligned}$$
where ωs and ωr are the frequencies of the input light, φs and φr are the initial phases, and Es and Er are the amplitudes.

After the input light fields ES and ER pass through the 90-degree optical hybrid, the four output optical signals can be expressed as:

$$\left[ {\begin{array}{l} {{E_1}}\\ {{E_2}}\\ {{E_3}}\\ {{E_4}} \end{array}} \right] = \frac{1}{{\sqrt 2 }}\left[ {\begin{array}{cccc} 1&0&{ - j}&0\\ { - j}&0&1&0\\ 0&1&0&{ - j}\\ 0&{ - j}&0&1 \end{array}} \right]\left[ {\begin{array}{cccc} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&{{e^{ - j\frac{\pi }{2}}}} \end{array}} \right]\left[ {\begin{array}{cc} {\frac{1}{{\sqrt 2 }}}&0\\ {\frac{1}{{\sqrt 2 }}}&0\\ 0&{\frac{1}{{\sqrt 2 }}}\\ 0&{\frac{1}{{\sqrt 2 }}} \end{array}} \right]\left[ {\begin{array}{c} {{E_S}}\\ {{E_R}} \end{array}} \right] = \left[ {\begin{array}{l} {\frac{{{E_S} - j{E_R}}}{2}}\\ {\frac{{ - j{E_S} + {E_R}}}{2}}\\ {\frac{{{E_S} - {E_R}}}{2}}\\ {\frac{{ - j{E_S} - j{E_R}}}{2}} \end{array}} \right]$$

After photoelectric conversion, the four detector outputs can be expressed as:

$$\left\{ \begin{array}{l} {I_1} = \frac{1}{4}R[{E_s^2 + E_r^2 - 2{E_s}{E_r}\sin (\Delta \varphi )} ]\\ {I_2} = \frac{1}{4}R[{E_s^2 + E_r^2 + 2{E_s}{E_r}\sin (\Delta \varphi )} ]\\ {I_3} = \frac{1}{4}R[{E_s^2 + E_r^2 - 2{E_s}{E_r}\cos (\Delta \varphi )} ]\\ {I_4} = \frac{1}{4}R[{E_s^2 + E_r^2 + 2{E_s}{E_r}\cos (\Delta \varphi )} ]\end{array} \right.$$
where Δφ is the phase difference, and R is the responsivity of the detector. After differential amplification by a balanced quadrature detector, I and Q are
$$\begin{aligned} &Q = {I_2} - {I_1} = R{E_s}{E_r}\sin ({\Delta \varphi } )\\ &I = {I_4} - {I_3} = R{E_s}{E_r}\cos ({\Delta \varphi } )\end{aligned}$$

The amplitude and phase of the interference fringes can be obtained by

$$\begin{aligned} &\Delta \varphi = \arctan \left( {\frac{Q}{I}} \right)\\ &{E_s}{E_r} = \frac{Q}{{R\sin ({\Delta \varphi } )}} \end{aligned}$$
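
Equations (11)–(14) can be verified numerically for a single baseline: the four detector currents are formed from Eq. (12), balanced detection gives I and Q, and the fringe amplitude and phase follow. A minimal Python sketch is shown below; atan2 is used so that the quadrant of Δφ is preserved.

```python
import numpy as np

def iq_demodulate(E_s, E_r, delta_phi, R=1.0):
    """Numerical check of Eqs. (11)-(14) for one baseline."""
    # Four outputs of the hybrid + detectors, Eq. (12).
    I1 = 0.25 * R * (E_s**2 + E_r**2 - 2 * E_s * E_r * np.sin(delta_phi))
    I2 = 0.25 * R * (E_s**2 + E_r**2 + 2 * E_s * E_r * np.sin(delta_phi))
    I3 = 0.25 * R * (E_s**2 + E_r**2 - 2 * E_s * E_r * np.cos(delta_phi))
    I4 = 0.25 * R * (E_s**2 + E_r**2 + 2 * E_s * E_r * np.cos(delta_phi))
    # Balanced detection, Eq. (13).
    Q, I = I2 - I1, I4 - I3
    # Fringe phase and amplitude product, Eq. (14).
    phase = np.arctan2(Q, I)
    amplitude = np.hypot(I, Q) / R          # equivalent to Q / (R * sin(phase))
    return amplitude, phase

amp, ph = iq_demodulate(E_s=0.8, E_r=0.5, delta_phi=0.3)
print(amp, ph)                              # -> 0.4, 0.3 (E_s*E_r and delta_phi recovered)
```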

We obtain the amplitude and phase information from multiple sets of interference fringes, that is, the mutual intensity information of the target $J = {E_s}{E_r} \cdot {e^{j\Delta \varphi }}$. According to the Van Cittert-Zernike theorem, the mutual intensity J(x2,y2; x1,y1) of the lenses (x1, y1) and (x2, y2) is proportional to the Fourier transform of the light source intensity I(ξ, η), as shown in formula (15):

$$J({{x_2},{y_2};{x_1},{y_1}} )= \frac{{\exp ({j\psi } )}}{{{{\bar{\lambda }}^2}{z^2}}}\int\!\!\!\int {I({\xi ,\eta } )\exp \left\{ { - j\frac{{2\pi }}{{\bar{\lambda }z}}[{\Delta x\xi + \Delta y\eta } ]} \right\}} d\xi d\eta$$
where ψ is a phase factor, Δx = x2 − x1, and Δy = y2 − y1.

The intensity distribution of the target can be obtained by performing an inverse Fourier transform on the mutual intensity.
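
The reconstruction step can be sketched as follows: each measured mutual-intensity sample J is placed at its (u, v) spatial-frequency cell, the Hermitian-symmetric point receives the conjugate, and a 2-D inverse FFT yields the intensity estimate. Nearest-neighbour gridding and the grid scaling used here are simplifying assumptions for illustration, not the exact procedure used in the simulations.

```python
import numpy as np

def reconstruct(uv_points, visibilities, grid_size=256, uv_max=1.0):
    """Grid the measured mutual intensity and inverse-Fourier-transform it.

    `uv_points` are the (u, v) spatial-frequency coordinates of each baseline,
    `visibilities` the corresponding complex samples J = Es*Er*exp(j*dphi),
    and `uv_max` the largest |u| or |v| expected.
    """
    c = grid_size // 2
    grid = np.zeros((grid_size, grid_size), dtype=complex)
    scale = (c - 1) / uv_max
    for (u, v), J in zip(uv_points, visibilities):
        iu = c + int(round(u * scale))
        iv = c + int(round(v * scale))
        grid[iv, iu] += J
        if (iu, iv) != (c, c):                     # Hermitian symmetry of a real image
            grid[2 * c - iv, 2 * c - iu] += np.conj(J)
    image = np.fft.ifft2(np.fft.ifftshift(grid))
    return np.abs(np.fft.fftshift(image))
```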

4. Simulation results and analysis

Based on the signal transmission model of the segmented planar imaging system, a MATLAB simulation model is established, and the imaging performance is simulated. The imaging quality of the segmented planar imaging system based on the honeycomb lens array, the hexagonal lens array, and the radial-spoke lens array is analyzed and compared. The system parameters used for the simulation are shown in Table 1.

Table 1. System parameters used for simulations

The design parameters of the lens arrays are shown in Table 2. The numbers of lenses in the three lens arrays are nearly the same, and their longest baselines Bmax are also nearly equal. The arrangements of the three lens arrays are shown in Fig. 8.

Fig. 8. The arrangement of the three lens arrays. (a) The honeycomb lens array. (b) The hexagonal lens array. (c) The radial-spoke lens array.

Table 2. Design parameters of three lens arrays

The honeycomb lens array and the hexagonal lens array have the same geometric range (−0.1 m to 0.1 m). Due to the sparse arrangement of lenses, the radial-spoke lens array has a larger geometric range (−0.23 m to 0.23 m). The effective utilization ratios (UR) of the aperture space of the three lens arrays are 82.7%, 75%, and 14.3%, respectively. The higher the effective utilization ratio of the aperture space, the more frequency information can be sampled and the better the imaging quality.

The spatial frequency distributions of the three lens arrays are shown in Fig. 9. The simulation results show that the imaging system based on the honeycomb lens array samples more frequency information than those based on the hexagonal lens array and the radial-spoke lens array. Since the honeycomb lens array improves the azimuth sampling density of spatial frequency information and increases the sampling of medium and high-frequency information, the amount of sampled frequency information increases by 2.9% compared with the hexagonal lens array and by a factor of 1.5 compared with the radial-spoke lens array. The maximum frequency sampling distance of the radial-spoke lens array is L = 1.633, slightly greater than that of the honeycomb lens array and the hexagonal lens array, both of which are L = 1.578. This is because the longest baseline of the radial-spoke lens array is slightly larger than those of the other two lens arrays, as shown in Table 2. The longer the longest baseline, the larger the frequency sampling distance in the UV spatial frequency distribution. The frequency sampling distance of the honeycomb lens array with mutual intensity greater than 0.5 is l = 0.431, which is greater than those of the hexagonal lens array (l = 0.425) and the radial-spoke lens array (l = 0.375).

Fig. 9. The spatial frequency distributions of (a) honeycomb lens arrays, (b) hexagonal lens arrays, and (c) radial-spoke lens arrays.

Figure 10 shows the imaging results of the segmented planar imaging system based on the three different lens arrays. The imaging results of the imaging system based on the honeycomb lens array are superior to those of the hexagonal lens array and the radial-spoke lens array in terms of overall image contour, image contrast, and image detail information. The peak signal-to-noise ratio (PSNR) of the imaging result of the honeycomb lens array is 1.74% and 59.6% higher than that of the hexagonal lens array and the radial-spoke lens array, respectively. The structural similarity index (SSIM) of the imaging result of the honeycomb lens array is 22.8% higher than that of the hexagonal lens array and more than double that of the radial-spoke lens array. Therefore, the segmented planar imaging system based on the honeycomb lens array has better imaging performance.
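
For reference, the two quality metrics quoted above can be computed as in the short sketch below; scikit-image is used here as an assumed stand-in for the authors' MATLAB implementation, with both images normalized to [0, 1].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(reference, reconstructed):
    """PSNR and SSIM between the ground-truth intensity and a reconstruction.

    Both inputs are 2-D float arrays normalized to the range [0, 1].
    """
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=1.0)
    ssim = structural_similarity(reference, reconstructed, data_range=1.0)
    return psnr, ssim
```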

Fig. 10. (a) Intensity distribution of target light source. The imaging results of the segmented planar imaging system based on (b) the honeycomb lens array, (c) the hexagonal lens array, and (d) the radial-spoke lens array.

5. Performance optimization of segmented planar imaging system

The baseline pairing method and the fill factor of the honeycomb lens array have a significant impact on the imaging quality. Next, the influence of these two factors on the imaging quality is simulated and analyzed.

5.1 Simulation analysis of baseline pairing method

Now, the three baseline pairing methods proposed in Section 2.3 are simulated and compared. The spatial frequency distributions of the three baseline pairing methods are shown in Fig. 11. It can be seen that the rotational pairing method densely samples the low and medium-frequency information but lacks high-frequency sampling. The centrosymmetric pairing method densely samples medium and high-frequency information but lacks part of the low-frequency sampling. The mixed pairing method first ensures dense sampling of low and medium-frequency information and then sparsely samples high-frequency information. In the spatial frequency distribution, the maximum frequency sampling distances of the centrosymmetric pairing method and the mixed pairing method are both L = 1.578, which is greater than that of the rotational pairing method (L = 0.803). The frequency sampling distances with mutual intensity greater than 0.5 are l = 0.541, l = 0.534, and l = 0.534, respectively.

Fig. 11. The spatial frequency distribution with different baseline pairing methods. (a) Rotational pairing method (b) Centrosymmetric pairing method (c) Mixed pairing method.

Figure 12 shows the PSFs of the imaging system with different baseline pairing methods. The full width at half maximum (FWHM) of the PSFs of the rotational pairing method, the centrosymmetric pairing method, and the mixed pairing method are 0.124 m, 0.077 m, and 0.085 m, respectively. The PSFs of the centrosymmetric pairing method and the mixed pairing method have a narrower FWHM, but there are two symmetrical side lobes, which cause image artifacts in the reconstructed image and reduce the imaging quality.
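
The FWHM values quoted above can be extracted from a 1-D cut through the PSF by locating the two half-maximum crossings around the main lobe, for instance with linear interpolation as in the sketch below (it assumes the peak lies well inside the sampled window and that side lobes stay below half maximum).

```python
import numpy as np

def fwhm(x, psf_cut):
    """Full width at half maximum of a 1-D cut through the PSF."""
    half = psf_cut.max() / 2.0
    above = np.where(psf_cut >= half)[0]         # samples at or above half max
    lo, hi = above[0], above[-1]
    # Interpolate the crossing positions just outside the above-half region.
    x_left = np.interp(half, [psf_cut[lo - 1], psf_cut[lo]], [x[lo - 1], x[lo]])
    x_right = np.interp(half, [psf_cut[hi + 1], psf_cut[hi]], [x[hi + 1], x[hi]])
    return x_right - x_left

# Example: a Gaussian of standard deviation sigma has FWHM = 2.355 * sigma.
x = np.linspace(-1.0, 1.0, 2001)
print(fwhm(x, np.exp(-x**2 / (2 * 0.1**2))))     # ~0.2355
```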

Fig. 12. The PSFs of the segmented planar imaging system with different baseline pairing methods.

The imaging results of the segmented planar imaging system with different baseline pairing methods are shown in Fig. 13. The imaging results of the rotational pairing method and the mixed pairing method have better image contrast and clear image contours. Furthermore, Figs. 11(a) and 11(c) show that the mixed pairing method increases the sampling of medium and high-frequency information, so its imaging result is superior to that of the rotational pairing method in restoring detail information, as shown in Figs. 13(d) and 13(f). The centrosymmetric pairing method lacks some low-frequency sampling, resulting in poor image contrast and image artifacts. Therefore, the mixed pairing method is the optimal choice: its imaging results are better than those of the other two baseline pairing methods, and it can effectively improve the imaging quality.

Fig. 13. The imaging results of the segmented planar imaging system with (a) rotational pairing method, (b) centrosymmetric pairing method, (c) mixed pairing method; (d)-(f) are enlarged images of the detailed information in the center of the image.

5.2 Fill factor of the honeycomb lens array

Theoretically, there is no gap between adjacent lenses of the honeycomb lens array, and the fill factor is 1. Assembly errors are inevitable in manufacturing, resulting in gaps between adjacent lenses and reducing the fill factor of the lens array. The influence of different fill factors on the imaging quality of the system is therefore analyzed. Figure 14 shows the spatial frequency distributions for different fill factors. As the fill factor increases, the sampling density of the spatial frequency also increases, increasing the amount of sampled frequency information. Relative to the sampling density at a fill factor of 1, the sampling densities at fill factors of 0.6, 0.7, 0.8, 0.9, and 0.95 are 0.66, 0.77, 0.86, 0.94, and 0.97, respectively.

Fig. 14. The spatial frequency distributions of the segmented planar imaging system with fill factors of 0.6, 0.7, 0.8, 0.9, 0.95, and 1.

The PSF of the segmented planar imaging system with honeycomb lens array with different fill factors is shown in Fig. 15. As the fill factor of the lens array increases, the width of the main lobe and the amplitude of the side lobes of the PSF of the imaging system gradually decrease. The imaging results are shown in Fig. 16. As the fill factor of the lens array increases, the image contrast, overall image contour and detail information are also enhanced, image artifacts are weakened, and the imaging quality is improved. When the fill factor of the lens array is 0.95, the imaging quality is very close to that when the fill factor is 1. Therefore, in the actual manufacturing process, the allowable error range of the fill factor of the lens array is within 5%.

Fig. 15. The PSF of the segmented planar imaging system with honeycomb lens array with fill factors of 0.6, 0.7, 0.8, 0.9, 0.95 and 1.

Fig. 16. The imaging results of the segmented planar imaging system with honeycomb lens array with fill factors of 0.6, 0.7, 0.8, 0.9, 0.95 and 1.

6. Conclusion

In order to improve the azimuth sampling density of spatial frequency and increase the sampling of spatial frequency information, in this paper, a novel honeycomb dense azimuth sampling lens array with a high fill factor is proposed, and three baseline pairing methods are designed. The signal transmission model of the segmented planar imaging system based on the honeycomb dense azimuth sampling lens array is established and the imaging process is simulated and analyzed. The simulation results show that the imaging quality of the imaging system based on the honeycomb dense azimuth sampling lens array is better than that of the hexagonal lens array and the radial-spoke lens array, because the honeycomb dense azimuth sampling lens array increases the azimuth sampling density of the spatial frequency, and the high fill factor of the lens array can sample more frequency information. In addition, the effects of different baseline pairing methods and the fill factor of the lens array on the imaging quality are also studied. The results show that the mixed pairing method first ensures dense sampling of low and medium-frequency information, and then sparsely samples high-frequency information, which makes the imaging results better than those of the other two baseline pairing methods in terms of image contour, contrast and image detail information. With the increase of the fill factor of the lens array, the image contour, image contrast and image detail information are enhanced, and the image quality is improved. In the actual manufacturing process, the allowable error range of the fill factor of the lens array is within 5%. The research results provide a theoretical basis for the development of imaging systems.

Funding

Fundamental Research Funds for the Central Universities; National Natural Science Foundation of China (62005204, 62005206, 62075176).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. T. H. Su, R. P. Scott, C. Ogden, S. T. Thurman, R. L. Kendrick, A. Duncan, R. X. Yu, and S. J. B. Yoo, “Experimental demonstration of interferometric imaging using photonic integrated circuits,” Opt. Express 25(11), 12653–12665 (2017). [CrossRef]  

2. S. J. Chung, D. W. Miller, and O. L. de Weck, “ARGOS testbed: study of multidisciplinary challenges of future space borne interferometric arrays,” Opt. Eng. 43(9), 2156–2167 (2004). [CrossRef]  

3. T. A. Brummelaar, H. A. McAlister, S. T. Ridgway, W. G. Bagnuolo Jr, N. H. Turner, L. Sturmann, J. Sturmann, D. H. Berger, C. E. Ogden, R. Cadman, W. I. Hartkopf, C. H. Hopper, and M. A. Shure, “First Results from the CHARA Array. II. A Description of the Instrument,” Astrophys. J. 628(1), 453–465 (2005). [CrossRef]  

4. R. Petrov, F. Malbet, G. Weigelt, P. Antonelli, U. Beckmann, Y. Bresson, A. Chelli, M. Dugué, and G. Duvert, “AMBER, the near-infrared spectro-interferometric three-telescope VLTI instrument,” Astron. Astrophys. 464(1), 1–12 (2007). [CrossRef]  

5. J. T. Armstrong, D. Mozurkewich, L. J. Rickard, D. J. Hutter, J. A. Benson, P. F. Bowers, N. M. Elias II, C. A. Hummel, K. J. Johnston, D. F. Buscher, J. H. Clark III, L. Ha, L.-C. Ling, N. M. White, and R. S. Simon, “The navy prototype optical interferometer,” Astrophys. J. 496(1), 550–571 (1998). [CrossRef]  

6. J. P. Aufdenberg, A. Mérand, V. Coudé Du Foresto, O. Absil, E. Di Folco, P. Kervella, S. Ridgway, D. H. Berger, T. A. Ten Brummelaar, H. A. McAlister, J. Sturmann, L. Sturmann, and N. H. Turner, “First results from the CHARA array. VII. Long-baseline interferometric measurements of Vega consistent with a pole-on, rapidly rotating star,” Astrophys. J. 645(1), 664–675 (2006). [CrossRef]  

7. R. L. Kendrick, A. Duncan, C. Ogden, J. Wilm, D. M. Stubbs, S. T. Thurman, T. Su, R. P. Scott, and S. J. B. Yoo, “Flat-Panel space-based space surveillance sensor,” AMOS (2013).

8. R. P. Scott, T. Su, C. Ogden, S. T. Thurman, R. L. Kendrick, A. Duncan, R. Yu, and S. J. B. Yoo, “Demonstration of a photonic integrated circuit for multi-baseline interferometric imaging,” IEEE Photonics Conference (2014). [CrossRef]  

9. A. L. Duncan, R. L. Kendrick, C. Ogden, D. Wuchenich, S. T. Thurman, S. J. S. B. Yoo, T. H. Su, S. Pathak, and R. Proietti, “SPIDER: Next generation chip scale imaging sensor,” Advanced Maui Optical and Space Surveillance Technologies Conference (2015).

10. S. T. Thurman, R. L. Kendrick, A. Duncan, D. Wuchenich, and C. Ogden, “System design for a SPIDER Imager,” Frontiers in Optics, 2015. [CrossRef]  

11. N. K. Fontaine, R. P. Scott, L. Zhou, F. M. Soares, J. P. Heritage, and S. J. B. Yoo, “Real-time full-field arbitrary optical waveform measurement,” Nat. Photonics 14(4), 248–254 (2010). [CrossRef]  

12. T. J. Pearson and A. C. S. Readhead, “Image formation by self-calibration in radio astronomy,” Ann. Rev. Astron. Astrophys. 22(1), 97–130 (1984). [CrossRef]  

13. K. Badham, R. L. Kendrick, D. Wuchenich, C. Ogden, G. Chriqui, A. Duncan, S. T. Thurman, T. Su, W. Lai, J. Chun, S. Li, G. Liu, and S. J. B. Yoo, “Photonic integrated circuit-based imaging system for SPIDER,” in CLEO Pacific Rim (Optical Society of America, 2017), paper #3-3C-2. [CrossRef]  

14. T. H. Su and G. Y. Liu, “Interferometric imaging using Si3N4 photonic integrated circuits for a SPIDER imager,” Opt. Express 26(10), 12801–12812 (2018). [CrossRef]  

15. Q. H. Chu, Y. J. Shen, M. Yuan, and M. L. Gong, “Numerical simulation and optimal design of Segmented Planar Imaging Detector for Electro-Optical Reconnaissance,” Opt. Commun. 405, 288–296 (2017). [CrossRef]  

16. Q. H. Yu, B. Ge, Y. Li, Y. B. Yue, F. C. Chen, and S. L. Sun, “System design for a ‘checkerboard’ imager,” Appl. Opt. 57(35), 10218–10223 (2018). [CrossRef]  

17. G. M. Lv, Q. Li, Y. T. Chen, H. J. Feng, Z. H. Xu, and J. J. Mu, “An improved scheme and numerical simulation of segmented planar imaging detector for electro-optical reconnaissance,” Opt. Rev. 26(6), 664–675 (2019). [CrossRef]  

18. W. P. Gao, X. R. Wang, L. Ma, Y. Yuan, and D. F. Guo, “Quantitative analysis of segmented planar imaging quality based on hierarchical multistage sampling lens array,” Opt. Express 27(6), 7955–7964 (2019). [CrossRef]  

19. W. P. Gao, Y. Yuan, X. R. Wang, L. Ma, Z. Z. Zhao, and H. Yuan, “Quantitative analysis and optimization design of the segmented planar integrated optical imaging system based on an inhomogeneous multistage sampling lens array,” Opt. Express 29(8), 11869–11884 (2021). [CrossRef]  

20. C. Ding, X. C. Zhang, X. Y. Liu, H. R. Meng, and M. Xu, “Structure Design and Image Reconstruction of Hexagonal-Array Photonics Integrated Interference Imaging System,” IEEE Access 8, 139396–139403 (2020). [CrossRef]  

21. H. L. Hu, C. Y. Liu, Y. X. Zhang, Q. P. Feng, and S. Liu, “Optimal design of segmented planar imaging for dense azimuthal sampling lens array,” Opt. Express 29(15), 24300–24314 (2021). [CrossRef]  

22. S. J. B. Yoo, B. Guan, and R. P. Scott, “Heterogeneous 2D/3D photonic integrated microsystems,” Microsyst. Nanoeng. 2(1), 16030 (2016). [CrossRef]  
