
Frequency-domain compression imaging for extending the field of view of infrared thermometers

Open Access

Abstract

We propose a computational imaging technique for expanding the field of view of infrared thermometers. The contradiction between the field of view and the focal length has always been a chief problem for researchers, especially in infrared optical systems. Large-area infrared detectors are expensive and technically arduous to manufacture, which enormously limits the performance of infrared optical systems. On the other hand, the extensive use of infrared thermometers during the COVID-19 pandemic has created a considerable demand for infrared optical systems. Therefore, improving the performance of infrared optical systems and increasing the utilization of infrared detectors is vital. This work proposes a multi-channel frequency-domain compression imaging method based on point spread function (PSF) engineering. Compared with conventional compressed sensing, the proposed method images in a single exposure without an intermediate image plane. Furthermore, phase encoding is used without loss of image-plane illuminance. These properties can significantly reduce the volume of the optical system and improve the energy efficiency of the compressed imaging system, so its application to COVID-19 screening is of great value. We design a dual-channel frequency-domain compression imaging system to verify the feasibility of the proposed method. The wavefront-coded PSF and optical transfer function (OTF) are then used with the two-step iterative shrinkage/thresholding (TWIST) algorithm to restore the image and obtain the final result. This compression imaging method provides a new idea for large-field-of-view monitoring systems, especially infrared optical systems.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Since COVID-19 swept the world, the expenditure on prevention has been a significant burden for governments worldwide, and infrared thermometers account for a considerable share of it. Limited by current semiconductor integration processes, it is impossible to produce ultra-large-area CCD or CMOS sensors, and the limitation is even more severe for infrared detectors. In existing imaging systems, to meet strict requirements, we often have to sacrifice either the field of view (FOV) or the focal length, because increasing both the FOV and the focal length rapidly increases the image height. This significantly limits the performance of infrared thermometers and raises their cost. Therefore, it is necessary to find a way to design an imaging system with a large FOV and a long focal length, to provide a more extensive observation range and higher resolution.

Currently, the existing methods for resolving the contradiction between FOV and focal length come at the great expense of other imaging system performance, such as volume, bandwidth, edge-FOV image quality, and so on. Scanning is a good choice if a large enough detector cannot be obtained [1,2,3]. In a scanning imaging optical system, a high-speed scanning mechanism such as a scanning prism, rotating drum, or swing mirror is added to the optical path to scan the FOV, and an image stitching algorithm is used to form the final image. However, scanning systems tend to increase the system volume, complicate aberration correction, and reduce the temporal resolution; moreover, the complex electromechanical control system enormously reduces the system's robustness. When the temporal-resolution requirement is relatively high, a multi-camera array is a better choice [4,5,6]. For instance, AWARE-2 is a multiscale camera with a 120°-by-50° FOV and a 38-$\mu rad$ instantaneous FOV; it includes 98 cameras, each with a 14-megapixel sensor. But this immensely increases the system volume and bandwidth and places extreme pressure on system layout and data processing. In addition, Jun Zhu et al. proposed the concept of field focal length (FFL), which couples the focal length and the FOV. This method realizes the coexistence of a large FOV and a long focal length by varying the focal length across the FOV, but it sacrifices the resolution of the edge FOV [7]. The images obtained have considerable distortion, and the system aperture is restricted, which limits the working distance of the optical system.

Therefore, a new imaging theory is urgently needed to achieve both a large FOV and a long focal length without sacrificing other imaging system performance. Compressed sensing and point spread function (PSF) engineering can break through conventional imaging theory and give optical systems better performance. Compressed sensing goes beyond conventional Nyquist sampling theory and has many applications in imaging; it has given rise to a new research direction named compressive imaging, which offers new ways to resolve many contradictions in the imaging field. Jun Ke et al. used spatial compression imaging to improve the spatial resolution of a mid-wave infrared imaging system [8]. In addition, compression imaging is widely used in high-speed imaging [9,10,11] and compressive spectral imaging [12,13,14]. PSF engineering captures more object-space information by adjusting the PSF of the optical system, thus breaking the limits of conventional imaging theory. It has been extensively researched in the fields of 3D imaging [15,16], fluorescence microscopy [17,18], extending the depth of field [19,20], and anti-laser damage [21,22,26]. Neither technique alone has solved the contradiction between FOV and focal length in current optical systems, so it is both interesting and challenging to make full use of these novel technologies to break through this contradiction of conventional imaging.

In this article, we propose a new method to achieve the coexistence of a large FOV and a long focal length. It adopts a multi-channel configuration, realizes sparse sampling in the frequency-domain by PSF engineering, and uses a compressed sensing algorithm to restore the image. With the detector size and focal length fixed, imaging over a several-times-larger FOV is realized while the other parameters of the system remain unchanged. An example of an all-reflective dual-channel freeform surface imaging system is given below; it achieves twice the FOV with the same focal length and detector.

2. Multi-channel frequency-domain compression imaging model

The proposed multi-channel frequency-domain compression imaging (MFCI) method uses PSF engineering to compress the information obtained from multiple channels into single-channel imaging data. Firstly, hybrid image information is formed by the multi-channel imaging optical system through a shared image-plane design; the image information of each channel is encoded by PSF engineering. Secondly, the detector captures the hybrid image information. Finally, a compressed sensing recovery algorithm restores each channel's original image. In addition, if the FOVs of the optical system are continuous, the images recovered for each channel can be stitched into a long-focal-length, large-FOV high-definition image through an image stitching algorithm. Figure 1 shows the schematic diagram.

Fig. 1. Schematic diagram of MFCI encoding and decoding process.

2.1 Coding model of MFCI

To avoid increasing the system bandwidth, the image planes of the multi-channel imaging optical system are made to coincide, so the object-space information obtained by the multi-channel imaging optical system is necessarily aliased in the spatial-domain. It follows from the principles of compressed sensing that if the object-space information of each channel is to be restored, the sampling of each channel in some sparse domain must be different. Compression coding in the spatial domain reduces the image-plane illuminance, and a secondary-imaging structure enormously increases the system volume. The optical transfer function (OTF) of the optical system characterizes its sampling in the frequency-domain, which provides another way to separate the images of the channels.

The intensity $g(x,y)$ captured by the detector can be simplified as

$$g(x,y) = \sum\limits_i {{f_i}(x,y) \otimes PS{F_i}(x,y)}, $$
where ${\otimes}$ denotes convolution. ${f_i}(x,y)$ and $PS{F_i}(x,y)$ are the captured object-space information and the PSF of the $i$-th channel optical system, respectively. Applying the Fourier transform, Eq. (1) can be rewritten as
$$G(u,v) = {\cal F}[\sum\limits_i {{f_i}(x,y) \otimes PS{F_i}(x,y)} ] = \sum\limits_i {{F_i}(u,v)\ast OT{F_i}(u,v)}, $$
where $G(u,v)$ and ${F_i}(u,v)$ are the Fourier transforms of $g(x,y)$ and ${f_i}(x,y)$, respectively. $OT{F_i}(u,v)$ is the OTF of the $i$-th channel optical system. ${\cal F}[{\cdot} ]$ represents the Fourier transform.
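The forward model of Eqs. (1) and (2) can be simulated directly with FFTs. The following Python sketch (not part of the original work) illustrates Eq. (1) for two channels, assuming hypothetical Gaussian PSFs in place of the engineered PSFs; the circular convolution via FFT stands in for the optical blur of each channel.

```python
import numpy as np

def fft_convolve(img, psf):
    """Circular convolution via FFT, modelling f_i (x) PSF_i in Eq. (1)."""
    otf = np.fft.fft2(np.fft.ifftshift(psf), s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def mfci_forward(scenes, psfs):
    """Eq. (1): the detector records the sum of all blurred channel images."""
    return sum(fft_convolve(f, p) for f, p in zip(scenes, psfs))

# Hypothetical example: two 256x256 channel images and two Gaussian PSFs
# standing in for the engineered PSFs of the two channels.
rng = np.random.default_rng(0)
scenes = [rng.random((256, 256)) for _ in range(2)]
yy, xx = np.mgrid[-128:128, -128:128]
psfs = [np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in (2.0, 4.0)]
psfs = [p / p.sum() for p in psfs]   # normalize PSF energy
g = mfci_forward(scenes, psfs)       # aliased detector image g(x, y)
```

The resulting g is the single aliased frame that the detector records and that the decoding model of Section 2.2 must unmix.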

The image acquired by a single channel is itself complicated because the PSF differs between FOVs. The contribution of the object information obtained by the $i$-th channel optical system to the detector is

$${g_i}(x,y) = \int\limits_j {{f_{i,j}}({x_j},{y_j}) \otimes PS{F_{i,j}}({x_j},{y_j})}$$
where ${f_{i,j}}({x_j},{y_j})$ and $PS{F_{i,j}}({x_j},{y_j})$ are the captured object information and the PSF of the $j$-th FOV of the $i$-th channel optical system, and $({x_j},{y_j})$ are the coordinates of the $j$-th FOV on the image plane.

Therefore, to simplify the model and image restoration algorithm, the PSF of each FOV in a single channel should be consistent through PSF engineering. So Eq. (3) can be simplified as

$${g_i}(x,y) = {f_i}(x,y) \otimes PS{F_i}(x,y), $$
where the meaning of each quantity is the same as that of Eqs. (2) and (3). So the imaging model degenerates to Eq. (1).

Because the imaging beams of all FOVs coincide only at the pupil, a system with a uniform coding effect across FOVs can be obtained by placing a phase plate at the pupil. By adding different phase plates at the pupils of the channels to modulate the wavefronts of the imaging beams, different PSFs can be obtained. Therefore, PSF engineering can produce the different OTFs and PSFs required for distinct sampling in the frequency-domain. The encoded PSF can be written as

$$PSF = {|{{\cal F}[P(x,y)\cdot \exp (\textrm{i} \ast phase(x,y))]} |^2}, $$
where $P(x,y)$ is the pupil function of the optical system and $phase(x,y)$ is the phase distribution of the phase plate. The simulation and design of PSF engineering can be based on Eq. (5), as illustrated in the sketch below. Figure 2 shows four kinds of phase plates, cubic [19], vortex, axicon [21], and chiral [15], and their corresponding PSFs.
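As a concrete illustration of Eq. (5), the sketch below (our own, with an assumed grid size and cubic coefficient) computes the encoded PSF and MTF of a circular pupil carrying a cubic phase plate of the kind shown in Fig. 2(a).

```python
import numpy as np

N = 512                                   # pupil-plane samples (assumed)
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
P = (X**2 + Y**2 <= 1.0).astype(float)    # circular pupil function P(x, y)

alpha = 20 * np.pi                        # cubic coding strength (assumed)
phase = alpha * (X**3 + Y**3)             # cubic phase plate, as in Fig. 2(a)

# Eq. (5): PSF = |F[P(x, y) * exp(i * phase(x, y))]|^2
pupil = P * np.exp(1j * phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
psf /= psf.sum()

# The OTF and MTF referred to in the design requirements follow from the PSF.
otf = np.fft.fft2(np.fft.ifftshift(psf))
mtf = np.abs(otf) / np.abs(otf).max()
```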

Fig. 2. The phase distribution of the four kinds of phase plates of the cubic, vortex, axicon, and chiral and their corresponding PSF. (a) Cubic. (b) Vortex. (c) Axicon. (d) Chiral.

The multi-channel imaging optical system must meet the following requirements:

  • 1. The pupil or aperture of each channel must be separated.
  • 2. The image planes of each channel should coincide, and the image size should be the same.
  • 3. The FOVs can be continuous, and multiple FOVs can be spliced into a complete continuous FOV.

To realize the compression sampling in the frequency-domain and maximize the quality of reconstructed image, OTF and PSF of multi-channel imaging optical system must meet certain conditions:

  • 1. The PSFs should be such that the MTFs are greater than zero below the Nyquist sampling frequency; otherwise, information will be lost.
  • 2. The PSFs should make the sampling positions of the OTFs in the image frequency-domain different, so that the image information of each channel is independent in the frequency-domain and can be recovered by the restoration algorithm.
  • 3. The PSFs of the different FOVs within each channel should be as similar as possible, which significantly simplifies the design process and the image restoration algorithm.

The OTF represents the passband of the optical system. Regardless of how PSF engineering shapes the PSF, the OTF must retain a considerable value at low frequencies to avoid losing low-frequency object-space information. This causes the images acquired by the multi-channel imaging optical system to be seriously aliased at low frequencies. In the high-frequency region, however, the main sampling regions of the different channels are distinct, which provides the basis for separating the channel images in the MFCI optical system.

As the encoding intensity of the phase plate increases, the passband and amplitude of the OTF decrease, which reduces the single-channel image quality. At the same time, a stronger encoding makes the PSF of each FOV more consistent, so the image can be restored by a simple deconvolution. Most importantly, increasing the coding intensity and compressing the low-frequency response of the optical system weakens the low-frequency aliasing effect of the MFCI optical system, even though it degrades the raw image quality.

Therefore, in theory, the lower the MTFs of the channels below the Nyquist sampling frequency of the MFCI optical system, the better the final image restoration, provided that no zero-point appears below the Nyquist frequency and no information is lost. The image blur caused by poor MTFs does not affect the quality of the final recovered image, because the recovery is essentially a deconvolution problem.
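In practice, the two frequency-domain conditions above can be checked numerically during design. The following sketch, which assumes each channel's MTF has already been sampled on a centered detector frequency grid, tests for zeros inside the Nyquist circle and estimates how strongly two channels' OTFs overlap; the threshold values are illustrative assumptions.

```python
import numpy as np

def no_zeros_below_nyquist(mtf, nyquist_px, eps=1e-3):
    """Requirement 1: the (centered) MTF must stay above ~0 inside the
    Nyquist circle of radius nyquist_px pixels."""
    cy, cx = np.array(mtf.shape) // 2
    yy, xx = np.indices(mtf.shape)
    inside = (yy - cy)**2 + (xx - cx)**2 <= nyquist_px**2
    return bool(mtf[inside].min() > eps)

def otf_overlap(otf_a, otf_b):
    """Requirement 2: normalized overlap of two channels' frequency sampling;
    smaller values mean the channels sample the frequency-domain more
    independently."""
    a, b = np.abs(otf_a), np.abs(otf_b)
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))
```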

2.2 Decoding model of MFCI

Since it is undesirable to increase the system bandwidth, the proposed method makes the image planes of all channels of the multi-channel imaging optical system coincide, so that they share one image plane. This keeps the system bandwidth unchanged while multiplying the amount of information, but it also brings tremendous difficulty to image restoration. The multi-channel imaging optical system encodes the object information obtained by the different channels separately, which can also be understood as distinct sampling methods in the frequency-domain. These divergent coding methods ensure that the image restoration algorithm can recover the information of each channel from a single image. Since the information captured by the MFCI system is aliased in the spatial-domain, it must be converted to another transform domain to decode the image and separate the information of each channel. Therefore, an MFCI decoding model is developed based on a compressed sensing restoration algorithm.

From the forward process represented by Eq. (1), it can be seen that the decoding model is the inverse problem, which can be described as

$$\hat f = \arg \min ||{g - f \otimes PSF} ||_2^2 + \tau {\Phi _{TV}}(f ), $$
where ${||\cdot ||_2}$ is the ${l_2}$ norm, $f$, with size $({x,y,num} )$, represents the object-space information obtained by the $num$ channels, and $\hat f$ is the estimate of $f$. $PSF$, with size $({x,y,num} )$, represents the PSFs of the $num$ channels. $\tau$ is the regularization parameter, and ${\Phi _{TV}}(f )$ is the total variation (TV) regularization. Image deconvolution is applied to solve the inverse problem and obtain a smoother result; it is performed by converting the problem to the frequency-domain within the compressed sensing restoration algorithm. Since images with sharp edges are desired in the restoration, the nonisotropic discrete TV regularizer is used, which is given by
$${\Phi _{TV}}(f )= \sum\limits_y {\sum\limits_x {({|{{\Delta _x}f} |+ |{{\Delta _y}f} |} )} }, $$
where ${\Delta _x}$ and ${\Delta _y}$ denote the horizontal and vertical first-order local difference operators on the 2-D lattice without boundary corrections [23].
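For reference, a minimal implementation of the nonisotropic TV regularizer of Eq. (7), as used for $\Phi_{TV}$ in Eq. (6), could look like this:

```python
import numpy as np

def tv_nonisotropic(f):
    """Eq. (7): sum of absolute horizontal and vertical first-order
    differences on the 2-D lattice, without boundary corrections."""
    dx = np.abs(np.diff(f, axis=1))   # horizontal local differences (Delta_x f)
    dy = np.abs(np.diff(f, axis=0))   # vertical local differences   (Delta_y f)
    return float(dx.sum() + dy.sum())
```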

Converting to the frequency-domain, Eq. (6) can be rewritten as

$$\hat F = \arg \min ||{G - OTF\ast F} ||_2^2 + \tau {\Phi _{TV}}(F ), $$
where F is the Fourier transform of f and $OTF$ is the Fourier transform of $PSF$.

Therefore, Eq. (8) can be regarded as a linear inverse problem, and the compressed sensing algorithm can be used to restore the image. This method allows smoother and more robust image restoration and reduces low-frequency aliasing effects.
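The paper uses the TWIST algorithm of Ref. [23] to solve this problem. As a simpler stand-in (not the authors' implementation), the sketch below estimates the $num$ channel images jointly by proximal gradient descent, computing the data-fidelity gradient in the frequency-domain as in Eq. (8) and using an off-the-shelf TV denoiser as the proximal step; the step size and regularization weight are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle  # TV proximal step

def restore_mfci(g, psfs, n_iter=200, step=0.5, tau=0.02):
    """Jointly estimate the channel images f_i from the single detector image
    g by minimizing ||g - sum_i f_i (x) PSF_i||_2^2 + tau * TV(f)."""
    otfs = [np.fft.fft2(np.fft.ifftshift(p), s=g.shape) for p in psfs]
    f = [np.zeros_like(g, dtype=float) for _ in psfs]
    G = np.fft.fft2(g)
    for _ in range(n_iter):
        # Frequency-domain residual of Eq. (8): G - sum_i OTF_i * F_i
        R = G - sum(np.fft.fft2(fi) * Hi for fi, Hi in zip(f, otfs))
        for i, Hi in enumerate(otfs):
            grad = -np.real(np.fft.ifft2(np.conj(Hi) * R))  # d(data term)/df_i
            f[i] = f[i] - step * grad
            # TV proximal step (stands in for the TWIST shrinkage step)
            f[i] = denoise_tv_chambolle(f[i], weight=step * tau)
    return f
```

With the aliased detector image g and the channel PSFs from the forward-model sketch above, restore_mfci(g, psfs) returns estimates of the channel images, which can then be stitched if the FOVs are continuous.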

3. Design of dual-channel frequency-domain compression imaging optical system

Based on the method described in Section 2, a dual-channel frequency-domain compression imaging (DFCI) system is designed. It achieves twice the FOV when the detector parameters and focal length are fixed. The design of the DFCI system has three parts: the design of the dual-channel optical system, PSF engineering of the PSF and OTF of the dual-channel optical system, and image restoration.

3.1 Design of dual-channel imaging optical system

The design of dual-channel optical systems has been reported before. Dewen Cheng et al. pioneered the design of a reflective dual-channel foveated imaging optical system that combines two channels with different focal lengths and FOVs to achieve a large FOV and a long focal length [24]. However, this increases the difficulty of assembly and alignment, and using dual detectors multiplies the system bandwidth. Jun Zhu et al. demonstrated the design of three dual-channel imaging optical systems with different working distances, FOVs, and FOV-focal-length combinations, and systematically proposed design methods for these systems [25]. In these systems, the working channel is selected by switching the pupil, gating the two channels by moving a baffle. This increases the system's complexity, reduces the time resolution, and can result in the loss of maneuvering targets during channel switching.

Inspired by previous reports, an all-reflective dual-channel freeform surface imaging system has been designed. Specifications of the optical system are listed in Table 1.


Table 1. Specifications of the optical system

The design results of the all-reflective dual-channel freeform surface imaging system are shown in Fig. 3 and Fig. 4. The imaging system adopts a three-mirror structure, and the aperture stop is located at the secondary mirror. The primary and tertiary mirrors of the two channels are shared, while the secondary mirrors of the two channels are separated to keep the two apertures independent. The FOV size and focal length of the two channels are the same, so the image sizes are the same and the two image planes coincide, fully meeting the above requirements.

Fig. 3. The layout of the dual-channel imaging optical system.

Fig. 4. MTFs of dual-channel imaging optical system. (a) Channel one. (b) Channel two.

Both refractive and reflective optical systems can be used for MFCI, as long as they meet the requirements. Only a reflective dual-channel imaging optical system is shown here.

3.2 PSF engineering

Since the DFCI system has only two channels, we can easily make the frequency-domain sampling of each channel independent of the other; as the number of channels increases, PSF engineering becomes more and more difficult. Because the number of channels is small, it is also desirable for the DFCI optical system to provide additional functions and capture more object-space information. In this example, wavefront coding is selected for PSF engineering. Wavefront coding can extend the system's depth of focus, improve the tolerance to aberrations, and reduce the sensitivity to fabrication and alignment tolerances. Feng Yan and Xuejun Zhang verified the great application value of wavefront coding in off-axis three-mirror systems [27]. Chi-Feng Lee and Cheng-Chung Lee extended the depth of field by using wavefront coding in a reflecting telescope [28]. Wencai Zhou, Xiaoxiao Wei, et al. studied a computer-generated hologram (CGH) test method for the integrated coding unit in a reflective wavefront coding system [29].

The encoding of the DFCI system is realized by introducing cubic phases at different orientations on the two secondary mirrors. A genetic algorithm was used to optimize the encoded phases and obtain the final results; a sketch of such an optimization is given below. The cubic phase is shown in Fig. 2(a), the system structure in Fig. 5, and its imaging performance in Fig. 6.
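The genetic-algorithm optimization of the coding phases is not detailed here; one plausible sketch (our assumption, not the authors' procedure) is to let a global optimizer such as SciPy's differential evolution search the cubic coefficients and the relative orientation of the two channels, penalizing OTF overlap between the channels and MTF zeros below the Nyquist frequency:

```python
import numpy as np
from scipy.optimize import differential_evolution

N = 128                                          # pupil samples (kept small)
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
P = (X**2 + Y**2 <= 1.0).astype(float)           # shared circular pupil

def coded_mtf(alpha, theta):
    """Centered MTF of one channel: cubic phase of strength alpha, rotated
    by theta (theta = 45 deg gives the 45/135 deg orientation of channel two)."""
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    Yr = -X * np.sin(theta) + Y * np.cos(theta)
    pupil = P * np.exp(1j * alpha * (Xr**3 + Yr**3))
    psf = np.abs(np.fft.fft2(pupil))**2
    mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf / psf.sum())))
    return mtf / mtf.max()

def cost(params, nyq=30):
    a1, a2, th2 = params
    m1, m2 = coded_mtf(a1, 0.0), coded_mtf(a2, th2)
    yy, xx = np.indices((N, N))
    inside = (yy - N // 2)**2 + (xx - N // 2)**2 <= nyq**2
    overlap = float((m1 * m2)[inside].sum())              # requirement 2
    zero_penalty = 1e3 * (int(m1[inside].min() < 1e-3) +  # requirement 1
                          int(m2[inside].min() < 1e-3))
    return overlap + zero_penalty

bounds = [(5 * np.pi, 40 * np.pi), (5 * np.pi, 40 * np.pi), (0.0, np.pi / 2)]
result = differential_evolution(cost, bounds, maxiter=20, seed=0)
```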

Fig. 5. The layout of the DFCI.

Fig. 6. MTFs of the encoded dual-channel imaging optical system. (a) Channel one. (b) Channel two.

The encoded MTF of each FOV is almost the same, which meets the requirements of the MFCI model. The encoded MTF of the second channel is no longer oriented along 0° and 90° but along 45° and 135°. The PSFs and OTFs are shown in Fig. 7.

Fig. 7. Dual-channel imaging experiments. (a) Image simulation of conventional dual-channel gated imaging. (b) Image simulation of MFCI.

The two channels of the DFCI, with coincident image planes and simultaneous operation, are therefore equivalent to an F/1.75 LWIR system in terms of detection capability. Using the MFCI design method, a DFCI LWIR system whose single channel has a 4°×4° field of view, F/3.5, and an uncooled 256 × 256 detector with 15 µm pixels realizes an LWIR system with a 4°×8° field of view, F/3.5 resolution, and detection capability equivalent to F/1.75.

3.3 Image restoration

The decoding model established in Section 2 separates the image recorded by the DFCI system into the images captured by the two channels. The TWIST algorithm, a compressed sensing restoration algorithm, is used to restore the image encoded by the DFCI system. The restoration results are shown in Fig. 7. The image restoration experiments use pictures from the open-source COCO dataset [32].

Another advantage of using wavefront coding is that it can be combined with super-resolution algorithms to perform super-resolution reconstruction of the images [33]. Thus, further research could significantly improve the imaging resolution of the DFCI.


4. Discussion

The image obtained by the detector is the mixture of the object-space information captured by the two channels, as shown in Fig. 7(b). The compressed sensing image restoration algorithm provides a smooth image restoration method: it recovers the image information of the two channels by solving an ill-conditioned inverse problem. However, since the OTFs of the two optical systems overlap at low frequencies, artifacts appear in the restored image. These artifacts are not unavoidable; there are three ways to reduce them. The first is to adjust the OTF and MTF. The overlap of the two channels' OTFs at low frequencies causes the artifacts, so they can be reduced by lowering the MTF and OTF; as derived above, the lower the MTF and OTF, the better the quality of the recovered image. This approach is limited, however, because the MTF of the optical system cannot have a zero-point below the Nyquist sampling frequency without losing object-space information, which prevents the OTF from being degraded indefinitely. The second way is to use a better image restoration algorithm. The TWIST algorithm used in this paper has a simple principle and fast convergence, but it is an earlier algorithm and is inferior to more recent ones such as GAP [31] and DeSCI [30] in terms of restoration quality and artifact suppression, so better results may be obtained with them. In addition, deep learning may be an excellent solution to this problem. The last method is to move the image restoration to another transform domain. Owing to the characteristics of the OTF, low-frequency crosstalk is bound to occur in the Fourier frequency-domain, but complete separation may be achievable in another transform domain; sampling in a specific transform domain may impose different requirements on the OTF and PSF.

4.1 Feasibility verification

The experimental simulation results of the conventional channel-gating mode and the MFCI mode are shown in Fig. 7. The imaging simulation of the multi-channel imaging optical system in the conventional channel-gating mode is shown in Fig. 7(a). Each channel works at the diffraction limit, and the two channels work sequentially by switching the baffle at the aperture of the dual-channel imaging optical system. Although this minimizes the mutual interference of the two channels, it halves the time resolution of the system. The imaging process of the multi-channel optical system in MFCI mode is shown in Fig. 7(b). In the MFCI working mode, the MTF of each channel is poor because of the encoding, but the two channels work simultaneously and the time resolution remains unchanged. With the system bandwidth unchanged, MFCI acquires twice the amount of data. However, MFCI may cause crosstalk between the two channels; by improving the algorithm, the crosstalk can be greatly suppressed, and with algorithmic recovery the imaging quality can reach the diffraction limit.

4.2 Necessity verification

Two groups of comparative experiments are carried out to verify the necessity of the proposed method. In the first group, a diffraction-limited PSF and a defocused PSF are used for MFCI. One channel images directly without coding and is brought close to the diffraction limit through optical design, giving the diffraction-limited PSF. The other channel is defocused from the image plane through optical design; a defocused PSF can also be generated by introducing the defocus term of the Zernike aberrations at the aperture. The MTF of the channel producing the defocused PSF has no zero-point below the Nyquist sampling frequency. The experimental results are shown in Fig. 8(a): the recovery fails. The object-space information captured by the two channels is not separated but is all mixed into the restored image of channel 1, while the recovered image of channel 2 contains almost no information.

Fig. 8. Comparative experiment. (a) Image simulation using diffraction-limited OTF and defocused OTF of MFCI. (b) Image simulation using wavefront coded OTF and diffraction-limited OTF of MFCI.

The OTFs of the two channels in this control group violate the requirements of the MFCI model. Since the in-focus OTF covers the defocused OTF, the information obtained by the defocused channel is completely aliased into the in-focus channel in both the spatial and frequency domains. Therefore, it is impossible to separate the defocus-coded image from the image obtained by the detector. This also directly affects the image restoration of the in-focus channel, so the image restored by the in-focus channel contains the information of both channels, while the image restored by the defocused channel contains almost no information.

In the other group of comparative experiments, a wavefront-coded OTF and an in-focus OTF were used, which also fail to meet the requirements of the MFCI model. The final results for the two channels are the same as in the first group: the image of the channel with the smaller OTF coverage contains almost no information, while the other channel's image includes the information of both channels and cannot be separated.

The recovered image contains low-frequency crosstalk from the other channel, which degrades the image quality. To quantitatively evaluate the restored image, the structural similarity index (SSIM) is used to describe the image quality. Table 2 shows the SSIM of the two channels for the four groups of experiments.


Table 2. SSIM of the restored image in four group experiments

The SSIM values of the restored images are 0.6472 and 0.6747, slightly lower than those of conventional diffraction-limited imaging because of the crosstalk, but significantly higher than those of the two groups of comparative experiments.

4.3 Effect of image lightness on MFCI

After extensive experiments, we can speculate that lightness affects the recovery quality of MFCI. Understanding this effect helps identify suitable application environments for the proposed method and guides parameter adjustment for better recovery. In this experiment, the average gray level of the image is used to characterize the image's lightness, and SSIM is used as the evaluation indicator. Because the proposed method performs gray-scale stretching on the processed image, the original image is also gray-scale stretched before calculating the SSIM. As shown in Fig. 9, the experiment uses six pictures with different lightness and the same content; all images are from the open-source COCO dataset [32]. The recovery results are shown in Fig. 10 and Fig. 11.
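The evaluation protocol described above can be reproduced in a few lines. The sketch below is an assumed form of that protocol: it applies a simple min-max gray-scale stretch to both the reference and the restored image before computing SSIM, and reports the average gray level used to characterize lightness.

```python
import numpy as np
from skimage.metrics import structural_similarity

def gray_stretch(img):
    """Min-max gray-scale stretch to [0, 1] (assumed form of the stretching)."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def evaluate(restored, reference):
    """Average gray level of the reference and SSIM computed after both
    images have been gray-scale stretched, as described in the text."""
    avg_gray = float(reference.mean())
    ssim = structural_similarity(gray_stretch(reference),
                                 gray_stretch(restored),
                                 data_range=1.0)
    return avg_gray, ssim
```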

Fig. 9. A set of images with different lightness.

Fig. 10. Experimental recovery effect. (a)-(f) are the restoration effects.

Fig. 11. Analysis of experimental results. (a) The relationship between the restoration effect of channel 2 image and its average gray level. (b) The relationship between the restoration effect of channel 1 image and the average gray level of channel 2 image.

When the image lightness is low, the restoration quality of a channel is positively correlated with the average gray level of its image, while the restoration quality of the other channel is negatively correlated with it. This is because there is crosstalk between the channels. If the average gray level of an image is low and its detail is weak, the information is submerged in the crosstalk noise; conversely, the higher the average gray level of an image, the greater the crosstalk it imposes on the other channel's image. However, when the average gray level is too high, the image detail is again relatively weak and easily affected by crosstalk from the other channel. Therefore, the proposed method is well suited to high-resolution monitoring or large-field-of-view infrared thermal imaging. If the brightness of the images captured by both optical channels is relatively low, the exposure time of the detector can be adjusted or a more responsive detector can be selected; by properly adjusting the exposure time or the aperture stop, better restoration results can be obtained.

5. Conclusion

In summary, a new method is introduced in this work to solve the contradiction between the FOV and the focal length in conventional optical imaging theory, and we intend to apply it to infrared thermometers used during the COVID-19 pandemic. PSF engineering and compressed sensing are adopted. With a limited detector array, the proposed method can simultaneously achieve large-FOV and long-focal-length imaging with unaffected temporal resolution. In addition, this method differs from conventional infrared compression imaging: it performs encoding in the frequency-domain rather than intensity encoding in the spatial-domain, so it does not affect the illuminance received by the detector. The light collected by multiple channels converges on the same sensor, which can effectively improve the signal-to-noise ratio of infrared images, reduce the exposure time of the detector, and achieve high-frame-rate infrared imaging. The effect of image restoration, the effectiveness of the method, and the factors affecting the restoration are analyzed. In further work, newer algorithms such as GAP, DeSCI, and deep learning approaches are expected to be applied to suppress crosstalk and obtain better image restoration and faster processing.

Funding

Key Laboratory of Optical System Advanced Manufacturing Technology, Chinese Academy of Sciences (2022KLOMT02-01).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Belousov and G. Popov, “Infrared zone-scanning system,” Appl. Opt. 45(9), 1931–1937 (2006). [CrossRef]  

2. C. Song, J. Chang, J. Cao, L. Zhang, Y. Wen, A. Wei, and J. Li, “Airborne Infrared Scanning Imaging System with Rotating Drum for Fire Detection,” J. Opt. Soc. Korea 15(4), 340–344 (2011). [CrossRef]  

3. X. Du and B. Anthony, “Concentric circle scanning system for large-area and high-precision imaging,” Opt. Express 23(15), 20014–20029 (2015). [CrossRef]  

4. B. Leininger, J. Edwards, J. Antoniades, D. Chester, D. Haas, E. Liu, M. Stevens, C. Gershfield, M. Braun, J. D. Targove, S. Wein, P. Brewer, D. G. Madden, and K. H. Shafique, “Autonomous real-time ground ubiquitous surveillance-imaging system (ARGUS-IS),” Proc. SPIE 6981, 69810H (2008). [CrossRef]  

5. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, “Multiscale gigapixel photography,” Nature 486(7403), 386–389 (2012). [CrossRef]  

6. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24(3), 765–776 (2005).

7. B. Zhang, W. Hou, G. Jin, and J. Zhu, “Simultaneous improvement of field-of-view and resolution in an imaging optical system,” Opt. Express 29(6), 9346–9362 (2021). [CrossRef]  

8. L. Zhang, J. Ke, S. Chi, X. Hao, T. Yang, and D. Cheng, “High-resolution fast mid-wave infrared compressive imaging,” Opt. Lett. 46(10), 2469–2472 (2021). [CrossRef]  

9. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21(9), 10526–10545 (2013). [CrossRef]  

10. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516(7529), 74–77 (2014). [CrossRef]  

11. D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: Programmable pixel compressive camera for high speed imaging,” in CVPR 2011 (IEEE, 2011), pp. 329–336.

12. W. Liao, J. Hsieh, C. Wang, W. Zhang, S. Ai, Z. Peng, Z. Chen, B. He, X. Zhang, N. Zhang, B. Tang, and P. Xue, “Compressed sensing spectral domain optical coherence tomography with a hardware sparse-sampled camera,” Opt. Lett. 44(12), 2955–2958 (2019). [CrossRef]  

13. X. Wang, Y. Zhang, X. Ma, T. Xu, and G. R. Arce, “Compressive spectral imaging system based on liquid crystal tunable filter,” Opt. Express 26(19), 25226–25243 (2018). [CrossRef]  

14. R. M. Sullenberger, A. B. Milstein, Y. Rachlin, S. Kaushik, and C. M. Wynn, “Computational reconfigurable imaging spectrometer,” Opt. Express 25(25), 31960–31969 (2017). [CrossRef]  

15. A. N. Simonov and M. C. Rombach, “Passive ranging and three-dimensional imaging through chiral phase coding,” Opt. Lett. 36(2), 115–117 (2011). [CrossRef]  

16. Y. Shechtman, S. J. Sahl, A. S. Backer, and W. E. Moerner, “Optimal Point Spread Function Design for 3D Imaging,” Phys. Rev. Lett. 113(13), 133902 (2014). [CrossRef]  

17. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. U.S.A. 106(9), 2995–2999 (2009). [CrossRef]  

18. E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, “DeepSTORM3D: dense 3D localization microscopy and PSF design by deep learning,” Nat. Methods 17(7), 734–740 (2020). [CrossRef]  

19. E. R. Dowski and W. Thomas Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34(11), 1859–1866 (1995). [CrossRef]  

20. X. Wei, J. Han, S. Xie, B. Yang, X. Wan, and W. Zhang, “Experimental analysis of a wavefront coding system with a phase plate in different surfaces,” Appl. Opt. 58(33), 9195–9200 (2019). [CrossRef]  

21. J. H. Wirth, A. T. Watnik, and G. A. Swartzlander, “PSF Engineering for Sensor Protection,” in Frontiers in Optics (OSA, 2017), paper JTu2A.95.

22. J. H. Wirth, A. T. Watnik, and G. A. Swartzlander, “Half-ring point spread functions,” Opt. Lett. 45(8), 2179–2182 (2020). [CrossRef]  

23. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration,” IEEE Trans. on Image Process. 16(12), 2992–3004 (2007). [CrossRef]  

24. C. Xu, D. Cheng, J. Chen, and Y. Wang, “Design of all-reflective dual-channel foveated imaging systems based on freeform optics,” Appl. Opt. 55(9), 2353–2362 (2016). [CrossRef]  

25. R. Tang, G. Jin, and J. Zhu, “Freeform off-axis optical system with multiple sets of performance integrations,” Opt. Lett. 44(13), 3362–3365 (2019). [CrossRef]  

26. J. H. Wirth, A. T. Watnik, and G. A. Swartzlander, “Experimental observations of a laser suppression imaging system using pupil-plane phase elements,” Appl. Opt. 56(33), 9205–9211 (2017). [CrossRef]  

27. F. Yan and X. Zhang, “Application of wavefront coding technology on TMA system,” Infrared and Laser Engineering. 37, 1048–1052 (2008).

28. C.-F. Lee and C.-C. Lee, “Application of a cubic phase plate to a reflecting telescope for extension of depth of field,” Appl. Opt. 59(14), 4410–4415 (2020). [CrossRef]  

29. W. Zhou, X. Wei, F. Xu, G. Zhang, Y. Xia, J. Ren, G. Wang, and X. Tang, “Application of computed graphic holograph in testing the integrated wave-front coding unit,” The 2015 International Conference on Management, Information and Communication and the 2015 International Conference on Optics and Electronics Engineering (2016).

30. Y. Liu, X. Yuan, J. Suo, D. J. Brady, and Q. Dai, “Rank Minimization for Snapshot Compressive Imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(12), 2990–3006 (2019). [CrossRef]  

31. X. Liao, H. Li, and L. Carin, “Generalized Alternating Projection for Weighted-ℓ2,1 Minimization with Applications to Model-Based Compressive Sensing,” SIAM J. Imaging Sci. 7(2), 797–823 (2014). [CrossRef]  

32. T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár, “Microsoft COCO: Common Objects in Context” (2015).

33. H. Zhao, J. Wei, Z. Pang, and M. Liu, “Wave-front coded super-resolution imaging technique,” Infrared Laser Eng. 45(4), 0422003 (2016). [CrossRef]  



