
High-performance imaging with an advanced non-imaging lens based on full-path optical diffraction calculation in two-dimensional space


Abstract

High-performance image-forming systems often require high system complexity due to the overdetermined nature of optical aberration correction. What we present here is a novel computational imaging modality which can achieve high-performance imaging using a simple non-image-forming optical system. The presented optical system contains an aspherical non-imaging lens which is designed with the optimal transfer of light radiation between an object and a detector. All spatial frequencies of the object collected by the non-imaging lens are delivered to the detector. No image is formed on the detector, and a full-path optical diffraction calculation method is developed to recover a high-quality image of the object from multiple intensity measurements. The effectiveness and high performance of the proposed imaging modality are verified by design examples.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The purpose of an optical image-forming system is to collect a portion of the light rays emitted from all the object points in a desired field of view, and then to redirect these rays so that they are reunited at their corresponding image points [1]. Information about the object captured by an image-forming system is relayed and presented as an image during this ray reunification process (see Fig. 1). However, a perfect aberration-free image-forming system does not exist, because ray reunification can never be done perfectly with the finite number of optical surfaces available in an imaging system; this is what makes optical imaging an extremely overdetermined problem. Optical imaging system design can therefore be considered as the process of finding an optimal solution to an extremely overdetermined problem, in which a group of optical elements and their spatial relationships are optimized to minimize optical aberrations and yield acceptable image quality over a desired field of view. The more optical elements are used to solve the overdetermined problem, the better the image quality can be. With this traditional imaging methodology, high system complexity cannot be avoided for image-forming systems with either a large field of view or a large numerical aperture [2,3].

Fig. 1. The ray diagram of a traditional optical image-forming system. The information about the object PQ collected by the imaging system is transferred to the target region P’Q’ on the detector, and meanwhile the light rays emanating from all object points in the field of view should converge to their corresponding image points. This results in the extremely overdetermined nature of traditional optical imaging.

One ultimate goal of computational imaging is to develop innovative imaging systems that, when compared to traditional image-forming systems, provide a desired capability with reduced requirements in size, weight, or cost [4,5]. Computational imaging forms images indirectly from collected information using computational algorithms, instead of directly generating images with high fidelity. Many methods have been developed to reconstruct an image from the collected information of an object. One typical method is to recover an image from the Fourier spectrum of an object. For example, an image can be reconstructed from the support domain and a measured amplitude of the Fourier spectrum of an object by use of iterative schemes (e.g., the Gerchberg–Saxton algorithm) [6–8]. Convolution-model-based computational imaging, which is suitable for shift-invariant systems, is another powerful method to reconstruct an image from the collected information [9–11]. A typical application of this method is imaging through scattering media, where the image is recovered by setting up a convolution model within the optical memory effect of a scattering medium (e.g., a diffuser) [9]. A key and challenging step of this method is to measure the point spread functions of an imaging system, or to tailor the point spread functions in a desired manner. Transmission-matrix-based computational imaging has shown its elegance in reconstructing the image of discrete objects at a fixed position [12–17]. The transmission matrix of an imaging system is usually formed by the point spread functions at some sampled field points. However, measuring the transmission matrix of a non-image-forming system is a challenging task, and the transmission-matrix-based method, due to its nature, is less effective in reconstructing continuous objects.

The principle of reversibility of light states that if there are no losses, then wave propagation is reversible. This principle indicates that a high-quality image of an object can be reconstructed when a light wave emanating from the object propagates back to the object. In this paper, we present a novel computational imaging modality which can achieve high-performance imaging using a simple non-image-forming optical system. The proposed imaging system contains a non-imaging lens which is designed with the optimal transfer of light radiation between an object and a detector. No image is formed on the detector. A full-path optical diffraction calculation method is developed to reconstruct a high-quality image of the object from multiple intensity measurements recorded on the detector. The proposed computational imaging modality could significantly reduce the volume, weight and cost of the traditional imaging system. The rest of this paper is organized as follows. The proposed computational imaging modality is introduced in Section 2, and the full-path optical diffraction calculation is also presented in this section. Then, four examples are given in Section 3 to illustrate the effectiveness and high performance of the proposed imaging modality, and elaborate analyses of the proposed imaging modality are also made in this section before we conclude our work in Section 4.

2. Proposed computational imaging modality and reconstruction scheme

Figure 2 depicts a schematic of the proposed computational imaging system, which includes an elaborately designed non-imaging lens and a detector. Here, the object is illuminated by a coherent beam, and can be considered as a secondary light source from the point of view of non-imaging optics. The scattered optical wave, which carries the spatial frequencies of the object, is captured and manipulated by the non-imaging lens, and the output optical wave is then recorded by the detector. Unlike traditional imaging optics, which are optimized to form an image of the object, the non-imaging lens is elaborately designed with the optimal transfer of light radiation between the object and the detector, which allows all the information about the object captured by the lens to be recorded by the detector at a given distance. In other words, all spatial frequencies of the object collected by the lens are delivered to the detector. Since no image of the object is directly formed by the proposed imaging modality, we cannot identify the object from the recorded intensity distribution on the detector, and an image of the object must be reconstructed from the measured intensity distribution. To achieve measurement diversity, the detector is moved slightly along the optical axis and multiple measurements are taken at different distances. Then, we propose a full-path optical diffraction calculation method to reconstruct the image from these measured intensity distributions. Since the proposed imaging modality does not need to meet the redundant image-forming conditions that must be satisfied in traditional imaging optics, the presented imaging system can capture the same amount of information about the object with fewer optical elements than a traditional image-forming optical system. In the rest of this section, we give more physical insight into the proposed imaging modality and the developed full-path optical diffraction calculation method. Throughout this paper, all studies are conducted in two-dimensional (2D) space for the sake of simplicity.

Fig. 2. Schematic view of the proposed computational imaging modality.

2.1 Multi-intensity phase retrieval

Since the detector used here is only sensitive to intensity/amplitude while phase information is lost, a key step of the proposed imaging modality is to obtain the complex amplitude of the optical wave on the detector before performing the proposed full-path optical diffraction calculation. Phase retrieval is a technique to recover the lost phase information from one or more intensity measurements. Here, the multi-intensity phase retrieval algorithm proposed in Ref. [18] is modified and used to recover the phase information. Since this algorithm employs a random phase distribution to initiate phase recovery, it easily gets stuck in local minima and converges slowly because of the random starting point. To solve this problem, the transport-of-intensity equation (TIE) [19] is applied in our algorithm to generate a better initial phase distribution, which makes the phase recovery converge faster and more stably. The detector is moved slightly along the optical axis to achieve measurement diversity, and multiple intensity patterns Ii (i = 1,…,n) are recorded at different distances li [li = l1 + Δl×(i−1), i = 1,…,n], as shown in Fig. 3. Then, the phase can be recovered according to the following steps.

  • (1) We initialize the phase distribution φ1 on the first plane at the distance l1 using the TIE, and combine it with the square root of I1 to obtain an initial complex amplitude, which can be written as
    $$U_1(x) = \sqrt{I_1}\,e^{i\varphi_1}$$
  • (2) The optical wave propagates forward to the second plane, yielding a complex amplitude at the distance l2
    $$U_2^{\prime}(x) = \int \mathcal{F}(U_1)\, e^{ik\Delta l\sqrt{1-(\lambda f_x)^2}}\, e^{i2\pi f_x x}\, df_x = u_2 e^{i\varphi_2}$$
    where k = 2π/λ is the wavenumber, λ is the wavelength of light, φ2 is the calculated phase distribution on the second plane, fx is the spatial frequency along the x-axis, and ℱ(U1) denotes the Fourier transform of U1(x). Then, the amplitude u2 is replaced by the square root of the measured intensity distribution I2, and the updated complex amplitude U2 can be written as
    $$U_2 = \sqrt{I_2}\,e^{i\varphi_2}$$
  • (3) Step (2) is repeated until the complex amplitude Un on the n-th plane at the distance ln is obtained, which is given by
    $$U_n = \sqrt{I_n}\,e^{i\varphi_n}$$
  • (4) After that, the optical wave propagates backward from the n-th plane to the first plane, yielding a complex amplitude on the first plane
    $$\bar{U}_1 = u_1 e^{i\bar{\varphi}_1}$$

    The mean square error (MSE) is used to quantify the difference between the calculated amplitude u1 and the measured one on the first plane

    $$MSE = \frac{1}{Num}\sum \left| u_1 - \sqrt{I_1} \right|^2$$
    where Num is the number of sample points. If the MSE does not meet a predefined stopping criterion, u1 is replaced with the square root of I1 and Steps (2)-(4) are repeated until the criterion is met. A minimal numerical sketch of this iteration is given below.
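The code below is a minimal 1D sketch of the forward/backward sweeps with band-limited angular spectrum propagation. It is illustrative only: the function names, sampling parameters, and the zero-phase fallback for the initial guess (where a TIE estimate would be substituted) are our own assumptions, not the authors' implementation.

```python
import numpy as np

def asm_propagate(u, dz, dx, wavelength, n_medium=1.0):
    """Propagate a 1D complex field u over a distance dz with the band-limited
    angular spectrum method; evanescent components are discarded."""
    fx = np.fft.fftfreq(u.size, d=dx)                     # spatial frequencies
    arg = 1.0 - (wavelength * fx / n_medium) ** 2
    kz = 2 * np.pi * n_medium / wavelength * np.sqrt(np.maximum(arg, 0.0))
    prop = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)   # drop fx^2 > 1/lambda^2
    return np.fft.ifft(np.fft.fft(u) * prop)

def multiplane_phase_retrieval(intensities, dl, dx, wavelength,
                               phi_init=None, n_iter=200, tol=1e-8):
    """Recover the complex field on the first measurement plane from intensities
    I_1..I_n recorded at planes separated by dl (Steps (1)-(4) above)."""
    amps = [np.sqrt(I) for I in intensities]
    phi = np.zeros_like(amps[0]) if phi_init is None else phi_init  # a TIE estimate would go here
    u1 = amps[0] * np.exp(1j * phi)
    for _ in range(n_iter):
        u = u1
        for a in amps[1:]:                                # forward sweep, plane by plane
            u = asm_propagate(u, dl, dx, wavelength)
            u = a * np.exp(1j * np.angle(u))              # keep phase, impose measured amplitude
        u = asm_propagate(u, -(len(amps) - 1) * dl, dx, wavelength)  # back to the first plane
        mse = np.mean(np.abs(np.abs(u) - amps[0]) ** 2)   # stopping criterion of the MSE form
        u1 = amps[0] * np.exp(1j * np.angle(u))
        if mse < tol:
            break
    return u1
```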

Fig. 3. Schematic of the phase retrieval algorithm based on multiple intensity measurements.

2.2 Full-path optical diffraction calculation through a non-imaging lens

After the complex amplitude of the optical wave at the first position l1 is obtained, the optical wave propagates further backward through the non-imaging lens, which cannot be treated as a thin lens. An image of the object can then be reconstructed as long as the complex amplitude of the optical wave on the object plane is obtained. However, calculating this complex amplitude is challenging because of the thick non-imaging lens. Reference [20] developed a method to calculate the propagation of an optical wave between a curved surface and a plane placed in the same medium. In this paper, we generalize this method to the propagation of an optical wave through a thick non-imaging lens, whose transmission factor cannot be predicted, and develop a full-path optical diffraction calculation method to recover an image of the object, as shown in Fig. 4. Recovering an image of the object in the proposed imaging modality is a process of inverse diffraction, in which the optical wave passes through Surface 2 and Surface 1 of the lens successively. We assume that Surfaces 1 and 2 are mathematically represented by z1 = g1(x) and z2 = g2(x), respectively. The profiles of Surfaces 1 and 2 are divided into multiple segments along the transverse direction (x-axis), and each segment is approximated by a group of two sub-planes that are perpendicular to the optical axis (z-axis). In each group, the inner sub-plane denoted by the blue dashed line lies inside the lens, and the outer sub-plane denoted by the red dashed line lies outside the lens, as shown in Fig. 4. The two sub-planes of each group pass through the two extremum points of the corresponding surface segment. Surface 1 of the lens is divided into 2N sub-planes with an equal segment width of Δx1, and Surface 2 is divided into 2M sub-planes with an equal segment width of Δx2. Non-uniform segmentation could also be used for highly curved surfaces to reduce the error caused by the segment approximation.
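To make the sub-plane construction concrete, the short sketch below divides a surface profile z = g(x) into equal-width segments and records, for each segment, the positions of its outer and inner sub-planes (the local maximum and minimum of the profile over that segment, in the orientation of the exit surface). The profile function and segment count are placeholders, not the designed lens surfaces.

```python
import numpy as np

def segment_surface(g, x_min, x_max, n_seg, samples_per_seg=64):
    """Divide the profile z = g(x) into n_seg equal-width segments and return,
    for each segment, the pair of bracketing sub-plane positions (z_max, z_min)."""
    edges = np.linspace(x_min, x_max, n_seg + 1)
    planes = []
    for x0, x1 in zip(edges[:-1], edges[1:]):
        xs = np.linspace(x0, x1, samples_per_seg, endpoint=False)
        zs = g(xs)
        planes.append((zs.max(), zs.min()))  # outer and inner sub-planes of this segment
    return edges, planes

# Placeholder parabolic profile over a 16 mm aperture, split into 50 segments
edges, planes = segment_surface(lambda x: 0.02 * x**2, -8.0, 8.0, n_seg=50)
```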

Fig. 4. Schematic of the proposed full-path optical diffraction calculation.

When the optical wave propagates backward from the detector to Surface 2, the angular spectrum on the p-th outer sub-plane can be expressed as

$$A_{p,1}(f_x; z = d_1) = \int T(x)\exp\left[ik(d_1 - l_1)\sqrt{1 - (\lambda f_x)^2}\right]\exp(-i2\pi f_x x)\,dx$$
where d1 = max{g2(x)}, x ∈ [(p−1)Δx2, pΔx2); T(x) denotes the field distribution on the detector; Ap,1 represents the angular spectrum on the p-th sub-plane, and the subscript 1 denotes the outer sub-plane. Here, we only consider the case fx² ≤ 1/λ², which excludes the evanescent wave components. It should be noted that the sampling number and window size should be carefully selected to satisfy the Nyquist sampling theorem and avoid aliasing [21,22]. After the angular spectrum on the p-th outer sub-plane is obtained, the field distribution on the p-th inner sub-plane can be calculated by use of phase compensation
$$u_{p,2}(x; z = d_2) = \int A_{p,1}(f_x; z = d_1)\,g_{c,1}(x)\exp(i2\pi f_x x)\,\mathrm{rect}\left[\frac{x - (p - 0.5)\Delta x_2}{\Delta x_2}\right]df_x$$
where up,2 represents the field distribution on the p-th sub-plane, the subscript 2 denotes the inner sub-plane, and
$$\left\{ \begin{array}{l} d_2 = \min\{g_2(x)\},\ x \in [(p - 1)\Delta x_2,\, p\Delta x_2)\\ g_{c,1}(x) = \exp[{ik(g_2(x) - d_1)}]\exp[{ik n_1(d_2 - g_2(x))}]\\ \mathrm{rect}(x) = \left\{ \begin{array}{cc} 1 & |x| \le 1\\ 0 & \text{else} \end{array} \right. \end{array} \right.$$
gc,1(x) is the phase compensation term, which accounts for the local curvature of the p-th curved segment; n1 is the refractive index of the lens material. The rectangular window function represents the size of each sub-plane and can also be replaced by other window functions with less sharp edges (e.g., a Gaussian window). The field distribution and angular spectrum on Surface 1 can be calculated in the same fashion as on Surface 2. Here, we only consider the fields radiated in the propagation direction; radiation losses and back reflections are ignored. The angular spectrum on the q-th inner sub-plane of Surface 1 contributed by all inner sub-planes of Surface 2 can be written as
$$B_{q,2}(f_x; z = d_3) = \sum_{p = 1}^{M}\int u_{p,2}(x; z = d_2)\exp\left[ik n_1(d_3 - d_2)\sqrt{1 - \left(\frac{\lambda f_x}{n_1}\right)^2}\right]\exp(-i2\pi f_x x)\,dx$$
where d3 = max{g1(x)}, x ∈ [(q−1)Δx1, qΔx1), and Bq,2 represents the angular spectrum on the q-th sub-plane of Surface 1. Similarly, the phase compensation is also applied to calculate the field distribution on the q-th outer sub-plane of Surface 1, which is given by
$$v_{q,1}(x; z = d_4) = \int B_{q,2}(f_x; z = d_3)\,g_{c,2}(x)\exp(i2\pi f_x x)\,\mathrm{rect}\left[\frac{x - (q - 0.5)\Delta x_1}{\Delta x_1}\right]df_x$$
where
$$\left\{ \begin{array}{l} g_{c,2}(x) = \exp[{ik n_1(g_1(x) - d_3)}]\exp[{ik(d_4 - g_1(x))}]\\ d_4 = \min\{g_1(x)\},\ x \in [(q - 1)\Delta x_1,\, q\Delta x_1) \end{array} \right.$$
vq,1 represents the field distribution on the q-th outer sub-plane of Surface 1; gc,2(x) accounts for the local curvature of the q-th curved segment on Surface 1. After the optical wave propagates from Surface 1 to the object plane, the angular spectrum on the object plane contributed by all outer sub-planes of Surface 1 is given by
$$O(f_x; z = 0) = \sum_{q = 1}^{N}\int v_{q,1}(x; z = d_4)\exp\left[-ik d_4\sqrt{1 - (\lambda f_x)^2}\right]\exp(-i2\pi f_x x)\,dx$$
where O represents the angular spectrum on the object plane. The field distribution on the object plane is then obtained by the inverse Fourier transform of the angular spectrum
$$o(x; z = 0) = \int O(f_x; z = 0)\exp(i2\pi f_x x)\,df_x$$

Since the coherent illumination h(x) is known, an image of the object can then be reconstructed as

$$\mathrm{Image}(x; z = 0) = o(x; z = 0)/h(x; z = 0)$$
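The sketch below illustrates one backward crossing of the exit surface (Surface 2) under the sub-plane approximation: for every segment the detector field is propagated to the segment's outer sub-plane, the local phase compensation gc,1 is applied, and the result is windowed to that segment. It is a schematic 1D sketch under our own assumptions about sampling and data layout, not the authors' code; the subsequent propagation through the glass to Surface 1, across Surface 1, and down to the object plane follows the same pattern.

```python
import numpy as np

def asm_propagate(u, dz, dx, wavelength, n_medium=1.0):
    """1D band-limited angular spectrum propagation (same form as in the Section 2.1 sketch)."""
    fx = np.fft.fftfreq(u.size, d=dx)
    arg = 1.0 - (wavelength * fx / n_medium) ** 2
    kz = 2 * np.pi * n_medium / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft(np.fft.fft(u) * np.where(arg > 0, np.exp(1j * kz * dz), 0.0))

def cross_exit_surface(T, x, l1, g2, n1, wavelength, n_seg):
    """Carry the detector field T(x), recorded at z = l1, backward across the exit
    surface z = g2(x) using per-segment phase compensation."""
    k0 = 2 * np.pi / wavelength
    dx = x[1] - x[0]
    edges = np.linspace(x[0], x[-1], n_seg + 1)
    pieces = []                                          # one (field, d2) pair per segment
    for p in range(n_seg):
        mask = (x >= edges[p]) & (x < edges[p + 1])      # rect window of segment p
        z2 = g2(x[mask])
        d1, d2 = z2.max(), z2.min()                      # outer / inner sub-plane positions
        u_d1 = asm_propagate(T, d1 - l1, dx, wavelength) # detector -> outer sub-plane
        gc1 = np.exp(1j * k0 * (z2 - d1)) * np.exp(1j * k0 * n1 * (d2 - z2))
        u_p = np.zeros_like(T, dtype=complex)
        u_p[mask] = u_d1[mask] * gc1                     # compensated field on the inner sub-plane
        pieces.append((u_p, d2))
    return pieces

# Each piece would then be propagated inside the glass (index n1) from its own plane
# z = d2 to the inner sub-planes of Surface 1, and the contributions summed.
```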

3. Design examples and analyses

In this section, four examples are given to illustrate the effectiveness and performance of the developed imaging modality. We assume that AB, the field of view of the proposed imaging system, equals 8 mm, and that the overall length of the system, i.e., the distance between the object and the detector, equals 80 mm. As mentioned above, the detector can be moved slightly along the axis to achieve measurement diversity for accurate phase retrieval. The airspace between the object and the lens equals 29.1 mm. The clear aperture diameter of the lens equals 16 mm, and the lens thickness is 5.6 mm. The refractive index of the lens is 1.5, and the wavelength of the light is 546.1 nm. The simultaneous multiple surfaces (SMS) design method, which is very effective in designing high-performance non-imaging optics [23], is employed here to design the non-imaging lens. By use of the SMS method, all light beams captured by the non-imaging lens are projected onto the target region [-6 mm, 6 mm] on the detector. Since the SMS method yields a set of data points of the entrance and exit surfaces of the lens, surface fitting is performed to construct the two aspherical surfaces, which are represented by

$$z = \frac{c r^2}{1 + \sqrt{1 - (1 + k_1)c^2 r^2}} + C_1 r^4 + C_2 r^6$$
where r² = x² + y², c denotes the curvature at the pole of the aspheric surface, k1 is the conic constant, and C1 and C2 are the deformation coefficients. The values of these parameters are given in Table 1.
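For reference, the sag equation above can be evaluated directly; the snippet below is a plain implementation with placeholder coefficients, since the optimized values belong to Table 1 and are not reproduced here.

```python
import numpy as np

def aspheric_sag(r, c, k1, C1, C2):
    """Even aspheric sag z(r): conic base term plus r^4 and r^6 deformation terms."""
    return c * r**2 / (1.0 + np.sqrt(1.0 - (1.0 + k1) * c**2 * r**2)) + C1 * r**4 + C2 * r**6

# Placeholder coefficients for illustration only; the optimized values are given in Table 1.
r = np.linspace(-8.0, 8.0, 401)   # clear semi-aperture of 8 mm
z_entrance = aspheric_sag(r, c=0.05, k1=-1.0, C1=1e-5, C2=-1e-8)
```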


Table 1. Optimized parameters of the two aspherical surfaces

The profiles of the entrance and exit surfaces are depicted in Fig. 5(a), and a ray diagram of the non-image-forming system is given in Fig. 5(b). From Fig. 5(b) we can clearly see the non-image-forming property of the lens: all the light rays collected by the lens are projected onto the target region. Due to the edge-ray principle used in the SMS method [23], only the two end points of the line object are imaged to their corresponding image points, which are the two end points of the target region. The light rays emanating from the other object points are not forced to converge to their corresponding image points, and the intercept points of these rays on the detector are not predefined. Because of the non-image-forming nature of the presented imaging system, it is very difficult to tell the object from the recorded intensity distribution on the detector, as shown in Figs. 5(c) and 5(d). This non-image-forming nature offers high degrees of freedom to control the propagation of light, and therefore allows one to capture the same amount of information with fewer optical elements. Next, we employ this non-image-forming system to produce high-quality images of objects.

Fig. 5. Characteristics of the non-image-forming system. (a) The profiles of the entrance and exit surfaces of the non-imaging lens; (b) a ray diagram of the non-image-forming system. Although all image reconstruction presented in this paper is conducted on 2D calculation, we can still evaluate the optical performance of the non-imaging lens in three-dimensional (3D) space via some commercial optical design programs. (c) The input 2D object which is the EIA-1956 resolution chart, and (d) the recorded intensity distribution produced by the 2D object on the detector. Due to the non-image-forming nature of the lens, we can hardly tell the object from this recorded intensity distribution.

In the first example, the object height equals 8 mm and a W-shaped object amplitude with C0 geometric continuity is predefined on the region [-4 mm, 4 mm], as shown in Fig. 6(a). The object is assumed to be uniformly illuminated by a normally incident monochromatic plane wave. Multiple measurements are taken at five different positions with Δl = 2 mm between neighboring positions. VirtualLab is employed here to compute the intensity distributions at the five positions. Then, the five intensity distributions are fed into the phase retrieval algorithm presented above to recover the phase distribution at Position 1 with l1 = 80 mm. After that, the proposed full-path optical diffraction calculation method is used to reconstruct an image of the object from the complex amplitude of the optical wave at Position 1. Figure 6(a) gives the amplitude distribution of the reconstructed image. The red dashed line represents the object amplitude and the black solid line represents the amplitude distribution of the image. The root mean squared error (RMSE) is employed to quantify the difference between the object amplitude and the reconstructed one; a smaller RMSE represents a better reconstruction. From Fig. 6(a) we see that the RMSE equals 0.0094, indicating a very good agreement between the object amplitude and the reconstructed one. In the second example, the object height is changed to 4 mm, and the object has a Gaussian amplitude distribution, as shown in Fig. 6(b). The object amplitude and the reconstructed one are denoted by the red dashed line and the black solid line, respectively, in Fig. 6(b), with RMSE = 0.0032. Again, very good agreement has been achieved between the object amplitude and the reconstructed one. These two examples clearly illustrate the effectiveness of the developed imaging modality.
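The RMSE figure of merit quoted throughout this section is, as we read it, the root of the mean squared difference between the reconstructed and reference amplitude distributions sampled on the same grid; a one-line sketch is given below.

```python
import numpy as np

def rmse(reconstructed, reference):
    """Root-mean-square error between reconstructed and reference amplitudes."""
    return np.sqrt(np.mean(np.abs(reconstructed - reference) ** 2))
```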

Fig. 6. Reconstruction results of Examples 1 and 2: (a) a W-shape amplitude distribution of a line object within the region [-4 mm,4 mm] has been recovered with a high fidelity; (b) a Gaussian amplitude distribution of a line object within the region [-2 mm,2 mm] has also been recovered with a high fidelity.

Although no image formation occurs in the proposed imaging modality, the resolving power of the presented non-image-forming system is still governed by the numerical aperture, because the range of spatial frequencies of an object that can reach the detector is determined by the numerical aperture of the non-image-forming system. The larger the numerical aperture, the higher the spatial frequencies of an object that can be captured. In 2D space, the maximum and minimum frequencies that can be captured by an imaging system are given by [22]

$$\left\{ \begin{array}{l} f_{x\max} = \dfrac{\sin\theta_1}{\lambda} = \dfrac{D/2 - x_0}{\lambda\sqrt{(D/2 - x_0)^2 + d^2}}\\[2ex] f_{x\min} = \dfrac{\sin\theta_2}{\lambda} = \dfrac{-x_0 - D/2}{\lambda\sqrt{d^2 + (-x_0 - D/2)^2}} \end{array} \right.$$
where x0 is the x-coordinate of the object point, D is the entrance pupil diameter of the imaging system, and d denotes the airspace between the object plane and the entrance pupil, as shown in Fig. 7. From the design parameters given above, the numerical aperture of the proposed imaging system equals 0.2519. This tells us that the maximum spatial frequency which the non-imaging lens can capture from an on-axis object point equals 461.2 mm⁻¹. This non-imaging lens can therefore be considered a low-pass filter due to its limited numerical aperture. We investigate the resolving power of the presented imaging system in the next two examples.
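A small snippet evaluating these bounds is given below, with the spatial frequencies interpreted as cycles per millimeter; the on-axis cut-off implied by the quoted numerical aperture (NA = 0.2519) reproduces the value of about 461 mm⁻¹ used above. The geometry values passed in are placeholders for whatever system is being analyzed.

```python
import numpy as np

def frequency_bounds(x0, D, d, wavelength):
    """Maximum and minimum spatial frequencies (cycles/mm) collected from an object
    point at x0, for entrance pupil diameter D and object-to-pupil distance d (all in mm)."""
    fx_max = (D / 2 - x0) / (wavelength * np.hypot(D / 2 - x0, d))
    fx_min = (-x0 - D / 2) / (wavelength * np.hypot(-x0 - D / 2, d))
    return fx_max, fx_min

# On-axis cut-off implied by the quoted numerical aperture: NA / lambda
wavelength_mm = 546.1e-6
print(0.2519 / wavelength_mm)   # ~461.3 cycles/mm
```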

Fig. 7. Calculation of the maximum and minimum spatial frequencies of an object point that can be captured by the non-imaging lens.

We change the object height to 0.04 mm, and define a Gaussian object amplitude on the region [-0.02 mm, 0.02 mm], as shown in Fig. 8(a). The Fourier transform of the object amplitude yields the frequency spectrum, whose amplitude is denoted by the red dashed line in Fig. 8(b). From this figure we can see that the maximum spatial frequency equals 155 mm⁻¹, which is less than 461.2 mm⁻¹. That means all spatial frequency components of the object can be captured by the lens. The 2D profiles of the entrance and exit surfaces of the lens are each divided into 500 segments. The reconstructed image amplitude is also depicted in Fig. 8(a). RMSE = 0.0041, indicating very good agreement between the object amplitude and that of the image. Figure 8(b) gives the amplitude spectrum of the reconstructed image. From this figure and the fact that RMSE = 0.0066, we clearly see that all spatial frequencies are recovered with a very high fidelity. Since all spatial frequencies of the object are collected, a high-quality image can be reconstructed by the proposed imaging modality. In the next design, we extend the maximum spatial frequency of the object to 800 mm⁻¹, far beyond the maximum frequency that the non-imaging lens can capture. The amplitude distribution and the amplitude spectrum of the object and the reconstructed image are depicted in Figs. 8(c) and 8(d), respectively. As mentioned above, the maximum spatial frequency that can be captured by an imaging system is determined by the numerical aperture; the spatial frequencies greater than 461.2 mm⁻¹ therefore cannot be collected. Consequently, only those spatial frequencies collected by the imaging system can be recovered, as shown in Fig. 8(d). From this figure we still observe a very good agreement between the collected spatial frequency distribution and the recovered one within the region [-461.2 mm⁻¹, 461.2 mm⁻¹]. The recovered amplitude distribution is shown in Fig. 8(c). A high-quality image has been recovered, with only minor differences between the object amplitude and the reconstructed one at the left and right tails of the Gaussian amplitude distribution; these differences are caused by the loss of the high spatial frequencies beyond the maximum frequency that the non-imaging lens can capture. It is apparent that the maximum resolution is determined by the numerical aperture of the non-image-forming system, and therefore the numerical aperture could be increased to improve the resolving power. These two examples show that all spatial frequencies collected by the system can be recovered with a very high fidelity, indicating the effectiveness and high performance of the developed imaging modality.

Fig. 8. Reconstruction results of Examples 3 and 4: (a) the amplitude distribution of the reconstructed image in Example 3; (b) the amplitude spectrum of the reconstructed image in Example 3; (c) the amplitude distribution of the reconstructed image in Example 4; (d) the amplitude spectrum of the reconstructed image in Example 4.

As mentioned in subsection 2.2, the profiles of both the entrance and exit surfaces of the non-imaging lens need to be divided into a set of segments, and each segment is approximated by a group of two sub-planes. It is therefore necessary to analyze the influence of the number of segments on the quality of image reconstruction. The second design given in Fig. 6(b) is taken as an example, and N (with M = N) is increased from 50 to 500. Figure 9(a) gives the two reconstructed images with 50 and 500 segments, and the change of RMSE with increasing N is depicted in Fig. 9(b). The RMSE decreases slightly from 0.0045 to 0.0016 as N increases from 50 to 500. From Figs. 9(a) and 9(b) we can see that the difference between the two recovered images is negligible, which means a high-quality reconstruction has already been achieved when N = 50. A larger number of segments also means a heavier computational load. Thus, an appropriate number of segments should be chosen to achieve a good balance between reconstruction quality and computation cost.

Fig. 9. The influence of the number of segments on the quality of image reconstruction. (a) The two reconstructed image amplitude distributions with 50 and 500 segments, and (b) the change of RMSE with the increase of N.

4. Conclusion

In summary, we present a computational imaging modality which employs a simple non-image-forming system to achieve high-performance imaging. The non-image-forming system includes a single non-imaging lens which is specially designed with the optimal transfer of light radiation (information about the object) between the object and the detector, so that all spatial frequencies of the object collected by the lens are delivered to the detector. No image of the object is formed on the detector, and a full-path optical diffraction calculation method is developed to recover a high-quality image of the object from multiple intensity measurements. The resolving power of the proposed imaging modality is fully determined by the numerical aperture. All spatial frequencies of an object collected by the imaging system can be fully recovered, which enables high-quality image recovery. Since the proposed imaging modality does not need to meet the redundant image-forming conditions, it allows one to capture the same amount of information about an object with a rather simple imaging system. Although the effectiveness and high performance of the proposed imaging modality are evaluated in 2D space in this paper, the modality can be generalized straightforwardly to 3D space. Finally, it is worth mentioning that the proposed imaging modality could employ freeform optical surfaces, which possess versatile wavefront shaping capability, to achieve novel functions in 3D space.

Funding

National Natural Science Foundation of China (62022071, 12074338); the Fundamental Research Funds for the Zhejiang Provincial Universities (2021XZZX020).

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. G. H. Smith, Practical Computer-Aided Lens Design (Willmann-Bell, 1998).

2. X. Wang, X. Zhong, R. Zhu, F. Gao, and Z. Li, “Extremely wide-angle lens with transmissive and catadioptric integration,” Appl. Opt. 58(16), 4381–4389 (2019). [CrossRef]  

3. R. T. Kester, T. S. Tkaczyk, M. R. Descour, T. Christenson, and R. Richards-Kortum, “High numerical aperture microendoscope objective for a fiber confocal reflectance microscope,” Opt. Express 15(5), 2409–2420 (2007). [CrossRef]  

4. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photon. 10(2), 409–483 (2018). [CrossRef]  

5. J. Wu, H. Zhang, W. Zhang, F. Guo, and L. Cao, “Single-shot lensless imaging with fresnel zone aperture and incoherent illumination,” Light-Sci. Appl. 9(1), 53 (2020). [CrossRef]  

6. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

7. H. M. Quiney, K. A. Nugent, and A. G. Peele, “Iterative image reconstruction algorithms using wave-front intensity and phase variation,” Opt. Lett. 30(13), 1638–1640 (2005). [CrossRef]  

8. M. Guizar-Sicairos and J. R. Fienup, “Phase retrieval with transverse translation diversity: a nonlinear optimization approach,” Opt. Express 16(10), 7264–7278 (2008). [CrossRef]  

9. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

10. S. B. Rohde and A. D. Kim, “Convolution model of the diffuse reflectance for layered tissues,” Opt. Lett. 39(1), 154–157 (2014). [CrossRef]  

11. F. S. Oktem, O. F. Kar, C. D. Bezek, and F. Kamalabadi, “High-Resolution Multi-Spectral Imaging With Diffractive Lenses and Learned Reconstruction,” IEEE Trans. Comput. Imaging 7, 489–504 (2021). [CrossRef]  

12. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [CrossRef]  

13. S. Zhou, M. Davy, J. Wang, and A. Z. Genack, “Focusing through random media in space and time: a transmission matrix approach,” Opt. Lett. 38(15), 2807 (2013). [CrossRef]  

14. M. Kim, W. Choi, Y. Choi, C. Yoon, and W. Choi, “Transmission matrix of a scattering medium and its applications in biophotonics,” Opt. Express 23(10), 12648–12668 (2015). [CrossRef]  

15. J. Xu, H. Ruan, Y. Liu, H. Zhou, and C. Yang, “Focusing light through scattering media by transmission matrix inversion,” Opt. Express 25(22), 27234–27246 (2017). [CrossRef]  

16. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018). [CrossRef]  

17. P. Pai, J. Bosch, and A. P. Mosk, “Optical transmission matrix measurement sampled on a dense hexagonal lattice,” OSA Continuum 3(3), 637–648 (2020). [CrossRef]  

18. G. Pedrini, W. Osten, and Y. Zhang, “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” Opt. Lett. 30(8), 833–835 (2005). [CrossRef]  

19. N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49(1), 6–10 (1984). [CrossRef]  

20. C.-Y. Hwang, S. Oh, I.-K. Jeong, and H. Kim, “Stepwise angular spectrum method for curved surface diffraction,” Opt. Express 22(10), 5537–5548 (2011). [CrossRef]  

21. K. Matsushima and T. Shimobaba, “Band-Limited Angular Spectrum Method for Numerical Simulation of Free-Space Propagation in Far and Near Fields,” Opt. Express 17(22), 19662–19673 (2009). [CrossRef]  

22. Y. Xiao, X. Tang, Y. Qin, H. Peng, and W. Wang, “Wide-window angular spectrum method for diffraction propagation in far and near field,” Opt. Lett. 37(23), 4943–4945 (2012). [CrossRef]  

23. J. C. Miñano and J. C. González, “New method of design of nonimaging concentrators,” Appl. Opt. 31(16), 3051–3060 (1992). [CrossRef]  
