
Designing an optical phase element for field of view enhancement by using wavelength multiplexing

Open Access

Abstract

Enhancing the quality of the captured image is one of the prime objectives of modern image acquisition systems. These systems can be broadly divided into two subsystems, an optical subsystem and a digital subsystem, each with its own limitations. One crucial parameter affected by the limited physical extent of the recording system is the field of view (FOV). A reduced FOV can lead to loss of information, increasing the post-processing time and often requiring mechanical scanning to cover a larger FOV. A simple yet efficient technique for FOV enhancement is demonstrated in this paper. An optical element is designed such that it diffracts different wavelengths in a prescribed manner; information from different regions of the object is carried by different wavelengths, which, upon combination at the sensor plane, enhances the FOV.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In modern image acquisition systems, captured images require further processing to enhance image quality. The overall imaging system can therefore be considered a combination of an optical subsystem and a digital subsystem. The optical subsystem includes the optical elements and the sensor, while the digital subsystem comprises the algorithms that perform the signal processing [1].

The field of view (FOV) is a key specification of an imaging system. The maximum angular size of the object as seen from the entrance pupil is often taken as the FOV of an optical system; in other words, it is a measure of how much the device can see. The FOV is determined by the imaging lens, its focal length, and the sensor size. For a given sensor size, the shorter the focal length, the wider the angular FOV [2].

Mathematically, the FOV can be expressed in terms of the focal length of the optics as

$$FOV = 2\arctan (\Delta S/2f)$$
where $\Delta S$ is the size of the sensor and $f$ is the focal length. Equation (1) suggests that a wide FOV requires a short focal length, which degrades the spatial resolution of the imaging system [3]. Moreover, the full FOV of an imaging system, expressed in terms of the image height $H$, the focal length $f$, and the field angle $\omega$, is
$$H = f\tan \omega$$

Therefore, for a fixed $H$, increasing $f$ raises the resolution of the system but reduces the range of objects that can be observed. It is thus difficult to build an imaging system that combines a large FOV with a long focal length, yet a system providing both a wide FOV and high resolution is desirable.
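As a quick numerical illustration of Eq. (1), the following Python sketch (with illustrative values that are not taken from this work) shows how the angular FOV shrinks as the focal length grows for a fixed sensor size:

```python
import math

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Full angular FOV from Eq. (1): FOV = 2*arctan(sensor / (2*f))."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Illustrative values: a 24 mm sensor behind lenses of increasing focal length.
for f in (25, 50, 100, 200):
    print(f"f = {f:3d} mm  ->  FOV = {fov_deg(24, f):5.1f} deg")
```

In the small-angle regime, doubling the focal length roughly halves the FOV, which is precisely the coverage-versus-resolution trade-off described above.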

Various well-established techniques for FOV extension have been reported, based on the convolution approach [4,5], multiplexing of holograms [6,7], and particle encoding [8]. However, most of these techniques were designed for monochromatic optical systems and are not suitable for a wide spectrum. A common approach to obtaining a large FOV is to use a fisheye lens; however, the captured image has nonuniform resolution and suffers from severe distortion, which may require subsequent correction [9]. Catadioptric omnidirectional cameras provide a 360° FOV by employing both lenses and mirrors, but this makes the system bulky and costly [1,9]. Fourier ptychographic microscopy (FPM), a computational imaging technique, has also been employed to enhance the field of view [10–12].

Another approach to FOV extension is to fold different regions of the observed object simultaneously into a given FOV and then apply a signal processing algorithm. A diffraction grating placed at the entrance pupil plane can fold different regions and duplicate the object across the image plane through its diffraction orders. To separate the orders of the wide image, several different foldings are required to remove ambiguity, which calls for a dynamic grating and increases the cost of the system [3].

In general, researchers have developed an extensive interest in diffractive optics, since diffractive optical elements (DOEs) provide the flexibility to convert and modulate wavefronts by achieving refraction, reflection, and dispersion simultaneously. Moreover, multiple optical functions can be integrated into a single DOE. Owing to these qualities, DOEs have already been applied to wavelength multiplexing, wavelength demultiplexing, and focusing [13–16]. The fabrication and application of diffractive phase elements (DPEs) for wavelength demultiplexing and spatial focusing have already been proposed, based on the general theory of amplitude and phase retrieval and an iterative algorithm [15–17]. Therefore, by designing and fabricating suitable DPEs, wavelength multiplexing with the required spatial focusing in any desired pattern can be achieved [18].

The need for a large FOV and the ability of a DPE to modulate wavefronts in a desired manner have motivated us to design an optical phase element that enhances the field of view of an imaging system through wavelength multiplexing. The enhancement is achieved by designing an optical element that, when attached to the entrance pupil of the lens of an imaging system, diffracts different wavelengths in the desired manner. Recombining these wavelengths at the sensor plane and processing the three images independently through the three color channels of an RGB camera then increases the effective FOV of the imaging system.

2. Theory and working

The proposed optical element is based on the wavelength dependence of diffraction, whereby the same phase profile deflects different wavelengths by different angles. The element can be considered an attachment placed at the entrance pupil of the lens of the imaging system. The design uses this property to control the diffraction experienced by each wavelength and to recombine the outputs of the imaging system associated with the different wavelengths, thereby capturing information beyond the conventional FOV of the imaging system.
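To make the wavelength dependence concrete, consider a thin element that imprints a linear phase ramp $\phi(x) = ax$ on an incident plane wave, for which the first-order deflection satisfies $\sin\theta = a\lambda/2\pi$. The sketch below evaluates this angle for the three wavelengths used later; the ramp slope $a$ is an assumed illustrative value, not a design parameter from this work:

```python
import math

a = 1.0e5  # assumed phase-ramp slope (rad/m); illustrative only
for lam_nm in (465, 532, 632):
    lam = lam_nm * 1e-9
    theta = math.degrees(math.asin(a * lam / (2 * math.pi)))
    print(f"{lam_nm} nm  ->  deflection = {theta:.3f} deg")
```

Since the deflection produced by a fixed phase profile scales with wavelength, a single element can steer each of the three wavelengths by a different, designable amount.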

Figure 1 shows the schematic of the concept behind the element. The design uses three wavelengths, 465 nm, 532 nm, and 632 nm, each of which forms an image that is significantly larger than the sensor.

Fig. 1. Concept behind the proposed optical element

To understand the working of the proposed optical element, consider the image formed by the imaging system to be three times the sensor size. The optical element diffracts one wavelength ($\lambda_1 = 532$ nm) such that the bottom section of its image falls on the sensor area, and diffracts another wavelength ($\lambda_3 = 632$ nm) such that the upper section of its image falls on the sensor area, while allowing the central region of the image associated with $\lambda_2 = 465$ nm to reach the sensor undeviated. This wavelength-dependent, color-coded information (in the form of three different images) can be accessed simultaneously through the RGB channels of the color sensor used to record the images. Hence, a single image associated with one of the wavelengths contains only one third of the object information; the shifts of the images associated with the other two wavelengths deliver the missing top and bottom regions to the sensor plane. To the best of our knowledge, this concept has not previously been applied to FOV enhancement. Because the larger FOV is obtained without sacrificing acquisition time, the technique is quick and efficient; in general, increasing the frame rate of a recording system comes at the expense of FOV.
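A minimal sketch of this bookkeeping (with a synthetic index array standing in for the object, not experimental data) shows how the three wavelength-dependent shifts tile an object three times the sensor height onto a single sensor window:

```python
import numpy as np

sensor_h = 100
obj = np.arange(3 * sensor_h)   # row indices of an object 3x the sensor height

# The sensor sees only the central third of the image plane; the element
# shifts each wavelength's image so a different third of the object lands there.
window = slice(sensor_h, 2 * sensor_h)
on_sensor_632 = np.roll(obj, +sensor_h)[window]   # shifted down -> top third
on_sensor_465 = np.roll(obj, 0)[window]           # unshifted    -> middle third
on_sensor_532 = np.roll(obj, -sensor_h)[window]   # shifted up   -> bottom third

# The three color-coded channels together tile the full object,
# i.e. an effective FOV three times that of the sensor alone.
full = np.concatenate([on_sensor_632, on_sensor_465, on_sensor_532])
assert np.array_equal(full, obj)
```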

Figure 2 illustrates the working of the proposed optical element, whose thickness along its length is designed to diffract the three laser beams by the required shifts. To calculate the thickness of the element, the thickness associated with each wavelength is computed from the following equations:

$${d_1} = (2\pi {m_1} + {\phi _1})\frac{{{\lambda _1}}}{{2\pi n}};$$
$${d_2} = (2\pi {m_2} + {\phi _2})\frac{{{\lambda _2}}}{{2\pi n}};$$
$${d_3} = (2\pi {m_3} + {\phi _3})\frac{{{\lambda _3}}}{{2\pi n}}$$
where $d_1 = d_2 = d_3 = d$ (meaning that the optical path length as a function of $x$ is the same for the three wavelengths, not that it is constant), $n$ is the refractive index of the material, $m_1$, $m_2$, $m_3$ are integers, and $\phi_1$, $\phi_2$, $\phi_3$ are the phases associated with wavelengths $\lambda_1$, $\lambda_2$, and $\lambda_3$, respectively. Also, $\phi_1 = a_1 x$, $\phi_2 = a_2$, and $\phi_3 = -a_3 x$. The values of $m_1$, $m_2$, $m_3$, $a_1$, $a_2$, $a_3$ are adjusted to obtain the required shift of each wavelength upon diffraction through the element. Equations (3), (4), and (5) form a set of three linear equations. To solve them, the number of unknowns was reduced by setting $m_2 = m_1 + 1$ and $m_3 = m_1 + 2$.

Fig. 2. Schematic representing the working of the proposed optical element

The equations are solved by rearranging and converting them to matrix form

$$AX = B;\qquad X = A^{-1}B$$
where $A = \begin{bmatrix} 2\pi\lambda_2 & -2\pi\lambda_3 \\ 2\pi\lambda_1 & -2\pi\lambda_3 \end{bmatrix}$, $X = \begin{bmatrix} m_1 \\ m_3 \end{bmatrix}$, and $B = \begin{bmatrix} \phi_3\lambda_3 - \phi_2\lambda_2 - 2\pi\lambda_2 \\ \phi_3\lambda_3 - \phi_1\lambda_1 \end{bmatrix}$.

Substituting the values of $m_1$, $m_2$, and $m_3$ into any of Eqs. (3)–(5) yields the value of $d$. The design of the element is based on mapping the thickness $d$ along its length. Figure 3 plots the simulated thickness $d$ of the element (measured along the propagation direction) as a function of the lateral coordinate $x$, obtained from the equations above.
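The sketch below carries out this computation at a single lateral position $x$; the refractive index and the ramp coefficients $a_1$, $a_2$, $a_3$ are assumed illustrative values, not the paper's design parameters. It solves Eq. (6) and verifies that Eqs. (3)–(5) then give the same thickness:

```python
import numpy as np

lam1, lam2, lam3 = 532e-9, 465e-9, 632e-9    # wavelengths (m)
n = 1.5                                      # assumed refractive index
a1, a2, a3 = 2.0e3, 0.5, 2.0e3               # assumed phase coefficients
x = 1.0e-3                                   # lateral position (m)

phi1, phi2, phi3 = a1 * x, a2, -a3 * x       # phases at this x

# Matrix form of Eq. (6), with m2 = m1 + 1 already substituted:
A = np.array([[2 * np.pi * lam2, -2 * np.pi * lam3],
              [2 * np.pi * lam1, -2 * np.pi * lam3]])
B = np.array([phi3 * lam3 - phi2 * lam2 - 2 * np.pi * lam2,
              phi3 * lam3 - phi1 * lam1])
m1, m3 = np.linalg.solve(A, B)               # X = A^{-1} B
m2 = m1 + 1

# Eqs. (3)-(5): all three expressions must give the same thickness d.
d1 = (2 * np.pi * m1 + phi1) * lam1 / (2 * np.pi * n)
d2 = (2 * np.pi * m2 + phi2) * lam2 / (2 * np.pi * n)
d3 = (2 * np.pi * m3 + phi3) * lam3 / (2 * np.pi * n)
assert np.allclose([d1, d2], d3)
print(f"d = {d1 * 1e6:.3f} um at x = {x * 1e3:.1f} mm")
```

This continuous solve does not enforce integer $m_1$ and $m_3$; in the actual design the parameters are adjusted so that integer values result, as stated above. Sweeping $x$ across the aperture produces the thickness profile of Fig. 3.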

Fig. 3. Plot of the value of d (thickness) with respect to the length of the optical element.

3. Experimental setup

Under ideal conditions, a $4f$ imaging system is employed to image an object onto a digital sensor. It comprises the object to be imaged, a lens L1 that takes the Fourier transform of the object, and a second lens L2 that takes the inverse Fourier transform to form the image at the sensor plane. The element must be placed at the Fourier plane between L1 and L2. Using an SLM for the experimental realization of the proposed optical element makes it possible to simplify the $4f$ system, as shown in Fig. 4: the first Fourier transform performed by L1, together with the optical element at the Fourier plane, is replaced by projecting onto the SLM the Fourier transform of the object multiplied by the phase of the optical element; lens L2 then takes the inverse Fourier transform of this product to obtain the image at the sensor plane.
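The effect of the projected phase can be emulated numerically. In the sketch below (a synthetic object and an assumed shift, not the experimental data), a linear phase ramp applied at the Fourier plane translates the image at the sensor plane, which is exactly the mechanism used here to reposition each wavelength's image:

```python
import numpy as np

N = 256
obj = np.zeros((N, N))
obj[96:160, 96:160] = 1.0                   # a square standing in for the object

shift_px = 40                               # desired image shift, in pixels
fy = np.fft.fftfreq(N)[:, None]             # vertical spatial frequencies
ramp = np.exp(-2j * np.pi * fy * shift_px)  # linear phase at the Fourier plane

slm_field = np.fft.fft2(obj) * ramp         # FT of object times element phase
image = np.abs(np.fft.ifft2(slm_field))     # the lens performs the inverse FT

# Fourier shift theorem: the image is displaced by shift_px rows
# (circularly, in this discrete model).
assert np.allclose(image, np.roll(obj, shift_px, axis=0), atol=1e-9)
```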

Fig. 4. A $4f$ system and its adaptation in the experimental setup

To demonstrate the ability of the proposed optical element to enhance the field of view, the phase masks associated with the three wavelengths are projected simultaneously onto three spatially separated regions of a phase-only spatial light modulator (SLM; Holoeye Pluto-2, 1080 × 1920 pixels, 8 µm pixel size), as shown in Fig. 5. The phase values used in the experiment are computed from the obtained $d$ values.
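This thickness-to-phase conversion follows the convention of Eqs. (3)–(5), in which the total phase is $2\pi n d/\lambda$ and whole waves are absorbed by the integer $m$. A minimal sketch (the refractive index and the placeholder thickness profile are assumptions, not design data) is:

```python
import numpy as np

n = 1.5                                   # assumed refractive index
d_profile = np.linspace(0, 2e-6, 1920)    # placeholder thickness profile d(x) in m

# One wrapped phase mask per wavelength: phi = (2*pi*n*d/lambda) mod 2*pi.
masks = {}
for lam_nm in (465, 532, 632):
    lam = lam_nm * 1e-9
    masks[lam_nm] = np.mod(2 * np.pi * n * d_profile / lam, 2 * np.pi)
```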

Fig. 5. Schematic of the experimental setup for validating the working of the proposed optical system

Figure 5 shows the schematic of the experimental system, in which three laser sources (465 nm, 532 nm, and 632 nm) illuminate the SLM; the SLM simultaneously displays, at three spatially separated regions, the phases associated with the three wavelengths together with the Fourier transform of the object. The SLM plane is then Fourier transformed onto the digital sensor through a lens (f = 200 mm), so each phase shifts the object image by the desired amount according to the wavelength used. As can be seen in Fig. 5, the element is designed such that the 532 nm wavelength shifts the image upward, bringing the bottom region of the object into the field of view, whereas the 632 nm wavelength shifts the image downward, bringing the upper region into the field of view. The phase element produces no shift at 465 nm, so the central region of the object falls within the field of view.

Figure 6 shows the tabletop experimental setup used to demonstrate the ability of the proposed optical element to enhance the FOV using wavelength multiplexing, where each wavelength carries information from a different region of the object and projects it onto the digital sensor, thereby increasing the effective FOV. The beams from the three laser sources enter the central system through a beam splitter (BS), which reflects them onto the SLM. The SLM is loaded with the Fourier transform of the object along with the spatially separated phase values obtained for each wavelength from the calculated thickness profile $d$. The field at the SLM surface undergoes an inverse Fourier transform through the lens, producing an image of the object on the CCD. The information carried by the three wavelengths is accessed through the RGB channels of the color CCD.
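On the readout side, the decoding amounts to channel indexing: each color channel of the recorded frame holds one region of the object, and stacking the channels in object order stitches the wide image. A minimal sketch (a random array stands in for the recorded frame):

```python
import numpy as np

rgb_frame = np.random.rand(100, 100, 3)   # stand-in for one recorded color frame

red   = rgb_frame[:, :, 0]  # 632 nm channel: upper region of the object
blue  = rgb_frame[:, :, 2]  # 465 nm channel: central region of the object
green = rgb_frame[:, :, 1]  # 532 nm channel: bottom region of the object

wide_fov = np.vstack([red, blue, green])  # 300 x 100 image from a 100 x 100 sensor
print(wide_fov.shape)
```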

Fig. 6. The actual experimental setup for performing enhancement in effective FOV using SLM (phase of optical element projected)

4. Results and discussion

In the first set of experiments, the outline of a man is used as the object, as shown in Fig. 7. The experimental results confirm that information from different regions of the object is carried to the sensor by different wavelengths: $\lambda_3 = 632$ nm carries the upper portion of the image, $\lambda_2 = 465$ nm the middle region, and $\lambda_1 = 532$ nm the bottom region of the object. The sensor records an image containing the complete object information in wavelength-multiplexed form, and the information is accessed by selecting the appropriate channel among the RGB channels of the color sensor.

Fig. 7. Results obtained through experiments carried out for the outline of a man; the images associated with $\lambda_1$, $\lambda_2$, and $\lambda_3$ experience the desired shifts and are assembled using the RGB channels of the color CCD to increase the effective FOV.

The second set of experiments uses a smiley as the object, as shown in Fig. 8. As with the previous object, the three wavelengths carry the information in the same manner, and it is accessed in the same way through the RGB channels of the color sensor. Both sets of experiments validate the effectiveness of the proposed optical element in enhancing the FOV using wavelength multiplexing.

Fig. 8. Results obtained through experiments carried out for a smiley; the images associated with $\lambda_1$, $\lambda_2$, and $\lambda_3$ experience the desired shifts and are assembled using the RGB channels of the color CCD to increase the effective FOV.

5. Conclusion

An optical element that enhances the effective FOV of an imaging system by employing diffraction and wavelength multiplexing has been proposed and demonstrated. Three different wavelengths are used to obtain three images, each containing information from a different region of the object. Through wavelength multiplexing, this information is assembled and can be stitched to obtain an enlarged field of view. Since the element enhances the effective FOV by using the three wavelengths simultaneously, acquisition time is not compromised. Owing to these advantages, the technique can prove very useful in various applications, especially microscopy. Moreover, the proposed method can work with incoherent sources: they can be fully spatially incoherent and also partially temporally incoherent. In the partially temporally incoherent case, three light sources with different spectral regions would still be used, but the spectral bandwidth of each source would be wider than that of a laser.

Funding

Israeli Innovation Authority (73178).

Acknowledgment

The authors would like to thank the Israeli Innovation Authority for the research grant #73178, entitled "Dynamic and controlled increase of resolution and field of view". Vismay Trivedi would like to thank the PBC Fellowship Program for Outstanding Chinese and Indian Post-Doctoral Fellows.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. N. A. Ahuja and N. K. Bose, "Design of Large Field-of-View High-Resolution Miniaturized Imaging System," EURASIP J. Adv. Signal Process. 2007(1), 059546 (2007).

2. J. E. Greivenkamp, Field Guide to Geometrical Optics (SPIE, 2004).

3. D. Dahan, A. Yaacobi, G. Aharonovich, E. Pinsky, and Z. Zalevsky, "Broadband field-of-view expansion using a pair of digital micromirror devices," J. Opt. Soc. Am. A 36(10), 1631 (2019).

4. J. Li, P. Tankam, Z. Peng, and P. Picart, "Digital holographic reconstruction of large objects using a convolution approach and adjustable magnification," Opt. Lett. 34(5), 572 (2009).

5. M. R. Rai, A. Vijayakumar, and J. Rosen, "Extending the field of view by a scattering window in an I-COACH system," Opt. Lett. 43(5), 1043 (2018).

6. P. Girshovitz and N. T. Shaked, "Doubling the field of view in off-axis low-coherence interferometric imaging," Light: Sci. Appl. 3(3), e151 (2014).

7. M. Joglekar, V. Trivedi, V. Chhaniwal, D. Claus, B. Javidi, and A. Anand, "LED based large field of view off-axis quantitative phase contrast microscopy by hologram multiplexing," Opt. Express 30(16), 29234 (2022).

8. Z. Zalevsky, E. Gur, J. Garcia, V. Micó, and B. Javidi, "Superresolved and field-of-view extended digital holography with particle encoding," Opt. Lett. 37(13), 2766 (2012).

9. S. K. Nayar, "Catadioptric omnidirectional camera," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 482–488.

10. A. Pan, C. Zuo, and B. Yao, "High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine," Rep. Prog. Phys. 83(9), 096101 (2020).

11. Y. Zhu, M. Sun, X. Chen, H. Li, Q. Mu, D. Li, and L. Xuan, "Single full-FOV reconstruction Fourier ptychographic microscopy," Biomed. Opt. Express 11(12), 7175 (2020).

12. D. Wang, Y. Han, J. Zhao, L. Rong, Y. Wang, and S. Lin, "Enhanced image reconstruction of Fourier ptychographic microscopy with double-height illumination," Opt. Express 29(25), 41655 (2021).

13. M. Kato and K. Sakuda, "Computer-generated holograms: application to intensity variable and wavelength demultiplexing holograms," Appl. Opt. 31(5), 630 (1992).

14. Y. Amitai, "Design of wavelength-division multiplexing/demultiplexing using substrate-mode holographic elements," Opt. Commun. 98(1-3), 24–28 (1993).

15. G. Yang, M.-P. Chang, B. Dong, B. Gu, X. Tan, and O. K. Ersoy, "Iterative optimization approach for the design of diffractive phase elements simultaneously implementing several optical functions," J. Opt. Soc. Am. A 11(5), 1632 (1994).

16. B.-Y. Gu, G.-Z. Yang, B.-Z. Dong, M.-P. Chang, and O. K. Ersoy, "Diffractive-phase-element design that implements several optical functions," Appl. Opt. 34(14), 2564 (1995).

17. M. P. Chang, O. K. Ersoy, B. Dong, G. Yang, and B. Gu, "Iterative optimization of diffractive phase elements simultaneously implementing several optical functions," Appl. Opt. 34(17), 3069 (1995).

18. B.-Z. Dong, G.-Q. Zhang, G.-Z. Yang, B.-Y. Gu, S.-H. Zheng, D.-H. Li, Y.-S. Chen, X.-M. Cui, M.-L. Chen, and H.-D. Liu, "Design and fabrication of a diffractive phase element for wavelength demultiplexing and spatial focusing simultaneously," Appl. Opt. 35(35), 6859 (1996).
