Optica Publishing Group

General design algorithm for stray light suppression of a panoramic annular system

Open Access

Abstract

In this work, a universal algorithm for designing a panoramic annular lens (PAL) system free from stray light is proposed. The impact of a given stray light path on the optical system can be estimated without running a full stray light analysis, which allows designers to eliminate troublesome stray light paths by optimizing lens parameters at an early stage of optical design. A 360°×(40°-100°) PAL system is designed and implemented to verify the proposed method. Simulation shows that the point source transmittance (PST) decreases by two orders of magnitude over a specific field-of-view (FoV) range after the system is optimized. Experimental results show perfect consistency with the simulation predictions, indicating that two types of stray light are completely eliminated in the demonstrated system. This stray light analysis and suppression method provides a promising approach for the research and development of ultra-wide-angle, high-performance optical systems.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Among the various kinds of panoramic imaging optical systems, the panoramic annular lens (PAL) has attracted considerable attention from researchers and optical engineers due to its ultra-large field of view (FoV), relatively simple and compact structure, excellent image quality, easy mounting, and low $f-\theta$ distortion compared to other solutions such as fisheye lenses and multi-camera splicing [1,2]. PAL systems have been widely used in tube surface inspection, surveillance systems, online meetings, robot navigation, etc. [3–9].

However, due to the catadioptric optical structure, multiple reflections and leaking zero-order stray light, similar to what is common in a Cassegrain system, lead to unexpected light paths in the system. In addition, the entrance pupil diameter of a PAL system is relatively small compared to the physical diameter of the lens. The illuminated area of an optical surface is therefore fairly large relative to the cross-sectional area of the imaging rays of an individual FoV, increasing the potential relative amount of stray light energy. Thus, PAL systems can be severely affected by stray light. Figure 1(a) shows a picture captured by a typical PAL system. It is badly deteriorated by stray light spots that vary in shape and brightness. Stray light suppression is a vital process, especially for optical systems with a unique optical structure or a complex operating environment. Numerous techniques and rules for optical system design and fabrication have been proposed to deal with stray light, including implementing lens hoods or Lyot stops to block rays, adopting mounting components with specially designed shapes to avoid illumination, and lowering the Bidirectional Scattering Distribution Function (BSDF) value of certain surfaces using black paint or other surface treatments [10–12]. There are also countermeasures based on digital image processing algorithms that reduce the impact of stray light, which require prior knowledge of scenery characteristics or of the spatial distribution of stray light spots [13–16].


Fig. 1. (a) The phenomenon of stray light of a PAL system at specific angles. (b) Layout of a typical PAL optical system.


Earlier works on stray light in PAL systems indicate some possible stray light paths and corresponding solutions, including digging holes, dull polishing, and slotting [17,18]. Because these measures are difficult to implement and lack experimental verification, their feasibility and effectiveness remain unknown. In 2013, Huang et al. proposed a stray light suppression method for the optical design process. They pointed out that certain stray light does not exist when a specific mathematical relationship among the imaging rays is satisfied [19]. Recently, Li et al. proposed a similar method to control other stray light paths in a PAL [20]. However, controlling stray light using mathematical models based entirely on imaging rays may be imprecise for some types of stray light, since the rays contributing to stray light need not have any specific relationship with the imaging rays. Countermeasures based on image processing algorithms also fail to deal with stray light in a PAL system because of the diversity of stray light spots and observed sceneries.

In this paper, a general stray light analysis and suppression algorithm based on feature ray identification via backward ray tracing is proposed. Optical designers can use this method to evaluate stray light risk instantly and to control it to an acceptable level directly in optical design software. For verification, a 360$^{\circ} \times$(40$^{\circ }$-100$^{\circ }$) PAL system is designed and implemented. Stray light simulation of the newly designed PAL system indicates that the specific stray light spots disappear, and the point source transmittance (PST) decreases by at least one order of magnitude (up to two orders over a certain FoV range) compared to a typical PAL system. Experimental results of further stray light tests under various observing environments show perfect consistency with the simulation predictions. This algorithm is of great importance for designing stray-light-free catadioptric optical systems and offers a new perspective on high-performance optical system design.

This paper is organized as follows. Section 2 gives a brief introduction to the optical structure of PAL systems and discusses the basic knowledge and principles of the proposed method in detail, with two demonstrative examples of stray light paths. Section 3 presents a PAL optical design based on our method, including stray light analysis results. Section 4 reports the experimental results of the manufactured PAL system and gives a direct comparison to a PAL designed without our method. Finally, Section 5 summarizes the current research and discusses prospects for future stray light suppression and optical design approaches.

2. Principle

2.1 Structure of a PAL system

Rather than purely exploiting refraction to collect rays from a large FoV as a fisheye lens does, a PAL system is a catadioptric system in which multiple refractions and reflections occur in a special optical structure referred to as the Panoramic Head Unit (PHU). The PHU is the core element of a PAL system, since it collects rays from all directions in the FoV and compresses the angle of incidence through a unique path. It also determines some basic optical and mechanical properties of the PAL system, including the FoV range, entrance pupil diameter, and mechanical aperture.

A relay lens (RL) group is implemented to correct the substantial optical aberrations the PHU induces, such as astigmatism and distortion, and also to produce the required optical power for the entire system. Thanks to the cooperation of the PHU and its RL, the optical performance of a PAL system is far better than that of fisheye lenses, especially in distortion control and edge image quality. Figure 1(b) shows the optical layout of a PAL system. Light from the scenery propagates through the PHU, then passes through the aperture stop and RL, and finally reaches the image plane.

2.2 Necessary information and techniques for stray light analysis

In Fig. 1(a), there is a shrunken, dim ghost image of the computer screen opposite the original screen. This may lead to misrecognition in computer vision systems using a PAL as the image acquisition unit. A shining arc and a diffused light spot cover the image and greatly lower the contrast of some areas, making the picture quite distracting. The underlying mechanisms of stray light are individually clear but varied, and the stray light paths in a catadioptric optical system are complicated. There is no single general principle or relationship depicting stray light paths. The foremost procedure of stray light control is therefore to identify the stray light paths that need to be considered.

Stray light in a PAL can arise in multiple ways, as shown in Fig. 2, which was obtained from a stray light test in a dark room together with a full stray light analysis. Figures 2(a)-(d) show the light spots from experiment and simulation and the light path causing the dominant stray light spot for four different FoVs, respectively. Except for the case in Fig. 2(b), all paths are caused by ray splitting in the PHU. Theoretically, all of these stray light paths can be dealt with through the idea of the proposed method. In this paper, the stray light path shown in Fig. 2(d) is used to demonstrate the analysis process in detail, and the one in Fig. 2(a) gives a more complicated implementation of our method. These two paths cause the ghost image and the shining arc in Fig. 1(a), respectively, and deteriorate the picture the most. We use SL(a) to SL(d) to denote the four kinds of stray light path depicted in Fig. 2.


Fig. 2. Experimental and simulation results of stray light at different FoVs. (a) FoV=0$^{\circ }$. The bright ring evolves to an arc with FoV increasing. (b) FoV=13$^{\circ }$. Scattering causes diffused light spot. (c) FoV=25$^{\circ }$. A typical non-focused ghost image is present. (d) FoV=78$^{\circ }$. An almost focused ghost image is present.


Like imaging rays, stray light paths have their own aperture stops, vignetting stops and, in some cases, field stops. Obviously, the aperture stop of the imaging rays is also the aperture stop for SL(a) and SL(d). The position and size of the vignetting stops or field stops determine the FoV range of a given path. The conventional way to describe a single imaging ray is to use normalized pupil coordinates and normalized field coordinates [21]. In the same manner, any single stray light ray can be easily characterized. We adopt the angle of incidence on the aperture stop (AoS) as the field coordinate to characterize rays. This is reasonable since the AoS has a one-to-one correspondence with the FoV. It also allows us to trace a ray backwards, i.e., assuming rays propagate from the image plane or aperture stop to the object space, avoiding a ray-aiming process. Backward ray tracing is a common technique in many aspects of optical simulation, including critical and illuminated surface identification in stray light analysis, and scattered ray aiming in fast scattering simulation [10].

In addition, the PAL system is rotationally symmetric, so considering rays on the tangential plane is adequate. Thus, for a given PHU structure, the result and arguments of a backward ray tracing process can be expressed as Eq. (1).

$$\boldsymbol{R}=\Psi(\rho,\theta).$$

$\boldsymbol {R}$ is a $2 \times T$ ray data matrix consisting of $T$ position vectors of ray-surface intersections, ordered along the propagation of the ray through a specific path, where $T$ is the total number of intersections in the whole ray path. Any backward tracing process for a single ray, represented by a function $\Psi$ with two input arguments, $\rho$ (normalized pupil coordinate) and $\theta$ (field coordinate, or AoS), results in a ray data matrix $\boldsymbol {R}$. The definition of the AoS (i.e., $\theta$) and a demonstrative ray with a pupil coordinate of -1 are shown in Fig. 3(a).
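To make the data layout of Eq. (1) concrete, the following sketch mocks the backward tracing function $\Psi$ with a toy system of flat surfaces perpendicular to the optical axis. The surface positions, the stop geometry, and the name `backward_trace` are hypothetical, chosen only to illustrate how $(\rho,\theta)$ maps to a $2 \times T$ ray data matrix; they are not the actual PHU prescription.

```python
import math

# Hypothetical geometry: flat surfaces perpendicular to the optical
# axis (z), standing in for the real PHU surfaces.
SURFACE_Z = [15.0, 10.0, 7.5]    # assumed z-positions of surfaces (mm)
STOP_SEMI_DIAMETER = 1.5         # assumed aperture-stop semi-diameter (mm)
STOP_Z = 20.0                    # assumed aperture-stop position (mm)

def backward_trace(rho, theta_deg):
    """Toy stand-in for Psi of Eq. (1): trace one ray backwards
    from the aperture stop toward the object space.

    rho       : normalized pupil coordinate in [-1, 1]
    theta_deg : angle of incidence on the stop (AoS), in degrees
    Returns the 2 x T ray data matrix R: row 0 holds the lateral (y)
    coordinate and row 1 the axial (z) coordinate of each ray-surface
    intersection, in propagation order.
    """
    y = rho * STOP_SEMI_DIAMETER           # starting point on the stop
    z = STOP_Z
    slope = math.tan(math.radians(theta_deg))  # dy/dz of the ray
    R = [[], []]
    for zs in SURFACE_Z:                   # propagate plane to plane
        y = y + slope * (zs - z)
        z = zs
        R[0].append(y)
        R[1].append(z)
    return R

# A single backward trace, analogous to Psi(-1, 6.4) in Eq. (2).
R = backward_trace(-1.0, 6.4)
```

Because no ray aiming is needed, each trace is a single forward pass over the surface list; a real implementation would add refraction and reflection at each surface.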


Fig. 3. (a) Definition of $\theta$ and an SL(d) down marginal ray obtained by backward ray tracing. (b) Light path of SL(d) for 3 field coordinates and 3 pupil coordinates and LRW, URWs. (c) Main idea of the proposed method. (d) Two marginal rays with maximum field coordinate contributing to SL(d).


2.3 Definition of feature rays and its application on characterizing stray light

The goal of defining feature rays is to use as few rays as possible to completely characterize a given type of stray light. As with imaging rays, for any given pupil coordinate, the field coordinate of an arbitrary stray light ray is constrained to a certain range, which is determined by the field stops and vignetting stops of the optical structure. A type of stray light can be characterized in this manner.

Generally, we do not know which surfaces can act as vignetting stops, even when the path of the stray light is known. Theoretically, the farther a surface is from the aperture stop, the more likely it is to be a vignetting stop. We assume that all surfaces except the aperture stop are possible, to cover all cases. For each optical surface containing a ray-surface intersection in a stray light path, the two edges of the surface are two possible vignetting points. The edge closer to large-field-coordinate rays may constrain the upper limit of the field coordinate and is thus defined as the Upper Ray Waypoint (URW). The edge closer to small-field-coordinate rays is defined as the Lower Ray Waypoint (LRW). Since rays intersect optical surfaces $T$ times along an entire path, the number of URWs or LRWs is also $T$. We construct another $2 \times T$ waypoint matrix $\boldsymbol {W}$ to store the lateral coordinates of the URWs and LRWs on each intersected surface. Note that for a known stray light path, not all $T$ URWs or LRWs can vignette the stray light in practice. Waypoints that have no chance of constraining the field coordinate, together with the corresponding elements of $\boldsymbol {W}$, can be omitted in the following process.

Based on 3 field coordinates and 3 pupil coordinates, a diagram containing 9 rays of a specific stray light path can be obtained by backward tracing from the aperture stop. The case of SL(d) is shown in Fig. 3(b). The five optical surfaces are marked as S1 to S5. The hues of the rays indicate field coordinates, while their luminances indicate pupil coordinates. This diagram shows how the rays forming SL(d) shift inside the PHU as the field coordinate and pupil coordinate change, and contains the information of the URWs and LRWs. The lower edges of S1 and S2 limit the increase of the field coordinate, and the upper edge of S4 limits its decrease. These points determine the range of field coordinates of existing rays contributing to SL(d), and are considered the URWs and LRW for SL(d), as marked in Fig. 3(b). The narrower this range is, the less the system is affected by the specific stray light.

In the presence of the stray light, the range of field coordinates is a proper interval, i.e., the upper limit of the range is larger than the lower limit, as shown in the top half of Fig. 3(c). If we optimize the optical structure to a state where this upper limit falls below the lower limit, so that the range of field coordinates becomes empty, the specific stray light is cut off, as described in the bottom half of Fig. 3(c). This is the main idea of the proposed method. Note that neither the symbol $\theta$ nor the AoS is used in Fig. 3(c) or in this discussion of the main idea, since the AoS is only one possible expression of the field coordinate; we use the AoS in this paper for algorithmic convenience. Other expressions, such as the conventional object FoV, can also be used to understand the main principle. However, manipulating the field coordinate directly is not straightforward, since it is an angle and has no direct relationship with the manufacturability of the optical system. For optical designers, it is better to control the position of a ray-surface intersection, which relates to manufacturing tolerance. So we aim to characterize and control the stray light status through the relationship between some specific rays and the waypoints.

By definition, the ray with the maximum field coordinate must pass through one of the URWs. The two marginal rays satisfying this condition for SL(d) are shown in Fig. 3(d). If we push the ray-surface intersection of the down marginal ray (marked in blue, having the largest lateral distance between the ray-surface intersection and the LRW) below the LRW, an SL(d) ray from any pupil coordinate with the maximum field coordinate cannot propagate normally in the PHU. At this point, the field coordinate of this down marginal ray, which is also its upper limit, is even lower than the imaginary lower limit, and no SL(d) ray can exist in the PHU, i.e., SL(d) is cut off. In general, we define the two marginal rays with the maximum field coordinate as the $\pm$Upper Rays (URs). At the other end, the $\pm$Lower Rays (LRs) are defined in the same way. In the absence of stray light, the ray-surface intersections of $\pm$LR lie beyond one of the URWs. The $\pm$UR and $\pm$LR together form the feature rays set. From the relative relationship between the feature rays set and the URWs/LRWs, the existence of stray light can be judged: if neither $\pm$LR nor $\pm$UR can propagate inside the effective aperture of a URW/LRW, the stray light is absent.

SL(d) is a case where the process can be further simplified. Since there is only one LRW, eliminating SL(d) requires considering -UR only. Although the sign of the LR that the two URWs require may differ, to clearly illustrate the workflow we consider only -LR as an approximation. -UR and -LR are simply denoted UR and LR for SL(d).

2.4 Identification and quantification of feature rays set

For both edges of the pupil coordinate, the goal is to find an existing ray with the maximum or minimum field coordinate, and multiple waypoints lead to multiple $\pm$UR/LR candidates. Among all $\pm$UR candidates, the valid $\pm$UR that exists in the PHU is the one with the minimum field coordinate, and the valid $\pm$LR is the one with the maximum field coordinate. The identification of feature rays can thus be cast as searching for the field coordinate whose ray satisfies this condition.

For SL(d), setting the vertex of S1 as the origin of the $y$-$z$ coordinate system, an arbitrary ray with its ray data matrix $\boldsymbol {R}_{SL(d)}$ can be derived, as given in Eq. (2) and plotted in Fig. 4(a). The element $R_{SL(d),n,t}$ is the coordinate value on the $n$-th axis (1 for the $y$-axis and 2 for the $z$-axis) of the $t$-th ray-surface intersection. The waypoint matrix $\boldsymbol {W}_{SL(d)}$ is given in Eq. (3). $\boldsymbol {R}$ and $\boldsymbol {W}$ share the same dimensions, as Eq. (2) and Eq. (3) show, but the meanings of their elements differ: $[R_{1,t}, R_{2,t}]^\top$ is the position vector of the $t$-th ray-surface intersection, while $W_{1,t}$ and $W_{2,t}$ denote the lateral coordinates of the LRW and URW on the surface of the $t$-th intersection, respectively. In Eq. (3), the values of $W_{SL(d),1,7}$, $W_{SL(d),2,10}$ and $W_{SL(d),2,11}$ are exactly the lateral coordinates of the LRW, URW2 and URW1 of SL(d) in Fig. 3(b).

$$\boldsymbol{R}_{SL(d)}=\Psi_{SL(d)}({-}1,6.4)= \begin{bmatrix} R_{SL(d),1,1} & R_{SL(d),2,1} \\ R_{SL(d),1,2} & R_{SL(d),2,2} \\ \vdots & \vdots \\ R_{SL(d),1,11} & R_{SL(d),2,11} \\ \end{bmatrix}^\top = \begin{bmatrix} -1.39 & 15.15 \\ -1.37 & 10.52 \\ \vdots & \vdots \\ -14.51 & 7.75 \end{bmatrix}^\top.$$
$$\boldsymbol{W}_{SL(d)} = \begin{bmatrix} W_{SL(d),1,1} & W_{SL(d),2,1} \\ \vdots & \vdots \\ W_{SL(d),1,7} & W_{SL(d),2,7} \\ \vdots & \vdots \\ W_{SL(d),1,10} & W_{SL(d),2,10} \\ W_{SL(d),1,11} & W_{SL(d),2,11} \\ \end{bmatrix}^\top = \begin{bmatrix} \frac{1}{2}D_{S5} & -\frac{1}{2}D_{S5} \\ \vdots & \vdots \\ \frac{1}{2}D_{S4} & \frac{1}{2}D_{S1} \\ \vdots & \vdots \\ \frac{1}{2}D_{S2} & -\frac{1}{2}D_{S2} \\ -\frac{1}{2}D_{S4} & -\frac{1}{2}D_{S1} \\ \end{bmatrix}^\top.$$


Fig. 4. (a) A testing ray and UR identifying indicator for SL(d). (b) An example of $\theta$-$I_{URW}$s graph for SL(d). In both figures the subscript "SL(d)" is omitted.


The lateral distance between a ray-surface intersection and each URW/LRW can be used as the indicator for $\pm$UR/LR identification, as expressed in general in Eq. (4), and for SL(d) in Eq. (5) with Fig. 4(a). Eq. (5) contains only 3 equations since the other URWs/LRWs have been omitted. The signs of $I_{URW(t)}$ and $I_{LRW(t)}$ in Eq. (4) are set to represent the relative position of a ray-surface intersection and the corresponding waypoint. A positive value corresponds to the situation where the intersection lies within the valid aperture of the surface, meaning that the ray is not blocked there.

$$\begin{cases} \lvert I_{URW(t)} \rvert = \lvert R_{1,t} - W_{2,t} \rvert \\ \lvert I_{LRW(t)} \rvert = \lvert R_{1,t} - W_{1,t} \rvert \end{cases} \Bigg| _{t = 1,2,\ldots,T}.$$
$$\begin{cases} I_{SL(d),URW1} = R_{SL(d),1,11} - W_{SL(d),2,11} = R_{SL(d),1,11} + \frac{1}{2}D_{S1} \\ I_{SL(d),URW2} = R_{SL(d),1,10} - W_{SL(d),2,10} = R_{SL(d),1,10} + \frac{1}{2}D_{S2} \\ I_{SL(d),LRW} = R_{SL(d),1,7} - W_{SL(d),1,7} = R_{SL(d),1,7} - \frac{1}{2}D_{S4} \\ \end{cases}.$$
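Numerically, the indicators of Eq. (5) are plain differences between ray heights and the waypoint coordinates stored in $\boldsymbol{W}$. In the sketch below the surface diameters and the ray heights are hypothetical placeholders (in practice they come from the lens prescription and from the backward trace of Eq. (2)); the sign convention follows Eq. (5), so a positive indicator means the ray clears that waypoint.

```python
# Hypothetical full diameters of surfaces S1, S2, S4 (mm).
D_S1, D_S2, D_S4 = 30.0, 28.0, 26.0

# Hypothetical lateral (y) coordinates of the relevant ray-surface
# intersections, i.e. R_{1,7}, R_{1,10}, R_{1,11} of Eq. (2).
R_1_7, R_1_10, R_1_11 = 12.2, -13.1, -14.51

# Indicators of Eq. (5); per the sign convention of Eq. (4), a
# positive value means the intersection lies inside the effective
# aperture, so the ray is not blocked at that waypoint.
I_URW1 = R_1_11 + 0.5 * D_S1   # lower edge of S1
I_URW2 = R_1_10 + 0.5 * D_S2   # lower edge of S2
I_LRW = R_1_7 - 0.5 * D_S4     # upper edge of S4
```

With these placeholder numbers the ray clears both URWs (positive indicators) but is stopped at the LRW (negative indicator), illustrating how a single trace yields the blocking status at every waypoint at once.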

Based on this model, a UR candidate satisfies $I_{URW(t)} = 0$. Each $I_{URW(t)}$ is a function of $\theta$. For SL(d), an example graph is shown in Fig. 4(b). A non-zero value of Error means the ray cannot be traced through the entire SL(d) path, so that ray does not exist in the PHU. The root of each $I_{URW}$ curve is the $\theta$ of the corresponding UR candidate, and the smaller root corresponds to the valid UR, since this is the UR that exists in the PHU owing to its smaller $\theta$. The situation for the LR is just the opposite. In general, the valid $\pm$UR/LR solving problem can be expressed as finding $\theta _{\pm UR}$/$\theta _{\pm LR}$ such that Eq. (6) holds. Applying Eq. (6) to SL(d) yields Eq. (7).

$$\begin{cases} \boldsymbol{R}_{{\pm} UR}=\Psi({\pm} 1,\theta_{{\pm} UR}) \\ min(I_{URW(t)}) = 0 \\ \boldsymbol{R}_{{\pm} LR}=\Psi({\pm} 1,\theta_{{\pm} LR}) \\ min(I_{LRW(t)}) = 0 \end{cases} \Bigg| _{t = 1,2,\ldots,T}.$$
$$\begin{cases} \boldsymbol{R}_{SL(d),UR}=\Psi_{SL(d)}({-}1,\theta_{UR}) \\ min(I_{SL(d),URW1},I_{SL(d),URW2}) = 0 \\ \boldsymbol{R}_{SL(d),LR}=\Psi_{SL(d)}({-}1,\theta_{LR}) \\ I_{SL(d),LRW} = 0 \end{cases}.$$

If the left-hand side of the second or fourth equation in Eq. (6) is a monotone function, Eq. (6) can be easily solved by various numerical algorithms, and only a few iterations are needed. After obtaining $\theta _{\pm UR}$/$\theta _{\pm LR}$, the $\pm$UR and $\pm$LR are accurately identified and quantified.
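Under the monotonicity assumption, solving Eq. (6) is one-dimensional root finding in $\theta$. The bisection sketch below stands in for that search; the toy indicator (a linear function with a root at $\theta = 6.4^{\circ}$) is an assumption replacing the real $min(I_{URW(t)})$ evaluated through backward tracing, and the function name `solve_feature_ray` is ours, not the paper's.

```python
def solve_feature_ray(indicator, theta_lo, theta_hi, tol=1e-6):
    """Bisection for the field coordinate where indicator(theta) = 0.

    indicator : monotone function of theta, e.g. min over t of
                I_URW(t) computed from a backward trace at pupil
                coordinate +/-1 (Eq. (6)).
    Returns the root theta and the number of iterations used.
    """
    f_lo = indicator(theta_lo)
    iterations = 0
    while theta_hi - theta_lo > tol:
        mid = 0.5 * (theta_lo + theta_hi)
        if indicator(mid) * f_lo > 0:   # same sign as lower end:
            theta_lo, f_lo = mid, indicator(mid)  # root is above mid
        else:
            theta_hi = mid              # root is at or below mid
        iterations += 1
    return 0.5 * (theta_lo + theta_hi), iterations

# Made-up monotone indicator with a known root at theta = 6.4 deg.
toy_indicator = lambda th: 0.35 * (th - 6.4)

theta_UR, n = solve_feature_ray(toy_indicator, 0.0, 15.0)
```

Bisection is shown for robustness; a secant or Brent-type solver would reach the same root in fewer indicator evaluations, which matters when each evaluation is a full backward trace.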

Figure 5(a) shows the -UR and +LR of SL(d) identified by the proposed method with only 10 iterations in total, as an example of the efficiency and accuracy of characterizing stray light. For comparison, Fig. 5(b) shows the result derived by a traditional stray light analysis using commercial software. These two rays are found under the rule that they must have the minimum and maximum object FoV among all rays contributing to SL(d). Figure 5 indicates that the concept of the feature rays set is able to characterize the stray light status and is easy to realize in practice. +LR is selected here because rays are characterized by FoV rather than AoS in this process.


Fig. 5. (a) Two feature rays identified by our method. (b) Two feature rays found using commercial stray light analysis software.


2.5 Judgement on existence of stray light by feature rays set

Since the $\pm$UR and $\pm$LR represent the range of field coordinates of existing rays, adopting the difference between the field coordinates of $\pm$UR and $\pm$LR as the parameter describing the existence and relative amount of stray light seems a natural choice, and it reflects the main idea of this method. However, according to the discussion in Section 2.3, an angle value is not practical. Thus, we use the lateral distance between each ray-surface intersection of $\pm$UR/LR and the LRW/URW on the corresponding surface to quantify the stray light condition. This makes it easier to understand how far the system is from a stray-light-free condition and to control the manufacturing tolerance to ensure the product's performance. This lateral distance is calculated by Eq. (8). The signs of $C_{\pm UR,t}$ and $C_{\pm LR,t}$ in Eq. (8) are set to represent the relative position of a ray-surface intersection and the corresponding waypoint, following the same rule discussed in Section 2.4. Applying Eq. (8) to SL(d) yields Eq. (9), with the values shown in Fig. 6(a). Note that the valid UR passes through URW1, owing to the smaller field coordinate of this ray compared to the ray passing through URW2. The relationships among the points and parameters discussed above are clearly shown.

$$\begin{cases} \lvert C_{{\pm} UR,t} \rvert = \lvert R_{{\pm} UR,1,t} - W_{1,t} \rvert \\ \lvert C_{{\pm} LR,t} \rvert = \lvert R_{{\pm} LR,1,t} - W_{2,t} \rvert \\ \end{cases} \Bigg| _{t = 1,2,\ldots,T}.$$
$$\begin{cases} C_{SL(d),UR} = R_{SL(d),UR,1,7} - W_{SL(d),1,7} = R_{SL(d),UR,1,7} - \frac{1}{2}D_{S4} \\ C_{SL(d),LR1} = R_{SL(d),LR,1,11} - W_{SL(d),2,11} = R_{SL(d),LR,1,11} + \frac{1}{2}D_{S1} \\ C_{SL(d),LR2} = R_{SL(d),LR,1,10} - W_{SL(d),2,10} = R_{SL(d),LR,1,10} + \frac{1}{2}D_{S2} \end{cases}.$$


Fig. 6. (a) Feature rays analysis for SL(d). (b) Feature rays in an SL(d)-free PHU. In both figures the subscript "SL(d)" is omitted.


If both $\pm$UR/LR are blocked by one of the LRWs/URWs, the stray light cannot exist in the PHU. The presence or absence of $\pm$LR is equivalent to the presence or absence of $\pm$UR, and both indicate that the stray light is present or blocked. Now the existence of stray light can be judged directly by a single parameter derived from the feature rays set. More specifically, if Eq. (10) holds, the stray light is blocked. The maximum operation used to calculate $C_{UR,t}$ and $C_{LR,t}$ selects the effective pupil coordinate for each LRW and URW, ensuring that feature rays from all pupil coordinates are controlled and avoiding incomplete elimination of the stray light. Applying Eq. (10) to SL(d) yields Eq. (11).

$$\begin{cases} C_{UR,t}=max(C_{{+}UR,t},C_{{-}UR,t}) \\ C_{LR,t}=max(C_{{+}LR,t},C_{{-}LR,t}) \\ min(C_{UR,t}) < 0 \Leftrightarrow min(C_{LR,t}) < 0 \end{cases} \Bigg| _{t = 1,2,\ldots,T}.$$
$$C_{SL(d),UR} < 0 \Leftrightarrow min(C_{SL(d),LR1},C_{SL(d),LR2}) < 0 .$$

2.6 Complete quantification of stray light status by feature rays set

Inaccuracy in the mechanical aperture of a manufactured lens block may lead to unexpected stray light. For a system that does not suffer from a given type of stray light in simulation, the designer needs to know how far it is from the critical situation and to evaluate the acceptable manufacturing tolerance from the stray light perspective. An example PHU in which SL(d) is eliminated is shown in Fig. 6(b). By definition, $C_{SL(d),UR}$, $C_{SL(d),LR1}$ and $C_{SL(d),LR2}$ are all negative in this case, and the intersections on these surfaces become virtual, i.e., they lie beyond the effective aperture of the surface. The risk of stray light is related to the lateral distance the virtual intersections must shift to become real. Each distance is simply the absolute value of $C_{SL(d),UR}$, $C_{SL(d),LR1}$ or $C_{SL(d),LR2}$. Considering multiple intersections, the minimum $C_{LR,t}$/$C_{UR,t}$ (the maximum absolute value if negative) is the distance all virtual intersections must shift for the stray light to exist, making it the representative value among all $C_{\pm LR,t}$/$C_{\pm UR,t}$. Furthermore, Eq. (12) summarizes this into a single value $C$.

$$C=max(min(C_{UR,t}),min(C_{LR,t}))\rvert _{t = 1,2,\ldots,T}.$$

If negative, $C$ is the smallest lateral shift (the minimum absolute value) that would make the virtual intersections real and allow the appearance of stray light. Therefore, $C$ serves as a direct reference for manufacturing tolerance. When stray light exists, the $C_{LR,t}$/$C_{UR,t}$ are all positive. According to Eq. (10), if one of the $C_{LR,t}$/$C_{UR,t}$ becomes negative, the stray light is eliminated and $C$ becomes negative. Therefore, $C$ also indicates the relative distance to the stray-light-free status when stray light is present. Applying Eq. (12) to SL(d) yields Eq. (13).

$$C_{SL(d)}=max(C_{SL(d),UR},min(C_{SL(d),LR1},C_{SL(d),LR2})).$$

To conclude, the sign of $C$ directly reflects the existence of stray light. In addition, the absolute value of $C$ is the physical distance to the critical situation where the stray light is just about to appear or vanish. Thus, the stray light condition is summarized and quantified by the single parameter $C$. This value is derived, per the discussion above, from the optical structure parameters of the PHU and the aperture stop, and the calculation can be implemented using the programming functions of commercial optical design software. Since only a few rays need to be traced, this algorithm does not significantly slow down the optimization process in optical design. By calculating and constraining the value of $C$, designers can stay aware of the current stray light situation and direct the software to optimize it.
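Once the per-waypoint distances of Eq. (8) are known, Eqs. (10) and (12) reduce to a few max/min operations. The $C$ values in this sketch are hypothetical placeholders (each would come from Eq. (8) applied to a traced feature ray); the variable names are ours.

```python
# Hypothetical per-waypoint lateral distances (mm) for the +/-UR and
# +/-LR feature rays, indexed by waypoint t; real values come from
# Eq. (8). One LRW is assumed for the URs, two URWs for the LRs,
# mirroring the SL(d) example.
C_plus_UR = [-0.6]
C_minus_UR = [-0.9]
C_plus_LR = [-1.2, -0.4]
C_minus_LR = [-1.5, -0.7]

# Eq. (10): the max over pupil coordinates selects the effective
# pupil coordinate for each waypoint.
C_UR = [max(p, m) for p, m in zip(C_plus_UR, C_minus_UR)]
C_LR = [max(p, m) for p, m in zip(C_plus_LR, C_minus_LR)]

# Eq. (12): collapse everything to the single parameter C.
C = max(min(C_UR), min(C_LR))

# Sign of C judges existence; a negative C means the path is cut
# off, and |C| is the lateral margin before it can reappear.
stray_light_present = C > 0
```

Here $C = -0.6$, so the path is absent and the design has a 0.6 mm lateral margin usable as a manufacturing tolerance reference, matching the interpretation given in Section 2.6.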

2.7 Summary of the workflow

In section 2.2, we pointed out that both a pupil coordinate and a field coordinate are required to backward trace and characterize a ray. The tracing result, consisting of the positions of all ray-surface intersections, is calculated by the function $\Psi$ and stored in a $2 \times T$ ray data matrix $\boldsymbol {R}$ in propagation order. In section 2.3, the URWs/LRWs are defined. The feature rays set, generally consisting of 4 marginal rays, is thereupon defined and identified. These rays help characterize stray light in the current optical system. Another $2 \times T$ waypoint matrix $\boldsymbol {W}$ is defined to store the coordinates of the URWs/LRWs. Though multiple URWs/LRWs lead to more calculation, with some prior knowledge or approximation the number of URWs/LRWs and feature rays that need to be considered can be reduced. In section 2.4, the method to quantify the identification of the feature rays set is discussed: the lateral distance between a ray-surface intersection and the corresponding waypoint can be used to solve for each feature ray. In section 2.5, a set of parameters is calculated from the $\boldsymbol {R}$ of the feature rays set and the $\boldsymbol {W}$ of the stray light path to evaluate the existence of stray light in the PHU. The most representative value, $C$, is extracted in section 2.6. To eliminate the stray light, $C$ must at least be controlled to be negative, so that the stray light path is cut off at one of the waypoints.

A complete workflow for optimizing an optical system considering multiple stray light paths is shown in Fig. 7. Starting from an optical structure affected by stray light, like the optical system in Fig. 2, a typical stray light analysis is first required to identify the light paths to be considered; normally those having a substantial impact on the image are selected. For any given stray light path, a ray path diagram obtained by backward ray tracing helps identify the possible vignetting stops, and thus the feature ray waypoints are also determined. Note that, in general, all surfaces in the PHU may contain waypoints. The simplest case is that there is only one URW and one LRW, in practice or under approximation; the UR and LR can then be easily identified, and $C_{LR}$ and $C_{UR}$ can be calculated directly without any maximum or minimum operation. If multiple waypoints are considered, a selection procedure is needed to obtain the representative $C_{LR}$ and $C_{UR}$. The value $C$ can be calculated and integrated into the optimization functions of optical design software. By including multiple stray light paths, and after sufficient optimization, all considered stray light paths theoretically disappear from the optical system, and an optical structure with less stray light is obtained. Several general remarks about the algorithm are necessary here.
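One way the per-path $C$ values can enter the optimizer is as a one-sided penalty added to the merit function. The sketch below is an assumption about that integration, not the paper's implementation: the function `stray_light_penalty`, the margin, and the weight are hypothetical, and each $C$ value would be recomputed by the feature ray procedure of Sections 2.4-2.6 at every optimization cycle.

```python
def stray_light_penalty(C_values, margin=0.2, weight=1.0):
    """Penalty term to add to the design merit function.

    C_values : list of C parameters (Eq. (12)), one per controlled
               stray light path; each should end up below -margin.
    margin   : required clearance (mm), e.g. a manufacturing
               tolerance, so C = -margin is the target boundary.
    Each path contributes a quadratic penalty while C > -margin and
    nothing once the path is cut off with sufficient clearance.
    """
    return weight * sum(max(0.0, c + margin) ** 2 for c in C_values)

# Example: one path still present (C = +0.3 mm), one safely cut off
# (C = -0.8 mm); only the first contributes to the penalty.
penalty = stray_light_penalty([0.3, -0.8], margin=0.2)
```

A quadratic one-sided penalty keeps the merit function smooth near the constraint boundary, which suits the damped-least-squares optimizers typical of optical design software.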


Fig. 7. Workflow for stray light control based on proposed algorithm.


Firstly, not all kinds of stray light can be eliminated using this method. Our method provides a novel way of dealing with some troublesome cases where traditional solutions may fail, and it is more likely to be helpful in catadioptric systems, since stray light paths in these systems are complicated and changeable. There is a greater chance that a non-zero lower limit of field coordinates can be found in catadioptric systems, and only under this circumstance can the proposed method work.

Secondly, to evaluate and control stray light while minimizing the impact on the optical design process, all possible vignetting stops, that is, all URWs and LRWs, must be identified. This is why the generalized implementation considers all $T$ URWs/LRWs in total. If some potential URWs or LRWs are not considered, our method can still be used to eliminate specific stray light, but the eliminating condition may become stricter, placing additional pressure on the optical design process. Thus, to implement this algorithm most efficiently, adequate stray light analysis experience with the specific optical system is required to simplify the algorithm while maintaining its accuracy.
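A toy illustration of why dropping waypoints makes the condition stricter (the numbers are invented): the minimum over a subset of waypoint clearances can only be greater than or equal to the minimum over all of them, so a waypoint that would have blocked the path may be missed.

```python
C_t = [0.4, -0.2, 0.1]           # clearances at three hypothetical waypoints
full = min(C_t)                  # -0.2: stray light cut off (C < 0)
subset = min([C_t[0], C_t[2]])   # 0.1: the blocking waypoint was dropped
print(full < 0, subset < 0)      # True False
```

With the subset, the design would have to be pushed until one of the remaining waypoints vignettes the path, which is exactly the extra pressure mentioned above.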

Thirdly, we take both URs and LRs into consideration so that the value $C$ has a clear meaning: the physical distance to the appearance of stray light. Ignoring either the UR or the LR affects neither the ability to eliminate stray light nor a rough evaluation of it. Moreover, in many cases URs and LRs have different complexity. By considering only one of them, the amount of computation can be reduced and the analysis process accelerated. The price is that the value $C$ (equal to $min(C_{UR,t})$ or $min(C_{LR,t})$) then indicates a relative risk rather than an accurate one with a clear physical meaning.

Fourthly, though in traditional stray light suppression theory the size and position of the aperture stop are crucial to the system's performance [10], in this paper we do not discuss this aspect because it is irrelevant to the stray light inside the PHU. Since the imaging rays and these stray light paths share the same aperture stop, altering the aperture size does not affect the amount of stray light relative to the imaging rays.

Finally, different types of stray light paths require different implementations of this algorithm, and their eliminating conditions might conflict with each other. Therefore, considering many types of stray light may limit other aspects of optical performance. This clearly reflects the fact that optical design is fundamentally a balancing act.

2.8 Extra: A brief workflow for SL(a) and some further discussions

Here we also briefly demonstrate a more complicated case to give a glimpse of the capability of this method for stray light control. As Fig. 2(a) shows, the abnormal event forming SL(a) is ray splitting on S1. This type of stray light has not been mentioned in the published literature. The ray path diagram is shown in Fig. 8(a). We investigated a variety of PHU structures and found that SL(a) has 4 possible LRWs and 2 possible URWs. These waypoints and the UR/LR identifying indicators are marked in Fig. 8(b). The $\theta$-$I_{LRW}$ graph for the sample PHU is shown in Fig. 8(c).


Fig. 8. (a) Ray path diagram for SL(a). (b) UR/LR identifying indicators. (c) An example of $\theta$-$I_{LRW}$s graph. In (b) and (c) the subscript "SL(a)" is omitted.


According to the discussion in section 2.4 and Eq. (6), the field coordinate $\theta _{\pm LR}$ of $\pm$LR is the one satisfying Eq. (14).

$$\begin{cases} \boldsymbol{R}_{SL(a),\pm LR}=\Psi_{SL(a)}({\pm} 1,\theta_{{\pm} LR}) \\ min(I_{SL(a),LRW1},I_{SL(a),LRW2},I_{SL(a),LRW3},I_{SL(a),LRW4})=0. \end{cases}$$

Numerically, the left-hand side of the second equation in Eq. (14) is a piecewise function, as Fig. 8(c) indicates. Depending on the error-handling algorithm of the ray tracing program and the order in which the ray path intersects surfaces, only one section of this function corresponds to the correct and complete ray path needed in the stray light control process. The other sections either involve a ray tracing error, such as the occurrence of total internal reflection or missed lens surfaces, or contain rays that propagate inside the PHU in an incorrect order. Thus, an extra searching and rejection process is necessary to extract the section we need. The valid section is highlighted with the green frame in Fig. 8(c). Moreover, inside this section the function is not monotonic and may have multiple roots, so some traditional solving algorithms cannot be utilized, or may lead to the wrong result. The valid $\theta _{\pm LR}$ is the smaller root of Eq. (14), since rays for both roots exist and the smaller root corresponds to a lower field coordinate. An appropriate iteration algorithm is needed to derive the valid $\theta _{\pm LR}$. Some related feature rays and corresponding parameters are depicted in Fig. 9(a). The value $C_{SL(a)}$ is given by Eq. (15).

$$\begin{cases} C_{SL(a),URm}=max(C_{SL(a),+URm},C_{SL(a),-URm}) \\ C_{SL(a),LRn}=max(C_{SL(a),+LRn},C_{SL(a),-LRn}) \\ C_{SL(a)}=max(min(C_{SL(a),URm}),min(C_{SL(a),LRn})) \end{cases} \Bigg| _{m=1,2,3,4 \; n=1,2}.$$
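The searching-and-rejection and root isolation steps described before Eq. (15) can be sketched as follows. The grid scan, NaN-based rejection, and bisection refinement are our illustrative assumptions, and the toy indicator merely stands in for $min(I_{SL(a),LRW})$ as a function of $\theta$; a real implementation would reject samples with ray tracing errors or wrong intersection order here.

```python
import numpy as np

def smallest_root(indicator, thetas):
    """Return the smaller root of indicator(theta) = 0 over a sampled grid,
    rejecting samples where the backward trace failed (signalled as NaN)."""
    vals = np.array([indicator(t) for t in thetas])
    valid = ~np.isnan(vals)                        # searching-and-rejection step
    for i in range(len(thetas) - 1):
        if valid[i] and valid[i + 1] and vals[i] * vals[i + 1] <= 0:
            lo, hi = thetas[i], thetas[i + 1]      # first sign change = smaller root
            for _ in range(60):                    # refine by plain bisection
                mid = 0.5 * (lo + hi)
                if indicator(lo) * indicator(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
    return None  # indicator never crosses zero: no true LR exists

# Toy non-monotonic indicator with roots at theta = 1 and theta = 3;
# the smaller root is the valid theta_LR.
f = lambda t: -(t - 1.0) * (t - 3.0)
theta_lr = smallest_root(f, np.linspace(0.0, 4.0, 81))
print(round(theta_lr, 6))  # 1.0
```

Scanning for the first sign change before refining is what guards against a generic root finder landing on the larger, invalid root of the non-monotonic section.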


Fig. 9. (a) Feature rays analysis for SL(a). (b) Feature rays analysis for SL(a) of a SL(a)-free PHU. (c) $\theta$-$I_{LRW}$s graph of a SL(a)-free PHU. In all figures the subscript "SL(a)" is omitted.


Now the status of SL(a) is completely evaluated and quantified, and can be optimized in the design. By this means SL(a) can also be eliminated.

There is also the possibility that $I_{SL(a),LRW2}$ never becomes positive. In such a case, SL(a) is already eliminated before $\pm$LR is even found; in fact, the so-called LR does not exist. To estimate the risk of SL(a), the ray with the maximum $I_{SL(a),LRW2}$ is then taken as the best-fit LR (bfLR). An example of this scenario is shown in Fig. 9(b-c). Note that the +bfLR does not pass through any LRW in Fig. 9(b), and $I_{SL(a),LRW2}$ is always negative in the section with the green frame, as Fig. 9(c) shows. The calculation of the $C_{SL(a),LR}$s can still be executed, and the identification and quantification of the UR are not affected. This possibility is taken into consideration to prevent automated programs from failing when applying the proposed algorithm to SL(a).
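An automated implementation can guard against this case with a simple fallback; the sketch below (our own illustrative names and sample values, not the authors' code) returns the first sample where the indicator becomes non-negative when a true LR exists, and the bfLR sample otherwise.

```python
import numpy as np

def pick_lr(thetas, i_lrw2):
    """Return ('LR', theta) at the first sample where the indicator turns
    non-negative, or fall back to the best-fit LR at the maximizing sample."""
    vals = np.asarray(i_lrw2)
    if vals.max() >= 0.0:
        return "LR", thetas[int(np.argmax(vals >= 0.0))]
    return "bfLR", thetas[int(np.argmax(vals))]   # SL(a) already eliminated

# Indicator stays negative everywhere: no true LR, bfLR at its maximum.
kind, theta = pick_lr(np.array([40.0, 50.0, 60.0]), [-3.0, -0.5, -2.0])
print(kind, theta)  # bfLR 50.0
```

Either branch yields a ray from which the $C_{SL(a),LR}$ values can be computed, so the optimization loop never encounters a missing feature ray.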

3. Optical design and simulation

The evaluation function of the proposed algorithm can easily be implemented in the optical design process using user-defined optimization functions or external extension software. Taking multiple aspects such as optical performance and physical diameter into consideration, utilizing the proposed method and its main idea to eliminate or alleviate SL(a), SL(b) and SL(d), and adopting a single block of PHU rather than a couplet to avoid SL(c), we designed an F/3.5 PAL system with an FoV of 360$^{\circ } \times$(40$^{\circ }$-100$^{\circ }$) to verify the ability of the proposed method to handle stray light. The optical structure of the designed PAL system is shown in Fig. 10(a) and its optical performance in Fig. 11. The design has low $f-\theta$ distortion (less than 2$\%$) and excellent image quality throughout the entire FoV range. The feature rays picked by our algorithm to characterize and control SL(a) and SL(d) are shown in Fig. 10(b). These two rays are used to calculate the value $C$ of each stray light path. The ray-surface intersections of both rays lie beyond the effective aperture, resulting in negative values of $C$ and indicating that the two types of stray light are totally cut off.


Fig. 10. (a) Optical layout of the PAL system considering proposed method. (b) Feature rays of SL(a) and SL(d).



Fig. 11. (a) MTF for all FoVs. (b) $f-\theta$ distortion.


To further examine the stray light in the demonstrated PAL system, a full stray light analysis is performed, yielding the PST curve shown in orange in Fig. 12(a). For comparison, the PST curve of the PAL system in Fig. 1(b) is plotted as the blue curve in Fig. 12(a). The newly designed PAL system adopts a single block of lens as the PHU, while the previous one uses a couplet. The coupling surface of the PHU may induce some stray light, e.g., SL(c). Thus, for a fairer comparison, the opto-mechanical model of the previous PAL system is modified to ensure the coupling does not induce irrelevant stray light. Both PST curves are then mainly contributed by SL(a), SL(b), SL(d) and some stray light scattered from optical or mechanical surfaces.
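The point source transmittance compares the irradiance reaching the detector with that of a collimated input at off-axis angle $\theta$, i.e. $PST(\theta)=E_{detector}(\theta)/E_{input}(\theta)$. Once both curves are exported from the analysis software, the suppression can be quantified as in this small sketch; the arrays are toy values, not the paper's data.

```python
import numpy as np

def pst_suppression(pst_old, pst_new):
    """Orders of magnitude of stray light suppression at each field angle."""
    return np.log10(np.asarray(pst_old) / np.asarray(pst_new))

gain = pst_suppression([1e-4, 2e-4], [1e-6, 2e-6])
print(gain)  # [2. 2.]
```

A pointwise log-ratio like this is what underlies statements such as "the PST value decreases by 2 orders of magnitude" at a given field angle.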


Fig. 12. (a) PST curve of the newly designed PAL system and the PAL system in Fig. 1(b). (b) Stray light spots of newly designed and previous PAL system. The upper row corresponds to the new system. The previous system suffers from SL(a) and SL(d), as highlighted by red boxes.


In the blue curve, the high value around 0$^{\circ }$ and the peak around 13$^{\circ }$ are caused by SL(a) and SL(b) respectively, and the rise between 75$^{\circ }$ and 92$^{\circ }$ is due to the appearance of SL(d). This representative peak and rise are absent in the PST curve of the prototype. The PST value decreases by at least 2 orders of magnitude around 0$^{\circ }$, where SL(a) may appear, since the stray light spot of SL(a) is bright and occupies a large area on the image plane. Though SL(d) does not lead to a severe rise in the curve due to its small occupied area, its impact on the image is intolerable in practice. The simulated stray light spots of both PAL systems at several FoVs are shown in Fig. 12(b), indicating that SL(a) and SL(d) are eliminated in the new system. Except for some scattering spots, which are exaggerated here, the images from the optimized system are very clean.

4. Experiments

The newly designed PAL system is implemented as a prototype. It has a compact size, as shown in Fig. 13(a) and Fig. 13(b). We tested the prototype in an indoor environment with a table lamp and in an outdoor environment on a cloudless day. Another 360$^{\circ } \times$(40$^{\circ }$-100$^{\circ }$) F/3 PAL system with similar optical parameters was tested in the same manner for comparison. The testing frames are shown in Fig. 13(c)-(f). In the indoor testing frames, from top to bottom, the pose of both PAL systems is intentionally set so that the field angle of the table lamp is approximately 100$^{\circ }$, 70$^{\circ }$, 50$^{\circ }$, 35$^{\circ }$ and 10$^{\circ }$ respectively. The light spots caused by SL(a), SL(b) and SL(d) are marked with the same colors as in Fig. 1(a). Figure 13(d) and Fig. 13(f) indicate that, if not properly controlled, the stray light spots from SL(a), SL(b) and SL(d) are significant and appear over a wide range of FoV. The size and form of these spots vary with FoV even when the light source causing them is beyond the visible range. In contrast, the pictures captured by the newly designed optical system, as shown in Fig. 13(c) and Fig. 13(e), are completely clean in the imaging area. The residual stray light spot in the central blind area in Fig. 13(e) is caused by scattering from mechanical structures near the aperture stop under illumination by an extremely bright source, and does not invade the effective area of the image. Demo videos of both PAL systems in multiple environments are provided in Visualization 1, Visualization 2, and Visualization 3. The experimental results show that in all testing environments our prototype provides a clean and sharp image, as the simulation predicts.


Fig. 13. (a) The manufactured prototype. (b) The physical size of the prototype. (c) Indoor test of the prototype. (d) Indoor test of the comparing system. (e) Outdoor test of the prototype. (f) Outdoor test of the comparing system. Frames from (e) and (f) are extracted from Visualization 1.


5. Conclusion and future work

In this paper, a novel and general stray light analysis and suppression method is proposed and implemented to design and manufacture a PAL system free from stray light. The novelty lies in that we treat stray light like imaging rays and propose a method to realize real-time characterization and suppression of troublesome stray light paths. The universality lies in that the proposed method covers all scenarios we expect it to resolve, as an optical design algorithm for the class of PAL systems. Optical simulation shows that the major stray light spots in typical PAL systems disappear in the newly designed system. The PST value decreases by 2 orders of magnitude around the FoV range where SL(a) appears, and also drops substantially throughout the entire FoV range. Experimental results for the manufactured prototype under complex and varying external light sources indicate that the prototype provides a clean and sharp image under all circumstances. The ability of the proposed method to eliminate stray light in a PAL system is thus validated both theoretically and experimentally. Our method provides a whole new approach to designing high-performance catadioptric optical systems in which light paths are complicated and changeable, especially when the desired FoV is ultra-large.

Future work includes implementing and extending the proposed method and its ideas to atypical PAL systems, in which aspheric and freeform surfaces might help to reduce the size or improve optical performance, or in which the aperture stop is inside the RL group, and to other modern catadioptric or reflective optical systems such as off-axis multi-mirror systems.

Funding

National Natural Science Foundation of China (62175211).

Acknowledgments

We thank Yihe Feng and Zhifeng Wang from Hangzhou Huanjun Technology Co., Ltd. for their fruitful discussions and support of this work.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Luo, J. Bai, X. Zhou, X. Huang, Q. Liu, and Y. Yao, “Non-blind area pal system design based on dichroic filter,” Opt. Express 24(5), 4913–4923 (2016). [CrossRef]  

2. S. Gao, E. A. Tsyganok, and X. Xu, “Design of a compact dual-channel panoramic annular lens with a large aperture and high resolution,” Appl. Opt. 60(11), 3094–3102 (2021). [CrossRef]  

3. S. Gao, K. Yang, H. Shi, K. Wang, and J. Bai, “Review on panoramic imaging and its applications in scene understanding,” IEEE Trans. Instrum. Meas. 71, 1–34 (2022). [CrossRef]  

4. A. Amani, J. Bai, and X. Huang, “Dual-view catadioptric panoramic system based on even aspheric elements,” Appl. Opt. 59(25), 7630–7637 (2020). [CrossRef]  

5. H. Chen, K. Wang, W. Hu, K. Yang, R. Cheng, X. Huang, and J. Bai, “PALVO: visual odometry based on panoramic annular lens,” Opt. Express 27(17), 24481–24497 (2019). [CrossRef]  

6. J. Wang, Y. Liang, and M. Xu, “Design of panoramic lens based on ogive and aspheric surface,” Opt. Express 23(15), 19489–19499 (2015). [CrossRef]  

7. Y. Huang, Z. Liu, Y. Fu, and H. Zhang, “Design of a compact two-channel panoramic optical system,” Opt. Express 25(22), 27691–27705 (2017). [CrossRef]  

8. S. Gao, L. Sun, Q. Jiang, H. Shi, J. Wang, K. Wang, and J. Bai, “Compact and lightweight panoramic annular lens for computer vision tasks,” Opt. Express 30(17), 29940–29956 (2022). [CrossRef]  

9. Q. Jiang, H. Shi, L. Sun, S. Gao, K. Yang, and K. Wang, “Annular computational imaging: Capture clear panoramic images through simple lens,” IEEE Trans. Comput. Imaging 8, 1250–1264 (2022). [CrossRef]  

10. E. C. Fest, Stray Light Analysis and Control (SPIE, 2013).

11. D. Cheng, H. Chen, C. Yao, Q. Hou, W. Hou, L. Wei, T. Yang, and Y. Wang, “Design, stray light analysis, and fabrication of a compact head-mounted display using freeform prisms,” Opt. Express 30(20), 36931–36948 (2022). [CrossRef]  

12. S. M. Pompea, R. N. Pfisterer, and J. S. Morgan, “Stray light analysis of the Apache Point Observatory 3.5-m telescope system,” Proc. SPIE 4842, 128–138 (2003). [CrossRef]  

13. Z. Xu, D. Liu, C. Yan, and C. Hu, “Stray light nonuniform background correction for a wide-field surveillance system,” Appl. Opt. 59(34), 10719–10728 (2020). [CrossRef]  

14. L. Clermont, C. Michel, and Y. Stockman, “Stray Light Correction Algorithm for High Performance Optical Instruments: The Case of Metop-3MI,” Remote Sens. 14(6), 1354 (2022). [CrossRef]  

15. C. Huang, M. Zhang, Y. Chang, F. Chen, L. Han, B. Meng, J. Hong, D. Luo, S. Li, L. Sun, and B. Tu, “Directional polarimetric camera stray light analysis and correction,” Appl. Opt. 58(26), 7042–7049 (2019). [CrossRef]  

16. Y. Zong, S. W. Brown, G. Meister, R. A. Barnes, and K. R. Lykke, “Characterization and correction of stray light in optical instruments,” Proc. SPIE 6744, 67441L (2007). [CrossRef]  

17. T. Doi, “Panoramic imaging lens,” U.S. Patent 6,646,818 (Nov. 11, 2003).

18. V. Martynov, T. Jakushenkova, and M. Urusova, “New constructions of panoramic annular lenses: design principle and output characteristics analysis,” Proc. SPIE 7100, 71000O (2008). [CrossRef]  

19. Z. Huang, J. Bai, T. X. Lu, and X. Y. Hou, “Stray light analysis and suppression of panoramic annular lens,” Opt. Express 21(9), 10810–10820 (2013). [CrossRef]  

20. Y. Li, Z. Liu, Y. Huang, and X. Wang, “Analysis and suppression of stray light in miniaturized panoramic system,” J. Appl. Opt. 41(3), 455–461 (2020). [CrossRef]  

21. D. Hill, “Drawing specific rays in layout plots,” https://support.zemax.com/hc/en-us/articles/1500005576462-Drawing-specific-rays-in-layout-plots.

Supplementary Material (3)

Visualization 1: Outdoor demo video of the designed stray-light-free PAL system.
Visualization 2: Demo video of the designed stray-light-free PAL system in the garage.
Visualization 3: Demo video of the designed stray-light-free PAL system on the desktop.




