
Design of Fourier ptychographic illuminator for single full-FOV reconstruction

Open Access

Abstract

Fourier ptychographic microscopy (FPM) is a high-throughput imaging technique that achieves spatial-temporal modulation via sequential angle-varied LED illumination. The illuminator is therefore one of the key components, and its design is significant. However, because of spherical-wave illumination, partial coherence, and aperture-induced vignetting, the acquired images must first be processed in blocks and then reconstructed in parallel on a graphics processing unit (GPU). The high cost makes this unappealing compared with commercial whole-slide imaging systems that run on a low-cost central processing unit (CPU). In particular, vignetting severely breaks the space-invariant model and induces obvious artifacts in FPM, and it is the most difficult of the three problems. The conventional remedy is to divide the field of view (FOV) into many tiles and omit the imperfect images, which is crude and may discard low-frequency information. In this paper, we reevaluate the conditions of vignetting in FPM. Our analysis shows that the maximum side length of the FOV for a single full-FOV reconstruction via a 4×/0.1 NA objective and a 4 mm spacing LED array is 0.759 mm in theory, while almost 1.0 mm can be achieved in practice due to the tolerance of the algorithm. We found that the FPM system can treat images with a vignetting coefficient Vf below 0.1 as brightfield images and those with Vf larger than 0.9 as darkfield images, respectively. We report an optimized distribution for designing an illuminator free of the vignetting effect from off-the-shelf commercial products, which can reconstruct the full FOV at one time via a CPU. By adjusting the distribution of LED units, the system can retrieve the object with an FOV side length up to 3.8 mm in a single full-FOV reconstruction, which is the largest FOV that a typical 4×/0.1 NA objective with a field number of 22 mm can afford.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fourier ptychographic microscopy (FPM) is a computational optical imaging technique, first invented by Yang et al. in 2013 [1,2]. FPM combines the synthetic aperture concept, which originated in microwave imaging, with optical phase retrieval to achieve a large field of view (FOV) with a low numerical aperture (NA) objective. A light emitting diode (LED) array is utilized as the light source for multi-angle illumination, enabling the acquisition of multiple low-resolution (LR) images that are stitched together in the Fourier domain to generate high-resolution (HR) quantitative phase images. FPM extends Abbe's far-field diffraction limit ${\lambda / {2\textrm{NA}}}$ by treating the illumination NA and objective NA as separate variables, giving the resolution ${\lambda / {({\textrm{NA}_{\textrm{illu}} + \textrm{NA}_{\textrm{obj}}} )}}$. This improvement is discussed at length in Goodman’s book Introduction to Fourier Optics (4th edition) [3]. Since its invention, FPM has undergone significant advancements in noise suppression [4], high-throughput imaging [5–9], high-speed imaging [10–13], three-dimensional imaging [14–19] and colorization [20–22]. These developments have demonstrated immense potential for biomedical applications [23–25], optical cryptosystems [26] and remote sensing [27].

However, the implementation of FPM encounters the difficulty that the LR images have to be divided into blocks before reconstruction; FPM reconstruction is performed on each block, and the blocks are finally stitched together. Block processing saves memory and allows the data to be processed in parallel on a GPU for faster speed [1], but it also gives rise to the following issue. FPM is primarily applied in biomedical diagnosis and industrial testing to assist medical professionals and researchers in making more accurate analyses and assessments of cells and pathologies. Many healthcare organizations are deterred by the high costs associated with modifying, manufacturing and maintaining a GPU image analysis device. With an appropriate combination of hardware and software, CPUs have demonstrated their ability to deliver exceptional results in AI applications, particularly for precise and rapid AI inference [28,29]. The medical imaging industry is increasingly turning to CPUs for this benefit. For example, Intel has used more user-friendly CPUs together with the OpenVINO toolkit to develop AI solutions for the rapid detection of pathology section images, significantly enhancing the productivity of pathology testing in healthcare facilities [28,29]. As the computing power and speed of CPUs continue to increase, they will certainly drive the rapid development of medical imaging and other imaging fields. Furthermore, block processing requires a minimum image overlap rate of 10%, which results in additional computational effort.

In fact, there are three primary rationales for block processing in FPM. The most crucial factor is that the pupil cannot be fully illuminated due to hardware limitations, leading to vignetting. This causes the FPM model to lose its linear space-invariant (LSI) properties [30], resulting in pronounced “creasing” artifacts at the periphery of the FOV in the reconstructed image. The LSI properties of the model can be preserved by employing block processing to prevent vignetting [31]. The second aspect pertains to the partial coherence of the light emitted by the source. Since FPM is based on a coherent imaging model, block processing enables us to approximate the source as coherent light [32]. The third aspect is the large divergence angle of LED illumination, which, strictly speaking, is spherical-wave illumination. When the light source array is not very far from the sample, the illumination cannot be regarded as plane-wave incidence. After block processing, however, each block can be considered to be under plane-wave illumination, so the limitation on the distance between the light source array and the sample is no longer strict. These three factors render block processing an indispensable step in FPM.

If alternative low-cost and efficient solutions can address the three aspects above, block processing becomes redundant, yielding greater hardware cost savings. The second and third issues can be solved by incorporating optical elements such as filters and condensers into the optical system. The conventional solution to mitigating the impact of vignetting is the block processing scheme proposed by Pan et al., which segments the acquired images and discards those that do not conform to the LSI model [31]. However, this method may discard images by misjudgment and lose information. From the perspective of optical system design, the effects of vignetting can be eliminated by customizing specialized objectives and adjusting the aperture size and parfocal length, but this escalates system costs because existing commercial products such as objectives and tube lenses can no longer be leveraged.

In this paper, we report an optimized distribution for designing an illuminator free of the vignetting effect from off-the-shelf commercial products, which can reconstruct the full FOV at one time via a CPU. First, we reevaluated the conditions of vignetting in FPM. A mathematical model was established to describe the relationship between the pupil and the FOV in object space. We propose a method to calculate the maximum side length of the FOV and the optimal parameters for a single full-FOV reconstruction in FPM without block processing. It was found that the maximum diameter of the FOV is 1.07 mm for a single full-FOV reconstruction via a 4×/0.1 NA objective and a 4 mm spacing LED array in theory, while almost 1.41 mm can be achieved in practice due to the tolerance of the algorithm. We found that the FPM system can treat images with a vignetting coefficient $V_{f}$ below 0.1 as brightfield images and those with $V_{f}$ larger than 0.9 as darkfield images, respectively. Further, by adjusting the distribution of LEDs via the combination of multiple-height datasets, the system could retrieve the object with an FOV diameter up to 5.37 mm in a single full-FOV reconstruction, which is the largest FOV that a typical 4×/0.1 NA objective with a field number of 22 mm can afford. Finally, the design of a planar LED distribution can be obtained. Our optimization method can also be applied to other systems with different parameters.

2. Methods

2.1 Vignetting of FPM

The FPM imaging system can be simplified to a 4-f coherent imaging system. Under the paraxial approximation, it can further be regarded as an LSI system. However, the exit wave leaving the objective is limited by the finite aperture of the objective lens, resulting in the vignetting effect [31]. The manifestation and characteristics of vignetting depend on the position of the light source relative to the objective. For the green beam depicted in Fig. 1(a), the aperture of the objective obstructs a portion of the light, so the pupil on the Fourier plane behind the objective is incompletely filled. Consequently, the acquired image is no longer a complete brightfield image but a combination of brightfield and darkfield. A set of LR raw images is presented in Fig. 1(b), acquired with a camera (Hamamatsu Co. Ltd., Japan, FlashV3_4.0, 2048 × 2048, 6.5 μm pitch) and a 4×/0.1 NA objective with a field number of 22 mm. Most of the raw images are clearly half bright and half dark and do not conform to the LSI imaging model. Figures 1(c1-1)-(c1-3) display the reconstructed amplitude, phase and Fourier spectrum for an FOV side length of 0.40625 mm (250 × 250 pixels). The amplitude and phase images are free of wrinkle artifacts, and the spectrum is centered on a single point. When the FOV side length is enlarged to 1.38125 mm (850 × 850 pixels), significant artifacts appear in both the amplitude and phase images, and the central zero frequency is broadened and no longer a single point. This is precisely the vignetting effect, in which the point spread function (PSF) of the system acquires an additional quadratic term relative to that of the original LSI model [31].


Fig. 1. Generation of vignetting. (a) Simulation diagram depicting the generation of vignetting in optical systems; (b) Acquired LR raw images in a typical FPM system; (c1-1, c1-2, c1-3) Reconstructed HR intensity image, phase image and spectrum with the FOV side length of 0.40625 mm, respectively; (c2-1, c2-2, c2-3) Reconstructed HR intensity image, phase image and spectrum with the FOV side length of 1.38125 mm, respectively.


The vignetting coefficient can be employed to quantify the degree of vignetting in an LR image and is determined by the ratio between the area of darkfield and the total area of the image, which is given as:

$$V_{f} = \frac{{S_{dark}}}{{S_{bright} + S_{dark}}}$$
where $V_{f}$ represents the vignetting coefficient, and $S_{dark}$ and $S_{bright}$ respectively denote the areas of darkfield and brightfield in the image. When $V_{f}$ equals 1, the image is entirely a darkfield image; when $V_{f}$ equals 0, it is entirely a brightfield image. During image acquisition the intensity also decays from the center towards the periphery of the brightfield region, so the vignette edges have indistinct boundaries. The vignetting of the LR images acquired by the detector is predetermined once the position of the light source relative to the objective is known. The vignetting coefficients we define, therefore, are not those measured from actual captured LR images but rather the ideal vignetting coefficients obtained through simulation.
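As a minimal illustration of Eq. (1), the following Python sketch estimates $V_{f}$ for an ideal circular pupil by grid-sampling a square FOV; the function name, sampling density and the assumption that the pupil is centered on the optical axis are ours, not the paper's.

```python
import numpy as np

def vignetting_coefficient(fov_center, fov_side, pupil_diameter, samples=1001):
    """Estimate V_f = S_dark / (S_bright + S_dark) for a square FOV, Eq. (1).

    Points of the FOV lying outside the circular pupil (centered on the
    optical axis) count as darkfield; the rest count as brightfield.
    """
    half = fov_side / 2.0
    xs = np.linspace(fov_center[0] - half, fov_center[0] + half, samples)
    ys = np.linspace(fov_center[1] - half, fov_center[1] + half, samples)
    X, Y = np.meshgrid(xs, ys)
    dark = X**2 + Y**2 > (pupil_diameter / 2.0) ** 2
    return dark.mean()

# A 1 mm FOV whose center sits on the edge of a D = 5.5 mm pupil is roughly
# half bright and half dark, so V_f is close to 0.5:
print(vignetting_coefficient((2.75, 0.0), 1.0, 5.5))
```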

2.2 Maximum FOV that can be reconstructed without vignetting

During the acquisition of LR images in an FPM system, as shown in Fig. 2(a), the spatial position of the acquired LR image remains unchanged as each LED unit on the array is lit in sequence, but the pupil shifts. To simplify the computation, the concept of relative motion can be employed. As illustrated in Fig. 2(b), if the spatial position of the pupil is assumed constant, a fixed interval between adjacent FOVs is established as adjacent LED units are sequentially illuminated. Figure 2(c) shows the captured images under illumination by a 5 × 5 LED array when the FOV is square. By imposing the constraints that every acquired LR image is completely free of vignetting (either full brightfield or full darkfield) and that the dataset contains $n \times n$ brightfield images, the relative position and size of both the pupil and the object field can be modeled, yielding the following relationship:

$$\left\{ {\begin{array}{{c}} {X \le \frac{D}{{\sqrt 2 }} - ({n - 1} )L}\\ {X \le - D + ({n + 1} )L} \end{array}} \right.$$
where D represents the diameter of the pupil, L denotes the distance between adjacent LED units, and X stands for the side length of FOV. Here, D is given as:
$$D = \frac{{FN}}{{Mag}}$$
where Mag represents the magnification of the objective and FN is the field number of the objective. In this case, the maximum achievable side length of FOV without vignetting for FPM reconstruction is denoted as $X_{TOV}$. When X is equal to $X_{TOV}$, the maximum value of L can be calculated as $L_{TOV} = {{\left( {1 + \sqrt 2 } \right)D} / {\left( {2\sqrt 2 n} \right)}}$.


Fig. 2. Scheme diagram for modelling the system using constraints. (a) Diagram depicting the displacement of pupil on the object plane when illuminating adjacent LED units; (b) Schematic representation of the relative motion between pupil and FOV; (c) The relative positions of images obtained through sequential illumination of $n \times n$ LED units.


In experiments, however, the parameter L is not set directly. Instead, L is determined by the distance between the LED array and the sample, which can be described as:

$$L = \frac{{D \cdot \frac{{d_{LED}}}{{\sqrt {d_{LED}{^2} + {h^2}} }}}}{{2NA}} \approx \frac{{D \cdot d_{LED}}}{{2NA \cdot h}}$$
where ${d_{LED}}$ represents the distance between adjacent LED units, h denotes the distance between the LED array and the sample, and $\frac{{d_{LED}}}{{\sqrt {d_{LED}{^2} + {h^2}} }}$ is the step size of the illumination NA. Therefore, when L equals $L_{TOV}$, the corresponding height is $h_{TOV} = {{D \cdot d_{LED}} / {({L_{TOV} \cdot 2NA} )}}$.
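To make the design procedure concrete, the sketch below solves Eqs. (2)-(4) at equality for the vignetting-free optimum; the function and variable names are our own, and the exact (non-approximated) form of Eq. (4) is inverted to recover h. Section 3.2 applies this to the experimental parameters.

```python
import math

def illuminator_optimum(FN, mag, NA, d_led, n):
    """Vignetting-free design point from Eqs. (2)-(4).

    Returns (X_TOV, L_TOV, h_TOV): the maximum FOV side length, the FOV
    spacing on the object plane, and the LED-to-sample distance (all in the
    units of FN and d_led, e.g. mm).
    """
    D = FN / mag                                  # pupil diameter, Eq. (3)
    # Setting both lines of Eq. (2) to equality gives D/sqrt(2) + D = 2*n*L:
    L_tov = (1 + math.sqrt(2)) * D / (2 * math.sqrt(2) * n)
    X_tov = -D + (n + 1) * L_tov
    # Invert Eq. (4) without the small-angle approximation:
    s = D * d_led / (2 * NA * L_tov)              # s = sqrt(d_led^2 + h^2)
    h_tov = math.sqrt(s**2 - d_led**2)
    return X_tov, L_tov, h_tov
```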

2.3 FPM illuminator for single full-FOV reconstruction

When each unit of a uniformly arranged LED array is illuminated, the FOV is fixed relative to the pupil. The part of the FOV inside the pupil is brightfield, while the part outside the pupil is darkfield. If the FOV straddles the edge of the pupil, the image is half brightfield and half darkfield, which makes the system incompatible with the LSI model. As the parameter X increases, this phenomenon becomes increasingly apparent. By adjusting the position of the FOV relative to the pupil, these images can be transformed to fit the LSI model, effectively eliminating vignetting effects and further expanding the FOV for FPM reconstruction.

Assuming that $\textrm{t} \times \textrm{t}$ LED units of the planar LED array are utilized, each LED unit can be numbered as:

$$\begin{array}{{cccc}} 1&2& \ldots &t\\ {t + 1}&{t + 2}& \cdots &{2t}\\ \vdots & \vdots & \ddots & \vdots \\ {({t - 1} )t + 1}&{({t - 1} )t + 2}& \cdots &{{t^2}} \end{array}.$$

The coordinates of FOV center corresponding to illumination of each LED unit can be expressed using Table 1. Here, we denote the coordinates of FOV center corresponding to central LED unit illumination as (0,0). Since the center-to-center distance between adjacent object fields is $L_{TOV}$, the central coordinates of other object fields can be determined.

Let $C(m )= ({x,y} )$ denote the center point of the FOV when the mth LED unit is illuminated. Equation (6) gives the coordinates of the four vertices of the square FOV with side length X.

$$\begin{array}{{c}} {C_1(m )= \left( {x - \frac{X}{2},y + \frac{X}{2}} \right)}\\ {C_2(m )= \left( {x + \frac{X}{2},y + \frac{X}{2}} \right)}\\ {C_3(m )= \left( {x - \frac{X}{2},y - \frac{X}{2}} \right)}\\ {C_4(m )= \left( {x + \frac{X}{2},y - \frac{X}{2}} \right)} \end{array}$$
where the subscripts 1-4 index the four vertices of the square.
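The numbering of Eq. (5) and the geometry of Table 1 and Eq. (6) can be written out as follows; this is our own transcription, assuming an odd t and the sign convention that row numbers increase downward while the object-plane y axis points up.

```python
def fov_center(m, t, L_tov):
    """Center C(m) of the FOV when LED unit m of a t x t array is lit.

    m follows the 1-based row-major numbering of Eq. (5); the central LED
    maps to (0, 0), and each lattice step shifts the FOV by L_tov.
    """
    row, col = divmod(m - 1, t)
    c = (t - 1) // 2                      # index of the central row/column (odd t)
    return ((col - c) * L_tov, (c - row) * L_tov)

def fov_vertices(center, X):
    """Vertices C_1..C_4 of the square FOV of side X centered at C(m), Eq. (6)."""
    x, y = center
    h = X / 2.0
    return [(x - h, y + h), (x + h, y + h), (x - h, y - h), (x + h, y - h)]

# With t = 5 and L_TOV = 1.565 mm this matches the values listed later in Table 2:
print(fov_center(13, 5, 1.565))   # central LED: (0.0, 0.0)
print(fov_center(3, 5, 1.565))    # LED 3: (0.0, 3.13)
```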


Table 1. Coordinates of the FOV center when each LED unit is illuminated

When X exceeds a certain threshold, the FOV must be shifted; Fig. 3 illustrates the direction and distance of the shift as each LED unit is illuminated. There are two distinct modes of movement, depending on whether the image is brightfield or darkfield. When the mth LED unit is illuminated, if $|{\overrightarrow {C(m )C({Centre} )} } |\le {D / 2}$, the image is brightfield; otherwise it is darkfield. $C({Centre} )$ denotes the FOV center when the central LED unit is illuminated. Assuming that the coordinate of $C(m )$ is $({x,y} )$, the FOV moves as follows (a code sketch of these rules is given after the darkfield case):

 figure: Fig. 3.

Fig. 3. Flow chart of the optimization method.

Download Full Size | PDF

For bright fields:

  • (1) Ascertain if relocation of LED unit numbered m is necessary given a set size of X.
When $\max ({|{\overrightarrow {C_{p}(m )C({Centre} )} } |} )> {D / 2}\;({p = 1,2,3,4} )$, the corresponding LED unit needs to be moved; the p achieving the maximum is denoted max, and the corresponding $C_{p}(m )$ is denoted $C_{\max} (m )$. (If more than one value of p attains the maximum, any one of them will do.)
  • (2) Directional vector of movement of the FOV:
    $$sgp(m )= \frac{{\overrightarrow {C(m )C({Centre} )} }}{{|{\overrightarrow {C(m )C({Centre} )} } |}}$$
where $sgp(m )$ is the unit vector of the direction of travel of the FOV when the mth LED unit is illuminated;

  • (3) For a FOV with the centre of the FOV being $C(m )$, the following cases can be classified according to their coordinates:

①When $|x |= |y |$, the FOV moves a distance of:

$$N(m )= |{\overrightarrow {C_{\max} (m )C({Centre} )} } |- \frac{D}{2}$$
where $N(m )$ is the distance travelled by the FOV when the mth LED unit is illuminated; the farthest vertex $C_{\max} (m )$ sets the required shift, consistent with Eq. (18) below.

②When x = 0 or y = 0, the FOV moves a distance of:

$$N(m )= \sqrt {{{|{\overrightarrow {C_{\max} (m )C({Centre} )} } |}^2} - {{\left( {\frac{X}{2}} \right)}^2}} - \sqrt {{{\left( {\frac{D}{2}} \right)}^2} - {{\left( {\frac{X}{2}} \right)}^2}} $$
③else
$$N(m )= |{\overrightarrow {C_{\max} (m )C({Centre} )} } |- |{\overrightarrow {b_1(m )C_{\max} ({Centre} )} } |$$
where $C_{\max} ({Centre} )$ is the vertex of the central FOV corresponding to the same p as $C_{\max} (m )$, and $b_1(m )$ is the intersection of the line through $C_{\max} ({Centre} )$ and $C_{\max} (m )$ with the boundary of the pupil.

For dark fields:

  • (1) Ascertain if relocation of LED unit numbered m is necessary given a set size of X.
When $\min ({|{\overrightarrow {C_{p}(m )C({Centre} )} } |} )< {D / 2}\;({p = 1,2,3,4} )$, the corresponding LED unit needs to be moved; the p achieving the minimum is denoted min, and the corresponding $C_{p}(m )$ is denoted $C_{\min} (m )$. (If more than one value of p attains the minimum, any one of them will do.)
  • (2) Directional vector of movement of the FOV:
    $$sgp(m )= \frac{{\overrightarrow {C({Centre} )C(m )} }}{{|{\overrightarrow {C({Centre} )C(m )} } |}}.$$
  • (3) For a FOV with the centre of the FOV being $C(m )$, the following cases can be classified according to their coordinates:
①When $|x |= |y |$, the FOV moves a distance of:
$$N(m )= \frac{D}{2} - |{\overrightarrow {C_{\min} (m )C({Centre} )} } |.$$
②When x = 0 or y = 0, the FOV moves a distance of:
$$N(m )= \frac{D}{2} - \left( {|{\overrightarrow {C_{\min} (m )C({Centre} )} } |- \frac{X}{2}} \right)$$
③else
$$N(m )= |{\overrightarrow {b_2(m )C_{\min} ({Centre} )} } |- |{\overrightarrow {C_{\min} (m )C_{\min} ({Centre} )} } |$$
where $C_{\min} ({Centre} )$ is the vertex of the central FOV corresponding to the same p as $C_{\min} (m )$, and $b_2(m )$ is the intersection of the line through $C_{\min} ({Centre} )$ and $C_{\min} (m )$ with the boundary of the pupil.
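The sketch below transcribes the diagonal and on-axis cases of these rules (Eqs. (7)-(9) and (11)-(13)); the oblique case (Eqs. (10) and (14)) requires intersecting a line with the pupil boundary and is omitted here. The function name and tuple-based interface are our own.

```python
import math

def shift_rule(center, X, D):
    """Direction and distance to shift the FOV centered at C(m) = (x, y).

    Returns (unit direction, distance N), or None when the image is already
    fully brightfield or fully darkfield and no move is needed.
    """
    x, y = center
    r = math.hypot(x, y)                       # |C(m) C(Centre)|
    if r == 0:
        return None                            # central LED: always brightfield
    # Distances from the pupil center to the four FOV vertices, Eq. (6):
    d = [math.hypot(x + sx * X / 2, y + sy * X / 2)
         for sx in (-1, 1) for sy in (-1, 1)]
    if r <= D / 2:                             # brightfield case
        if max(d) <= D / 2:
            return None
        sgp = (-x / r, -y / r)                 # toward the centre, Eq. (7)
        if abs(x) == abs(y):                   # diagonal, Eq. (8)
            N = max(d) - D / 2
        elif x == 0 or y == 0:                 # on-axis, Eq. (9)
            N = (math.sqrt(max(d) ** 2 - (X / 2) ** 2)
                 - math.sqrt((D / 2) ** 2 - (X / 2) ** 2))
        else:
            raise NotImplementedError("oblique case, Eq. (10)")
    else:                                      # darkfield case
        if min(d) >= D / 2:
            return None
        sgp = (x / r, y / r)                   # away from the centre, Eq. (11)
        if abs(x) == abs(y):                   # diagonal, Eq. (12)
            N = D / 2 - min(d)
        elif x == 0 or y == 0:                 # on-axis, Eq. (13)
            N = D / 2 - (min(d) - X / 2)
        else:
            raise NotImplementedError("oblique case, Eq. (14)")
    return sgp, N
```

For example, the brightfield diagonal neighbor at $C(m) = (L_{TOV}, L_{TOV})$ with X = 1 mm and D = 5.5 mm yields N ≈ 0.17 mm, matching the m = 7, 9, 17, 19 row of Eq. (18).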

3. Experiments

3.1 Experimental setup

Our experimental setup contains a programmable $13 \times 13$ LED array, which provides incident illumination wavelengths of 628.63 nm (red), 518.08 nm (green) and 463.46 nm (blue). The distance between adjacent LED units is 4 mm. A 4×/0.1 NA objective (Nikon, FN: 22 mm) and an image sensor (Hamamatsu Co. Ltd., Japan, FlashV3_4.0, 2048 × 2048, 6.5 μm pitch) are utilized for data acquisition.

3.2 Exploring the maximum FOV without vignetting

To investigate the impact of vignetting on the reconstructed results of FPM, we analyze vignetting following the method described in [31]. The simulation results in Fig. 4(a) demonstrate how different cases of vignetting affect FPM reconstruction. Because the vignetting coefficient varies among the low-resolution images from different angles, only representative samples are presented in the figure. Even a small amount of vignetting can significantly degrade the accuracy of the FPM reconstruction results.


Fig. 4. The tolerance of FPM system towards vignetting. (a) The impact of vignetting on experimental outcomes. The curves in the figure depict the Root Mean Squared Error (RMSE) between the FPM results and ground truth for simulating various levels of vignetting; (a1) Ground truth; (a2)-(a7) Reconstructed FPM results under varying levels of vignetting; (b1) FPM reconstruction using the theoretical maximum FOV size at h = 58 mm; (b2) FPM reconstruction with the experimentally obtained maximum FOV at h = 58 mm; (c1) FPM reconstruction using the theoretical maximum FOV size at h = 70 mm; (c2) FPM reconstruction with the experimentally obtained maximum FOV at h = 70 mm; (d1) FPM reconstruction using the theoretical maximum FOV size at h = 76 mm; (d2) FPM reconstruction with the experimentally obtained maximum FOV at h = 76 mm; (e) Plot of the maximum FOV that can be reconstructed in one time as the illumination height varies.


The surface of the image sensor is square. According to the constraints in Section 2.2, setting n to 3 in Eq. (2) reduces the relationship between X and L to:

$$\left\{ {\begin{array}{{c}} {X \le \frac{D}{{\sqrt 2 }} - 2L}\\ {X \le - D + 4L} \end{array}} \right..$$

Solving Eq. (15) gives the theoretical optimal values of $X_{TOV}$ and $L_{TOV}$:

$$\left\{ {\begin{array}{{c}} {X_{TOV} = \frac{{\sqrt 2 - 1}}{3}D = 0.759\,\textrm{mm}}\\ {L_{TOV} = \frac{{1 + \frac{{\sqrt 2 }}{2}}}{6}D = 1.565\,\textrm{mm}} \end{array}} \right.$$
Substituting into Eq. (4), we obtain $h_{TOV} = 70.174$ mm. The specific validation method is to acquire data for 37 samples at different heights. The ith sample is processed by fitting the vignetting boundary in the acquired 5 × 5 images to the pupil boundary and measuring the distance between adjacent images (noted as $L_i$) and the distance between the sample and the LED array (noted as $h_i$).
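Plugging the experimental parameters into the illuminator_optimum() sketch from Section 2.2 reproduces these numbers (the small difference in h comes from rounding of the intermediate values):

```python
X_tov, L_tov, h_tov = illuminator_optimum(FN=22, mag=4, NA=0.1, d_led=4, n=3)
print(f"X_TOV = {X_tov:.3f} mm")   # 0.759 mm
print(f"L_TOV = {L_tov:.3f} mm")   # 1.565 mm
print(f"h_TOV = {h_tov:.2f} mm")   # ~70.18 mm, vs. the 70.174 mm quoted above
```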

In the experiment, the vignetting boundary of the LR images is not well-defined. We therefore hypothesize that the experimental system exhibits a certain degree of tolerance towards vignetting: even when partial vignetting occurs in the LR images, FPM can still reconstruct HR images without any visible signs of vignetting. This “tolerance” can be attributed to three potential factors. 1) The subjectivity of human visual judgment means that minuscule artifacts in the experimental results may go unnoticed. To minimize this error, we employed a USAF resolution target as the sample; since the USAF target contains only horizontal and vertical lines, while the artifacts caused by vignetting manifest as rippled diagonal stripes, the artifact can be identified reliably in the experiment. 2) The actual vignetting coefficient of an image differs from the one defined in this paper. The vignetting coefficient defined in Eq. (1) is an ideal parameter; in practical optical systems the actual value may deviate from the theoretical one due to factors such as scattering, uneven image energy distribution, and LED units that are either too bright or too dark. 3) The robustness of the FPM algorithm allows it to tolerate a certain amount of vignetting.

Experiments were conducted to verify this conjecture and explore the boundary of the system’s tolerable vignetting. The range of h over which 3 × 3 brightfield images can be captured is roughly 57-79 mm. We therefore acquired three sets of 5 × 5 raw images at 2 mm intervals within the height range of 56-80 mm, 39 sets of raw images in total. The three sets acquired at each height were used for FPM reconstruction with varying FOV sizes, and the largest FOV at each height whose reconstruction is free from vignetting artifacts was recorded. The resulting experimental curves are presented in Fig. 4(e). The maximum value of X is achieved at a height of approximately 70 mm, consistent with the theoretical analysis. The LR images obtained at 70 mm and the experimental results are presented in Figs. 4(c1)-(c2), with an experimental value of 0.984375 mm for X against a theoretical value of 0.759 mm, validating the system’s ability to tolerate vignetting. Figures 4(b1)-(b2) compare the theoretical maximum FOV with the experimentally obtained maximum FOV at h = 58 mm. Analysis of the 58-68 mm data indicates that an artifact-free high-throughput reconstructed image can be achieved when the vignetting coefficient ranges from 0.9 to 1.0. Figures 4(d1)-(d2) present the same comparison at h = 76 mm. The height range of 70-74 mm yields vignetting coefficients between 0 and 0.1 and reconstructed images without significant wrinkles. It should be noted that when the height exceeds 74 mm, an image with a theoretically calculated vignetting coefficient of 1 does not appear strictly dark in the experiment; instead, it shows a region of slight brightness. This explains why LR images that are free of vignetting according to theoretical calculations may still exhibit artifacts in FPM reconstruction experiments. The cause of this phenomenon could be light scattering in the system or energy spreading due to the high brightness of the LEDs. Therefore, under experimental conditions, the maximum FOV is smaller than that obtained from theoretical analysis.

Under these experimental conditions, we conclude that the maximum side length of the FOV reconstructed at one time is approximately 1 mm when the distance between the LED array and the sample is fixed at 70.174 mm. If the FOV is enlarged further, vignetting appears in the acquired LR images, leading to severe artifacts in the FPM reconstructed image.

3.3 Design and optimization of the LED distribution

By adjusting the positions of designated LED units, L can be altered, thereby modifying the relative position between the FOV and the pupil on the object plane. The position of a specific LED unit can be adjusted by manipulating the two-dimensional (x-y axis) displacement platform on which the LEDs are affixed, as shown in Fig. 5(b). In addition, Eq. (4) indicates that adjusting the height h of the LED array (z axis) to vary L also effectively eliminates vignetting in the acquired LR images, as shown in Fig. 5(a). The shape and homogeneity of the LED array are thereby altered, which “pulls” the non-conforming images back into the LSI model, as illustrated in Fig. 5(c). The two schemes of LED movement can be compared in the following aspects:

  • (1) The height adjustment scheme is independent for each LED, which means any issue with a single LED will not affect the accurate capture of other images. Conversely, errors caused by x-y axis translation will result in position errors of all LED units.
  • (2) As all LED units are symmetrically distributed relative to the central LED, a single height adjustment can simultaneously align multiple LED units to a vignetting-free position, whereas a single translation of x-y axis can relocate only one LED unit to the desired position.
  • (3) When there is a significant height disparity between LED units, excessive differences in light intensity may occur. The inconsistency in brightness of images acquired by the detector at the same exposure time may have an impact on the reconstruction results of FPM.


Fig. 5. Optimization of the LED array. (a) Adjusting the illumination height to eliminate vignetting; (b) x-y axis translation of LED units; (c) FOV shift on the object plane; (d) The number of LED units with corresponding object fields; (e1-e5) The motion pattern of FOV as FOV expands.


In summary, both schemes can be used in the experiment to adjust the positions of LED units. The height adjustment scheme is the primary option; the x-y axis translation scheme can be selected when the required height adjustment exceeds 10 mm or the required height falls below $h_{opt}$.

We have obtained $X_{TOV} = 0.759$ mm from the previous calculation. For further discussion, we consider FOV side lengths ranging from 0.759 mm to 3.88 mm (the square inscribed in the pupil has a side length of 3.88 mm). When X is set to 3.88 mm, 24 LED units in the middle of the LED array (excluding the central LED unit) do not conform to the LSI model. Therefore, regardless of FOV changes, vignetting can be eliminated by adjusting only the positions of the central 5 × 5 LED units (excluding the central unit). To facilitate the analysis, the central 5 × 5 LED units are labeled and axes are established in the object plane as illustrated in Fig. 5(d). Using the calculated $L_{TOV} = 1.565\,\textrm{mm}$, the coordinates of the FOV center in the object plane when each LED unit is illuminated can be derived. The coordinate values are presented in Table 2; for example, $C({13} )= ({0,0} )$ and $C(3 )= ({0,3.13} )$.


Table 2. Values of the central coordinates of the FOV

The motion of this model is determined by the FOV size and the corresponding LED configuration for each object plane. According to Section 2.3, the unit directional vector of movement and distance of movement are respectively calculated as:

$$sgp(m )= \left\{ {\begin{array}{{cc}} {\frac{{\overrightarrow {C(m )C({13} )} }}{{|{\overrightarrow {C(m )C({13} )} } |}}}&{m = 7,8,9,12,14,17,18,19}\\ {\begin{array}{{c}} {}\\ {\frac{{\overrightarrow {C({13} )C(m )} }}{{|{\overrightarrow {C({13} )C(m )} } |}}} \end{array}}&{m = 1,2,3,4,5,6,10,11,15,16,20,21,22,23,24,25} \end{array}} \right.$$
$$\scalebox{0.9}{$\begin{array}{l} N(m )= \\ \left\{ {\begin{array}{{cc}} {\frac{D}{2} - 2L_{TOV} + \frac{X}{2}}&\begin{array}{l} 0.759 \le X \le 3.88\\ \& m = 3,11,15,23 \end{array}\\ {\sqrt 2 \left( {L_{TOV} + \frac{X}{2}} \right) - \frac{D}{2}}&\begin{array}{l} 0.759 \le X \le 3.88\\ \& m = 7,9,17,19 \end{array}\\ {L_{TOV} + \frac{X}{2} - \sqrt {{{\left( {\frac{D}{2}} \right)}^2} - {{\left( {\frac{X}{2}} \right)}^2}} }&\begin{array}{l} 1.606 \le X \le 3.88\\ \& m = 8,12,14,18 \end{array}\\ {\frac{D}{2} - 2\sqrt 2 L_{TOV} + \frac{{X\sqrt 2 }}{2}}&\begin{array}{l} 2.371 \le X \le 3.88\\ \& m = 1,5,21,25 \end{array}\\ {\left( {\frac{X}{4}\cos ({\arctan 2} )+ \sqrt {{{\left( {\frac{D}{2}} \right)}^2} - {{\left( {\frac{X}{4}\sin ({\arctan 2} )} \right)}^2}} } \right) - \sqrt 5 \left( {L_{TOV} - \frac{X}{4}} \right)}&\begin{array}{l} 1.135 \le X \le 3.88\\ \& m = 2,4,6,10,16,20,22,24 \end{array}\\ 0&{else} \end{array}} \right. \end{array}$}.$$

The movement of the FOV on the object plane is achieved by adjusting the distance between the LED array and the sample. The adjusted distance can be calculated as:

$$h_{m} = \sqrt {{{\left( {\frac{{a_{tho} \cdot d_{LED}}}{{{{d_{m}} / {({{{|{\overrightarrow {C(m )C({13} )} } |} / {L_{TOV}}}} )}}}}} \right)}^2} - d_{LED}{^2}} $$
$$d_{m} = \left\{ {\begin{array}{{cc}} {\frac{{D + X}}{2}}&{m = 3,11,15,23}\\ {\frac{{D - \sqrt 2 X}}{2}}&{m = 7,9,17,19}\\ {\sqrt {{{\left( {\frac{D}{2}} \right)}^2} - {{\left( {\frac{X}{2}} \right)}^2}} - \frac{X}{2}}&{m = 8,12,14,18}\\ {\frac{{D + \sqrt 2 X}}{2}}&{m = 1,5,21,25}\\ {\sqrt 5 L_{TOV} + N(m )}&{m = 2,4,6,10,16,20,22,24} \end{array}} \right..$$

Figures 5 (e1-e5) illustrate the motion pattern of FOV corresponding to the illumination of each LED unit as the FOV expands.
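Under the Eq. (4) geometry, the height $h_{m}$ of Eq. (19) can be recovered by requiring that the FOV center of LED m land at the target distance $d_{m}$ of Eq. (20). The sketch below is our own derivation of that requirement rather than the paper's exact notation (in particular, the parameter $a_{tho}$ of Eq. (19) is absorbed into D/2NA here), with the experimental defaults D = 5.5 mm, NA = 0.1 and $d_{LED}$ = 4 mm.

```python
import math

def height_for_led(k, d_m, D=5.5, NA=0.1, d_led=4.0):
    """Height h_m (mm) placing the FOV center of an off-axis LED at distance d_m.

    k is the LED's lattice offset from the central unit in units of d_led
    (e.g. 1 for m = 8, sqrt(2) for m = 7, 2 for m = 3, sqrt(5) for m = 2).
    The FOV shift of that LED is D*sin(theta)/(2*NA), with
    sin(theta) = k*d_led / sqrt((k*d_led)^2 + h^2); solving for h gives:
    """
    s = D * k * d_led / (2 * NA * d_m)     # s = sqrt((k*d_led)^2 + h_m^2)
    return math.sqrt(s**2 - (k * d_led) ** 2)

# Example: LEDs m = 3, 11, 15, 23 (offset k = 2) at X = 3.8 mm must push their
# FOV out to d_m = (D + X)/2, the first row of Eq. (20):
X = 3.8
print(height_for_led(2, (5.5 + X) / 2))    # ~46.6 mm, below h_TOV = 70.174 mm
```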

We validated our solution over the range of 1-3.8 mm and conducted three groups of variable-height experiments at 2 mm intervals to determine the optimal FPM reconstruction results. The theoretical maximum side length of the FOV is 3.88 mm, and the practically achievable value was determined through multiple experiments. In terms of intensity reconstruction, the theoretical resolution corresponds to group 9, element 5 of the USAF target. Figures 6(a1)-(a2) show the reconstruction results when h = 70.174 mm and the FOV side length is 0.1875 mm. Due to the influence of the experimental environment and systematic noise, the maximum resolution achieved is around group 9, element 2 (0.87 μm), so we use this value as the benchmark for judging the reconstruction performance of our method; that is, the reconstruction is deemed successful when the reconstructed image achieves this intensity or phase resolution. Figures 6(b1)-(d1) and 6(b2)-(d2) show the reconstructed intensity and phase of our method when the side length of the FOV is adjusted to 3.4 mm, 3.6 mm and 3.8 mm, respectively. The camera used in our experiment covers a maximum FOV side length of 3.32 mm; when the FOV exceeded this threshold, the detector was panned several times and the LR images were synthesized. Figures 6(a3)-(d3) show magnified plots of a selected region of the reconstructed images, providing a visual representation of the achieved resolution. The experimental results demonstrate that the information can be fully reconstructed with an FOV side length of 3.8 mm, indicating that our method achieves the maximum FOV, equivalent to the pupil, when noise effects are not considered.


Fig. 6. Experimental results of height adjustment scheme. (a1-a2) Reconstructed intensity and phase with FOV side length of 0.19 mm; (a3) Resolution curve corresponding to the red lines in Figs. (a1-a2); (b1-b2,c1-c2,d1-d2) Reconstructed intensity and phase with FOV side length of 3.4 mm, 3.6 mm and 3.8 mm, respectively; (b3) Resolution curves corresponding to the green lines in Figs. (b1-b2), the orange lines in Figs. (c1-c2) and the blue lines in Figs. (d1-d2), respectively.


4. Discussion

Adjusting the height of a standard LED array is equivalent to altering the position of the pupil relative to the FOV, which can also be achieved by designing an LED array with a specific surface structure. Generally, combining datasets acquired at different heights is a necessary step before the planar design of an LED array for a specific system is finalized, and it is easy to perform without introducing systematic errors. Mechanical errors must be considered in the development of a new LED array; the FPM solution can accommodate a certain degree of machining error owing to its inherent tolerance for vignetting. The system calibration method for FPM (SC-FPM) [33] can be combined with our method to reconstruct a full-FOV image with higher resolution even when an LED unit is not placed at the ideal position or when the spectrum is shifted or rotated. The correlation between illumination height and the surface shape of the LED array is depicted in Fig. 7(a). The distance between each LED unit and the central LED unit in the surface design can be formulated as:

$$G_{m} = \frac{{\Delta L_{m} \cdot h_{opt}}}{{h_{m}}}$$
where $\Delta L_{m}$ is the in-plane distance between the mth LED unit and the central LED unit in the LED array used in the experiment, and ${h_m}$ is the variable distance between the LED array and the sample when the mth LED unit is illuminated. Figure 7(a) displays the distinctive LED surface pattern corresponding to the height adjustment scheme for 3.8 mm FOV reconstruction in the experiment, with each LED unit's movement parameters marked in Fig. 7(a1). Once the experimental parameters for height adjustment and x-y axis movement have been determined, they can serve as a recipe for developing new LED arrays that directly replace mechanical displacement. For the device conditions specified in the experiment, the surface shape beyond the central 5 × 5 LED units can be customized to meet different requirements, such as the matrix shape shown in Fig. 7(a) or the circular shape shown in Fig. 7(b). In addition to designing a specific LED surface pattern, it is also feasible to realize the illumination sequence by scanning a single LED unit, as shown in Fig. 7(c).
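Eq. (21) converts each per-LED height into an equivalent in-plane offset at a single design height, so the whole array can sit on one surface; below is a one-line transcription, with illustrative numbers of our own choosing:

```python
def surface_offset(delta_L_m, h_opt, h_m):
    """In-plane distance G_m from the central LED, Eq. (21): the offset that
    reproduces, at the fixed design height h_opt, the illumination angle the
    flat array produced at the per-LED height h_m."""
    return delta_L_m * h_opt / h_m

# An LED 8 mm from the center that required h_m ~ 46.6 mm is placed about
# 12 mm out when the whole array sits at h_opt = 70.174 mm:
print(surface_offset(8.0, 70.174, 46.6))
```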


Fig. 7. Design of the LED array surface structure. (a) The surface structure at X = 3.8 mm, where the red-colored LED units remain unchanged while the others have been relocated; (a1) The precise motion parameters for each LED unit; (a2) The numbering of LED units with different colors, corresponding to Fig. 5(d); (b) The design of a circular LED array; (c) Point scanning of LED units.


5. Conclusion

We established a mathematical model that uses the geometric analysis of the optical system to determine the relative position between the FOV and the pupil. The model also gives the maximum reconstructable FOV for an LED array with a fixed period. Furthermore, we presented a height adjustment scheme to optimize and design illuminators for single full-FOV FPM reconstruction. Under experimental conditions, it has been shown that successful reconstruction can still be achieved when the vignetting coefficient of the captured LR images lies in the range 0-0.1 or 0.9-1.0. Based on this model, the FOV can be expanded either by adjusting the distance between the light source and the sample or by moving the LED units in the array plane, thus completely eliminating vignetting effects from the raw data. Our method enables a single reconstruction of an FOV that matches the pupil, where optimal results are achieved. The side length of the FOV that can be successfully reconstructed at one time is 3.8 mm with our method, a significant increase over the 1.0 mm possible without it, reaching the maximum FOV that the objective can afford. The techniques proposed in this paper for overcoming vignetting can be applied equally to other computational microscopy methods. Additionally, acquiring large-FOV images in a single pass can simplify the imaging process in the domain of 3D imaging by eliminating imaging artifacts and systematic errors. Moreover, the irregular design of the LED array surface shape ensures that FPM reconstruction remains unaffected by raster noise [34], which is usually caused by the uniform distribution of the light source.

This work still requires intensive investigation and improvement in certain crucial aspects. In designing LED arrays based on the optimal surface shape, it may be beneficial to incorporate additional brightfield LED units into the existing design to increase the overlap rate. Additionally, admitting images with vignetting coefficients in the ranges 0-0.1 and 0.9-1.0 could further improve imaging efficiency and should be considered as a future avenue for research. In terms of reconstruction efficiency, our current work only addresses the GPU dependency and block processing; in the next phase we aim to further enhance imaging efficiency by optimizing the FPM algorithm.

Funding

National Natural Science Foundation of China (12104500); Key Research and Development Projects of Shaanxi Province (2023-YBSF-263).

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Data availability

Data will be made available by the corresponding author on reasonable request.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013).

2. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 23(26), 33027 (2015).

3. J. W. Goodman, Introduction to Fourier Optics, 4th ed. (Macmillan Learning, New York, 2017).

4. A. Pan, A. Wang, J. Zheng, Y. Gao, C. Ma, and B. Yao, “Edge effect removal in Fourier ptychographic microscopy via periodic plus smooth image decomposition,” Opt. Lasers Eng. 162, 107408 (2023).

5. A. Wang, Z. Zhang, S. Wang, A. Pan, C. Ma, and B. Yao, “Fourier ptychographic microscopy via alternating direction method of multipliers,” Cells 11(9), 1512 (2022).

6. J. Sun, C. Zuo, L. Zhang, and Q. Chen, “Resolution-enhanced Fourier ptychographic microscopy based on high-numerical-aperture illuminations,” Sci. Rep. 7(1), 1187 (2017).

7. A. Pan, Y. Zhang, K. Wen, M. Zhou, J. Min, M. Lei, and B. Yao, “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26(18), 23119–23131 (2018).

8. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015).

9. J. Kim, B. M. Henley, C. H. Kim, H. A. Lester, and C. Yang, “Incubator embedded cell culture imaging system (EmSight) based on Fourier ptychographic microscopy,” Biomed. Opt. Express 7(8), 3097–3110 (2016).

10. A. Pan, C. Zuo, and B. Yao, “High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine,” Rep. Prog. Phys. 83(9), 096101 (2020).

11. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23(9), 11394 (2015).

12. Z. F. Phillips, M. Chen, and L. Waller, “Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC),” PLoS ONE 12(2), e0171228 (2017).

13. J. Sun, Q. Chen, J. Zhang, Y. Fan, and C. Zuo, “Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography,” Opt. Lett. 43(14), 3365 (2018).

14. S. Chowdhury, M. Chen, R. Eckert, D. Ren, F. Wu, N. A. Repina, and L. Waller, “High-resolution 3D refractive index microscopy of multiple-scattering samples from intensity images,” Optica 6(9), 1211 (2019).

15. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827 (2016).

16. R. Ling, W. Tahir, H.-Y. Lin, H. Lee, and L. Tian, “High-throughput intensity diffraction tomography with a computational microscope,” Biomed. Opt. Express 9(5), 2130–2141 (2018).

17. J. Li, A. C. Matlock, Y. Li, Q. Chen, C. Zuo, and L. Tian, “High-speed in vitro intensity diffraction tomography,” Adv. Photon. 1(6), 1 (2019).

18. C. Zuo, J. Sun, J. Li, A. Asundi, and Q. Chen, “Wide-field high-resolution 3D microscopy with Fourier ptychographic diffraction tomography,” Opt. Lasers Eng. 128, 106003 (2020).

19. S. Zhou, J. Li, J. Sun, N. Zhou, H. Ullah, Z. Bai, Q. Chen, and C. Zuo, “Transport-of-intensity Fourier ptychographic diffraction tomography: defying the matched illumination condition,” Optica 9(12), 1362 (2022).

20. S. Dong, R. Shiradkar, P. Nanda, and G. Zheng, “Spectral multiplexing and coherent-state decomposition in Fourier ptychographic imaging,” Biomed. Opt. Express 5(6), 1757 (2014).

21. Y. Gao, J. Chen, A. Wang, A. Pan, C. Ma, and B. Yao, “High-throughput fast full-color digital pathology based on Fourier ptychographic microscopy via color transfer,” Sci. China Phys. Mech. Astron. 64(11), 114211 (2021).

22. J. Chen, A. Wang, A. Pan, G. Zheng, C. Ma, and B. Yao, “Rapid full-color Fourier ptychographic microscopy via spatially filtered color transfer,” Photon. Res. 10(10), 2410 (2022).

23. A. Williams, J. Chung, X. Ou, G. Zheng, S. Rawal, Z. Ao, R. Datar, C. Yang, and R. Cote, “Fourier ptychographic microscopy for filtration-based circulating tumor cell enumeration and analysis,” J. Biomed. Opt. 19(6), 066007 (2014).

24. R. Horstmeyer, X. Ou, G. Zheng, P. Willems, and C. Yang, “Digital pathology with Fourier ptychography,” Comput. Med. Imaging. Graph. 42, 38–43 (2015).

25. K. Guo, J. Liao, Z. Bian, X. Heng, and G. Zheng, “InstantScope: a low-cost whole slide imaging system with instant focal plane detection,” Biomed. Opt. Express 6(9), 3210 (2015).

26. A. Pan, K. Wen, and B. Yao, “Linear space-variant optical cryptosystem via Fourier ptychography,” Opt. Lett. 44(8), 2032 (2019).

27. M. Xiang, A. Pan, Y. Zhao, X. Fan, H. Zhao, C. Li, and B. Yao, “Coherent synthetic aperture imaging for visible remote sensing via reflective Fourier ptychography,” Opt. Lett. 46(1), 29–32 (2021).

28. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Commun. ACM 60(6), 84–90 (2017).

29. E. L. Padoin, M. Diener, P. O. A. Navaux, and J.-F. Mehaut, “Managing power demand and load imbalance to save energy on systems with heterogeneous CPU speeds,” in 2019 31st International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), Campo Grande, Brazil, 72–79 (2019).

30. R. Horstmeyer, R. Heintzmann, G. Popescu, L. Waller, and C. Yang, “Standardizing the resolution claims for coherent microscopy,” Nat. Photonics 10(2), 68–71 (2016).

31. A. Pan, C. Zuo, Y. Xie, M. Lei, and B. Yao, “Vignetting effect in Fourier ptychographic microscopy,” Opt. Lasers Eng. 120, 40–48 (2019).

32. J. Hagemann and T. Salditt, “Coherence-resolution relationship in holographic and coherent diffractive imaging,” Opt. Express 26(1), 242–253 (2018).

33. A. Pan, Y. Zhang, T. Zhao, Z. Wang, D. Dan, M. Lei, and B. Yao, “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt. 22(9), 1 (2017).

34. K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23(5), 6171 (2015).
