
MEMS-based two-photon microscopy with Lissajous scanning and image reconstruction under a feed-forward control strategy


Abstract

Two-photon microscopy (TPM) based on two-dimensional micro-electro-mechanical system (MEMS) mirrors shows promising applications in biomedicine and the life sciences. To improve the imaging quality and real-time performance of TPM, this paper proposes Lissajous scanning control and image reconstruction under a feed-forward control strategy, a dual-parameter alternating drive control algorithm and segmented phase synchronization mechanism, and pipe-lined fusion-mean filtering and median filtering to suppress image noise. A 10 fps frame rate (512 × 512 pixels), a 140 µm × 140 µm field of view, and a 0.62 µm lateral resolution were achieved. The imaging capability of MEMS-based Lissajous scanning TPM was verified by ex vivo and in vivo biological tissue imaging.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Two-photon microscopy (TPM) [1–3] is an emerging technique that integrates the principles of laser scanning microscopy and two-photon fluorescence excitation. Demonstrating excellent characteristics such as label-free capability and high spatiotemporal resolution, TPM holds considerable promise for advancing cancer diagnosis [4,5]. Micro-electro-mechanical system (MEMS) mirrors, commonly used in portable and miniature microscopes, function as micro-reflectors engineered to manipulate beam deflection, featuring rapid response, compact size, and wide scanning angles [6,7]. MEMS mirrors are categorized into four main types: electrostatic, electromagnetic, electrothermal, and piezoelectric [8,9]. Among these, electrostatic MEMS mirrors [10,11] are the earliest and most mature, offering high scanning frequencies and compact dimensions, which make them suitable for integrated endomicroscopic probes. MEMS mirror scanning modes include raster, spiral, and Lissajous modes [12,13]. In the Lissajous scanning mode [14–16], both axes are actuated by sinusoidal signals, offering excellent spectral characteristics devoid of harmonics that could induce mechanical resonance. Consequently, Lissajous scanning ensures robust mechanical stability [17]. Furthermore, this scanning mode is characterized by relatively high uniformity, minimal photodamage, and flexibility of frame rate selection [18,19].

Portable and miniature TPM offers a more compact and flexible integration scheme. The incorporation of closed-loop control algorithms, including proportional-integral-derivative (PID) control and sliding mode control [20,21], necessitates optoelectronic sensors for optical trajectory detection, forming a feedback loop. These algorithms significantly increase the integration complexity and cost. When open-loop control algorithms [22–24] are selected, error compensation terms [25] must be derived from information about system errors and parameter changes arising from circuit characteristics. These terms rectify the theoretically assigned parameters and eliminate the impact of system errors. This processing strategy embodies a feed-forward control algorithm [26–29], which is an advanced correction method suitable for compensating known and measurable error factors and is characterized by straightforward control and ease of implementation. Previous work [13,16,25] has addressed parameter errors, such as phase and magnitude, along with compensatory methods, but has not introduced a feed-forward control algorithm.

Image reconstruction is the process of recovering an original image from existing image information, and reconstruction methods can be broadly categorized as model-driven or data-driven. Model-driven reconstruction mainly includes analytical and iterative methods [30–32], whereas data-driven reconstruction relies largely on deep learning, such as image reconstruction using convolutional neural networks [33]. Unlike model-driven methods, data-driven reconstruction based on deep learning entails algorithm design and substantial quantities of image data for neural network training and validation, which poses challenges for acquiring high-quality image data. Previous work [30] introduced a model-based iterative image reconstruction algorithm employing maximum a posteriori estimation with a generalized Gaussian Markov random field prior model. However, this approach is computationally intensive and time-consuming. In contrast, the analytical reconstruction algorithm within the model-driven framework is solved by mathematical analysis. By utilizing the forward mathematical model for the inverse solution without iteration, requiring only a single calculation, image reconstruction can be realized efficiently, offering the advantages of simplicity and rapid computation [34]. Although the analytical reconstruction algorithm has been applied in previous work [31,32], parameter error correction has not been discussed in detail.

This paper first introduces the theoretical principle and characteristics of the Lissajous trajectory. Secondly, a MEMS-based Lissajous scanning TPM platform is constructed. Then, the principles of MEMS mirror scanning control and image analytical reconstruction based on the feed-forward control strategy are proposed, divided into control parameter selection, MEMS mirror alternating drive control, segmented phase synchronization control, and an image analytical reconstruction model (IARM) with an image filtering algorithm, each elaborated in terms of rationale, principles, process, and effects. Consequently, Lissajous scanning control and image analytical reconstruction are achieved, with a scanning fill density of 99.87% for a 512 × 512 pixel image. Finally, the performance of the TPM is experimentally calibrated, including a frame rate of 10 fps, a field of view (FOV) of 140 µm × 140 µm, and a lateral resolution of 0.62 µm. The imaging capability of the TPM is also verified on ex vivo and in vivo biological tissues.

2. Method

2.1 Lissajous principle and characteristics

A Lissajous trajectory, referred to as the combined trajectory followed by a mass undergoing simple harmonic motion along the X- and Y-axes [35], is described by the following equation:

$$\left\{ \begin{array}{l} x(t) = {A_x}\textrm{sin(2}\pi {f_x}t + {\varphi_x})\\ y(t) = {A_y}\textrm{sin(2}\pi {f_y}t + {\varphi_y}) \end{array} \right., $$
where x(t) and y(t) are the X- and Y-axis components of the trajectory at time t, respectively; Ax and Ay are the amplitudes of the sinusoidal movements; fx and fy are the frequencies with respect to the X- and Y-axes, respectively; and φx and φy are the biaxial initial phases. Ax and Ay define the imaging FOV by limiting the motion of the ensemble trajectory to the rectangular plane bounded by the four side lines x = ±Ax and y = ±Ay [35]. To obtain a closed and stable Lissajous trajectory, the control signal frequencies fx and fy are selected as integers here. Their greatest common divisor is gcd(fx, fy), in units of Hz, and mx and my are mutually prime and dimensionless:
$$\frac{{{f_x}}}{{{f_y}}} = \frac{{{m_x} \cdot \gcd ({f_x},{f_y})}}{{{m_y} \cdot \gcd ({f_x},{f_y})}} = \frac{{{m_x}}}{{{m_y}}}. $$

When the parameters are determined, the ensemble trajectory pattern exhibits a periodic variation with period $\frac{1}{{\gcd ({f_x},{f_y})}}$ [36]. In this paper, gcd(fx, fy) is used as the system imaging frame rate (FR), and Eq. (3) holds. Thus, the larger gcd(fx, fy) is, the higher FR is. The frequency ratio parameters (mx and my) characterize the numbers of intersections of the trajectory with the sidelines (x = ±Ax and y = ±Ay); the higher the number of intersections, the more complex the trajectory pattern [35]. In this way, mx and my reflect the trajectory scanning density once φx and φy are determined. The higher the value of mx + my, the higher the scanning density [37]. However, according to Eq. (2), mx + my and gcd(fx, fy) are mutually constraining, i.e., scanning density and FR trade off against each other, so fx and fy must be chosen appropriately to balance scanning density and FR [37,38]:

$$FR = \gcd ({f_x},{f_y}). $$
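To make the frequency-selection rule concrete, the following minimal Python sketch (not from the paper; the helper name is illustrative) computes FR and the co-prime ratio parameters mx and my from a pair of integer drive frequencies via Eqs. (2) and (3).

```python
from math import gcd

def lissajous_frame_params(fx: int, fy: int):
    """Frame rate FR = gcd(fx, fy) (Eq. (3)) and co-prime ratio mx:my (Eq. (2))."""
    fr = gcd(fx, fy)
    mx, my = fx // fr, fy // fr
    return fr, mx, my

# For the drive frequencies used later in the paper (fx = 2330 Hz, fy = 2390 Hz):
print(lissajous_frame_params(2330, 2390))   # -> (10, 233, 239), i.e., FR = 10 Hz
```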

Eliminating the time variable t from Eq. (1) transforms the trajectory equation into Eq. (4):

$$\sin ({m_x}{\varphi _y} - {m_y}{\varphi _x}) = \sin \left( {{m_x}\arcsin \frac{{y(t)}}{{{A_y}}} \pm {m_y}\arcsin \frac{{x(t)}}{{{A_x}}}} \right). $$

According to Eq. (4), when Ax and Ay are determined, the Lissajous trajectory is uniquely determined by mx, my, and mxφy − myφx [39]. When Ax, Ay, mx, and my are determined, the Lissajous trajectory varies cyclically as φx and φy are varied from 0° to 360° [40]. Based on Refs. [39,40], when φx takes a constant value, the period of the trajectory varying with φy is $\frac{{2\pi }}{{{m_x}}}$, and when φy takes a constant value, the period of the trajectory varying with φx is $\frac{{2\pi }}{{{m_y}}}$.

2.2 Platform setup

The TPM validation platform based on a two-dimensional (2D) MEMS mirror was built as depicted in Fig. 1. The laser source is an 80 MHz femtosecond fiber laser operating at 1,560 nm. The laser beam passes through the delivery fiber (a solid polarization-maintaining fiber) and the frequency-doubling module in the probe, which together ensure the desired pulse width through fine adjustment of the length of the dispersion compensation fiber. This entire assembly serves as an excitation source operating at 780 nm with a 90 fs pulse width and a 135 mW maximum power. The laser beam is reflected by a plane mirror (M) before impinging upon a 2D electrostatic MEMS mirror (A3I12.2, Mirrorcle Technologies) with a diameter of 1.2 mm. The MEMS mirror deflects the beam, which is focused by the scanning lens (L1), is incident on the dichroic mirror (DM), is reflected by the DM, and is focused by a home-made objective (NA = 0.9) onto the sample. The focused pulse on the sample has a pulse width of about 180 fs and a maximum average optical power of about 55 mW. The fluorescence (420–580 nm) excited by two-photon excitation is collected by the objective and focused by an aspheric lens (L2) into a soft fiber bundle (SFB; NA = 0.54) for flexible transmission. The collected light signal is then collimated, filtered, and condensed (L3) into the photomultiplier tube (PMT; H10770A-40, Hamamatsu), which converts the optical signal into a current signal. Finally, through transimpedance and gain amplifier circuits with 60 MHz bandwidth, the current signal is converted into a voltage signal ranging from –1 V to +1 V.

Fig. 1. Schematic of TPM system based on a 2D MEMS mirror. (M, plane mirror; DM, dichroic mirror; L1, scanning lens; L2, aspheric lens; L3, condensed lens; SFB, soft fiber bundle; PMT, photomultiplier tube; PC, personal computer). The femtosecond excitation and fluorescence collection paths are depicted in red and green, respectively.

The control module is a custom-designed board, primarily comprising a data acquisition (DAQ) circuit, a MEMS driver circuit, and an Ethernet communication circuit. The DAQ circuit (12 MS/s sampling rate, 16-bit sampling precision, ±1 V scale range) is employed to capture the fluorescent signal output from the PMT module. The MEMS driver circuit is composed of several sub-circuits, including a digital-to-analog converter circuit (1 MHz conversion rate, 16-bit conversion precision, ±10 V scale range), a bias and differential conditioning circuit (0–2.5 V), a low-pass filter circuit (20 kHz cutoff frequency), and a high-voltage amplifier circuit (66× gain). This circuit generates the high-voltage differential sinusoidal drive signals for the X- and Y-axes of the 2D MEMS mirror. The Ethernet communication circuit (User Datagram Protocol (UDP), 1 Gbps transfer rate) realizes data interaction between the control module and the PC. The PC (AMD Ryzen 5 5600G CPU, 3.90 GHz clock, 32 GB RAM) is used to implement Lissajous scan control and image reconstruction.

3. Image reconstruction

The MEMS-based TPM system adopts a feed-forward control algorithm to realize Lissajous scanning control and image analytical reconstruction, as depicted in Fig. 2. It mainly consists of five modules: parameter selection, MEMS mirror dual-parameter cross-drive control, phase synchronization, IARM, and image filtering.

Fig. 2. Schematic diagram of MEMS mirror Lissajous scanning control and image analytical reconstruction based on feed-forward control algorithm (FR, frame rate; FOV, field of view; IARM, image analytical reconstruction model; DAQ, data acquisition). The green area generates the theoretically given parameters that drive the MEMS mirror. The yellow area produces the error compensation terms for the feed-forward control parameters. The gray area implements the cross-cutting synchronized Lissajous driving for the MEMS mirror and synchronized acquisition, caching, framing, and transmission of image data. The red area uses a PC to complete the image analytical reconstruction process, which generates the IARM and coordinate lookup table, determines the pixel gray, conducts image filtering and denoising, and finally forms a 2D image.

Firstly, as indicated by the green area in Fig. 2, two groups of theoretically given parameters are generated according to the requirements of performance metrics such as FOV and FR. Secondly, as depicted by the yellow area in Fig. 2, quantitative information, including the inherent system errors, parameter deviations caused by circuit characteristics, and calibration errors, is obtained through preliminary experiments. The compensation values for phase, amplitude, frequency, and other parameters are then calculated to generate the parameter compensation terms, or error compensation terms, required for feed-forward control. These terms are subsequently combined with the theoretically given parameters, forming the two groups of actual control parameters for MEMS mirror deflection scanning and the main parameters for the IARM. Thirdly, as illustrated by the gray area in Fig. 2, two groups of bi-axial sinusoidal signals are segmented and phase synchronized to drive the MEMS mirror, enabling non-repetitive Lissajous scanning. Simultaneously, this module controls the DAQ according to the phase synchronization parameters to obtain the raw time-series image data, which is uploaded to the PC after DDR3 caching and Ethernet framing. Finally, as depicted by the red area in Fig. 2, the PC undertakes the image analytical reconstruction process. The MEMS bi-axial drive signals vary sinusoidally, producing Lissajous trajectories with a complex nonlinear correspondence between temporal and spatial locations. In this module, the error compensation terms for the scanning trajectory parameters are obtained in advance under the feed-forward control strategy. These terms are then combined with the theoretically given parameters and the feedback phase values following phase synchronization, forming the IARM. The IARM generates a coordinate lookup table to process the time-series image data into pixel gray values, which are mapped to the pixel coordinate positions to produce the primary 2D image. The final reconstructed image is obtained after image filtering and denoising.

The five functional modules depicted in Fig. 2 are the core of TPM. In the following, the realization process of each module will be illustrated in terms of basis, principles, methodologies, and effectiveness.

3.1 Parameter selection

In TPM, the Gaussian beam exhibits a full width at half maximum [41] of 0.6 µm, which defines the optical resolution Ropt as 0.6 µm. According to the Nyquist–Shannon sampling theorem, the image resolution Rimg should be no greater than $\frac{{{R_{opt}}}}{2}$. In this system, the image resolution is set to half of the optical resolution, i.e., ${R_{img}} = \frac{{{R_{opt}}}}{2} = $ 0.3 µm. The maximum FOV is set to 140 µm × 140 µm. Accordingly, the minimum number of pixels required per row and column of an image is $\frac{{FOV}}{{{R_{img}}}} = \frac{{140\ \mathrm{\mu m}}}{{0.3\ \mathrm{\mu m}}} \approx 467$. Thus, an image size of 512 × 512 pixels was chosen, i.e., P = Pl = Pc = 512, which satisfies the FOV and Rimg requirements.
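The pixel-count reasoning above can be reproduced in a few lines of Python; this is a sketch of the arithmetic only, and the rounding of 467 up to the power of two 512 is our reading of the authors' choice.

```python
import math

R_opt = 0.6           # optical resolution, in micrometers
R_img = R_opt / 2     # Nyquist requirement: pixel resolution <= R_opt / 2
FOV = 140.0           # field of view per axis, in micrometers

P_min = math.ceil(FOV / R_img)           # minimum pixels per row/column -> 467
P = 1 << math.ceil(math.log2(P_min))     # next power of two -> 512
print(P_min, P, FOV / P)                 # 467 512 0.2734375 (um per pixel)
```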

According to Eq. (1), the scanning velocities Vx and Vy of the Lissajous trajectory along the X- and Y-axes, respectively, are as follows:

$$\left\{ \begin{array}{l} {V_x} = \dot{x}(t) = 2\pi {f_x}{A_x}\cos (2\pi {f_x}t + {\varphi_x})\\ {V_y} = \dot{y}(t) = 2\pi {f_y}{A_y}\cos (2\pi {f_y}t + {\varphi_y}) \end{array} \right.. $$

Let P = 2Ax = 2Ay = 512, and let the previous coordinate system be transformed into the pixel-plane coordinate system. Then, the scanning velocities $V_x^{\prime}$ and $V_y^{\prime}$ are as follows:

$$\left\{ \begin{array}{l} V_x^{\prime}(t) = \dot{x}(t) = \pi P{f_x}\cos (2\pi {f_x}t + {\varphi_x})\\ V_y^{\prime}(t) = \dot{y}(t) = \pi P{f_y}\cos (2\pi {f_y}t + {\varphi_y}) \end{array} \right.. $$

The combined trajectory velocity $V_{xy}^{\prime}(t)$ is derived as in Eq. (7), in units of pixels/s, and the maximum combined trajectory velocity $V_{\max }^{\prime}$ is given in Eq. (8):

$$V_{xy}^{\prime}(t) = \pi P\sqrt {f_x^2{{\cos }^2}(2\pi {f_x}t + {\varphi _x}) + f_y^2{{\cos }^2}(2\pi {f_y}t + {\varphi _y})} $$
$$V_{\max }^{\prime} = \pi P\sqrt {f_x^2 + f_y^2} . $$

Equation (7) establishes that, once the amplitude, frequency, and phase of the MEMS mirror drive signals are determined, $V_{xy}^{\prime}(t)$ is fixed and will not exceed $V_{\max }^{\prime}$. Let the sampling rate of the analog-to-digital circuit be fs, and require that its lower threshold fsL be no less than $V_{\max }^{\prime}$, i.e., ${f_{sL}} \ge V_{\max }^{\prime}$, which ensures that the analog-to-digital circuit can still capture the trajectory when the scanning velocity is at its fastest. For example, with fx = 2330 Hz, fy = 2390 Hz, and a 512 × 512 pixel image, $V_{\max }^{\prime} = \pi \times 512 \times \sqrt {{{2330}^2} + {{2390}^2}} \approx 5.37$ MPixels/s, so fsL ≥ 5.37 MS/s.
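As a quick numerical check of Eq. (8), the sketch below (hypothetical helper name) reproduces the 5.37 MPixels/s lower bound quoted above.

```python
import math

def max_pixel_velocity(fx_hz: float, fy_hz: float, pixels: int) -> float:
    """Eq. (8): V'_max = pi * P * sqrt(fx^2 + fy^2), in pixels per second."""
    return math.pi * pixels * math.hypot(fx_hz, fy_hz)

v_max = max_pixel_velocity(2330, 2390, 512)
print(f"V'_max = {v_max / 1e6:.2f} MPixels/s, so f_sL >= {v_max / 1e6:.2f} MS/s")
```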

For the upper threshold fsH of fs, the primary consideration is the pressure exerted on data transmission. The system uses a Gigabit Ethernet interface to transmit the raw image data, and UDP is used for transmission efficiency, although the protocol suffers from packet loss. It has been experimentally demonstrated that when the packet-to-packet transmission delay is increased to 100 µs at 20 MB/s, the packet loss rate remains at 1%–2%. If the transmission rate is set to 40 MB/s, the packet loss rate rises to about 20%; even if the packet-to-packet transmission delay is increased to 200 µs, the packet loss rate remains approximately 18%, with no significant improvement. It was determined experimentally that fsH ≤ 15 MS/s keeps the UDP packet loss rate below 0.5% with a 100 µs packet delay.

A high imaging frame rate is effective in reducing image motion artifacts or delayed trailing [37]. In this platform, FR is set to 10 Hz, i.e., gcd(fx, fy) = 10 Hz. The operating frequency of the MEMS mirror is limited to 2400 Hz, maintaining a certain distance from its resonance frequencies (X-axis: 2937 Hz; Y-axis: 2913 Hz). The other parameters are set as follows: a 512 × 512 pixel image, fs = 10 MS/s, a frequency range of 2300–2400 Hz, and a signal frequency error ≤ 6‰, so that six groups of frequency combinations can be found. For each frequency combination, φy is varied from 0° to 360° in 0.1° increments, and under each φy, φx cycles from 0° to $\frac{{360^\circ }}{{{m_y}}}$ in 0.1° steps. Matlab is used to identify the first group of φx1 and φy1 that maximizes the filling density. Then, the second group of phase parameters is constrained to φy2 = φy1, and Matlab searches for φx2 so that the fused filling density is highest under the two groups of theoretically given parameters (φx1, φy1) and (φx2, φy2). The filling densities under the single and double groups of parameters are denoted as SFD and DFD, respectively, and are listed in Table 1.
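The phase search described above was performed in Matlab; the sketch below is a Python re-implementation of the same idea under simplifying assumptions (1° steps instead of 0.1°, single-group SFD only; the second-stage search for φx2 with φy2 = φy1 follows the same pattern on the fused pixel set). Function names are illustrative.

```python
import numpy as np

def visited_pixels(fx, fy, phi_x_deg, phi_y_deg, P=512, fs=10e6, fr=10):
    """Set of pixels hit in one frame, using the Eq. (9)-style pixel mapping."""
    t = np.arange(0.0, 1.0 / fr, 1.0 / fs)
    x = np.floor(P / 2 * np.sin(2 * np.pi * fx * t + np.deg2rad(phi_x_deg)) + P / 2)
    y = np.floor(P / 2 * np.sin(2 * np.pi * fy * t + np.deg2rad(phi_y_deg)) + P / 2)
    x = np.clip(x, 0, P - 1).astype(np.int32)
    y = np.clip(y, 0, P - 1).astype(np.int32)
    return set(zip(x.tolist(), y.tolist()))

def search_best_phases(fx, fy, P=512, step_deg=1.0):
    """Grid search over (phi_y, phi_x) maximizing the single-group fill density."""
    my = fy // int(np.gcd(fx, fy))
    best = (0.0, 0.0, 0.0)                      # (SFD, phi_x1, phi_y1)
    for phi_y in np.arange(0.0, 360.0, step_deg):
        for phi_x in np.arange(0.0, 360.0 / my, step_deg):
            sfd = len(visited_pixels(fx, fy, phi_x, phi_y, P)) / P**2
            if sfd > best[0]:
                best = (sfd, phi_x, phi_y)
    return best

# Example (computationally heavy at full settings; reduce fs or P to experiment):
# print(search_best_phases(2330, 2390))
```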


Table 1. Phase Parameters and Lissajous Scan Densities for Different Frequency Combinations

By referencing Table 1, we selected the theoretical parameters with the highest DFD and adopted them as the actual parameters of the IARM. As depicted in Table 2, the two groups of actual control parameters for driving the MEMS mirror were obtained by combining the theoretical parameters with the error compensation terms under feed-forward control, where the actual amplitude parameters Aact_x and Aact_y were determined by the FOV, as described in the FOV calibration experiment in Section 4.1.


Table 2. Two Groups of Actual Control Parameters for Driving the MEMS Mirror

3.2 MEMS mirror dual-parameter cross-drive control

The effectiveness of scanning imaging is directly affected by the quality of the MEMS mirror drive control strategy. Lissajous scanning is categorized into repetitive and non-repetitive approaches [31,37]. Repetitive scanning refers to using a group of fixed parameters to drive the MEMS mirror deflection motion continuously, ensuring consistency of the scanning trajectory throughout each cycle. This approach offers a simple and stable control method. In contrast, in the non-repetitive Lissajous scanning mode, a complete cycle of Lissajous trajectories is executed continuously under a specific set of control parameters; the succeeding set of control parameters is then introduced, leading to the next complete cycle of Lissajous trajectories [31]. Consequently, non-repetitive scanning enriches the trajectory information through multi-frame trajectory fusion, which enhances the scanning filling density in a way that cannot be achieved by the repetitive scanning mode. However, the control of the non-repetitive scanning mode is more challenging, especially when dealing with a large number of parameter groups. In this work, we employed the dual-parameter cross-drive control algorithm, utilizing two sets of independent control parameters as detailed in Table 2. Following the completion of a scanning cycle driven by the first set of sinusoidal signal parameters, the subsequent set of parameters is employed to drive the MEMS mirror through the next scanning cycle. This alternation of control parameters gives rise to two distinct styles of Lissajous trajectories, defined as TA and TB, respectively. Assume that a continuous trajectory sequence TA1, TB1, TA2, TB2, …, TAn, TBn is generated within a specified time period. Notably, TA1, TA2, …, TAn are identical because they share the same control parameters, and likewise TB1, TB2, …, TBn are identical. The fusion of TA and TB forms a composite image, achieving a filling density of 99.87%. The fusion process adopts a pipeline approach: TA1 and TB1 are fused to generate the initial frame TAB1; TB1 and TA2 are fused to generate the second frame TAB2; TA2 and TB2 are fused to generate the third frame TAB3; and so forth. There is only an initial delay in generating the first fused image; subsequent fused frames are produced continuously, leaving the frame rate unaffected. This algorithm operates in an alternating mode, generating two complementary scanning trajectories that enhance the scanning filling density.
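The pipelined pairwise fusion of TA and TB frames can be sketched as a two-frame sliding window. This is an illustrative sketch (not the system firmware), and the element-wise maximum is just one simple way to merge the gray values of the two complementary trajectories.

```python
from collections import deque
from typing import Iterable, Iterator
import numpy as np

def pipelined_pairwise_fusion(frames: Iterable[np.ndarray]) -> Iterator[np.ndarray]:
    """Fuse each incoming frame with its predecessor: TA1+TB1, TB1+TA2, TA2+TB2, ..."""
    window = deque(maxlen=2)
    for frame in frames:
        window.append(frame)
        if len(window) == 2:
            yield np.maximum(window[0], window[1])  # merge complementary coverage

# Usage with dummy 512 x 512 frames standing in for TA/TB reconstructions:
# stream = (np.random.rand(512, 512) for _ in range(6))
# fused = list(pipelined_pairwise_fusion(stream))   # 5 fused frames from 6 inputs
```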

Firstly, to verify the effectiveness of the dual-parameter cross-drive control algorithm, we employed Matlab to simulate the Lissajous trajectories driven by single and double groups of parameters, as shown in Fig. 3. To facilitate the observation of trajectory patterns without loss of generality, mx and my were set to smaller values, while maintaining an image size of 512 × 512 pixels and fs = 10 MS/s. Figure 3(a) shows the trajectory driven by a single group of parameters with fx1 = 2300 Hz, fy1 = 2200 Hz, $\frac{{{m_x}}}{{{m_y}}} = \frac{{23}}{{22}}$, φy1 = φx1 = 0°, Ax1 = 2 V, and Ay1 = 2 V, where SFD was calculated to be 14.59%. Figure 3(b) presents the trajectory driven alternately by double groups of parameters. The parameters in the first group are the same as those used to obtain Fig. 3(a), whereas the parameters in the second group remained unchanged except for φx2 = 2° and φy2 = 0°, where DFD was calculated to be 27.05%. The filling density of the trajectory increased from 14.59% in the single-parameter case to 27.05% in the dual-parameter scenario, verifying the effectiveness of the control algorithm.

Fig. 3. Lissajous trajectories obtained with single- and double-group parameters. (a) Trajectory (red line) driven by the single-group parameters with fx1 = 2300 Hz, fy1 = 2200 Hz, $\frac{{{m_x}}}{{{m_y}}} = \frac{{23}}{{22}}$, φy1 = φx1 = 0°, Ax1 = 2 V, and Ay1 = 2 V, and SFD = 14.59%. (b) Trajectory (red and green lines) driven alternately by the double-group parameters. The parameters in the first group were the same as those used to obtain (a), and the parameters in the second group remained unchanged except for φx2 = 2° and φy2 = 0°, DFD = 27.05%.

Secondly, fluorescent target imaging was performed under the single-group parameter drive and the double-group alternating parameter drive, as shown in Fig. 4. The fluorescent target structure was fabricated by etching on a BF33 glass substrate. This target comprised an array of evenly spaced fluorescent dots, each with a diameter of 2 µm, arranged in rows and columns with a 10 µm spacing. The image quality of the dual-parameter drive is evidently better than that of the single-parameter drive. Figure 4(a) presents the target imaging under the single-group parameters in Table 2 except for φy1 = 4° and φx1 = 259.7°, for which SFD was calculated to be 74.63%. Figure 4(b) depicts the target imaging under the double-group parameters in Table 2, for which DFD was evaluated to be 99.87%.

Fig. 4. (a) Target imaging with the single-group parameters in Table 2 except for φy1 = 4°, φx1 = 259.7°, and SFD = 74.63%. (b) Target imaging with the double-group parameters in Table 2 and DFD = 99.87%.

3.3 Segmented phase synchronization

Synchronization between MEMS mirror drive control and DAQ is a key factor affecting the quality of image reconstruction. During continuous scanning imaging, consistent acquisition and reconstruction based on initial phase parameters can lead to cumulative phase errors [13,25], causing mismatches between the original data and the reconstruction model and resulting in image fragmentation, aggregation, and other distortions. The system uses a sine lookup table and digital-analog circuit to generate MEMS mirror driving signals. The sine lookup table stores the amplitude information for each phase point, with the number of phase points in a cycle represented by 25 bits in binary, and the phase resolution reaches $\frac{{{{360}^ \circ }}}{{{2^{25}}}} = 1.07 \times {10^{ - 5}}({^ \circ } )$. A quantization error arises because the sine lookup table configuration parameter can only be set as an integer, resulting in a deviation between the actual output frequency and the theoretical frequency. In particular, the theoretical frequency is 2330 Hz, whereas the actual output frequency is 2330.0052 Hz. Although the deviation is only 0.0052 Hz, with increasing time, the phase deviation will increase, easily resulting in image distortion.
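The quoted 2330.0052 Hz can be reproduced by assuming a direct-digital-synthesis style generator with a 25-bit phase accumulator updated at the 1 MS/s DAC conversion rate from Section 2.2; the sketch below makes that assumption explicit.

```python
N_BITS = 25                   # phase points per cycle: 2**25 (from the text)
F_UPDATE = 1_000_000          # assumed update rate = DAC conversion rate, Hz

def actual_output_frequency(f_target_hz: float) -> float:
    """The tuning word must be an integer, so the output frequency is quantized."""
    tuning_word = round(f_target_hz * 2**N_BITS / F_UPDATE)
    return tuning_word * F_UPDATE / 2**N_BITS

print(actual_output_frequency(2330.0))   # ~2330.0052 Hz (0.0052 Hz deviation)
print(360.0 / 2**N_BITS)                 # phase resolution ~1.07e-5 degrees
```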

In this paper, a segmented phase synchronization algorithm is adopted to reduce the effect of accumulated phase error. Since FR = 10 Hz, each image frame has a period of 100 ms. When the system starts working, the initial phase information of the current frame is recorded, and the DAQ is started simultaneously. The initial phase information is then uploaded to the PC in a packet together with the acquired image data. At the beginning of the subsequent frame, the new phase information of that frame is recorded again and uploaded together with its image data. In this way, each frame of image data contains its own initial phase information, which is re-synchronized every 100 ms, greatly reducing the image reconstruction bias due to accumulated phase error. Referring to the control parameters in Table 2, fluorescent target imaging with and without the algorithm is shown in Fig. 5. When the algorithm is used, the fluorescence spots are well clustered, without divergence or division, as shown in Fig. 5(a). When the algorithm is not used, the fluorescence spots show obvious divergence and division in the X- and Y-directions due to the accumulated phase error, as shown in Fig. 5(b). Thus, the segmented phase synchronization algorithm effectively solves the synchronization problem between MEMS mirror drive control and image data acquisition, ultimately enhancing the image reconstruction quality.
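A minimal sketch of the per-frame phase tagging follows (illustrative data layout, not the actual packet format): each 100 ms frame of acquired samples is packaged with the X/Y phases latched at its own start, so reconstruction never depends on phase accumulated since power-up.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FramePacket:
    frame_index: int
    gamma_x: float          # X-axis phase latched at the start of this frame (rad)
    gamma_y: float          # Y-axis phase latched at the start of this frame (rad)
    samples: np.ndarray     # raw 1-D PMT time series for this 100 ms frame

def split_into_frames(samples, phases, fs=10_000_000, fr=10):
    """Cut a long acquisition into per-frame packets tagged with their own phases."""
    n = fs // fr            # 1,000,000 samples per 100 ms frame
    return [FramePacket(k, gx, gy, samples[k * n:(k + 1) * n])
            for k, (gx, gy) in enumerate(phases)]
```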

Fig. 5. Images reconstructed with and without the segmented phase synchronization algorithm. (a) With the algorithm, the distribution of the fluorescent spots is uniform with good clustering. (b) Without the algorithm, the fluorescence spots show significant divergence and division.

3.4 Image analytical reconstruction model

The core of image analytical reconstruction is to establish the pixel plane mathematical model. By referring to Eq. (1), we can derive the theoretical trajectory equation for the pixel plane, as presented in Eq. (9) [31,32]:

$$\left\{ \begin{array}{l} X(t) = \left\lfloor {\frac{{{P_x}}}{2}\textrm{sin(2}\pi {f_x}t + {\varphi_x}) + \frac{{{P_x}}}{2}} \right\rfloor \\ Y(t) = \left\lfloor {\frac{{{P_y}}}{2}\textrm{sin(2}\pi {f_y}t + {\varphi_y}) + \frac{{{P_y}}}{2}} \right\rfloor \end{array} \right.. $$

Here, X(t) and Y(t) are the X- and Y-axis coordinate positions of the trajectory at time t, respectively; $\lfloor{} \rfloor $ is the downward rounding (floor) operator; Px and Py are the numbers of pixels in the X- and Y-directions, respectively; fx and fy are the frequencies; and φx and φy are the biaxial initial phases. The image size is 512 × 512 pixels, i.e., Px = Py = 512, and the maximum imaging FOV is 140 µm × 140 µm, so the pixel resolution is 140 µm / 512 ≈ 0.27 µm, meeting the specified image resolution requirement of 0.3 µm. The maximum error caused by the downward rounding operation is close to one pixel grid, which does not exceed the 0.3 µm image resolution requirement. In the actual reconstructed image, the gray value of each pixel is determined by the mean and median filtering of multiple fused images; therefore, the gray value variations caused by a one-pixel grid error are negligible. The trajectory coordinates are translationally transformed for ease of calculation by the PC program, which does not affect the imaging. With fx and fy determined, Eq. (9) can be converted into the actual image reconstruction mathematical model in the pixel plane, shown in Eq. (10), by using the phase error correction terms Δλx and Δλy, which follow our team's previous results on feed-forward control [42], together with the actual feedback phases γx and γy obtained under the phase synchronization mechanism:

$$\left\{ \begin{array}{l} X(t) = \lfloor{256 \cdot \textrm{sin(2}\pi {f_x}t + {\gamma_x} + \Delta {\lambda_x}) + 256} \rfloor \\ Y(t) = \lfloor{256 \cdot \textrm{sin(2}\pi {f_y}t + {\gamma_y} + \Delta {\lambda_y}) + 256} \rfloor \end{array} \right.$$

The time variable t varies from 0 to 100 ms in steps of $\frac{1}{{{f_s}}}$. A 2D coordinate lookup table is generated using Eq. (10), and a reconstructed 2D image is generated by processing the 1D time-series image data. This process is employed to form the gray values for each pixel, which are subsequently assigned to their corresponding pixel coordinate positions.
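A minimal Python sketch of this reconstruction step follows, assuming the Eq. (10) mapping and averaging over repeated visits as one simple way to assign pixel gray values (helper names are illustrative):

```python
import numpy as np

def coordinate_lut(fx, fy, gamma_x, gamma_y, dlam_x, dlam_y, P=512, fs=10e6, fr=10):
    """Pixel coordinates from Eq. (10) for every sample of one 100 ms frame."""
    t = np.arange(0.0, 1.0 / fr, 1.0 / fs)
    X = np.floor(P / 2 * np.sin(2 * np.pi * fx * t + gamma_x + dlam_x) + P / 2)
    Y = np.floor(P / 2 * np.sin(2 * np.pi * fy * t + gamma_y + dlam_y) + P / 2)
    return (np.clip(X, 0, P - 1).astype(np.int64),
            np.clip(Y, 0, P - 1).astype(np.int64))

def reconstruct_frame(samples, lut, P=512):
    """Scatter the 1-D time series onto the pixel grid, averaging repeated visits."""
    X, Y = lut
    acc = np.zeros((P, P))
    hits = np.zeros((P, P))
    np.add.at(acc, (Y, X), samples)     # unbuffered accumulation per pixel
    np.add.at(hits, (Y, X), 1)
    return np.divide(acc, hits, out=np.zeros_like(acc), where=hits > 0)
```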

3.5 Image filtering

Due to the unavoidable presence of optical and electromagnetic noise in the system [43,44], the reconstructed image is superimposed with obvious noise, such as Gaussian noise, Poisson noise, and salt-and-pepper noise [45,46]. In this paper, two software filtering algorithms are adopted to suppress image noise. The first is the pipe-lined fusion-mean filtering algorithm. In this work, the test condition is scanning and reconstructing images of fixed samples to obtain a time series of images, which are highly correlated. The algorithm cumulatively averages the gray values of multiple images at the same pixel positions to obtain the final gray values at the corresponding positions of the fused image [47] and adopts a pipe-lined mechanism to ensure the continuity of the fused images. This method is simple, fast, and suitable for real-time processing; the noise level is weakened by cumulative averaging of the pixel gray values, which improves the signal-to-noise ratio of the fused image. Taking fluorescent target imaging as an example, the 1-frame original image and the 5-frame, 10-frame, and 20-frame fusion-mean filtered images are shown in Figs. 6(a), 6(b), 6(c), and 6(d), respectively. The image in Fig. 6(a) contains obvious noise. As demonstrated by Figs. 6(b)–(d), the fusion-mean filtering algorithm can effectively suppress the noise, and the more image frames involved in the fusion, the better the noise suppression, although the image becomes slightly blurred due to the loss of high-frequency components.
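A minimal sketch of the pipe-lined fusion-mean idea (illustrative; the paper does not give its implementation): a running mean over the most recent N reconstructed frames, emitting one filtered frame per new input once the window fills.

```python
from collections import deque
import numpy as np

def fusion_mean_stream(frames, n_fuse=20):
    """Yield the mean of the last n_fuse frames each time a new frame arrives."""
    window = deque(maxlen=n_fuse)
    for frame in frames:
        window.append(np.asarray(frame, dtype=np.float64))
        if len(window) == n_fuse:
            yield sum(window) / n_fuse
```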

Fig. 6. Image filtering. (a) One-frame original image, which mainly contains Poisson noise, Gaussian noise, and salt-and-pepper noise. (b) Five-frame fused image, where some noise is eliminated. (c) Ten-frame fused image, where noise is further suppressed. (d) Twenty-frame fused image, where the noise is suppressed, the image becomes slightly blurred, and salt-and-pepper noise still exists somewhat. (e) Median filtered image based on Fig. 6(d), where salt-and-pepper noise is eliminated, demonstrating optimal noise suppression (the filter window size is 3 × 3).

The second method is the pipe-lined median filtering algorithm, which achieves noise filtering by sliding a fixed-size window over the image and replacing the value of the center pixel of each window with the median of all pixel values within the window [48]. Mean filtering is not very effective at suppressing salt-and-pepper noise, and slight salt-and-pepper noise can still be seen in Fig. 6(d). After applying median filtering (with a 3 × 3 filter window), as shown in Fig. 6(e), the salt-and-pepper noise residue in Fig. 6(d) is effectively suppressed, and the image quality is further improved.
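For the 3 × 3 median step, a standard library call suffices; using SciPy here is our assumption about tooling, not a statement about the authors' implementation.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_salt_and_pepper(image: np.ndarray, window: int = 3) -> np.ndarray:
    """Replace each pixel with the median of its window x window neighborhood."""
    return median_filter(image, size=window)
```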

4. Results

4.1 FOV calibration

The full-scale voltage was denoted as Vmax, and the control parameters other than the X- and Y-axis amplitudes are listed in Table 2. With Ay fixed at 0.1Vmax and Ax stepped in 0.01Vmax increments over the range 0–0.3Vmax, scanning imaging of the fluorescent target was performed point by point, and the number of X-direction fluorescent spots was counted to calculate the X-direction imaging field-of-view size (FOVx). The curve of FOVx versus Ax is plotted in Fig. 7 (red line). Similarly, to obtain the curve of FOVy versus Ay in Fig. 7 (blue line), the other parameters remained unchanged, Ax was fixed at 0.1Vmax, and Ay was stepped in 0.01Vmax increments over the range 0–0.3Vmax.

Fig. 7. Curves of FOV versus amplitude for X- and Y-axes. Red line: X axis. Blue line: Y axis. The input amplitude ratios $\frac{{{A_x}}}{{{V_{max}}}}$ and $\frac{{{A_y}}}{{{V_{max}}}}$ varied from 0 to 0.3.

By referring to Fig. 7 and setting Ax = 0.25Vmax and Ay = 0.21Vmax with the other control parameters unchanged, the target imaging results were obtained, as illustrated in Fig. 8. The spacing between two rows or columns of spots is 10 µm, and the FOV can be calculated to be approximately 140 µm × 140 µm.

Fig. 8. Results of fluorescent target for FOV calibration.

4.2 Lateral resolution calibration

Fluorescent beads (G400 Fluoro-max, Thermo Scientific) with a diameter of 380 nm were selected as the calibration sample. Figure 7 guided the selection of control parameters for lateral resolution calibration, where Ax = 0.035Vmax, Ay = 0.031Vmax, and other parameters were selected following Table 2. Subsequently, an image of the fluorescent beads with a FOV of 15 µm × 15 µm was obtained (the pixel resolution is 15 µm/512 = 0.03 µm), as shown in Fig. 9(a). Five fluorescent beads enclosed within the red boxes in Fig. 9(a) were selected, and their intensities were measured. An average Gaussian curve was fitted, as shown in Fig. 9(b). The lateral resolution was calculated to be 0.62 µm.
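A minimal sketch of the resolution estimate (hypothetical helper, not the authors' analysis script): fit a 1-D Gaussian to a bead intensity profile and report the full width at half maximum, FWHM = 2*sqrt(2 ln 2)*sigma.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + offset

def fwhm_from_profile(positions_um: np.ndarray, intensities: np.ndarray) -> float:
    """Fit a Gaussian to a bead line profile and return its FWHM in micrometers."""
    p0 = [intensities.max() - intensities.min(),          # amplitude guess
          positions_um[np.argmax(intensities)],           # center guess
          0.3,                                            # sigma guess (um)
          intensities.min()]                              # baseline guess
    popt, _ = curve_fit(gaussian, positions_um, intensities, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
```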

Fig. 9. (a) Fluorescent bead image for lateral resolution calibration (the image shown is a 20-frame fused image). (b) Average Gaussian fitting curve of the fluorescent beads in the red boxes in (a).

4.3 Bio-tissue imaging

The TPM implemented Lissajous scanning of pollen, stained human renal tubules (C15, Henan Zhizao Educational Equipment Co. Ltd., Prepared Slides 50 pcs), stained human lymph nodes (NO.58, BRESSER Prepared Slides 100 pcs), and stained human myocardium (NO.56, BRESSER Prepared Slides 100 pcs) for ex vivo two-photon imaging, as shown in Fig. 10. Furthermore, two-photon imaging of in vivo human skin was also performed, as depicted in Fig. 11.

Fig. 10. Two-photon imaging results of ex vivo biological tissue (the images shown are 20-frame fused images, frame rate is 10 fps with 512 × 512 pixels). (a) Pollen granules. (b) Human renal tubules. (c) Human lymph nodes. (d) Human myocardium.

Fig. 11. Two-photon imaging results of in vivo human skin (the images shown are 20-frame fused images, frame rate is 10 fps with 512 × 512 pixels). (a) Cells of the basal layer of human skin. (b) Collagen fibers of the dermal layer of human skin.

In Fig. 10(a), spherical or oval-shaped pollen granules exhibit clear structures with sprouting holes or grooves on the surface. In Fig. 10(b), edema occurs in the epithelial cells of the renal tubules, and the glomeruli are clearly visible. The imaging results of lymph nodes in Fig. 10(c) depict irregularly arranged, small, round lymphocytes and their nuclei. Figure 10(d) presents the myocardium imaging results, revealing myocardial fibrocytes and nuclei. In Fig. 11(a), cells of the basal layer of in vivo human skin, tightly arranged in a columnar shape, are clearly visible. Figure 11(b) further illustrates the structure of collagen fibers in the dermal layer of in vivo human skin. The imaging results verify the feasibility and effectiveness of MEMS-based Lissajous scanning and image reconstruction under the feed-forward control strategy.

5. Conclusion

This paper proposed Lissajous scanning and image reconstruction under a feed-forward control strategy for MEMS-based TPM, which can be applied to other Lissajous laser scanning imaging systems. The MEMS mirror drive control parameters were deduced and determined in conjunction with practical applications, and a dual-parameter alternating drive control algorithm and a segmented phase synchronization mechanism were proposed to eliminate image distortion due to phase error and to improve the scanning fill density to 99.87% for a 512 × 512 pixel image. Furthermore, the mathematical model and implementation process of the image analytical reconstruction were given, and the pipe-lined fusion-mean filter and median filter were used to suppress image noise. Finally, through calibration experiments, a frame rate of 10 fps, a field of view of 140 µm × 140 µm, and a lateral resolution of 0.62 µm were obtained. The imaging capability of the TPM was verified on ex vivo and in vivo biological tissues.

Funding

National Key Research and Development Program of China (2020YFB1312802); National Natural Science Foundation of China (31830036, 61973019, 61975002); Academic Excellence Foundation of BUAA for PHD Students.

Acknowledgments

The authors thank the anonymous reviewers for their insightful and professional comments on this work.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990). [CrossRef]  

2. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005). [CrossRef]

3. K. Svoboda and R. Yasuda, “Principles of two-photon excitation microscopy and its applications to neuroscience,” Neuron 50(6), 823–839 (2006). [CrossRef]  

4. D. Kobat, N. G. Horton, and C. Xu, “In vivo two-photon microscopy to 1.6-mm depth in mouse cortex,” J. Biomed. Opt. 16(10), 1 (2011). [CrossRef]  

5. C. C. Wang, F. C. Li, R. J. Wu, et al., “Differentiation of normal and cancerous lung tissues by multiphoton imaging,” J. Biomed. Opt. 14(4), 044034 (2009). [CrossRef]  

6. S. T. S. Holmström, U. Baran, and H. Urey, “MEMS laser scanners: a review,” J. Microelectromech. Syst. 23(2), 259–275 (2014). [CrossRef]  

7. T. Fujita, K. Maenaka, and Y. Takayama, “Dual-axis MEMS mirror for large deflection-angle using SU-8 soft torsion beam,” Sensors and Actuators A: Physical 121(1), 16–21 (2005). [CrossRef]  

8. X. Duan, H. Li, Z. Qiu, et al., “MEMS-based multiphoton endomicroscope for repetitive imaging of mouse colon,” Biomed. Opt. Express 6(8), 3074–3083 (2015). [CrossRef]  

9. H. M. Chu and K. Hane, “Design, fabrication and vacuum operation characteristics of two-dimensional comb-drive micro-scanner,” Sensors and Actuators A: Physical 165(2), 422–430 (2011). [CrossRef]

10. W. Piyawattanametha, P. R. Patterson, D. Hah, et al., “Surface- and bulk-micromachined two-dimensional scanner driven by angular vertical comb actuators,” J. Microelectromech. Syst. 14(6), 1329–1338 (2005). [CrossRef]

11. V. Milanovic, G. A. Matus, and D. T. McCormick, “Gimbal-less monolithic silicon actuators for tip-tilt-piston micromirror applications,” IEEE J. Select. Topics Quantum Electron. 10(3), 462–471 (2004). [CrossRef]  

12. K. Hwang, Y. H. Seo, and K. H. Jeong, “Microscanners for optical endomicroscopic applications,” Micro Nano Syst. Lett. 5(1), 1–11 (2017). [CrossRef]

13. M. Birla, X. Duan, H. Li, et al., “Image Processing Metrics for Phase Identification of a Multiaxis MEMS Scanner Used in Single Pixel Imaging,” IEEE/ASME Trans. Mechatron. 26(3), 1445–1454 (2021). [CrossRef]  

14. H. C. Park, Y. H. Seo, and K. H. Jeong, “Lissajous fiber scanning for forward viewing optical endomicroscopy using asymmetric stiffness modulation,” Opt. Express 22(5), 5818–5825 (2014). [CrossRef]  

15. H. C. Park, Y. H. Seo, K. Hwang, et al., “Micromachined tethered silicon oscillator for an endomicroscopic Lissajous fiber scanner,” Opt. Lett. 39(23), 6675–6678 (2014). [CrossRef]  

16. D. Brunner, H. W. Yoo, R. Schroedter, et al., “Adaptive Lissajous scanning pattern design by phase modulation,” Opt. Express 29(18), 27989–28004 (2021). [CrossRef]  

17. A. Bazaei, Y. K. Yong, and S. O. Reza Moheimani, “High-speed Lissajous-scan atomic force microscopy: Scan pattern planning and control design issues,” Rev. Sci. Instrum. 83(6), 063701 (2012). [CrossRef]  

18. W. Liang, K. Murari, Y. Y. Zhang, et al., “Increased illumination uniformity and reduced photodamage offered by the Lissajous scanning in fiber-optic two-photon endomicroscopy,” J. Biomed. Opt. 17(2), 021108 (2012). [CrossRef]  

19. D. Y. Kim, K. Hwang, J. Ahn, et al., “Lissajous scanning two-photon endomicroscope for in vivo tissue imaging,” Sci. Rep. 9(1), 3560 (2019). [CrossRef]

20. H. Chen, A. Chen, W. J. Sun, et al., “Closed-loop control of a 2-D mems micromirror with sidewall electrodes for a laser scanning microscope system,” Int. J. Optomechatronics 10(1), 1–13 (2016). [CrossRef]  

21. Y. M. Li and Q. S. Xu, “Adaptive Sliding Mode Control With Perturbation Estimation and PID Sliding Surface for Motion Tracking of a Piezo-Driven Micromanipulator,” IEEE Trans. Contr. Syst. Technol. 18(4), 798–810 (2010). [CrossRef]  

22. C. R. Vogel and Q. Yang, “Modeling, simulation, and open loop control of a continuous facesheet MEMS deformable mirror,” J. Opt. Soc. Am. A 23(5), 1074–1081 (2006). [CrossRef]  

23. J. B. Stewart, A. Diouf, Y. Zhou, et al., “Open-Loop control of MEMS deformable mirror for largeamplitude wavefront control,” J. Opt. Soc. Am. A 24(12), 3827–3833 (2007). [CrossRef]  

24. C. Blain, R. Conan, C. Bradley, et al., “Open-loop control demonstration of micro-electro-mechanical system MEMS deformable mirror,” Opt. Express 18(6), 5433–5448 (2010). [CrossRef]  

25. M. Kim, C. Park, S. Je, et al., “Real-Time Compensation of Simultaneous Errors Induced by Optical Phase Difference and Substrate Motion in Scanning Beam Laser Interference Lithography System,” IEEE/ASME Trans. Mechatron. 23(4), 1491–1500 (2018). [CrossRef]  

26. S. Zhao and K. K. Tan, “Adaptive feedforward compensation of force ripples in linear motors,” Control Engineering Practice 13(9), 1081–1092 (2005). [CrossRef]  

27. M. Kara-Mohamed, W. P. Heath, and A. Lanzon, “Enhanced tracking for nanopositioning systems using feedforward/feedback multivariable control design,” IEEE Trans. Contr. Syst. Technol. 23(3), 1003–1013 (2015). [CrossRef]  

28. M. Grotjahm and B. Heimann, “Model-based feedforward control in Industrial Robotics,” International Journal of Robotics Research 21(1), 45–60 (2002). [CrossRef]  

29. K. K. Leang, Q. Z. Zou, and S. Devasia, “Feedforward control of piezoactuators in atomic force microscope systems,” IEEE Control Syst. 29(1), 70–82 (2009). [CrossRef]  

30. S. Z. Sullivan, R. D. Muir, J. A. Newman, et al., “High frame-rate multichannel beam-scanning microscopy based on Lissajous trajectories,” Opt. Express 22(20), 24224–24234 (2014). [CrossRef]  

31. C. L. Hoy, N. J. Durr, and A. Ben-Yakar, “Fast-updating and nonrepeating Lissajous image reconstruction method for capturing increased dynamic information,” Appl. Opt. 50(16), 2376–2382 (2011). [CrossRef]  

32. G. Li, X. Duan, M. Lee, et al., “Ultra-Compact Microsystems-Based Confocal Endomicroscope,” IEEE Trans. Med. Imaging 39(7), 2406–2414 (2020). [CrossRef]  

33. D. Wu, K. Kim, G. El Fakhri, et al., “Iterative low-dose CT reconstruction with priors trained by artificial neural network,” IEEE Trans. Med. Imaging 36(12), 2479–2486 (2017). [CrossRef]  

34. G. A. Kastis, D. Kyriakopoulou, and A. S. Fokas, “An analytic reconstruction method for PET based on cubic splines,” J. Phys.: Conf. Ser. 490, 012128 (2014). [CrossRef]  

35. D. Xu and F. Zhang, “Parameters of Lissajous graphs,” Qufu Norm. Univ. 27, 54–56 (2001).

36. J. Y. Wang, G. F. Zhang, and Z. You, “Design rules for dense and rapid Lissajous scanning,” Microsyst. Nanoeng. 6(1), 101 (2020). [CrossRef]  

37. K. Hwang, Y. H. Seo, J. Ahn, et al., “Frequency selection rule for high definition and high frame rate Lissajous scanning,” Sci. Rep. 7(1), 14075 (2017). [CrossRef]  

38. Y. H. Seo, K. Hwang, H. Kim, et al., “Scanning MEMS mirror for high definition and high frame rate Lissajous patterns,” Micromachines 10(1), 67 (2019). [CrossRef]  

39. X. Han, “Study on the Features of Lissajous’ Figure,” XinZhou Teachers Univ. 25, 18–22 (2009).

40. X. Zhang, “Analysis of influence of initial phase on Lissajous graph,” Hubei Norm. Univ. 20, 56–60 (2000).

41. Q. A. A. Tanguy, O. Gaiffe, N. Passilly, et al., “Real-time Lissajous imaging with a low-voltage 2-axis MEMS scanner based on electrothermal actuation,” Opt. Express 28(6), 8512–8527 (2020). [CrossRef]  

42. X. Zhang, C. Wang, Y. Han, et al., “Analysis of Error Sources in the Lissajous Scanning Trajectory Based on Two-Dimensional MEMS Mirrors,” Photonics 10(10), 1123 (2023). [CrossRef]  

43. B. Mandracchia, X. Hua, C. Guo, et al., “Fast and accurate sCMOS noise correction for fluorescence microscopy,” Nat. Commun. 11(1), 94 (2020). [CrossRef]  

44. K. Suhling, R. W. Airey, and B. L. Morgan, “Minimization of fixed pattern noise in photon event counting imaging,” Rev. Sci. Instrum. 73(8), 2917–2922 (2002). [CrossRef]  

45. A. Foi, M. Trimeche, V. Katkovnik, et al., “Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data,” IEEE Trans. on Image Process. 17(10), 1737–1754 (2008). [CrossRef]  

46. A. E. Mallahi and F. Dubois, “Dependency and precision of the refocusing criterion based on amplitude analysis in digital holographic microscopy,” Opt. Express 19(7), 6684–6698 (2011). [CrossRef]  

47. H. C. Huang, C. M. Chen, S. D. Wang, et al., “Adaptive symmetric mean filter: a new noise-reduction approach based on the slope facet model,” Appl. Opt. 40(29), 5192–5205 (2001). [CrossRef]  

48. R. H. Chan, C. W. Ho, and M. Nikolova, “Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization,” IEEE Trans. on Image Process. 14(10), 1479–1485 (2005). [CrossRef]  
