Optica Publishing Group

Single-shot videography with multiplex structured illumination using an interferometer

Open Access

Abstract

Frequency recognition algorithm for multiple exposures (FRAME) is a high-speed videography technique that exposes a dynamic object to time-varying structured illumination (SI) and captures two-dimensional transients in a single shot. Conventional FRAME requires light splitting to increase the number of frames per shot, thereby resulting in optical loss and a limited number of frames per shot. Here, we propose and demonstrate a novel FRAME method which overcomes these problems by utilizing an interferometer to generate a time-varying SI without light splitting. Combining this method with a pulsed laser enables low-cost, high-speed videography on a variety of timescales from microseconds.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High-speed videography is an important technique for tracking two-dimensional transient phenomena and understanding their mechanisms in various research areas, ranging from physics to biomedical sciences. The most common method for capturing ultrafast phenomena is the pump-probe method using short laser pulses, wherein a pump pulse impinges on the sample and induces a transient, and the relaxation process is monitored by a probe pulse that arrives after the pump pulse with a finite delay. However, because this method assumes that the phenomenon is repeatable, it cannot be applied to nonreproducible or nonrepetitive phenomena, such as laser processing [1], chemically reacting flows [2], and sonoporation [3]. High-speed cameras have been studied to capture such phenomena, and the methods can be broadly classified into three categories [4]. The first uses state-of-the-art burst image sensors, such as in situ storage image sensors [5,6] and computational sensors [7], with a frame rate of hundreds of megaframes per second (Mfps). The second category of methods uses a framing camera that employs multiple synchronized cameras, or a rotating mirror camera that uses a fast-rotating mirror prism to sweep images along multiple image sensors, with frame rates reaching several hundred Mfps [8,9]. However, the number of frames captured in a single shot is limited, and the equipment is large and expensive because of the multiple image sensors in the set. The temporal resolutions of the first and second methods described above depend on the time dispersion of the photoelectrons moving through the sensor, which in turn depends on the sensor structure [4].

The third category uses an off-the-shelf camera without relying on a fast detector to capture high-speed phenomena in a single shot. This is possible by encoding each instant of the transient into another physical quantity (space, angle, wavelength, or spatial frequency) [10]. For example, sequentially timed all-optical mapping photography (STAMP) [11] and spectrally filtered (SF)-STAMP [12,13] use time-dispersed chirped pulses as probe light to encode time into wavelength, and their frame rates can reach several teraframes per second (Tfps). The wavelength-encoded images are then spatially separated on the image sensor using a diffraction element. The pixel positions of the image sensor provide a timeline for the phenomenon. Compressed ultrafast photography (CUP) [14] and trillion-frame-per-second CUP (T-CUP) [15] combine high-speed detection using a streak camera with computational processing to achieve a frame rate of 10 Tfps. In this method, a dynamic event is first imaged on a digital micromirror device (DMD), which adds a pseudorandom pattern to the light image. Subsequently, the spatially encoded images are captured as they are swept along one camera axis. A single-shot image can be decomposed into several frames using a compressed sensing algorithm [14–17]. However, because these methods share numerous frames on a single image sensor, each frame size decreases as the number of frames increases.

By contrast, the frequency recognition algorithm for multiple exposures (FRAME) encodes time into spatial frequency to achieve high-speed videography in a single shot with a frame rate of 5 Tfps [18]. In FRAME, a pulse train, wherein each individual pulse is uniquely spatially modulated, is projected onto a dynamic object to capture transient phenomena in a single shot. By Fourier transformation of the captured image, the images carried by the individual pulses can be separated in Fourier space according to the type of spatial modulation. Therefore, the number of spatial modulations is equal to the number of frames per shot. This method provides a wide field of view, regardless of the number of frames. The temporal resolution of the frame is determined by the pulse width of the illuminating light. Unlike methods that encode time into wavelengths, FRAME is applicable to the imaging of self-luminescent events [18–20]. In addition, FRAME is unique in its ability to capture three-dimensional information of dynamic objects in a single shot using a single image sensor [18,21]. However, to uniquely and spatially modulate each pulse in a pulse train, multiple beam splitters, Ronchi gratings, and light sources are required [18–24]. Therefore, in previous studies, the number of frames was limited to approximately 10 because the optics become bulky and complex as the number of frames increases. In a subsequent study, the beam-splitter-based optical system was replaced by a set of a single diffractive optical element (DOE) and a DMD. This resulted in a compact optical system and increased the number of frames to 1024, although the frame rate was limited by the DMD switching speed [25]. Thus, in FRAME, the number of light divisions has directly limited the number of frames per shot, and the problem remains that high-intensity illumination is required to increase the number of frames.

In this study, we propose an interferometer-based FRAME method without light splitting, herein referred to as iFRAME. A Michelson interferometer generates time-varying spatial modulation that is used as structured illumination (SI) of a dynamic object. To the best of our knowledge, this method has the highest light efficiency of any FRAME to date and thus avoids optical system expansion, complexity, and limitation in the number of frames owing to light splitting, which has been an issue in previous studies. To extract each frame from a single-shot image, the spatial frequency of the SI must be discretized in Fourier space, which can be achieved using pulsed light. The frame rate depends on the repetition rate and the varying speed of the SI spatial frequency, and the temporal resolution is determined by the pulse width. The frame rate multiplied by the exposure time is the number of frames in a single shot. In this study, as a proof of principle, we implemented iFRAME by imaging an animation consisting of patterns loaded and played on the DMD.

The remainder of this paper is organized as follows. In Section 2, we describe the concept of the experiment and the iFRAME system. In Section 3, we discuss the effects of the system parameters (pulse width, repetition rate, and rotation speed of the interferometer mirror) on the SI code distribution in Fourier space and on images extracted by a frequency-sensitive 2D spatial lock-in algorithm. Subsequently, by capturing an animation as it plays on the DMD, we demonstrate the ability of iFRAME to track dynamic phenomena. In Section 4, we conclude the study.

2. Experimental setup of interferometer-based FRAME

When building an interferometer, if the two beams exiting it are misaligned, a sinusoidal intensity pattern is created by interference. The spacing of the fringes and their angle depend on the relative positions of the beams. If the two beams are horizontally misaligned, vertical stripes are formed (Fig. 1(a)); if the misalignment increases further, the vertical stripes become finer (Fig. 1(b)). By contrast, when the two beams are vertically misaligned, horizontal stripes are formed (Fig. 1(c)). Thus, arbitrary interference fringes can be produced by tilting the end mirrors of a Michelson interferometer. Fourier transforming these interference fringes disperses them as peaks in Fourier space (Fig. 1(d)). In iFRAME, these interference fringes are used as the SI of dynamic objects. The advantage of this method is that an arbitrary variety of SI patterns can be generated easily without splitting the light, resulting in no loss of light and a compact optical system. As FRAME encodes time into the spatial frequency of the SI, we refer to each spatial frequency as an SI code. To extract each frame from a single-shot image in iFRAME, the SI codes must be separated in Fourier space, which is possible if the light source is pulsed. With continuous-wave (CW) light, the interference fringes change continuously as the mirror of the interferometer rotates; however, the SI code can be changed discretely by chopping the light before detection.
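The fringe geometry described above can be checked numerically. The following NumPy sketch (an illustrative model, not the authors' code; the grid size and tilt values are assumptions) sums two unit-amplitude plane waves whose wavevectors differ by $(\Delta k_x, \Delta k_y)$ and verifies that the resulting intensity pattern appears as a pair of peaks at that offset in Fourier space:

```python
import numpy as np

# Grid of sensor pixels (size chosen for illustration)
N = 256
y, x = np.mgrid[0:N, 0:N]

def fringes(dkx, dky):
    """Intensity of two interfering unit-amplitude plane waves whose
    wavevectors differ by (dkx, dky) rad/pixel: I = 2 + 2*cos(dkx*x + dky*y)."""
    return 2.0 + 2.0 * np.cos(dkx * x + dky * y)

# Horizontal misalignment -> vertical stripes (Fig. 1(a)); larger tilt -> finer stripes (Fig. 1(b))
I_a = fringes(0.2, 0.0)
I_b = fringes(0.6, 0.0)
# Vertical misalignment -> horizontal stripes (Fig. 1(c))
I_c = fringes(0.0, 0.2)

# Fourier transform: the fringes appear as a pair of peaks at +/-(dkx, dky) (Fig. 1(d))
F = np.abs(np.fft.fftshift(np.fft.fft2(I_a - I_a.mean())))
ky_pk, kx_pk = np.unravel_index(np.argmax(F), F.shape)
k_axis = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N))  # rad/pixel
print(abs(k_axis[kx_pk]))  # ~0.2 rad/pixel, the imposed fringe frequency
```

Increasing `dkx` makes the vertical stripes finer, and a nonzero `dky` tilts the fringes, mirroring the progression in Figs. 1(a)-(c).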


Fig. 1. (a)–(c) Interference fringes caused by the displacement of two beams. (d) Fourier transform of the fringes in (a)–(c).


Figure 2(a) shows a schematic of the experimental setup. Herein, a CW, helium-neon laser (GLS5240A, Showa Optronics) was used as a light source and the output was injected into a Michelson interferometer. By vertically rotating one end mirror (M1 in Fig. 2(a)) of the interferometer using a rotation stage (OSMS-60YAW, OptoSigma) with the rotating surface perpendicular to the optical table, the spatial frequency of the interference fringes was changed along an axis in Fourier space, as shown by the bold vertical arrows in Fig. 2(b). Subsequently, the light exiting the interferometer was irradiated onto the DMD (DLP 4500 Light Crafter, Texas Instruments). As a proof-of-principle experiment, we measured an animation played on the DMD. To vary the SI over time, while maintaining the irradiation position on the DMD, we built an imaging optical system with a lens (L$_{1}$ in Fig. 2(a)) that began at each end mirror of the interferometer and ended at the DMD. Here, the distances between the mirror and lens ($d_{1}$ in Fig. 2(a)) and between the lens and DMD ($d_{2}$) were both 150 mm, twice the focal length of L$_{1}$. We loaded multiple-shaped patterns into the DMD and periodically switched between them. Note that the duty cycle of the pattern display is $t_{\mathrm {high}}/T$, where $t_{\mathrm {high}}$ is the time to display one pattern on the DMD, and $T$ is the time between patterns, i.e., the pattern period (Fig. 2(c)). This pattern switching also served to discretize the SI code in Fourier space, as shown by blue, green, and red circles in Fig. 2(b), by converting the CW light into a pseudopulsed light with a pulse width $t_{\mathrm {high}}$ and repetition rate $1/T$. The SI code interval is the pulse interval $T$ (this is the pattern period in this study) multiplied by the varying speed of the SI spatial frequency ($v_{\mathrm {SI}}$) in Fourier space (Fig. 2(b)). 
The light reflected from the DMD was imaged using a monochrome CMOS camera with a pixel resolution of 1080 $\times$ 1440 (CS165MU1/M, Thorlabs). Synchronization of the DMD pattern display, mirror rotation in the interferometer, and camera exposure was controlled by a function generator (AFG1062, Tektronix) that generated a rectangular wave, and a delay generator (DG645, Stanford Research Systems) that used it as an external trigger. The ground-truth images of individual patterns without SI codes were obtained by blocking the optical path of one arm of the interferometer for comparison with the images extracted using the frequency-sensitive 2D spatial lock-in algorithm. As discussed in detail later, when postprocessing the raw image to extract each frame, a low pass filter (LPF) was applied centered on each SI code in Fourier space. The LPF cutoff frequency (denoted as $k_{c}$ in Fig. 2(b)) should preferably be set equal to or less than the SI code interval ($v_{\mathrm {SI}}T$) so as not to be affected by the adjacent frames. Since the spatial resolution of each frame is inversely proportional to the cutoff frequency, the spatiotemporal bandwidth, i.e., the product of the frame interval and the spatial resolution, is constant when $v_{\mathrm {SI}}$ is fixed. The temporal resolution of the videography is equal to the pulse width of the pseudopulsed light, $t_{\mathrm {high}}$.


Fig. 2. (a) Experimental setup; here, $\mathrm {L_{1}}$: $f$= 75 mm, $\mathrm {L_{2}}$: $f$= 75 mm, $\mathrm {L_{3}}$: $f$= 150 mm, BS: 50:50 beam splitter, DMD: digital micromirror device, FG: function generator, DG: delay generator. $d_{1}=d_{2}=150$ mm, $d_{3}=75$ mm, $d_{4}+d_{5}=225$ mm, $d_{6}=150$ mm. (b) SI codes mapped on Fourier space; here, SI: structured illumination. $k_{c}$ is the cutoff frequency of the low pass filter applied in Fourier space. $v_{\mathrm {SI}}$ is the varying speed of SI spatial frequency. $T$ is the pulse interval of a pseudopulsed light, which is the pattern period in this study. (c) Timing chart between camera exposure, mirror rotation, and DMD display.


3. Results and discussions

3.1 Effect of system parameters on SI code distribution and on extracted frames

First, we examined the effect of the duty cycle of the DMD pattern on the frames extracted by postprocessing the raw images. The rotational speed of the mirror in the interferometer was set to $v=11~\mathrm {mrad~s^{-1}}$. A square pattern was loaded onto the DMD. The pattern period was fixed at $T=600~\mathrm {ms}$, whereas the pulse width was varied as $t_{\mathrm {high}}= 100, 200, 300, 400,~\mathrm {and}~500~\mathrm {ms}$, resulting in duty cycles of 0.13, 0.33, 0.50, 0.67, and 0.83, respectively. Figure 3(a) shows the raw image when the camera exposure time ($t_{\mathrm {exp}}$) was 3000 ms and the duty cycle was 0.13. Figure 3(b) shows its Fourier transform in the range $0 \leq |k_{y}| \leq {0.54}~\mathrm {rad~pixel^{-1}}$ and $0 \leq |k_{x}| \leq {0.40}~\mathrm {rad~pixel^{-1}}$. The cross mark in Fig. 3(b) shows the Fourier transform of the square, with four pairs of bright spots in the upper right and lower left derived from the SI codes. The line profile cut along the $k_{y}$ axis at $k_{x} ={0.32}~\mathrm {rad~pixel^{-1}}$ is shown at the top of Fig. 3(c). For all other duty cycles, the line profiles cut at the same location are also shown in Fig. 3(c). The amplitudes of the line profiles were normalized and evenly offset for visibility. The number of bright spots at a duty cycle of 0.13 was one less than at the other duty cycles; we attribute this to a slight discrepancy between the pattern display of the DMD and the exposure timing of the camera. These profiles indicate that the higher the duty cycle, the wider the bright spots in Fourier space. The multiple-peak structures, rather than a top-hat shape, arise because the rotation stage was driven by a stepping motor. As the pattern period was fixed, the period of the multiple-peak structures was constant at ${0.082}~\mathrm {rad~pixel^{-1}}$, regardless of the duty cycle.


Fig. 3. (a) Single-shot image at $T=600~\mathrm {ms}$, duty cycle 0.13, $t_{\mathrm {exp}}=3000~\mathrm {ms}$, $v=11~\mathrm {mrad~s^{-1}}$; the scale bar is 0.5 mm. One pixel corresponds to 1.7 ${\mathrm{\mu} \mathrm{m}}$. (b) Amplitude of the Fourier transform of (a); the scale bar is ${0.20}~\mathrm {rad~pixel^{-1}}$. (c) Line profiles cut along $k_{y}$ axis at $k_{x}={0.32} ~\mathrm {rad~pixel^{-1}}$ with the duty cycle of 0.13, 0.33, 0.50, 0.67, and 0.83. (d) Extracted frames carried by the second SI code from the low-frequency side at each duty cycle (circled in (b) for a duty cycle of 0.13); the LPF cutoff frequency is $k_{c}={0.034} ~\mathrm {rad~ pixel^{-1}}$ (corresponding to the radius of the circle in (b)).


Each frame can be extracted via the following five steps using a frequency-sensitive 2D spatial lock-in algorithm [18,19]. First, to obtain the peak coordinates $(k_{x_{1}},k_{y_{1}})$ of each SI code, a 2D Gaussian function is fitted to the Fourier image. Second, to demodulate the SI-coded image, the raw image is multiplied by $\exp [i(k_{x_{1}}x+k_{y_{1}}y)]$. Third, the demodulated image is Fourier-transformed. Fourth, an LPF is applied to the Fourier image to avoid crosstalk between adjacent frames [19]. Here, the cutoff frequency was set to $k_{c}={0.034}~\mathrm {rad~pixel^{-1}}$, corresponding to the radius of the red circle in Fig. 3(b). Finally, the filtered image is inverse Fourier transformed to obtain a real-space image. Figure 3(d) shows the extracted frames carried by the second SI code from the low-frequency side for all duty cycles. Low spatial beat frequencies remained in the extracted frames, except at the duty cycle of 0.13, because each SI code contained several spatial frequencies, as evident from the line profiles in Fig. 3(c).
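The five extraction steps above can be sketched in NumPy. This is a minimal illustration on a synthetic SI-coded image, not the authors' code: step 1 (peak finding by Gaussian fitting) is assumed done, and the object, code frequencies, and cutoff are hypothetical values.

```python
import numpy as np

def extract_frame(raw, kx1, ky1, k_c):
    """Frequency-sensitive 2D spatial lock-in: demodulate the SI code at
    (kx1, ky1) rad/pixel and low-pass filter with cutoff k_c (steps 2-5)."""
    Ny, Nx = raw.shape
    y, x = np.mgrid[0:Ny, 0:Nx]
    # Step 2: shift the SI-code sideband to the origin of Fourier space
    demod = raw * np.exp(1j * (kx1 * x + ky1 * y))
    # Step 3: Fourier transform the demodulated image
    F = np.fft.fftshift(np.fft.fft2(demod))
    # Step 4: circular low-pass filter of radius k_c to suppress crosstalk
    ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(Ny))
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(Nx))
    KX, KY = np.meshgrid(kx, ky)
    F[KX**2 + KY**2 > k_c**2] = 0.0
    # Step 5: inverse transform back to real space
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F)))

# Synthetic check: a smooth object modulated by a single SI code
Ny, Nx = 128, 128
y, x = np.mgrid[0:Ny, 0:Nx]
obj = np.exp(-((x - 64)**2 + (y - 64)**2) / (2 * 15**2))  # Gaussian "object"
kx1, ky1 = 0.8, 0.5                                       # assumed SI code (rad/pixel)
raw = obj * (1 + np.cos(kx1 * x + ky1 * y))               # SI-coded single-shot image
frame = extract_frame(raw, kx1, ky1, k_c=0.3)
print(np.unravel_index(np.argmax(frame), frame.shape))    # peak recovered near (64, 64)
```

The demodulation brings one sideband of the cosine modulation to baseband, so after the LPF the extracted frame is the object at half amplitude, provided $k_c$ is smaller than the distance to the neighboring codes.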

In iFRAME, the pulse width ($t_{\mathrm {high}}$) and pulse period ($T$, the inverse of the repetition rate) of the pseudopulse correspond to the bandwidth and peak period of the SI codes in Fourier space, respectively. Therefore, the duty cycle must be sufficiently low to assign a specific spatial frequency to each frame. As the optimal duty cycle depends on the mirror rotation speed, discussing the minimum bandwidth of the SI code is more appropriate than discussing the duty cycle itself to ensure that no spatial beat frequency remains in the extracted frame. The minimum bandwidth of the SI code corresponds to the frequency step of the Fourier transform of the raw image. The number of vertical ($y$) pixels of the image sensor is $N_{y}=1440$; consequently, the frequency step along the $k_{y}$ axis in Fourier space is $\Delta k_{y}=2\pi /N_{y} =4.4\times 10^{-3}~\mathrm {rad~pixel^{-1}}$, which is the minimum bandwidth of the SI code in this system. Furthermore, by dividing $\Delta k_{y}$ by the peak period of the SI code, the lower limit of the duty cycle in this experiment was calculated to be 0.054.

The bandwidth of the SI code is determined by the pulse width ($t_{\mathrm {high}}$) multiplied by the varying speed of the spatial frequency ($v_{\mathrm {SI}}$) in Fourier space. In this study, the pulse width was determined by the display time of the DMD because the pseudopulse was created by switching the light on/off using the DMD; in the case of a pulsed laser, it is simply the pulse width of the light. By contrast, the varying speed of the spatial frequency can be controlled by the rotational speed of the mirror and the magnification factor (ratio of $d_{2}$ to $d_{1}$) of the optical system producing the SI on the dynamic object. Note that $v_{\mathrm {SI}}$ is equal to the peak period of the SI codes divided by $T$; i.e., herein, $v_{\mathrm {SI}}={0.14}~\mathrm {rad~pixel^{-1}~s^{-1}}$ when $T= 600~\mathrm {ms}$. The pulse width that minimizes the bandwidth of the SI code at this varying speed was calculated to be $t_{\mathrm {high}}=(\Delta k_{y})/v_{\mathrm {SI}} ={2}\pi /(N_{y} v_{\mathrm {SI}})=31~\mathrm {ms}$. A pulse width shorter than 31 ms will not affect the appearance of the extracted frame but will improve the temporal resolution of the videography. As the repetition rate increases, the number of frames per shot increases; however, the SI code spacing becomes narrower, resulting in a decrease in the spatial resolution of the extracted image [25]. This is a trade-off. The higher the rotational speed of the interferometer mirror, the narrower the bandwidth of the SI code and the wider the spacing of the SI codes. The above discussion indicates that for high-speed videography using iFRAME, a short pulse width and a high rotational speed of the interferometer mirror are preferable.
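The parameter relations above amount to a few lines of arithmetic. The sketch below reproduces the quoted values from the numbers stated in the text; the small difference from the quoted 31 ms arises because the text rounds $v_{\mathrm{SI}}$ to 0.14.

```python
import numpy as np

# Values quoted in the text
N_y = 1440              # vertical sensor pixels
peak_period = 0.082     # SI-code spacing in Fourier space, rad/pixel
T = 0.6                 # pseudopulse interval (pattern period), s

# Minimum SI-code bandwidth = frequency step of the FFT along k_y
dk_y = 2 * np.pi / N_y          # ~4.4e-3 rad/pixel
# Lower limit of the duty cycle
duty_min = dk_y / peak_period   # ~0.054
# Varying speed of the SI spatial frequency, and the pulse width
# that reaches the minimum bandwidth at that speed
v_SI = peak_period / T          # ~0.14 rad/pixel/s
t_high_min = dk_y / v_SI        # ~32 ms (text quotes 31 ms with v_SI rounded to 0.14)
print(dk_y, duty_min, v_SI, t_high_min)
```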

3.2 Imaging of dynamic scenes by iFRAME

Next, to demonstrate the ability of iFRAME to capture dynamic scenes, three different patterns (triangle, square, and rhombus) were loaded into the DMD, and the pattern period and duty cycle were set to $T = 400~\mathrm {ms}$ and 0.25, respectively. Figure 4(a)-1 shows the raw image when the rotation speed of the mirror was $4.4~\mathrm {mrad~s^{-1}}$. Figure 4(a)-2 shows its Fourier transform in the range $0 \leq |k_{y}|\leq {0.76}~\mathrm {rad~pixel^{-1}}$ and $0\leq |k_{x}| \leq {0.58}~\mathrm {rad~pixel^{-1}}$. Three pairs of bright spots were observed in Fourier space (circles in Fig. 4(a)-2). The line profile cut along the $k_{y}$ axis at $k_{x}={0.28}~\mathrm {rad~pixel^{-1}}$ is shown in Fig. 4(b). The left column in Fig. 4(c) presents the images extracted for each SI code using the procedure described in the previous subsection. Here, the LPF cutoff frequency was $k_{c}={0.054}~\mathrm {rad~pixel^{-1}}$ (radius of circles in Fig. 4(a)-2), which was half the peak period of the SI codes.


Fig. 4. (a) 1. Single-shot image at $T=400$ ms, duty cycle 0.25, $t_{\mathrm {exp}}=1000~\mathrm {ms}$, $v=4.4~\mathrm {mrad~s^{-1}}$; the scale bar is 0.5 mm. 2. Amplitude of the Fourier transform; the scale bar is ${0.30}~\mathrm {rad~ pixel^{-1}}$. (b) Line profile cut along $k_{y}$ axis at $k_{x}={0.28}~\mathrm {rad~pixel^{-1}}$ (dashed line in (a)-2). (c) Left column: Extracted frames for all the SI codes. The cutoff frequency is $k_{c}={0.054}~\mathrm {rad~pixel^{-1}}$ (radius of circles in (a)-2, dashed vertical line in (d)). Right column: Ground-truth images of individual patterns loaded onto the DMD. (d) Cutoff frequency dependence of structural similarity (SSIM) between the dashed areas in (c) of the extracted and ground-truth images, where R, T, and S denote rhombus, triangle, and square, respectively.


By comparing with the ground-truth images shown in the right column of Fig. 4(c), we confirmed that iFRAME successfully extracted the images of the three patterns. However, the corners of the figures were rounded because the high-frequency components were cut off by the LPF. The structural similarity (SSIM) between the dashed areas in Fig. 4(c) of the extracted and ground-truth images was computed using the OpenCV-Python package. The SSIM values for the triangle, square, and rhombus were 0.47, 0.48, and 0.52, respectively, at $k_{c}= 0.054~\mathrm {rad~pixel^{-1}}$. We also obtained the cutoff frequency dependence of the SSIM and found that the SSIM for the triangle and square (denoted as T and S in Fig. 4(d)) decreased when $k_{c}$ exceeded $0.062~\mathrm {rad~pixel^{-1}}$ owing to crosstalk between adjacent frames [19]. By contrast, the SSIM for the rhombus (denoted as R) increased until $k_{c}$ reached $0.084~\mathrm {rad~pixel^{-1}}$, after which it decreased. We believe that this can be explained by the characteristics of the Fourier transform of the rhombus. Unlike the Fourier transforms of the triangle and square, that of the rhombus lies along the diagonal of the $k_{x}$-$k_{y}$ plane, off the $k_{y}$ axis, so the SSIM increased as $k_{c}$ increased even though it was influenced by adjacent frames. The peak period of the SI codes must be increased to increase the LPF cutoff frequency. As discussed in the previous section, if the pulse width and repetition rate are fixed, increasing the mirror rotation speed and reducing the magnification factor of the optical system generating the SI can increase the peak period and thus the cutoff frequency.
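The SSIM metric used above can be illustrated with a minimal NumPy implementation of the single-window (global) SSIM formula; library routines such as those in OpenCV or scikit-image average this statistic over local windows, and the test images below are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

def global_ssim(a, b, data_range=1.0):
    """Global (single-window) SSIM between two images; library versions
    compute this statistic over sliding local windows and average."""
    C1 = (0.01 * data_range) ** 2   # standard stabilizing constants
    C2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a**2 + mu_b**2 + C1) * (var_a + var_b + C2))

# Hypothetical stand-ins for the cropped ground-truth and extracted frames
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 1.0                       # ground-truth square
rng = np.random.default_rng(0)
extracted = truth + 0.1 * rng.standard_normal(truth.shape)  # noisy extraction

print(global_ssim(truth, truth))       # identical images -> 1.0
print(global_ssim(truth, extracted))   # degraded copy -> below 1.0
```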

As shown in the line profiles in Figs. 3(c) and 4(b), the peak values of the SI codes decreased as the spatial frequency increased. This is due to the optical transfer function of the lenses. If the system were diffraction-limited, the spatial cutoff frequency is calculated to be ${0.91}~\mathrm {rad~pixel^{-1}}$ by dividing twice the numerical aperture (NA = 0.17) of the lens ($\mathrm {L_{2}}$ in Fig. 2(a), 1 inch diameter, 75 mm focal length) by the wavelength, $\lambda$ = 632.8 nm, and multiplying by the length per pixel, 1.7 $ {\mathrm{\mu} \mathrm{m}~{\mathrm{pixel}}^{-1}}$. We believe that Wiener filtering, which is used in structured illumination microscopy to deconvolve images [26], can increase the contrast of the high spatial frequency components. The strategic placement of SI codes in Fourier space is important for increasing the number of frames per shot. The frequency range can be expanded to two dimensions by rotating both end mirrors of the interferometer in the horizontal and vertical directions, or by replacing one mirror with a two-axis galvanometer mirror.
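The diffraction-limited cutoff quoted above follows directly from the stated lens and camera parameters. A short sketch of the arithmetic, with the NA approximated from the 1-inch aperture and 75 mm focal length:

```python
# Stated lens and camera parameters
NA = 0.17               # ~ (25.4 mm / 2) / 75 mm for the 1-inch, f = 75 mm lens
wavelength = 632.8e-9   # m, He-Ne laser line
pixel = 1.7e-6          # m per camera pixel

# Cutoff per the text: (2*NA / lambda) * (length per pixel)
k_cutoff = 2 * NA / wavelength * pixel
print(round(k_cutoff, 2))  # 0.91, the value quoted in the text
```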

For the experiment in this section, the temporal resolution was 100 ms, the frame rate was 2.5 Hz, and the spatial resolution was 0.19 mm. The temporal resolution and the frame rate were determined by the pulse width (time to display one pattern on the DMD) and the pulse interval (DMD pattern period), respectively. The spatial resolution was calculated by dividing the length per pixel by $k_{c}$ and multiplying by 2$\pi$. In the future, the frame rate and temporal resolution of iFRAME can be improved using a pulsed laser. The optimization of the light source and the rotating mirror speed is important to increase the frame rate. In the current system, the mirror rotates 6.6 mrad (at a mirror rotation speed of 11 $\mathrm {mrad~s^{-1}}$ and a pulse interval of 600 ms) between the generation of one SI and the next. If, for example, a galvanometer mirror rotating 0.17 rad in 42 ${\mathrm{\mu} \mathrm{s}}$ one way at a resonance frequency of 12 kHz is used, the time required to rotate 6.6 mrad is 1.6 ${\mathrm{\mu} \mathrm{s}}$. Thus, if only the rotating mirror in the current optical system is replaced by the galvanometer mirror, the minimum frame interval is 1.6 ${\mathrm{\mu} \mathrm{s}}$ and the maximum frame rate is 0.6 MHz. Therefore, until the repetition rate of the pulsed light reaches 0.6 MHz, the frame rate is equal to the repetition rate; beyond that, the frame rate is limited by the rotational speed of the galvanometer mirror. Speckle-free measurements are possible using superluminescent diodes or broadband pulsed lasers.
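The galvanometer estimate above is simple arithmetic on the stated numbers; the 12 kHz mirror figures are the illustrative values from the text, not a specific device:

```python
# Rotation needed between consecutive SI codes in the current system
v_mirror = 11e-3         # rad/s, stepper-driven rotation stage
T = 0.6                  # s, pulse (pattern) interval
d_theta = v_mirror * T   # 6.6 mrad per frame

# Illustrative resonant galvanometer: 0.17 rad sweep in 42 us (12 kHz resonance)
galvo_speed = 0.17 / 42e-6                 # rad/s
frame_interval = d_theta / galvo_speed     # ~1.6 us
max_frame_rate = 1 / frame_interval        # ~0.6 MHz
print(frame_interval, max_frame_rate)
```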

In conventional FRAME, the pulse energy per frame is reduced to less than $1/N$ owing to light splitting when the number of frames per shot increases to $N$. By contrast, iFRAME uses $1/4$ of the pulse energy for each frame, regardless of the number of frames per shot, because it uses a 50:50 beam splitter in a Michelson interferometer. Therefore, iFRAME can increase the number of frames without sacrificing pulse energy. By optimizing the light source and the mirror rotation, it is possible to capture transient scenes at multiple timescales, from microseconds upward almost continuously, as demonstrated in Ref. [27].

4. Conclusion

In this study, we demonstrated a novel FRAME method without light splitting using a simple optical system based on a Michelson interferometer. By rotating the interferometer mirror, we produced time-varying interference fringes that were used as the SI. iFRAME encodes the time during a transient event into the spatial frequency of the SI. As a proof of principle, we imaged an animation that was loaded and played on the DMD. We confirmed that the frequency-sensitive 2D spatial lock-in algorithm successfully extracted each frame of the animation. To improve the spatial resolution of each frame, the SI code interval has to be increased by increasing the varying speed of the SI spatial frequency. In iFRAME, the arrangement of the SI codes in Fourier space can be freely changed by varying the rotation speed and angle of the interferometer mirrors, the magnification factor of the optical system producing the SI, and the pulse width and repetition rate of the light. This approach eliminates the limitation on the number of frames due to light splitting, which is necessary with conventional FRAME, thereby resulting in a more compact and light-efficient system. Considering the influence of the system parameters on the SI code distribution in Fourier space and on the extracted frames, we found that shorter pulse widths and higher rotation speeds of the interferometer mirror are better for high-speed videography. The optimization of the light source and mirror rotation will enable the tracking of transient scenes in various research and industrial fields at multiple timescales from microseconds.

Funding

Individual Special Research Subsidy of Kwansei Gakuin University; Advanced Integration Science Innovation Education and Research Consortium Program of MEXT.

Acknowledgments

We are grateful to the Individual Special Research Subsidy of Kwansei Gakuin University, and the Advanced Integration Science Innovation Education and Research Consortium Program of MEXT, Japan, for financial support.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available from the corresponding author upon request.

References

1. N. Levichev, M. R. Vetrano, and J. R. Duflou, “Melt flow and cutting front evolution during laser cutting with dynamic beam shaping,” Opt. Lasers Eng. 161, 107333 (2023). [CrossRef]  

2. J. H. Frank, “Advances in imaging of chemically reacting flows,” The J. Chem. Phys. 154(4), 040901 (2021). [CrossRef]  

3. J. Tu and A. C. Yu, “Ultrasound-mediated drug delivery: sonoporation mechanisms, biophysics, and critical factors,” BME Front. 2022, 1 (2022). [CrossRef]  

4. T. G. Etoh, A. Q. Nguyen, Y. Kamakura, K. Shimonomura, T. Y. Le, and N. Mori, “The theoretical highest frame rate of silicon image sensors,” Sensors 17(3), 483 (2017). [CrossRef]  

5. V. T. S. Dao, N. Ngo, A. Q. Nguyen, K. Morimoto, K. Shimonomura, P. Goetschalckx, L. Haspeslagh, P. De Moor, K. Takehara, and T. G. Etoh, “An image signal accumulation multi-collection-gate image sensor operating at 25 Mfps with 32× 32 pixels and 1220 in-pixel frame memory,” Sensors 18(9), 3112 (2018). [CrossRef]  

6. M. Suzuki, Y. Sugama, R. Kuroda, and S. Sugawa, “Over 100 million frames per second 368 frames global shutter burst CMOS image sensor with pixel-wise trench capacitor memory array,” Sensors 20(4), 1086 (2020). [CrossRef]  

7. K. Kagawa, M. Horio, A. N. Pham, T. Ibrahim, S. Okihara, T. Furuhashi, T. Takasawa, K. Yasutomi, S. Kawahito, and H. Nagahara, “A dual-mode 303-megaframes-per-second charge-domain time-compressive computational CMOS image sensor,” Sensors 22(5), 1953 (2022). [CrossRef]  

8. C. T. Chin, C. Lancée, J. Borsboom, F. Mastik, M. E. Frijlink, N. de Jong, M. Versluis, and D. Lohse, “Brandaris 128: A digital 25 million frames per second camera with 128 highly sensitive frames,” Rev. Sci. Instrum. 74(12), 5026–5034 (2003). [CrossRef]  

9. M. Versluis, “High-speed imaging in fluids,” Exp. Fluids 54(2), 1458 (2013). [CrossRef]  

10. J. Liang and L. V. Wang, “Single-shot ultrafast optical imaging,” Optica 5(9), 1113–1127 (2018). [CrossRef]  

11. K. Nakagawa, A. Iwasaki, Y. Oishi, R. Horisaki, A. Tsukamoto, A. Nakamura, K. Hirosawa, H. Liao, T. Ushida, K. Goda, F. Kannari, and I. Sakuma, “Sequentially timed all-optical mapping photography (STAMP),” Nat. Photonics 8(9), 695–700 (2014). [CrossRef]  

12. T. Suzuki, F. Isa, L. Fujii, K. Hirosawa, K. Nakagawa, K. Goda, I. Sakuma, and F. Kannari, “Sequentially timed all-optical mapping photography (STAMP) utilizing spectral filtering,” Opt. Express 23(23), 30512–30522 (2015). [CrossRef]  

13. T. Suzuki, R. Hida, Y. Yamaguchi, K. Nakagawa, T. Saiki, and F. Kannari, “Single-shot 25-frame burst imaging of ultrafast phase transition of Ge2Sb2Te5 with a sub-picosecond resolution,” Appl. Phys. Express 10(9), 092502 (2017). [CrossRef]  

14. L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature 516(7529), 74–77 (2014). [CrossRef]  

15. J. Liang, L. Zhu, and L. V. Wang, “Single-shot real-time femtosecond imaging of temporal focusing,” Light: Sci. Appl. 7(1), 42 (2018). [CrossRef]  

16. X. Liu, J. Liu, C. Jiang, F. Vetrone, and J. Liang, “Single-shot compressed optical-streaking ultra-high-speed photography,” Opt. Lett. 44(6), 1387–1390 (2019). [CrossRef]  

17. D. Qi, S. Zhang, C. Yang, Y. He, F. Cao, J. Yao, P. Ding, L. Gao, T. Jia, J. Liang, Z. Sun, and L. V. Wang, “Single-shot compressed ultrafast photography: a review,” Adv. Photonics 2(1), 014003 (2020). [CrossRef]  

18. A. Ehn, J. Bood, Z. Li, E. Berrocal, M. Aldén, and E. Kristensson, “FRAME: femtosecond videography for atomic and molecular dynamics,” Light: Sci. Appl. 6(9), e17045 (2017). [CrossRef]  

19. K. Dorozynska and E. Kristensson, “Implementation of a multiplexed structured illumination method to achieve snapshot multispectral imaging,” Opt. Express 25(15), 17211–17226 (2017). [CrossRef]  

20. Z. Li, J. Borggren, E. Berrocal, A. Ehn, M. Aldén, M. Richter, and E. Kristensson, “Simultaneous multispectral imaging of flame species using Frequency Recognition Algorithm for Multiple Exposures (FRAME),” Combust. Flame 192, 160–169 (2018). [CrossRef]  

21. E. Kristensson, Z. Li, E. Berrocal, M. Richter, and M. Aldén, “Instantaneous 3D imaging of flame species using coded laser illumination,” Proc. Combust. Inst. 36(3), 4585–4591 (2017). [CrossRef]  

22. K. Dorozynska, V. Kornienko, M. Aldén, and E. Kristensson, “A versatile, low-cost, snapshot multidimensional imaging approach based on structured light,” Opt. Express 28(7), 9572–9586 (2020). [CrossRef]  

23. V. Kornienko, E. Kristensson, A. Ehn, A. Fourriere, and E. Berrocal, “Beyond MHz image recordings using LEDs and the FRAME concept,” Sci. Rep. 10(1), 16650 (2020). [CrossRef]  

24. S. Ek, V. Kornienko, A. Roth, E. Berrocal, and E. Kristensson, “High-speed videography of transparent media using illumination-based multiplexed schlieren,” Sci. Rep. 12(1), 19018 (2022). [CrossRef]  

25. S. Ek, V. Kornienko, and E. Kristensson, “Long sequence single-exposure videography using spatially modulated illumination,” Sci. Rep. 10(1), 18920 (2020). [CrossRef]  

26. M. G. Gustafsson, L. Shao, P. M. Carlton, C. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008). [CrossRef]  

27. V. Kornienko, D. Andersson, M. Stiti, J. Ravelid, S. Ek, A. Ehn, E. Berrocal, and E. Kristensson, “Simultaneous multiple time scale imaging for kHz–MHz high-speed accelerometry,” Photonics Res. 10(7), 1712–1722 (2022). [CrossRef]  

Data availability

Data underlying the results presented in this paper are available from the corresponding author upon request.

Figures (4)

Fig. 1. (a)–(c) Interference fringes caused by the displacement of two beams. (d) Fourier transform of the fringes in (a)–(c).
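The relationship Fig. 1 illustrates — a growing displacement (tilt) between two beams producing finer fringes whose Fourier peak moves away from the DC term — can be reproduced numerically. A minimal NumPy sketch; the tilt angles, grid size, and wavelength below are illustrative assumptions, not the paper's experimental values:

```python
import numpy as np

def fringe_pattern(theta, n=256, wavelength=8.0):
    """Intensity of two interfering plane waves tilted by +/- theta (rad);
    the fringe spatial frequency grows with the tilt (beam displacement)."""
    y = np.arange(n)
    k = 2 * np.pi / wavelength * np.sin(theta)  # fringe wavenumber along y
    row = 0.5 * (1 + np.cos(2 * k * y))         # cosine fringe, unit contrast
    return np.tile(row, (n, 1))

# Larger tilt -> the Fourier peak sits farther from DC, mirroring the
# progression from panel (a) to (c) and the shifted peaks in (d).
for theta in (0.05, 0.10, 0.15):
    row = fringe_pattern(theta)[0]
    spec = np.abs(np.fft.fft(row - row.mean()))
    print(theta, int(np.argmax(spec[: len(spec) // 2])))
```

The printed peak index increases monotonically with the tilt, which is exactly the property FRAME exploits to place each exposure at a distinct location in Fourier space.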
Fig. 2. (a) Experimental setup; here, $\mathrm {L_{1}}$: $f = 75$ mm, $\mathrm {L_{2}}$: $f = 75$ mm, $\mathrm {L_{3}}$: $f = 150$ mm, BS: 50:50 beam splitter, DMD: digital micromirror device, FG: function generator, DG: delay generator. $d_{1}=d_{2}=150$ mm, $d_{3}=75$ mm, $d_{4}+d_{5}=225$ mm, $d_{6}=150$ mm. (b) SI codes mapped onto Fourier space; here, SI: structured illumination. $k_{c}$ is the cutoff frequency of the low-pass filter applied in Fourier space. $v_{\mathrm {SI}}$ is the rate at which the SI spatial frequency varies. $T$ is the pulse interval of the pseudopulsed light, which equals the pattern period in this study. (c) Timing chart for the camera exposure, mirror rotation, and DMD display.
Fig. 3. (a) Single-shot image at $T=600~\mathrm {ms}$, duty cycle 0.13, $t_{\mathrm {exp}}=3000~\mathrm {ms}$, $v=11~\mathrm {mrad~s^{-1}}$; the scale bar is 0.5 mm. One pixel corresponds to 1.7 ${\mathrm{\mu} \mathrm{m}}$. (b) Amplitude of the Fourier transform of (a); the scale bar is ${0.20}~\mathrm {rad~pixel^{-1}}$. (c) Line profiles along the $k_{y}$ axis at $k_{x}={0.32}~\mathrm {rad~pixel^{-1}}$ at duty cycles of 0.13, 0.33, 0.50, 0.67, and 0.83. (d) Frames extracted from the second SI code from the low-frequency side at each duty cycle (circled in (b) for a duty cycle of 0.13); the low-pass-filter cutoff frequency is $k_{c}={0.034} ~\mathrm {rad~ pixel^{-1}}$ (corresponding to the radius of the circle in (b)).
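The frame extraction this caption describes — isolating one SI-code peak in Fourier space and applying a low-pass filter of cutoff $k_{c}$ — follows the standard FRAME demodulation recipe. A minimal sketch on a synthetic single-carrier image; the scene, carrier frequency, and cutoff below are illustrative assumptions, not the experimental values:

```python
import numpy as np

def extract_frame(image, kx, ky, kc):
    """FRAME-style demodulation: shift the SI-code peak at (kx, ky)
    [rad/pixel] to DC, apply a circular low-pass filter of radius kc,
    and inverse-transform to recover the encoded frame."""
    ny, nx = image.shape
    yy, xx = np.mgrid[:ny, :nx]
    demod = image * np.exp(-1j * (kx * xx + ky * yy))  # carrier -> DC
    spec = np.fft.fft2(demod)
    fy = 2 * np.pi * np.fft.fftfreq(ny)                # rad/pixel
    fx = 2 * np.pi * np.fft.fftfreq(nx)
    lpf = (fx[None, :] ** 2 + fy[:, None] ** 2) <= kc ** 2
    return np.abs(np.fft.ifft2(spec * lpf))

# Toy multiplexed image: a Gaussian scene riding on a single fringe carrier.
ny = nx = 128
yy, xx = np.mgrid[:ny, :nx]
scene = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 300.0)
carrier = 0.5 * (1 + np.cos(0.8 * xx + 0.3 * yy))
recovered = extract_frame(scene * carrier, 0.8, 0.3, 0.2)
```

With several carriers at distinct $(k_x, k_y)$, repeating the call per carrier separates the individual exposures, provided the cutoff is small enough that neighboring codes do not leak into the passband.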
Fig. 4. (a) 1. Single-shot image at $T=400$ ms, duty cycle 0.25, $t_{\mathrm {exp}}=1000~\mathrm {ms}$, $v=4.4~\mathrm {mrad~s^{-1}}$; the scale bar is 0.5 mm. 2. Amplitude of the Fourier transform; the scale bar is ${0.30}~\mathrm {rad~ pixel^{-1}}$. (b) Line profile along the $k_{y}$ axis at $k_{x}={0.28}~\mathrm {rad~pixel^{-1}}$ (dashed line in (a)-2). (c) Left column: frames extracted for all the SI codes; the cutoff frequency is $k_{c}={0.054}~\mathrm {rad~pixel^{-1}}$ (radius of the circles in (a)-2, dashed vertical line in (d)). Right column: ground-truth images of the individual patterns loaded onto the DMD. (d) Structural similarity (SSIM) between the dashed areas of the extracted and ground-truth images in (c) as a function of cutoff frequency, where R, T, and S denote rhombus, triangle, and square, respectively.
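Panel (d) scores extraction fidelity with the structural similarity index. As a rough illustration of the metric itself, here is a single-window (global) SSIM in plain NumPy; curves such as those in (d) are normally computed with a sliding-window SSIM, so treat this as a simplified sketch rather than the paper's evaluation code:

```python
import numpy as np

def global_ssim(a, b, data_range=1.0):
    """Single-window SSIM: means, variances, and covariance are taken over
    the whole image instead of a local sliding window (a simplification)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)
    )

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(global_ssim(img, img))                               # identical -> 1.0
print(global_ssim(img, img + 0.2 * rng.random((64, 64))))  # degraded -> < 1
```

Identical images score exactly 1; any mismatch in mean, variance, or covariance pulls the score below 1, which is why SSIM in (d) drops when the cutoff frequency admits crosstalk from neighboring SI codes or excludes scene detail.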