We propose a real-time integral imaging system for light field microscopy. To provide a 3D live in-vivo experimental environment for multiple experimenters, we generate elemental images for an integral imaging system from the light field captured by a light field microscope in real-time. We apply the f-number matching method to elemental image generation in order to reconstruct an undistorted 3D image. Our implemented system produces real and orthoscopic 3D images of micro objects at 16 frames per second. We verify the proposed system via experiments using Caenorhabditis elegans.
© 2014 Optical Society of America
Visualizing a real object in three-dimensional (3D) space has been one of the main issues in the 3D industry [1–15]. It is possible to extract 3D information from objects using a multi-camera system, a time-of-flight camera, a structured light method, or a lens array. Among these, only a few methods actually work in real-time with 3D display systems such as stereoscopy, multi-view, or integral imaging, which is a key technology for 3D broadcasting [3, 6, 11, 15]. Since stereoscopy and multi-view systems provide several view images, their base images can easily be generated with a multi-camera method [3, 18]. However, the multi-camera capturing method requires a large space, delicate alignment between cameras, and a relatively high computational load for post-processing.
For an integral imaging system, a set of elemental images can be obtained with a camera and a lens array, as introduced by Lippmann in 1908 [19]. The lens array capturing method is less bulky and is not constrained by alignment problems [1, 13, 14]. However, if the captured image is used as the set of elemental images without post-processing, the reconstructed 3D image is pseudoscopic [1, 8–14]. Over the past decades, several methods have been proposed to solve the pseudoscopic problem, but most cannot satisfy real-time conditions, cannot provide a real 3D image, or require special optical devices [6, 7]. Recently, a simple pixel mapping algorithm was proposed, which can be used to produce real and orthoscopic 3D images in real-time [9–11].
Until now, however, these 3D visualization studies have been limited to real-scale objects. Extracting 3D information from a micro object differs from the capturing methods explained above for 3D display systems. Various optical microscopes with high resolving power objectives are used to acquire 3D information from a micro object [20–29]. Ordinary optical microscopes provide two-dimensional (2D) orthogonal images with a limited depth of field, so the entire structure of a micro object can only be estimated by moving the stage up and down. Several approaches for acquiring 3D information have been developed over the past decades, including confocal microscopy and near-field scanning optical microscopy [20, 21]. However, most of these procedures are time-consuming and are not appropriate for observing in-vivo micro objects in real-time.
Light field microscopy (LFM) is a type of single-shot microscopy that reconstructs the 3D structure of micro objects using a micro lens array [22–24]. LFM can provide perspective views and focal stacks in real-time by adding a simple micro lens array to a conventional optical microscope. Furthermore, LFM greatly extends the depth of field, permitting researchers to extract 3D volume information of a micro object in one shot. However, the resolution of the directional view images obtained by LFM is limited by the number of lenses in the micro lens array. A number of studies have been proposed to improve the image quality of LFM by lens shifting technology, light field illumination, 3D deconvolution, or fluorescence scanning methods. Until now, however, studies on LFM have mainly dealt with 3D reconstruction in virtual space rather than in real space.
Since LFM has major advantages in one-shot imaging and real-time calculation, it would be natural to organize a real-time visualization system or 3D interactive system around LFM. However, to the best of our knowledge, a real-time 3D display system for LFM has not been developed or even discussed. There is a structural symmetry between the LFM system and integral imaging: both use a lens array to acquire and visualize 3D information [12, 22, 27]. Some studies have already applied integral imaging principles to LFM [25, 28], and by using this symmetry between LFM and integral imaging, a micro object can be optically reconstructed in 3D.
In this paper, we propose a real-time integral imaging system for light field microscopy using the f-number matching method. A preliminary approach with a real-time algorithm was introduced by our group [11, 29]. However, the image quality was not sufficient to permit the 3D shape of a micro object to be examined, because of the f-number mismatch between the pickup micro lens array and the display lens array. Furthermore, although the pixel mapping process was done in real-time, the rectification process required by an alignment problem was time-consuming. As an extension of our previous work, we now present a real-time integral imaging system for LFM. Our proposed system offers a 3D in-vivo experimental environment in real-time, so that the experimenter can obtain immediate feedback on the micro specimen and share the 3D images displayed by integral imaging with multiple experimenters and an audience in real-time for educational purposes. We performed simulations and prepared a demonstration with a conventional LFM and an integral imaging system. A feasibility test was also done with the living organism Caenorhabditis elegans (C. elegans), which is often used to analyze the connection between animal behavior and the nervous system [30, 31].
In Section 2, the real-time elemental image generation method with f-number matching is introduced, together with an image simulation. The optical design and experimental setup are then introduced in Section 3. Experimental results for the proposed system with C. elegans are shown in images and videos in Section 4. Finally, the paper ends with the conclusion in Section 5.
2. Real-time elemental image generation from captured light field with f-number matching
2.1 Light field microscopy and integral imaging
As mentioned above, it is possible to reconstruct a 3D image using an integral imaging system with the light field captured by LFM. Figure 1 shows a schematic diagram of our proposed method. The LFM system is composed of an objective lens and a micro lens array located at the image plane of the objective, as shown in Fig. 1(a). The light field cone from one point of the micro object at the focal plane is recorded at the sensor located behind one lens of the micro lens array, while the light field from a point not located at the focal plane is imaged onto the pixels behind a number of lenses. Each pixel of each lens contains information on the light field with a different direction, which is illustrated by the colors in Fig. 1(a). The aperture of the light field cone is determined by the numerical aperture (NA) of the objective rather than that of the micro lens array. Since it is easier to build one objective lens with high resolving power than thousands of lenses in a micro lens array, LFM takes advantage of the high resolving power of the objective lens.
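The directional sampling described above can be made concrete with a short sketch: each micro lens covers a block of sensor pixels, and picking the pixel at the same offset under every lens yields one perspective view. The following NumPy snippet is illustrative only; the function name and the synthetic 31-pixel lens pitch are our assumptions, chosen to match the system described later.

```python
import numpy as np

def perspective_view(light_field, pitch, u, v):
    """Extract one directional view from a raw light field image.

    light_field: 2-D array whose pixels are grouped into pitch x pitch
    blocks, one block per micro lens.
    (u, v): pixel offset under each lens, i.e. the viewing direction.
    """
    # One pixel per micro lens: sample position (u, v) under every lens.
    return light_field[u::pitch, v::pitch]

# Example: a synthetic light field with 4 x 4 lenses, 31 pixels per lens.
lf = np.arange(124 * 124).reshape(124, 124)
view = perspective_view(lf, pitch=31, u=15, v=15)  # central view, 4 x 4 pixels
```

Sweeping (u, v) over the lens footprint produces the array of perspective views; refocused (focal stack) images follow from shifting and summing these views.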
Figure 1(b) shows a 3D reconstruction of an enlarged micro object obtained with an integral imaging system. The integral imaging system consists of a flat display panel and a lens array, as shown in Fig. 1(b). To reconstruct a 3D image with integral imaging, an elemental image should be generated from the captured light field. In this study, we applied the real-time pixel mapping algorithm proposed by Jung et al. in 2013 to solve the pseudoscopic problem. By locating the captured pixels at the proper positions of an elemental image, a real and orthoscopic 3D image can be obtained, as shown in Fig. 1(b). The observer can also instantly adjust the depth plane of the reconstructed 3D image by changing the parameters of the elemental image generation algorithm [9–11].
Since the pitch of the display lens array is usually larger than that of the micro lens array in LFM, the reconstructed 3D image is magnified not only by the magnification of the objective but also by the lens pitch difference. With the assumption that the number of sensor pixels is equal to the number of display pixels, the lateral magnification factor Mxy is derived by multiplying the lens pitch ratio and the objective magnification as follows:

$$M_{xy} = \frac{P_L}{P_{ML}} M_{obj},$$

where $P_L$ and $P_{ML}$ are the pitches of the display lens array and the micro lens array, respectively, and $M_{obj}$ is the magnification of the objective.
On the other hand, the axial magnification factor Mz is determined by the lateral magnification factor and the angular resolution. Since the maximum angle of the light field cone is determined by the NA of the objective lens in LFM, the NA of the lenses in the display lens array should be equal to that of the objective lens in order to reconstruct the correct depth information. Here, Mz is derived as follows [32]:

$$M_z = M_{xy}\,\frac{\tan\left(\sin^{-1}\mathrm{NA}_{obj}\right)}{\tan\left(\sin^{-1}\mathrm{NA}_{L}\right)},$$

where $\mathrm{NA}_{obj}$ and $\mathrm{NA}_{L}$ are the numerical apertures of the objective and of a lens in the display lens array, respectively; when the two apertures match, $M_z = M_{xy}$ and the reconstruction is free of depth distortion.
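As a numerical check of the two magnification factors, the sketch below evaluates them with the values quoted for the implemented system (40×/0.65 NA objective, 125 μm micro lenses, 1 mm display lenses with a 3.3 mm focal length). The small-angle model for the axial factor is our own simplification of the scaling analysis in [32], not the authors' exact expression.

```python
import math

# Parameters of the implemented system (restated here as inputs), in micrometers.
M_obj = 40        # objective magnification (40x / 0.65 NA)
NA_obj = 0.65     # numerical aperture of the objective
p_micro = 125     # micro lens array pitch
p_display = 1000  # display lens array pitch
f_display = 3300  # display lens focal length

# Lateral factor: lens-pitch ratio times objective magnification.
M_xy = (p_display / p_micro) * M_obj  # 8 * 40 = 320

# Axial factor under a simple geometric model (our assumption): lateral
# scaling times the ratio of object-side to display-side marginal ray slopes.
tan_obj = math.tan(math.asin(NA_obj))   # object-side marginal ray slope
tan_disp = p_display / (2 * f_display)  # display lens marginal ray slope
M_z = M_xy * tan_obj / tan_disp
# If the display lens NA matched NA_obj, the ratio would be 1 and M_z = M_xy,
# i.e. the reconstruction would scale isotropically.
```

Here the display lens is far slower than the objective, so without f-number matching the depth axis would be stretched relative to the lateral axes.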
2.2 Real-time elemental image generation method with f-number matching
To reconstruct a 3D image of a micro object without distortion, careful consideration of the f-number is required. The f-number of a lens (N) is defined as follows:

$$N = \frac{f}{D},$$

where $f$ is the focal length and $D$ is the diameter of the lens aperture (for a lens array, the lens pitch).
Figure 2 shows an example of the light field of C. elegans captured by the LFM system. We used a 40×/0.65 NA objective, a Fresnel Tech. micro lens array with a 125 μm pitch and a 2.5 mm focal length, an Olympus BX53T optical microscope, and an AVT Prosilica GX2300C charge-coupled device (CCD) to build the LFM system. In Fig. 2, the red lines indicate the micro lens array borders, the yellow circles show the circular aperture of the objective, and the sky blue rectangles indicate the region that can be expressed with the typical 1 mm lens array with a 3.3 mm focal length used in integral imaging. Detailed specifications for the implemented system are listed in Table 1.

Due to the mismatch between the image-side f-number of the objective and the f-number of the micro lens array, the outer region of the sensor behind each lens cannot receive a light field signal [22, 33], and the circular aperture stop inside the objective lens forms an array of image circles. Moreover, the expressible region is only a small part of the captured light field because of another f-number mismatch, between the objective and the display lens array, as shown in Fig. 2. Fortunately, the resolution of the CCD is usually much greater than that of the display device, so the captured light field information is sufficient to generate the elemental image. The resolution of the captured image for a single lens is 31 × 31, while the display pixel pitch is 125 μm and the pitch of the display lens array is 1 mm. The resolution of a single elemental image is therefore 8 × 8, and the set of elemental images is generated by undersampling. Consequently, the resolution of the reconstructed 3D image can be improved by cropping wasted regions, such as the black regions due to the circular aperture, before the undersampling process. Nevertheless, the captured light field should be stored for full-resolution post-processing regardless of the elemental image generation method used.
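The f-number bookkeeping in this paragraph can be summarized in a few lines of Python. The numbers are taken from the text and Table 1; the comparison itself is only a sketch of the two mismatches being discussed, and the variable names are ours.

```python
# Pickup and display side f-numbers for the implemented system (lengths in mm).
M_obj, NA_obj = 40, 0.65
f_micro, p_micro = 2.5, 0.125    # micro lens array: focal length, pitch
f_display, p_display = 3.3, 1.0  # display lens array: focal length, pitch

N_obj_image_side = M_obj / (2 * NA_obj)  # ~30.8: image-side f-number of objective
N_micro = f_micro / p_micro              # 20.0: f-number of the micro lenses
N_display = f_display / p_display        # 3.3: f-number of the display lenses

# Mismatch 1: the objective is "slower" than the micro lenses, so each lens
# image is a circle smaller than the lens pitch (black corners on the sensor).
# Mismatch 2: the display lens array is much faster, so only a central crop
# of each circle is expressible on the display.
pixels_per_lens = 31                              # sensor pixels under one micro lens
display_pixels_per_lens = int(p_display / 0.125)  # 8: display pixels per display lens
```

The 31 × 31 capture behind each lens is thus undersampled to an 8 × 8 elemental image, which is why cropping the inexpressible regions before undersampling recovers resolution.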
To generate an accurate elemental image from the captured light field, only the sky blue regions in Fig. 2 should be used; otherwise the reconstructed 3D image is distorted in depth. Therefore, the sky blue regions are cropped first. Figure 3 shows the principle of the elemental image generation process with one part of the captured light field. Figure 3(b) shows the image rearranged from the cropped regions. The pixel mapping algorithm is then applied to the rearranged image to produce a real and orthoscopic 3D image without the pseudoscopic problem. As mentioned above, the depth plane can be adjusted by changing the parameter k in the pixel mapping algorithm [9–11].
In this study, we set the parameter k to zero, which is the simplest way to solve the pseudoscopic problem: rotating each elemental image by 180 degrees. This method was introduced earlier by Okano et al. in conjunction with a real-time display [1]. However, that algorithm provides only virtual orthoscopic images with the conventional integral imaging pickup system, because the pickup system can capture 3D objects only behind the lens array [1, 8]. In the LFM system, by contrast, the micro lens array captures the light field relayed by the objective lens, and the experimenter can easily adjust the relayed focal plane by moving the stage up and down. Therefore, setting the algorithm parameter k to zero is best suited to the LFM system, because the depth planes need not be adjusted by post-processing. Orthoscopic 3D images are obtained as both virtual and real images by rotating each elemental image [11, 29]. Of course, one can apply other values of the parameter k in other cases (e.g., fitting the expressible depth range of the display system), but we conclude that the rotation method is optimal for the LFM system.
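A minimal sketch of the k = 0 pixel mapping, assuming the rearranged image is already grouped into pitch × pitch blocks (one block per display lens); the function name and block layout are our assumptions.

```python
import numpy as np

def elemental_image_k0(rearranged, pitch):
    """Pixel mapping with k = 0: rotate each lens sub-image by 180 degrees.

    rearranged: cropped-and-rearranged light field whose pixels are grouped
    into pitch x pitch blocks, one block per display lens.
    """
    h, w = rearranged.shape
    out = np.empty_like(rearranged)
    for i in range(0, h, pitch):
        for j in range(0, w, pitch):
            block = rearranged[i:i + pitch, j:j + pitch]
            out[i:i + pitch, j:j + pitch] = block[::-1, ::-1]  # 180-degree rotation
    return out

# Example with 2 x 2 lenses of 8 x 8 pixels each (the display-side resolution).
img = np.arange(16 * 16).reshape(16, 16)
ei = elemental_image_k0(img, pitch=8)
```

Because the mapping is a per-block rotation, applying it twice restores the input, and it is trivially parallel per block, which is what makes real-time (and GPU) operation straightforward.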
Figure 4 shows the ray-tracing simulation results used to verify the proposed elemental image generation method with f-number matching. In the simulation, the experimental specifications listed in Table 1 are assumed. Three micro objects, 'S', 'N', and 'U', are located 25 μm below, at, and 25 μm above the focal plane, respectively. All objects are 150 μm in size and centered laterally, and a yellow incoherent light source is used. Figure 4(a) shows the light field of the three micro objects captured by LFM. As expected, the captured light field is composed of circular images caused by the objective aperture. The disparity between neighboring lenses is also visible in Fig. 4(a), so the captured light field contains both horizontal and vertical parallax. Figure 4(b) shows the elemental image generated by the pixel mapping algorithm without image cropping. As mentioned above, the elemental image is generated by undersampling, so the elemental image generated without cropping wastes the limited resolution on useless information such as black regions. With this elemental image, black seams are observed and only limited information is available to the observer, as reported in our previous work. Figure 4(c) shows the rearranged image obtained by cropping the image regions that can be expressed by the display lens array. The outer region of each lens is removed, but the disparity remains: the images of neighboring lenses contain different light field information, as shown in Fig. 4(c), and these differences produce parallax in the reconstructed 3D images. With the pixel mapping algorithm, the elemental image is generated as shown in Fig. 4(d); the image in each lens is rotated by 180 degrees, as expected.
The processing time for generating an elemental image from one captured light field image is about 0.06 seconds on a PC (Intel i7 processor with an NVIDIA GTX 470 graphics card). The implemented system can thus provide about 16 frames per second (FPS) in real-time at a resolution of 2336 × 1752. This is slightly slower than previous applications of the pixel mapping algorithm, owing to the additional cropping process, but still satisfies real-time conditions [9–11]. The pixel mapping algorithm was implemented with OpenCV without any GPU processing, so the processing time and frame rate could be further improved by GPU processing.
3. Real-time integral imaging system for light field microscopy
Figure 5 shows the implementation of our proposed real-time integral imaging system for LFM. An incoherent light source is located at the bottom, transmitted through the micro object, and imaged by the micro lens array. In practice, a relay lens (Canon EF 100 mm f/2.8 Macro USM) is used to image the light field from the micro lens array onto the CCD sensor, as shown in Fig. 5. The captured light field information is transmitted to the PC at a frame rate of 32 FPS. Hence, only half of the captured images are used for elemental image generation, because the implemented pixel mapping algorithm provides only about 16 FPS. For integral imaging, a high-resolution liquid crystal display (IBM 22 inch, 3840 × 2400) and a 1 mm lens array with a 3.3 mm focal length are used, as listed in Table 1.
For real-time operation, the alignment of the optical devices is the most important issue; otherwise, image rectification is needed, which usually requires much more time than the pixel mapping algorithm. In the proposed system, an optical jig was manufactured to calibrate the optical elements, as shown in Fig. 5. The tilt angle of the micro lens array is aligned with the display, and the lens borders and resolution are manually inserted into the elemental image generation code as initial conditions. Once calibrated, the implemented system is robust to external vibrations during an experiment.
4. Experimental results
With the implemented system, we performed a real-time integral imaging experiment with LFM. We first verified our LFM system with a moving micro object. Figure 6(a) shows a captured light field image of C. elegans. The captured image is composed of circular light field images, as expected. Perspective views are extracted from the captured light field image, as shown in Fig. 6(b). By recording the captured images as a video, perspective view videos can be obtained. Figure 6(c) shows synchronized perspective view videos extracted from the recorded light field images (see Media 1). These results are in agreement with previous studies on LFM and show that our proposed system is valid [22–24].
With the captured light field images, we then performed an integral imaging experiment. Figure 7(a) shows the perspective views of the 3D images reconstructed from the generated elemental images. As shown in Fig. 7(a), the developed system provides an orthoscopic 3D image in real-time (see Media 2). Using this real-time characteristic of the proposed system, real-time 3D experiments can be performed. Figure 7(b) shows a conceptual demonstration of the proposed 3D experiment: the experimenter observes a micro object in 3D and in real-time, and instant feedback with the microscope is possible (see Media 3). Due to the multiple viewpoints of integral imaging, multiple experimenters can share the microscopic experiment. These experimental results further validate our proposed real-time system.
5. Conclusion
In this study, we proposed a real-time integral imaging system for use with an LFM system. We generated elemental images for an integral imaging system from the light field captured by LFM in real-time. We applied the f-number matching method to elemental image generation to reconstruct an undistorted 3D image. The implemented system is capable of providing real and orthoscopic 3D images of micro objects at 16 FPS. We verified the proposed system with experiments using C. elegans. The system could be used for microscopic experiments shared by multiple experimenters and observers.
This research was supported by 'The Cross-Ministry Giga KOREA Project' of The Ministry of Science, ICT and Future Planning, Korea [GK13D0200, Development of Super Multi-View (SMV) Display Providing Real-Time Interaction]. We wish to thank Professor Junho Lee (Department of Biological Sciences, Seoul National University) for the generous donation of the C. elegans samples used in this study.
References and links
1. F. Okano, J. Arai, H. Hoshino, and I. Yuyama, “Three-dimensional video system based on integral photography,” Opt. Eng. 38(6), 1072–1077 (1999). [CrossRef]
2. B. Javidi, S. Yeom, I. Moon, and M. Daneshpanah, “Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events,” Opt. Express 14(9), 3806–3829 (2006). [CrossRef] [PubMed]
3. W. J. Matusik and H. Pfister, “3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes,” ACM Trans. Graph. 23(3), 814–824 (2004). [CrossRef]
5. G. Li, K.-C. Kwon, K.-H. Yoo, S.-G. Gil, and N. Kim, “Real-time display for real-existing three-dimensional objects with computer-generated integral imaging,” in Proceeding of International Meeting on Information Display (IMID), Daegu, Korea, Aug. 2012 (Society for Information Display and Korean Society for Information Display, 2012), pp. 471–472.
6. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37(11), 2034–2045 (1998). [CrossRef] [PubMed]
7. J. Arai, T. Yamashita, M. Miura, H. Hiura, N. Okaichi, F. Okano, and R. Funatsu, “Integral three-dimensional image capture equipment with closely positioned lens array and image sensor,” Opt. Lett. 38(12), 2044–2046 (2013). [CrossRef] [PubMed]
8. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Formation of real, orthoscopic integral images by smart pixel mapping,” Opt. Express 13(23), 9175–9180 (2005). [CrossRef] [PubMed]
10. J. Kim, J.-H. Jung, and B. Lee, “Real-time pickup and display integral imaging system without pseudoscopic problem,” Proc. SPIE 8643, 864303 (2013). [CrossRef]
12. B. Lee, “Three-dimensional displays, past and present,” Phys. Today 66(4), 36–41 (2013). [CrossRef]
14. J. Hong, Y. Kim, H.-J. Choi, J. Hahn, J.-H. Park, H. Kim, S.-W. Min, N. Chen, and B. Lee, “Three-dimensional display technologies of recent interest: principles, status, and issues [Invited],” Appl. Opt. 50(34), H87–H115 (2011). [CrossRef] [PubMed]
15. M. Kawakita, K. Iizuka, H. Nakamura, I. Mizuno, T. Kurita, T. Aida, Y. Yamanouchi, H. Mitsumine, T. Fukaya, H. Kikuchi, and F. Sato, “High-definition real-time depth-mapping TV camera: HDTV axi-vision camera,” Opt. Express 12(12), 2781–2794 (2004). [CrossRef] [PubMed]
16. E.-H. Kim, J. Hahn, H. Kim, and B. Lee, “Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection,” Opt. Express 17(10), 7818–7830 (2009). [CrossRef] [PubMed]
17. J.-H. Jung, K. Hong, G. Park, I. Chung, J.-H. Park, and B. Lee, “Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging,” Opt. Express 18(25), 26373–26387 (2010). [CrossRef] [PubMed]
18. J.-H. Jung, J. Yeom, J. Hong, K. Hong, S. W. Min, and B. Lee, “Effect of fundamental depth resolution and cardboard effect to perceived depth resolution on multi-view display,” Opt. Express 19(21), 20468–20482 (2011). [CrossRef] [PubMed]
19. G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).
20. P. Török and F. J. Kao, eds., Optical Imaging and Microscopy: Techniques and Advanced Systems (Springer, 2003).
22. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]
24. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef] [PubMed]
29. B. Lee and J. Kim, “Real-time 3D capturing-visualization conversion for light field microscopy,” Proc. SPIE 8769, 876908 (2013). [CrossRef]
30. A. Fire, S. Xu, M. K. Montgomery, S. A. Kostas, S. E. Driver, and C. C. Mello, “Potent and specific genetic interference by double-stranded RNA in Caenorhabditis elegans,” Nature 391(6669), 806–811 (1998). [CrossRef] [PubMed]
31. H. Lee, M. K. Choi, D. Lee, H. S. Kim, H. Hwang, H. Kim, S. Park, Y. K. Paik, and J. Lee, “Nictation, a dispersal behavior of the nematode Caenorhabditis elegans, is regulated by IL2 neurons,” Nat. Neurosci. 15(1), 107–112 (2011). [CrossRef] [PubMed]
32. J.-H. Park, H. Choi, Y. Kim, J. Kim, and B. Lee, “Scaling of three-dimensional integral imaging,” Jpn. J. Appl. Phys. 44(1A), 216–224 (2005). [CrossRef]
33. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Rep. CTSR 2005–02 (Stanford University, 2005).
34. C. Jang, J. Kim, J. Yeom, and B. Lee, “Analysis of color separation reduction through the gap control method in integral imaging,” J. Inf. Disp. 15(2) (to be published).