Artificial compound eyes are typically designed on planar substrates due to the limits of current imaging devices and available manufacturing processes. In this study, a high-precision, low-cost, three-layer 3D artificial compound eye consisting of a 3D microlens array, a freeform lens array, and a field lens array was constructed to mimic an apposition compound eye on a curved substrate. The freeform microlens array was manufactured on a curved substrate to alter incident light beams and steer their respective images onto a flat image plane. The optical design was performed using ZEMAX. The optical simulation shows that the artificial compound eye can form multiple images with aberrations below 11 μm, which is adequate for many imaging applications. Both the freeform lens array and the field lens array were manufactured using a microinjection molding process to reduce cost. Aluminum mold inserts were diamond machined by the slow tool servo method. The performance of the compound eye was tested using a home-built optical setup. The captured images demonstrate that the proposed structures can successfully steer images from a curved surface onto a planar photoreceptor. Experimental results show that the compound eye in this research has a field of view of 87°. In addition, images formed by multiple channels were found to be evenly distributed on the flat photoreceptor. Additionally, overlapping views of adjacent channels allow higher-resolution images to be reconstructed from multiple 3D images taken simultaneously.
© 2013 Optical Society of America
Compound eyes found in nature consist of optical channels, or ommatidia, arranged on a curved focal plane. Each ommatidium includes a set of microlenses and several photoreceptor cells at the focal plane associated with the microlenses. The assembly of ommatidia forms a natural 3D microlens array with optical axes pointing in different directions. This natural microlens array has many advantages, such as a large field of view (FOV), fast signal processing speed, and extremely compact size.
Considerable efforts have been made to develop various types of artificial compound eyes to mimic ones found in nature. The majority of these structures were designed and manufactured on planar substrates due to the limitations of 2D machining methods. Typically, an artificial 2D compound eye consists of a 2D microlens array layer and a photoreceptor layer [1–3]. However, limited by a planar substrate, the FOV of a 2D apposition compound eye is much smaller than that of a naturally occurring compound eye on a curved substrate. Although layered lens arrays can be precisely stacked and aligned, such as the Gabor superlens [4] and the microoptical telescope compound eye [5], these structures are still based on 2D planar substrates and their FOV cannot match that of a naturally occurring compound eye.
In one design, a concave lens is combined with a 2D gradient lens array to enlarge the FOV [6]. Optical lenses have also been set up with an automatic mechanical arm, forming a scanning retina [7]. With recent developments in ultraprecision machining technology, 3D microlens arrays have been successfully developed. Specifically, Jeong et al. have created an innovative 3D compound eye using a self-aligned waveguide method [8]. A 3D microlens array alone cannot form an image on a planar photoreceptor. Therefore, optical relay devices are necessary to steer the images of a 3D microlens array from a curved surface onto a planar photoreceptor. Optical fiber bundles can sometimes be used as relay devices. However, assembly of the optical fibers involves complex processes at relatively high cost. Thus, systems using fiber bundles are either bulky or very difficult to fabricate [9–11].
Based on 3D micromachining methods [12–16], an artificial compound eye using a 3D prism array was developed in a previous study. This design was used successfully to create a microsensor with a large FOV. However, not all aberrations were corrected in that design. In a subsequent optical design, a freeform prism array was fabricated and the imaging quality was improved.
In this study, an artificial 3D compound eye on a curved substrate was developed. This design includes a 3D microlens array and an optical relay device. The optical relay device includes a freeform lens array layer and a field lens array layer. Each optical component in the system was designed using ZEMAX. The freeform lens array and field lens array were designed to form images on a planar surface. All three optical lens array layers were fabricated by a microinjection molding process, and the mold inserts were directly machined by the slow tool servo (STS) method. The performance of the artificial compound eye sensor was simulated using ZEMAX. The complete system was tested on a home-built setup using different targets to show that the 3D compound eye was fully functional.
2. Design and simulation
In a previous study, a 3D microlens array was designed to mimic an insect’s compound eye found in nature. There are 1,219 lenslets arranged at the bottom of a spherical substrate, providing a wide FOV while maintaining a high fill factor and compact size. The radius of curvature of a single lenslet was 3.80828 mm, its aperture diameter was 0.5 mm, and its back focal length was 5.8 mm. Similar to an insect’s compound eye, the imaging planes of the lenslets were arranged on a curve instead of a planar substrate, as shown in Fig. 1.
However, limited by current planar photoreceptors, a reliable optical relay device is needed to steer incident beams and form images on a planar CCD or CMOS surface. The goals of this optical design are as follows:
- 1) To steer light rays and form images on a planar surface.
- 2) To correct aberrations for improved imaging quality.
- 3) To control the image sizes of adjacent channels to prevent crosstalk at the imaging plane.
2.1. Design of a three-layer compound eye system
Accurate beam steering can be accomplished using techniques such as waveguides, optical fibers, and prisms. Considering cost-effective manufacturing, prisms are currently preferred as beam steering devices to transfer images onto planar surfaces. Aberrations can be corrected by adding a series of lenses, similar to commercially available cameras. However, this usually leads to expensive, complex, and bulky optical systems. The use of freeform optics provides a better option for reducing aberrations while maintaining a compact and simple design.
Additionally, a layer of field lenses is necessary to control the size of the images and to re-focus the incident beams. The proposed design strategy is shown in Fig. 2(a), in which two layers of lenses are precisely aligned beneath a tilted microlens, forming a three-layer optical system. This system includes a tilted microlens, a freeform lens, and a field lens. Both the freeform lens and the field lens are arranged on curved substrates, affording them prism-like properties for accurate beam steering, as shown in Figs. 2(b) and 2(c). In principle, the freeform lens will reduce aberrations caused by beam steering and the field lens will control the size of the image and adjust the focal length of each channel.
Beam steering is accomplished by the prism-like freeform lens and the field lens. As illustrated in Fig. 2(b), ray r0 exits from a tilted microlens and intersects the freeform surface S1 at point O1. The coordinates of point O1 can be solved from the equations of ray r0 and surface S1. The normal direction n1 of surface z = S1(x, y) at point O1 is:
Because ray r0 is incident on surface S1 and ray r1 is the corresponding refracted ray with respect to surface S1, the vectors r0, r1, and n1 lie in the same plane. Thus,
The incident angle θ1 and the refracted angle θ1′ obey Snell’s law:
Similarly, for surface z = S2(x, y) the following relations exist:
The expression for the exit ray r2 can be obtained from Eqs. (5)–(8); this ray is steered by surface S2. Therefore, the incident ray r0 is steered to ray r2, with an angle difference Δθ1, by the freeform lens between S1 and S2. Δθ1 is given as:
A similar derivation can also be applied to the field lens between surfaces S3 and S4, as seen in Fig. 2(c). Ray r2 is further steered to ray r4, with an angle difference Δθ2, given as:
The total steered angle from ray r0 to the exit ray r4 is then Δθ = Δθ1 + Δθ2.
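The two-surface steering above can be sketched numerically with the vector form of Snell's law. The sketch below is illustrative only (function names, the example geometry, and the PMMA index n ≈ 1.49 are assumptions, not values from the paper):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n,
    going from refractive index n1 to n2 (vector Snell's law)."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    mu = n1 / n2
    cos_i = -np.dot(n, d)
    if cos_i < 0.0:              # flip the normal to face the incoming ray
        n, cos_i = -n, -cos_i
    sin_t2 = mu * mu * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        raise ValueError("total internal reflection")
    cos_t = np.sqrt(1.0 - sin_t2)
    return mu * d + (mu * cos_i - cos_t) * n

def steering_angle_deg(d_in, d_out):
    """Angle between the incident and exit ray directions (degrees)."""
    c = np.dot(d_in, d_out) / (np.linalg.norm(d_in) * np.linalg.norm(d_out))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Example: refract into the lens at S1, back out at a tilted S2; the
# tilt of n2 produces the prism-like deflection Δθ1 of the freeform lens.
n_pmma = 1.49
r0 = np.array([0.2, 0.0, -1.0])                  # incident ray
n1 = np.array([0.0, 0.0, 1.0])                   # normal of S1 at O1
n2 = np.array([np.sin(0.2), 0.0, np.cos(0.2)])   # tilted normal of S2
r1 = refract(r0, n1, 1.0, n_pmma)                # ray inside the lens
r2 = refract(r1, n2, n_pmma, 1.0)                # exit ray
print(round(steering_angle_deg(r0, r2), 3))      # total deflection Δθ1
```

Applying the same two refractions at S3 and S4 gives Δθ2, and the total deflection is the sum of the two.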
The cross section of the proposed 3D compound eye is shown in Fig. 3. This structure consists of a 3D microlens array, a freeform lens array, and a field lens array. The 3D microlens array is formed by 21 circles of tilted microlenses arranged on the bottom surface of the spherical substrate. Due to the compact size of the system and machining limitations, only a limited number of circles of lenslets were selected as working lenslets in the current system. These lenslets are marked in blue in Fig. 3. The working lenslets form five imaging channels, named channel 1 to channel 5, as listed in Table 1. To prevent crosstalk caused by adjacent microlenses, an aperture array is necessary to block the extra lenslets, as seen in Fig. 3. The field lens and freeform lens were specially designed and optimized using ZEMAX software for each imaging channel.
As discussed previously, the field lens array is distributed on top of a spherical substrate whose radius of curvature is 8.7 mm. The top surface of the field lens is a standard conic aspherical surface expressed as
The freeform lens array is distributed on top of a spherical substrate whose radius of curvature is 9 mm. An extended polynomial surface was used to create the freeform surface. The sag equation of this surface can be divided into two portions: the conventional conic asphere and an extended polynomial deviation. This equation is given as:
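As a sketch of the two surface definitions above, the conic aspherical sag and the ZEMAX-style extended polynomial sag (conic base plus polynomial deviation terms) can be written as follows; the coefficient values in the example are hypothetical, not the optimized parameters from this design:

```python
import math

def conic_sag(x, y, c, k):
    """Sag of a standard conic aspherical surface with curvature c
    (1/radius) and conic constant k:
    z = c*r^2 / (1 + sqrt(1 - (1+k)*c^2*r^2))."""
    r2 = x * x + y * y
    return c * r2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * c * c * r2))

def extended_polynomial_sag(x, y, c, k, coeffs):
    """Conic base sag plus an extended polynomial deviation
    sum(A_ij * x^i * y^j); coeffs maps (i, j) -> A_ij."""
    z = conic_sag(x, y, c, k)
    for (i, j), a in coeffs.items():
        z += a * x ** i * y ** j
    return z

# Sanity check: with k = 0 the conic sag reduces to a sphere; with
# R = 9 mm (the freeform substrate radius) it matches R - sqrt(R^2 - r^2).
print(conic_sag(0.5, 0.0, 1.0 / 9.0, 0.0))
```

The freeform character of each channel comes entirely from the polynomial terms, which break the axis-symmetry of the conic base.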
For each channel, the 3D microlens has a different normal angle with respect to the Z-axis and a different distance to the expected image plane. Thus, the optimization was performed separately to identify the parameters of the freeform and field lenses for each channel. The generated freeform surface array is shown in Figs. 4(a) and 4(b). Except for channel 1, the freeform surface of each channel is not axis-symmetric, as seen in Fig. 4(c).
2.2. Performance simulation using ZEMAX
Once the surface parameters were identified, the optical performance of the artificial compound eye was simulated using ZEMAX software. First, beam steering capabilities were evaluated using a parallel incident beam whose field angle was (0°, 0°). The corresponding focal point of each beam is the center of each image. For channel 4, the focal point of the incident beam is P when the other two layers of lenses are not used. When the proposed freeform and field lenses are used, the focal point is steered to P1, as illustrated in Fig. 5. The coordinates of the original focal point, P, and the steered focal point, P1, for all five channels were compared in the world coordinate system, as listed in Table 2. It can be seen that all the z-coordinates of focal point P1 are 0, indicating that these points have been steered from a spherical surface to a planar surface. Considering the y-coordinates of P, the distances between adjacent P along the y-direction vary widely due to the curved distribution. However, the distances between adjacent P1 along the y-direction are all approximately 1.2 mm. Therefore, the steered focal points of all the channels are evenly distributed on the new plane. As for aberrations, the root mean square (RMS) radius at each P1 is less than 11 μm, which is sufficient for imaging. Hence, the (0°, 0°) incident beam of each channel was successfully imaged by the optimized optical structure.
Spot diagrams were also generated to study imaging quality. The FOV of an individual channel is approximately 14°. In the ZEMAX simulation, five different field angles were selected: (0°, 0°), (0°, −7°), (0°, 7°), (−7°, 0°), and (7°, 0°). The corresponding focal spots were generated, as seen in Fig. 6. The F, D, and C visible wavelengths (486.1327, 587.5618, and 656.2725 nm) were used to simulate white light illumination. For channel 4, the positions and RMS radii of the corresponding spots are listed in Table 3.
It can be seen that the RMS radius of every spot is controlled below 11 μm. Within the given field of view, all focal spots are within the boundary of a 1 mm × 1 mm rectangular region whose center is at the focal spot of the (0°, 0°) field angle, as shown in Fig. 6(b). Using the same approach, the image sizes of all the channels were controlled within a 1 mm × 1 mm rectangular boundary. Because the average distance between the image centers of adjacent channels is approximately 1.2 mm, adjacent channels have no crosstalk on the imaging plane, indicating that the image sizes are well controlled for the given field angles.
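The crosstalk condition above is easy to check numerically: adjacent 1 mm × 1 mm image regions cannot overlap as long as their centers are at least 1 mm apart along the image plane. A minimal sketch, with center positions chosen only to illustrate the ~1.2 mm pitch reported above:

```python
def crosstalk_free(centers_mm, image_size_mm=1.0):
    """True if square image regions of the given side length, centered
    at the given positions along one axis, do not overlap."""
    ys = sorted(centers_mm)
    return all(b - a >= image_size_mm for a, b in zip(ys, ys[1:]))

# Five channel image centers at a ~1.2 mm pitch leave a 0.2 mm guard band.
print(crosstalk_free([-2.4, -1.2, 0.0, 1.2, 2.4]))  # True
```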
The modulation transfer function (MTF) was studied for quantitative comparison among all five channels. For each channel, the MTF of the (0°, 0°) incident beam was generated in two orthogonal orientations (sagittal and tangential). The F, D, and C visible wavelengths were chosen to simulate white light illumination. The obtained sagittal MTF for each of the five channels can be seen in Fig. 7(a), and the respective tangential MTFs are shown in Fig. 7(b). The sagittal and tangential MTF curves are identical for channel 1 because of its axis-symmetry. Conversely, the tangential MTF curves become increasingly sinuous and lower than the corresponding sagittal curves from channel 2 to channel 5, due to the increasing asymmetry. Based on this simulation, the imaging performance will vary among the five channels due to their differences in symmetry.
3. Manufacturing process
As discussed previously, the microinjection molding method is preferred for mass production and was used in this study. Since the STS method is capable of machining complex 3D mold inserts of optical quality, the mold inserts in this study were machined in this manner. The lens manufacturing was performed in two steps. First, the mold inserts were machined on a Moore Nanotech 350 FG (Freeform Generator) ultraprecision machining center. The finished mold inserts for the freeform lens array and the field lens array are displayed in Fig. 8. Fiducial marks (straight lines) were machined on the surface of the mold inserts for alignment during lens assembly.
Injection molding was performed using a Sodick microinjection molding machine (Plustech Inc.). Polymethylmethacrylate (PMMA, Plexiglas V825-100, GE Polymerland) was used as the working material. The injection molding process was performed in four steps. First, the PMMA pellets were dried in an oven for 12 hours at 80 °C. Second, the PMMA pellets were heated to 245 °C, above the glass transition temperature (Tg) of PMMA. Third, in the packing stage, a higher packing pressure is preferred to reduce volume shrinkage. However, excessively high packing pressure may lead to expansion and increased warpage of the workpiece. Therefore, the packing pressure was set to 90 MPa, with a packing time of 5 s. Last, during cooling, the cooling gradient should be controlled carefully to prevent accumulated stresses in the PMMA. Based on previous experience, the cooling time was set to 20 s and the mold was heated to 80 °C. The finished 3D lenses are shown in Fig. 9.
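For reference, the four-step process parameters reported above can be collected in one place (all values as stated in this section):

```python
# PMMA microinjection molding parameters used in this study.
PMMA_MOLDING = {
    "drying":  {"temperature_C": 80, "time_h": 12},
    "melt_temperature_C": 245,          # above the glass transition Tg
    "packing": {"pressure_MPa": 90, "time_s": 5},
    "cooling": {"time_s": 20, "mold_temperature_C": 80},
}
```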
4. Results and discussions
The optical performance of the artificial compound eye was tested using a home-built setup, as seen in Fig. 10. The compound eye sensor is fixed on a stage with five degrees of freedom. A zoom lens (VZM 450i, Edmund Optics) was employed to observe the images from the sensor. The magnification of the zoom lens ranges from 0.75× to 4.5×. A CCD camera (PL-B957F, PixeLINK) was used to collect the images. First, the channels of the compound eye were individually evaluated. A USAF 1951 test target was used for the MTF test. The images formed by each channel are illustrated in Fig. 11.
The MTF was obtained by reading the contrast of each line pair in the above images. The sagittal MTF curves are shown in Fig. 12(a), and the tangential MTF curves are shown in Fig. 12(b). The deviation between the measured MTF and the simulated MTF may be caused by many factors, such as illumination, aberrations, the resolution of the printed targets, and the performance of the CCD camera. The five sagittal MTF curves showed minimal variation below 50 lp/mm. As the spatial frequency increases, the sagittal MTF curve of channel 5 falls below those of the other four channels. The tangential MTF curves of channels 1, 2, and 3 are higher than those of channels 4 and 5, and the tangential MTF curve of channel 4 is slightly higher than that of channel 5. This shows that the imaging quality of the artificial compound eye deteriorates as the steering angle becomes larger, mainly due to increased chromatic aberration.
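The contrast-reading step can be sketched as follows: for each line-pair group on the target, the modulation is (Imax − Imin)/(Imax + Imin), optionally normalized by the target's own contrast. This is a minimal illustration of the principle, not the authors' exact processing:

```python
import numpy as np

def mtf_from_profile(profile, target_contrast=1.0):
    """Modulation at one spatial frequency, read from an intensity
    profile taken across a group of line pairs on the USAF 1951 target:
    M = (Imax - Imin) / (Imax + Imin), divided by the target contrast."""
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    modulation = (i_max - i_min) / (i_max + i_min)
    return modulation / target_contrast

# Example: a nearly black/white line-pair profile (8-bit gray levels).
print(round(mtf_from_profile([10, 200, 12, 198, 10, 200]), 3))  # 0.905
```

Repeating this for groups of increasing spatial frequency traces out the measured MTF curve.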
The spatial imaging capability of the compound eye was evaluated using multiple targets spanning a large angle. The targets were arranged as shown in Fig. 13(a); from left to right: a bird, a cup with the OSU logo, and a flowerpot. The targets were placed in front of the compound eye with channel 1 directly facing the flower. For comparison, the images from the 3D microlens array without the freeform and field lens arrays are shown in Fig. 13(b). The captured image from the compound eye is illustrated in Fig. 13(c).
In Fig. 13(b), it is seen that images from the lenslets blur from the center to edge. In addition, only the flower is captured, while the cup and the bird are not imaged. This implies that the imaging of the 3D microlens array is seriously restricted by the planar CCD. As seen in Fig. 13(c), all the targets are clearly captured by the imaging channels. Figure 13(d) shows a close-up view where the imaging channels are marked with numbers. The flower is imaged by channel C1 and C2_2. The cup with OSU logo is captured by C2_2 and C3_2. The bird is imaged by C4_2 and C5_2. In C5_2, the eyes and eyebrows of the bird are clearly visible. This demonstrates that the freeform and field lens arrays successfully steer images of the 3D microlens array from a curved surface to a planar surface. The distance from the bird to the center of the flower, d1, is 300 mm. The distance from the compound eye to the flower is 320 mm. The viewing angle for these targets is 43.1524° according to
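The viewing angle quoted above follows from simple trigonometry, θ = arctan(d1/d2):

```python
import math

d1 = 300.0  # mm, distance from the bird to the center of the flower
d2 = 320.0  # mm, distance from the compound eye to the flower
theta = math.degrees(math.atan(d1 / d2))
print(round(theta, 4))  # ≈ 43.15, matching the 43.1524° quoted above
```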
Thus, half of the FOV (only one side of the compound eye was used in Fig. 13) is approximately 43° to 44°, and the overall FOV is approximately 87°. The adjacent channels share common regions in the FOV, as seen in Fig. 13(d). The flower in C1 is also seen in C2_1, C2_2, and C2_3. The cup is captured by C2_2, C3_1, C3_2, and C3_3. The bird appears in six channels, from C4_1 to C5_3. The common FOV regions provide possibilities for further image processing and super-resolution reconstruction.
5. Image reconstruction demonstration
Based on the previous experiments and discussion, it is clear that adjacent imaging channels share common views around their boundaries. This makes image processing and improvement possible. A simple demonstration of image mosaicing was performed using a grid target, as seen in Fig. 14(a). An OSU logo was used to locate the common regions of adjacent channels. This figure was displayed on a 14-inch laptop screen, with channel 1 facing the letter U. The images captured by the artificial compound eye are shown in Fig. 14(b), where the grids are imaged by multiple channels. Afterwards, the image of each channel was extracted, and image mosaicing was performed by matching the corner points in the common FOV of the adjacent imaging channels. The reconstructed image is displayed in Fig. 14(c). In this image, the entire grid figure was restored from the images of the channels without any blind regions. Future efforts will be focused on developing imaging applications for the compound eye. Geometrical distortion and vignetting will be corrected using a self-written MATLAB program.
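A minimal version of this corner-matching mosaic step might look as follows (NumPy only; the function names are illustrative, the matched corner points are assumed to be already extracted, and the second image is assumed to sit down/right of the first):

```python
import numpy as np

def estimate_offset(pts_a, pts_b):
    """Translation mapping channel-B coordinates onto channel-A
    coordinates, averaged over matched corner points in the shared FOV."""
    return np.mean(np.asarray(pts_a, float) - np.asarray(pts_b, float), axis=0)

def mosaic(img_a, img_b, offset):
    """Paste img_b into a canvas holding img_a, shifted by the rounded
    (row, col) offset (assumed nonnegative); overlap pixels come from img_b."""
    dy, dx = (int(round(o)) for o in offset)
    h = max(img_a.shape[0], img_b.shape[0] + dy)
    w = max(img_a.shape[1], img_b.shape[1] + dx)
    canvas = np.zeros((h, w), dtype=img_a.dtype)
    canvas[:img_a.shape[0], :img_a.shape[1]] = img_a
    canvas[dy:dy + img_b.shape[0], dx:dx + img_b.shape[1]] = img_b
    return canvas

# Two 2x3 tiles sharing one grid corner; the estimated offset stitches them.
a = np.ones((2, 3), dtype=np.uint8)
b = np.full((2, 3), 2, dtype=np.uint8)
off = estimate_offset([(0.0, 2.0)], [(0.0, 0.0)])  # B's origin at A's (0, 2)
print(mosaic(a, b, off).shape)  # (2, 5)
```

In practice a real pipeline would also blend the overlap region and correct the distortion and vignetting mentioned above before stitching.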
6. Conclusions

In this study, a three-layer 3D artificial compound eye consisting of a 3D microlens array, a freeform lens array, and a field lens array was successfully developed. The lens array layers were precisely stacked and aligned, forming a functional 3D optical image sensor.
The 3D microlens array has a large field of view, and was used to collect incident beams from different directions within the FOV. An aperture array was machined to prevent crosstalk between the neighboring lenslets. Five groups of 3D microlenses on five concentric circles were selected as functional imaging channels by the aperture array. The freeform and field lens arrays were designed to steer images of the selected 3D microlenses from the curved substrate onto a planar surface. The freeform lens array was designed for aberration correction, and the field lens array was designed to adjust the focal lengths of the channels and control their image sizes.
The freeform and field lenses were specially designed and optimized for each channel using ZEMAX software. The ZEMAX simulation demonstrated that images from all five channels are successfully distributed on a planar surface without crosstalk. The RMS radii were controlled below 11 μm, which is sufficient for imaging.
All three layers of lens arrays were manufactured using a microinjection molding process to reduce cost. The mold inserts were directly machined by the STS diamond turning method. This study demonstrates that the combination of ultraprecision diamond machining and microinjection molding is an effective approach for fabricating optical-quality 3D microoptics on curved substrates. Compared with other 3D micromachining methods, the method proposed in this paper has the advantages of low cost and high efficiency, making it a preferred process for industrial-scale production.
The performance of the 3D compound eye was tested using a home-built optical setup with two different types of targets: the USAF 1951 target and the stereo targets. The optical testing shows that the microlens-based 3D compound eye is capable of forming high-quality images on the planar CCD photoreceptor. Experiments show that images from all five channels were evenly separated on the planar substrate. The tests further verify that the FOV of the compound eye is as large as 87°. Additionally, adjacent channels were demonstrated to have overlapping FOVs, which can be utilized for image reconstruction and improvement.
This study was partially the result of collaborative efforts between the Ohio State University and Fraunhofer IOF under the Fraunhofer program “ProfX2.” This study was also based upon work supported by the National Science Foundation under Grant Numbers CMMI-0928521 and EEC-0914790. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. The authors would also like to acknowledge Professional Instruments in Hopkins, MN for their continuous support.
References and links
2. S. Ogata, J. Ishida, and T. Sasano, “Optical sensor array in an artificial compound eye,” Opt. Eng. 33(11), 3649–3655 (1994). [CrossRef]
3. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40(11), 1806–1813 (2001). [CrossRef] [PubMed]
4. K. Stollberg, A. Brückner, J. Duparré, P. Dannberg, A. Bräuer, and A. Tünnermann, “The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects,” Opt. Express 17(18), 15747–15759 (2009). [CrossRef] [PubMed]
5. J. Duparré, P. Schreiber, A. Matthes, E. Pshenay-Severin, A. Bräuer, A. Tünnermann, R. Völkel, M. Eisner, and T. Scharf, “Microoptical telescope compound eye,” Opt. Express 13(3), 889–903 (2005). [CrossRef] [PubMed]
6. K. Hamanaka and H. Koshi, “An artificial compound eye using a microlens array and its application to scale-invariant processing,” Opt. Rev. 3(4), 264–268 (1996). [CrossRef]
7. K. Hoshino, F. Mura, and I. Shimoyama, “Design and performance of a micro-sized biomorphic compound eye with a scanning retina,” J. Microelectromech. Syst. 9(1), 32–37 (2000). [CrossRef]
9. R. Krishnasamy, W. Wong, E. Shen, S. Pepic, R. Hornsey, and P. Thomas, “High precision target tracking with a compound-eye image sensor,” Canadian Conference on Electrical and Computer Engineering 2004, San Jose, California, USA (2004). [CrossRef]
10. W. C. Sweatt and D. D. Gill, “Microoptical compound lens,” United States Patent, Patent No.: 7,286,295 B1 (2007).
11. F. M. Reininger, “Fiber coupled artificial compound eye,” United States Patent, Patent No.: 7,376,314 B2 (2008).
14. L. Li and A. Y. Yi, “Microfabrication on a curved surface using 3D microlens array projection,” J. Micromech. Microeng. 19(10), 105010 (2009). [CrossRef]
15. H. Zhang, L. Li, D. L. McCray, D. Yao, and A. Y. Yi, “A microlens array on curved substrates by 3D micro projection and reflow process,” Sens. Actuators A Phys. 179, 242–250 (2012). [CrossRef]
16. S. Scheiding, A. Y. Yi, A. Gebhardt, L. Li, S. Risse, R. Eberhardt, and A. Tünnermann, “Freeform manufacturing of a microoptical lens array on a steep curved substrate by use of a voice coil fast tool servo,” Opt. Express 19(24), 23938–23951 (2011). [CrossRef] [PubMed]
17. J. R. Meyer, “Photoreceptors” (General Entomology). http://www.cals.ncsu.edu/course/ent425/tutorial/photo.html