Imaging with random 3D reference structures

Abstract

We describe a sensor system based on 3D ‘reference structures’ which implements a mapping from a 3D source volume onto a 2D sensor plane. The reference structure used here is a random three-dimensional distribution of polystyrene beads. We show how this bead structure spatially segments the source volume and present simple experimental results of 2D and 3D imaging.

©2003 Optical Society of America

1. Introduction

Most optical imaging systems implement an isomorphic, one-to-one mapping between the source space and the measurement space. Optical system design has therefore focused primarily on implementing better isomorphisms in order to improve resolution, depth of field and field of view. However, with recent advances in digital processing and focal planes, the attraction of isomorphic imaging has begun to fade. A new class of imaging systems that integrates optical and electronic processing has emerged. These systems are referred to as “integrated computational imaging systems” because they use nonconventional optical elements to preprocess the field for digital analysis [1, 2]. Computational imaging systems have become increasingly popular because they can implement multidimensional or multispectral imaging [3, 4, 5]. Other potential advantages include increased depth of field [6, 7], improved computational efficiency, and improved target recognition or tracking capabilities.

This paper describes a novel computational system based on reference structure tomography (RST) for scan-free three-dimensional imaging. Conventional tomographic systems reconstruct a three-dimensional source by capturing several projected images of the source. These systems compensate for the loss in dimensionality by temporally scanning the source: confocal microscopy [9] scans a zero-dimensional focal point, a fan-beam tomography system scans one-dimensional (1D) projections, and a cone-beam system scans 2D projections. These scans are integrated to construct 3D source models. We propose reference structure tomography, which enables instantaneous spatial sampling and scan-free estimation of the 3D source.

The most immediate physically realized precursor to reference structure tomography is coded aperture imaging. Coded aperture imaging systems [10, 11, 12, 13] use a 2D mask to modulate projections from the source points onto a detector array. Because the mask has negligible thickness, the projection of the object is transversely shift invariant. By designing the mask carefully, a correlation function with sharp peaks can be achieved. Coded aperture systems have been used to obtain depth information: sources farther from the detector cast smaller aperture shadows than closer sources, so by correlating the recorded image with decoding patterns of different sizes, images of the source distribution at different depths can be retrieved [14]. However, tomographic imaging with coded apertures involves deconvolution algorithms that suffer from defocus artifacts. These artifacts can be avoided by using algebraic inversion with 3D reference structures.

Reference structure tomography uses reference modulations in the space between the sources and the sensors to encode the propagating energy radiated or scattered by the sources. These modulations condition the measurements so that the 3D source distribution can be estimated. The reference modulations are produced by a reference structure: a multidimensional distribution of obscurants structured both transversely and longitudinally with respect to the primary direction of propagation. Rather than modulating the wavefront of the sensed field, the reference structure modulates the visibility of the source space [8].

Reference structures generalize the physical interface of coded apertures by using 3D rather than 2D modulation. Because of their three-dimensional nature, reference structures implement a more general class of transformations: the transformations they implement are shift variant, in contrast to the shift-invariant transformations implemented by coded aperture systems [14].

This paper aims simply to demonstrate the feasibility of scan-free three-dimensional imaging using reference structures. The reference structure we consider is a random distribution of polystyrene beads, fabricated by stacking several layers of beads one on top of the other. In Section 2, we show how a bead structure spatially segments the source space and how this segmentation can be used as an imaging system. The experimental details and some reconstruction examples are presented in Sections 3 and 4, respectively.

2. Spatial segmentation using random bead structures

Figure 1 shows an example of the spatial segmentation achieved by an arbitrary arrangement of obscurants placed in front of the detectors. Because of the obscurants, the visibility of each detector is segmented into several cones. The reference structure thus segments the source space into regions with distinct signatures. The signatures are determined by the reference structure geometry together with the set of measurement points. If the structure segments the source space into N distinct signature regions, then N sources can be estimated. The obscurants are chosen large enough that diffraction can be ignored, and the system is treated under geometrical optics.

Fig. 1. Spatial segmentation with random 3D bead structures

The beads considered here are strongly scattering, so each bead acts as an obscurant. Thus a particular source element is either visible or not visible to a particular measurement element. For imaging applications, every source point must be visible to at least one sensor. For each pair of source and measurement points at $r$ and $r_m$ respectively, we associate a visibility function $v(r_m, r)$. The visibility function in this case is binary valued, depending on whether the source at $r$ is visible to the measurement point at $r_m$.

The measurement space is an M-dimensional vector space of sensor measurements. The visibility of the source point at $r$ from the ith sensor at measurement point $r_i$ is $v_i(r) = v(r_i, r)$. The ith measurement is

$$m_i = \int v_i(r)\, s(r)\, dr \tag{1}$$

where the source state $s(r)$ is the density function over the embedding source space. For every point $r$ in the source space, we define a binary signature vector $\xi(r) \in \{0,1\}^M$, whose ith element is the visibility $v_i(r)$ of that point from the ith measurement element.
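As an illustration of this geometric model (our sketch, not code from the paper; the sensor layout, bead positions, and bead radius below are invented for the example), the signature of a source point can be computed by testing whether the line of sight to each sensor is blocked by any bead:

```python
import numpy as np

def segment_hits_sphere(p0, p1, center, radius):
    """True if the segment from p0 to p1 passes within `radius` of `center`."""
    d = p1 - p0
    t = np.clip(np.dot(center - p0, d) / np.dot(d, d), 0.0, 1.0)
    closest = p0 + t * d                     # closest point on the segment to the bead center
    return np.linalg.norm(center - closest) <= radius

def signature(source, sensors, bead_centers, bead_radius):
    """Binary signature xi(r): element i is 1 if the source point is visible to sensor i."""
    xi = np.ones(len(sensors), dtype=int)
    for i, rm in enumerate(sensors):
        for c in bead_centers:
            if segment_hits_sphere(source, rm, c, bead_radius):
                xi[i] = 0                    # line of sight blocked by a bead
                break
    return xi

# Invented geometry: 16 sensors on the z = 0 plane, 200 random beads in a slab in front of them.
rng = np.random.default_rng(0)
gx, gy = np.meshgrid(np.linspace(-1, 1, 4), np.linspace(-1, 1, 4))
sensors = np.stack([gx, gy, np.zeros_like(gx)], axis=-1).reshape(-1, 3)
beads = rng.uniform([-1.0, -1.0, 1.0], [1.0, 1.0, 2.0], size=(200, 3))
print(signature(np.array([0.2, -0.3, 5.0]), sensors, beads, bead_radius=0.08))
```

Nearby source points that are occluded differently produce different binary vectors, which is the signature diversity the reference structure is designed to provide.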

Regions of the source space that share the same signature are designated as “cells”. The source space is thus partitioned into cells such that all points within a cell have the same signature. Let $\xi_j$ denote the signature of the jth cell. In discrete form, the measurement in Eq. (1) can be expressed as

$$m_i = \sum_j \xi_{ij}\, s_j \tag{2}$$

where $\xi_{ij}$ is the ith element of the signature $\xi_j$ and $s_j$ is the mean source density over the jth cell. In matrix form, we express the measurement process as

$$m = \xi s \tag{3}$$

Fig. 2. Fabricated reference structure used for the experiments

This equation is the mathematical basis for the imaging system. In the next sections, we demonstrate how the reference structure is experimentally characterized and how the three-dimensional source s is reconstructed from the measurements m on a 2D focal plane using Eq. (3).
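As a minimal numerical sketch of this forward model (with a made-up random binary ξ standing in for a measured signature matrix), each sensor simply sums the source cells it can see:

```python
import numpy as np

rng = np.random.default_rng(1)
M = N = 64                                      # sensors and source cells (square for later inversion)
xi = (rng.random((M, N)) < 0.5).astype(float)   # stand-in binary signature matrix
s = np.zeros(N)
s[[5, 20, 41]] = 1.0                            # sparse test source: three occupied cells
m = xi @ s                                      # Eq. (2)/(3): each sensor sums the cells visible to it
```

Recovering s from m is the inversion problem addressed in Section 4.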

3. Experiments

A reference structure is fabricated by stacking several layers of polystyrene beads one on top of the other. The beads were mixed into an optical-quality adhesive and then UV cured to form the structure. The transmission through the beads is very small because of the refractive index difference between the epoxy and the beads, so the beads act as scatterers, making the reference structure opaque along rays that strike a bead. Since the beads were mixed into the adhesive randomly, the resulting structure is a random 3D spatial distribution of polystyrene beads. Figure 2 shows one such structure.

An imaging system based on these bead structures is shown in Fig. 3. The structure is placed between the source space and the measurement space. The measurements are made with a Panasonic video camera with 480×640 pixels and a pixel size of 8.4 µm × 9.8 µm. The detector size is not critical to the source resolution.

The source volume is divided into voxels of size 200 µm × 200 µm × 400 µm. The CCD is placed on the other side of the structure and images the light passing through it. A fiber light source of diameter 62.5 µm is centered at each voxel in the source volume in turn, and a separate image is recorded for each voxel. Figure 4 shows two such images for source voxels separated by 200 µm along the lateral dimension. Figure 5 shows the difference between these two images, highlighting the two distinct signatures produced by the two sources on the pixel array. From the images, it is clear that each illuminated voxel produces a distinct signature. Because of the three-dimensional nature of the structure, distinct signature patterns are observed even along the depth dimension.

Fig. 3. An imaging system based on random 3D bead structures

Fig. 4. Images obtained when a fiber light source is placed in front of the structure at two positions in the source space separated by 200 µm along the lateral dimension

Fig. 5. Difference between the two images shown in Fig. 4

A source volume can be reconstructed using these distinct signatures. The procedure is as follows. Under the geometric RST assumptions, the reference structure implements the linear transformation of Eq. (3). Since the reference structure is a random spatial distribution of beads, we must characterize it to determine the transformation it implements. To do so, we cycle the fiber source through each of the voxels and record the signature pattern obtained on the sensor array. These patterns define the transformation ξ.

The fiber source is moved across each of these voxels using a Newport Auto Align system, which can be programmed to move its stages in the X, Y, and Z directions with an accuracy of 100 nm, so the entire characterization runs automatically. This characterization is equivalent to finding the point spread function of the imaging system. The transformation ξ is then used to numerically reconstruct unknown sources; some examples are presented in the next section.
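In outline, the characterization loop has the following structure. This is only a sketch of the procedure described above: move_stage_to and capture_frame are hypothetical placeholders for the Newport stage and camera interfaces, and the voxel pitches follow the text.

```python
import numpy as np

def characterize(voxel_grid, move_stage_to, capture_frame):
    """Build xi one column per source voxel from the recorded signature frames."""
    columns = []
    for x, y, z in voxel_grid:
        move_stage_to(x, y, z)               # position the fiber source at this voxel
        frame = capture_frame()              # record the 480x640 CCD image
        columns.append(np.asarray(frame, dtype=float).ravel())
    return np.stack(columns, axis=1)         # xi: (number of pixels) x (number of voxels)

# Illustrative voxel grid: 200 um lateral pitch, 400 um depth pitch.
xs = np.arange(10) * 200e-6
zs = np.arange(10) * 400e-6
voxel_grid = [(x, y, z) for z in zs for y in xs for x in xs]
```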

4. Reconstruction examples

First, a 2D source is reconstructed using this structure. The source consists of several point sources arranged in the shape of the letter D. The reconstruction procedure is as follows. The source plane is divided into 40×40 resolution elements on an X–Y grid. A fiber light source (of diameter 62.5 µm) is placed at each of the source elements and the corresponding image is recorded. Each such image forms one column of the transformation matrix ξ.

Since the reconstruction involves the inversion of ξ, it is preferable for ξ to be a square matrix. The inversion is performed using an iterative least squares algorithm with the constraint that the source is non-negative. A square matrix is not required by this algorithm, but it is preferred for ease of computation and representation.
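A minimal sketch of such a non-negatively constrained inversion is shown below, using SciPy's nnls routine as a stand-in for the paper's iterative algorithm (whose details are not specified); the matrix is a random placeholder, smaller than the 1600×1600 ξ used in the experiment.

```python
import numpy as np
from scipy.optimize import nnls

def reconstruct(xi, m):
    """Estimate the source by least squares subject to the constraint s >= 0."""
    s_hat, _residual = nnls(xi, m)
    return s_hat

# Synthetic check with a random binary stand-in for xi.
rng = np.random.default_rng(2)
xi = (rng.random((400, 400)) < 0.5).astype(float)
s_true = np.zeros(400)
s_true[[10, 123, 321]] = 1.0
s_hat = reconstruct(xi, xi @ s_true)
print(np.max(np.abs(s_hat - s_true)))        # near zero for noiseless, full-rank data
```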

Thus each 480×640 image is resized to a 40×40 image using a bilinear transformation. The resized image is then reshaped into a single 1600×1 column vector and placed as one column of ξ. Repeating this process for each source element yields a transformation ξ of size 1600×1600.
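A sketch of this resizing step (using scipy.ndimage.zoom with order=1 as one possible bilinear resampler; the paper does not name the routine it used):

```python
import numpy as np
from scipy.ndimage import zoom

def image_to_column(frame):
    """Bilinearly resize a 480x640 frame to 40x40 and flatten it into a length-1600 vector."""
    small = zoom(np.asarray(frame, dtype=float), (40 / 480, 40 / 640), order=1)  # order=1: bilinear
    return small.reshape(1600)

def build_xi(frames):
    """Stack one column per source element; 1600 frames give a 1600x1600 matrix."""
    return np.stack([image_to_column(f) for f in frames], axis=1)
```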

Fig. 6. Reconstructed source displayed as a 40×40 image

This transformation ξ characterizes the reference structure, and an unknown source can be reconstructed by multiplying the measurements by the inverse of ξ. This requires ξ to be invertible, i.e., of full rank. Our experiments verified this: the rank of the measured ξ was numerically found to be 1600. This ξ is now used to reconstruct the point sources arranged as the letter D.

When the letter D is placed in the source plane, the recorded image is a linear combination of the images obtained when a point source illuminated each of the constituent source elements. This image is compressed to 40×40 pixels and multiplied by ξ⁻¹. The reconstruction, reshaped as a 40×40 image, is shown in Fig. 6.
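For concreteness, the final step might look like the sketch below; the frame and ξ here are random placeholders standing in for the recorded image of the letter D and the calibrated, full-rank matrix.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(3)
xi = (rng.random((1600, 1600)) < 0.5).astype(float)           # placeholder for the calibrated xi
frame = rng.random((480, 640))                                 # placeholder for the recorded image
m = zoom(frame, (40 / 480, 40 / 640), order=1).reshape(1600)   # compress to 40x40 and flatten
recon = np.linalg.solve(xi, m).reshape(40, 40)                 # apply xi^-1 and display as a 40x40 image
```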

A similar procedure is used to reconstruct a source volume. The source space is divided into 10×10×10 voxels with 200 µm lateral resolution and 400 µm depth resolution. The reference structure is characterized by moving the fiber light source through each of the voxels. The transformation matrix is then used to reconstruct a source volume consisting of a series of point sources located along the solid diagonal of the volume. Figure 7 shows the composite volume in which all the point-source reconstructions are combined and displayed. This figure demonstrates the depth imaging capability of reference structures.
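The 3D case uses the same linear model with the voxel grid flattened to 1000 unknowns; the sketch below (with a random stand-in calibration matrix) reshapes the estimate back into a 10×10×10 volume, mirroring the diagonal point-source test.

```python
import numpy as np
from scipy.optimize import nnls

def reconstruct_volume(xi_3d, m, shape=(10, 10, 10)):
    """Solve the non-negative least squares problem and reshape the estimate into a volume."""
    s_hat, _ = nnls(xi_3d, m)
    return s_hat.reshape(shape)              # voxel ordering must match the characterization scan

# Synthetic check: 10 point sources along the solid diagonal (flat index 111*k maps to voxel (k, k, k)).
rng = np.random.default_rng(4)
xi_3d = (rng.random((1000, 1000)) < 0.5).astype(float)
s_true = np.zeros(1000)
s_true[::111] = 1.0
volume = reconstruct_volume(xi_3d, xi_3d @ s_true)
```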

5. Conclusion

This paper demonstrated an imaging system based on a random 3D bead structure for estimating a 3D source volume. The reference structure spatially segments the source volume, generating regions with distinct signatures on the sensor array. These distinct signatures can be used to reconstruct the 3D source volume. In the future, we hope to implement a scan-free three-dimensional microscope based on this principle.

Fig. 7. Reconstructed source volume consisting of a series of point sources located along the solid diagonal of a cube-shaped source volume: all the reconstructions are combined and shown as a composite 3D volume

Acknowledgments

The authors thank Nikos Pitsianis, Mike Sullivan, Xiaobai Sun, Steve Feller, Evan Cull, and Unnikrishnan Gopinathan for useful discussions. This work was supported by the Defense Advanced Research Projects Agency through grant DAAD19-01-1-0641.

References and links

1. D. J. Brady and Z. U. Rahman, “Integrated analysis and design of analog and digital processing in imaging systems: introduction to the feature issue,” Appl. Opt. 41, 6049 (2002).

2. W. T. Cathey and E. R. Dowski, “New paradigm for imaging systems,” Appl. Opt. 41, 6080–6092 (2002).

3. G. Barbastathis and D. J. Brady, “Multidimensional tomographic imaging using volume holography,” Proc. IEEE 87, 2098–2120 (1999).

4. D. L. Marks, R. A. Stack, D. J. Brady, D. C. Munson, and R. B. Brady, “Visible cone-beam tomography with a lensless interferometric camera,” Science 284, 1561–1564 (1999).

5. M. R. Descour, C. E. Volin, E. L. Dereniak, T. M. Gleeson, M. F. Hopkins, D. W. Wilson, and P. D. Maker, “Demonstration of a computed-tomography imaging spectrometer using a computer-generated hologram disperser,” Appl. Opt. 36, 3694–3698 (1997).

6. E. R. Dowski and W. T. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).

7. P. Potuluri, M. R. Fetterman, and D. J. Brady, “High depth of field microscopic imaging using an interferometric camera,” Opt. Express 8, 624–630 (2001), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-8-11-624.

8. P. Potuluri, U. Gopinathan, J. R. Adleman, and D. J. Brady, “Lensless sensor system using a reference structure,” Opt. Express 11, 965–974 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-8-965.

9. T. Wilson, Confocal Microscopy (Academic Press, London, 1992).

10. T. Cannon and E. Fenimore, “Coded aperture imaging: many holes make light work,” Opt. Eng. 19, 283–289 (1980).

11. E. E. Fenimore, “Coded aperture imaging: predicted performance of uniformly redundant arrays,” Appl. Opt. 17, 3562–3570 (1978).

12. A. R. Gourlay and J. B. Stephen, “Geometric coded aperture masks,” Appl. Opt. 22, 4042–4047 (1983).

13. K. A. Nugent, “Coded aperture imaging: a Fourier space analysis,” Appl. Opt. 26, 563–569 (1987).

14. T. M. Cannon and E. E. Fenimore, “Tomographical imaging using uniformly redundant arrays,” Appl. Opt. 18, 1052–1057 (1979).
