
Development and uncertainty characterization of 3D particle location from perspective shifted plenoptic images

Open Access

Abstract

This work details the development of an algorithm to determine the 3D position and in-plane size and shape of particles by exploiting the perspective shift capabilities of a plenoptic camera combined with stereo-matching methods. This algorithm is validated using an experimental data set previously examined in a refocusing-based particle location study, in which a static particle field is translated to provide known depth displacements at varied magnifications and object distances. Examination of these results indicates that increased accuracy and precision are achieved compared to the previous refocusing-based method at significantly reduced computational cost. The perspective shift method is further applied to fragment localization and sizing in a lab-scale fragmenting explosive.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In recent years, the development of non-invasive 3D diagnostics has become a significant area of research due to the wide variety of available techniques and the multitude of engineering problems that benefit from these measurements. The current work is motivated by the need to measure explosively generated fragment fields in 3D. For example, the top row in Fig. 1 shows the output from a lab-scale detonator (RP-80 from Teledyne RISI). Here, rapid expansion of the detonating explosive causes the surrounding metal case to stretch until failure leading to the formation of sharp metal fragments traveling near one kilometer per second [1]. Accurate knowledge of the 3D location, size, shape, and velocity of these fragments is critical to understand and mitigate hazards from such explosives [2].

Fig. 1 Selected frames from high-speed video of a lab-scale detonation (top) and vertically shifted perspective views created from a simultaneously captured plenoptic image (bottom).

For nearly a century, explosion analysis has been performed through assessment of pressure and acceleration measurements as well as examination of post-explosion fragment locations; however, the resolution and accuracy of these techniques are limited [3–5]. More recently, optical methods such as high-speed video and digital holography have been used to improve the information that can be obtained about these fragment fields [1,6]. The technique developed in this work uses a plenoptic camera to measure these fragment or particle characteristics and allows for instantaneous collection of the 3D data from a single snapshot by extracting the volumetric information in post-processing [7,8].

Plenoptic imaging is an implementation of light field imaging in which a camera is modified by the insertion of a microlens array between the main lens and the image sensor. This microlens array allows collection of the spatial locations of the incoming light rays (as in a traditional camera) while also encoding the angular information of the light rays as a function of their position in the sub-aperture images. A raw plenoptic image can be post-processed to create computationally refocused or perspective shifted images, producing a 3D representation of the scene from a single instantaneous image [8]. Previous works have applied plenoptic imaging to particle-based measurements in a variety of applications and scales [9–15]. Extensive work has also been conducted on the determination of depth maps from light field data for depth estimation of surface profiles and continuous objects [16–18]. The benefits to depth measurements resulting from the large number of views available in plenoptic imaging have been shown specifically in Roberts et al. [19]. Other works have used commercial software for stereo fragment tracking applications similar to those of interest here; however, those codes generally do not allow the large number of views available in a light field data set [1]. A direct comparison of these types of methods to the algorithm developed here is of interest, but is not included due to the non-trivial effort of adapting methods designed for significantly different applications to plenoptic data structures.

The current work builds upon the refocusing-based method developed in [20] and is inspired by the limitations of that method. In application of the refocusing-based method, particles are located by first creating refocused images of the scene at closely spaced depths, termed a focal stack. Then, 3D particle locations are measured using metrics of minimum intensity to determine in-plane location and maximum edge sharpness to determine optical depth. In [21], the accuracy and precision of experimental depth displacement error are reported along with a comparison to theoretical values of uncertainty. The most prominent limitation of this refocusing-based implementation is the restricted depth of field within which measurements remain accurate. Precision outside the depth of field degrades significantly, and the data collected at the most extreme depths cannot be processed because particles cannot be computationally brought into sufficient focus. Additionally, the necessity of creating dense focal stacks from each raw image requires significant computational resources, making the processing requirements of a refocusing-based particle location algorithm unreasonable for large data sets with typical computational resources [20].

As an alternative, the perspective shift capability of plenoptic imaging is demonstrated in the bottom row of Fig. 1. These images are created from a single instantaneous raw plenoptic image of the same detonator as shown in the top row. Experimental methods are similar to those reported previously in [12,20]. In Fig. 1, perspective is shifted by selecting the data from different locations within sub-aperture images of the plenoptic camera. Comparison of the two circled fragments shows a vertical shift in perspective. Given knowledge of the camera configuration, this shift encodes the 3D location of the particles.

Here, particle localization algorithms are proposed which utilize this perspective information to overcome the limitations of the previous refocusing-based method [20]. In the sections that follow, the perspective shift algorithm is first defined. Next, results from a well-controlled experiment are used to quantify accuracy and precision. Measurement error is shown to be significantly reduced compared to the previous refocusing-based method. Furthermore, the range of measurable optical depths is increased while the computational requirements are markedly decreased. Finally, the work concludes with application of the method to quantify the fragments shown in the instantaneous plenoptic image in Fig. 1.

2. Perspective shift algorithm

The algorithm developed in this work uses the discrepancy in 2D particle centroids between perspective views to determine 3D particle positions. The large number of views available in plenoptic imaging provides redundancy and allows erroneous measurements to be identified and removed, which reduces uncertainty. An outline of the method is shown in Fig. 2 and described in detail in the following subsections. The accuracy and precision of particle location measurements achievable using this method are assessed using the experimental data set examined in previous work [20], in which a rigid static particle field is simulated by straight pin heads inserted at random orientations into a foam ball, as shown in Fig. 3. The particle field is mounted on a translation stage allowing precise displacements along the optical depth direction, z.

Fig. 2 Flow chart outlining the depth from perspective method to determine 3D particle position from a raw plenoptic image.

Fig. 3 Experimental configuration including simulated particle field, translation stage, and plenoptic camera [20].

2.1 Identification of particle centroids within each perspective view

As shown in Fig. 4, the first step is the creation of all available perspective views from the single, instantaneous raw plenoptic image. Here, perspective views are generated using the Light Field Imaging Toolkit [22], and each view is assigned a (u,v) position corresponding to angular position within the main lens aperture. Note, in Fig. 4, three example perspective images are shown from the 97 total perspective images generated with the current camera and processing algorithm.
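To make the underlying data structure concrete, the following minimal sketch shows how a single perspective view is formed from a 4D light field array; the (u, v, s, t) axis layout and function name are assumptions for illustration, not the LFIT API.

```python
import numpy as np

def perspective_view(light_field, u_idx, v_idx):
    """Extract one perspective (sub-aperture) view from a 4D light field.

    `light_field` is assumed to be an array with axes (u, v, s, t): fixing a
    single angular position (u, v) and keeping all spatial samples (s, t)
    yields the scene as seen from that location in the main lens aperture.
    """
    return light_field[u_idx, v_idx, :, :]

# Example: three views along a horizontal sweep of the aperture, as in Fig. 4.
# views = [perspective_view(L, u, v_center) for u in (u_min, u_center, u_max)]
```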

Fig. 4 Perspective views of a static particle field demonstrating a horizontal shift in perspective. Red circles indicate identified particles in each view. Black ovals indicate particles of interest.

Next, in each perspective view, particle locations are determined by segmentation. This work utilizes typical MATLAB region finding tools which define particles (or fragments) as connected regions of pixels with intensities below dynamically determined maximum intensity thresholds. User inputs allow selection of acceptable particle size and shape parameters and intensity threshold window size; therefore, results can be improved in applications where some particle size and shape characteristics are known. Note that the literature contains a wide range of alternative image segmentation tools. These could likely provide additional benefits and tuning parameters when required for other applications.
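As an illustration of this segmentation step, a minimal sketch in Python (SciPy) is given below; it is not the authors' MATLAB implementation, and the adaptive-threshold window, offset, and area limits are placeholder parameters standing in for the user inputs described above. Intensities are assumed normalized to [0, 1].

```python
import numpy as np
from scipy import ndimage

def find_particle_centroids(view, window=51, offset=0.05,
                            min_area=4, max_area=500):
    """Segment dark particles on a bright background in one perspective view.

    A moving-average threshold is computed per pixel; pixels darker than the
    local mean by `offset` are particle candidates, mirroring the dynamically
    determined intensity thresholds described in the text.
    """
    local_mean = ndimage.uniform_filter(view.astype(float), size=window)
    labels, n = ndimage.label(view < local_mean - offset)
    centroids = []
    for region in range(1, n + 1):
        mask = labels == region
        if min_area <= mask.sum() <= max_area:   # user-selected size limits
            centroids.append(ndimage.center_of_mass(mask))
    return centroids                             # (t, s) pixel coordinates
```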

Red circles shown in Fig. 4 give examples of the identified particle regions for each perspective view. In this image, pinheads are used to represent particles. Within each perspective view, the centroids of these segmented regions are assigned (s,t) spatial coordinates. This results in a list of measured (u,v,s_m,t_m) light field coordinates depicting all measured particle centroids. At this point, the centroids have been identified, but it has not yet been determined which centroids correspond to which particle. Note, in Fig. 4, some particles are not identified in every view due to in-plane proximity with other particles. Still, as shown in the subsequent discussion, the 3D locations of these particles can be determined due to the large number of available views.

2.2 Clustering of corresponding particle centroids from each view

As seen in Fig. 4, an individual particle is assigned different (u,v,s_m,t_m) centroids from each perspective view. Next, it is necessary to match the corresponding particle centroids which belong to the same physical particle. In the following subsections, 3D particle location is determined using the centroid coordinates. In addition, the identified particle regions in each view are used for in-plane size and shape determination and to filter potential shape outliers from centroid clusters. Conceptually, this is similar to particle tracking, except that the views are spatially, not temporally, correlated. It is also similar to the correspondence problem in stereo photogrammetry, except that many views are used. This means that all views are equally correspondent to all other views, and any sequential method of matching particle centroids between views is susceptible to bias resulting from the order in which views are matched. To avoid this bias, all particle centroids from all views are examined simultaneously using a variation of a k-means clustering technique in combination with depth plane projections. To simplify the discussion, this technique is first shown schematically in Fig. 5, where a simulated example with two point particles (color coded blue and orange) is shown.

Fig. 5 Schematic depiction of necessity and functionality of shifted plane projection clustering.

Starting on the right-hand side of Fig. 5, an orange particle is located at an optical depth beyond the nominal focal plane. Light rays from this orange particle (shown by solid lines) propagate to the aperture of the main lens, where they are focused at a plane in front of the microlens array. When these rays reach the microlens plane, they spread back out and are imaged as a cluster of discrete (s_m,t_m) locations, as shown by the orange dots in the center image in the bottom row of Fig. 5. Similarly, the blue particle, located before the nominal focal plane, is imaged as a cluster of discrete (s_m,t_m) locations, as shown by the blue dots in the same image. For scientific applications such as this, grayscale image sensors are typically used to maximize pixel resolution, and the color information shown in Fig. 5 cannot be differentiated. Consequently, with no further information, these two discretized realizations of out-of-focus particle images are difficult to separate and quantify as two distinct particles.

Fortunately, the plenoptic architecture also quantifies the angular (u,v) coordinates for each of the light rays in Fig. 5. As shown in [8], with both the spatial (s_m,t_m) and angular (u,v) information, any ray can be numerically propagated to an alternative image plane at a distance α times the nominal image distance. At that plane, the refocused spatial coordinates (s'_m,t'_m) are

$$s'_m = u + (s_m - u)\,\alpha \qquad \text{and} \qquad t'_m = v + (t_m - v)\,\alpha. \tag{1}$$
For example, when the rays in Fig. 5 are projected to the (s',t')_blue plane, which is close to the image depth of the blue particle, the rays belonging to that particle form a tight cluster that can be more easily identified. A similar effect occurs at the (s',t')_orange plane for the other particle. Therefore, the method proposed here combines numerical refocusing via Eq. (1) with spatial k-means clustering to identify matching particle centroids, which form tight clusters at projected image planes near their focus location. As clusters of particle centroids are identified, they are removed from consideration, improving the likelihood of successful clustering of the remaining data. (Note, although the α at which a cluster is identified provides an estimate of particle depth, this is not used as a depth measurement due to the sparsity of projection planes and because the necessary volumetric calibration is not applied to calculate the projection.)
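A direct transcription of Eq. (1), assuming NumPy arrays in the normalized light field coordinate convention of [8]:

```python
import numpy as np

def project_centroids(u, v, s_m, t_m, alpha):
    """Numerically propagate measured centroids to a virtual image plane at
    alpha times the nominal image distance (Eq. (1))."""
    s_p = u + (s_m - u) * alpha
    t_p = v + (t_m - v) * alpha
    return s_p, t_p
```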

At a given image depth (fixed α), the k-means algorithm minimizes the average squared distance between points in the same cluster for a given number of clusters, k. In this application, k is the maximum number of particle centroids identified in a single perspective view. The general k-means algorithm is as follows (a minimal code sketch is given after the list):

  • 1. A set of k initial cluster centers, c, is selected from the pool of all available (s_m,t_m) particle centroids. The k-means variation used here also strategically selects these initial centers to improve performance.
  • 2. Each of the remaining (s_m,t_m) positions is assigned to the closest center c.
  • 3. The center of mass of each of these clusters is calculated and defined as the new center of the cluster.
  • 4. Steps 2 and 3 are repeated until the centers no longer change [23].
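A minimal sketch of this clustering step is given below using scikit-learn, whose k-means++ seeding [23] plays the role of the strategic initial center selection in step 1; treating it as a drop-in for the authors' variation is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_at_plane(points, k):
    """Cluster projected (s', t') centroids at one depth plane.

    `points` is an (N, 2) array; k is the maximum number of particle
    centroids identified in any single perspective view.
    """
    km = KMeans(n_clusters=k, init="k-means++", n_init=10).fit(points)
    return km.labels_, km.cluster_centers_
```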

This clustering is executed at a range of planes throughout the volume of interest by projecting the particle centroids to different image planes using Eq. (1). At each plane, k-means clustering is executed. The size of each cluster is calculated as the maximum distance between the particle (s_m,t_m) positions assigned to that cluster. Any cluster with a size below a predetermined threshold is accepted as a correct particle cluster and removed from consideration at following planes. The size threshold, T, is the expected maximum size of a correct particle cluster when the physical location of the particle is closer in depth to the current plane than to any of the other considered planes, and is defined by

$$T = 1.2\left[\frac{F}{f_n}\,\frac{\alpha_{1/2} - \alpha_s}{\alpha_{1/2}}\right], \tag{2}$$
where F is the focal length of the main lens, f_n is the f-number of the main lens, α_{1/2} is the value of α halfway between the current and neighboring shifted planes, and α_s is the value of α at the current shifted plane. This expected value is multiplied by 1.2 to allow for experimental error.
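In code, Eq. (2) reduces to a one-line function; taking the absolute plane separation is an assumption here so that planes on either side of the current plane are handled, and the (s, t) coordinates are assumed to share the physical units of the aperture diameter F/f_n.

```python
def size_threshold(F, f_num, alpha_half, alpha_s, margin=1.2):
    """Maximum expected extent of a correct cluster at the plane alpha_s,
    with alpha_half halfway to the neighboring plane (Eq. (2)); the factor
    margin = 1.2 allows for experimental error."""
    return margin * (F / f_num) * abs(alpha_half - alpha_s) / alpha_half
```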

To reduce the impact of the order in which the depth planes are examined in the clustering procedure, planes are examined from smallest to largest size threshold. At smaller thresholds, it is less likely that centroids from an incorrect particle will be included in a cluster. Since centroids are removed from consideration as they are sorted into tight clusters, this results in a smaller number of particles at the planes with larger size requirements. The number of clustering planes at which the k-means algorithm is executed is a parameter expected to be relevant for data sets with increased particle density. The smaller the number of clustering planes, the larger the size threshold; this results in more particles per plane and an increased probability of cluster overlap, even at the optimal plane for a given particle. In the current work, the clustering was executed at 10 planes for each image. It was determined that fewer planes resulted in significant errors and that more planes had a negligible effect on the measurements. It is expected that more planes could provide a benefit in the measurement of denser particle fields.
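The pieces above combine into the plane sweep described in this paragraph. The sketch below assumes the helper functions from the earlier snippets and simplified data structures; it is an illustration of the procedure, not the authors' implementation.

```python
import numpy as np

def sweep_planes(centroids, alphas, thresholds, k):
    """Plane-sweep clustering of the pooled particle centroids.

    centroids : (N, 4) array of (u, v, s_m, t_m) rows from all views.
    alphas, thresholds : the ~10 projection planes and their Eq. (2) limits.
    """
    remaining, accepted = centroids.copy(), []
    # Visit planes from smallest to largest acceptance threshold.
    for alpha, T in sorted(zip(alphas, thresholds), key=lambda p: p[1]):
        if len(remaining) == 0:
            break
        u, v, s, t = remaining.T
        proj = np.column_stack(project_centroids(u, v, s, t, alpha))
        labels, _ = cluster_at_plane(proj, min(k, len(remaining)))
        keep = np.ones(len(remaining), dtype=bool)
        for lab in np.unique(labels):
            idx = labels == lab
            pts = proj[idx]
            # Cluster size: maximum pairwise distance among member centroids.
            size = max(np.linalg.norm(a - b) for a in pts for b in pts)
            if size < T:              # tight cluster -> accept as one particle
                accepted.append(remaining[idx])
                keep &= ~idx
        remaining = remaining[keep]
    return accepted, remaining        # clusters and still-unassigned centroids
```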

An example of this projection and clustering process applied to the experimental image shown in Fig. 4 is given in Fig. 6. This shows the compilation of measured particle centroid locations from all perspective views projected to two different planes. Of particular interest are the particles in the circled regions. First, consider the projections in the dashed oval. From Fig. 4, it is evident that there are three particles in this region. At the projection plane shown on the left of Fig. 6, only two clusters are created due to the proximity of the particle centroids; however, by projecting these positions to the plane shown on the right, the three particles are separated enough to allow correct clustering. At this plane, the particle images shown in blue form a tight cluster and are identified as a particle; the other two particles are identified at planes not shown here. Other examples of tight clusters created by the projection process are shown in the solid and dotted ovals, where the particle centroids form a tight cluster at one plane but not the other. In the case of the solid oval, a single particle is incorrectly identified as two large clusters in the plane shown on the left, but correctly identified in the plane shown on the right.

Fig. 6 Example of particle centroid clustering executed at two different projection depth planes; color indicates centroids assigned to the same cluster. Ovals indicate particles of interest corresponding to those shown in Fig. 4.

In some cases of overlapping particles, it is possible that no single view contains all overlapped particles. In this case, it is impossible to identify these overlapping particles because the value of k is smaller than the true number of particles. To rectify this, if the above process is completed and a large number of unassigned particle centroids remain, the possible number of particles input into the k-means algorithm is incrementally increased and the process is repeated with the remaining unassigned particle centroids.

For each physical particle in the measurement volume, the end result of this clustering portion of the algorithm is a list of all measured (u,v,s_m,t_m) particle centroids combined from all perspective views. To exploit the measured in-plane characteristics of the particles, user inputs allow limits to be placed on the acceptable variation in size and shape parameters within a cluster. Given the possibility of erroneous cluster assignment, the 3D location procedure allows the rejection of any measurements which may have been inappropriately assigned.

2.3 3D particle location

The 3D object space location of a particle is determined using the relationship defined by the Direct Light Field Calibration (DLFC), a volumetric calibration method that uses a 3D polynomial to relate light field coordinates, (u,v,s,t), to object space coordinates, (x,y,z). Details can be found in [24]. A schematic depiction of the 3D triangulation process is given in Fig. 7. For a given particle, the (x,y,z) position that minimizes the residual of the DLFC polynomial with respect to the measured (u,v,s_m,t_m) coordinates is first determined using a MATLAB nonlinear solver. Next, the DLFC relationship is used directly to define calculated light field coordinates, (u,v,s_c,t_c), which correspond to that (x,y,z) position. The measured and calculated light field coordinates are compared, and any measurements that show large discrepancies are rejected as outliers. The process is then repeated by calculating a new (x,y,z) position from the remaining (u,v,s_m,t_m) coordinates until no large discrepancies are found. This allows removal of a (typically small) number of erroneous measurements while still permitting a valid measurement of the particle.
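The triangulation-with-rejection loop can be sketched with a generic nonlinear least-squares solver; the `dlfc` polynomial interface and the rejection tolerance below are assumptions standing in for the calibration of [24] and the authors' discrepancy test.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_particle(cluster, dlfc, tol=1.0):
    """Triangulate one particle from its clustered centroids.

    cluster : (N, 4) array of (u, v, s_m, t_m) rows for one particle.
    dlfc(x, y, z, u, v) : assumed to return the calculated (s_c, t_c) from
    the volumetric calibration polynomial.
    """
    pts, xyz0 = cluster.copy(), np.zeros(3)
    while True:
        def residuals(xyz):
            s_c, t_c = dlfc(*xyz, pts[:, 0], pts[:, 1])
            return np.concatenate([s_c - pts[:, 2], t_c - pts[:, 3]])
        sol = least_squares(residuals, xyz0)
        s_c, t_c = dlfc(*sol.x, pts[:, 0], pts[:, 1])
        err = np.hypot(s_c - pts[:, 2], t_c - pts[:, 3])  # per-view discrepancy
        if np.all(err <= tol) or len(pts) <= 3:
            return sol.x, pts            # (x, y, z) and the retained centroids
        pts = pts[err <= tol]            # reject outlier views and re-solve
        xyz0 = sol.x
```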

Fig. 7 Schematic depiction of 3D particle location process. Translation between light field and object space coordinates is repeated via DLFC until no outliers remain.

2.4 Confidence determination and particle centroid addition

At this point, an attempt is made to find particle centroids that may belong to an identified particle but have not been assigned to that particle due to overlap, the influence of other nearby particles, or unsuccessful clustering. First, a confidence value for each identified particle is defined as,

$$C = \frac{n_v / n_p}{d / z_n}, \tag{3}$$
where n_v is the number of particle centroids identified for the cluster, n_p is the maximum possible number of centroids (defined by the number of available perspective views), d is the average difference between the measured and calculated (s,t) positions for all matching (u,v) for the particle, and z_n is the depth position of the particle relative to the nominal focal plane, normalized by the volume depth. The numerator, n_v/n_p, goes to one when a cluster contains particle centroids located from every perspective view. The denominator, d/z_n, is a measure of the in-plane uncertainty of the particle centroids, normalized by the optical depth from the nominal focal plane. This normalization accounts for the expected increase in uncertainty away from the nominal focal plane. Therefore, Eq. (3) produces high confidence values for particles that are measured in many perspective views and that have small in-plane uncertainty relative to the nominal focal depth.
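Equation (3) translates directly to code; the guard against particles exactly at the nominal focal plane (z_n → 0) is an assumption not discussed in the text.

```python
def confidence(n_v, n_p, d, z_n, eps=1e-9):
    """Eq. (3): confidence of one identified particle. Large values mean the
    particle was found in many views (n_v -> n_p) with a small in-plane
    discrepancy d relative to its normalized depth z_n from the focal plane."""
    return (n_v / n_p) / (d / max(abs(z_n), eps))
```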

Next, for every cluster where n_v < n_p, starting with the cluster with the highest C, the remaining unassigned particle centroids are examined, and any centroid which falls within an allowable positional range is added to the cluster. The 3D position is then recalculated using the more complete set of particle centroids. If unassigned centroids remain, the entire process is repeated, starting by clustering only the remaining particle centroids.
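A sketch of this recovery pass is given below, using a hypothetical dictionary record per cluster; the `within_range` positional test and the `locate` solver are placeholder stand-ins for the allowable range and the DLFC triangulation described above.

```python
import numpy as np

def add_unassigned(clusters, unassigned, within_range, locate):
    """Recovery pass: visit clusters from highest to lowest confidence C and
    absorb unassigned centroids that fall inside the allowable positional
    range, then re-triangulate with the more complete set of centroids."""
    for cl in sorted(clusters, key=lambda c: c["C"], reverse=True):
        matches = [m for m in list(unassigned) if within_range(cl, m)]
        for m in matches:
            cl["members"].append(m)
            unassigned.remove(m)
        if matches:
            cl["xyz"], _ = locate(np.asarray(cl["members"]))
    return clusters, unassigned
```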

Finally, Fig. 8 shows the results when the entire process outlined in this section is applied to the example presented in Figs. 4 and 6. Again, particles of interest are circled as in Figs. 4 and 6. Note that even the partially occluded particle has been located.

Fig. 8 Final measured positions of the particles shown in Figs. 4 and 6 where color indicates the measured depth of each particle.

3. Experimental assessment

Experimental assessment of the algorithm is conducted using the configuration shown in Fig. 3; additional experimental details and a detailed description of the error calculation can be found in previous work examining this data set [20], and a brief synopsis is given here. The plenoptic camera is a 29 MP Imperx Bobcat B6640 (CoaXPress interface) with a KAI-29050 CCD image sensor (6600 × 4400 pixels, 5.5 μm pixel pitch), modified by the addition of a microlens array with 471 × 362 hexagonally arranged microlenses positioned approximately 308 μm from the image sensor using a custom mount. Data is collected at four different magnifications and with the camera positioned at three different distances from the particle field. This results in 12 configurations, which allow examination of trends based on field of view and object distance. In each of these configurations, the particle field is translated in 1 mm increments along the entire 50 mm travel distance of the translation stage, and images are captured at each position. This process is repeated 50 times for each configuration to achieve a large, statistically significant data set of over 30,000 raw images.

Measurement uncertainty obtained using the perspective shift method developed here is compared to that of the previous refocusing implementation [20]. To account for the imprecisely known offset of the traverse z = 0 position from the nominal focal plane, measured particle depths, z, as a function of traverse position are first fit to a line with known slope and variable intercept. Depth displacement error is then defined by the z-distance between individual particle measurements and this line [20]. In the following analysis, accuracy (or conversely bias) is quantified by the average depth displacement error, and precision is quantified by the standard deviation of depth displacement error.
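As a sketch of this error definition: with the slope fixed, the least-squares fit reduces to a mean offset. The snippet below assumes a unit slope between traverse position and true depth; the accuracy and precision statistics are then the mean and standard deviation of these residuals pooled over particles and repetitions at each depth.

```python
import numpy as np

def depth_displacement_errors(z_traverse, z_measured):
    """Fit one particle's measured depths to a line of fixed unit slope with
    a free intercept, and return the per-position residuals (depth
    displacement errors)."""
    intercept = np.mean(z_measured - z_traverse)   # least-squares intercept
    return z_measured - (z_traverse + intercept)
```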

3.1 Accuracy

Accuracy is considered in Fig. 9, which displays the average depth displacement error as a function of z-location measured using the perspective shift based method (left) alongside the results of the previous refocusing-based method (right). Vertical error bars represent 99% confidence bounds. The perspective shift results show not only errors reduced by approximately a factor of two but also smaller confidence bounds compared to the refocusing results, demonstrating more consistent measurements. Over a range of 50 mm, depth displacement error is within 0.1 mm, or approximately 0.2% relative error. This compares to errors of approximately 0.4% measured with the refocusing implementation. As in the refocusing implementation, there is no clear trend in accuracy as a function of depth, indicating that the applied DLFC volumetric calibration effectively removes depth bias caused by lens distortion and alignment errors.

Fig. 9 Average depth error as a function of particle depth, z, using perspective shift (left) and refocusing (right) based depth measurement.

3.2 Precision

Precision is examined in Fig. 10, which shows similar plots of the standard deviation of depth error as a function of depth. The perspective shift results show generally improved precision and smaller error bars compared to the refocusing results. Over a range of 50 mm, the standard deviation of depth displacement error is within 0.4 mm. Again, this is significantly improved compared to the refocusing case, where values up to 1.8 mm were measured. In the refocusing-based implementation, an improvement in precision was also evident with increasing magnification, a trend not seen in the perspective shift results. This difference is likely a result of differences in the metrics used to determine particle depth. In refocusing, depth is determined using sharpness metrics; therefore, the precision of the measurements is likely related to the number of edge pixels of each particle. Since a fixed particle size is examined in all cases, at larger magnifications each particle has more edge pixels, possibly contributing to the increased precision of these measurements. In the perspective shift based implementation, depth is determined from the locations of the particle centroids, the accuracy of which is not strongly affected by particle size except at extremes. Additionally, the perspective shift method allows removal of low-quality portions of measurements by rejection of imprecise particle centroids from individual perspective views. In refocusing, these measurements are integrated into the final position measurement and may negatively affect precision.

Fig. 10 Standard deviation of depth error as a function of particle depth, z, using perspective shift (left) and refocusing (right) based depth measurement.

3.3 Depth range

The improvements provided by the perspective shift algorithm as compared to the refocusing algorithm are further demonstrated by the depth range over which measurements can be obtained. In the previous refocusing-based method, much of the data at the largest magnification (0.75) could not be processed. This was attributed to the small depth of field, which did not allow objects to be brought into sufficient focus for valid sharpness measurements. In contrast, the perspective-shift method requires only the centroid of a particle to be located. Therefore, data can be processed at depths significantly outside the depth of field. This is illustrated by the results shown in Fig. 11, where the standard deviation of depth error is plotted as a function of depth. Near the focal plane (|z| near zero), the two methods perform similarly. This similarity suggests that there may be some configurations in which refocusing could provide an improvement in precision, particularly if combined with the perspective shift technique discussed here. At the extremes, some degradation is evident in the perspective shift results, particularly at the depths farthest from the camera, where only a relatively small number of particles could be located. Still, at these depths, the refocusing method failed to measure any particles. Therefore, the improved ability to quantify particles at extreme optical depths is another advantage of the method proposed here.

Fig. 11 Standard deviation of depth error as a function of particle depth, z, for a magnification of 0.75.

3.4 Computational efficiency

This section compares the computational resources required by the perspective shift and the previous refocusing-based methods. The base plenoptic data processing functionalities required in this algorithm use those available in the Light Field Imaging Toolkit, a collection of MATLAB functions developed by the Advanced Flow Diagnostics Laboratory at Auburn University and available at https://github.com/AFDL/LFIT. Computational times reported here consider a MATLAB implementation parallelized on a 12-core desktop with dual Intel Xeon E5-2600 v4 processors and 144 GB of RAM. Both methods initially require a preprocessing step in which the raw hexagonal grid is interpolated onto a rectilinear grid, which takes about 1 minute. The calculation of a focal stack of the required density and subsequent determination of particle centroid locations using the refocusing method takes about 1 hour for a single image. In comparison, the calculation of perspective views and execution of the particle location algorithm described in this paper takes approximately 10 seconds, a decrease of about two orders of magnitude. It should be noted that for more complicated or dense particle fields, the processing time requirements of each method increase.

The use of perspective views rather than a focal stack also reduces the memory requirements. The refocusing method requires storage of a large focal stack (approximately 2 GB of RAM for each raw image in the current example). In contrast, storage of the perspective views requires only about 0.2 GB. Considering the reduced uncertainties that are also obtained with this method, these computational savings are particularly significant.

3.5 Experimental application

Figure 12 shows the results of applying the perspective shift method developed here to the detonator application shown in Fig. 1. The bottom left shows a center perspective view which is overlaid with measured fragment shapes. Color indicates fragment depth, red being farther from the camera, blue being closer to the camera. To the right of this is a reconstructed orthogonal side view while a top-down view is shown above. Finally, the top-right shows an isometric view of the measured 3D fragment locations. The grayscale is proportional to measured fragment image area.

Fig. 12 Plenoptic measurements of the lab-scale fragmenting explosive shown in Fig. 1. Center perspective view (bottom left), overlaid with measured fragment shapes colored by optical depth, z. Reconstructed side (bottom right), top-down (top left), and isometric (top right) views showing the measured 3D fragment locations. Grayscale intensity and scatter sizes are proportional to the measured fragment image area.

Though the detailed fragment locations and shape characteristics are not known a priori, a qualitative assessment of the effectiveness of this measurement can be made. First, as expected from the cylindrical symmetry of the detonator, fragments are measured in an approximately circular ring pattern at this instant in time. Second, in this experiment, the detonator was placed at an angle tilted slightly towards the camera; therefore, it is known that the fragments in the upper portion of the ring are located farther from the camera compared to those in the lower portion of the ring. The depth map shows agreement with this orientation. This example indicates that a plenoptic camera combined with the data processing algorithms developed here can resolve 3D fragment properties from lab-scale explosive events. Future work will aim to quantify the uncertainty of extended objects such as the fragments in this example.

4. Conclusions

This work presents the development of an algorithm that measures 3D particle position and in-plane particle size and shape from perspective shifted plenoptic images. This is done by creating a range of perspective views from a raw plenoptic image and determining particle centroids in each of these views. These particle centroids are then projected to a variety of virtual image sensor planes and sorted using a k-means clustering routine. Finally, the 3D position of each particle is determined by minimizing the residual between the Direct Light Field Calibration polynomial and the positions of all particle centroids from all perspective views. This method allows the removal of individual low-quality measurements from the quantities used to determine final 3D particle positions.

Application of this method to a particle image data set previously examined in [20] shows improvement in measurement uncertainty and computational efficiency compared to the previous refocusing-based implementation. In this work, average depth precision within 0.4 mm and accuracy within 0.1 mm are achieved over a range of 50 mm. In addition, the required computational time is reduced by approximately two orders of magnitude. To expand the application space of the methodologies proposed here, future work should consider the effects of particle density on uncertainty. Preliminary results of the application of this method to images with increased density can be found in Hall et al. [25]. The perspective shift method is also applied to measure the metal fragment field created by a lab-scale explosive device, and the results qualitatively match the expected 3D positions, sizes, and shapes of these fragments.

Funding

Supported by the Laboratory Directed Research and Development program at Sandia National Laboratories, a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525.

References

1. D. R. Guildenbecher, J. D. Olles, T. J. Miller, P. L. Reu, J. D. Yeager, P. D. Bowden, and A. M. Schmalzer, “Characterization of hypervelocity fragments and subsequent HE initiation,” in 16th International Detonation Symposium (2018).

2. A. M. Mellor, T. L. Boggs, J. Covino, C. W. Dickinson, D. Dreitzler, L. B. Thorn, R. B. Frey, P. W. Gibson, W. E. Roe, M. Kirshenbaum, and D. M. Mann, “Hazard initiation in solid rocket and gun propellants and explosives,” Prog. Energy Combust. Sci. 14(3), 213–244 (1988). [CrossRef]

3. B. Hopkinson, “A method of measuring the pressure produced in the detonation of high explosives or by the impact of bullets,” Philos. Trans. R. Soc. Lond. 213(497-508), 437–456 (1914). [CrossRef]  

4. J. M. Brett, G. Yiannakopoulos, and P. J. Van Der Schaaf, “Time-resolved measurement of the deformation of submerged cylinders subjected to loading from a nearby explosion,” Int. J. Impact Eng. 24(9), 875–890 (2000). [CrossRef]  

5. G. Yiannakopoulos, “Accelerometer adaptor for measurements of metal plate response from a near field explosive detonation,” Rev. Sci. Instrum. 68(8), 3254–3255 (1997). [CrossRef]  

6. J. D. Yeager, P. R. Bowden, D. R. Guildenbecher, and J. D. Olles, “Characterization of hypervelocity metal fragments for explosive initiation,” J. Appl. Phys. 122(3), 035901 (2017). [CrossRef]  

7. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992). [CrossRef]  

8. R. Ng, M. Levoy, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Rep. CTSR 2005-02, 1–11 (2005).

9. T. W. Fahringer, K. P. Lynch, and B. S. Thurow, “Volumetric particle image velocimetry with a single plenoptic camera,” Meas. Sci. Technol. 26(11), 115201 (2015). [CrossRef]  

10. H. Chen and V. Sick, “Three-dimensional three-component air flow visualization in a steady-state engine flow bench using a plenoptic camera,” SAE Int. J. Engines 10(2), 625–635 (2017). [CrossRef]  

11. H. Chen, V. Sick, M. A. Woodward, and D. Burke, “Human iris 3D imaging using a micro-plenoptic camera,” in Optics in the Life Sciences Congress, paper BoW3A.6 (2017).

12. E. M. Hall, B. S. Thurow, and D. R. Guildenbecher, “Comparison of three-dimensional particle tracking and sizing using plenoptic imaging and digital in-line holography,” Appl. Opt. 55(23), 6410–6420 (2016). [CrossRef]   [PubMed]  

13. K. C. Johnson, B. S. Thurow, T. Kim, G. Blois, and K. T. Christiansen, “Volumetric velocity measurements in the wake of a hemispherical roughness element,” AIAA J. 55(7), 2158–2173 (2017). [CrossRef]  

14. T. T. Truscott, J. Belden, R. Ni, J. Pendlebury, and B. McEwen, “Three-dimensional microscopic light field particle image velocimetry,” Exp. Fluids 58(3), 16 (2017). [CrossRef]  

15. M. Jambor, V. Nosenko, S. K. Zhdanov, and H. M. Thomas, “Plasma crystal dynamics measured with a three-dimensional plenoptic camera,” Rev. Sci. Instrum. 87(3), 033505 (2016). [CrossRef]   [PubMed]  

16. Z. Yu, X. Guo, H. Ling, A. Lumsdaine, and J. Yu, “Line assisted light field triangulation and stereo matching,” in Proc. IEEE Int. Conf. Comput. Vis., 2792–2799 (2013). [CrossRef]

17. S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light fields,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 41–48 (2012).

18. H. G. Jeon, J. Park, G. Choe, J. Park, Y. Bok, Y. W. Tai, and I. S. Kweon, “Accurate depth map estimation from a lenslet light field camera,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., 1547–1555 (2015).

19. W. A. Roberts and B. S. Thurow, “Correlation-based depth estimation with a plenoptic camera,” AIAA J. 55(2), 435–445 (2017). [CrossRef]  

20. E. M. Hall, D. R. Guildenbecher, and B. S. Thurow, “Uncertainty characterization of particle location from refocused plenoptic images,” Opt. Express 25(18), 21801–21814 (2017). [CrossRef]   [PubMed]  

21. D. R. Guildenbecher and E. M. Hall, “Plenoptic imaging for three-dimensional particle field diagnostics,” Sandia Report SAND2017-6732 (2017). [CrossRef]

22. J. Bolan, E. Hall, C. Clifford, and B. Thurow, “Light-field imaging toolkit,” SoftwareX 5, 101–106 (2016). [CrossRef]  

23. D. Arthur and S. Vassilvitskii, “k-means++: the advantages of careful seeding,” in Proc. Eighteenth Annu. ACM-SIAM Symp. Discrete Algorithms, 1027–1035 (2007).

24. E. M. Hall, T. W. Fahringer, D. R. Guildenbecher, and B. S. Thurow, “Volumetric calibration of a plenoptic camera,” Appl. Opt. 57(4), 914–923 (2018). [CrossRef]   [PubMed]  

25. E. M. Hall, Z. P. Tan, D. R. Guildenbecher, and B. S. Thurow, “Refinement and application of 3D particle location from perspective-shifted plenoptic images,” in AIAA SciTech Forum (2018).
