
Method for GPU-based spectral data cube reconstruction of integral field snapshot imaging spectrometers

Open Access

Abstract

In this paper, the principles of spectral data cube reconstruction for an integral field snapshot imaging spectrometer and its GPU-based acceleration are presented. The primary focus is on improving the reconstruction algorithm with GPU parallel computing technology to achieve the computational efficiency required for real-time applications. The computational tasks of the spectral reconstruction algorithm were transferred to the GPU through program parallelization and memory optimization, resulting in significant performance gains. Experimental results indicate that the average processing time of the GPU-based parallel algorithm is approximately 29.43 ms, a speedup of about 14.27 over the traditional CPU serial algorithm, whose average processing time is around 420.46 ms. Future work will refine the GPU parallelization algorithm for continued improvement in computational efficiency and overall performance. The anticipated applications of this research include providing technical support for the perception and monitoring of crop growth traits in agricultural production, contributing to the modernization and intelligent advancement of the field.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the rapid development of agricultural technology, there is a growing demand for accurate perception of crop growth characteristics [1]. At the same time, advances in sensing technology enable agricultural professionals to understand the growth status of crops more comprehensively and in real time [2]. The emergence of hyperspectral imaging sensors allows farmers and agricultural experts to observe farmland at higher spatial and spectral resolutions. By collecting and analyzing hyperspectral data, minute changes in the crop growth process, including the nutritional status of plant leaves, moisture content, and early signs of pests and diseases, can be identified [3].

Recently, the use of drones equipped with spectrometers to capture images of crops has become an emerging trend in agricultural technology [4]. This approach enables efficient real-time monitoring, and the maneuverability of drones allows extensive agricultural areas to be covered, providing new tools and perspectives for agricultural management and fostering the development of precision agriculture [5].

A snapshot imaging spectrometer, as an optical instrument for acquiring spectral information, is capable of obtaining spectral information at multiple wavelengths across the entire scene within a single exposure [6]. Compared with line-scan spectrometers, the snapshot imaging spectrometer captures the entire spectrum in a single instance, providing advantages in speed and real-time capability that make it particularly effective in areas requiring immediate monitoring or rapid data collection [7]. Moreover, it can instantaneously capture the entire spectrum of dynamic or fast-changing scenes [8]. In contrast, line-scanning spectrometers offer higher spectral resolution by gradually scanning across different wavelengths; however, completing a measurement is more time-consuming, which restricts their suitability for applications that demand high temporal resolution [9].

In the literature, the vast majority of applications involve line-scanning spectrometers in combination with drones. For example, Milewski et al. (2022) [10] conducted a comprehensive analysis of the impact of soil conditions and degradation on vegetation vitality and crop yield in Camarena using line-scanning airborne hyperspectral VNIR-SWIR-TIR data, successfully predicting soil erosion and deposition stages through the application of hyperspectral technology combined with machine learning methods. Longmire et al. (2022) [11] demonstrated successful prediction of wheat grain protein content in the southern wheat belt of Australia through the analysis of airborne hyperspectral and thermal infrared remote sensing images. In their study, they utilized data acquired by airborne hyperspectral and thermal sensors, and employed machine learning techniques to relate physiological features extracted from the hyperspectral and thermal images to wheat grain protein content. Zhang et al. (2021) [12] used hyperspectral images obtained by unmanned aerial vehicles, in conjunction with crop height and narrowband vegetation indices, to successfully estimate aboveground biomass in maize. However, most research has focused on post-processing of images acquired by line-scanning spectrometers. In fact, real-time detection is crucial for capturing subtle changes during crop growth, and snapshot spectrometers are more appropriate for real-time monitoring. A few studies have highlighted the application of this technology. For instance, Cao et al. (2018) [13] conducted a mangrove species identification study using field close-range snapshot hyperspectral imaging and machine-learning techniques, demonstrating that hyperspectral datasets can further improve the accuracy of mangrove species classification. Yue et al. (2017) [14] used crop height data from a UAV-based snapshot hyperspectral sensor, in combination with spectral parameters, to notably improve the accuracy of AGB estimation for winter wheat.

In a snapshot imaging spectrometer, the 3D spectral information of an object can be reconstructed from a single shot (one frame of image) after appropriate calibration and data processing, a process known as spectral reconstruction [15]. Spectral reconstruction is the key step for obtaining the spectral characteristics of substances, calibrating and optimizing optical systems, and recovering lost spectral information [16]. Although significant progress has been made in combining hyperspectral technology with drones for agricultural detection, little attention has been paid to real-time spectral data cube reconstruction. Wei et al. [17] presented an efficient reconstruction algorithm based on tensor analysis and a low-rank constraint, which greatly decreases the computation time without unfolding the hyperspectral data cube into 2D patches; however, the computation time of the 3D-LRC method can only be shortened to the order of seconds. Koundinya et al. [18] proposed 2D and 3D convolutional neural network (CNN) approaches for hyperspectral image reconstruction from RGB images. The 2D-CNN model extracts spectral data by considering only the spatial correlation of the channels in the image, while the 3D-CNN model also exploits inter-channel correlation to refine the extraction, but the recovery time reached 11 s. Future research should focus on refining real-time monitoring and reconstruction methods to promote the widespread application of hyperspectral drones in agriculture.

This paper focuses on a spectral data cube reconstruction method for a snapshot imaging spectrometer based on GPU parallel computing. The purpose is to introduce a GPU-accelerated spectral reconstruction method that meets the real-time requirements of drone applications. The novelty of our study involves two aspects: hardware-wise, the performance of the sensor prototype in this study has been improved by increasing the number of spatial sampling points and spectral channels; software-wise, based on this sensor prototype, we use GPU parallel computing to accelerate the spectral data cube reconstruction process to real-time performance. By transferring the crucial spectral data cube reconstruction algorithms to the GPU and taking full advantage of GPU parallel computing, we enhance data processing speed significantly. This technological advancement is anticipated to accelerate hyperspectral data processing, making it better suited to instant monitoring and quick-response applications.

2. Experimental setup, material and methodology

2.1 Principle of imaging and data acquisition

The optical imaging principle of the integral field snapshot imaging spectrometer is presented in Fig. 1: the target object is imaged onto a pinhole array through the front lens; the array samples the imaged scene and transmits the light to a spectrometer. The spectrometer then disperses the light entering the pinholes, forming a spectral image on a CMOS detector. Finally, the hyperspectral data of the target are obtained with the assistance of a hyperspectral reconstruction algorithm.

Fig. 1. The operating principle of snapshot hyperspectral imaging technology with a small aperture array.

The schematic representation of the spectral dispersion bands formed on the detector is depicted in Fig. 2. In this illustration, each colored stripe corresponds to the spectral dispersion band of one pinhole, aligned with the detector's column direction. In this imaging configuration, a rotation angle θ exists between the rows and columns of the pinhole array and those of the detector. This design allows each spectral dispersion band to precisely fill the gaps between the pinholes, maximizing the use of the CMOS area.

Fig. 2. The imaging schematic of dispersion bands.

For this imaging system, a Hikvision MV-CH120-10(11)GM Ethernet camera is used as the detector. Due to the rotation angle between the pinhole array and the detector, the sampling area (Fig. 3) takes the shape of a parallelogram. The dimension of this sampling surface is $115 \times 70$, giving a total of 8050 sampling points.

Fig. 3. The receiving status of the sampling points on the light source surface of the detector.

2.2 Spectral calibration

The purpose of spectral calibration is to determine the relationship between spatial pixel coordinates and wavelength in each slit dispersion band. In this study, the 400–900 nm wavelength range is dispersed across 231 pixels.

For instance, the 480 nm, 550 nm, and 630 nm monochromatic dot plots are illustrated in Fig. 4. The white rectangle delineates the boundaries of a specific spectral band, with x_min representing the lower-left horizontal coordinate and y_max denoting the upper-right vertical coordinate. Within the y-range of each band there are 231 channels. Experimentally, each dispersion band is approximately 3 pixels wide.
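
As a hedged illustration of this calibration relation, the mapping from a channel index to its nominal wavelength can be sketched as below, assuming approximately linear dispersion over the 231 pixels; the actual mapping is derived from the monochromatic dot plots, and the function name is ours:

```cpp
// Sketch: map a channel index k (0..230) within a dispersion band to a
// nominal wavelength, assuming linear dispersion from 400 nm to 900 nm
// over 231 pixels. The real mapping is fitted from the monochromatic
// calibration images, so this function is illustrative only.
double channelToWavelength(int k) {
    const int    numChannels = 231;
    const double lambdaMin   = 400.0;  // nm
    const double lambdaMax   = 900.0;  // nm
    return lambdaMin + k * (lambdaMax - lambdaMin) / (numChannels - 1);
}
```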

Fig. 4. 480 nm, 550 nm, 630 nm monochromatic dot plots with white borders indicating the boundaries of a particular spectral band.

Given the multi-dimensional nature of hyperspectral data, where each dimension corresponds to a distinct aspect of the data, the data are typically represented in tensor form. Imagining the hyperspectral data as a data cube, each axis signifies a dimension. In this study (Fig. 5), one axis of the cube corresponds to wavelength, while the other two axes correspond to the spatial horizontal and vertical coordinates.
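
For later reference, a flat memory layout for such a cube might be indexed as in the following sketch; the storage order is our assumption, as the paper does not specify one:

```cpp
// Sketch: flat indexing into a 115 x 70 x 231 data cube stored as a single
// 1D array, with (i, j) the spatial coordinates and k the spectral channel.
// The storage order (spatial-major, channel-minor) is our assumption; the
// paper does not state one.
inline int cubeIndex(int i, int j, int k) {
    const int width    = 115;  // spatial horizontal samples
    const int channels = 231;  // spectral channels
    return (j * width + i) * channels + k;
}
```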

Fig. 5. The tensor decomposition model for the hyperspectral image.

Therefore, the essence of the spectral data cube reconstruction in this paper is to recover 3D spectral information from 2D data. It must be noted that this paper does not involve reflectance or radiance calibration, so grayscale values are used here, as shown in Fig. 6, where I stands for the grayscale value. This process typically involves multiple loops in the algorithm, and for large images the loop processing is too slow to keep up with the data acquisition rate of a drone in rapid flight.

Fig. 6. Recover grayscale values from 2D data.

2.3 GPU-based spectral data cube reconstruction

Given the total of 8050 dispersed spectral bands in the dispersion image output by the spectrometer, it is necessary to loop over the dispersion image in order to extract the spectral data of all bands. Using the dispersion band ranges obtained in Section 2.2, the grayscale value corresponding to each channel of each dispersed spectral band in the dispersion image can be computed. The grayscale value I of the k-th channel in the i-th dispersed spectral band is determined by Eq. (1), where k ranges from 1 to 231 and i ranges from 1 to 8050:

$$I(i,k) = \sum_{x_i = x_{i,\min}}^{x_{i,\max}} I(x_i, y_k)$$
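
A minimal serial sketch of Eq. (1), assuming the dispersion image is stored row-major as 8-bit grayscale and that each band's x-range comes from the calibration of Section 2.2 (the function and parameter names are illustrative):

```cpp
// Sketch of Eq. (1): sum the grayscale values across the ~3-pixel width of
// the i-th dispersion band at the detector row y_k holding channel k.
// The image is assumed row-major 8-bit grayscale; xMin/xMax are the band's
// horizontal boundaries from the spectral calibration of Section 2.2.
int bandChannelIntensity(const unsigned char* image, int imageWidth,
                         int xMin, int xMax, int yk) {
    int sum = 0;
    for (int x = xMin; x <= xMax; ++x) {
        sum += image[yk * imageWidth + x];  // I(x_i, y_k)
    }
    return sum;
}
```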

At the same time, the corresponding actual wavelength can be determined from the pixel position within the band, and one-dimensional linear interpolation can be conducted between the grayscale value and the actual wavelength. The fundamental concept is given by Eq. (2):

$$I(\lambda) = I(\lambda_1) + (\lambda - \lambda_1)\frac{I(\lambda_2) - I(\lambda_1)}{\lambda_2 - \lambda_1}$$
where $\lambda$ represents the wavelength position to be interpolated, $\lambda_1$ and $\lambda_2$ denote known wavelengths, and $I(\lambda_1)$ and $I(\lambda_2)$ are the grayscale values corresponding to the known wavelengths.
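
Eq. (2) translates directly into a small helper that can run on either the host or the device; a sketch (with our naming) follows:

```cpp
// Sketch of Eq. (2): one-dimensional linear interpolation of the grayscale
// value at wavelength lambda from two known samples (lambda1, I1) and
// (lambda2, I2). __host__ __device__ lets the same routine be reused
// inside a CUDA kernel.
__host__ __device__
double interpolateGray(double lambda, double lambda1, double lambda2,
                       double I1, double I2) {
    return I1 + (lambda - lambda1) * (I2 - I1) / (lambda2 - lambda1);
}
```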

Considering that the wavelength emitted by the laser may not be precisely accurate (e.g., if a wavelength of 401 nm is set, the actual wavelength may be 401.13 nm), the first dispersed spectral band traversed by the loop is taken as the reference to standardize the wavelengths across all 231 channels. For standardization, the wavelength difference between channels is calculated and a new wavelength array is constructed; the reference wavelength array is then employed to correct the results. Suppose the actually captured wavelengths fall within the range of 400.13 nm to 900.14 nm across the 231 channels. The grayscale values at each of the 231 channels are determined through one-dimensional linear interpolation, finally yielding the grayscale values of the 231 channels between 400 nm and 900 nm. This produces 8050 sets of dispersed spectral bands, each with 231 wavelengths and their corresponding grayscale values $I(i,k)$, i.e., a data cube of dimensions 115 × 70 × 231, where 115 is the width, 70 the height, and 231 the depth, and each element holds the grayscale value at its position $(i,k)$; this achieves the spectral data cube reconstruction depicted in Fig. 7. To assess the reconstruction speed, a 13th Gen Intel Core i9-13900HX CPU is used; the time spent on the CPU for the above reconstruction process is approximately 420.46 ms.
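
As a hedged illustration of this standardization step (the array names and layout are ours, not the authors'), each band's measured (wavelength, grayscale) samples can be resampled onto the reference wavelength grid with Eq. (2):

```cpp
// Sketch: resample one band's 231 (wavelength, grayscale) samples onto the
// reference wavelength array built from the first band. measuredLambda[]
// is assumed sorted in ascending order; all names are illustrative.
void standardizeBand(const double* measuredLambda, const double* measuredGray,
                     const double* refLambda, double* outGray, int n /* 231 */) {
    for (int k = 0; k < n; ++k) {
        double lambda = refLambda[k];
        // Find the bracketing sample pair [j, j+1], clamping at the ends so
        // reference wavelengths just outside the measured range still map.
        int j = 0;
        while (j < n - 2 && measuredLambda[j + 1] < lambda) ++j;
        double l1 = measuredLambda[j], l2 = measuredLambda[j + 1];
        double g1 = measuredGray[j],   g2 = measuredGray[j + 1];
        outGray[k] = g1 + (lambda - l1) * (g2 - g1) / (l2 - l1);  // Eq. (2)
    }
}
```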

Fig. 7. Schematic diagram of the recovery of spectral data cube based on dispersion strips.

To expedite the reconstruction, a GPU-based parallel computing technique is implemented. A Python script invokes a C++ dynamic-link library (DLL) that traverses and interpolates the target dispersion image. Specifically, Python is employed for data handling and for calling the interface functions of the C++ DLL, while the DLL executes the image loop, grayscale calculation, and correction to enhance operational efficiency. The implementation process is illustrated in Fig. 8.
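
A plausible shape for the DLL's exported C interface is sketched below; the function name and signature are our illustration, not the authors' actual API:

```cpp
// Sketch of a C interface that a Python script could load (e.g., via ctypes).
// The exported name and parameter list are illustrative assumptions, not the
// authors' actual API.
#ifdef _WIN32
#define EXPORT extern "C" __declspec(dllexport)
#else
#define EXPORT extern "C"
#endif

// image:        input dispersion image (row-major, 8-bit grayscale)
// width/height: detector image dimensions in pixels
// cube:         output buffer of 115*70*231 floats for the reconstructed cube
// Returns 0 on success, or a CUDA error code otherwise.
EXPORT int reconstructCube(const unsigned char* image, int width, int height,
                           float* cube);
```

On the Python side, ctypes.CDLL could load the library and call this function with NumPy-backed buffers, keeping all heavy computation inside the DLL.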

Fig. 8. Basic process of 3D spectral reconstruction.

The process is initiated by registering the host data buffers and transferring them to GPU memory. Subsequently, the image data are read and transmitted to GPU memory. Following that, the interpolation of grayscale values at the corresponding wavelengths is performed by the CUDA kernel function.

By leveraging the index of each thread, the wavelength and grayscale values of each dispersed spectral band element in the input dispersion image are obtained for interpolation, with the goal of deriving an interpolation function. Using this interpolation function, the grayscale values at each wavelength are computed after wavelength difference correction. Finally, the interpolated results are transferred back from the GPU to host memory, the previously allocated GPU memory is released, and the host memory is unregistered.
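
The host-side flow just described might look like the following sketch (error handling omitted; the kernel, sketched after Eq. (3), would in practice also receive the calibration tables):

```cpp
#include <cuda_runtime.h>

// Kernel sketched after Eq. (3); its calibration-table parameters are
// abbreviated here so the registration/transfer flow stays in focus.
__global__ void reconstructKernel(const unsigned char* image, float* cube);

void runReconstruction(unsigned char* hostImage, size_t imageBytes,
                       float* hostCube, size_t cubeBytes) {
    // Pin the host buffers so host-device transfers can use DMA.
    cudaHostRegister(hostImage, imageBytes, cudaHostRegisterDefault);
    cudaHostRegister(hostCube, cubeBytes, cudaHostRegisterDefault);

    unsigned char* dImage = nullptr;
    float* dCube = nullptr;
    cudaMalloc(&dImage, imageBytes);
    cudaMalloc(&dCube, cubeBytes);
    cudaMemcpy(dImage, hostImage, imageBytes, cudaMemcpyHostToDevice);

    // 256 threads per block, enough blocks to cover all 8050 dispersion
    // bands (see the launch-configuration discussion below).
    const int blockSize = 256;
    const int numBlocks = (8050 + blockSize - 1) / blockSize;
    reconstructKernel<<<numBlocks, blockSize>>>(dImage, dCube);
    cudaDeviceSynchronize();

    // Copy the reconstructed cube back, then release all resources.
    cudaMemcpy(hostCube, dCube, cubeBytes, cudaMemcpyDeviceToHost);
    cudaFree(dImage);
    cudaFree(dCube);
    cudaHostUnregister(hostImage);
    cudaHostUnregister(hostCube);
}
```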

In thread-parallel processing, each thread in the CUDA kernel function computes its own tid, the globally unique index of the current thread. Here, blockIdx.x is the index of the current block, blockDim.x is the number of threads in each block, and threadIdx.x is the index of the current thread within its block. Combining these values yields the global index of the current thread [19]. The calculation formula is as follows:

$$tid = blockIdx.x \times blockDim.x + threadIdx.x$$
The data to be processed by the current thread is then determined using tid.

In parallel data processing, each thread handles different data based on its index tid. The data set includes the ranges of the 8050 dispersed spectral bands obtained in the previous processing, the corresponding wavelengths for each pixel, the reference wavelength array, and the target dispersion image. Each thread reads and processes the portion of data assigned to it according to its index tid and stores its computation results in the cube array (115 × 70 × 231). In this context, blockSize is set to 256, meaning each CUDA block has 256 threads; numBlocks is calculated by dividing the number of work items (the 8050 dispersion bands) by the block size (256) and rounding up, which determines the number of CUDA blocks to launch. The total number of threads is numBlocks * blockSize.
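
Under these assumptions about the calibration-table layout (our naming, not the authors'), the kernel might look like:

```cpp
// Sketch of the CUDA kernel: one thread per dispersion band, indexed by the
// tid of Eq. (3). xMin/xMax give each band's horizontal extent and yBase the
// detector row of its first channel; this data layout is an assumption.
__global__ void reconstructKernel(const unsigned char* image, int imageWidth,
                                  const int* xMin, const int* xMax,
                                  const int* yBase, float* cube) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;  // Eq. (3)
    if (tid >= 8050) return;  // excess threads in the last block exit early

    for (int k = 0; k < 231; ++k) {
        int y = yBase[tid] + k;                        // row of channel k
        int sum = 0;
        for (int x = xMin[tid]; x <= xMax[tid]; ++x)
            sum += image[y * imageWidth + x];          // Eq. (1)
        // Wavelength standardization via Eq. (2) would be applied here
        // before the store; omitted to keep the indexing pattern visible.
        cube[tid * 231 + k] = static_cast<float>(sum);
    }
}
```

With blockSize = 256, numBlocks = ⌈8050 / 256⌉ = 32, so 8192 threads are launched and the final 142 exit at the bounds check.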

Memory management uses cudaMalloc(), cudaMemcpy(), and cudaHostRegister() for device memory allocation, host-to-device transfers, and registration of host memory as pinned (mappable) memory, respectively [20]. On the NVIDIA GeForce RTX 4060 Laptop GPU, the time consumed by each method when handling the same amount of memory is shown in Table 1.

Table 1. Time consumed by each method for handling the same size of memory

3. Results and discussion

3.1 Experimental results of integral field snapshot imaging spectrometer

This section demonstrates the spectral data cube reconstruction of an object's dispersion image captured using an integral field snapshot imaging spectrometer.

In Fig. 9(a), the target image captured using an RGB camera is displayed, showcasing the original appearance of the object. The dispersion image formed on the detector's target surface through the front optical system is shown in Fig. 9(b). The original target is first imaged onto the pinhole array, which then discretely samples the imaged scene, generating the dispersion image of the object.

Fig. 9. Target object captured using an RGB camera (a) and dispersion diagram of the actual acquisition on the detector (b).

Spectral data cube reconstruction can be achieved by processing the spectral data of the 8050 dispersion bands. In this experiment, the 68th ($\lambda = 450\ \mathrm{nm}$), 112th ($\lambda = 550\ \mathrm{nm}$), and 179th ($\lambda = 650\ \mathrm{nm}$) bands of the hyperspectral image were used as the values of the R, G, and B components, respectively, producing the hypercube image shown in Fig. 10. This provides an intuitive representation of the final reconstruction result.

Fig. 10. 3D reconstructed hypercube (a) and plan view of the spectral data cube reconstruction (b).

3.2 Impact of GPU parallel computing on the performance of spectral data cube reconstruction

After the algorithm was parallelized and memory usage optimized, the spectral reconstruction algorithm performs most of its computations on the GPU, significantly enhancing computational efficiency. To evaluate the performance of the optimized parallel reconstruction algorithm against CPU serial processing, experimental comparisons were conducted. The specific results are summarized in Table 2.

Table 2. Comparison of runtime between CPU and GPU algorithms

As observed, the average processing time of the GPU-parallelized algorithm is approximately 29.43 ms, whereas the traditional CPU serial algorithm takes around 420.46 ms on average, giving an acceleration ratio of about 14.27. Through program parallelization and the advantages of GPU parallel computing, the efficiency of spectral reconstruction has been raised to a real-time level.

4. Conclusion and outlook

In conclusion, we introduced a snapshot imaging spectrometer, covering its optical design, calibration, the principle of spectral data cube reconstruction, and GPU-based acceleration of the reconstruction. This study particularly emphasized enhancing the reconstruction algorithm by means of GPU parallel computing technology, aiming at a considerable boost in computational efficiency for real-time applications. Noteworthy performance gains were achieved by transferring the majority of the computational tasks of the spectral reconstruction algorithm to the GPU through program parallelization and memory optimization.

The experimental results demonstrated that the average processing time of the GPU-based parallel algorithm is approximately 29.43 ms, while the average processing time of the traditional CPU serial algorithm is around 420.46 ms, resulting in an acceleration ratio of about 14.27. This indicates a substantial improvement in spectral reconstruction efficiency through the advantage of GPU parallel computing.

In the future, our focus is to refine the GPU parallelization algorithm to further enhance computational efficiency and overall performance, and our study will continue to evolve through further research and improvement. We anticipate that this work will provide crucial technical support for the perception and monitoring of crop growth traits in agricultural production, representing a significant step towards modernizing and advancing intelligence in the field.

Funding

National Key Research and Development Program of China (2021YFD2000101); National Natural Science Foundation of China (52105197).

Acknowledgments

The authors would like to thank the National Key Research and Development Program of China (2021YFD2000101) and the National Natural Science Foundation of China (52105197).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the corresponding author upon reasonable request.

References

1. A. Jung, R. Michels, and R. Graser, “Portable snapshot spectral imaging for agriculture,” Acta Agrar. Debr. 150, 221–225 (2018).

2. A.-K. Mahlein, E.-C. Oerke, U. Steiner, et al., “Recent advances in sensing plant diseases for precision crop protection,” Eur. J. Plant Pathol. 133(1), 197–209 (2012).

3. K. Dave and Y. N. Trivedi, “Crop-specific hyperspectral band selection method using limited ground-truth data,” Int. J. Remote Sens. 43(19-24), 7104–7116 (2022).

4. L. Li, X. Zheng, K. Zhao, et al., “Potential Evaluation of High Spatial Resolution Multi-Spectral Images Based on Unmanned Aerial Vehicle in Accurate Recognition of Crop Types,” J. Indian Soc. Remote Sens. 48(11), 1471–1478 (2020).

5. A. Montes De Oca and G. Flores, “The AgriQ: A low-cost unmanned aerial system for precision agriculture,” Expert Syst. Appl. 182, 115163 (2021).

6. Y. Wang, M. E. Pawlowski, S. Cheng, et al., “Light-guide snapshot imaging spectrometer for remote sensing applications,” Opt. Express 27(11), 15701 (2019).

7. H. Hu, H. Zhou, Z. Xu, et al., “Practical snapshot hyperspectral imaging with DOE,” Opt. Lasers Eng. 156, 107098 (2022).

8. X. Cao, T. Yue, X. Lin, et al., “Computational Snapshot Multispectral Cameras: Toward dynamic capture of the spectral world,” IEEE Signal Process. Mag. 33(5), 95–108 (2016).

9. F. Cai, J. Chen, X. Xie, et al., “The design and implementation of portable rotational scanning imaging spectrometer,” Opt. Commun. 459, 125016 (2020).

10. R. Milewski, T. Schmid, S. Chabrillat, et al., “Analyses of the Impact of Soil Conditions and Soil Degradation on Vegetation Vitality and Crop Productivity Based on Airborne Hyperspectral VNIR–SWIR–TIR Data in a Semi-Arid Rainfed Agricultural Area (Camarena, Central Spain),” Remote Sens. 14(20), 5131 (2022).

11. A. R. Longmire, T. Poblete, J. R. Hunt, et al., “Assessment of crop traits retrieved from airborne hyperspectral and thermal remote sensing imagery to predict wheat grain protein content,” ISPRS J. Photogramm. Remote Sens. 193, 284–298 (2022).

12. Y. Zhang, C. Xia, X. Zhang, et al., “Estimating the maize biomass by crop height and narrowband vegetation indices derived from UAV-based hyperspectral images,” Ecol. Indic. 129, 107985 (2021).

13. J. Cao, K. Liu, L. Liu, et al., “Identifying Mangrove Species Using Field Close-Range Snapshot Hyperspectral Imaging and Machine-Learning Techniques,” Remote Sens. 10(12), 2047 (2018).

14. J. Yue, G. Yang, C. Li, et al., “Estimation of Winter Wheat Above-Ground Biomass Using Unmanned Aerial Vehicle-Based Snapshot Hyperspectral Sensor and Crop Height Improved Models,” Remote Sens. 9(7), 708 (2017).

15. W. Feng, H. Rueda, C. Fu, et al., “3D compressive spectral integral imaging,” Opt. Express 24(22), 24859–24871 (2016).

16. Z. Yuanhong and L. Bo, “The RGB Digital Camera’s Multi-Channel Spectral Reconstruction Based on Basis Function Theory,” Procedia Eng. 29, 3594–3599 (2012).

17. C. Wei, Q. Li, X. Zhang, et al., “A Fast Snapshot Hyperspectral Image Reconstruction Method Based on Three-Dimensional Low Rank Constraint,” Can. J. Remote Sens. 47(4), 588–595 (2021).

18. S. Koundinya, H. Sharma, M. Sharma, et al., “2D-3D CNN Based Architectures for Spectral Reconstruction from RGB Images,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2018), pp. 957–9577.

19. M. Garland, S. Le Grand, J. Nickolls, et al., “Parallel Computing Experiences with CUDA,” IEEE Micro 28(4), 13–27 (2008).

20. I. Gelado and M. Garland, “Throughput-oriented GPU memory allocation,” in Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming (2019), pp. 27–37.


