Broad dual-band temporal compressive imaging with optical calibration

Open Access

Abstract

For applications such as remote sensing and bio-imaging, images from multiple bands can provide much richer information compared to a single band. However, most multispectral imaging systems have difficulty in acquiring images of high-speed moving objects. In this paper, we use a DMD-based temporal compressive imaging (TCI) system to obtain high-speed images of moving objects over a broad dual-band spectral range, in the visible and the near-infrared (NIR) bands simultaneously. To deal with the degraded reconstruction caused by the optics, four nonuniform calibration strategies are studied, which can also be implemented in other compressive imaging systems. Moving objects covered by paint or imaged through a diffuser are reconstructed to demonstrate the superior performance of the calibrated broad dual-band TCI system.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In applications such as astronomy [1,2], remote sensing [3], agriculture assessment and management [4], food quality inspection [5], and bio-imaging [6], multi-band images can provide much richer information about an object than single-band images. To obtain multispectral or hyperspectral images, two classes of spectral imaging systems are mainly used [7]: systems with scanning parts, such as a pushbroom spectrometer or a tunable filter camera [8], and snapshot spectral imaging systems, such as a coded aperture snapshot spectral imager (CASSI) [9] or a snapshot hyperspectral imaging Fourier transform spectrometer (SHIFT) [10,11]. Generally, both classes have difficulty in imaging moving objects [7]. For the former, the scanning mechanism limits its application to moving objects. For the latter, the frame rate of the detector array limits its imaging speed. Therefore, we turn to a temporal compressive imaging (TCI) system with the aim of increasing the imaging speed.

TCI was first studied for single-band high-speed imaging. It is motivated by the need to push the tradeoff between spatial and temporal resolutions in an imaging system, as these two critical imaging parameters often restrict each other. For a high spatial resolution system, the imaging speed is generally slow; for a high-speed system, the resolution is generally limited. This tradeoff arises from two factors, namely, the large data transmission bandwidth required by the system, and the data transmission capacity defined by the integrated circuit in a detector array and its driver. Generally, in an imaging system, more data require more transmission time. To overcome this issue, TCI has been studied, where high-speed object frames are modulated by a spatial light modulator (SLM), and then a low-speed sensor is used to make measurements for reconstruction [12,13]. The idea is also called single snapshot compressive imaging.

In a TCI system, the SLM is a central component. An early setup uses Liquid Crystal on Silicon (LCoS) to modulate the high-speed frames [14]. Later, binary masks on motorized stages [12,15] and Digital Micro-mirror Devices (DMDs) [13,16] have also been used for TCI. Comparing these light modulation devices, a system using an LCoS or a mask has a relatively simple optical part because the object and image planes share a common axis. However, an LCoS device modulates the phase of light, so it does not work with incoherent object light. For masks on a motorized stage, the light modulation patterns are limited. Because of these limitations, we study TCI using a DMD in this work.

As discussed in the beginning, conventional spectral imaging systems are not suitable for moving objects. However, temporal compressive imaging is designed to deal with moving objects. Thus, combining temporal compressive imaging with snapshot compressive spectral imaging, specifically with CASSI, a spectral imaging system for moving objects can be obtained [17]. Note that generally, the spectral resolution and range of a spectral imaging system are restricted by each other [18]. The spectral range of spectral or spectral-temporal compressive imaging is restricted to a comparatively narrow band, such as a set of narrow channels within the visible band. To work in a wide band, such as a visible plus NIR band, multiple cameras for different bands are required. On the other hand, it should be noted that a DMD has two reflection directions. It naturally works like a beam splitter and an SLM. Combined with two detector arrays, it functions as an imaging system in two wide bands. Additionally, with dispersive elements in each branch, it becomes a spectral-temporal compressive imaging system in two wide bands.

The idea of using both reflection directions of a DMD has not been common in compressive imaging. In most systems, only one reflection direction is used. Several groups have used the two reflection directions to study single-pixel dual-band spatial compressive imaging [19–21]. However, to the best of our knowledge, we are the first to work on visible plus NIR dual-band temporal compressive imaging, which places much higher requirements on the system optics design or optical calibration. For dual-band single-pixel spatial compressive imaging, a single detector is used to collect system measurements for one band. Thus, the optical lens between an SLM and a detector is simple. The only requirement is to collect all light from the SLM onto the detector. For TCI, the detector resolution is the same as the resolution of the SLM. Thus, the errors caused by lens aberration and misalignment will degrade the system performance significantly.

It has been recognized that calibration is an important factor that restricts the application of compressive imaging [22]. In compressive imaging and many fields of computational imaging, nontraditional imaging architectures or elements such as an SLM are used. These nontraditional architectures require new or specifically designed optical lenses. Even with customized lenses, aberrations and misalignment can easily introduce errors into the system measurements. Thus, calibration is critical for computational imaging. However, over the years, not many calibration methods have been studied. In compressive imaging, there is an SLM at an intermediate position of the system in addition to multiple lenses. It is difficult to obtain the ideal measurement at the SLM position. Hence, traditional calibration methods [23,24] are not valid. A compressive imaging system usually consists of two parts: one from an object to an SLM, which is referred to as Part 1 in this paper, and the other from a DMD to a detector or a focal plane array (FPA), which is referred to as Part 2. A uniform calibration method has been studied for Part 2 in a single-band spatial compressive imaging system [25]. In this work, we study four nonuniform calibration strategies for visible plus NIR dual-band TCI.

Although it is not our focus in this work, reconstruction methods are very important for TCI. Several kinds of reconstruction methods have been discussed thoroughly, such as Two-step Iterative Shrinkage/Thresholding (TwIST) [13], Generalized Alternating Projection based Total Variation (GAP-TV) [26], Decompress Snapshot Compressive Imaging (DeSCI) [27], and Gaussian Mixture Model (GMM) based methods [28]. The first two methods use total variation as a constraint for reconstruction. The DeSCI method uses the weighted nuclear norm minimization (WNNM) model to reformulate the penalty term in object reconstruction. These three methods use different regularization for the reconstruction optimization problem. In the GMM-based method, the problem is defined in another way. Each pixel value is modeled as a random variable whose probability density function is a sum of weighted Gaussian distributions. The reconstruction process is to find the mean value of this GMM model. Although these methods solve the reconstruction problem from different aspects, they all use iterative algorithms, which are time consuming.

To shorten the reconstruction process, several deep learning networks have been studied [29–32]. In this work, we use the GMM method for reconstruction, because it performs well with tolerable computation time; moreover, our calibration strategies are not sensitive to the choice of reconstruction method. In addition to reconstruction methods, different application fields of TCI have been explored, such as stereo imaging [33], spectral video [34], expanded field of view [35], microscopy [36], and imaging through scattering media [37]. As discussed before, in this work, we study a broad dual-band TCI system [38], which uses the two reflection directions of a DMD device to capture high-speed object frames using low-frame-rate cameras in the visible and the NIR bands simultaneously. If one adds dispersive elements in both branches, the system becomes a broad dual-band spectral-temporal compressive multichannel imaging system.

In summary, our main contributions are twofold. First, for the first time, we show how to build a dual-band TCI system, and demonstrate its potential applications for broad visible and NIR dual-band fast imaging. Second, we study four calibration strategies for TCI. These calibration strategies are simple, fast, and easy to implement. They can also be used for other computational imaging problems, such as block-wise spatial compressive imaging [39,40].

The paper is organized as follows. In Section 2, we discuss the broad dual-band temporal compressive imaging idea with an ideal measurement model. Then in Section 3, we discuss the effect of a lens point spread function (PSF) on a TCI system and study four strategies for calibration. The calibration processes for the part from an object to a DMD (Part 1) and the part from a DMD to an FPA (Part 2) are discussed in detail. In Section 4, experimental results are used to evaluate the four strategies. Then we use them for high-speed object reconstruction in different visible plus NIR broad dual-band scenarios. In the end, we draw conclusions.

2. Broad dual-band temporal compressive imaging (TCI)

In Fig. 1, we present a system diagram for the visible plus NIR dual-band TCI. In such a system, the moving object is imaged onto a DMD. Then, the DMD device modulates the image sequence of the moving object at high speed. The two reflection directions of the DMD in Fig. 1 are used for the visible and the NIR bands, respectively. In each direction, a modulated image sequence is refocused onto a low-speed detector array to make measurements. Notice that the resolutions of the two detector arrays are the same as the resolution of the DMD. This is different from spatial compressive imaging [39–41]. Based on the system diagram, a temporal compressive imaging system is similar to a spatial compressive imaging system. Both of them use a DMD as the central modulation device. However, spatial compressive imaging utilizes the spatial resolution of a DMD, while TCI uses its high speed. Thus, the detector array in spatial compressive imaging has a lower resolution than the DMD, while in TCI the resolutions of the DMD and the detector arrays are the same.

Fig. 1. A system diagram for broad dual-band temporal compressive imaging (TCI).

To model the measurement collection process, we define the moving object sequence as $O(x,y,t,\lambda )$ and the DMD modulation pattern as $W(x,y,t,\lambda )$. Here, $x$ and $y$ indicate the spatial coordinates. The parameter $t$ indicates the time. The parameter $\lambda =\{1,2\}$ indicates the visible and the NIR bands, respectively. The measurements at a detector array can be written as

$$D(x,y,\lambda)=\int_{t_1}^{t_2} O(x,y,t,\lambda)W(x,y,t,\lambda) \, dt .$$

In a discrete format, the measurements become $D(m,n,\lambda )=\sum\limits_{t_i=1}^{K}O(m,n,t_i,\lambda )W(m,n,t_i,\lambda )$ with $1\leq m \leq M$ and $1\leq n \leq N$. In a matrix form, we can rewrite the measurements as

$${\mathbf{d}}_{\lambda}=\left[ \begin{array}{c} d_{1,\lambda} \\ d_{2,\lambda} \\ \vdots \\ d_{MN,\lambda} \end{array} \right] =\left[ \begin{array}{cccc} {\mathsf{W}}_{1,\lambda} & {\mathsf{W}}_{2,\lambda} & \cdots & {\mathsf{W}}_{K,\lambda} \end{array} \right] \left[ \begin{array}{c} {\mathbf{o}}_{1,\lambda} \\ {\mathbf{o}}_{2,\lambda} \\ \vdots \\ {\mathbf{o}}_{K,\lambda} \end{array} \right] = \sum_{k=1}^{K} {\mathsf{W}}_{k,\lambda}{\mathbf{o}}_{k,\lambda} ={\mathsf{W}}_{\lambda}{\mathbf{o}}_{\lambda},$$
where matrix ${\mathsf{W}}_{k,\lambda}\in \mathbf{R}^{MN \times MN}$ is a diagonal matrix for the $k$th modulation pattern in the $\lambda$th band, $k=(1,\ldots ,K)$ and $\lambda =\{1,2\}$. Its diagonal elements are the modulation pattern values. The vector ${\mathbf{o}}_{k,\lambda} \in \mathbf{R}^{MN \times 1}$ is for the $k$th frame of an object. Notice that in a dual-band TCI system, the measurement matrices in the two bands are complementary. Thus, the measurement matrix ${\mathsf{W}}_{1}$ in the visible band is equal to $1-{\mathsf{W}}_{2}$, where ${\mathsf{W}}_{2}$ is the matrix in the NIR band. Although the two measurement matrices are connected in this way, we focus on the difference of a moving object between the two bands. Thus, we formulate the dual-band measurement processes separately. The reconstruction processes are also implemented independently for the two bands. Because the calibration strategies are the same for both bands, to simplify notation we omit the subscript $\lambda$ for the rest of the paper. To reconstruct the original high-speed moving object sequences, we use the GMM algorithm [13].
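
To make the discrete model in Eq. (2) concrete, the following Python sketch simulates one snapshot measurement per band; the array sizes, frame count, and random data are illustrative assumptions, not the actual system parameters.

```python
import numpy as np

# Illustrative sizes only (not the parameters of the real hardware)
M, N, K = 120, 120, 10                    # spatial resolution and frames per snapshot

rng = np.random.default_rng(0)
o = rng.random((K, M, N))                 # high-speed object frames o_k
w = rng.integers(0, 2, size=(K, M, N))    # binary DMD patterns W_k (0/1 mirror states)

# Visible-band snapshot: element-wise modulation summed over time, as in Eq. (2)
d_vis = np.sum(w * o, axis=0)

# The NIR branch sees the complementary mirror states, so its patterns are 1 - W_k
d_nir = np.sum((1 - w) * o, axis=0)
```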

3. Optical calibration in broad dual-band TCI

Based on our previous study, the two main factors that restrict the reconstruction performance of a compressive imaging system are truncation error and noise [42]. The truncation error comes from having fewer measurements than the dimension of an object. In TCI, it means fewer measurement frames than reconstructed frames. The noise can be additive detector noise such as thermal noise, or nonadditive noise such as shot noise when the light level is very low. Besides these two factors, another important factor limiting reconstruction performance is the system calibration error. It is also a main factor that restricts the application of compressive imaging to different fields [22]. To calibrate the system error, the first step is to reformulate the measurement model more carefully. Most previous works on compressive imaging use a mathematical model similar to Eq. (2), which does not consider the influence of optics in a system. On the other hand, an imaging system never has a perfect impulse response function. Thus, an inherent error enters the model and degrades system performance.

To reduce this inherent error, we remodel a system as

$$D(x,y)=\int_{t_1}^{t_2}H_2(x,y)*\left[W(x,y,t)\left(H_1(x,y)*O(x,y,t)\right)\right]dt.$$

The function $H_1(x,y)$ represents the point spread function (PSF) of the lens $L_1$ between an object and a DMD. $H_2(x,y)$ is for the PSF of the lens $L_2$ or $L_3$ between a DMD and an FPA. If we digitize the equation, then it becomes

$${\mathbf{d}}=\sum_{k=1}^{K} {\mathsf{H}}_2{\mathsf{W}}_k{\mathsf{H}}_1{\mathbf{o}}_k,$$
with ${\mathsf{H}}_1\in \mathbf{R}^{MN \times MN}$ and ${\mathsf{H}}_2\in \mathbf{R}^{MN \times MN}$. Notice that in broad dual-band TCI, the imaging part from a moving object through lens $L_1$ to a DMD is similar to a conventional imaging system. However, a dual-band TCI works in two bands, which increases the requirements on lens $L_1$. Additionally, in the part from a DMD to a detector array, neither the DMD nor the detector array is perpendicular to the optical axis, so it is easy to introduce additional aberration. Therefore, in this work, we first focus on the second part, Part 2, which is from a DMD to a detector array.

3.1 From a DMD to an FPA (Part 2)

To focus on the calibration of Part 2, we first simplify the dual-band TCI measurement process as

$${\mathbf{d}}=\sum_{k=1}^{K} {\mathsf{H}}_2{\mathsf{W}}_k{\mathbf{o}}_k.$$

Because $ {\mathsf {H}}_2$ is time invariant, the measurement process can be written as

$${\mathbf{d}}={\mathsf{H}}_2\sum_{k=1}^{K}{\mathsf{W}}_k{\mathbf{o}}_k = {\mathsf{H}}_2{\mathbf{d}}_{ideal},$$
where $ {\mathbf {d}}_{ideal}$ represents the ideal measurements, or the measurements with a perfect lens PSF. Note that with Eq. (6) and a set of samples of $ {\mathbf {d}}$ and $ {\mathbf {d}}_{ideal}$, we could estimate $ {\mathsf {H}}_2$, and then use the estimation and Eq. (5) to reconstruct a moving object. Another way for reconstruction is to pre-process the system raw measurements $ {\mathbf {d}}$ to obtain $ {\mathbf {d}}_{ideal}$, and then use Eq. (2).

To estimate ${\mathsf{H}}_2$, a conventional method is to scan a pinhole over the object field of view. Here, it is equivalent to setting a single DMD pixel value to 1. However, such a calibration process is time-consuming. Thus, in the rest of the paper, instead of using a single DMD pixel at a time, we use a set of random binary patterns for optical calibration. Additionally, we study nonuniform PSFs for lens $L_2$ or lens $L_3$ in this work, which is different from the uniform PSF assumption discussed in the literature [25]. Thus, the columns of ${\mathsf{H}}_2$, which represent the PSF at different pixel positions, are not identical.

We study four dual-band TCI calibration strategies using random binary patterns. In the first one, $M1$, we assume that the detector measurements in a small area of size $(m\times n)$ come from the corresponding $(m\times n)$ DMD pixels. Figure 2(a) presents the imaging model between DMD pixels and detector pixels. We define the measurements as

$${\mathsf{Y}}=\hat{{\mathsf{H}}}_2{\mathsf{W}}_{cali}.$$

In the equation, the columns of ${\mathsf{W}}_{cali}\in \mathbf{R}^{mn \times L}$ and ${\mathsf{Y}}\in \mathbf{R}^{mn \times L}$ represent the random binary patterns on the DMD and their images collected by the detector array, respectively. Each column of $\hat{{\mathsf{H}}}_2\in \mathbf{R}^{mn \times mn}$ represents the PSF from a DMD pixel to the set of detector pixels. To estimate $\hat{{\mathsf{H}}}_2$, we use the least-squares (LS) method. Then, using the estimate $\hat{{\mathsf{H}}}_{2,est}={\mathsf{Y}}{\mathsf{W}}_{cali}^{T}\left({\mathsf{W}}_{cali}{\mathsf{W}}_{cali}^{T}\right)^{-1}$ and Eq. (5), we can obtain object reconstructions.
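
As a minimal sketch of strategy $M1$, the block-wise least-squares estimate of Eq. (7) can be computed as follows; the block size and pattern count match those used later in the experiments, but the calibration data here are random placeholders rather than measured images.

```python
import numpy as np

m, n, L = 12, 12, 200                        # block size and number of calibration patterns
rng = np.random.default_rng(1)

# Placeholder calibration data: binary DMD patterns and noisy detector measurements
W_cali = rng.integers(0, 2, size=(m * n, L)).astype(float)
Y = W_cali + 0.05 * rng.standard_normal((m * n, L))

# Least-squares estimate of the block PSF matrix, Eq. (7): H2_est = Y W^T (W W^T)^{-1}
H2_est = Y @ W_cali.T @ np.linalg.inv(W_cali @ W_cali.T)

# Forward check: re-predict the calibration measurements from the estimate
Y_pred = H2_est @ W_cali
```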

Fig. 2. The imaging model between a DMD and a detector array in calibration strategy (a) $M1$ or $M2$, (b) $M3$, and (c) $M4$.

It should be noted that using the imaging model in Fig. 2, we can also calculate a matrix $\hat{{\mathsf{H}}}_{2}^{(inv)}$, which is the solution to $\hat{{\mathsf{H}}}_{2}^{(inv)}{\mathsf{Y}}={\mathsf{W}}_{cali}$. Then, we can pre-process broad dual-band TCI measurements using $\hat{{\mathsf{H}}}_{2}^{(inv)}={\mathsf{W}}_{cali}{\mathsf{Y}}^{T}\left({\mathsf{Y}}{\mathsf{Y}}^{T}\right)^{-1}$ to obtain $\hat{{\mathsf{H}}}_{2}^{(inv)}{\mathbf{d}}$ before reconstructing the moving object frames. We name this the second strategy, $M2$.
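
Strategy $M2$ solves the least-squares problem in the opposite direction and uses the result as a pre-processing filter. A hedged sketch, again with placeholder calibration data:

```python
import numpy as np

m, n, L = 12, 12, 200
rng = np.random.default_rng(2)
W_cali = rng.integers(0, 2, size=(m * n, L)).astype(float)   # binary DMD calibration patterns
Y = W_cali + 0.05 * rng.standard_normal((m * n, L))          # placeholder detector measurements

# M2: least-squares inverse operator, H2_inv = W Y^T (Y Y^T)^{-1}
H2_inv = W_cali @ Y.T @ np.linalg.inv(Y @ Y.T)

# Pre-process one block of raw TCI measurements before running the reconstruction
d_raw = Y[:, 0]                  # stand-in for one measured block, flattened
d_corrected = H2_inv @ d_raw     # approximation of the corresponding ideal measurement
```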

In the third strategy, $M3$, we use the measurements of a single detector pixel and a set of DMD pixel values of size $(m\times n)$. Figure 2(b) presents the imaging model for $M3$. We model the detector measurements as

$${\mathbf{y}}^{T}={\mathbf{h}}_2^{T}{\mathsf{W}}_{cali}.$$

Here, a row vector $ {\mathbf {y}}^{T}\in \mathbf {R}^{1 \times L}$ represents the multiple measurements of a detector. Each column of $ {\mathsf {W}}_{cali}\in \mathbf {R}^{mn \times L}$ represents a set of DMD pixel values. These DMD pixels are centered at the conjugate position of the detector. The row vector $ {\mathbf {h}}_2^{T}\in \mathbf {R}^{1 \times mn}$ represents the convolution kernel from the set of DMD pixels to the detector. We use the pseudo-inverse of $ {\mathsf {W}}_{cali}$ to estimate $ {\mathbf {h}}_2^{T}= {\mathbf {y}}^{T} {\mathsf {W}}_{cali}^{T}\left ( {\mathsf {W}}_{cali} {\mathsf {W}}_{cali}^{T}\right )^{-1}$. Repeating this process for different detector pixels, we can obtain a set of $ {\mathbf {h}}_2^{T}$ vectors, and then form a matrix $\hat { {\mathsf {H}}}_{2,est}$. Notice that this method is similar to calculating each row of $ {\mathsf {H}}_2$ in $M1$. However, we can see that this method may have a larger error, because it does not include other rows of $ {\mathsf {H}}_2$ in the calibration process.
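
A sketch of the per-pixel estimation in $M3$; the loop below runs over a few detector pixels only, and both the local DMD patterns and the single-pixel measurements are simulated placeholders.

```python
import numpy as np

m, n, L = 9, 9, 200                        # DMD neighborhood size and number of calibration patterns
rng = np.random.default_rng(3)

H2_rows = []
for p in range(4):                         # a few detector pixels for illustration; the real loop covers all of them
    W_cali = rng.integers(0, 2, size=(m * n, L)).astype(float)  # DMD values around the conjugate position of pixel p
    h_true = rng.random(m * n)
    h_true /= h_true.sum()                                       # hypothetical local blur kernel
    y = h_true @ W_cali + 0.01 * rng.standard_normal(L)          # simulated single-pixel measurements
    # Row-wise least squares, Eq. (8): h2^T = y^T W^T (W W^T)^{-1}
    h2 = y @ W_cali.T @ np.linalg.inv(W_cali @ W_cali.T)
    H2_rows.append(h2)

H2_est = np.vstack(H2_rows)                # estimated kernels stacked row by row to form H2
```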

In the last method $M4$, we use the measurements of a set of detectors with size $(m\times n)$. We assume that these measurements come from a single DMD pixel. Figure 2(c) presents the imaging model in $M4$. The following equation is used to estimate a filter for system measurement preprocessing, where

$${\mathsf{Y}}={\mathbf{h}}_2{\mathbf{w}}_{cali}^{T}.$$

In this equation, each column of ${\mathsf{Y}}\in \mathbf{R}^{mn \times L}$ represents one set of detector measurements corresponding to one DMD pixel value. The row vector ${\mathbf{w}}_{cali}^{T}\in \mathbf{R}^{1 \times L}$ represents the $L$ values of the DMD pixel. The vector ${\mathbf{h}}_2\in \mathbf{R}^{mn \times 1}$ indicates the PSF from a DMD pixel to the detector pixels. From Eq. (9), we can obtain a vector ${\mathbf{h}}_2^{(inv)}={\mathbf{w}}_{cali}^{T}{\mathsf{Y}}^{T}\left({\mathsf{Y}}{\mathsf{Y}}^{T}\right)^{-1}$ to preprocess TCI measurements. The result ${\mathbf{h}}_2^{(inv)}{\mathsf{Y}}$ is an estimate of ${\mathbf{w}}_{cali}^{T}$. For TCI measurements ${\mathbf{d}}$, the preprocessing result becomes $\hat{{\mathsf{H}}}_{2}^{(inv)}{\mathbf{d}}$. Notice that this mathematical model represents the imaging process accurately if we use an individual DMD pixel to calibrate the system. However, random binary patterns are used as discussed before. At first sight, Eq. (9) does not match the imaging process very well, because the measurements in ${\mathsf{Y}}$ include light from DMD pixels that are not defined in ${\mathbf{w}}_{cali}^{T}$. The idea is to consider this light as error or noise. Thus, using method $M4$, we expect ${\mathbf{h}}_2^{(inv)}$ to be more tolerant to system errors.
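
As a sketch of $M4$, the rank-one model of Eq. (9) and its least-squares inverse filter can be written as follows; the kernel and measurements are simulated placeholders under the same neighborhood size as above.

```python
import numpy as np

m, n, L = 9, 9, 200                        # detector neighborhood size and number of calibration patterns
rng = np.random.default_rng(4)

w_cali = rng.integers(0, 2, L).astype(float)          # the L binary values of one DMD pixel
h_true = rng.random(m * n)
h_true /= h_true.sum()                                # hypothetical PSF from that pixel to the neighborhood
Y = np.outer(h_true, w_cali) + 0.01 * rng.standard_normal((m * n, L))  # simulated detector measurements

# M4 inverse filter from Eq. (9): h2_inv = w_cali^T Y^T (Y Y^T)^{-1}
h2_inv = w_cali @ Y.T @ np.linalg.inv(Y @ Y.T)        # one weight per detector pixel in the neighborhood

# Applying the filter to the measurements recovers an estimate of the DMD pixel sequence
w_est = h2_inv @ Y
```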

In summary, with two of the four calibration strategies, we process the binary pattern measurements ${\mathbf{d}}$, making the results close to the ideal patterns ${\mathbf{d}}_{ideal}$. Then we use the obtained $\hat{{\mathsf{H}}}_{2}^{(inv)}$ or ${\mathbf{h}}_2^{(inv)}$ to preprocess the system measurements for high-speed object reconstruction. With the other two strategies, we estimate ${\mathsf{H}}_{2,est}$ to make ${\mathsf{H}}_{2,est}{\mathbf{d}}_{ideal}$ close to the binary pattern measurements ${\mathbf{d}}$. Then ${\mathsf{H}}_{2,est}$ and the system raw measurements are used for reconstruction.

3.2 From an object to a DMD (Part 1)

In the calibration for Part 2, we assume an all-white object. Thus, the results do not include the calibration from an object to a DMD, i.e., the calibration of Part 1. Compared with Part 2, there are three difficulties in Part 1. First, the system measurements are collected at an FPA, instead of at the DMD. Thus, we do not have the raw measurements at the DMD plane. Second, it is hard to know the ideal measurement or image of an object at the DMD, which is critical for optical calibration. Third, it is hard to generate multiple objects for calibration. In Part 2, we do not have the last two difficulties, because we use a DMD, which is a programmable device, as the object. Thus, we know the objects exactly. In Part 1, we lack such a device. Hence, it is hard to generate ideal objects for calibration.

To deal with these issues, we have the following solutions. For the first difficulty, the measurement at the DMD is the image of an object, or the reconstruction obtained without considering the PSF of Part 1. Thus, we use the system reconstructions with the calibration of Part 2 as the acquired raw measurements at the DMD. For the second difficulty, a pre-defined object such as a checkerboard is used for the calibration. We reconstruct an image of the checkerboard at the DMD. Although this image is not very sharp, we can estimate its basic parameters, such as its edges, height, and width. From these parameters, we can generate the ideal image of the checkerboard. For the third difficulty, we still only use one checkerboard for calibration. To compensate for the lack of multiple calibration objects, we assume a locally uniform PSF for lens $L_1$. We divide a reconstructed image into several parts, such as a $3\times 3$ array. In each part, we assume a uniform PSF. A sliding window is used in each sub-image for calibration. Then, for each sub-area, the same strategies $M1\sim M4$ as for Part 2 are used for the calibration of Part 1.
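
The locally uniform PSF assumption for Part 1 can be sketched as below; the `sliding_patches` helper, the $3\times 3$ sub-division, and the raw/ideal checkerboard arrays are illustrative placeholders, and an $M4$-style vector filter is shown as one of the possible per-sub-area strategies.

```python
import numpy as np

M, N, m, n = 120, 120, 9, 9
rng = np.random.default_rng(5)
raw = rng.random((M, N))        # placeholder: reconstruction of the checkerboard at the DMD plane
ideal = rng.random((M, N))      # placeholder: estimated ideal checkerboard image

def sliding_patches(img, m, n):
    """Collect every (m x n) sliding window of a sub-image as one column of a matrix."""
    rows, cols = img.shape
    patches = [img[i:i + m, j:j + n].ravel()
               for i in range(rows - m + 1) for j in range(cols - n + 1)]
    return np.stack(patches, axis=1)

kernels = []
for bi in range(3):
    for bj in range(3):
        sub_raw = raw[bi * 40:(bi + 1) * 40, bj * 40:(bj + 1) * 40]
        sub_ideal = ideal[bi * 40:(bi + 1) * 40, bj * 40:(bj + 1) * 40]
        Y = sliding_patches(sub_raw, m, n)       # plays the role of the measurements
        W = sliding_patches(sub_ideal, m, n)     # plays the role of the ideal patterns
        w_center = W[(m * n) // 2, :]            # central-pixel values of the ideal patches
        # M4-style inverse filter for this sub-area; the small ridge term keeps the toy example stable
        h_inv = w_center @ Y.T @ np.linalg.inv(Y @ Y.T + 1e-6 * np.eye(m * n))
        kernels.append(h_inv)
```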

4. Experimental results

Figure 3(a) presents our experimental system. A visible plus NIR light source is used to illuminate a rotating object. In the experiment, the spatial resolution of the object is $120\times 120$. Figures 3(b) and (c) present a binary pattern and its image on the visible band detector array. It can be seen that aberration causes the image to be blurred. The detector array speed is set to 50 fps. The temporal compression ratio is 10. Thus, from one frame of system measurements, we reconstruct 10 object frames. In other words, the system imaging speed is 500 fps.

Fig. 3. (a) The experimental setup for broad dual-band TCI; (b) a binary pattern displayed on the DMD; (c) the measurement of the pattern collected by the visible band detector array.

4.1 Reconstruction of broad dual-band TCI without calibration

In the first set of experimental results, we reconstruct moving objects from system measurements using the GMM method without optical calibration. This is a baseline method. Figures 4(a), (b), and (c) present three raw TCI measurements in the visible band. The three objects are a moving number 7, a moving number 0 covered by paint, and a moving checkerboard. Figures  4(d), (e), and (f) show the raw measurements in the NIR band. Because of the paint, we cannot see the number 0 in the visible band as shown in (b). However, in the NIR band, it can be observed clearly.

Fig. 4. The raw measurements of (a) a rotating number 7, (b) a rotating number 0 covered by paint, and (c) a rotating checkerboard in the visible band; (d$\sim$f) the raw measurements of the same objects as in (a$\sim$c) in the NIR band.

We use the raw measurements with the ideal binary sensing matrix ${\mathsf{W}}$ as presented in Fig. 3(b) for reconstruction. Figure 5(a) presents frames 1, 3, 5, 7, and 9 among the 10 reconstructed frames using the baseline method. It is clear that, without calibration, the reconstruction quality is poor. In Fig. 5(b), we present the 5 reconstructed frames using the measured sensing matrix ${\mathsf{W}}_{meas}$ as shown in Fig. 3(c). Note that no calibration is used in the reconstruction of these frames. Even without PSF calibration, the reconstructions in Fig. 5(b) are much better than the reconstructions using the ideal sensing matrix ${\mathsf{W}}$. However, for an object with more details such as the checkerboard, the reconstructed frames can be improved further.

Fig. 5. The frames 1, 3, 5, 7, and 9 in the 10 reconstructed frames in the visible band, using (a) $ {\mathsf {W}}$ and (b) $ {\mathsf {W_{meas}}}$ with raw measurements without PSF calibration.

To observe the reconstruction quality more clearly, we plot the pixel values along two lines in the 4th reconstructed checkerboard frame. The two lines are shown in Fig. 6. The pixel values along the two lines are plotted in Figs. 7(a) and (b), respectively. It is clear that the blue curves with diamond markers, for the reconstructions using ${\mathsf{W}}_{meas}$, have better contrast and less noise. The shapes are also closer to square waves.

Fig. 6. The positions of lines 1 and 2 in the reconstructed frame 4 for the checkerboard object.

Fig. 7. The pixel values at lines 1 (a) and 2 (b) in the reconstructed frame 4 for the checkerboard object in the visible band without calibration.

We repeat the reconstruction for the NIR band measurements. The results are shown in Fig. 8. Once again, we can observe that the object covered by paint can be reconstructed clearly. The reconstructions obtained using ${\mathsf{W}}_{meas}$ are much better than the reconstructions using the ideal matrix ${\mathsf{W}}$. Compared with the rotating numbers, the reconstructed checkerboard frames leave more room for improvement. In Fig. 9, we also plot the pixel values along the two lines shown in Fig. 6. Once again, we can observe that the curves for the reconstructions using ${\mathsf{W}}_{meas}$ have less noise. The shapes are closer to square waves.

Fig. 8. The frames 1, 3, 5, 7, and 9 in the 10 reconstructed frames in the NIR band, using (a) $ {\mathsf {W}}$ and (b) $ {\mathsf {W_{meas}}}$ with raw measurements without PSF calibration.

Fig. 9. The pixel values at lines 1 (a) and 2 (b) in the reconstructed frame 4 for the checkerboard object in the NIR band without calibration.

4.2 Optical calibration

To improve the system reconstruction performance, we study four optical calibration strategies. As discussed in Section 3, we calibrate the system in two parts. Part 1 is from an object to a DMD. Part 2 is from a DMD to an FPA. In this subsection, we discuss the calibration methods for Part 2 first, then for Part 1.

4.2.1 Optical calibration for Part 2 (the part between a DMD and an FPA)

To calibrate Part 2, we use 200 random binary patterns and the four strategies $M1$, $M2$, $M3$, and $M4$. Here we only present the calibration results in the visible band. The process for the NIR band is the same, and we obtain similar results and the same conclusions.

In $M1$, we set the block size as $(12\times 12)$. Using Eq. (7), we obtain the estimated matrix $\hat{{\mathsf{H}}}_{2,est}$ for each block. The matrices $\hat{{\mathsf{H}}}_{2,est}$ for all blocks are stitched together to form a $(1440\times 1440)$ matrix $\hat{{\mathsf{H}}}_{2,est}^{(total)}$. In the first column of Fig. 10(a), we present the three estimated PSFs at the positions $(20,20)$, $(60,60)$, and $(100,100)$ in the $(120\times 120)$ area. It is clear that the three PSFs are different from each other. The center column of Fig. 10(a) is an enlarged part of the matrix $\hat{{\mathsf{H}}}_{2,est}^{(total)}$. The right column shows the estimated detector measurements using Eq. (7) and $\hat{{\mathsf{H}}}_{2,est}^{(total)}$. Although there are some blocking artifacts in the estimated PSF, the measurement estimate in the right column is close to the raw detector measurements presented in Fig. 3(c).

Fig. 10. The PSFs at $(20,20)$, $(60,60)$, and $(100,100)$, an enlarged part of $ {\mathsf {H}}_{2,est}$, and the estimated binary pattern measurements or the estimated patterns using (a) $M1$; (b) $M3$ ; (c) $M2$; and (d) $M4$ for Part 2.

Another method that we use to obtain $\hat{{\mathsf{H}}}_{2,est}$ is $M3$. In a $(9\times 9)$ DMD pixel block area, every DMD pixel contributes some light to the detector. The estimated results using $M3$ are presented in Fig. 10(b). The figure on the left is for the PSFs at the same three positions as in Fig. 10(a). The central figure in Fig. 10(b) shows the enlarged matrix $\hat{{\mathsf{H}}}_{2,est}^{(total)}$, while the right one shows the estimated measurements. Comparing the estimated measurements in (a) and (b), they do not differ much from each other.

We repeat the calibration process using $M2$ and $M4$. The estimated results are presented in Figs. 10(c) and (d). Once again, the estimated PSFs at $(20,20)$, $(60,60)$, and $(100,100)$ are different from each other. The estimated vectors ${\mathbf{h}}_2^{(inv)}$ are stitched together, and an enlarged part of these matrices is presented. The block sizes used in $M2$ and $M4$ are $(12\times 12)$ and $(9\times 9)$, respectively. We observe more blocking artifacts in $M2$. Using ${\mathbf{h}}_2^{(inv)}$, we pre-process the detector measurements. The results in (c) and (d) are close to the original binary DMD pattern shown in Fig. 3(b).

To evaluate the performance of the four strategies, we summarize in Table 1 the PSNR values for the estimated measurements using $M1$ and $M3$ and the preprocessed results using $M2$ and $M4$. From the table, the highest PSNR value is obtained with $M1$. Generally, $M1$ and $M3$ work better than $M2$ and $M4$. However, these PSNR values evaluate the calibration process only; the ranking may differ for object reconstructions.
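
For reference, the PSNR values in Tables 1 and 2 follow the standard definition; a minimal sketch of the metric, assuming the compared patterns are stored as arrays normalized to a known peak value:

```python
import numpy as np

def psnr(estimate, target, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images with values in [0, peak]."""
    mse = np.mean((np.asarray(estimate, float) - np.asarray(target, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```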


Table 1. PSNR for the four calibration strategies with Part 2

4.2.2 Optical calibration for Part 1 (the part between an object and a DMD)

To calibrate Part 1, we use a checkerboard. The raw measurement is shown in Fig. 11(a), and Fig. 11(b) presents the estimated ideal image. Using the same strategies $M1\sim M4$, we obtain the calibration results shown in Fig. 12. Once again, each sub-figure has three columns. In the left column, three PSFs at locations $(20,20)$, $(60,60)$, and $(100,100)$ are presented. It can be observed that these PSFs are different from each other. In the center column, we present the estimated ${\mathsf{H}}$ matrices. In the right column, the estimated measurements or the estimated ideal images are shown. We can observe that the estimates are close to the measurements or the ideal object. However, compared with the results for Part 2, we can still see differences between the estimates and the ideal results, due to the three difficulties discussed in Section 3.2. We also calculate the PSNR for the four strategies. The results are summarized in Table 2. Once again, we can see that the PSNR values for $M1$ and $M3$ are higher. However, we care more about the reconstruction quality of the TCI system.

Fig. 11. (a) The raw measurements and (b) the estimated ideal image of a checkerboard at the DMD plane in the visible band.

Fig. 12. The PSFs at $(20,20)$, $(60,60)$, and $(100,100)$, an enlarged part of $ {\mathsf {H}}_{2,est}$, and the estimated binary pattern measurements or the estimated patterns using (a) $M1$; (b) $M3$ ; (c) $M2$; and (d) $M4$ for Part 1.


Table 2. PSNR for the four calibration strategies with Part 1

4.3 Reconstruction in dual-band TCI with calibration

After studying the optical calibration strategies, we use them for object reconstruction in broad dual-band TCI. In this sub-section, we first use the calibration for Part 2 in object reconstruction. Then, the calibration process for Part 1 is also discussed. In the end, we also study object reconstruction in scattering media.

4.3.1 Reconstruction with calibration of Part 2

In this part, the reconstruction results are obtained using the calibration for Part 2 only. In Figs. 13(a) and (b), we present the reconstructions using the calibration strategies $M1$ and $M3$, respectively. It can be observed that the two sets of reconstructions do not differ much from each other. Compared with the results in Fig. 5(b), the reconstructions have slightly better contrast. In Figs. 13(c) and (d), we present the results with preprocessed measurements using $M2$ and $M4$, respectively. Notice that here we also preprocess the raw measured sensing matrix ${\mathsf{W}}_{meas}$ of the TCI system. It is clear that the reconstructions in (d) are the best among all six sets of results in Fig. 5 and Fig. 13. This conclusion differs from what the PSNR values in Table 1 suggest. In $M4$, the measurements used for calibration are parts of random binary pattern images. Thus, in each $(9\times 9)$ area, the measurements come from at least $(9\times 9)$ DMD pixels. However, the calibration model only uses the central pixel values. Thus, when evaluating the calibration performance as in Table 1 only, method $M4$ is not the best. On the other hand, because the model does not match a PSF exactly, it is more tolerant to errors and noise in a system. Hence, for object reconstruction, it works best.

Fig. 13. The frames 1, 3, 5, 7, and 9 in the 10 reconstructed frames in the visible band, using the calibration strategies (a) $M1$ and (b) $M3$ with $ {\mathsf {W}}$; using the calibrated $ {\mathsf {W}}_{meas}$ and the calibrated measurements with (c)$M2$ and (d)$M4$.

As in Section 4.1, we also plot the pixel values along the two lines shown in Fig. 6. The resulting curves are shown in Fig. 14. For comparison, we also plot the curve for the reconstruction without calibration but using ${\mathsf{W}}_{meas}$. We can see that the results using $M1$ and $M3$ are not very different from the reconstruction without calibration but using ${\mathsf{W}}_{meas}$. The results for $M2$ and $M4$ are better than the others in the sense of better contrast and more regular shape.

Fig. 14. The pixel values at lines 1 (a) and 2 (b) in the reconstructed frame 4 for the checkerboard object in the visible band using different calibration strategies.

We repeat the experiment for the NIR band. The results are presented in Fig. 15. In (a)–(d), $M1$, $M3$, $M2$, and $M4$ are used, respectively. Once again, the improvement using $M1$ and $M3$ is limited. The reconstruction using $M4$ is the best. Compared to the visible band results, the NIR band results are a bit more blurred, but the object covered by paint can be reconstructed clearly. Again, Fig. 16 shows the curves along the two lines in Fig. 6. $M2$ and $M4$ show better results.

Fig. 15. The frames 1, 3, 5, 7, and 9 in the 10 reconstructed frames in the NIR band, using the calibration strategies (a) $M1$ and (b) $M3$ with $ {\mathsf {W}}$; using the calibrated $ {\mathsf {W}}_{meas}$ and the calibrated measurements with (c)$M2$ and (d)$M4$.

Fig. 16. The pixel values at lines 1 (a) and 2 (b) in the reconstructed frame 4 for the checkerboard object in the NIR band using different calibration strategies.

4.3.2 Reconstructions with calibration of Part 1 and Part 2

In this part, we add the calibration process for Part 1 into the reconstruction process. Note that in the last sub-section, the reconstructions using $M1$ and $M3$ show limited improvement in both the visible and the NIR bands. Thus, here we only use $M2$ and $M4$ for reconstruction in dual-band TCI.

In Figs. 17(a) and (b), we present frames 1, 3, 5, 7, and 9 out of the 10 reconstructed frames in the visible band using $M2$ and $M4$ for Part 1 on top of the calibration of Part 2. The calibration strategy used for Part 2 is $M4$. We can see that the two sets of reconstructions do not differ much from each other. Compared to the results that only include the calibration for Part 2 in Fig. 13, the frames in Fig. 17 have sharper edges, especially for the checkerboard object. In Fig. 18, the pixel values along the two lines in Fig. 6 are presented. For comparison, we also plot the best reconstruction using only the Part 2 calibration with $M4$. It is clear that the calibration of Part 1 improves the reconstruction considerably. We repeat the experiment for the NIR band. The results are shown in Fig. 19 and Fig. 20. $M2$ is used in Fig. 19(a) and Fig. 20(a), while $M4$ is used for Fig. 19(b) and Fig. 20(b). The resolution improvement in the NIR band is more obvious than in the visible band. In addition, if we compare these results with the reconstructions in Fig. 5 and Fig. 8, which do not include optical calibration, we can conclude that the reconstruction performance of the broad dual-band TCI system has been improved greatly.

Fig. 17. The frames 1, 3, 5, 7, and 9 in the 10 reconstructed frames in the visible band, using the calibrated $ {\mathsf {W}}_{meas}$, the calibrated measurements, and the calibration for PSF1 with (a) $M2$ and (b) $M4$.

Fig. 18. The pixel values at lines 1 (a) and 2 (b) in the reconstructed frame 4 for the checkerboard object in the visible band using different calibration strategies.

Fig. 19. The frames 1, 3, 5, 7, and 9 in the 10 reconstructed frames in the NIR band, using the calibrated $ {\mathsf {W}}_{meas}$, the calibrated measurements, and the calibration for PSF1 with (a) $M2$ and (b) $M4$.

Fig. 20. The pixel values at lines 1 (a) and 2 (b) in the reconstructed frame 4 for the checkerboard object in the NIR band using different calibration strategies.

To further quantify the reconstruction quality, we also calculate the correlation between a reconstructed frame (frame 4), obtained using different calibration strategies, and the ideal checkerboard shown in Fig. 11(b). Figure 21 shows one of the correlation results. The maximum of the correlation can be used to represent the similarity between a reconstructed object and the ideal object. We summarize the maxima for different calibration strategies in Table 3. To make the table more readable, we normalize the values by the maximum of the ideal object auto-correlation. Thus, the values in the table are between 0 and 1; the closer a value is to 1, the more similar the reconstruction is to the ideal object. For either the visible band or the NIR band, there are 8 different reconstruction methods. Two are for the reconstructions using ${\mathsf{W}}$ and ${\mathsf{W}}_{meas}$ but without optical calibration. Four values are for the reconstructions using $M1\sim M4$ for the Part 2 calibration only. The last two are for the reconstructions using $M2$ and $M4$ for Part 1 and $M4$ for the Part 2 calibration. It is clear that for both bands, optical calibration helps the reconstruction considerably. With the calibration for both Part 2 and Part 1, the reconstructions present the best results.
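
A minimal sketch of this similarity metric, assuming the reconstructed frame and the ideal checkerboard are 2-D arrays; the function name is illustrative.

```python
import numpy as np
from scipy.signal import correlate2d

def similarity_score(recon, ideal):
    """Maximum of the cross-correlation with the ideal object, normalized by the maximum
    of the ideal object's auto-correlation; values closer to 1 indicate higher similarity."""
    cross = correlate2d(recon, ideal, mode='full')
    auto = correlate2d(ideal, ideal, mode='full')
    return cross.max() / auto.max()
```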

Fig. 21. The correlation results of a reconstructed frame and the ideal checkerboard object.


Table 3. The maximum of the correlation results using different calibration strategies for the visible and NIR bands

4.3.3 Reconstruction in scattering media

In the last experiment, we test the system in a scattering environment. We put a diffuser between the moving object and lens $L_1$ to simulate imaging through scattering media. Figures 22(a) and (b) present two sets of raw visible band TCI measurements and their calibrated measurements using $M4$ for Part 2 only. One set is for a moving number 7; the other is for a moving number 9. In each sub-figure, the left is the raw measurement, while the right is the calibrated result. It can be observed that, after calibration, the modulation pattern becomes visible. In (c) and (d), we present the raw measurements and the calibrated measurements in the NIR band.

Fig. 22. (a$\&$b) The raw (left) and the calibrated (right) measurements of TCI in the visible band; (c$\&$d) the raw (left) and the calibrated (right) measurements of TCI in the NIR band.

In Fig. 23, we present two reconstructed frames without and with optical calibration. In (a)$\sim$(d), the results using ${\mathsf{W}}_{meas}$ but without optical calibration are presented. The frames in (a) and (b) are for the visible band, while (c) and (d) are for the NIR band. The reconstructed frames with optical calibration for Part 2 and Part 1 are presented in (e)$\sim$(h). The top row is for $M2$ with Part 1, and the bottom row is for $M4$ with Part 1. The calibration strategy for Part 2 is $M4$. As in (a)$\sim$(d), the frames in (e) and (f) are for the visible band, while (g) and (h) are for the NIR band. It is clear that the reconstruction quality with optical calibration is much better. It can also be observed that the NIR band reconstructions have better visual quality than the reconstructions in the visible band.

Fig. 23. Two reconstructed frames for an object imaged through a diffuser in the visible (a$\&$b) and NIR (c$\&$d) bands using $ {\mathsf {W}}_{meas}$ without PSF calibration. The reconstructed frames obtained using PSF calibration in the visible (e$\&$f) and NIR (g$\&$h) bands. The calibration strategy used for Part1 in the top row of (e$\sim$h) is $M2$, while $M4$ is used for the bottom row.

5. Conclusion

In this work, a broad dual-band temporal compressive imaging (TCI) system is studied for the first time. The two reflection directions of a DMD device are used to collect system measurements in the visible and the NIR bands simultaneously. Because in both bands neither the DMD plane nor the sensor plane is perpendicular to the optical axis of the imaging system, aberration errors are likely to appear in the system measurements. To deal with this issue, we study four nonuniform, or spatially variant, calibration strategies for the part from a DMD to an FPA, or Part 2, using random binary patterns. We also study the calibration for the part from an object to a DMD, or Part 1. In this part, the optics needs to work for both bands; thus, it is hard to optimize it for both the visible and the NIR bands. For Part 1, we study the same four nonuniform calibration strategies as for Part 2.

Using the calibrated broad dual-band TCI system, we reconstruct moving numbers and a moving checkerboard. The number 0 is covered with paint; it can be observed clearly in the NIR band, but is not observable in the visible band. Comparing the reconstructions without and with the different calibration strategies, we conclude that the reconstructions with calibration of both Part 1 and Part 2 are better than the results with calibration of Part 2 only, and much better than the results without calibration. For Part 2, strategy $M4$ works best. For Part 1, the calibration strategies $M2$ and $M4$ present good results. We also use the system to reconstruct moving objects through a diffuser. With calibration, we obtain much better reconstructions, and the reconstruction quality in the NIR band is better than in the visible band. Through these experiments, we demonstrate the superior performance of broad dual-band TCI over a single-band system.

Funding

National Natural Science Foundation of China (61675023).

Disclosures

The authors declare no conflicts of interest.

References

1. M. Ishiguro, R. Nakamura, D. J. Tholen, N. Hirata, H. Demura, E. Nemoto, A. M. Nakamura, Y. Higuchi, A. Sogame, and A. Yamamoto, “The Hayabusa spacecraft asteroid multi-band imaging camera (AMICA),” Icarus 207(2), 714–731 (2010). [CrossRef]  

2. T. Nakamura, A. M. Nakamura, J. Saito, S. Sasaki, R. Nakamura, H. Demura, H. Akiyama, and D. Tholen, “Multi-band imaging camera and its sciences for the Japanese near-Earth asteroid mission MUSES-C,” Earth Planets Space 53(11), 1047–1063 (2001). [CrossRef]  

3. A. F. H. Goetz, “Three decades of hyperspectral remote sensing of the earth: A personal view,” Remote. Sens. Environ. 113, S5–S16 (2009). [CrossRef]  

4. A. D. Landgrebe, “Multispectral land sensing: where from, where to?” IEEE Trans. Geosci. Remote. Sens. 43(3), 414–421 (2005). [CrossRef]  

5. W. Wang and J. Paliwal, “Near-infrared spectroscopy and imaging in food quality and safety,” Sens. Instrumentation for Food Qual. Saf. 1(4), 193–207 (2007). [CrossRef]  

6. Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18(10), 100901 (2013). [CrossRef]  

7. N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013). [CrossRef]  

8. N. Gupta, B. F. Andresen, G. F. Fulop, and P. R. Norton, “Hyperspectral imager development at army research laboratory,” Proc. SPIE 6940, 69401P (2008). [CrossRef]  

9. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15(21), 14013–14027 (2007). [CrossRef]  

10. M. W. Kudenov and E. L. Dereniak, “Compact snapshot birefringent imaging Fourier transform spectrometer,” Proc. SPIE 7812, 40–50 (2010). [CrossRef]  

11. M. W. Kudenov and E. L. Dereniak, “Compact real-time birefringent imaging spectrometer,” Opt. Express 20(16), 17973 (2012). [CrossRef]  

12. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21(9), 10526–10545 (2013). [CrossRef]  

13. Q. Zhou, J. Ke, and E. Y. Lam, “Near-infrared temporal compressive imaging for video,” Opt. Lett. 44(7), 1702–1705 (2019). [CrossRef]  

14. D. Reddy, A. Veeraraghavan, and R. Chellappa, “P2C2: Programmable pixel compressive camera for high speed imaging,” in CVPR 2011, (IEEE, 2011), pp. 329–336.

15. R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23(12), 15992–16007 (2015). [CrossRef]  

16. Y. Chen, C. Tang, Z. Xu, Q. Li, M. Cen, and H. Feng, “Adaptive reconstruction for coded aperture temporal compressive imaging,” Appl. Opt. 56(17), 4940–4947 (2017). [CrossRef]  

17. T. H. Tsai, P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Spectral-temporal compressive imaging,” Opt. Lett. 40(17), 4054–4057 (2015). [CrossRef]  

18. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, “Compressive coded aperture spectral imaging: An introduction,” IEEE Signal Process. Mag. 31(1), 105–115 (2014). [CrossRef]  

19. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]  

20. E. A. Bernal, L. K. Mestha, P. R. Austin, and R. P. Loce, “Single-pixel camera architecture with simultaneous multi-band acquisition,” (2015). US Patent 9188785.

21. Y. Zhang, G. M. Gibson, M. P. Edgar, G. Hammond, and M. J. Padgett, “Dual-band single-pixel telescope,” Opt. Express 28(12), 18180–18188 (2020). [CrossRef]  

22. M. E. Gehm and D. J. Brady, “Compressive sensing in the EO/IR,” Appl. Opt. 54(8), C14–C22 (2015). [CrossRef]  

23. A. E. Mathisen, “Camera lens calibration apparatus and method,” (1999). US Patent 5930740.

24. J. Dimsdale, R. Williams, and W. Chen, “Automated lens calibration,” (2003). US Patent 20030035100.

25. J. P. Dumas, M. A. Lodhi, W. U. Bajwa, and M. C. Pierce, “Computational imaging with a highly parallel image-plane-coded architecture: challenges and solutions,” Opt. Express 24(6), 6145–6155 (2016). [CrossRef]  

26. X. Yuan, “Generalized alternating projection based total variation minimization for compressive sensing,” in 2016 IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 2539–2543.

27. Y. Liu, X. Yuan, J. Suo, D. J. Brady, and Q. Dai, “Rank minimization for snapshot compressive imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(12), 2990–3006 (2019). [CrossRef]  

28. J. Yang, X. Yuan, X. Liao, P. Llull, D. J. Brady, G. Sapiro, and L. Carin, “Video compressive sensing using Gaussian mixture models,” IEEE Trans. Image Process. 23(11), 4863–4878 (2014). [CrossRef]  

29. M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “Deep fully-connected networks for video compressive sensing,” Digit. Signal Process. 72, 9–18 (2018). [CrossRef]  

30. M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, “Deepbinarymask: Learning a binary mask for video compressive sensing,” Digit. Signal Process. 96, 102591 (2020). [CrossRef]  

31. J. Ma, X.-Y. Liu, Z. Shou, and X. Yuan, “Deep tensor admm-net for snapshot compressive imaging,” in Proceedings of the IEEE International Conference on Computer Vision, (IEEE, 2019), pp. 10223–10232.

32. L. Zhang, J. Ke, and E. Y. Lam, “A deep learning approach for reconstruction in temporal compressed imaging,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2020), p. CW4B.3.

33. Y. Sun, X. Yuan, and S. Pang, “Compressive high-speed stereo imaging,” Opt. Express 25(15), 18182–18190 (2017). [CrossRef]  

34. T.-H. Tsai, P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Spectral-temporal compressive imaging,” Opt. Lett. 40(17), 4054–4057 (2015). [CrossRef]  

35. M. Qiao, X. Liu, and X. Yuan, “Snapshot spatial–temporal compressive imaging,” Opt. Lett. 45(7), 1659–1662 (2020). [CrossRef]  

36. X. Yuan and S. Pang, “Structured illumination temporal compressive microscopy,” Biomed. Opt. Express 7(3), 746–758 (2016). [CrossRef]  

37. X. Li, A. Stevens, J. A. Greenberg, and M. E. Gehm, “Single-shot memory-effect video,” Sci. Rep. 8(1), 1–8 (2018). [CrossRef]  

38. Q. Zhou, J. Ke, and E. Y. Lam, “Dual-waveband temporal compressive imaging,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2019), p. CTu2A.8.

39. J. Ke and E. Y. Lam, “Object reconstruction in block-based compressive imaging,” Opt. Express 20(20), 22102–22117 (2012). [CrossRef]  

40. J. Ke and E. Y. Lam, “Fast compressive measurements acquisition using optimized binary sensing matrices for low-light-level imaging,” Opt. Express 24(9), 9869–9887 (2016). [CrossRef]  

41. J. Ke, L. Zhang, and E. Y. Lam, “Temporal compressed measurements for block-wise compressive imaging,” in Mathematics in Imaging, (Optical Society of America, 2019), p. JW4B.1.

42. M. A. Neifeld and J. Ke, “Optical architectures for compressive imaging,” Appl. Opt. 46(22), 5293–5303 (2007). [CrossRef]  
