Optica Publishing Group

Fast object imaging and classification based on circular harmonic Fourier moment detection

Abstract

Limited by the number of illumination fields and the speed of the spatial light modulator, single-pixel imaging (SPI) cannot realize real-time imaging and fast classification of objects. In this paper, we propose, for the first time, circular harmonic Fourier single-pixel imaging (CHF-SPI) to realize fast imaging and classification of objects. The light field distribution satisfies the circular harmonic Fourier formula, and the light intensity values recorded by the single-pixel detector are equivalent to the circular harmonic Fourier moments. The target can then be reconstructed at a low sampling ratio by inverse transformation. Through simulation and experimental verification, clear imaging can be performed at a sampling ratio of 0.9%. In addition, the circular harmonic Fourier moments are used to construct multi-distortion invariants to classify objects with rotation and scale changes. The scale change multiples of objects can be calculated, and the objects classified, using only 10 light fields. This ability to classify objects quickly without imaging is of great practical significance.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

SPI is an imaging method based on a single-pixel detector. A large number of structured light fields are used for illumination, a single-pixel detector captures the reflected or transmitted light from the object, and the object is then reconstructed by calculation. Because the single-pixel detector has the advantages of high sensitivity, wide response band, and low cost, SPI can achieve weak-light imaging [1], long-distance imaging [2], and non-visible-light imaging such as infrared [3], terahertz [4,5], and X-ray [6]. As a result, it has received extensive attention recently.

Initially, single-pixel imaging adopted random light fields for illumination [7], which required a large number of samples and therefore a long image reconstruction time. Researchers then applied compressive sensing theory to SPI [8,9], which can break through the limit of the Nyquist-Shannon sampling theorem and reconstruct target images under under-sampling conditions. This approach reduces the number of sampling points but requires more computational time because of the convex optimization algorithm used in the reconstruction. To achieve more efficient image reconstruction, researchers subsequently proposed several orthogonal basis patterns, such as Hadamard SPI (H-SPI) [10-12], Fourier SPI (F-SPI) [13-15], discrete cosine SPI (DCT-SPI) [16], and Zernike SPI (Z-SPI) [17]. In theory, distortion-free reconstruction of objects can be achieved under fully sampled conditions, where the values measured by the single-pixel detector are equivalent to the coefficients of the target image in the transform domain associated with the base patterns used. The final image can then be synthesized by inverse transform, reducing computational time while improving imaging quality. In addition to the common rectangular field of view, the circular field of view is also widely used in computational optical imaging, and researchers have accordingly studied single-pixel imaging in a circular field of view. For example, the Z-SPI proposed in [17] reconstructs objects in the circular domain at a low sampling rate, and Wang et al. [18] studied Fourier single-pixel imaging in polar coordinates, whose imaging quality in the circular domain is better than that of traditional F-SPI.

In addition to imaging, researchers have also applied SPI to the field of target recognition. Traditional target recognition relies on image reconstruction to extract effective feature information [19], which suffers from high computational cost and long running time. Combining SPI technology, researchers have studied target recognition under image-free conditions. Some optimize the algorithm, use deep-learning training, and reduce the number of lighting patterns to achieve fast classification [20,21]. Others look for pattern light fields with special mathematical properties [22-24]. The principle of computing the centroid of an object from its zero-order and first-order geometric moments has been employed to generate 2D illumination fields, enabling fast object localization with three illumination fields [22]. Reconstruction of objects can be achieved at lower sampling rates using Zernike moments [17], and image sets undergoing rotational changes can be classified based on the rotational invariance of Zernike moments.

In this paper, a single-pixel imaging technique based on circular harmonic Fourier moments is proposed. The illumination light field is generated according to the circular harmonic Fourier polynomials, and the light intensity recorded by the single-pixel detector can be regarded as the value of a circular harmonic Fourier moment. The object can then be reconstructed by the inverse moment transformation. Experimental results show that this method achieves good reconstruction at very low sampling ratios. At the same time, multi-distortion invariants are constructed from the mathematical properties of the circular harmonic Fourier moments: when the target rotates or changes scale, different targets can be classified well using only a few light fields. The next section explains the principles of using circular harmonic Fourier moments for single-pixel imaging and target recognition. The third section introduces the simulation results, the fourth section presents the experimental results, and the fifth section provides a summary.

2. Principles and methods

2.1 Single-pixel imaging principle using circular harmonic Fourier moments

This part introduces the theoretical basis of using circular harmonic Fourier moments for SPI. The mathematical expression of the circular harmonic Fourier moment [25,26] is:

$${\phi _{nm}} = \int\limits_0^{2\pi } {\int\limits_0^1 {f(r,\theta )} } {P_{nm}}(r,\theta )rdrd\theta = \int\limits_0^{2\pi } {\int\limits_0^1 {f(r,\theta )} } {T_n}(r)\exp ( - jm\theta )rdrd\theta$$
where n represents the order, m represents the circular repetition, and the radial function Tn(r) is:
$${T_n}(r) = \begin{cases} \dfrac{1}{\sqrt r }, & n = 0\\[2pt] \sqrt{\dfrac{2}{r}}\,\sin [(n + 1)\pi r], & n\ \text{is odd}\\[2pt] \sqrt{\dfrac{2}{r}}\,\cos (n\pi r), & n\ \text{is even} \end{cases}$$
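As an illustration, the piecewise radial kernel above can be evaluated numerically. The following is a minimal sketch (the function name `T_n` is our own); a midpoint-rule quadrature checks the orthonormality of the kernels on the unit interval with weight r, i.e. ∫₀¹ Tn(r)Tk(r) r dr = δnk.

```python
import numpy as np

def T_n(r, n):
    """Radial kernel T_n(r) of Eq. (2); r is an array of radii in (0, 1]."""
    r = np.asarray(r, dtype=float)
    if n == 0:
        return 1.0 / np.sqrt(r)
    if n % 2 == 1:  # n odd
        return np.sqrt(2.0 / r) * np.sin((n + 1) * np.pi * r)
    return np.sqrt(2.0 / r) * np.cos(n * np.pi * r)  # n even

# Midpoint-rule check of orthonormality with weight r on [0, 1]
M = 200000
dr = 1.0 / M
r = (np.arange(M) + 0.5) * dr            # midpoints, strictly inside (0, 1)
norm_1 = np.sum(T_n(r, 1) ** 2 * r) * dr          # should be ~ 1
cross_02 = np.sum(T_n(r, 0) * T_n(r, 2) * r) * dr  # should be ~ 0
```

Note that although Tn(r) itself diverges as r → 0, the weighted integrand Tn(r)Tk(r) r remains bounded, so the quadrature is well behaved.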

According to the principle of SPI and Eq. (1), the circular harmonic Fourier patterns can be extracted from the circular harmonic Fourier function, as shown below:

$${P_{nm}}(r,\theta ) = {T_n}(r)\exp (jm\theta)$$

Eq. (3) is expressed in polar coordinates. In the actual experimental system, however, the values in polar coordinates need to be converted to the Cartesian coordinate system for projection; Eq. (4) and Eq. (5) are used for the coordinate transformation:

$$x = \frac{{rN}}{{2(m - 1)}}\cos (\frac{{2\pi \theta }}{n}) + \frac{N}{2}$$
$$y = \frac{{rN}}{{2(m - 1)}}\sin (\frac{{2\pi \theta }}{n}) + \frac{N}{2}$$

In addition, since Eq. (3) has both real and imaginary components, it cannot be realized directly in the experiment. Therefore, Eq. (3) is expressed in terms of real-valued trigonometric components:

$$P_{nm}^{(c)}(x,y) = P_{nm}^{(c)}(r,\theta ) = {T_n}(r)\cos (m\theta )$$
$$P_{nm}^{(s)}(x,y) = P_{nm}^{(s)}(r,\theta ) = {T_n}(r)\sin (m\theta )$$

Then:

$${P_{nm}}(x,y) = P_{nm}^{(c)}(x,y) - jP_{nm}^{(s)}(x,y)$$
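The real-valued patterns of Eqs. (6)-(7) can be sketched on a discrete pixel grid as follows. This is an illustrative implementation under our own assumptions (function name `chf_patterns`, and the particular mapping of pixel indices onto the unit disc); the singular point r = 0 and the region outside the disc are masked to zero.

```python
import numpy as np

def chf_patterns(n, m, N=128):
    """Cosine and sine circular harmonic Fourier patterns of Eqs. (6)-(7),
    sampled on an N x N pixel grid whose inscribed unit disc is the support."""
    y, x = np.mgrid[0:N, 0:N]
    u = (x - (N - 1) / 2) / (N / 2)      # pixel index -> unit-disc coordinate
    v = (y - (N - 1) / 2) / (N / 2)
    r = np.hypot(u, v)
    theta = np.arctan2(v, u)
    inside = (r > 0) & (r <= 1)          # mask the r = 0 singularity of T_n
    Tn = np.zeros((N, N))
    rr = r[inside]
    if n == 0:
        Tn[inside] = 1 / np.sqrt(rr)
    elif n % 2 == 1:
        Tn[inside] = np.sqrt(2 / rr) * np.sin((n + 1) * np.pi * rr)
    else:
        Tn[inside] = np.sqrt(2 / rr) * np.cos(n * np.pi * rr)
    return Tn * np.cos(m * theta), Tn * np.sin(m * theta)

Pc, Ps = chf_patterns(3, 1)              # e.g. the patterns behind Phi_31
```

In practice such grayscale patterns would be rescaled to the display range of the DMD before projection.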

Although the values in Eq. (6) and Eq. (7) are all real, each circular harmonic Fourier basis mode still takes both positive and negative values at pixels inside the unit circle. To solve this problem, the differential method [27] is used: each mode Pnm(x, y) is divided into two complementary non-negative modes P+nm(x, y) and P-nm(x, y). Thus, there are:

$$\begin{aligned} {\phi _{nm}} &= \int_0^{2\pi } \int_0^1 f(r,\theta )\,{P_{nm}}(r,\theta )\,r\,dr\,d\theta \\ &= \left[ \iint_{{x^2} + {y^2} \le 1} f(x,y)\,P_{nm}^{(c) + }(x,y)\,dx\,dy - \iint_{{x^2} + {y^2} \le 1} f(x,y)\,P_{nm}^{(c) - }(x,y)\,dx\,dy \right]\\ &\quad - j\left[ \iint_{{x^2} + {y^2} \le 1} f(x,y)\,P_{nm}^{(s) + }(x,y)\,dx\,dy - \iint_{{x^2} + {y^2} \le 1} f(x,y)\,P_{nm}^{(s) - }(x,y)\,dx\,dy \right] \end{aligned}$$
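The differential measurement above can be sketched in a few lines. This is a toy simulation under our own assumptions: `split_complementary` and `measure` are hypothetical names, and `measure` stands in for the single-pixel detector by summing the light transmitted through a pattern.

```python
import numpy as np

def split_complementary(P):
    """Differential method [27]: P = P_plus - P_minus with both parts >= 0,
    so each part can be displayed on the DMD as a non-negative pattern."""
    return np.clip(P, 0, None), np.clip(-P, 0, None)

def measure(obj, pattern):
    """Simulated single-pixel reading: total light under pattern illumination."""
    return float(np.sum(obj * pattern))

rng = np.random.default_rng(0)
obj = rng.random((8, 8))                 # toy object
Pc = rng.standard_normal((8, 8))         # toy signed cosine pattern
Ps = rng.standard_normal((8, 8))         # toy signed sine pattern

Pc_p, Pc_m = split_complementary(Pc)
Ps_p, Ps_m = split_complementary(Ps)
# Eq. (9): the complex moment assembled from four differential detector values
phi = (measure(obj, Pc_p) - measure(obj, Pc_m)) \
      - 1j * (measure(obj, Ps_p) - measure(obj, Ps_m))
```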

Figure 1 shows grayscale images of some low-order circular harmonic Fourier basis patterns on a 128 × 128 pixel grid.

Fig. 1. The partial order circular harmonic Fourier light field generated according to Eq. (4) and Eq. (5). The black part represents 0, and the white part represents 1.

In SPI, Pnm(x, y) is the illuminating light field and f(x, y) is the original object. The light fields are successively loaded onto the DMD, and the transmitted or reflected light intensity after irradiating the object is collected by the single-pixel detector. The values obtained by the single-pixel detector are equivalent to the circular harmonic Fourier moments, and the reconstructed object fR(x, y) can be obtained through the inverse moment transformation shown in Eq. (10):

$${f_R}(r,\theta ) = \sum\limits_{n = 0}^\infty {\sum\limits_{m ={-} \infty }^\infty {{\phi _{nm}}} } {T_n}(r)\exp (jm\theta )$$
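In practice the sums in Eq. (10) are truncated to the measured orders. A minimal sketch of such a truncated reconstruction follows; the function name `reconstruct`, the dictionary interface for the measured moments, and the pixel-to-disc mapping are our own assumptions.

```python
import numpy as np

def reconstruct(phi, N=128):
    """Truncated inverse moment transform of Eq. (10).
    phi is a dict mapping (n, m) -> measured moment phi_nm."""
    y, x = np.mgrid[0:N, 0:N]
    u = (x - N / 2) / (N / 2)            # pixel index -> unit-disc coordinate
    v = (y - N / 2) / (N / 2)
    r = np.hypot(u, v)
    theta = np.arctan2(v, u)
    inside = (r > 0) & (r <= 1)
    f = np.zeros((N, N), dtype=complex)
    for (n, m), p in phi.items():
        Tn = np.zeros((N, N))
        rr = r[inside]
        if n == 0:
            Tn[inside] = 1 / np.sqrt(rr)
        elif n % 2 == 1:
            Tn[inside] = np.sqrt(2 / rr) * np.sin((n + 1) * np.pi * rr)
        else:
            Tn[inside] = np.sqrt(2 / rr) * np.cos(n * np.pi * rr)
        f += p * Tn * np.exp(1j * m * theta)   # accumulate phi_nm * T_n * e^{jm theta}
    return f.real
```

For example, feeding the single moment {(0, 0): 1.0} reproduces the radial profile T0(r) over the disc.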

2.2 Mathematical invariant properties of circular harmonic Fourier moments

In addition to imaging, because of its unique mathematical form, the circular harmonic Fourier moment can also be applied to image-free target recognition. Since the angular function of the circular harmonic Fourier moment is exp(jmθ), rotating the image by a given angle multiplies every moment by the same phase factor while leaving its modulus unchanged; that is, the circular harmonic Fourier moment itself has rotation invariance [26].

When the scale of an object changes, it can be normalized with the aid of Fourier-Mellin moments to obtain scale invariants [27]. The expression of the Fourier-Mellin moment [28] of an image is:

$${M_{sm}} = \int\limits_0^{2\pi } {\int\limits_0^1 {{r^s}f(r,\theta )\exp ( - jm\theta )rdrd\theta } }$$

The low-order Fourier-Mellin moment ratio $\frac{{M_{10}^i}}{{M_{00}^i}}$ of each image is calculated with Eq. (11), the maximum value is selected as the reference, and the scale and density distortion factors ki and gi of each image are calculated with Eq. (12) and Eq. (13):

$${k_i} = (\frac{{M_{10}^i}}{{M_{00}^i}})/(\frac{{{M_{10}}}}{{{M_{00}}}})$$
$${g_i} = {[(\frac{{M_{10}^{}}}{{M_{00}^{}}})/(\frac{{M_{10}^i}}{{M_{00}^i}})]^2} \bullet \frac{{M_{00}^i}}{{{M_{00}}}}$$
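Eqs. (12)-(13) can be evaluated directly from the four low-order Fourier-Mellin moments. Below is a minimal sketch (the function name `distortion_factors` is our own), checked against the analytic moments of a uniform disc: shrinking a unit disc by a factor k scales M00 by k² and M10 by k³, so the recovered scale factor should be k and the density factor 1.

```python
import numpy as np

def distortion_factors(M00_i, M10_i, M00_ref, M10_ref):
    """Scale factor k_i (Eq. 12) and density factor g_i (Eq. 13) from the
    low-order Fourier-Mellin moments of a test image (i) and the reference."""
    k = (M10_i / M00_i) / (M10_ref / M00_ref)
    g = ((M10_ref / M00_ref) / (M10_i / M00_i)) ** 2 * (M00_i / M00_ref)
    return k, g

# Analytic check with a uniform unit disc (M00 = pi, M10 = 2*pi/3)
k_true = 0.5
M00_ref, M10_ref = np.pi, 2 * np.pi / 3
M00_i, M10_i = np.pi * k_true ** 2, (2 * np.pi / 3) * k_true ** 3
k, g = distortion_factors(M00_i, M10_i, M00_ref, M10_ref)
```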

The invariants can then be obtained:

$$\Phi _{nm}^i = \left[\int_0^{2\pi } \int_0^{{k_i}} {g_i}\,f(r/{k_i},\theta )\,{T_n}(r/{k_i})\exp ( - jm\theta )\,r\,dr\,d\theta \right] / ({g_i}k_i^2)$$

It can be seen that both the scale distortion factor and the density distortion factor are independent of the angular function, so the obtained invariants have both scale invariance and rotation invariance. Therefore, Φnm can be regarded as a multi-distortion invariant. In the same way as the circular harmonic Fourier pattern basis, the Fourier-Mellin light field can be generated:

$${M_{sm}} = \int_0^{2\pi } \int_0^1 {r^s}\exp ( - jm\theta )\,r\,dr\,d\theta$$
with:
$${M_{00}} = \int_0^{2\pi } \int_0^1 r\,dr\,d\theta = \iint_{{x^2} + {y^2} \le 1} 1\,dx\,dy$$
$${M_{10}} = \int_0^{2\pi } \int_0^1 {r^2}\,dr\,d\theta = \iint_{{x^2} + {y^2} \le 1} \sqrt {{x^2} + {y^2}}\,dx\,dy$$
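These two integrals have the closed-form values M00 = π (the disc area) and M10 = 2π/3, which can be used as a sanity check for any discrete implementation. A short script of our own making verifies this on a pixel grid:

```python
import numpy as np

# Discrete check of Eqs. (16)-(17): over the unit disc, M00 equals the
# disc area (pi) and M10 equals 2*pi/3.
N = 2001
y, x = np.mgrid[0:N, 0:N]
u = (x - (N - 1) / 2) / ((N - 1) / 2)
v = (y - (N - 1) / 2) / ((N - 1) / 2)
r = np.hypot(u, v)
inside = r <= 1
dA = (2 / (N - 1)) ** 2                  # pixel area in unit-disc coordinates
M00 = inside.sum() * dA                  # ~ pi
M10 = r[inside].sum() * dA               # ~ 2*pi/3
```

The residual discrepancy comes only from pixels cut by the disc boundary and shrinks as the grid is refined.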

Figure 2 shows grayscale images of the Fourier-Mellin patterns M10 and M00 on a 128 × 128 pixel grid.

Fig. 2. Fourier-Mellin light fields of order M00 and M10 generated according to Eq. (16) and Eq. (17). The black part represents 0 and the white part represents 1.

In SPI, the M00 and M10 Fourier-Mellin light fields are first used to irradiate the object and the scale and density distortion factors are calculated; the circular harmonic Fourier light fields are then projected to obtain the single-pixel detector values. The multi-distortion invariants can be obtained by calculation with Eqs. (12)-(14), and different targets can then be classified.

3. Simulation results

3.1 Circular harmonic Fourier single-pixel imaging simulation

We verify the imaging capability of CHF-SPI in simulation. A gray image and a binary image are reconstructed under different sampling ratios in Fig. 3. For comparison, F-SPI is used to reconstruct the objects at the same sampling ratios; F-SPI adopts the four-step phase-shift method and samples along a circular path [13]. It is found that CHF-SPI can render the image of an object at a very low sampling ratio, and the quality of the reconstructed image is better than that of F-SPI. As the sampling ratio increases, the reconstructed image becomes clearer.

Fig. 3. Reconstruction results of (a) gray image and (b) binary image using CHF-SPI and F-SPI under different sampling ratios.

To quantitatively analyze the reconstructed images, the root mean square error (RMSE) is used as the evaluation index [29]; the smaller the RMSE, the better the quality of the reconstructed image. As can be seen from Fig. 4, at very low sampling ratios the RMSE of CHF-SPI reconstruction is lower than that of F-SPI. However, as the sampling rate increases, the quality of the image reconstructed by F-SPI gradually becomes better than that of CHF-SPI. For F-SPI, the higher the sampling rate, the clearer the image, and therefore the smaller the RMSE. For CHF-SPI, however, as the sampling rate increases, the center area of the image gradually becomes oversampled, and the values at the image center and edge differ considerably, so the RMSE tends to rise. The advantage of the CHF-SPI proposed in this paper is image reconstruction at very low sampling rates.
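For reference, the RMSE index used above reduces to a one-line computation; this minimal helper (the name `rmse` is our own) assumes the reconstructed and ground-truth images are same-shape arrays.

```python
import numpy as np

def rmse(recon, truth):
    """Root mean square error between a reconstructed and a reference image."""
    recon = np.asarray(recon, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean((recon - truth) ** 2)))
```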

Fig. 4. (a) RMSEs of reconstructed gray image; (b) RMSEs of reconstructed binary image.

3.2 Circular harmonic Fourier moment invariance simulation

3.2.1 Rotational invariance simulation

In this part, the rotation invariance of the multi-distortion invariants constructed from circular harmonic Fourier moments is verified by simulation. Figure 5 shows binary and grayscale images with a resolution of 128 × 128 pixels after rotation; the rotation angles are set to 0°, 45°, 90°, 150°, 210°, and 300°. In theory, each invariant should remain constant under rotation. However, when m and n are large, the light field is weak and the discretization errors become larger, resulting in poor invariance of the obtained results. Therefore, we use low-order circular harmonic Fourier moments for testing, here Φ11 and Φ22. Since no scale change is involved in the rotation process, we directly give the calculated values of the distortion invariants. To quantitatively represent the degree of dispersion between the data, the standard deviation σ between same-order invariants calculated at different angles is computed according to Eq. (18); the calculated values and standard deviations are given in Table 1. It can be seen that the invariant values of both the grayscale and binary images change only slightly; the small differences between the data come from the discretization of the images.

$$\sigma = \sqrt {\frac{{\sum\nolimits_{i = 1}^n {{{({x_i} - \bar{x})}^2}} }}{n}}$$
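Eq. (18) is the population standard deviation, which NumPy's default `np.std` (ddof = 0) computes directly. A small worked example with hypothetical invariant amplitudes (the values below are ours, not the paper's data):

```python
import numpy as np

# Hypothetical invariant amplitudes measured at six rotation angles
vals = np.array([3.12, 3.10, 3.14, 3.11, 3.13, 3.12])
sigma = np.std(vals)   # population standard deviation (ddof = 0), i.e. Eq. (18)
```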

Fig. 5. Binary and grayscale images rotated by 0°, 45°, 90°, 150°, 210°, and 300°, respectively.

Table 1. Amplitude values of multi-distortion invariant moments of order Φ11 and Φ22 at different rotation angles (×102)

3.2.2 Scale invariance simulation

The scale invariance of the circular harmonic Fourier moments is also verified by simulation. Figure 6 shows the gray image and the binary image after scale changes; the reduction factors are 0.8, 0.61, 0.41, and 0.21. Again, each image has a resolution of 128 × 128 pixels. First, the scale change factor is calculated using the Fourier-Mellin light fields of orders M00 and M10, giving the scale change multiple of the object, as shown in Table 2. The calculated results are basically consistent with the actual change multiples; the error is caused by partial distortion due to discretization. After the values of k and g are obtained, the circular harmonic Fourier moments can be used to calculate the distortion invariants of the object. In theory, the distortion invariants of every order are scale invariant; here, orders Φ11 and Φ22 are again used for calculation, and the results are shown in Table 3. For a quantitative description, the standard deviation of the same-order moments under each scale change is calculated. The obtained values are small, so the distortion invariants used can be considered to have good scale invariance as well.

Fig. 6. Binary and grayscale images reduced to 0.8×, 0.61×, 0.41×, and 0.21× of the original size, respectively.

Table 2. Comparison between the multiple of image change and the scale change factor obtained by simulation

Table 3. Amplitude values of Φ11 and Φ22 invariant moments at different scale change multiples (×102)

4. Experimental results

The proposed circular harmonic Fourier single-pixel imaging and object recognition applications were also verified by experiments. The experimental optical path shown in Fig. 7 was built. A laser with a wavelength of 532 nm (LSR532NL-100) was used as the light source and, after passing through a beam-expanding system (BE), illuminated the DMD screen. The DMD system (Texas Instruments Discovery V7001) had a resolution of 1024 × 768 and was used to modulate the illumination light field; lighting patterns were generated by loading different computational images (as shown in Fig. 1) onto the DMD screen. The modulated illumination laser was then projected onto the object using a projection lens (PL) with a focal length of 300 mm. The PL was 450 mm from the DMD screen and the object was located 900 mm from the PL, so the object and the DMD screen satisfied the conjugation relationship. The light reflected from the object was collected through a collection lens (CL) and detected with a single-pixel detector (SD, Thorlabs PMT-PMM02). The detected light intensity was transmitted to the computer through the data acquisition system (DAS, NI USB-6361). Self-developed data acquisition software recorded the illumination pattern and the corresponding reflected light intensity synchronously.

Fig. 7. Schematic diagram of SPI experimental system.

4.1 Single-pixel imaging experiment

In the above experimental system, another DMD was used as a reflective object, and the English letters A and E were loaded on its screen. This DMD had the same parameters as the one producing the illumination patterns. When the illumination laser was modulated, a 512 × 512 mirror array in the middle of the first DMD was used to produce the illumination patterns, with one image pixel represented by four adjacent mirrors, so the resolution of the entire image was 128 × 128 pixels. To demonstrate the imaging effect at low sampling ratios, we generated a series of 128 × 128 pixel grayscale circular harmonic Fourier patterns and Fourier basis patterns (generated by the four-step phase-shift method) to illuminate the object; the reconstructed object thus has 128 × 128 pixels. The target image was reconstructed at sampling ratios of 0.9%, 1.4%, 1.9%, 2.5%, and 2.9%, respectively. The DMD in this system modulated 8-bit gray images at 290 Hz, so the sampling times were 2, 3.2, 4.3, 5.6, and 6.5 seconds, respectively. To further reduce the sampling time, the gray-image loading frequency of the DMD can be increased. Alternatively, the time-dithering method of Ref. [30] can be used: the grayscale patterns are split into a pair of grayscale patterns based on positive/negative pixel values, which are then decomposed into a cluster of binary basis patterns based on decimal-to-binary conversion. In this way, the imaging speed is improved without reducing the spatial resolution. The reconstruction results are shown in Fig. 8; for each letter, the first row is the reconstruction result of F-SPI and the second row is that of CHF-SPI. The RMSEs of the reconstructed images are shown in Fig. 9. It can be seen that the proposed technique reconstructs images well at extremely low sampling ratios, and in this regime the quality of the images reconstructed by CHF-SPI is better than that of F-SPI.

Fig. 8. The reconstruction results of (a) letter A and (b) letter E using F-SPI and CHF-SPI under different sampling ratios.

Fig. 9. (a) RMSEs of reconstructed letter A; (b) RMSEs of reconstructed letter E.

4.2 Target classification experiment

4.2.1 Rotating target classification

In order to quantitatively analyze the target classification performance, this paper adopts the minimum distance classifier [31]. Its basic principle is as follows: the distance between a feature point X and the training sample Wi (i = 1, 2, …, m) of each class cluster is calculated, and X is assigned to the class for which the distance d is smallest.

$$\textrm{d}(X) = \sum\limits_{\textrm{i} = 1}^m {|{X - {W_i}} |}$$

In this paper, the unrotated image is taken as the training sample of each cluster, and the distance between the test image and each training sample is calculated. In theory, every circular harmonic Fourier moment has rotation invariance; here we choose the orders Φ31 and Φ51, giving:

$$d(X,i) = {({|{{\Phi _{31}}} |- |{\Phi _{31}^{(i)}} |} )^2} + {({|{{\Phi _{51}}} |- |{\Phi _{51}^{(i)}} |} )^2},i = {A_0},{B_0},{C_0},{D_0},{E_0}$$
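The minimum-distance rule of Eqs. (19)-(20) can be sketched as follows. This is an illustrative implementation under our own naming (`classify`, and the toy training values), using the squared-difference distance of Eq. (20) on a two-component feature vector of invariant moduli.

```python
import numpy as np

def classify(feature, training):
    """Minimum-distance classifier in the spirit of Eqs. (19)-(20): assign the
    test feature vector, e.g. (|Phi_31|, |Phi_51|), to the class whose
    training vector gives the smallest squared distance."""
    dists = {label: float(np.sum((np.asarray(feature) - np.asarray(w)) ** 2))
             for label, w in training.items()}
    return min(dists, key=dists.get), dists

# Hypothetical invariant amplitudes for two training letters
training = {"A": (1.0, 2.0), "B": (5.0, 1.0)}
label, dists = classify((1.1, 2.05), training)   # feature of a rotated "A"
```

Because the invariants barely change under rotation, a rotated test letter lands close to its own class's training vector and far from the others.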

Rotate each letter by 30°, 60°, 120°, and 210° respectively to get the image set shown in Fig. 10.

Fig. 10. Rotation of binary image A, B, C, D and E by 0°,30°, 60°, 120° and 210°, respectively.

By irradiating each image with the light fields P(c)+31(x, y), P(c)-31(x, y), P(s)+31(x, y), P(s)-31(x, y), P(c)+51(x, y), P(c)-51(x, y), P(s)+51(x, y), and P(s)-51(x, y), the distortion invariants can be calculated; the time required is about 0.14 seconds. The distances between the test images (letters A and B) and the various training images are shown in Table 4. Clearly, for each rotated test letter, the distance to the training image of its own class is minimal compared with the distances to the training images of the other classes. The letters C, D, and E behave in the same way as A and B. Whichever test image is selected, the magnitudes of a few specific distortion-invariant moments can be measured, the distances to the different classes calculated, and the image assigned to the class with the smallest distance. In most cases, the distance between images of the same class is at least two orders of magnitude smaller than the distance between images of different classes.

Table 4. Distance between test images (letters A and B) and various training images (×10−3)

At the same time, to show the classification effect more clearly, we take Φ31 and Φ51 as the horizontal and vertical axes, respectively, in Fig. 11. As can be seen, images of the same letter at different rotation angles clearly cluster into one group, and images of different letters can be clearly distinguished.

Fig. 11. Amplitude values of multi-distortion invariant moments of order Φ31 and Φ51.

4.2.2 Scale transform target classification

Similar to the verification of rotational invariance, we also experimentally verify the scale invariance of the circular harmonic Fourier moments. Images are scaled to 0.8, 0.9, 1.1, and 1.2 times the original size, as shown in Fig. 12. In theory, every order has scale invariance; here we use the orders Φ22 and Φ30. In the experiment, the two Fourier-Mellin light fields M00 and M10 are first projected onto the object to calculate its scale change multiple. Table 5 shows the scale change multiples obtained by calculation; the experimental results differ little from theory, and the error is caused by image distortion from discretization and by external noise. Eight circular harmonic Fourier fields P(c)+22(x, y), P(c)-22(x, y), P(s)+22(x, y), P(s)-22(x, y), P(c)+30(x, y), P(c)-30(x, y), P(s)+30(x, y), and P(s)-30(x, y) are then used to illuminate the object, and the values of the distortion invariants are obtained by calculation. The experimentally measured distances between the test images (letters A and B) and the various training images are shown in Table 6. Similarly, each scale-transformed test letter has the minimum distance to the training image of its own class, and in most cases the distance between images of the same class is at least an order of magnitude smaller than the distance between images of different classes. With Φ22 and Φ30 as the horizontal and vertical axes in Fig. 13, the classification effect can be seen clearly.

Fig. 12. Binary images A, B, C, D, and E scaled to 0.8×, 0.9×, 1×, 1.1×, and 1.2×, respectively.

Fig. 13. Amplitude values of the Φ22 and Φ30 multi-distortion invariant moments.

Table 5. Comparison of theoretical scale change multiples and experimental results

Table 6. Distance between test images (letters A and B) and various training images (×10−2)

In this part, fast imaging and target classification using circular harmonic Fourier moments are experimentally verified. A DMD with a high modulation speed is used as the spatial light modulator, which ensures the timeliness of imaging and classification. First, the fast imaging capability is verified: the target is imaged at a sampling rate of only 0.9%, with imaging quality much better than that of F-SPI at the same sampling rate. Second, image sets with rotation and scale changes are quickly and accurately classified. This method has important practical significance and can be applied to the classification of rotating targets, such as cell classification, as well as to objects with scale changes, such as vehicle license plates.

5. Summary

In this paper, circular harmonic Fourier moments are used to realize fast target imaging and classification. On the one hand, by analyzing the theory of circular harmonic Fourier moments, the circular harmonic Fourier mode light fields are constructed to achieve single-pixel imaging. Through the reconstruction of the letters A and E, it is found that imaging can be performed at extremely low sampling rates, with quality better than that of Fourier single-pixel imaging at the same sampling rate. On the other hand, the mathematical properties of the circular harmonic Fourier moments are analyzed, and image sets are quickly classified using their rotation and scale invariance. Only 10 mode light fields are needed to calculate the multi-distortion invariants of an image. In the experimental training set, five English letters rotated by different angles are used, and the same letter at different rotation angles is successfully grouped into one class. In addition, the scaling multiples of the five letters under different scaling transformations can be calculated, and different targets are distinguished well. The DMD with high modulation speed and the highly sensitive single-pixel detector used in the experiment enable fast and accurate classification. This method is expected to be applied in traffic navigation, remote sensing, medicine, the military, and other fields, meeting practical needs such as cell classification and vehicle license plate classification.

Funding

Youth Innovation Promotion Association of the Chinese Academy of Sciences (2020438).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. A. Morris, R. S. Aspden, J. E. Bell, et al., “Imaging with a small number of photons,” Nat. Commun. 6(1), 5913 (2015). [CrossRef]  

2. W. K. Yu, X. F. Liu, X. R. Yao, et al., “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2014). [CrossRef]  

3. M. P. Edgar, G. M. Gibson, R. W. Bowman, et al., “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]  

4. W. L. Chan, K. Charan, D. Takhar, et al., “A single-pixel terahertz imaging system based on compressed sensing,” Appl. Phys. Lett. 93(12), 121105 (2008). [CrossRef]  

5. R. I. Stantchev, B. Sun, S. M. Hornett, et al., “Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector,” Sci. Adv. 2(6), e1600190 (2016). [CrossRef]  

6. J. Greenberg, K. Krishnamurthy, and D. Brady, “Compressive single-pixel snapshot x-ray diffraction imaging,” Opt. Lett. 39(1), 111–114 (2014). [CrossRef]  

7. P. Thibault, M. Dierolf, A. Menzel, et al., “High-resolution scanning x-ray diffraction microscopy,” Science 321(5887), 379–382 (2008). [CrossRef]  

8. E. J. Candes and M. B. Wakin, “An Introduction To Compressive Sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]  

9. M. F. Duarte, M. A. Davenport, D. Takhar, et al., “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

10. M. J. Sun, L. T. Meng, M. P. Edgar, et al., “A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging,” Sci. Rep. 7(1), 3464 (2017). [CrossRef]  

11. Y. Xiao, L. Zhou, and W. Chen, “Direct Single-Step Measurement of Hadamard Spectrum Using Single-Pixel Optical Detection,” IEEE Photon. Technol. Lett. 31(11), 845–848 (2019). [CrossRef]  

12. X. Yu, F. Yang, B. Gao, et al., “Deep Compressive single-pixel Imaging by Reordering Hadamard Basis: A Comparative Study,” IEEE Access 8, 55773–55784 (2020). [CrossRef]  

13. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

14. H. Jiang, S. Zhu, H. Zhao, et al., “Adaptive regional single-pixel imaging based on the Fourier slice theorem,” Opt. Express 25(13), 15118–15130 (2017). [CrossRef]  

15. J. Wu, F. Yang, and L. Cao, “Resolution enhancement of long-range imaging with sparse apertures,” Opt. Lasers Eng. 155, 107068 (2022). [CrossRef]  

16. B.-L. Liu, Z.-H. Yang, X. Liu, et al., “Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform,” J. Mod. Opt. 64(3), 259–264 (2017). [CrossRef]  

17. W. Lai, G. Lei, Q. Meng, et al., “Single-pixel imaging using discrete Zernike moments,” Opt. Express 30(26), 47761–47775 (2022). [CrossRef]  

18. G. Wang, H. Deng, M. Ma, et al., “Polar coordinate Fourier single-pixel imaging,” Opt. Lett. 48(3), 743–746 (2023). [CrossRef]  

19. M. S. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images,” Light: Sci. Appl. 7(5), 18006 (2018). [CrossRef]  

20. S. Jiao, J. Feng, Y. Gao, et al., “Optical machine learning with incoherent light and a single-pixel detector,” Opt. Lett. 44(21), 5186–5189 (2019). [CrossRef]  

21. H. Wang, C. Zhu, and L. Bian, “Image-free multi-character recognition,” Opt. Lett. 47(6), 1343–1346 (2022). [CrossRef]  

22. L. Zha, D. Shi, J. Huang, et al., “Single-pixel tracking of fast-moving object using geometric moment detection,” Opt. Express 29(19), 30327–30336 (2021). [CrossRef]  

23. W. Meng, D. Shi, Z. Guo, et al., “Image-free multi-motion parameters measurement by single-pixel detection,” Opt. Commun. 535, 129345 (2023). [CrossRef]  

24. Z. Zhang, J. Ye, Q. Deng, et al., “Image-free real-time detection and tracking of fast moving object using a single-pixel detector,” Opt. Express 27(24), 35394–35401 (2019). [CrossRef]  

25. H.-P. Ren, Z.-L. Ping, W.-R.-G. Bo, et al., “Cell image recognition with radial harmonic Fourier moments,” Chinese Phys. 12(6), 610–614 (2003). [CrossRef]  

26. H. Ren, Z. Ping, W. Bo, et al., “Multidistortion-invariant image recognition with radial harmonic Fourier moments,” J. Opt. Soc. Am. A 20(4), 631–637 (2003). [CrossRef]  

27. Z. Zhang, X. Wang, G. Zheng, et al., “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]  

28. H. Zhang, H. Z. Shu, P. Haigron, et al., “Construction of a complete set of orthogonal Fourier–Mellin moment invariants for pattern recognition applications,” Image Vision Comput. 28(1), 38–44 (2010). [CrossRef]  

29. I. Avcibas, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” J. Electron. Imaging 11(2), 206 (2002). [CrossRef]  

30. J. Huang, D. F. Shi, K. E. Yuan, et al., “Computational weighted Fourier single-pixel imaging via binary illumination,” Opt. Express 26(13), 16547–16560 (2018). [CrossRef]  

31. M. S. Packianather and P. R. Drake, “Comparison of neural and minimum distance classifiers in wood veneer defect identification,” Proc. Inst. Mech. Eng., Part B 219(11), 831–841 (2005). [CrossRef]  

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (13)

Fig. 1. The partial-order circular harmonic Fourier light fields generated according to Eq. (4) and Eq. (5). The black parts represent 0 and the white parts represent 1.
Fig. 2. Fourier-Mellin light fields of orders M00 and M10 generated according to Eq. (16) and Eq. (17). The black parts represent 0 and the white parts represent 1.
Fig. 3. Reconstruction results of (a) the gray image and (b) the binary image using CHF-SPI and F-SPI under different sampling ratios.
Fig. 4. (a) RMSEs of the reconstructed gray image; (b) RMSEs of the reconstructed binary image.
Fig. 5. The binary image and the grayscale image rotated by 0°, 45°, 90°, 150°, 210° and 300°, respectively.
Fig. 6. The photographed binary image and grayscale image reduced to 0.8×, 0.61×, 0.41× and 0.21×, respectively.
Fig. 7. Schematic diagram of the SPI experimental system.
Fig. 8. Reconstruction results of (a) letter A and (b) letter E using F-SPI and CHF-SPI under different sampling ratios.
Fig. 9. (a) RMSEs of reconstructed letter A; (b) RMSEs of reconstructed letter E.
Fig. 10. The binary images A, B, C, D and E rotated by 0°, 30°, 60°, 120° and 210°, respectively.
Fig. 11. Amplitude values of the multi-distortion invariant moments Φ31 and Φ51.
Fig. 12. The binary images A, B, C, D and E scaled to 0.8×, 0.9×, 1×, 1.1× and 1.2×.
Fig. 13. Amplitude values of the multi-distortion invariant moments Φ22 and Φ30.

Tables (6)

Table 1. Amplitude values of multi-distortion invariant moments Φ11 and Φ22 at different rotation angles (×10²)
Table 2. Comparison between the multiple of image change and the scale change factor obtained by simulation
Table 3. Amplitude values of Φ11 and Φ22 invariant moments at different scale change multiples (×10²)
Table 4. Distance between test images (letters A and B) and various training images (×10⁻³)
Table 5. Comparison of theoretical scale change multiples and experimental results
Table 6. Distance between test images (letters A and B) and various training images (×10⁻²)

Equations (20)


$$\phi_{nm}=\int_{0}^{2\pi}\!\!\int_{0}^{1}f(r,\theta)\,P_{nm}^{*}(r,\theta)\,r\,\mathrm{d}r\,\mathrm{d}\theta=\int_{0}^{2\pi}\!\!\int_{0}^{1}f(r,\theta)\,T_{n}(r)\exp(-jm\theta)\,r\,\mathrm{d}r\,\mathrm{d}\theta\tag{1}$$
$$T_{n}(r)=\begin{cases}\dfrac{1}{\sqrt{r}}, & n=0\\[4pt]\sqrt{\dfrac{2}{r}}\sin[(n+1)\pi r], & n\ \text{is odd}\\[4pt]\sqrt{\dfrac{2}{r}}\cos(n\pi r), & n\ \text{is even}\end{cases}\tag{2}$$
$$P_{nm}(r,\theta)=T_{n}(r)\exp(jm\theta)\tag{3}$$
$$x=r_{m}\frac{N}{2}\cos(2\pi\theta_{n})+\frac{N}{2}\tag{4}$$
$$y=r_{m}\frac{N}{2}\sin(2\pi\theta_{n})+\frac{N}{2}\tag{5}$$
$$P_{nm}^{(c)}(x,y)=P_{nm}^{(c)}(r,\theta)=T_{n}(r)\cos(m\theta)\tag{6}$$
$$P_{nm}^{(s)}(x,y)=P_{nm}^{(s)}(r,\theta)=T_{n}(r)\sin(m\theta)\tag{7}$$
$$P_{nm}^{*}(x,y)=P_{nm}^{(c)}(x,y)-jP_{nm}^{(s)}(x,y)\tag{8}$$
$$\phi_{nm}=\int_{0}^{2\pi}\!\!\int_{0}^{1}f(r,\theta)P_{nm}^{*}(r,\theta)\,r\,\mathrm{d}r\,\mathrm{d}\theta=\left[\iint_{x^{2}+y^{2}\le 1}f(x,y)P_{nm}^{(c)+}(x,y)\,\mathrm{d}x\,\mathrm{d}y-\iint_{x^{2}+y^{2}\le 1}f(x,y)P_{nm}^{(c)-}(x,y)\,\mathrm{d}x\,\mathrm{d}y\right]-j\left[\iint_{x^{2}+y^{2}\le 1}f(x,y)P_{nm}^{(s)+}(x,y)\,\mathrm{d}x\,\mathrm{d}y-\iint_{x^{2}+y^{2}\le 1}f(x,y)P_{nm}^{(s)-}(x,y)\,\mathrm{d}x\,\mathrm{d}y\right]\tag{9}$$
$$f_{R}(r,\theta)=\sum_{n=0}^{\infty}\sum_{m=-\infty}^{\infty}\phi_{nm}T_{n}(r)\exp(jm\theta)\tag{10}$$
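As a numerical illustration of the forward transform of Eq. (1) with the radial kernel of Eq. (2), and of the inverse transform of Eq. (10): the NumPy sketch below is not the authors' implementation; the pixel-to-disk mapping, the truncation orders `n_max`/`m_max`, and all function names are our own assumptions.

```python
import numpy as np

def T(n, r):
    """Radial kernel T_n(r) of the circular harmonic Fourier basis (Eq. (2))."""
    if n == 0:
        return 1.0 / np.sqrt(r)
    if n % 2 == 1:
        return np.sqrt(2.0 / r) * np.sin((n + 1) * np.pi * r)
    return np.sqrt(2.0 / r) * np.cos(n * np.pi * r)

def polar_grid(N):
    """Polar coordinates of N x N pixel centres mapped onto the unit disk."""
    y, x = np.mgrid[0:N, 0:N] + 0.5
    u, v = (x - N / 2) / (N / 2), (y - N / 2) / (N / 2)
    return np.hypot(u, v), np.arctan2(v, u)

def chf_moments(img, n_max, m_max):
    """CHF moments phi_nm (Eq. (1)); the weight r dr dtheta equals dx dy."""
    N = img.shape[0]
    r, theta = polar_grid(N)
    disk = r <= 1.0
    r = np.clip(r, 1.0 / N, None)       # keep the 1/sqrt(r) kernel finite
    dA = (2.0 / N) ** 2                 # Cartesian area element
    phi = np.zeros((n_max + 1, 2 * m_max + 1), dtype=complex)
    for n in range(n_max + 1):
        for m in range(-m_max, m_max + 1):
            kern = T(n, r) * np.exp(-1j * m * theta) * disk
            phi[n, m + m_max] = np.sum(img * kern) * dA
    return phi

def chf_reconstruct(phi, N, m_max):
    """Inverse transform f_R = sum_nm phi_nm T_n(r) exp(j m theta) (Eq. (10))."""
    r, theta = polar_grid(N)
    disk = r <= 1.0
    r = np.clip(r, 1.0 / N, None)
    f = np.zeros((N, N), dtype=complex)
    for n in range(phi.shape[0]):
        for m in range(-m_max, m_max + 1):
            f += phi[n, m + m_max] * T(n, r) * np.exp(1j * m * theta) * disk
    return f.real
```

Since the kernel of Eq. (2) satisfies ∫₀¹ T_n²(r) r dr = 1 for every order, each basis function has squared norm 2π on the unit disk, so the truncated sum of Eq. (10) returns 2π times the object; dividing the sum by 2π normalizes the reconstruction.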
$$M_{sm}=\int_{0}^{2\pi}\!\!\int_{0}^{1}r^{s}f(r,\theta)\exp(-jm\theta)\,r\,\mathrm{d}r\,\mathrm{d}\theta\tag{11}$$
$$k_{i}=\left(M_{10}^{i}/M_{00}^{i}\right)\big/\left(M_{10}/M_{00}\right)\tag{12}$$
$$g_{i}=\left[\left(M_{10}/M_{00}\right)\big/\left(M_{10}^{i}/M_{00}^{i}\right)\right]^{2}\frac{M_{00}^{i}}{M_{00}}\tag{13}$$
$$\Phi_{nm}^{i}=\left[\int_{0}^{2\pi}\!\!\int_{0}^{k_{i}}g_{i}\,f(r/k_{i},\theta)\,T_{n}(r/k_{i})\exp(-jm\theta)\,r\,\mathrm{d}r\,\mathrm{d}\theta\right]\Big/\left(g_{i}k_{i}^{2}\right)\tag{14}$$
$$M_{sm}=\int_{0}^{2\pi}\!\!\int_{0}^{1}r^{s}\exp(-jm\theta)\,r\,\mathrm{d}r\,\mathrm{d}\theta\tag{15}$$
$$M_{00}=\int_{0}^{2\pi}\!\!\int_{0}^{1}r\,\mathrm{d}r\,\mathrm{d}\theta=\iint_{x^{2}+y^{2}\le 1}1\,\mathrm{d}x\,\mathrm{d}y\tag{16}$$
$$M_{10}=\int_{0}^{2\pi}\!\!\int_{0}^{1}r^{2}\,\mathrm{d}r\,\mathrm{d}\theta=\iint_{x^{2}+y^{2}\le 1}\sqrt{x^{2}+y^{2}}\,\mathrm{d}x\,\mathrm{d}y\tag{17}$$
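The scale change multiple k_i of Eq. (12) and the intensity factor g_i of Eq. (13) follow directly from the two Fourier-Mellin moments of Eqs. (16)-(17). A minimal NumPy sketch, in which the grid mapping and the illustrative disk objects are our own assumptions rather than the paper's test images:

```python
import numpy as np

def mellin_moments(img):
    """Fourier-Mellin moments M00 and M10 (Eqs. (16)-(17)) of an N x N image
    sampled on the unit disk; r dr dtheta equals the Cartesian dx dy."""
    N = img.shape[0]
    y, x = np.mgrid[0:N, 0:N] + 0.5
    r = np.hypot((x - N / 2) / (N / 2), (y - N / 2) / (N / 2))
    disk = r <= 1.0
    dA = (2.0 / N) ** 2
    M00 = np.sum(img * disk) * dA
    M10 = np.sum(img * r * disk) * dA
    return M00, M10

def scale_factors(ref, test):
    """Scale multiple k_i (Eq. (12)) and intensity factor g_i (Eq. (13))
    of a test image relative to a reference image."""
    M00, M10 = mellin_moments(ref)
    M00i, M10i = mellin_moments(test)
    k = (M10i / M00i) / (M10 / M00)
    g = ((M10 / M00) / (M10i / M00i)) ** 2 * (M00i / M00)
    return k, g

# illustrative objects: a uniform disk of radius 0.3 and the same disk scaled 1.5x
N = 512
y, x = np.mgrid[0:N, 0:N] + 0.5
r = np.hypot((x - N / 2) / (N / 2), (y - N / 2) / (N / 2))
ref = (r < 0.30).astype(float)
test = (r < 0.45).astype(float)
k, g = scale_factors(ref, test)   # k is close to 1.5
```

For a uniform disk of radius a, M10/M00 = 2a/3, so the ratio of mean radii recovers the scale multiple exactly; here the 1.5× scaled disk gives k ≈ 1.5 and, since the intensity is unchanged, g ≈ 1.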
$$\sigma=\sqrt{\frac{\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}}{n}}\tag{18}$$
$$d(X)=\sum_{i=1}^{m}\left|X-W_{i}\right|\tag{19}$$
$$d(X,i)=\sqrt{\left(|\Phi_{31}|-|\Phi_{31}^{(i)}|\right)^{2}+\left(|\Phi_{51}|-|\Phi_{51}^{(i)}|\right)^{2}},\quad i=A_{0},B_{0},C_{0},D_{0},E_{0}\tag{20}$$
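The minimum distance classifier of Eqs. (19)-(20) assigns a test object to the training class whose invariant moment amplitudes are closest in Euclidean distance. A short sketch with hypothetical (|Φ31|, |Φ51|) amplitudes, since the measured values are reported in Tables 4 and 6:

```python
import math

def classify(feat, prototypes):
    """Minimum-distance classification with the Euclidean metric of Eq. (20):
    feat = (|Phi31|, |Phi51|) of the test object,
    prototypes = {label: (|Phi31|, |Phi51|)} of the training objects."""
    return min(prototypes, key=lambda lab: math.hypot(feat[0] - prototypes[lab][0],
                                                      feat[1] - prototypes[lab][1]))

# hypothetical invariant-moment amplitudes for the training letters
prototypes = {'A': (0.12, 0.05), 'B': (0.30, 0.22), 'C': (0.08, 0.41),
              'D': (0.25, 0.09), 'E': (0.17, 0.33)}
print(classify((0.11, 0.06), prototypes))  # nearest prototype is 'A'
```

Because the moment amplitudes are rotation invariant, this decision rule classifies a rotated object without reconstructing its image.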