Optica Publishing Group

Simplified method for small infrared target detection against gentle background movement

Open Access

Abstract

A small infrared target against a strongly moving background can be detected by computing the optical flow field to estimate and compensate for the background motion, but for a gently moving background this optical flow computation is needlessly expensive. This paper therefore proposes a simplified method for detecting a small infrared target against a gently moving background, under the assumption that the background undergoes no rotational movement and no perspective change. The method represents the background movement as translations in the horizontal and vertical directions, estimates those translations by minimizing the variance of the gray-level difference between the previous and subsequent frames, and then applies frame differencing and morphological processing to detect the moving small target. Results on measured sequences show that the proposed method detects infrared targets against gentle background movement while greatly reducing computational complexity.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Infrared search and track systems are widely applied in various fields, especially in precision-guided weapons. Detecting small moving infrared targets, however, is difficult and challenging work [1]. On one hand, an infrared target carries no shape or structure information, and the image sequence contains both background motion and the small moving target. On the other hand, small infrared targets are usually submerged in background clutter and heavy noise with a low signal-to-noise ratio (SNR). Many scholars have worked in this area, and the resulting algorithms for small infrared target detection can be classified into two categories: detection based on a single frame and detection based on sequential frames.

Single-frame detection methods fall into two classes: background estimation methods and target extraction methods. The first class enhances targets by estimating and filtering the background, using, for example, max-mean and max-median filters [2], the top-hat transform [3], the least mean squares filter [4], and the non-local means filter [5]. In recent years, researchers have proposed novel methods that extract the features of small, dim targets on the basis of digital filtering. Hu [6] proposes an anisotropic spatial-temporal fourth-order diffusion filter for background prediction; the estimated background is subtracted from the original image to detect the target. The directional max-median filter [7] is developed for pre-processing, and a background-suppression filtering template is applied to the de-noised image to highlight the target. Because these methods predict the background with fixed-scale masks, they cannot efficiently detect targets of changing size in real applications, and they are sensitive to varied backgrounds with heavy clutter. A multiscale local homogeneity measure is presented in [8]: first, the intra-patch homogeneity of the target itself and the inter-patch heterogeneity between the target and the local background region are integrated to enhance the saliency of the small target; second, a multiscale measure based on local regions is proposed to obtain the most appropriate response. Yun [9] proposes an algorithm with two stages: in the first, spectrum scale space is used as a pre-processing procedure to obtain multi-scale saliency maps; in the second, Gabor wavelets suppress the high-frequency noise remaining in the optimal saliency map and match the size and direction of the small target at different scales and angles.
In [10], a histogram rightward cyclic-shift binarization first transforms the histogram curve according to a self-adaptive gray-level transformation equation, and background subtraction based on Gaussian filtering then generates an enhanced image. Another type of algorithm detects targets using machine learning theory. The method in [11] combines the fractional Fourier transform with high-order statistic filtering. Yan [12] advocates using a local steering kernel to encode the infrared image. Long [13] formulates infrared target detection as a multi-classification problem and proposes a method based on a multi-label generative model.

Researchers have also carried out much innovative work on target detection using infrared image sequences. For example, inaccuracies in sensor readings are eliminated by a morphological filter [14]. Based on singular value decomposition and an improved kernelized correlation filter, Kun [15] adopts a mutual-consistency-guided spatial-cue-combination method to capture regions with obvious luminance contrast and contour features. Meanwhile, a multi-frame symmetric difference approach is proposed in [16] to discriminate salient moving regions of interest from background motion. The work in [17] proposes a method based on local region similarity difference and then extracts the regions containing true targets and suspected targets.

Although these methods achieve good performance against a static background, they cannot detect targets against a nonstationary background. To overcome the problem of tracking target motion under violent camera displacement, a concept based on particle filtering with an optical flow clustering method is presented in [18]; the target motion is estimated by clustering and analyzing the optical flow. To avoid the problems of the gradient method, Koji [19] proposed a tracking method under a rotating observer that uses one-dimensional optical flow with a mapping that converts the motion of a stationary environment object into a linear signal trajectory.

The method of [19] is effective against a rotational-motion background and has been used in practical applications with significant results; against a gently moving background, however, it cannot deliver the required speed. To cope with this situation, we propose a simplified target detection method for gentle background movement. The method represents the background motion as horizontal and vertical translations and estimates them by minimum variance estimation (MVE) of the gray-level difference between two frames. Experimental results indicate that the proposed method achieves the same detection precision as the Lucas-Kanade (LK) optical flow method [18] at lower computational cost. The remainder of this paper is organized as follows. The simplified target detection method is presented in Section 2. Section 3 conducts experiments on four infrared image sequences to verify the effectiveness of the simplified method. Section 4 concludes the paper.

2. Materials and methods

2.1 Estimating and reconstructing the background vector

If the backgrounds of two adjacent frames containing a small target move gently, with no rotational movement and no perspective change, then the background movement can be approximated as translational movement in the horizontal and vertical directions. Let the optical flow of the entire background movement be u in the horizontal direction and v in the vertical direction. We solve for u and v by minimizing the sum of squared differences between the gray levels of corresponding pixels of the two frames ${I_k}$ and ${I_{k + 1}}$, namely:

$$E({u,v} )= \sum\limits_{\scriptstyle({x,y} )\in {I_k}\atop \scriptstyle({x - u,y - v} )\in {I_{k + 1}}} {{{|{{I_k}({x,y} )- {I_{k + 1}}({x - u,y - v} )} |}^{\textrm{2}}}}$$
where ${I_k}({x,y} )$ is the gray level at point $({x,y} )$ of the $k$th frame of the image sequence I.

If the background movement is gentle, then the movement of pixels between two frames does not exceed n pixels in the horizontal direction and m pixels in the vertical direction. Therefore, u and v are integers with $u \in [{ - n,n} ]$ and $v \in [{ - m,m} ]$, which bounds the search for the background movement. The estimates of u and v are then:

$$({\hat{u},\hat{v}} )= \mathop {\arg \min }\limits_{u = [{ - n,n} ],v = [{ - m,m} ]} E({u,v} )$$
In practice, the values of n and m are determined from the amplitude of the inter-frame background movement of a video and its sampling time. The pixel movement velocities in the horizontal and vertical directions are usually taken to be 250 pixels per second.
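The bounded exhaustive search of Eq. (2) can be sketched in NumPy as follows. This is a minimal illustration, not the authors' code; the function name `estimate_shift` and the frame handling are our own, and the search simply evaluates the squared gray-level difference over the overlapping region for every integer shift $(u, v)$:

```python
import numpy as np

def estimate_shift(I_k, I_k1, n=5, m=5):
    """Exhaustively search integer shifts (u, v) in [-n, n] x [-m, m] that
    minimize the sum of squared gray-level differences between I_k(x, y)
    and I_{k+1}(x - u, y - v) over their overlapping region (Eqs. (1)-(2))."""
    best_err, best_uv = np.inf, (0, 0)
    H, W = I_k.shape
    for u in range(-n, n + 1):
        for v in range(-m, m + 1):
            # valid region where both I_k(x, y) and I_{k+1}(x-u, y-v) exist
            x0, x1 = max(0, u), min(W, W + u)
            y0, y1 = max(0, v), min(H, H + v)
            a = I_k[y0:y1, x0:x1].astype(np.float64)
            b = I_k1[y0 - v:y1 - v, x0 - u:x1 - u].astype(np.float64)
            err = np.sum((a - b) ** 2)
            if err < best_err:
                best_err, best_uv = err, (u, v)
    return best_uv
```

The cost is a fixed $(2n+1)(2m+1)$ image differences per frame pair, with no gradients or iterations.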

2.2 Target detection

After estimating the optical flow vectors $\hat{u}$ and $\hat{v}$ of the background movement, we reconstruct the $(k+1)$th frame ${I_{k + 1}}$ as:

$${I^{\prime}_{k + 1}}({x,y} )= {I_{k + 1}}({x - \hat{u},y - \hat{v}} )$$
After obtaining the reconstructed image, we difference ${I^{\prime}_{k + 1}}$ and ${I_k}$ to obtain the difference image ${I_{diff}}$:
$${I_{diff}}({x,y} )= {I_k}({x,y} )- {I^{\prime}_{k + 1}}({x,y} )$$
We then binarize the difference image to obtain the binarized image ${I_{diff2}}$:
$${I_{diff2}}({x,y} )= \left\{ \begin{array}{ll} 255&{I_{diff}}({x,y} )\ge {T_d}\\ 0&{I_{diff}}({x,y} )< {T_d} \end{array}, {T_d} = 0\textrm{.35}\max ({I_{diff}}(x,y))\right.$$
where ${T_d}$ is the binarization threshold.
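The differencing and adaptive thresholding of Eqs. (4)–(5) amount to a few NumPy lines. This sketch uses our own illustrative names (`difference_and_binarize`, `ratio`); the 0.35 factor is the one given in Eq. (5):

```python
import numpy as np

def difference_and_binarize(I_k, I_k1_recon, ratio=0.35):
    """Difference the current frame against the motion-compensated next
    frame (Eq. (4)), then binarize at T_d = ratio * max(I_diff) (Eq. (5))."""
    diff = I_k.astype(np.float64) - I_k1_recon.astype(np.float64)
    T_d = ratio * diff.max()
    # pixels at or above the threshold become 255 (target candidates)
    return np.where(diff >= T_d, 255, 0).astype(np.uint8)
```

Because the threshold scales with the maximum of the difference image, a bright moving target survives while the compensated background, whose residual difference is small, is suppressed.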

After binarizing the image, we perform a morphological closing operation to obtain a complete target image:

$${I_{diff3}} = {I_{diff2}}\cdot B = ({{I_{diff2}} \oplus B} )\Theta B$$
where B is the structuring element selected for the closing operation.

Finally, we use an opening operation to remove isolated noise points in the image:

$${I_{diff4}} = {I_{diff3}} \circ B = ({{I_{diff3}}\Theta B} )\oplus B$$
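The closing of Eq. (6) followed by the opening of Eq. (7) can be sketched with `scipy.ndimage` (a stand-in for whatever morphology routines the authors used); the 3x3 closing element and 2x2 opening element below are the ones stated in Section 3:

```python
import numpy as np
from scipy import ndimage

def clean_mask(binary_img):
    """Morphological post-processing: a closing fills gaps inside the
    detected target (Eq. (6)), then an opening removes isolated noise
    points (Eq. (7))."""
    mask = binary_img > 0
    B_close = np.ones((3, 3), dtype=bool)  # structuring element for closing
    B_open = np.ones((2, 2), dtype=bool)   # structuring element for opening
    mask = ndimage.binary_closing(mask, structure=B_close)
    mask = ndimage.binary_opening(mask, structure=B_open)
    return (mask * 255).astype(np.uint8)
```

The closing consolidates a fragmented target blob before the opening strips single-pixel noise, so the order of the two operations matters.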

2.3 Target position determination

The position of the small target detected through the steps of Section 2.2 is its position in the reconstructed $(k+1)$th frame. That position is calculated from the white region in the detection result, namely:

$${x^{\prime}_{k + 1}} = \frac{1}{2}\left( {\mathop {\max }\limits_{I({x_{k + 1}^i,y_{k + 1}^i} )= 1} x_{k + 1}^{\prime i} + \mathop {\min }\limits_{I({x_{k + 1}^i,y_{k + 1}^i} )= 1} x_{k + 1}^{\prime i}} \right)$$
$${y^{\prime}_{k + 1}} = \frac{1}{2}\left( {\mathop {\max }\limits_{I({x_{k + 1}^i,y_{k + 1}^i} )= 1} y_{k + 1}^{\prime i} + \mathop {\min }\limits_{I({x_{k + 1}^i,y_{k + 1}^i} )= 1} y_{k + 1}^{\prime i}} \right)$$
Then the position $({{x_{k + 1}},{y_{k + 1}}} )$ of the target in the original $(k+1)$th frame, before reconstruction, is as follows:
$${x_{k + 1}} = {x^{\prime}_{k + 1}} + \hat{u}$$
$${y_{k + 1}} = {y^{\prime}_{k + 1}} + \hat{v}$$
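Eqs. (8)–(11) reduce to a bounding-box centre plus the estimated shift. A minimal sketch (the name `target_position` is ours, and the mask is assumed to contain exactly the detected target region):

```python
import numpy as np

def target_position(mask, u_hat, v_hat):
    """Centre of the bounding box of the white region in the detection mask
    (Eqs. (8)-(9)), mapped back to the original (k+1)th frame by adding the
    estimated background shift (Eqs. (10)-(11))."""
    ys, xs = np.nonzero(mask)
    x_recon = (xs.max() + xs.min()) / 2.0
    y_recon = (ys.max() + ys.min()) / 2.0
    return x_recon + u_hat, y_recon + v_hat
```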

2.4 Algorithm details and explanations

The small moving target detection algorithm for a gentle-motion background is shown in Table 1.


Table 1. Details of the proposed method

In step 1, the gentle background movement is estimated by Eq. (2). The method of Koji [19] is effective against a rotational background and has been applied in practice with significant results, but against a gently moving background it computes the background movement in an unnecessarily complicated way. The optical flow equation is obtained by differential operations:

$$\frac{{\partial I}}{{\partial x}}u + \frac{{\partial I}}{{\partial y}}v + \frac{{\partial I}}{{\partial t}} = 0$$
Equation (12) is underdetermined for a single pixel; over a window of pixels it becomes an overdetermined system. The LK method assumes that the optical flow vectors of all pixels in a small local window $\omega$ are approximately the same and estimates the horizontal flow u and the vertical flow v by the least squares (LS) method. The error E of the optical flow equations over the window $\omega$ can be expressed as:
$$E = {\sum\limits_\omega {\left( {\frac{{\partial I}}{{\partial x}}u + \frac{{\partial I}}{{\partial y}}v + \frac{{\partial I}}{{\partial t}}} \right)} ^2}$$
To minimize $E$, we set $\partial E/\partial u = 0$ and $\partial E/\partial v = 0$, which gives the background optical flow vector estimate ${{\begin{bmatrix} {\hat{u}}&{\hat{v}} \end{bmatrix}}^{\textrm{T}}}$ for the window:
$$\left[ {\begin{array}{{c}} {\hat{u}}\\ {\hat{v}} \end{array}} \right] = {\left[ {\begin{array}{{cc}} {\sum\limits_\omega {\frac{{\partial I}}{{\partial x}}\frac{{\partial I}}{{\partial x}}} }&{\sum\limits_\omega {\frac{{\partial I}}{{\partial x}}\frac{{\partial I}}{{\partial y}}} }\\ {\sum\limits_\omega {\frac{{\partial I}}{{\partial x}}\frac{{\partial I}}{{\partial y}}} }&{\sum\limits_\omega {\frac{{\partial I}}{{\partial y}}\frac{{\partial I}}{{\partial y}}} } \end{array}} \right]^{ - 1}}\left[ {\begin{array}{{c}} { - \sum\limits_\omega {\frac{{\partial I}}{{\partial x}}\frac{{\partial I}}{{\partial t}}} }\\ { - \sum\limits_\omega {\frac{{\partial I}}{{\partial y}}\frac{{\partial I}}{{\partial t}}} } \end{array}} \right]$$
In the program implementation, the Newton iteration method must be applied to obtain each optical flow solution from Eq. (14). The details of the LK method are shown in Table 2; step 1 of the proposed method is clearly simpler than the LK method.
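For contrast with the proposed exhaustive search, Eq. (14) for a single window can be sketched as follows. This is a bare single-window LS solve, without the pyramid and Newton iterations of the full method in [18]; the function name `lk_flow` and the gradient discretization (`np.gradient`) are our choices:

```python
import numpy as np

def lk_flow(I1, I2):
    """Single-window Lucas-Kanade estimate (Eq. (14)): least-squares solution
    of the optical flow constraint Ix*u + Iy*v + It = 0 over the whole patch."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    Iy, Ix = np.gradient(I1)   # spatial gradients (rows are y, columns are x)
    It = I2 - I1               # temporal gradient
    # normal equations of Eq. (14): A [u v]^T = b
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    u_hat, v_hat = np.linalg.solve(A, b)
    return u_hat, v_hat
```

Even this stripped-down version needs gradient images and a linear solve per window, which is where the cost gap with the integer-shift search of Eq. (2) comes from.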


Table 2. Details of the LK method

3. Experiments and analysis

3.1. Experiment sequences

To evaluate the performance of the proposed method, experiments were carried out on the following four infrared image sequences with different gently moving backgrounds, captured at a frame rate of 50 fps. The details of the four sequences used in the experiments are described in Table 3.


Table 3. Details of the four sequences

3.2 Experiment

The four image sequences have gentle background movement. The time interval between two frames is 0.02 s, and the amplitudes of the horizontal and vertical background movements are within 5 pixels; therefore, in Eq. (2), n = m = 5. The structuring element for the closing operation is:

$$\left[ {\begin{array}{{ccc}} 1&1&1\\ 1&1&1\\ 1&1&1 \end{array}} \right]$$
The structuring element for the opening operation is:
$$\left[ {\begin{array}{{cc}} 1&1\\ 1&1 \end{array}} \right]$$
The infrared target detection results of the method described in Section 2 are shown in Fig. 1, where all the resulting images have been differenced, binarized, and processed with the closing and opening operations.


Fig. 1. The target detection results of the simplified method.


As Fig. 1 shows, the target positions detected by the proposed method and by the LK method [18] are equally accurate; both detect well against gentle background movement. Next, we compare the computation time.

3.3 Computation comparison

The following computing environment was used: MATLAB 2015b; an Intel i5 3.0 GHz processor; 8.0 GB of memory. Table 4 compares the two-frame detection time of the pyramidal LK method [18] with that of the proposed method.


Table 4. Time consumption results for quantitative evaluation

Table 4 shows that the proposed method consumes less time than the LK method [18]. The detection results show that this method avoids the optical flow vector computation required by the LK method, is concise, and detects well against gentle background movement. Taking Seq 1 as an example, the proposed method takes 2.4219 s, compared with 44.9844 s for the LK method. Similar results hold for Seq 2, Seq 3, and Seq 4.

Furthermore, all 1594 frames of Seq 4 were processed. Figure 2 compares the time each method takes to estimate the gentle background movement between two adjacent frames. The average computing time between two frames is 14.2944 s for the LK method and 0.5254 s for the proposed method.


Fig. 2. Time consumption results for Seq 4.


4. Conclusions

Infrared target detection that relies on computing optical flow fields to estimate background movement has a large computational complexity. This paper therefore proposed a simplified method for detecting targets against gentle background movement.

(1) The proposed method uses the minimum variance of the gray-level difference between the previous and subsequent frames to estimate the background movement in the horizontal and vertical directions; it then uses frame differencing and morphological processing to detect a moving small target.

(2) The proposed method detects infrared targets against gentle background movement and avoids the optical flow vector computation and its iterative operations, so the computational complexity is greatly reduced.

(3) Against a rotational background, or a background with large translational movement, the method may produce errors and fail to detect the target. In future work, we plan to extend the proposed simplified method to rotational backgrounds.

Funding

National Natural Science Foundation of China (NSFC) (61601505, 61603297).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. Y. Chen, B. Song, D. Wang, and L. Guo, “An effective infrared small target detection method based on the human visual attention,” Infrared Phys. Technol. 95, 128–135 (2018). [CrossRef]  

2. S. Deshapande, M. H. Er, R. Venkateswarlu, and P. Chan, “Max-mean and max median filters for detection of small targets,” Proc. SPIE 3809, 74–83 (1999). [CrossRef]  

3. L. Deng, H. Zhu, Q. Zhou, and Y. Li, “Adaptive top-hat filter based on quantum genetic algorithm for infrared small target detection,” Multimed. Tool Appl. 77(9), 10539–10551 (2018). [CrossRef]  

4. T.-W. Bae, F. Zhang, and I.-S. Kweon, “Edge directional 2D LMS filter for infrared small target detection,” Infrared Phys. Technol. 55(1), 137–145 (2012). [CrossRef]  

5. J. Hu, Y. Yu, and F. Liu, “Small and dim target detection by background estimation,” Infrared Phys. Technol. 73, 141–148 (2015). [CrossRef]  

6. H. Zhu, Y. Guan, L. Deng, Y. Li, and Y. Li, “Infrared moving point target detection based on an anisotropic spatial-temporal fourth-order diffusion filter,” Comput. Electr. Eng. 68, 550–556 (2018). [CrossRef]

7. M. Wan, G. Gu, E. Cao, X. Hu, W. Qian, and K. Ren, “In-frame and inter-frame information based infrared moving small target detection under complex cloud backgrounds,” Infrared Phys. Technol. 76, 455–467 (2016). [CrossRef]  

8. J. Nie, S. Qu, Y. Wei, L. Zhang, and L. Deng, “An infrared small target detection method based on multiscale local homogeneity measure,” Infrared Phys. Technol. 90, 186–194 (2018). [CrossRef]  

9. Y.-H. Xin, J. Zhou, and Y.-S. Chen, “Dual multi-scale filter with SSS and GW for infrared small target detection,” Infrared Phys. Technol. 81, 97–108 (2017). [CrossRef]  

10. B. Wang, L. Dong, M. Zhao, and W. Xu, “Fast infrared maritime target detection: Binarization via histogram curve transformation,” Infrared Phys. Technol. 83, 32–44 (2017). [CrossRef]

11. A. Zhou, W. Xie, and J. Pei, “Infrared maritime target detection using the high order statistic filtering in fractional Fourier domain,” Infrared Phys. Technol. 91, 123–136 (2018). [CrossRef]  

12. Y. Li and Y. Zhang, “Robust infrared small target detection using local steering kernel reconstruction,” Pattern Recogn. 77, 113–125 (2018). [CrossRef]  

13. L. Wang, Z. Lin, and X. Deng, “Infrared point target detection based on multi-label generative MRF model,” Infrared Phys. Technol. 83, 188–194 (2017). [CrossRef]  

14. A. P. Tzannes and D. H. Brooks, “Temporal filters for point target detection in IR imagery,” Proc. SPIE 3061, 508–520 (1997). [CrossRef]  

15. K. Qian, H. Zhou, S. Rong, B. Wang, and K. Cheng, “Infrared dim-small target tracking via singular value decomposition and improved Kernelized correlation filter,” Infrared Phys. Technol. 82, 18–27 (2017). [CrossRef]  

16. X. Wang, Y. Zhang, and C. Ning, “A novel visual saliency detection method for infrared video sequences,” Infrared Phys. Technol. 87, 91–103 (2017). [CrossRef]  

17. H. Qi, B. Mo, F. Liu, Y. He, and S. Liu, “Small infrared target detection utilizing Local Region Similarity Difference map,” Infrared Phys. Technol. 71, 131–139 (2015). [CrossRef]  

18. C. M. Huang and M. H. Hung, “Target motion compensation with optical flow clustering during visual tracking,” in 2014 IEEE 11th International Conference on Networking, Sensing and Control (ICNSC), Miami, FL, USA, 7–9 April 2014, pp. 96–101 (2014).

19. K. Kinoshita, M. Enokidani, M. Izumida, and K. Murakami, “Tracking of a moving object using one-dimensional optical flow with a rotating observer,” in 2006 9th International Conference on Control, Automation, Robotics and Vision, Singapore, 5–8 Dec. 2006, pp. 1–6 (2006).
