Optica Publishing Group

Reconstruction of target image from inhomogeneous degradations through backscattering medium images using self-calibration

Open Access

Abstract

Target images recorded with range-gated laser imaging systems and conventional passive imaging systems through rapidly changing turbid media inevitably suffer from inhomogeneous degradations. These degradations make the images partly or entirely different from their true targets and ultimately hinder target identification. To date, such inhomogeneous degradations cannot be fully eliminated even with adaptive optical methods or purely mathematical signal-improvement techniques. Herein, we demonstrate an image restoration method that exploits the intrinsic physical evolution of the light beams, based on backscattering images of a turbid medium. The corresponding signal processing algorithms are applied to restore the true target images in the presence of rapidly changing inhomogeneous degradations. This technique would benefit target imaging through moving cloud/mist in air and flowing muddy masses under water.

© 2017 Optical Society of America

1. Introduction

Imaging through optically inhomogeneous turbid media is indispensable in many fields such as astronomical observation, underwater observation, biological imaging, and security surveillance [1–5]. Range-gated laser imaging [6] is a prominent approach for suppressing and eliminating the backscattered photons from turbid media with a view to improving the signal-to-noise ratio (SNR) of the target images. The applications of this technique include, but are not limited to, underwater detection [7–9], target recognition [10,11], and three-dimensional imaging [12–14]. However, the suppressed backscattered photons from the turbid medium clusters carry abundant physical information about the medium and the light beams. As a result, the inhomogeneous degradations of range-gated target images cannot be entirely eliminated by blind digital image restoration algorithms such as filtering [15–19] and total variation methods [20–24]. Imaging methods such as ghost imaging [25,26], wavefront shaping [27–29], and speckle correlations [30,31] are likewise unable to deliver high-quality target images through rapidly changing inhomogeneous turbid media such as moving cloud/mist in air and flowing muddy masses under water.

In a previous report, we proposed a longitudinal laser tomography (LLT) system [32] that captures the target images together with images of a nonuniform attenuator (an opaque solid layer) and can thereby eliminate light beam degradation and improve the quality of the target image. In that method, the degradation was mainly caused by the nonuniform attenuator, and the backscattering images were produced by photons backscattered from a turbid medium lying close to the target. Consequently, the method does not work under common turbid medium conditions, in which the turbid medium is not distributed in the vicinity of the target. Here, by broadening the working conditions of the LLT system, we propose an approach to estimate the degradation caused by turbid medium clusters and retrieve the true target images under common turbid medium conditions.

2. Principles of the method

As shown in Fig. 1, using the LLT system comprising a nanosecond pulsed laser and an intensified charge-coupled device (ICCD) camera, the raw images—including the backscattering images of the turbid medium clusters and the target images—are captured. The signal timings for the LLT system are shown in Fig. 1. By suitably tuning the gating ranges of the ICCD camera, various groups of photons from different spatial layers on the optical path are captured to form the raw images. The thickness of each layer is determined by the corresponding laser pulse duration and camera exposure. It should be noted that the gated backscattering images contain physical information about the turbid medium. As far as possible, these images should be collected simultaneously with the target images for rapidly changing turbid media, which means that all the raw images should be captured within a short time.

Fig. 1 Schematic diagram of the longitudinal laser tomography experimental setup. The gate delay satisfies $\tau_n = n\tau_1$, where $\tau_1$ is the gate step and $n = 1, 2, 3$. For clarity, yellow and blue are used to indicate the backscattering pulse and the target-reflecting pulse, respectively, although their wavelengths are the same as that of the incident pulse. See Visualization 1 for details.

2.1 Modeling of target image degradation

While propagating through a turbid medium, some of the photons interact with the medium, and are eventually scattered and absorbed. This inevitably degrades the quality of the light beam along with the influences of ordinary diffraction and interference. We introduce a degradation matrix V, which can be used to describe the target image degradation model as shown in Eq. (1) [32].

$$u_0 = Vu + n. \quad (1)$$

Here, $u_0$ denotes the captured raw target image, $u$ denotes the ideal target image (free from degradation), and $n$ denotes the additive noise imposed on the image. According to the basic physics of image formation, the degradation matrix $V$, which multiplies the ideal target image $u$ pixel by pixel, encodes the evolution of the light beam through the turbid medium cluster. Since the additive noise $n$ is usually removed by digital image processing methods, the ideal target image can be restored if a suitable degradation matrix $V$ is applied.
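The degradation model of Eq. (1) can be sketched numerically. In the minimal Python example below, the ideal image, degradation matrix, and noise are all hypothetical stand-ins; it only illustrates that, once $n$ is removed and $V$ is known, the ideal image is recovered by pixel-wise division.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 "ideal" target image u and a degradation matrix V
# whose elements lie in (0, 1]; both are illustrative stand-ins.
u = np.full((4, 4), 100.0)                       # uniform ideal target
V = rng.uniform(0.3, 1.0, size=(4, 4))           # pixel-wise attenuation
n = rng.normal(0.0, 1.0, size=(4, 4))            # additive noise

# Eq. (1): the observed image is the pixel-wise product V * u plus noise.
u0 = V * u + n

# With the noise removed and V known exactly, the ideal image is
# recovered by pixel-wise division.
u_hat = (u0 - n) / V
```

In practice $V$ is unknown and must be estimated from the backscattering images, which is the subject of the next subsection.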

2.2 Establishment of the degradation matrix

According to the Light Detection and Ranging (LIDAR) principles [33–36], for middle-range and long-range LLT, usually with a range of hundreds or thousands of meters, some of the photons reflected from the target with reflectivity $\rho_t$ will be captured by a pixel $(i,j)$ of the ICCD camera [32],

$$I_{tar}(i,j) = \frac{E_t D^2 d_{pix}^2 \rho_t(X_t, Y_t, Z_t)}{\theta^2 f^2 Z_t^2} \exp\left[-2\int_0^{Z_t} K_e(x,y,z)\,dz\right]. \quad (2)$$

Here, $I_{tar}(i,j)$ is the target-reflected energy received by the pixel $(i,j)$, $Z_t$ is the distance between the ICCD camera and the target, $E_t$ is the energy of a single pulse used in the measurement, $\theta$ is the divergence angle of the laser beam, $d_{pix}$ is the pixel size, $D$ is the aperture width of the receiver, $f$ is the focal length of the optical receiving system, and $K_e(x,y,z)$ is the attenuation coefficient of the turbid medium at position $(x,y,z)$, which accounts for the energy lost to scattering and absorption in the turbid medium. Moreover, the reflection point position $(X_t, Y_t, Z_t)$ and the pixel coordinates $(i,j)$ satisfy the projective transformation $i\,d_{pix} = f X_t/Z_t$ and $j\,d_{pix} = f Y_t/Z_t$, as shown in Fig. 2. The target image without the influence of the turbid medium is defined as the ideal target image:

Fig. 2 Coordinate system of the longitudinal laser tomography setup.

$$I_{ideal}(i,j) = \frac{E_t D^2 d_{pix}^2 \rho_t(X_t, Y_t, Z_t)}{\theta^2 f^2 Z_t^2}. \quad (3)$$

Obviously, the quality of the ideal target image depends on the target's reflectivity distribution and the parameters of the transceiver system rather than on the atmospheric conditions. For the pixel $(i,j)$ of the raw target image, the degradation of the corresponding light beam intensity produced by the turbid medium can be represented as $V(i,j)$, an element of the degradation matrix $V$:

$$V(i,j) = \frac{I_{tar}(i,j)}{I_{ideal}(i,j)} = \exp\left[-2\int_0^{Z_t} K_e(x,y,z)\,dz\right]. \quad (4)$$

The key point in constructing the degradation matrix is to obtain the values of $\int_0^{Z_t} K_e(x,y,z)\,dz$ (or the attenuation coefficient $K_e(x,y,z)$ and the corresponding distribution parameters of the turbid medium).

2.3 Obtaining key information about the degradation matrix by applying sub-window segmentation

First, in order to obtain each element of the degradation matrix, the LIDAR ratio [36], i.e., the extinction-to-backscattering ratio $S = K_e/\beta$, is extracted from the LLT images, where $\beta$ is the backscattering coefficient of the turbid medium. The LIDAR ratio $S$ depends on the complex refractive index and the particle size distribution of the turbid medium rather than on the number density of the particles [36]. As shown in Fig. 1, when the gate range and the camera exposure time are tuned to cover the entire turbid medium, the relationship $\int_0^{Z_t} \beta(x,y,z)\,dz \approx \rho_S(x,y)$ holds, where $\rho_S(x,y)$ can be approximated as the total equivalent reflectivity of the turbid medium cluster. Therefore, each element of the degradation matrix can be simplified as:

$$V(i,j) = \exp[-2S\rho_S(x,y)]. \quad (5)$$

Herein, to ensure timeliness of the extraction of $\rho_S$ and $S$, we put forward an estimation approach based on calibration using an observed target image and a backscattering image of the turbid medium. Basically, the gray value of the backscattering image is proportional to the total equivalent reflectivity of the turbid medium clusters, i.e., $I_S(i,j) = C E_S \rho_S(x,y)/Z_S^2$, where $E_S$ is the single-pulse energy in the measurement, $Z_S$ is the distance of the medium cluster from the camera, and $C$ is a constant determined by the transceiver system parameters. Therefore, Eq. (5) can be rewritten as:

$$V(i,j) = \exp\left[-2S\frac{Z_S^2}{C E_S} I_S(i,j)\right] = \exp[-2K_S I_S(i,j)]. \quad (6)$$

Here, $K_S = S Z_S^2/(C E_S)$ contains both the LIDAR ratio $S$ and the transceiver system parameters.
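Given a calibrated $K_S$, building the degradation matrix from a backscattering image is a single element-wise exponential, as the short Python sketch below shows. The gray values in `I_S` are hypothetical; `K_S` uses the value reported later for the first experimental image group.

```python
import numpy as np

K_S = 0.0043  # calibration parameter reported for the first image group

# Hypothetical denoised backscattering image I_S (gray values);
# brighter pixels correspond to a denser turbid medium along that
# line of sight.
I_S = np.array([[  0.0,  50.0],
                [100.0, 200.0]])

# Eq. (6): Beer-Lambert-style attenuation per pixel,
# V(i, j) = exp(-2 * K_S * I_S(i, j)).
V = np.exp(-2.0 * K_S * I_S)
```

A pixel with no backscattering signal ($I_S = 0$) yields $V = 1$, i.e., no attenuation; brighter backscattering pixels yield stronger attenuation.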

Assuming that in a measurement, the ideal target image Iideal and the degraded target image Itar are both captured by the transceiver system, the exponential factor KS can be calculated as:

$$K_S = -\frac{\ln(I_{tar}/I_{ideal})}{2 I_S}. \quad (7)$$

Nevertheless, in a practical imaging system, it is impossible to capture target images with and without the influence of the turbid medium at the same time. Instead, two sub-windows are segmented in the degraded target image: one covering a severely degraded part, and another covering a slightly degraded or even unaffected part, which can be taken to approximately represent the original target. $K_S$ can then be calculated as:

$$K_S = -\frac{\ln(\bar{W}_1/\bar{W}_0)}{2\bar{W}_S}. \quad (8)$$

Here, $W_0$ is the sub-window of the slightly degraded or even unaffected part of the observed target image, $W_1$ is the sub-window of the severely degraded part, and $W_S$ is the sub-window in the backscattering image corresponding to $W_1$; $\bar{W}_0$, $\bar{W}_1$, and $\bar{W}_S$ are the average gray values of the corresponding sub-windows.
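The sub-window calibration of Eq. (8) reduces to three mean gray values. The Python sketch below uses hypothetical uniform sub-windows purely to make the arithmetic easy to follow; real sub-windows would be cropped from the captured images.

```python
import numpy as np

def calibrate_KS(W1, W0, WS):
    """Eq. (8): estimate K_S from average gray values of three sub-windows.

    W1 -- severely degraded sub-window of the observed target image
    W0 -- slightly degraded (near-original) sub-window of the same image
    WS -- sub-window of the backscattering image corresponding to W1
    """
    return -np.log(W1.mean() / W0.mean()) / (2.0 * WS.mean())

# Hypothetical sub-windows (uniform for clarity).
W0 = np.full((8, 8), 200.0)   # nearly unaffected region
W1 = np.full((8, 8), 120.0)   # attenuated region
WS = np.full((8, 8), 150.0)   # backscattering gray values behind W1

K_S = calibrate_KS(W1, W0, WS)
```

By construction, the estimate is self-consistent: attenuating $\bar{W}_0$ by $\exp(-2 K_S \bar{W}_S)$ reproduces $\bar{W}_1$.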

Moreover, when performing a new measurement with this technique, if the medium components remain the same, i.e., the LIDAR ratio $S$ remains unchanged, another round of calibration is unnecessary: the new exponential factor $K_S'$ can be derived from the previously obtained $K_S$. Consequently, the new degradation matrix can be represented as:

$$V'(i,j) = \exp\left[-2\,\frac{Z_S'^2}{Z_S^2}\,\frac{E_S}{E_S'}\,K_S I_S'(i,j)\right], \quad (9)$$

where $E_S'$ is the new pulse energy, $Z_S'$ is the new distance between the turbid medium and the camera, and $I_S'$ is the new backscattering image of the turbid medium. In the LLT system, the distance of the turbid medium from the camera is $Z_S' = c\tau_S'/2$, where $c$ denotes the speed of light and $\tau_S'$ denotes the corresponding signal delay.
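Because $K_S = S Z_S^2/(C E_S)$ and $S$ and $C$ are unchanged, the rescaling in the exponent of Eq. (9) involves only the distance and pulse-energy ratios. A one-line Python helper makes this explicit; the numerical values are hypothetical.

```python
def rescale_KS(K_S, Z_S, E_S, Z_S_new, E_S_new):
    """Rescale a previously calibrated K_S to new measurement conditions.

    Since K_S = S * Z_S**2 / (C * E_S) with the LIDAR ratio S and the
    system constant C unchanged, only the distance and pulse-energy
    ratios enter (the exponent of Eq. (9)).
    """
    return K_S * (Z_S_new / Z_S) ** 2 * (E_S / E_S_new)

# Hypothetical case: same pulse energy, medium moved from 20 m to 25 m.
K_S_new = rescale_KS(0.0043, 20.0, 1.0, 25.0, 1.0)
```

Moving the medium cluster farther away increases the rescaled factor quadratically, while a stronger pulse decreases it.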

2.4 Algorithm for image recovery

Based on the TV-L1 variational model [22,23], the following image recovery algorithm is established [32].

$$\min_u \Sigma\left[|\nabla u| + \gamma\,|u_0 - \tilde{V}u|\right], \quad (10)$$

where $\gamma$ represents the regularization parameter, $u_0$ represents the captured raw target image, $u$ represents the restored target image, and $\tilde{V}$ represents the estimated degradation matrix. The flowchart of the recovery process is presented in Fig. 3.
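As a concrete illustration, the following Python sketch performs plain gradient descent on a smoothed surrogate of the TV-L1 energy in Eq. (10). It is not the efficient solver of [23]; the image, degradation matrix, smoothing parameter, and step sizes are all hypothetical and chosen only to show the recovery behave as intended on a toy example.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary (zero past the last row/column).
    ux = np.zeros_like(u); uy = np.zeros_like(u)
    ux[:, :-1] = u[:, 1:] - u[:, :-1]
    uy[:-1, :] = u[1:, :] - u[:-1, :]
    return ux, uy

def div(px, py):
    # Negative adjoint of grad (standard discrete divergence).
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]
    dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]
    dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def recover(u0, V, gamma=20.0, step=0.1, iters=400, eps=1e-2):
    # Gradient descent on a smoothed surrogate of Eq. (10):
    #   E(u) = sum sqrt(|grad u|^2 + eps^2)
    #        + gamma * sum sqrt((u0 - V*u)^2 + eps^2)
    u = u0.copy()
    for _ in range(iters):
        ux, uy = grad(u)
        norm = np.sqrt(ux**2 + uy**2 + eps**2)
        tv_grad = -div(ux / norm, uy / norm)
        r = u0 - V * u
        data_grad = -gamma * V * r / np.sqrt(r**2 + eps**2)
        u -= step * (tv_grad + data_grad)
    return u

# Hypothetical 8x8 example: ideal target of gray value 100; the right
# half of the scene is attenuated by a factor 0.4, the left half is not.
u_ideal = np.full((8, 8), 100.0)
V = np.ones((8, 8)); V[:, 4:] = 0.4
u0 = V * u_ideal
u_rec = recover(u0, V)
```

On this toy example the data term drives the attenuated half back toward the ideal gray value while the TV term keeps the result piecewise smooth.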

Fig. 3 Flowchart of the restoration method.

3. Experiments and analysis

The LLT experimental setup is shown in Fig. 1, in which the turbid medium was created by a fog/smoke generator and the synchronizing signals for the ICCD camera were provided by a Si photodetector. The laser pulses were generated by a pulsed Nd:YAG laser with a wavelength of 532 nm and a pulse width of 20 ns. The output images of the ICCD camera (an Andor iStar ICCD camera) comprised 340 × 340 pixels. The laser repetition frequency and the frame frequency of the ICCD camera were both 10 Hz, which allowed a 100 ms interval between the capture of the backscattering image and the target image. The signal timings for the longitudinal tomography imaging system are shown in Fig. 1. In the experiments, the gate step $\tau_1$ and the exposure time were both 20 ns. The gate delays corresponding to the backscattering image and the target image are $\tau_i = 2Z_S/c$ and $\tau_j = 2Z_t/c$, respectively, where $Z_S$ (20 m $\leq Z_S \leq$ 27 m) is the distance between the turbid medium and the transceiver system, $Z_t = 30$ m is the target distance, and $c$ denotes the speed of light.
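The round-trip timing above is simple enough to check numerically: a gated layer at distance $Z$ corresponds to a delay $\tau = 2Z/c$, and a gate/exposure duration $\Delta\tau$ to a layer thickness $c\Delta\tau/2$. The helper below uses the experiment's 30 m target distance and 20 ns gate step.

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def gate_delay(distance_m):
    """Round-trip gate delay tau = 2Z/c for a layer at distance Z."""
    return 2.0 * distance_m / C_LIGHT

def gated_distance(delay_s):
    """Distance of the gated layer, Z = c * tau / 2."""
    return C_LIGHT * delay_s / 2.0

# Target at 30 m (as in the experiment) -> delay of about 200 ns;
# a 20 ns gate corresponds to a layer roughly 3 m thick.
tau_t = gate_delay(30.0)
layer_m = gated_distance(20e-9)
```

This matches the setup: the 20 ns gate step slices the 20–27 m medium region into approximately 3 m thick layers.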

3.1 Experimental results

The target image and the backscattering image of the turbid medium clusters captured by the LLT system are shown in Figs. 4 and 5. The establishment of the degradation matrix and the recovery of the target are both depicted in Fig. 4. The sub-windows $W_0$, $W_1$, and $W_S$ are segmented from the observed target image and the backscattering image. According to Eq. (8), the calibration parameter $K_S$ is calculated as 0.0043 for the first group of experimental images. The degradation matrix $\tilde{V}$ is estimated from the calibration parameter $K_S$ and the denoised backscattering image $I_S$; then, through the proposed variational model in Eq. (10), the retrieved target image $U$ is solved from the captured target image $U_0$ and the estimated degradation matrix $\tilde{V}$. For comparison between the observed and retrieved target images, the turbid-medium-free target image $I_{ref}$ serves as the reference image.

Fig. 4 Recovery process of the first experimental image group. $K_S$ is the calibration parameter extracted from the target image and the medium image, while $\tilde{V}$ represents the estimated degradation matrix.

Fig. 5 Recovery process of the second experimental image group.

By considering the turbid-medium-free target image in Fig. 4 as an approximation of the ideal target image, the structural similarity (SSIM) indexes [37] for the degraded area (marked in Fig. 4 by dotted rectangles) of the observed and retrieved target images are listed alongside the corresponding images. The SSIM index is a real number between 0 and 1, and the value 1 is reached only for two identical images; a larger SSIM index implies higher structural similarity. The proposed method improves the SSIM indexes remarkably. The experimental results indicate that the proposed method can effectively eliminate the influence of the inhomogeneous turbid medium and recover the true target image.
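For reference, the SSIM of Wang et al. [37] can be sketched in a few lines. The version below evaluates the index over the whole image in a single window, whereas the standard definition (and presumably the paper's evaluation) averages local sliding windows; it is therefore a simplified illustration, with the standard constants $k_1 = 0.01$, $k_2 = 0.03$.

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image (simplified sketch).

    x, y -- images as float arrays; L -- dynamic range of gray values.
    """
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ref = rng.uniform(0, 255, size=(32, 32))   # hypothetical reference image
```

An image compared with itself scores exactly 1, while a strongly dissimilar image (here, the photographic negative) scores far lower.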

The experimental raw images shown in Fig. 5 were captured with the same fog/smoke generator parameters as those in Fig. 4. Therefore, the calibration parameter $K_S'$ can be derived from the parameter $K_S$ obtained from Fig. 4. For a better illustration of the improvement, the 3D surfaces of the grid targets are presented in Fig. 5, where the height represents the gray scale rather than the target depth. The experimental results illustrate that the proposed method can effectively reveal the true target image and represent the reflectivity distribution through inhomogeneous turbid media.

3.2 Robustness check

Since the degradation matrix is obtained from the backscattering image of the turbid medium together with crucial calibration parameters such as the exponential factor $K_S$, an error in the calibration will affect the robustness of the image recovery method.

The exponential factor calculated from the observed target image together with the turbid-medium-free target image, in accordance with Eq. (7), is $K_{S0} = 0.0046$. This value is more accurate than the one obtained from a single observed target image and is taken as the standard value. The recovery results for various values of $K_S$ are shown in Figs. 6(a)–6(i), in which Fig. 6(e) displays the recovery result with the standard value $K_{S0}$; the relative errors of $K_S$ for the remaining recovery images vary between −40% and +40%. Evidently, as the relative error grows, although the inhomogeneous influence is still removed from the target images, the recovered images degrade in quality and the corresponding SSIM indexes decrease. An excessively large $K_S$ causes the degraded part of the target image to become too bright, while an excessively small $K_S$ fails to eliminate the degradation effects. Moreover, when the relative error of the exponential factor $K_S$ is less than 20% (Figs. 6(c)–6(g)), the corresponding SSIM indexes are greater than 0.8 and the recovery results are acceptable. This indicates that the proposed estimation approach for the degradation matrix is robust to errors in the calibration parameters.

Fig. 6 Recovery results for the exponential factor $K_S$ with various relative errors.

4. Conclusion

A novel and feasible method was put forward to reconstruct a true target image free from the degradations caused by an inhomogeneous turbid medium. The technique utilizes raw images obtained from LLT, particularly those generated by photons backscattered from the turbid medium clusters. The results demonstrate that, based on the backscattering medium images that represent the light beam degradation induced by the turbid medium, the influence of the inhomogeneous turbid medium can be approximately estimated and subsequently eliminated.

This method can remove the interference of the inhomogeneous turbid medium from target images and reveal the true reflectivity distribution of targets behind the turbid medium, which helps reduce the false-recognition rate in target recognition and identification. The proposed method will benefit target acquisition for observations in air and under water, military reconnaissance, and fire rescue. It should be pointed out that the sub-window extraction may require human assistance and certain prior information when the target is highly variable spatially.

Funding

National Natural Science Foundation of China (NSFC) (613192, 61070040, 61108089, 61205087, 61107005).

Acknowledgment

The authors acknowledge Yanyun Ma for lending the measurement equipment for the experiments.

References and links

1. A. Ishimaru, Wave Propagation and Scattering in Random Media (Academic, 1978).

2. E. A. McLean, H. R. Burris, Jr., and M. P. Strand, "Short-pulse range-gated optical imaging in turbid water," Appl. Opt. 34(21), 4343–4351 (1995).

3. S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J. S. Lee, Y. S. Lim, Q. H. Park, and W. Choi, "Imaging deep within a scattering medium using collective accumulation of single-scattered waves," Nat. Photonics 9, 253–258 (2015).

4. A. P. Gibson, J. C. Hebden, and S. R. Arridge, "Recent advances in diffuse optical imaging," Phys. Med. Biol. 50(4), R1–R43 (2005).

5. K. Wu, Q. Cheng, Y. Shi, H. Wang, and G. P. Wang, "Hiding scattering layers for noninvasive imaging of hidden objects," Sci. Rep. 5, 8375 (2015).

6. L. F. Gillespie, "Apparent illumination as a function of range in gated, laser night-viewing systems," J. Opt. Soc. Am. 56(7), 883–887 (1966).

7. G. R. Fournier, D. Bonnier, J. L. Forand, and P. W. Pace, "Range-gated underwater laser imaging system," Opt. Eng. 32(9), 2185–2190 (1993).

8. C. S. Tan, G. Seet, A. Sluzek, and D. M. He, "A novel application of range-gated underwater laser imaging system (ULIS) in near-target turbid medium," Opt. Lasers Eng. 43(9), 995–1009 (2005).

9. J. Busck, "Underwater 3-D optical imaging with a gated viewing laser radar," Opt. Eng. 44(11), 116001 (2005).

10. R. G. Driggers, R. H. Vollmerhausen, N. Devitt, C. Halfort, and K. J. Barnard, "Impact of speckle on laser range-gated shortwave infrared imaging system target identification performance," Opt. Eng. 42(3), 738–746 (2003).

11. R. L. Espinola, E. L. Jacobs, C. E. Halford, R. Vollmerhausen, and D. H. Tofsted, "Modeling the target acquisition performance of active imaging systems," Opt. Express 15(7), 3816–3832 (2007).

12. P. Andersson, "Long-range three dimensional imaging using range-gated laser radar images," Opt. Eng. 45(3), 034301 (2006).

13. M. Laurenzis, F. Christnacher, and D. Monnin, "Long-range three-dimensional active imaging with superresolution depth mapping," Opt. Lett. 32(21), 3146–3148 (2007).

14. M. Laurenzis, F. Christnacher, N. Metzger, E. Bacher, and I. Zielenski, "3D range-gated imaging at infrared wavelengths with super-resolution depth mapping," Proc. SPIE 7298, 729833 (2009).

15. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. (Prentice Hall, 2007).

16. C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Proceedings of IEEE International Conference on Computer Vision (IEEE, 1998), pp. 839–846.

17. A. V. Oppenheim, R. W. Schafer, and T. G. Stockham, "Nonlinear filtering of multiplied and convolved signals," Proc. IEEE 56(8), 1264–1291 (1968).

18. M. Martin-Fernandez, E. Munoz-Moreno, and C. Alberola-Lopez, "A speckle removal filter based on anisotropic Wiener filtering and the Rice distribution," in Proc. IEEE Ultrasonics Symposium (IEEE, 2006), pp. 1694–1697.

19. D. G. Lainiotis, P. Papaparaskeva, and K. Plataniotis, "Nonlinear filtering for LIDAR signal processing," Math. Probl. Eng. 2(5), 367–392 (1996).

20. L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D 60(1-4), 259–268 (1992).

21. A. Chambolle, "An algorithm for total variation minimization and applications," J. Math. Imaging Vis. 20(1/2), 89–97 (2004).

22. T. F. Chan and S. Esedoglu, "Aspects of total variation regularized L1 function approximation," SIAM J. Appl. Math. 65(5), 1817–1837 (2005).

23. J. F. Yang, Y. Zhang, and W. T. Yin, "An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise," SIAM J. Sci. Comput. 31(4), 2842–2865 (2009).

24. Z. M. Jin and X. P. Yang, "A variational model to remove the multiplicative noise in ultrasound images," J. Math. Imaging Vis. 39(1), 62–74 (2011).

25. D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, "Observation of two-photon 'ghost' interference and diffraction," Phys. Rev. Lett. 74(18), 3600–3603 (1995).

26. R. S. Bennink, S. J. Bentley, and R. W. Boyd, "'Two-Photon' coincidence imaging with a classical source," Phys. Rev. Lett. 89(11), 113601 (2002).

27. I. Freund, "Looking through walls and around corners," Physica A 168(1), 49–65 (1990).

28. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, "Controlling waves in space and time for imaging and focusing in complex media," Nat. Photonics 6(5), 283–292 (2012).

29. O. Katz, E. Small, Y. Bromberg, and Y. Silberberg, "Focusing and compression of ultrashort pulses through scattering media," Nat. Photonics 5(6), 372–377 (2011).

30. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, "Non-invasive imaging through opaque scattering layers," Nature 491(7423), 232–234 (2012).

31. O. Katz, P. Heidmann, M. Fink, and S. Gigan, "Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations," Nat. Photonics 8(10), 784–790 (2014).

32. W. J. Yi, W. Hu, P. Wang, and X. J. Li, "Image restoration method for longitudinal laser tomography based on degradation matrix estimation," Appl. Opt. 55(20), 5432–5438 (2016).

33. R. D. Richmond and S. C. Cain, Direct-Detection LADAR Systems (SPIE, 2010).

34. L. C. Andrews and R. L. Phillips, Laser Beam Propagation through Random Media (SPIE, 2005).

35. P. F. McManamon, "Review of ladar: a historic, yet emerging, sensor technology with rich phenomenology," Opt. Eng. 51(6), 060901 (2012).

36. F. G. Fernald, "Analysis of atmospheric lidar observations: some comments," Appl. Opt. 23(5), 652–653 (1984).

37. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600–612 (2004).

Supplementary Material (1)

Visualization 1: MP4 (3090 KB). Schematic of the method.
