
Incoherent nonlinear deconvolution using an iterative algorithm for recovering limited-support images from blurred digital photographs

Open Access

Abstract

Recovering original images from blurred images is a challenging task. We propose a new deconvolution method termed incoherent nonlinear deconvolution using an iterative algorithm (INDIA). Two inputs are introduced into the algorithm: one is a random or engineered point spread function of the scattering system, and the other is a blurred or distorted image of some object produced from this system. The two functions are Fourier transformed, and their phase distributions are processed independently of their magnitude. The algorithm yields the image of the original object with reduced blurring effects. The results of the new method are compared to two linear and two nonlinear algorithms under various types of blurs. The root mean square error and structural similarity between the original and recovered images are chosen as the comparison criteria between the five different algorithms. The simulation and experimental results confirm the superior performance of INDIA compared to the other tested deblurring methods.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Many algorithms for recovering underlying images blurred by a known point spread function (PSF) are based on linear deconvolution [1–4]. The cases of blurring are split into two types: undesired blurring and intended blurring. Examples of undesired blurring are out-of-focus imaging [5], motion-blurred images [6], and blurring because of a scattering medium between the observed object and the imaging system [7]. In the first two examples, the PSF can be computed based on the parameters of the system, whereas in the third case, the PSF is measured and recorded. Intended blurring has been suggested as a way to improve imaging capabilities, for example, from two to three dimensions [8,9] or from low to high imaging resolution [10]. For both types of blurring, the recovery algorithms can be the same. As there are many cases of unintended and intended image blurring with special beams, such as Airy [11], self-rotating beams [12], and dot patterns [13], it might be effective to find a general algorithm that is best for most cases. It should be noted that Airy and self-rotating light beams have been used for three-dimensional imaging as a compromise between achieving a higher signal-to-noise ratio (SNR) and retaining an axial resolution not much worse than that of a random-pattern PSF.

The general system model of a blurred recorded picture is a convolution between the object functions and PSF of the blurring system plus detection noise [3,4]. The optical configuration of an imaging system with intentional blur is shown in Fig. 1. The light from an object is collimated by a refractive lens and incident on an SLM on which different phase masks are displayed, and the modulated light is recorded by an image sensor. This system is used in the present study to create various types of PSFs, but the analysis herein is valid for all the systems that can be described as a convolution between the object functions and random or engineered known PSF plus detection noise. Mathematically, this model is expressed by the relation b = o$\mathrm{\ \star }$p + n, where $\mathrm{\ \star }$ stands for convolution, b is the blurred recorded photograph, o is the original object, p is the PSF of the blurring system, and n is the detection noise. In the Fourier domain, the relation between the four functions becomes B = O·P + N, where B, O, P, and N are the Fourier transforms of b, o, p and n, respectively. For the case of a known PSF, algorithms such as the Lucy-Richardson algorithm (LRA) [1,2] and Wiener deconvolution (WD) [3,4] are commonly practiced, and both are linear deconvolution processes. Recently, Rai et al. proposed a recovery method using nonlinear deconvolution called nonlinear reconstruction (NLR) [14,15]. Anand et al. combined LRA with NLR to create a new recovery algorithm, designated the Lucy-Richardson-Rosen algorithm (LRRA), by performing the cross-correlation of each iteration in a nonlinear mode [16,17]. Both NLR [18] and LRRA [19] were proposed as general algorithms for a wide range of blurs soon after their respective inventions, as their performances were better in comparison to existing deconvolution methods. However, the outputs of both NLR and LRRA are still far from the directly recorded images without blur. 
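As a minimal numerical sketch of this model (our Python/NumPy illustration, not code from the paper; the circular convolution and the optional Poisson detection-noise stage are our assumptions):

```python
import numpy as np

def blur(o, p, rng=None):
    """Simulate a recorded photograph b = o * p + n, where '*' is a
    circular convolution computed through the Fourier domain (B = O.P)
    and n is an optional Poisson detection-noise stage."""
    B = np.fft.fft2(o) * np.fft.fft2(p)       # B = O.P
    b = np.real(np.fft.ifft2(B))              # back to the image plane
    if rng is not None:                       # shot-noise model (assumption)
        b = rng.poisson(np.clip(b, 0, None) * 1e3) / 1e3
    return b
```

With a point object, `blur` returns a shifted copy of the PSF, which is the PSF-recording step used later in the experiments.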
In this study, we take nonlinear deconvolution a step forward by proposing a new algorithm called incoherent nonlinear deconvolution by an iterative algorithm (INDIA). Furthermore, we compare the abovementioned five recovery algorithms, WD, LRA, LRRA, NLR and INDIA, and show that, at least for the tested PSFs and objects, INDIA performs the best in terms of the minimal root mean square error (RMSE) and structural similarity index (SSIM) between the recovered images and the original objects.


Fig. 1. Optical configuration of a typical imaging system with a spatial light modulator (SLM) for intentional blur.


Nonlinear deconvolution is based on one assumption and one principle that follows from the assumption. The basic assumption is that most of the visual information of object o is stored in the phase distribution of O and much less in the magnitude of O [20]. Based on this assumption, the prime principle of nonlinear deconvolution is that recovering the image of the original object is achieved by recovering the phase of O from B = O·P + N, while the magnitude of B is nonlinearly processed. To be accurate, B, the spectrum of the blurred photograph, is multiplied by the conjugated phase of P (the phase of the PSF spectrum). Additionally, the magnitude of the recovered image spectrum is processed to minimize the recovery error and to decrease the influence of noise.

The assumption of phase superiority is supported by several pieces of evidence, such as the experiment of [20], in which the inverse Fourier transform of |Om|exp(iΦn) yields an image more similar to the object on than to om for any m ≠ n, where |Om|exp(iΦm) and |On|exp(iΦn) are the Fourier transforms of the objects om and on, respectively. Additional evidence for the superior role of the phase of O can be demonstrated if the input object is a point located at the coordinates rs = (xs,ys) on the input plane, and the detection noise is ignored for a moment. The recorded photograph in this case is b = δ(r−rs)$\mathrm{\ \star }$p, and its spectrum is B = O·P = A·exp(irs·ρ)·|P|exp(iΦP), where r and ρ are the spatial coordinates of the input and spectral planes, respectively, |P|exp(iΦP) is the Fourier transform of p, A is a constant and δ(·) is the Dirac delta function. The most important term in B for reconstructing the image point is the phase term exp(irs·ρ). Hence, multiplying B by the inverse filter |P|^−1exp(−iΦP) yields the sharpest image of the point. However, an infinite number of other filters of the form |F(ρ)|exp(−iΦP) yield an image of the point that is less sharp than in the case of the inverse filter but still an image of a point. Under the assumption of the prime role of the phase of O, it is clear that the recorded photograph spectrum B should at least be multiplied by the conjugate phase of the PSF spectrum, exp(−iΦP). The magnitude of the filter, |F(ρ)|, can be chosen for fine-tuning the shape of the recovered point when the detection noise is considered. Multiplication with exp(−iΦP) is performed in all five compared algorithms of this study for any object and PSF, in addition to other operations that are unique to each of these different methods. The purpose of multiplying B by exp(−iΦP) is to omit the phase of the PSF spectrum and to remain with the phase of O, which, according to the basic assumption, contains most of the information of the original object o.
The modulation of the spectral magnitude in each technique is unique and discussed in the following.
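The phase-superiority assumption can be probed numerically. The following sketch (ours, with random stand-in images in place of the objects of [20]) pairs the spectral magnitude of one image with the spectral phase of another:

```python
import numpy as np

rng = np.random.default_rng(0)
om = rng.random((64, 64))                      # stand-in for object o_m
on = rng.random((64, 64))                      # stand-in for object o_n
Om, On = np.fft.fft2(om), np.fft.fft2(on)

# |O_m| combined with the phase of O_n
hybrid = np.real(np.fft.ifft2(np.abs(Om) * np.exp(1j * np.angle(On))))

# the hybrid carries the phase of O_n, so it should be closer to o_n
err_n = np.mean((hybrid - on) ** 2)
err_m = np.mean((hybrid - om) ** 2)
```

In line with the assumption, the hybrid image is expected to be closer (in mean square error) to the image that donated its phase than to the one that donated its magnitude.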

Next, we consider the magnitude of the spectrum of the reconstructed image for an arbitrary object and not just for a point object. Neglecting the detection noise again for the current analysis, the optimal recovered image is obtained by deconvolution with the inverse filter FI(ρ). This solution is clear because the product of the blurred spectrum with the inverse filter is

$$\begin{aligned}B\cdot{F_I} = B{|P |^{ - 1}}\textrm{exp}({ - i{\varPhi _P}} )\, &= |O |\textrm{exp}({i{\varPhi _O}} )|P |\textrm{exp}({i{\varPhi _P}} ){|P |^{ - 1}}\textrm{exp}({ - i{\varPhi _P}} )\\&= |O |\textrm{exp}({i{\varPhi _O}} )= O(\boldsymbol{\rho } ).\end{aligned}$$

However, in practical systems with detection noise, the inverse filter usually magnifies the noise in such a way that the recovered image is usually unrecognizable. In WD, the blurred photograph spectrum is multiplied by the following filter: ${F_W}(\boldsymbol{\rho } )= |{P(\boldsymbol{\rho } )} |\textrm{exp}({ - i{\varPhi _P}} )S(\boldsymbol{\rho } )/[{{{|{P(\boldsymbol{\rho } )} |}^2}S(\boldsymbol{\rho } )+ N(\boldsymbol{\rho } )} ]$, where S(ρ) and N(ρ) are the power spectral densities of the original object and the noise, respectively. As expected, WD becomes the inverse filter when there is no noise [N(ρ) = 0]. The other linear deconvolution, LRA, is an iterative algorithm in which the (k + 1)-th recovered image rk+1 is ${r_{k + 1}} = {r_k}\left[ {\frac{b}{{{r_k}\mathrm{\ \star }p}}\otimes p} \right]$, where ‘$\otimes $’ stands for correlation. In this case, the correlation with the system’s PSF, p, in the image domain is equivalent to multiplication with |P|exp(−iΦP) in the spectral domain.
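For concreteness, both linear methods can be sketched as follows; this is our illustration, assuming a scalar noise-to-signal ratio in place of the full S(ρ) and N(ρ) spectra and a flat initial guess for LRA:

```python
import numpy as np

def wiener(b, p, nsr=0.01):
    """Wiener deconvolution with a scalar noise-to-signal ratio nsr,
    a common simplification of S and N in the filter F_W."""
    B, P = np.fft.fft2(b), np.fft.fft2(p)
    F_W = np.conj(P) / (np.abs(P) ** 2 + nsr)   # |P|exp(-i Phi_P)/(|P|^2 + nsr)
    return np.real(np.fft.ifft2(B * F_W))

def lucy_richardson(b, p, iters=10):
    """LRA iteration: r_{k+1} = r_k [ (b / (r_k * p)) correlated with p ]."""
    r = np.full_like(b, b.mean())               # flat initial guess (assumption)
    P = np.fft.fft2(p)
    for _ in range(iters):
        conv = np.real(np.fft.ifft2(np.fft.fft2(r) * P))            # r_k * p
        ratio = b / np.maximum(conv, 1e-12)                         # avoid /0
        corr = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(P)))
        r = r * corr
    return r
```

With a delta-like PSF and no noise, both routines return the object itself, which is a quick sanity check of the implementations.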

NLR [14] is not a linear deconvolution because the superposition principle is no longer valid under the NLR operation. The algorithm operation is described best in the spectral domain. In the recovery process, the multiplication with exp(−iΦP) is the same as in the linear algorithms because of the assumption that most of the object’s information is in the phase of the object’s spectrum exp(iΦO); hence, this phase should be the only phase of the recovered image if the phase of the noise spectrum is ignored. The treatment of the magnitude of B = O·P + N is different from that in WD and LRA. In NLR, the reconstructed image’s spectrum has the form T = |B|^α|P|^β·exp(iΦB)·exp(−iΦP), where α and β are two parameters in the range [−1,1] that optimize some blind cost function. The cost function is blind in the sense that it is not based on knowledge about the shape of the original object at the system’s input. From the expression of T, one can see that the existence domain of α and β includes the linear deconvolution (α = 1), and inside the linear deconvolution, one can find the inverse (β = −1), phase-only (β = 0) and matched (β = 1) filters. However, these and other linear filters do not necessarily yield the best recovered image in the sense of the minimum cost function. In previous studies, the cost functions were entropy [14,15], no-reference structural sharpness [21] and SNR [22].
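A minimal sketch of the NLR search (our implementation; entropy is used as the blind cost function, and the 11 × 11 grid over (α, β) is our choice):

```python
import numpy as np

def nlr(b, p, alphas=np.linspace(-1, 1, 11), betas=np.linspace(-1, 1, 11)):
    """Nonlinear reconstruction: grid search over (alpha, beta) in [-1,1]^2
    minimizing the entropy of the reconstruction (a blind cost function)."""
    B, P = np.fft.fft2(b), np.fft.fft2(p)
    phase = np.exp(1j * (np.angle(B) - np.angle(P)))   # exp(i Phi_B) exp(-i Phi_P)
    best, best_cost = None, np.inf
    for a in alphas:
        for be in betas:
            T = (np.abs(B) ** a) * (np.abs(P) ** be) * phase
            img = np.abs(np.fft.ifft2(T))
            pr = img / img.sum()                       # treat as a distribution
            cost = -np.sum(pr * np.log(pr + 1e-12))    # entropy as blind cost
            if cost < best_cost:
                best, best_cost = img, cost
    return best
```

For a point object and a delta-like PSF, every (α, β) pair yields a sharp point at the correct location, so the search trivially succeeds; for real PSFs the entropy landscape selects a nontrivial pair.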

The iterative LRA and NLR are combined in LRRA by applying the nonlinear correlation for every iteration, such that the (k + 1)-th recovered image rk + 1 is ${r_{k + 1}} = {r_k}\left[ {\frac{b}{{{r_k}\mathrm{\ \star }p}}\otimes_\beta^\alpha p} \right],$ where $\otimes _\beta ^\alpha $ indicates nonlinear correlation in the sense that

$$f(\boldsymbol{r} )\otimes _\beta ^\alpha g(\boldsymbol{r} )= {\mathrm{{\cal F}}^{ - 1}}\{{{{|F |}^\alpha }{{|G |}^\beta }\textrm{exp}({i{\varPhi _F}} )\cdot\textrm{exp}({ - i{\varPhi _G}} )} \},$$
and for every two functions $f(r )\; \textrm{and}\,g(r )$, their Fourier transforms are |F|$\textrm{exp}({i{\varPhi _F}} )$ and |G|$\textrm{exp}({i{\varPhi _G}} )$, respectively. To run LRRA in a reasonable time, the search for the parameters α and β is performed only once, in the first iteration of the algorithm.
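The nonlinear correlation of Eq. (2) and the resulting LRRA update can be sketched as follows (our illustration; the search for (α, β) is omitted and the values are passed in directly):

```python
import numpy as np

def nl_corr(f, g, alpha, beta):
    """Eq. (2): F^{-1}{ |F|^a |G|^b exp(i Phi_F) exp(-i Phi_G) }."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    return np.real(np.fft.ifft2(
        (np.abs(F) ** alpha) * (np.abs(G) ** beta)
        * np.exp(1j * np.angle(F)) * np.exp(-1j * np.angle(G))))

def lrra(b, p, alpha, beta, iters=5):
    """LRRA: the LRA update with the correlation replaced by nl_corr."""
    r = np.full_like(b, b.mean())                # flat initial guess (assumption)
    P = np.fft.fft2(p)
    for _ in range(iters):
        conv = np.real(np.fft.ifft2(np.fft.fft2(r) * P))   # r_k * p
        r = r * nl_corr(b / np.maximum(conv, 1e-12), p, alpha, beta)
    return r
```

Note that with (α, β) = (1, 1), `nl_corr` reduces to the ordinary linear correlation, so LRRA reduces to LRA, which gives an easy consistency check.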

Both NLR and LRRA are methods that search for two optimal variables that dictate the shape of the magnitude of the reconstructed image’s spectrum T to minimize some chosen blind cost function. However, the algorithm is limited to searching for only two variables, whereas the magnitude of the matrix T has N × M variables, where N and M are the numbers of rows and columns of T, respectively. Hence, the search for two optimal variables is much easier but might limit the solutions to a suboptimal case. To overcome this limitation of NLR, another type of nonlinear deconvolution, termed INDIA, is proposed here for the first time, to the best of our knowledge. When one looks for a relatively fast optimization algorithm with N × M variables (assuming N, M ≥ 256), the appropriate algorithm can be a modified version of the Gerchberg-Saxton algorithm (GSA) [23]. Indeed, INDIA is a modified version of GSA that runs on the entire spectral magnitude matrix in parallel and usually converges to a solution that satisfies two constraints in two different domains, where at least one constraint is fully satisfied and usually the other is approximately satisfied. The limitation of every GSA, including INDIA, is that not every desired cost function can be defined in the algorithm. As we see in the following, we define a constraint in the image plane that approximates some cost function but not necessarily the optimal one.

2. Methodology

The INDIA scheme is shown in Fig. 2. Like the other methods, INDIA starts from the blurred or distorted image whose spectrum is given by B = O·P + N, and as in the other methods, B is multiplied by exp(−iΦP), where ΦP is the phase of the spectrum of the known PSF. Only the phase distribution of the product T = B·exp(−iΦP) = |T|·exp(iΦT) is fed into the process, whereas |T| is ignored. The goal of the algorithm is to find the best new magnitude |T|k at the k-th iteration that can yield, after an inverse Fourier transform of |T|k·exp(iΦT), the closest image to the original object. Note that there is a new type of nonlinearity in INDIA, different from the one defined in Eq. (2). The new nonlinear operation is defined by the relation

$$f(\boldsymbol{r} ){\mathrm{\circledast }_{|H |}}g(\boldsymbol{r} )= {\mathrm{{\cal F}}^{ - 1}}\{{|H |\cdot\exp({i{\varPhi _F}} )\cdot\exp({ - i{\varPhi _G}} )} \},$$
where ${\mathrm{\circledast }_{|H |}}$ indicates the new type of nonlinear correlation, |H| is a magnitude distribution normalized in the range [0,1], and the values of |H| are determined by the iterative algorithm. This nonlinearity is a new stage in the evolutionary path from the nonlinear correlator proposed by Ersoy et al. [24], in which the parameters of Eq. (2) are (α,β)≡(0,0). The next evolutionary stage is expressed by Eq. (2), and the current stage is expressed by Eq. (3).


Fig. 2. Scheme of INDIA, where C, Φ, and M are complex, phase and magnitude values, respectively, NLR is nonlinear reconstruction, WD is Wiener deconvolution, LRA is the Lucy-Richardson algorithm, LRRA is the Lucy-Richardson-Rosen algorithm, $\mathrm{{\cal F}}$ and ${\mathrm{{\cal F}}^{ - 1}}$ are Fourier and inverse Fourier transforms, respectively, × is the multiplication sign, IT is the threshold of the iteration number m and exp(−iΦP) is the conjugated phase of the PSF spectrum.


In the INDIA scheme, the phase function of T, exp(iΦT), is the constraint on the spectral side, such that every matrix Tk in iteration k has the same phase function, exp(iΦT) = exp(iΦB)·exp(−iΦP), and only the magnitude |T|k can change during the various iterations. INDIA starts by multiplying the constrained phase exp(iΦT) by an initial magnitude, which we choose to be the spectral magnitude obtained by one of the other four algorithms. Among the four options, the chosen magnitude is the one that recovers the image closest to the original image. This choice guarantees that the INDIA solution cannot be worse than any output of the other algorithms. The product of phase and magnitude, |T|k·exp(iΦT), is inversely Fourier transformed to the recovered image plane. The chosen constraint in the image plane is the limited support of the original object [25,26]. In other words, the values of the image matrix outside the limiting area are set to zero. This constraint can be applied under the assumption that the size of the image support of the original object is known, although the object shape is unknown. The phase distribution on the image plane remains the same, and the product of the phase and the constrained magnitude is Fourier transformed to the spectral domain. In the spectral domain, the new magnitude remains untouched, whereas the phase function is constrained back to exp(iΦT). This process is iterated for a predefined number of iterations, or alternatively, it can be halted when the average level outside the limited support in the image domain is no longer significantly reduced.
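The loop described above can be sketched as a modified GSA (a minimal illustration under our naming; the fixed iteration count stands in for the stopping criteria mentioned in the text):

```python
import numpy as np

def india(b, p, support, T0_mag, iters=50):
    """GSA-style loop: the spectral phase is fixed to exp(i(Phi_B - Phi_P)),
    and image-plane values outside the known support are zeroed.

    support : boolean mask of the a priori object support
    T0_mag  : initial spectral magnitude (e.g., from the best of the
              WD/LRA/NLR/LRRA outputs, as in the paper)
    """
    phase_T = np.exp(1j * (np.angle(np.fft.fft2(b)) - np.angle(np.fft.fft2(p))))
    mag = T0_mag.copy()
    for _ in range(iters):
        img = np.fft.ifft2(mag * phase_T)      # to the image plane
        img = np.where(support, img, 0)        # enforce limited support
        mag = np.abs(np.fft.fft2(img))         # keep new magnitude, reset phase
    return np.abs(np.fft.ifft2(mag * phase_T))
```

If the object already lies inside the support and the PSF is a delta, the loop leaves the solution unchanged, i.e., the object itself is a fixed point of the iteration.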

To make INDIA comparable to an optimization process such as NLR, we should define a cost function that INDIA can minimize. Fortunately, the constraint of the limited support can be interpreted as an operation in which one looks for the magnitude |T|k that minimizes the value of the mean background noise [27], where the background is the entire area outside the a priori known object support. In comparison to other nonlinear algorithms, we conclude that INDIA performs an extensive search over N × M variables of magnitude |T|k that minimize a limited cost function, whereas NLR and LRRA search over only two variables that can minimize an unlimited number of cost functions. In other words, in NLR and LRRA, any cost function can be defined, but the search is limited to two variables. On the other hand, INDIA searches in parallel over all variables that represent the spectral magnitude of the recovered matrix, but the number of possible cost functions is limited. However, once there is a final solution, all the solutions of the various algorithms are compared with the same figure of merit, and regardless of the specific process, the solution with the best performance should be preferred. Next, we compare the five abovementioned algorithms under different PSFs. The RMSE and SSIM [28] are chosen as the measures to evaluate the algorithms’ performance in simulations and experiments.

3. Simulations

A simulation study was carried out using test objects consisting of a binary target of the logo of Ben Gurion University and a grayscale image of blood cells. A matrix size of 500 × 500 pixels, λ = 650 nm, and a pixel size of Δ = 10 µm were used, and different phase masks were tested, such as a cubic phase mask with a phase function $\exp [{ - i({2\pi /\lambda } )({a{x^3} + b{y^3}} )} ]$, where a = b ∼ 1000 [19], a quasi-random lens with a scattering ratio of 0.1 obtained using the Gerchberg-Saxton algorithm [8], a diffractive lens $\exp [{ - i\pi {{(\lambda f)}^{ - 1}}{r^2}} ],$ a diffractive lens with azimuthally varying focal length (DL-AVF) $\textrm{exp}[{ - i2{\pi^2}{r^2}/({2\pi \lambda {f_0} + \mathrm{\Delta }f\lambda \theta } )} ]$ [12,19], a spiral lens $\exp [{ - i2\pi {\Lambda ^{ - 1}}r} ]\cdot \exp [{iL\theta } ],$ and a diffractive axicon $\exp [{ - i2\pi {\Lambda ^{ - 1}}r} ]$, where $r = {({{x^2} + {y^2}} )^{1/2}}$, Λ is the period of the axicon, f is the focal length of the diffractive lens and the spiral lens, f0 is the minimum and f0 + Δf is the maximum focal length of the DL-AVF, and θ is the azimuthal angle. The PSF was simulated and convolved with the test object to obtain the blurred image pattern. Random Poissonian noise was added to both the PSF and blurred image distributions using MATLAB’s built-in function ‘imnoise’ applied to the normalized matrices. The images of the phase masks, PSF, blurred image distribution and recovered images using NLR, LRA, WD, LRRA and INDIA are shown in Fig. 3(a). For WD, the MATLAB built-in function ‘deconvwnr’ was used, and the noise-to-signal power ratio [denoted by σ in Fig. 3(a)] was tuned to minimize the RMSE. The MATLAB codes for NLR, LRA and LRRA are available in [19]. The initial guess of INDIA can be the case with the lowest RMSE out of the outputs of NLR, LRA, WD and LRRA.
In the simulation study, LRRA exhibited the lowest RMSE; therefore, the spectral magnitude of LRRA was used as the initial guess magnitude of INDIA for all cases. The RMSEs calculated for the above cases are plotted in Fig. 3(b). Figure 4 is identical to Fig. 3, but the binary target is replaced by the grayscale image. The RMSE of INDIA was found to be better than that of all the other methods for both targets.
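The RMSE criterion can be reproduced with a short metric; normalizing both images to unit peak before comparison is our assumption, since the exact normalization is not specified:

```python
import numpy as np

def rmse(recovered, original):
    """Root mean square error between peak-normalized images."""
    a = recovered / np.max(recovered)
    o = original / np.max(original)
    return float(np.sqrt(np.mean((a - o) ** 2)))
```

Because of the normalization, the metric is insensitive to a global intensity scale between the recovered and original images.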


Fig. 3. (a) Phase masks, simulated PSF and blurred image for Airy, scattered, defocused, self-rotating, vortex and Bessel patterns and reconstructions by NLR, LRA, WD, LRRA and INDIA for a binary target. (b) Bar plot of reconstruction RMSE by NLR, LRA, WD, LRRA and INDIA.



Fig. 4. (a) Phase masks, simulated PSF and blurred image for Airy, scattered, defocused, self-rotating, vortex and Bessel patterns and reconstructions by NLR, LRA, WD, LRRA and INDIA for a grayscale target. (b) Bar plot of reconstruction RMSE by NLR, LRA, WD, LRRA and INDIA.


4. Experiments

An experimental setup was built with the following components: a high-power LED (Thorlabs, 940 mW, λ = 660 nm and Δλ = 20 nm), a spatial light modulator (SLM) (Thorlabs Exulus HD2, 1920 × 1200 pixels, pixel size = 8 µm) and an image sensor (Zelux CS165MU/M 1.6 MP monochrome CMOS camera, 1440 × 1080 pixels with pixel size ∼3.5 µm). A photograph of the experimental setup is shown in Fig. 5. Light from the LED (element 1) was collimated by a refractive lens (element 2) and polarized by a polarizer (element 3). The collimated light was focused by a refractive lens (element 4) on the object (element 5). The light from the object enters a refractive lens (element 6) and then the beam splitter (element 9) through an iris (element 7) and is normally incident on the SLM (element 10). On the SLM, different phase masks were displayed one after the other, and the intensity distributions were recorded by the image sensor (element 8). The object distance zs between elements 5 and 6 and the recording distance zh between elements 8 and 10 are 5 cm and 18 cm, respectively. The distance d between element 6 and element 10 is approximately 18 cm. In the first step, the PSF was recorded using a pinhole with a diameter of 25 µm. In the next step, a USAF object (Group – 3, Element – 3) was mounted at exactly the same location as the pinhole, and the blurred image was recorded. The images of the recorded PSFs and blurred images for different types of phase masks are shown in Fig. 6. The PSFs and blurred image patterns were processed using NLR, LRA, WD, LRRA and INDIA. Since all the recovery filters in this study are digital, all the parameters of the various filters in the experiments were optimized by the digital reconstruction programs in the same way as in the simulations. For the simulation study, the RMSE was used as the criterion for comparison, since the SSIM values of the different methods were very close to one another. In the experiments, both the SSIM and the RMSE were used as the criteria for comparison.
In the experiment before running INDIA, in some cases, LRRA generated the images with the highest SSIM, and in some cases, WD. Accordingly, the spectral magnitudes of LRRA and WD were used as the initial guesses for INDIA. The results of reconstruction and the SSIM and RMSE bar plots are shown in Figs. 6(a), 6(b) and 6(c), respectively.


Fig. 5. Photograph of the experimental setup. 1 – LED, 2 – collimating lens, 3 – polarizer, 4 – refractive lens for critical illumination, 5 – object, 6 – refractive lens, 7 – iris, 8 – image sensor, 9 – beamsplitter and 10 – SLM. The object distance zs between element 5 and element 6 is 5 cm. The recording distance zh between element 8 and element 10 is 18 cm. The distance between element 6 and element 10 is d ∼ 18 cm. The stage with the micrometer dials holds the object (5).



Fig. 6. (a) Phase masks, experimental PSF and blurred image for Airy, scattered, defocused, self-rotating, vortex and Bessel patterns and reconstructions by NLR, LRA, WD, LRRA and INDIA. (b) Bar plot of SSIM of reconstructions by NLR, LRA, WD, LRRA and INDIA. (c) Bar plot of RMSE of reconstructions by NLR, LRA, WD, LRRA and INDIA.


In the next experiment, a stronger noise level (Gaussian noise with zero mean and 0.01 variance) was added to both the PSF and blurred images in addition to the experimental detector noise. Another test object (Group – 3, Elements – 3 and 4) was used for this study. The images of the PSF, blurred image, and reconstruction results of NLR, LRA, WD, LRRA and INDIA are shown in Fig. 7(a). The plots of the SSIM and RMSE for the above cases are shown in Figs. 7(b) and 7(c), respectively. Note that although visual inspection reveals that the performance of INDIA is better than that of the other methods, the figures of merit used do not always match the perceptions of visual inspection. Another interesting observation is that the two figures of merit, RMSE and SSIM, do not match, i.e., the maxima of SSIM do not always overlap with the minima of RMSE. A better figure of merit is needed to match the outcome of visual inspection with the optimal value of the figure of merit. Based on the observations of Fig. 7, we conclude that the stronger the blur (i.e., the less recognizable the original object), the clearer the superiority of INDIA over the other compared methods. For a 500 × 500 pixel matrix, on a computer with a 1.8 GHz processor, 16 GB RAM, and a 64-bit operating system, the computation times for NLR, LRA (2 iterations), WD, LRRA (2 iterations) and INDIA (2 iterations) are 44 ms, 181 ms, 18 ms, 188 ms and 204 ms, respectively. These times exclude image reading and loading times. There are two types of results shown in Figs. 6 and 7, optimized for the maxima of SSIM and the minima of RMSE, respectively. Therefore, in Fig. 6, INDIA demonstrates the highest SSIM values, and in Fig. 7, INDIA demonstrates the lowest RMSE values.


Fig. 7. (a) Experimental PSF and blurred image for Airy, scattered, defocused, self-rotating, vortex and Bessel patterns and reconstructions by NLR, LRA, WD, LRRA and INDIA. (b) Bar plot of SSIM of reconstructions by NLR, LRA, WD, LRRA and INDIA. (c) Bar plot of RMSE of reconstructions by NLR, LRA, WD, LRRA and INDIA.


5. Summary and conclusions

A novel computational reconstruction method, INDIA, based on the concepts of nonlinear deconvolution, has been developed for deblurring images degraded by different types of blur. The method is based on the fundamental observation that more useful object information is present in the phase of the object’s spectrum than in its magnitude. Therefore, by retaining the phase information and tuning the magnitude of the recovered spectrum, an optimal reconstruction can be obtained. Both simulation and experimental studies were carried out. The performance of INDIA in comparison to existing and widely used methods such as NLR, LRA, LRRA and WD has been investigated for a wide range of blurs. Of the four existing methods, NLR, LRA, WD and LRRA, LRRA has relatively high resilience to noise without requiring a noise estimate as input. When a noise estimate is supplied, WD is better than LRRA in some cases according to the figures of merit. In all cases, INDIA was found to reconstruct images with the best performance.

The newly proposed algorithm is compared to two basic linear deconvolution methods and two more advanced nonlinear methods. There are many other deconvolution methods, such as LRA with total variation regularization [29], regularized inversion with block-matching and 3D filtering denoising filters [30], and deep learning-based deconvolution [31]. It is beyond the scope of this study to compare INDIA with each of these methods, but we should mention that the methods of [29–31] are effective for weak blurring, in which the system PSF is an inverse Fourier transform of a type of lowpass filter. On the other hand, the blurred patterns in this study range from weakly (the defocused pattern) to heavily blurred (the scattered pattern), such that the original image cannot be recognized. Another advantage of INDIA compared to some of the above-mentioned methods is that INDIA has no parameters that must be tuned before every run of the algorithm.

Our main conclusion from this study is that manipulating the spectral magnitude of the recovered image can reduce the error between the recovered and original images. However, there are several practical obstacles to implementing the needed spectral magnitude manipulation. First, the original image is a priori unknown, and hence, any type of error function between the recovered and original images cannot be defined as a cost function for an optimization process. If a blind cost function, meaning blind to any specific image, is defined, the number of possible variables, which is the number of elements in the spectral magnitude matrix, makes the minimization of the cost function a computationally exhausting process. Our solution to these problems is INDIA, which operates in parallel over all the matrix elements relatively quickly but minimizes a limited cost function. Although the present results of INDIA are better than those of the four other algorithms, we think that efforts should be directed toward finding more efficient blind cost functions and faster optimization algorithms. On the question of whether INDIA can be applied with coherent illumination, we note that since the linearity of coherent imaging systems is between the output and input complex amplitudes [32], INDIA under coherent light should be modified accordingly. Our lab has already investigated this topic, and we hope to publish the results soon. We believe that the developed method will contribute to ongoing research on coded aperture imaging, imaging through scattering media, holography, computer vision and image processing.

Funding

Israel Innovation Authority (79555, MAGNET); Horizon 2020 Framework Programme (857627, CIPHR).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

The program code is given as supplementary material in Code 1 [33].

References

1. W. Richardson, “Bayesian-Based Iterative Method of Image Restoration,” J. Opt. Soc. Am. 62(1), 55–59 (1972).

2. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).

3. A. Dhawan, R. Rangayyan, and R. Gordon, “Image restoration by Wiener deconvolution in limited-view computed tomography,” Appl. Opt. 24(23), 4013–4020 (1985).

4. R. Gonzalez, R. Woods, and S. Eddins, Digital Image Processing Using MATLAB (Prentice Hall, 2003).

5. Y.-Q. Liu, X. Du, H.-L. Shen, et al., “Estimating Generalized Gaussian Blur Kernels for Out-of-Focus Image Deblurring,” IEEE Trans. Circuits Syst. Video Technol. 31(3), 829–843 (2021).

6. Y. Yitzhaky, I. Mor, A. Lantzman, et al., “Direct method for restoration of motion-blurred images,” J. Opt. Soc. Am. A 15(6), 1512–1519 (1998).

7. R. Wang and G. Wang, “Single image recovery in scattering medium by propagating deconvolution,” Opt. Express 22(7), 8114–8119 (2014).

8. A. Vijayakumar and J. Rosen, “Interferenceless coded aperture correlation holography—a new technique for recording incoherent digital holograms without two-wave interference,” Opt. Express 25(12), 13883–13896 (2017).

9. N. Antipa, G. Kuo, R. Heckel, et al., “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018).

10. M. R. Rai, A. Vijayakumar, Y. Ogura, et al., “Resolution enhancement in nonlinear interferenceless COACH with point response of subdiffraction limit patterns,” Opt. Express 27(2), 391–403 (2019).

11. R. Kumar, V. Anand, and J. Rosen, “3D single shot lensless incoherent optical imaging using coded phase aperture system with point response of scattered airy beams,” Sci. Rep. 13(1), 2996 (2023).

12. A. Bleahu, S. Gopinath, T. Kahro, et al., “3D incoherent imaging using an ensemble of sparse self-rotating beams,” Opt. Express 31(16), 26120–26134 (2023).

13. M. R. Rai and J. Rosen, “Noise suppression by controlling the sparsity of the point spread function in interferenceless coded aperture correlation holography (I-COACH),” Opt. Express 27(17), 24311–24323 (2019).

14. M. R. Rai, A. Vijayakumar, and J. Rosen, “Non-linear adaptive three-dimensional imaging with interferenceless coded aperture correlation holography (I-COACH),” Opt. Express 26(14), 18143–18154 (2018).

15. S. Mukherjee and J. Rosen, “Imaging through scattering medium by adaptive non-linear digital processing,” Sci. Rep. 8(1), 10517 (2018).

16. V. Anand, M. Han, J. Maksimovic, et al., “Single-shot mid-infrared incoherent holography using Lucy-Richardson-Rosen algorithm,” Opto-Electronic Science 1(3), 210006 (2022).

17. P. A. Praveen, F. G. Arockiaraj, S. Gopinath, et al., “Deep Deconvolution of Object Information Modulated by a Refractive Lens Using Lucy-Richardson-Rosen Algorithm,” Photonics 9(9), 625 (2022).

18. D. Smith, S. Gopinath, F. G. Arockiaraj, et al., “Nonlinear reconstruction of images from patterns generated by deterministic or random optical masks—concepts and review of research,” J. Imaging 8(6), 174 (2022).

19. A. P. I. Xavier, F. G. Arockiaraj, S. Gopinath, et al., “Single Shot 3D Incoherent Imaging Using Deterministic and Random Optical Fields with Lucy-Richardson-Rosen Algorithm,” Photonics 10, 987 (2023).

20. A. V. Oppenheim and J. S. Lim, “The importance of phase in signals,” Proc. IEEE 69(5), 529–541 (1981).

21. C. Liu, T. Man, and Y. Wan, “Optimized reconstruction with noise suppression for interferenceless coded aperture correlation holography,” Appl. Opt. 59(6), 1769–1774 (2020).

22. J. P. Desai and J. Rosen, “Resolution-enhanced imaging by interferenceless coded aperture correlation holography with two competing methods of phase mask synthesis,” Submitted (2023).

23. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

24. O. Ersoy and M. Zeng, “Nonlinear matched filtering,” J. Opt. Soc. Am. A 6(5), 636–648 (1989).

25. J.-L. Starck, A. Bijaoui, B. Lopez, et al., “Image reconstruction by the wavelet transform applied to aperture synthesis,” Astron. Astrophys. 283, 349–360 (1994).

26. N. Blakeley, P. Bones, R. Millane, et al., “Efficient frequency-domain sample selection for recovering limited-support images,” J. Opt. Soc. Am. A 20(1), 67–77 (2003).

27. Y. Noda, S. Goshima, H. Koyasu, et al., “Renovascular CT: comparison between adaptive statistical iterative reconstruction and model-based iterative reconstruction,” Clin. Radiol. 72(901), e13–e19 (2017).

28. S. S. Channappayya, A. C. Bovik, C. Caramanis, et al., “Design of Linear Equalizers Optimized for the Structural Similarity Index,” IEEE Trans. Image Process. 17(6), 857–872 (2008).

29. N. Dey, L. Blanc-Feraud, C. Zimmer, et al., “Richardson–Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microsc. Res. Tech. 69(4), 260–266 (2006).

30. K. Dabov, A. Foi, V. Katkovnik, et al., “Image restoration by sparse 3D transform-domain collaborative filtering,” Proc. SPIE 6812, 681207 (2008).

31. L. Xu, J. S. Ren, C. Liu, et al., “Deep convolutional neural network for image deconvolution,” Adv. Neural Inf. Process. Syst. 27, 1790–1798 (2014).

32. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

33. J. Rosen and V. Anand, “The MATLAB code of the INDIA algorithm,” figshare (2020), https://doi.org/10.6084/m9.figshare.24434911.

Supplementary Material (1)

Code 1: The MATLAB code of the INDIA algorithm.

Figures (7)

Fig. 1. Optical configuration of a typical imaging system with a spatial light modulator (SLM) for intentional blur.

Fig. 2. Scheme of INDIA, where C, Φ, and M are complex, phase, and magnitude values, respectively, NLR is nonlinear reconstruction, WD is Wiener deconvolution, LRA is the Lucy-Richardson algorithm, LRRA is the Lucy-Richardson-Rosen algorithm, $\mathrm{{\cal F}}$ and ${\mathrm{{\cal F}}^{-1}}$ are Fourier and inverse Fourier transforms, respectively, × is the multiplication sign, IT is the threshold of the iteration number m, and exp(P) is the conjugated phase of the PSF spectrum.

Fig. 3. (a) Phase masks, simulated PSF and blurred image for Airy, scattered, defocused, self-rotating, vortex and Bessel patterns and reconstructions by NLR, LRA, WD, LRRA and INDIA for a binary target. (b) Bar plot of reconstruction RMSE by NLR, LRA, WD, LRRA and INDIA.

Fig. 4. (a) Phase masks, simulated PSF and blurred image for Airy, scattered, defocused, self-rotating, vortex and Bessel patterns and reconstructions by NLR, LRA, WD, LRRA and INDIA for a grayscale target. (b) Bar plot of reconstruction RMSE by NLR, LRA, WD, LRRA and INDIA.

Fig. 5. Photograph of the experimental setup. 1 – LED, 2 – collimating lens, 3 – polarizer, 4 – refractive lens for critical illumination, 5 – object, 6 – refractive lens, 7 – iris, 8 – image sensor, 9 – beamsplitter and 10 – SLM. The object distance zs between element 5 and element 6 is 5 cm. The recording distance zh between element 8 and element 10 is 18 cm. The distance between element 6 and element 10 is d ∼ 18 cm. The stage with the micrometer dials holds the object (5).

Fig. 6. (a) Phase masks, experimental PSF and blurred image for Airy, scattered, defocused, self-rotating, vortex and Bessel patterns and reconstructions by NLR, LRA, WD, LRRA and INDIA. (b) Bar plot of SSIM of reconstructions by NLR, LRA, WD, LRRA and INDIA. (c) Bar plot of RMSE of reconstructions by NLR, LRA, WD, LRRA and INDIA.

Fig. 7. (a) Experimental PSF and blurred image for Airy, scattered, defocused, self-rotating, vortex and Bessel patterns and reconstructions by NLR, LRA, WD, LRRA and INDIA. (b) Bar plot of SSIM of reconstructions by NLR, LRA, WD, LRRA and INDIA. (c) Bar plot of RMSE of reconstructions by NLR, LRA, WD, LRRA and INDIA.

Equations (3)


$$B \cdot F_I = B\,|P|^{-1}\exp(-i\Phi_P) = |O|\exp(i\Phi_O)\,|P|\exp(i\Phi_P)\cdot|P|^{-1}\exp(-i\Phi_P) = |O|\exp(i\Phi_O) = O(\rho).$$
$$f(r) \otimes_{\alpha,\beta} g(r) = \mathcal{F}^{-1}\left\{ |F|^{\alpha}|G|^{\beta}\exp(i\Phi_F)\exp(-i\Phi_G) \right\},$$
$$f(r) \otimes_{|H|} g(r) = \mathcal{F}^{-1}\left\{ |H|\exp(i\Phi_F)\exp(-i\Phi_G) \right\},$$
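The nonlinear reconstruction (NLR) operator of the second equation above, which raises the spectral magnitudes of the two inputs to tunable powers α and β while correlating their phases, can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' Code 1, and the default α, β values shown are ours.

```python
import numpy as np

def nonlinear_reconstruction(blurred, psf, alpha=0.0, beta=0.6):
    """Cross-correlate blurred image and PSF with spectral magnitudes
    raised to the tunable powers alpha and beta."""
    F = np.fft.fft2(blurred)   # spectrum of the blurred image
    G = np.fft.fft2(psf)       # spectrum of the PSF
    spectrum = (np.abs(F) ** alpha) * (np.abs(G) ** beta) \
               * np.exp(1j * (np.angle(F) - np.angle(G)))
    return np.abs(np.fft.ifft2(spectrum))
```

Note the special cases: α = 1, β = −1 reduces to the inverse filter, while α = 1, β = 0 keeps the image magnitude and cancels only the PSF phase, which is the phase-only matched-filter limit.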