
Reducing the risk of laser damage in a focal plane array using linear pupil-plane phase elements


Abstract

A compact imaging system with reduced risk of damage owing to intense laser radiation is presented. We find that a pupil phase element may reduce the peak image plane irradiance from an undesirable laser source by two orders of magnitude, thereby protecting the detector from damage. The desired scene is reconstructed in postprocessing. The general image quality equation (GIQE) [Appl. Opt. 36, 8322 (1997)] is used to estimate the interpretability of the resulting images. A localized loss of information caused by laser light is also described. This system may be advantageous over other radiation protection approaches because accurate pointing and nonlinear materials are not required.

© 2015 Optical Society of America

1. Introduction

Intense laser radiation poses an effective threat to an imaging system. High power lasers are widely available, affordable, and essentially unregulated. Such sources may be used to disrupt imaging systems by excessively illuminating the detector, even from large distances [1]. In scenarios where the threat of unwanted laser radiation is high, it is desirable to passively protect the detector from extreme irradiance levels.

If the wavelength of the laser or the state of a highly polarized beam is known, common optical elements such as notch filters and polarizers may be used to reject the laser light. However, a more desirable solution would be robust to many wavelengths, bandwidths, and polarization states. Another approach is to optically remove the light from a source based on its location [2]. Coronagraphs, for example, are commonly used in astronomy to image faint targets, such as exoplanets, in close proximity to extremely bright sources such as their parent stars [3]. Although coronagraphs achieve remarkable suppression levels [4], prior knowledge of the source location and accurate pointing are required.

Over the past few decades, a large research effort has sought nonlinear optical limiting materials to mitigate the threat of laser damage [5–8]. The goal is to fabricate an optical element with intensity dependent absorption to block intense laser radiation, while allowing for high quality imaging when a laser source is not present. Nonlinear filters are often focal plane elements that are limited by a high irradiance turn-on threshold, operate over a narrow bandwidth, or become permanently opaque after a hostile exposure. To our knowledge, a reusable material that allows for white light imaging and a few orders of magnitude of laser suppression over a large bandwidth has yet to be discovered [9,10].

The approach presented here protects a focal plane array in an imaging system from the damaging effects of intense laser radiation without prior knowledge of the laser source location, brightness, wavelength, or polarization and without the use of nonlinear optical elements. Rather, the risk of laser damage is reduced by modifying the point spread function (PSF) of the optical system with a linear phase element such that the peak irradiance in the image plane is reduced. The signal from the background scene is maintained and an image is recovered by computer processing. We calculate the peak irradiance reduction for a number of potential pupil-plane phase elements and assess the quality of the recovered background scene in terms of the National Imagery Interpretability Rating Scale (NIIRS) [11–15]. Additionally, we numerically demonstrate the capability of the system to protect a detector against a powerful laser source and exemplify the strengths and limitations of using pupil phase elements for peak irradiance suppression.

2. Optical System

The scenario to be considered is illustrated in Fig. 1. The unwanted laser source is a bright, spatially coherent, monochromatic, quasi-point source described by the complex field U(x_0, y_0). The scene, excluding the laser, is a relatively dim, spatially incoherent background containing targets of interest described in terms of reflected spectral irradiance b(x_0, y_0, λ_b), where λ_b is the wavelength. Monochromatic illumination is assumed for simplicity. A single lens system is used to form an image of the (x_0, y_0) plane at the (x, y) plane. The laser source is distant enough that the beam width becomes large with respect to the aperture of the system and evenly illuminates the lens; that is, z ≫ z_R, where z is the distance from the source to the pupil plane, z_R is the Rayleigh distance given by z_R = πw_0²/λ_L, w_0 is the beam waist, and λ_L is the wavelength of the laser light. Additionally, w_0 is small with respect to the targets of interest in the scene. The peak image plane irradiance owing to a laser source, I_peak, is reduced by introducing a linear optical element at the pupil plane with complex transmittance t(x, y). The image plane irradiance is approximated by I(x, y) = I_L(x, y) + I_b(x, y), where the contributions of the laser, I_L(x, y), and the background scene, I_b(x, y), are described by the following convolutions:

$$I_L(x,y,\lambda_L) = \alpha\,\bigl|U_g(x,y) * h(x,y,\lambda_L)\bigr|^2, \tag{1}$$
$$I_b(x,y,\lambda_b) = \beta\,\bigl(b_g(x,y,\lambda_b) * |h(x,y,\lambda_b)|^2\bigr), \tag{2}$$
where α and β are constants, Ug(x,y) and bg(x,y,λb) are the geometric images of the laser source and background scene, respectively, h(x,y,λ) is the complex PSF of the optical system given by the Fourier transform of the pupil function
$$h(x,y,\lambda) = \mathrm{FT}\{A(x,y)\,t(x,y)\} = \iint A(x',y')\,t(x',y')\exp\!\left(-i\,\frac{2\pi}{\lambda f}\bigl(x x' + y y'\bigr)\right)\,dx'\,dy', \tag{3}$$
and A(x,y) is the aperture function [16]. It is assumed that the aperture is a circle of radius R; that is, A(x,y)=circ(r/R), where (r,θ) are the circular coordinates in the (x,y) plane. The formalism above may be easily generalized for pulsed lasers by allowing time dependence in the laser contribution. The radiant exposure, or fluence, is defined as
$$\Phi(x,y,\lambda) = \int_0^{\Delta t} I(x,y,\lambda,t)\,dt, \tag{4}$$
where Δt is the exposure time. Assuming constant illumination, the radiant exposure due to the background scene Φb(x,y,λb) is proportional to Ib(x,y,λb).
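
To make the forward model concrete, the following minimal sketch (Python/NumPy; the grid size, aperture radius, and example phase element are our own illustrative choices, not values from the paper) computes the PSF as the Fourier transform of the pupil function, Eq. (3), and forms the incoherent background image by convolution with |h|², Eq. (2).

```python
import numpy as np

# --- Pupil-plane grid (arbitrary illustrative values) ---
N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)
theta = np.arctan2(Y, X)

A = (r <= 0.5).astype(float)          # circular aperture A(x,y) = circ(r/R), R = 0.5 grid units
t = np.exp(1j * 6 * theta)            # example phase element: a charge-6 vortex (illustrative)

# --- PSF: Fourier transform of the pupil function, Eq. (3) ---
pupil = A * t
h = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(h) ** 2
psf /= psf.sum()                      # normalize to unit energy

# --- Incoherent background image: convolution with |h|^2, Eq. (2) ---
scene = np.random.rand(N, N)          # stand-in for the geometric image b_g(x, y)
image = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))

print("peak PSF value with the example phase element:", psf.max())
```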


Fig. 1. Diagram of the optical system with incident radiation from an unwanted laser source and background scene located at the (x0,y0) plane. The object distance z is large with respect to the Rayleigh range of the laser beam zR. A phase element is placed adjacent to a lens with focal length f and is bounded by aperture A(x,y). Both the laser source and the spatially incoherent background scene are imaged at the (x,y) plane.


Laser damage thresholds for focal plane arrays depend on the architecture and materials that make up the device as well as the properties of the laser source, including wavelength and pulse duration. In each case, there are several damage mechanisms that occur at different exposure (fluence) levels [1]. For the purpose of this study, we use an estimated damage threshold based on typical morphological damage thresholds in charge-coupled devices (CCDs). The damage threshold of the Itek Optical Systems Model VLA577 E57D [17], for example, is Φ_d ≈ 1 J/cm² for a 10 ns pulse from a Q-switched Nd:YAG laser operating at λ = 1064 nm [18]. It is useful for our discussion to determine the ratio between the damage threshold and the saturation threshold of the CCD. Saturation occurs at an exposure of Φ_sat = 0.3 μJ/cm² with white light illumination [17]. Thus, Φ_d/Φ_sat ≈ 3 × 10⁶. We assume this value for the remainder of our discussion, keeping in mind that the damage threshold may vary significantly for different detectors and laser sources.

A scenario where a focal plane CCD can be damaged is represented by the following example. A single 10 ns pulse from a laser at distance z = 1 km with divergence angle θ = w_0/z_R = 2 mrad, w_0 = 1 mm, and λ = 1064 nm incident on an f/10 imaging system with R = 50 mm is likely to damage the detector (i.e., Φ > 1 J/cm²) if the output pulse energy exceeds 3 mJ. Here, the atmospheric extinction coefficient is taken to be 1 km⁻¹. However, if the optical system passively reduces the peak irradiance, and therefore the peak fluence, by two orders of magnitude, the imaging system can withstand a pulse with energy up to 300 mJ. Lasers with such output power levels are commercially available [19].
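
As a rough order-of-magnitude check of this scenario, the short script below estimates the peak focal-plane fluence under simplifying assumptions of ours that the text does not spell out: a Gaussian far-field beam with 1/e² radius θz, Beer–Lambert atmospheric attenuation, and the diffraction-limited relation Φ_peak ≈ E_collected · A_pupil/(λf)² for a uniformly illuminated circular pupil.

```python
import numpy as np

# Scenario parameters quoted in the text
E_pulse   = 3e-3        # output pulse energy [J]
z         = 1e3         # range [m]
theta     = 2e-3        # beam divergence angle [rad]
lam       = 1064e-9     # wavelength [m]
R         = 50e-3       # aperture radius [m] (f/10 system, D = 0.1 m)
f         = 10 * 2 * R  # focal length [m]
alpha_ext = 1.0         # atmospheric extinction coefficient [1/km]

# Energy collected by the aperture (assumed Gaussian beam, 1/e^2 radius w = theta*z)
w = theta * z
frac_collected = 1.0 - np.exp(-2.0 * (R / w) ** 2)
E_collected = E_pulse * np.exp(-alpha_ext * z / 1e3) * frac_collected

# Diffraction-limited peak fluence at the focal plane for a uniformly illuminated pupil
A_pupil = np.pi * R ** 2
phi_peak = E_collected * A_pupil / (lam * f) ** 2          # [J/m^2]

print(f"collected energy : {E_collected * 1e6:.2f} uJ")
print(f"peak fluence     : {phi_peak * 1e-4:.2f} J/cm^2")  # of order 1 J/cm^2
```

With these assumptions, a 3 mJ pulse yields a peak fluence near the 1 J/cm² damage threshold, consistent with the statement above.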

We examined several different pupil phase elements designed to mitigate the risk of detector damage. A phase-only element is used to obviate loss to the desired image signal. Figures 2 and 3 show several example pupil phase patterns and line profiles of their corresponding PSF amplitudes, respectively. Note that each of the phase patterns shown reduces I_peak by a factor of 100 relative to the peak image plane irradiance without a pupil phase element, I_0. The phase patterns include low-order Zernike polynomials [see Figs. 2(a)–2(d)]. Also shown are a vortex phase element with complex transmittance t(x,y) = exp(ilθ), where l is a nonzero integer known as the topological charge [see Fig. 2(e)], and an axicon with transmittance t(x,y) = exp(ir/a), where a is a constant [see Fig. 2(f)]. Even though these phase elements provide the same peak irradiance suppression, we show below that the vortex and axicon allow better image quality after postprocessing than the low-order Zernike polynomials.
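
The quoted factor-of-100 suppression can be verified numerically for any candidate phase. The sketch below (illustrative grid; the vortex charge and axicon constant follow Fig. 2, while the padding and sampling are our own choices) compares the peak image-plane irradiance with and without the phase element.

```python
import numpy as np

N = 1024
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
r, theta = np.hypot(X, Y), np.arctan2(Y, X)
R = 0.25                                    # aperture radius in grid units (pads the FFT)
A = (r <= R).astype(float)

def peak_irradiance(phase):
    """Peak of |FT{A * exp(i*phase)}|^2 on the computational grid."""
    pupil = A * np.exp(1j * phase)
    h = np.fft.fft2(np.fft.ifftshift(pupil))
    return np.max(np.abs(h) ** 2)

I0 = peak_irradiance(np.zeros_like(r))      # unaberrated reference
I_vortex = peak_irradiance(18 * theta)      # charge l = 18 vortex
I_axicon = peak_irradiance(r / (R / 28.3))  # axicon with a = R/28.3

print("vortex suppression I_peak/I0 :", I_vortex / I0)   # expected to be on the order of 1e-2
print("axicon suppression I_peak/I0 :", I_axicon / I0)
```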


Fig. 2. Several example pupil phase elements that yield two orders of magnitude reduction in peak image plane irradiance. The phase patterns shown are (a) defocus, (b) astigmatism, (c) coma, (d) trefoil, (e) a charge l = 18 vortex phase element, and (f) an axicon with a = R/28.3. The phase is wrapped between −π (black) and π (white).



Fig. 3. Horizontal line profiles (black) of the point spread functions (PSFs) corresponding to the pupil phase elements shown in Fig. 2. The PSF without a phase element (gray) is also shown in each case for comparison. The line profiles pass through the point of maximum amplitude and are normalized such that the maximum amplitude without a phase element is unity. We note that the functions shown in (c) and (d) are not circularly symmetric.


3. Image Recovery

An optical system with any of the PSFs shown in Fig. 3 produces a blurred image. However, the background scene may be reconstructed by removing the effect of the phase element from the detected image via Wiener deconvolution. The detected image is a digitally sampled representation of the scene with dimensions N×M and may be written as

$$G_{n,m} = \gamma\,\tilde{I}_b(n\Delta x, m\Delta x) + V_{n,m}, \tag{5}$$
where
$$\tilde{I}_b(x,y) = I_b(x,y) * d(x,y), \tag{6}$$
γ is a constant, Δx is the detector pitch, Vn,m is the noise per pixel in units of digital counts, and d(x,y) is the pixel impulse response. A digital approximation of the scene is recovered from the detected image by applying a Fourier domain Wiener filter [20]
$$W(k\Delta\xi, p\Delta\eta) = \frac{\tilde{H}^*(k\Delta\xi, p\Delta\eta)}{\bigl|\tilde{H}(k\Delta\xi, p\Delta\eta)\bigr|^2 + 1/\mathrm{SNR}}, \tag{7}$$
where (k,p) are the spatial frequency indices, Δξ = 1/(NΔx), Δη = 1/(MΔx), H̃(ξ,η) = H(ξ,η)D(ξ,η) is the system transfer function, H(ξ,η) = FT{|h(x,y)|²} is the optical transfer function, D(ξ,η) = FT{d(x,y)} is the detector transfer function, and SNR is the signal-to-noise ratio.
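
A minimal sketch of this reconstruction step is given below (our own variable names; H̃ is assumed known and shift invariant, and the SNR is treated as a scalar as in Eq. (7)).

```python
import numpy as np

def wiener_deconvolve(detected, psf, snr):
    """Recover the scene from a blurred, noisy image with a Fourier-domain Wiener filter, Eq. (7).

    detected : 2-D array, the sampled image G
    psf      : 2-D array, system impulse response (PSF convolved with the pixel response), centered
    snr      : scalar signal-to-noise ratio used in the regularization term 1/SNR
    """
    H = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))   # system transfer function H~
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)        # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(detected) * W))

# Example: blur a random test scene and recover it
rng = np.random.default_rng(0)
scene = rng.random((256, 256))
psf = np.zeros((256, 256)); psf[123:134, 123:134] = 1.0  # crude 11x11 box blur, centered
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))))
noisy = blurred + 0.01 * rng.standard_normal(blurred.shape)
recovered = wiener_deconvolve(noisy, psf, snr=128.0)
```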

4. Optimizing Image Quality

In order to assess the performance of the imaging system described above, we use the general image quality equation (GIQE) to estimate the quality of the deconvolved background image resulting from various pupil phase elements in terms of NIIRS, a quantitative metric commonly used by the strategic intelligence community [11–15]. The standard form of the GIQE is given by

$$\mathrm{NIIRS} = c_0 + c_1\log_{10}(\mathrm{GSD}) + c_2\log_{10}(\mathrm{RER}) + c_3\,G/\mathrm{SNR} + c_4 H, \tag{8}$$
where GSD = zΔx/f is the ground sample distance, RER is the relative edge response, G is the noise gain in postprocessing, H is the edge overshoot, and c_0 through c_4 are empirically obtained coefficients. The values of c_0 through c_4 have been modified through a number of GIQE versions. Versions 1.0 and 2.0 are not publicly available. The coefficients for versions 3.0 and 4.0 are given in Table 1.

Table 1. Coefficients of the GIQE [12]

The change in image quality caused by the introduction of the pupil phase element may be described as a change on the NIIRS scale

$$\Delta\mathrm{NIIRS} = \mathrm{NIIRS} - \mathrm{NIIRS}_0, \tag{9}$$
where NIIRS and NIIRS0 represent the quality of the reconstructed image with and without the pupil phase element, respectively. For GIQE versions 3.0 and 4.0, the change in image quality is
$$\Delta\mathrm{NIIRS}_{3.0,\,4.0} = c_2\log_{10}\!\left(\frac{\mathrm{RER}}{\mathrm{RER}_0}\right) + c_3\left(\frac{G - G_0}{\mathrm{SNR}}\right) + c_4\,(H - H_0), \tag{10}$$
where a constant GSD is assumed and RER0, G0, and H0 are calculated without a pupil phase element. The GIQE 3.0 and 4.0 are designed for rating diffraction limited imagery and are not ideal for heavily aberrated systems [21]. Thurman and Fienup [21] suggested a modified expression for aberrated imagery, which for an arbitrary aberration may be written as
$$\Delta\mathrm{NIIRS}_{\text{T-F}} = \log_2\!\left(\frac{\mathrm{RER}}{\mathrm{RER}_0}\right) - 2.3\left(\frac{G - G_0}{\mathrm{SNR}}\right). \tag{11}$$
We note that Eq. (11) was obtained using defocus and mid-spatial-frequency aberrations, for which the G/SNR coefficients are 2.229 and 2.574, respectively. The optimum G/SNR coefficient is expected to vary slightly between aberration types; however, we assume 2.3 for general application. In each case, the average RER is calculated along the vertical and horizontal axes: RER = (RER_x + RER_y)/2. For each direction, the RER is computed as
$$\mathrm{RER}_{x,y} = \mathrm{ER}_{x,y}(\Delta x/2) - \mathrm{ER}_{x,y}(-\Delta x/2), \tag{12}$$
where ERx,y is the edge response found by convolving the Heaviside step function
$$\Theta(u) = \begin{cases} 1, & u > 0 \\ 0, & u < 0 \end{cases} \tag{13}$$
with the impulse response of the system. The variable u stands for either x or y. The edge response is computed in the Fourier domain as
$$\mathrm{ER}_x(x) = \mathrm{FT}^{-1}\bigl\{\mathrm{FT}\{\Theta(x)\}\,\tilde{H}(\xi,\eta)\,W(\xi,\eta)\bigr\}\big|_{y=0}, \tag{14}$$
$$\mathrm{ER}_y(y) = \mathrm{FT}^{-1}\bigl\{\mathrm{FT}\{\Theta(y)\}\,\tilde{H}(\xi,\eta)\,W(\xi,\eta)\bigr\}\big|_{x=0}. \tag{15}$$
The edge overshoot is also averaged between the horizontal and vertical directions: H=(Hx+Hy)/2, where Hx,y=ERx,y(1.25Δx) if ERx,y(u) is monotonically increasing between u=Δx and u=3Δx. Otherwise, Hx,y=max{ERx,y(u)} on the interval between u=Δx and u=3Δx. The noise gain is defined by
$$G = \frac{\sqrt{\displaystyle\sum_{n=1}^{N}\sum_{m=1}^{M} w_{n,m}^{2}}}{\displaystyle\sum_{n=1}^{N}\sum_{m=1}^{M} w_{n,m}}, \tag{16}$$
where w_{n,m} is the discrete Fourier transform of the Wiener filter: w_{n,m} = DFT{W(kΔξ, pΔη)}. Assuming photon counting and Gaussian detector noise are present, the signal-to-noise ratio is approximated by
$$\mathrm{SNR} = \frac{S_{\mathrm{avg}}}{\sqrt{S_{\mathrm{avg}} + \sigma^2}}, \tag{17}$$
where Savg is the average detected signal per pixel and σ is the standard deviation of the detector noise.
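
The edge-response quantities defined above can be evaluated directly from the combined transfer function. The sketch below works in one dimension along a single frequency axis (our simplification; the paper averages the x and y directions) and returns the RER, the edge overshoot H, and the noise-gain term G/SNR needed for Eqs. (10) and (11).

```python
import numpy as np

def giqe_terms(H_sys, W, samples_per_pixel, snr):
    """Edge-response based GIQE terms for a combined (system x Wiener) transfer function.

    H_sys, W          : 1-D transfer-function samples along one frequency axis, DC-first (FFT ordering)
    samples_per_pixel : fine samples per detector pixel (sets Delta-x in pixel units)
    snr               : signal-to-noise ratio
    Returns (RER, H_overshoot, G/SNR) for that axis.
    """
    # End-to-end impulse response of optics + detector + Wiener filter, centered and normalized
    imp = np.real(np.fft.ifft(H_sys * W))
    imp = np.fft.fftshift(imp) / imp.sum()

    # Edge response ER(u): running sum of the impulse response (convolution with a unit step)
    n = imp.size
    u = (np.arange(n) - n // 2) / samples_per_pixel      # position in detector-pixel units
    er = np.cumsum(imp)

    # Relative edge response, Eq. (12)
    rer = np.interp(0.5, u, er) - np.interp(-0.5, u, er)

    # Edge overshoot: ER(1.25 px) if ER is monotone on [1, 3] px, otherwise its maximum there
    sel = (u >= 1.0) & (u <= 3.0)
    h_over = np.interp(1.25, u, er) if np.all(np.diff(er[sel]) >= 0) else er[sel].max()

    # Noise gain of the Wiener kernel, Eq. (16)
    w_kernel = np.real(np.fft.ifft(W))
    g = np.sqrt(np.sum(w_kernel ** 2)) / np.sum(w_kernel)
    return rer, h_over, g / snr
```

Evaluating these terms with and without the pupil phase element, averaging over the x and y axes, and inserting the results into Eq. (10) or (11) reproduces the type of ΔNIIRS estimate plotted in Fig. 4.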

Figure 4 shows ΔNIIRS estimates for the pupil phase elements shown in Fig. 2 using each of the three versions described by Eqs. (10) and (11). The strength parameter of each phase function is varied to show the performance at all relevant values of peak irradiance reduction. We assume a 16-bit detector with an average signal corresponding to 16,384 digital counts and detector noise equivalent to σ = 5 digital counts (SNR = 128). The ΔNIIRS results are plotted against the reduction in peak image plane irradiance owing to a point source in the object plane. The calculations were performed using a 4096 × 4096 computational grid with 0.083 λF# per sample in the image plane and a detector sampling rate of Q = λF#/Δx = 1.19. ΔNIIRS_4.0 yields the most optimistic results, whereas ΔNIIRS_T-F predicts the largest loss in image quality. For reference, ΔNIIRS = −1 corresponds to approximately a factor of two loss in resolution, while ΔNIIRS = −0.1 corresponds to a barely noticeable difference [22,23]. In the most optimistic case, reducing I_peak by two orders of magnitude is possible with ΔNIIRS ≈ −1.25. Discontinuities in the ΔNIIRS curves arise from the conditional definition of the edge overshoot.


Fig. 4. Estimated loss in image quality ΔNIIRS for the incoherent scene as compared to an unprotected system plotted against the relative peak image plane irradiance owing to a bright laser point source in the object plane. The results are shown for GIQE versions (a) 3.0 and (b) 4.0 [12] as well as (c) the modified version suggested by Thurman and Fienup [21] for aberrated imagery. I0 is the peak image plane irradiance without a pupil phase element. The data for the vortex phase element are plotted at integer values of topological charge l.


Among the pupil phase elements we have considered, the vortex phase element offers the best performance in cases where one to two orders of magnitude of peak irradiance suppression is required. The axicon performs slightly better than the vortex phase element near I_peak/I_0 = 10⁻² according to all of the ΔNIIRS estimates. The comparable performance between the vortex phase and axicon pupil elements is attributed to the similarity between their PSFs, which appear as a narrow “donut.” This PSF shape is desirable because the light is spread out in the image plane while the PSF is narrow in the radial direction.

5. Computed Image: without Laser Threat

We choose a vortex phase pupil element for demonstration purposes, which yields a complex PSF generally given by

$$h(x,y,\lambda) = \mathrm{FT}\{\mathrm{circ}(r'/R)\exp(il\theta')\} = C_l \exp(il\theta)\int_0^R J_l\!\left(\frac{2\pi r r'}{\lambda f}\right) r'\,dr', \tag{18}$$
where l is the topological charge, C_l = 2π(−i)^l, J_l(z) is the lth-order Bessel function of the first kind, f is the focal length of the lens, and (r,θ) are the circular coordinates in the (x,y) plane [24,25]. Analytical solutions to Eq. (18) are given elsewhere [26,27].
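
For reference, Eq. (18) can be evaluated numerically for any image-plane radius with standard quadrature. The sketch below (SciPy; the wavelength, aperture radius, and focal length are arbitrary illustrative values) computes the radial irradiance profile of the charge-18 vortex PSF up to the constant |C_l|².

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def vortex_psf_radial(r, l=18, R=0.05, lam=550e-9, f=1.0):
    """|h|^2 (up to the constant |C_l|^2) for a charge-l vortex over a circular aperture, Eq. (18)."""
    integrand = lambda rp: jv(l, 2 * np.pi * r * rp / (lam * f)) * rp
    val, _ = quad(integrand, 0.0, R, limit=200)
    return val ** 2

# Radial profile out to ~30 lambda*F# (F# = f / 2R)
fnum = 1.0 / (2 * 0.05)
radii = np.linspace(0, 30 * 550e-9 * fnum, 200)
profile = np.array([vortex_psf_radial(r) for r in radii])
print("l=18 ring peak at r/(lambda F#) ~", radii[np.argmax(profile)] / (550e-9 * fnum))
```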

We numerically calculate the images generated by the system described above with an l = 18 vortex phase pupil element, which affords slightly more than two orders of magnitude in peak suppression. The computed images (see Fig. 5) are obtained by convolving a discretely sampled test scene with the PSF of the optical system and the impulse response of the detector. The scene is then resampled from 0.25 λF# per sample to 1.19 λF# per pixel (i.e., Q = 1.19). The diameter of the aperture is assumed to be 0.1 m, and the modeled exposure time provides an average signal corresponding to S_avg = 16,384 digital counts. Both Poisson and Gaussian distributed noise are added, where the Gaussian noise has a standard deviation of σ = 5 digital counts. Kolmogorov random phase screens are applied at the pupil plane to simulate wavefront error caused by atmospheric distortion. The phase screens are calculated using a subharmonic Monte Carlo method [28,29] assuming a coherence diameter of r_0 = 0.1 m and inner and outer scales of 5 mm and 20 m, respectively.
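
A simplified version of this detection model is sketched below (our own simplifications: block averaging stands in for the non-integer resampling to Q = 1.19, the Kolmogorov phase screens are omitted, and the count levels follow the values quoted above).

```python
import numpy as np

rng = np.random.default_rng(1)

def detect(scene, psf, block=5, mean_counts=16384, max_counts=65535, read_sigma=5.0):
    """Blur a finely sampled scene, integrate onto coarser detector pixels, and add Poisson + Gaussian noise."""
    # Optical blur at the fine sampling (circular convolution via FFT); psf is assumed centered
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))))

    # Pixel integration: average block x block fine samples per detector pixel (stand-in for Q = 1.19 resampling)
    ny, nx = (blurred.shape[0] // block) * block, (blurred.shape[1] // block) * block
    pixels = blurred[:ny, :nx].reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

    # Scale to the quoted mean signal level, then add shot noise and Gaussian read noise
    signal = pixels / pixels.mean() * mean_counts
    counts = rng.poisson(np.clip(signal, 0, None)).astype(float)
    counts += read_sigma * rng.standard_normal(counts.shape)
    return np.clip(counts, 0, max_counts)          # 16-bit detector saturates at max_counts
```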


Fig. 5. Computed image with an l=18 vortex phase pupil element (a) before and (b) after Wiener filtering. The deconvolved image without the vortex phase element (i.e., the unprotected system) is shown in (c) for comparison. The image dimensions are 794×1112 pixels.


The detected image with the vortex pupil phase element [see Fig. 5(a)] appears blurry, but it is sharpened by deconvolving the PSF of the system [see Fig. 5(b)]. However, there is an overall loss in image quality in the recovered image that corresponds to ΔNIIRS ≈ −1.63 as compared to the image obtained with an unprotected system [see Fig. 5(c)]. Roughly speaking, this corresponds to a factor of 3 loss in visual resolution.

6. Computed Image: with Laser Threat

The benefit of the pupil phase element is most obvious in the case where the peak irradiance due to a potentially damaging laser source is reduced to a safe level by the pupil phase element. Figure 6 shows a computed image of the test scene obtained with a bright laser originating from a window of the building. In the vicinity of the laser, the image becomes heavily saturated. As previously noted, a laser may cause damage if the image plane exposure reaches approximately 3 × 10⁶ Φ_sat. In this scenario, the pupil element reduces the peak image plane exposure from 3 × 10⁶ Φ_sat to less than 3 × 10⁴ Φ_sat, thereby protecting the sensor from damage. Line profiles of the image plane irradiance along the dashed line in Fig. 6 are shown in Fig. 7.


Fig. 6. Computed image with an l=18 vortex phase pupil element in the presence of a potentially damaging laser source (before postprocessing). Without the phase pupil element, permanent damage would occur on the sensor. Image plane exposure profiles along the dashed lines are shown in Fig. 7 with and without the protection provided by the pupil phase element.



Fig. 7. Profiles of image plane exposure along the dashed line in Fig. 6. Without the pupil phase element (l=0), the detector may become damaged. The l=18 vortex phase pupil element reduces the peak exposure to a safe level by spreading out the laser light.


In cases where the laser source is not bright enough to cause permanent detector damage, the detector may still locally saturate leading to a loss of information. A circular saturation region appears where the background information is completely lost, as shown in Fig. 8. Moreover, the local SNR may become too low for detection because of the noise associated with diffracted laser light. It is important to note that the saturated region will be larger with the pupil phase element in place, since the laser light is spread out on the detector. Figure 8 shows recovered images with a laser source of increasing brightness and the l=18 vortex phase pupil element in place. The saturation region becomes large for brighter laser sources, which obscures a substantial region of the background scene. Ringing artifacts also occur around the laser source after deconvolution and tend to appear near sharp edges in the image. Other saturation artifacts such as blooming have not been modeled.


Fig. 8. Computed images after postprocessing with Wiener deconvolution for Φ_peak/Φ_sat = (a) 10², (b) 10³, (c) 10⁴, and (d) 10⁵, where Φ_peak is defined with the l = 18 vortex phase element in the pupil plane of the optical system.


In situations where critical targets appear near the laser source, additional postprocessing steps may be introduced to improve recovery of the background scene in the vicinity of the saturated region. We have implemented a two-step process for removing the laser contribution from the image prior to deconvolution. This prevents ringing artifacts that may hinder detection of objects near the laser source in the image. First, we apply nonlinear optimization to estimate the contribution of the laser source from the detected image. Assuming a circular laser source with constant amplitude and phase, a Nelder–Mead simplex algorithm [30,31] is used to vary the size, location, and brightness until the expected laser contribution best matches the detected image in the region where laser light is present. We define the error metric as the squared difference between the detected image G_{n,m} and the estimated laser contribution Φ̂(nΔx, mΔx), calculated over the region of interest (ROI):

$$E = \sum_{(n,m)\,\in\,\mathrm{ROI}} \bigl[G_{n,m} - \hat{\Phi}(n\Delta x, m\Delta x)\bigr]^2, \tag{19}$$
where ROI is a circle centered on the estimated laser location. The location estimate corresponds to the maximum value returned by matched filtering the image using the PSF of the optical system as the kernel. The initial location estimate is sufficient in cases of low levels of wavefront error and does not need to be varied further by the optimization algorithm. For higher levels of wavefront error, varying the location improves the estimate of the laser contribution. Two independent parameters describe the brightness of the laser source because the appearance of a given laser source depends on the exposure time as well as the background light level. In addition, the estimation process allows for a partially saturated image, as may be the case when operating in the presence of an unexpected laser source. The laser-subtracted image X_{n,m} is
$$X_{n,m} = G_{n,m} - \hat{\Phi}(n\Delta x, m\Delta x). \tag{20}$$
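
A sketch of this estimation step using SciPy's Nelder–Mead implementation is given below. The parameterization (a fixed matched-filter location with two fitted brightness parameters, an amplitude and a local offset, evaluated over unsaturated ROI pixels) is our simplified reading of the procedure; the variable names and ROI radius are ours.

```python
import numpy as np
from scipy.optimize import minimize

def subtract_laser(detected, psf_laser, roi_radius_px=60, sat_level=65535):
    """Estimate and remove the laser contribution: matched-filter location + Nelder-Mead brightness fit.

    detected  : 2-D detected image G
    psf_laser : laser PSF (same shape as `detected`, centered in the frame)
    """
    ny, nx = detected.shape

    # Initial location estimate: peak of the matched filter (correlation with the laser PSF)
    template = np.fft.ifftshift(psf_laser)
    corr = np.real(np.fft.ifft2(np.fft.fft2(detected) * np.conj(np.fft.fft2(template))))
    y0, x0 = np.unravel_index(np.argmax(corr), corr.shape)

    # Laser PSF re-centered on the estimated location (location held fixed during the fit)
    psf_at = np.roll(np.roll(template, y0, axis=0), x0, axis=1)

    # Circular region of interest; saturated pixels are excluded from the fit
    yy, xx = np.mgrid[:ny, :nx]
    roi = (yy - y0) ** 2 + (xx - x0) ** 2 <= roi_radius_px ** 2
    fit_px = roi & (detected < sat_level)

    # Two brightness parameters: laser amplitude and a local background offset, cost as in Eq. (19)
    cost = lambda p: np.sum((detected[fit_px] - (p[0] * psf_at[fit_px] + p[1])) ** 2)
    p0 = [detected[fit_px].max() / max(psf_at[fit_px].max(), 1e-12), np.median(detected[fit_px])]
    res = minimize(cost, p0, method="Nelder-Mead")

    # Eq. (20): subtract only the fitted laser term; the offset is a nuisance parameter of the fit
    return detected - res.x[0] * psf_at, roi
```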

The images that result from the laser subtraction routine often have residual speckle due to wavefront error that, when significant, may also cause ringing artifacts in deconvolution. To remedy this, we have devised a gradient-based speckle suppression algorithm that iteratively reduces the image values in regions where the magnitude of the image gradient is high. Each iteration is computed as

$$X_{n,m}^{(i+1)} = X_{n,m}^{(i)} \times \bigl(1 - \varepsilon\,\bigl|\nabla X_{n,m}^{(i)}\bigr|\bigr), \tag{21}$$
where ε is a constant. Typically, ε=0.9 is chosen. By subtracting the expected contribution of the laser source and reducing the residual speckle prior to deconvolution, ringing artifacts in the recovered images are avoided [see Figs. 9(a) and 9(b)].
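
A direct transcription of Eq. (21) is given below. Normalizing the gradient magnitude to its maximum is our addition, made so that ε = 0.9 never drives pixel values negative; the paper does not state how the gradient is scaled.

```python
import numpy as np

def suppress_speckle(img, eps=0.9, n_iter=10):
    """Iteratively attenuate pixels where the normalized image-gradient magnitude is large, Eq. (21)."""
    x = img.astype(float).copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(x)
        grad_mag = np.hypot(gx, gy)
        grad_mag /= grad_mag.max() + 1e-12   # normalize so eps * |grad| stays below 1 (our assumption)
        x *= (1.0 - eps * grad_mag)
    return x
```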


Fig. 9. Computed, recovered images with the l = 18 vortex phase element, Φ_peak/Φ_sat = 10², and SNR = 128. (a), (b) Recovered images using Wiener deconvolution with (a) the laser contribution subtracted and (b) after gradient-based speckle reduction. The ringing artifacts are reduced as compared to Fig. 8(a). (c), (d) Recovered images using Lucy–Richardson iterative deconvolution, where (c) is the recovered image using only the Lucy–Richardson iterative deconvolution algorithm (i.e., without performing the laser removal process) and (d) is the recovered image with the laser contribution removed. For this example, 10 iterations of gradient-based speckle suppression and 20 iterations of Lucy–Richardson deconvolution are used.


Alternate deconvolution algorithms may also be less susceptible to certain types of artifacts. For example, ringing artifacts are less prominent when Lucy–Richardson (L–R) iterative deconvolution [32,33] is used instead of Wiener deconvolution [see Fig. 9(c)]. The L–R results are also improved when the laser contribution is removed as described above prior to performing L–R deconvolution [see Fig. 9(d)].
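
For completeness, a compact NumPy implementation of the Lucy–Richardson update is shown below (FFT-based circular convolutions; a generic sketch rather than the exact code used to produce Fig. 9).

```python
import numpy as np

def lucy_richardson(image, psf, n_iter=20):
    """Lucy-Richardson iterative deconvolution using FFT-based (circular) convolutions."""
    psf = psf / psf.sum()
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda a, H: np.real(np.fft.ifft2(np.fft.fft2(a) * H))

    estimate = np.full_like(image, image.mean(), dtype=float)
    for _ in range(n_iter):
        reblurred = np.clip(conv(estimate, otf), 1e-12, None)   # current model of the detected image
        ratio = image / reblurred
        estimate *= conv(ratio, np.conj(otf))                   # multiplicative L-R update (correlation with the PSF)
    return estimate
```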

Using the additional postprocessing steps outlined above, the scene information may be recovered very close to the saturation region. However, wavefront error and noise are the ultimate limitation for recovering the background scene. In other words, the postprocessing routines fail when the actual PSF is very different from the expected PSF.

7. Future Work

Although the GIQE and NIIRS rating were used in this work, image quality may also be estimated using alternate metrics such as the mean-squared error (MSE), peak signal-to-noise ratio (PSNR), or perception-based image quality metrics including the structural similarity index [34], edge metrics [35], the task satisfaction confidence scale [14], as well as others [36]. The image quality achieved using deconvolution algorithms other than the Wiener filter may also be assessed [37,38].

A phase element that operates over a large bandwidth is needed for the application presented here. With recent advances in fabrication, achromatic vortex phase elements with high topological charge are possible in a number of wavelength regimes, including the visible, near-, and mid-infrared. Broadband vortex phase transmittance may be achieved by use of holographic elements [39,40] with dispersion compensation [41–43], subwavelength gratings [44–46], liquid crystal elements [47–49], and photonic crystal elements [50]. The most promising designs for achromatic, high topological charge vortex elements are liquid crystal vector vortex elements [51].

8. Conclusion

We have presented a novel imaging system design that mitigates the risk of damage caused by laser radiation. Optical elements for predetection processing and postprocessing routines have been developed to optimize the image quality. The main advantages of the approach presented here are that the system is compact, the optical technology is readily available, and prior knowledge of the laser source location, brightness, wavelength, or polarization is not required. This approach is particularly well suited for surveillance scenarios with a high probability of incident laser radiation, under the added assumption that light from the laser sources is unlikely to surpass the damage threshold by more than a few orders of magnitude. This may be the case for imaging crowd members with handheld laser pointers several meters from the imaging system. If more powerful sources are expected, a pupil phase element that further reduces the peak image plane irradiance may be used at the cost of image quality. Alternatively, the technique presented here may be used in conjunction with other approaches, such as nonlinear optical limiting materials in a focal plane.

This work was supported by the Naval Research Enterprise Internship Program, the American Society for Engineering Education, and the Office of Naval Research.

References

1. M. F. Becker, C.-Z. Zhang, S. E. Watkins, and R. M. Walser, “Laser-induced damage to silicon CCD imaging sensors,” Proc. SPIE 1105, 68–77 (1989).

2. B. Lyot, “The study of the solar corona and prominences without eclipses,” Mon. Not. R. Astron. Soc. 99, 580–594 (1939).

3. W. A. Traub and B. R. Oppenheimer, “Direct imaging of exoplanets,” in Exoplanets, Space Science Series, S. Seager, ed. (University of Arizona, 2010), pp. 111–156.

4. J. T. Trauger and W. A. Traub, “A laboratory demonstration of the capability to image an Earth-like extrasolar planet,” Nature 446, 771–773 (2007).

5. G. A. Swartzlander Jr., B. L. Justus, A. L. Huston, A. J. Campillo, and C. T. Law, “Characteristics of a low f-number broadband visible thermal optical limiter,” Int. J. Nonlinear Opt. Phys. 2, 577–611 (1993).

6. L. W. Tutt and T. F. Boggess, “A review of optical limiting mechanisms and devices using organics, fullerenes, semiconductors and other materials,” Prog. Quantum Electron. 17, 299–338 (1993).

7. Y.-P. Sun and J. E. Riggs, “Organic and inorganic optical limiting materials. From fullerenes to nanoparticles,” Int. Rev. Phys. Chem. 18, 42–90 (1999).

8. I. C. Khoo, A. Diaz, and J. Ding, “Nonlinear-absorbing fiber array for large-dynamic-range optical limiting application against intense short laser pulses,” J. Opt. Soc. Am. B 21, 1234–1240 (2004).

9. M. J. Miller, A. G. Mott, and B. P. Ketchel, “General optical limiting requirements,” Proc. SPIE 3472, 24–29 (1998).

10. G. Ritt, D. Walter, and B. Eberle, “Research on laser protection: an overview of 20 years of activities at Fraunhofer IOSB,” Proc. SPIE 8896, 88960G (2013).

11. K. Riehl Jr. and L. A. Maver, “Comparison of two common aerial reconnaissance image quality measures,” Proc. SPIE 2829, 242–254 (1996).

12. J. C. Leachtenauer, W. Malila, J. Irvine, L. Colburn, and N. Salvaggio, “General image-quality equation: GIQE,” Appl. Opt. 36, 8322–8328 (1997).

13. J. M. Irvine, “National imagery interpretability rating scales (NIIRS): overview and methodology,” Proc. SPIE 3128, 93–103 (1997).

14. J. C. Leachtenauer and R. G. Driggers, Surveillance and Reconnaissance Systems: Modeling and Performance Prediction (Artech, 2001).

15. J. R. Schott, Remote Sensing: The Image Chain Approach (Oxford University, 2007).

16. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

17. J. P. Ellul, H. Y. Tsoi, J. J. White, M. I. H. King, W. C. Bradley, and D. W. Colvin, “A buttable 2048 × 96 element TDI imaging array,” Proc. SPIE 501, 117–127 (1984).

18. C. Zhang, L. Blarre, R. M. Walser, and M. F. Becker, “Mechanisms for laser-induced functional damage to silicon charge-coupled imaging sensors,” Appl. Opt. 32, 5201–5210 (1993).

19. http://www.spectra-physics.com/.

20. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series: With Engineering Applications (MIT, 1964).

21. S. T. Thurman and J. R. Fienup, “Application of the general image-quality equation to aberrated imagery,” Appl. Opt. 49, 2132–2142 (2010).

22. R. D. Fiete and T. Tantalo, “Comparison of SNR image quality metrics for remote sensing systems,” Opt. Eng. 40, 574–585 (2001).

23. R. D. Fiete, T. A. Tantalo, J. R. Calus, and J. A. Mooney, “Image quality of sparse-aperture designs for remote sensing,” Opt. Eng. 41, 1957–1969 (2002).

24. G. A. Swartzlander, “Peering into darkness with a vortex spatial filter,” Opt. Lett. 26, 497–499 (2001).

25. J. P. Treviño, O. López-Cruz, and S. Chávez-Cerda, “Segmented vortex telescope and its tolerance to diffraction effects and primary aberrations,” Opt. Eng. 52, 081605 (2013).

26. V. V. Kotlyar, S. N. Khonina, A. A. Kovalev, V. A. Soifer, H. Elfstrom, and J. Turunen, “Diffraction of a plane, finite-radius wave by a spiral phase plate,” Opt. Lett. 31, 1597–1599 (2006).

27. V. V. Kotlyar, A. A. Kovalev, R. V. Skidanov, O. Y. Moiseev, and V. A. Soifer, “Diffraction of a finite-radius plane wave and a Gaussian beam by a helical axicon and a spiral phase plate,” J. Opt. Soc. Am. A 24, 1955–1964 (2007).

28. R. G. Lane, A. Glindemann, and J. C. Dainty, “Simulation of a Kolmogorov phase screen,” Waves Random Media 2, 209–224 (1992).

29. J. D. Schmidt, Numerical Simulation of Optical Wave Propagation (SPIE, 2010).

30. J. A. Nelder and R. Mead, “A simplex method for function minimization,” Comput. J. 7, 308–313 (1965).

31. J. Lagarias, J. Reeds, M. Wright, and P. Wright, “Convergence properties of the Nelder–Mead simplex method in low dimensions,” SIAM J. Optim. 9, 112–147 (1998).

32. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55–59 (1972).

33. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745–754 (1974).

34. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).

35. G. O’Brien, S. A. Israel, J. M. Irvine, C. Fenimore, J. Roberts, M. Brennan, D. Cannon, and J. Miller, “Metrics to estimate image quality in compressed video sequences,” Proc. SPIE 6546, 65460A (2007).

36. A. B. Watson, ed., Digital Images and Human Vision (MIT, 1993).

37. J. L. Starck, E. Pantin, and F. Murtagh, “Deconvolution in astronomy: a review,” Publ. Astron. Soc. Pac. 114, 1051–1069 (2002).

38. X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 157–170 (2013).

39. V. Y. Bazhenov, M. V. Vasnetsov, and M. S. Soskin, “Laser beams with screw dislocations in their wavefronts,” JETP Lett. 52, 429–431 (1990).

40. N. R. Heckenberg, R. McDuff, C. P. Smith, and A. G. White, “Generation of optical phase singularities by computer-generated holograms,” Opt. Lett. 17, 221–223 (1992).

41. J. Leach and M. J. Padgett, “Observation of chromatic effects near a white-light vortex,” New J. Phys. 5, 154 (2003).

42. I. G. Mariyenko, J. Strohaber, and C. J. G. J. Uiterwaal, “Creation of optical vortices in femtosecond pulses,” Opt. Express 13, 7599–7608 (2005).

43. G. J. Ruane, P. Kanburapa, J. Han, and G. A. Swartzlander, “Vortex-phase filtering technique for extracting spatial information from unresolved sources,” Appl. Opt. 53, 4503–4508 (2014).

44. Z. Bomzon, V. Kleiner, and E. Hasman, “Pancharatnam–Berry phase in space-variant polarization-state manipulations with subwavelength gratings,” Opt. Lett. 26, 1424–1426 (2001).

45. D. Mawet, P. Riaud, O. Absil, and J. Surdej, “Annular groove phase mask coronagraph,” Astrophys. J. 633, 1191–1200 (2005).

46. C. Delacroix, O. Absil, P. Forsberg, D. Mawet, V. Christiaens, M. Karlsson, A. Boccaletti, P. Baudoz, M. Kuittinen, I. Vartiainen, J. Surdej, and S. Habraken, “Laboratory demonstration of a mid-infrared AGPM vector vortex coronagraph,” Astron. Astrophys. 553, A98–A106 (2013).

47. L. Marrucci, C. Manzo, and D. Paparo, “Optical spin-to-orbital angular momentum conversion in inhomogeneous anisotropic media,” Phys. Rev. Lett. 96, 163905 (2006).

48. D. Mawet, E. Serabyn, K. Liewer, C. Hanot, S. McEldowney, D. Shemo, and N. O’Brien, “Optical vectorial vortex coronagraphs using liquid crystal polymers: theory, manufacturing and laboratory demonstration,” Opt. Express 17, 1902–1918 (2009).

49. S. Slussarenko, A. Murauski, T. Du, V. Chigrinov, L. Marrucci, and E. Santamato, “Tunable liquid crystal q-plates with arbitrary topological charge,” Opt. Express 19, 4085–4090 (2011).

50. N. Murakami, S. Hamaguchi, M. Sakamoto, R. Fukumoto, A. Ise, K. Oka, N. Baba, and M. Tamura, “Design and laboratory demonstration of an achromatic vector vortex coronagraph,” Opt. Express 21, 7400–7410 (2013).

51. S. R. Nersisyan, N. V. Tabiryan, D. Mawet, and E. Serabyn, “Improving vector vortex waveplates for high-contrast coronagraphy,” Opt. Express 21, 8205–8213 (2013).
