Optica Publishing Group

Time multiplexing super-resolved imaging without a priori knowledge of the spatial distribution of the encoding structured illumination

Open Access

Abstract

Time multiplexing is a super-resolution technique that sacrifices acquisition time to overcome the resolution loss caused by diffraction. There are many super-resolution methods based on time multiplexing, but all of them require a priori knowledge of the time-varying encoding mask, which is projected onto the object and used to encode and decode the high-resolution information. In this paper, we present a time multiplexing technique that does not require a priori knowledge of the projected encoding mask. First, the theoretical concept of the technique is demonstrated; then, numerical simulations and experimental results are presented.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Imaging systems have resolution limitations due to several mechanisms, one of them being diffraction. The diffraction limit produces spatial low-pass filtering and a loss of imaging resolution. Super-resolution techniques can be used to solve this problem. The main concept of super-resolution is to encode the high-resolution spatial information into another dimension or axis; this means that we sacrifice one domain in order to improve the spatial domain [1–4]. There are many super-resolution methods, each one sacrificing a different axis: for instance, we can sacrifice the time axis [5,6], the wavelength axis [7–10], or even the field of view [11,12]. Another example is the case of 1-D objects [13], where we can use this a priori information to overcome the diffraction limit [14]: we can treat a 1-D object as if it had two dimensions [15], or use the spectral dilation method [16]. One can also rely on optical fluctuations [17] to achieve better resolution, and living cells can be super-resolved by molecule-localization microscopy methods such as STED, PALM, and STORM [18–20]. Each method requires some a priori knowledge about the object’s characteristics, both to know which axis can be sacrificed and to be able to perform the encoding/decoding super-resolving process.

In this paper we present a new time multiplexing approach. One time multiplexing approach was suggested by Lukosz [6]; in that approach, two moving gratings are needed: one grating encodes the high-resolution spatial information and is positioned in front of the object, while the other is positioned near the detector and decodes the high-resolution information. Time multiplexing requires that the object does not change in time, i.e. $u(x, y, t) = u(x, y)$, where $u(x, y, t)$ is the object’s spatiotemporal field. This restriction is imposed because the purpose of the gratings is to encode spatial high-resolution information into different time slots; as a consequence, if the object changed in time, the method would not work properly. Another important piece of information that we must know is the spatial distribution of the encoding structure, because we need to know how our information is encoded over time by the grating.

In order to achieve the resolution improvement, the illuminating structure needs to have a resolution at least as fine as the resolution we are trying to accomplish; therefore, the projecting system should have better optics than the imaging system, whereas the concept of super-resolution is precisely not to improve the optics. There are a couple of techniques to avoid better optics in the projecting system. One of them is to illuminate a diffuser with a laser, which creates a speckle pattern on the object; this is a very cheap way to project a pattern, and the resolution of the pattern can be set by placing the diffuser at the right distance from the object [21–24].

The new approach that we present here overcomes the diffraction limit without the need to know the spatial distribution of the encoding grating; moreover, the grating is decoded out of the set of captured low-resolution images. Previously, we demonstrated the capability of doing time multiplexing super-resolved imaging without a priori knowledge of the encoding grating [24]; however, that approach required a plurality of wavelengths. In contrast, in the current approach, the super-resolved image is obtained with a single illumination wavelength.

In the proposed approach, we capture several low-resolution images, with the encoding function spatially shifted for each capture. As a result, each image contains a different combination of different regions of the object's spatial spectrum. By knowing the connections between the images, we can calculate the different regions of the object’s spectrum. Every region has a different coefficient, given by the Fourier coefficient of the high-resolution pattern that illuminates the object.

This paper is organized as follows: the mathematical derivation that supports our super-resolving concept is presented in Section 2, numerical simulations are presented in Section 3, and experimental results are shown in Section 4. Finally, we conclude the paper in Section 5.

2. Mathematical validation

We assume that the encoding function is periodic in space, and thus its spectrum is expressed as a set of Dirac delta functions with spectral spacing δµ. As depicted in Fig. 1, we will assume that $\tilde{S}(\mu)$ is the unknown spectrum of the inspected object and that $a_n$ are the Fourier series coefficients of the unknown encoding function.


Fig. 1. Schematic sketch used for the mathematical derivation.


The spectrum also has a resolution given by the inverse of the spatial extent of the object. Let us assume that the object has spatial dimensions Δx; thus the spectral resolution is:

$$\delta\nu = \frac{1}{\Delta x}\tag{1}$$

If, for instance, $\delta\nu = \delta\mu$, then the periodicity assumption is not even needed, as it means that the spatial period of the encoding function equals the finite dimensions of the inspected object.

We denote by Δµ the (low) resolution of the imaging system, and our aim is to perform super-resolution to detect the full spectral bandwidth of our object while the encoding function is unknown as well. Without loss of generality, we use a 1-D analysis for the sake of simplicity.

We capture M low-resolution images, where the encoding function is shifted by δx between two subsequent captured images. Therefore, we record M images related to the object and the encoding mask as follows:

$$R_m(x) = S(x)\cdot E(x - m\delta x),\tag{2}$$
where $R_m$ denotes the m-th captured image, and $S$ and $E$ represent the object and the encoding mask, respectively.

The Fourier transform of the previous, making use of the convolution and shift theorems, could be written as:

$$\tilde{R}_m(\mu) = \tilde{S}(\mu) \ast \left\{ \exp(-2\pi i\, m\delta x\,\mu) \sum_n a_n\,\delta(\mu - n\delta\mu) \right\}\tag{3}$$
Taking into account the fact that our capturing system is band-limited to Δµ, we must band-limit the previous expression accordingly. As a result, after performing the convolution, the captured information can be expressed by the following equation:
$$\tilde{R}_m(\mu) = \left( \sum_{n=-N/2}^{+N/2} a_n \exp(-2\pi i\, m\delta x\, n\delta\mu)\, \tilde{S}(\mu - n\delta\mu) \right) \mathrm{rect}\!\left(\frac{\mu}{\Delta\mu}\right)\tag{4}$$
where
$$N = \eta\,\frac{\Delta\mu}{\delta\mu}\tag{5}$$
and $\eta$ is the super-resolution factor, i.e. the ratio between the spectral width of the inspected object's spectrum and Δµ.

In addition, the number of equations, each corresponding to a different amount of spatial shifting $m\delta x$, can be calculated as follows:

$$M = \frac{\eta\,\Delta\mu\left(\dfrac{1}{\delta\nu} + \dfrac{1}{\delta\mu}\right)}{\Delta\mu/\delta\nu} = \eta\left(1 + \frac{\delta\nu}{\delta\mu}\right)\tag{6}$$

In the case that the encoding function is not periodic, i.e. $\delta\nu = \delta\mu$, the last equation becomes $M = 2\eta$.
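
The capture-count formula $M = \eta(1 + \delta\nu/\delta\mu)$ can be checked with a one-line calculation. This is our own illustrative sketch (the function name and the example values are ours, not the authors'):

```python
# Number of captures required: M = eta * (1 + dnu/dmu),
# where eta is the desired super-resolution factor.
def required_captures(eta, dnu_over_dmu):
    return eta * (1 + dnu_over_dmu)

# Non-periodic encoding (delta_nu = delta_mu) gives M = 2 * eta.
M = required_captures(3, 1)
```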

In order to solve the set of equations described by Eq. (4), we write it in matrix form using the notation:

$$\begin{aligned} \tilde{S}_n(\mu) &= \tilde{S}(\mu - n\delta\mu)\,\mathrm{rect}\!\left(\frac{\mu}{\Delta\mu}\right)\\ A_{m,n} &= a_n \exp(-2\pi i\, m\delta x\, n\delta\mu)\\ \tilde{R}_m(\mu) &= \sum_n A_{m,n}\,\tilde{S}_n(\mu) \end{aligned}\tag{7}$$

Thus, we obtain:

$$\underbrace{\begin{bmatrix} \ddots & & \\ & A_{m,n} & \\ & & \ddots \end{bmatrix}}_{M\times N}\; \underbrace{\begin{bmatrix} \vdots \\ \tilde{S}_n \\ \vdots \end{bmatrix}}_{N\times 1} = \underbrace{\begin{bmatrix} \vdots \\ \tilde{R}_m \\ \vdots \end{bmatrix}}_{M\times 1}\tag{8}$$
where the matrix $A_{m,n}$, which is not known, can be expressed as the product of a known matrix and an unknown diagonal matrix of the $a_n$ coefficients:
$$\underbrace{\begin{bmatrix} \ddots & & \\ & A_{m,n} & \\ & & \ddots \end{bmatrix}}_{M\times N} = \underbrace{\begin{bmatrix} \ddots & & \\ & \exp(-2\pi i\, m\delta x\, n\delta\mu) & \\ & & \ddots \end{bmatrix}}_{M\times N}\; \underbrace{\begin{bmatrix} a_{-N/2} & & \\ & \ddots & \\ & & a_{N/2} \end{bmatrix}}_{N\times N}\tag{9}$$
The matrix of exponents is invertible since its rows are independent, and thus we obtain the desired reconstruction as:
$$\underbrace{\begin{bmatrix} a_{-N/2} & & \\ & \ddots & \\ & & a_{N/2} \end{bmatrix}}_{N\times N}\; \underbrace{\begin{bmatrix} \vdots \\ \tilde{S}_n \\ \vdots \end{bmatrix}}_{N\times 1} = \begin{bmatrix} \ddots & & \\ & \exp(-2\pi i\, m\delta x\, n\delta\mu) & \\ & & \ddots \end{bmatrix}^{-1}\begin{bmatrix} \vdots \\ \tilde{R}_m \\ \vdots \end{bmatrix}\tag{10}$$

The right side of Eq. (10) is known, while the left side is unknown. It is important to note that, although both sides of the equation appear to be N×1 vectors, they are actually matrices, since every component of the N×1 vector is itself a vector depending on the spectral coordinate µ. The case with the least redundancy, i.e. when we have the largest number of unknowns, occurs when δµ = δν. In that situation, the following relation is fulfilled:

$$\tilde{S}_n(k\delta\mu) = \tilde{S}_{n-1}\big((k-1)\,\delta\mu\big) + \tilde{S}_n\!\left(-\frac{N}{2}\delta\mu\right) - \tilde{S}_{n-1}\!\left(\frac{N}{2}\delta\mu\right)\tag{11}$$
where $\mu = k\delta\mu$, with k running from −N/2 to N/2, exactly as n.

Note that in order to make the matrix on the right-hand side of Eq. (10) invertible, it needs to be a square matrix (a condition that is accomplished, as its rows are independent and it is full rank). Thus, we will have M = N.
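
The known matrix of exponents in Eq. (10) can be built and inverted numerically. The following is a minimal sketch (our own illustration, not the authors' code), assuming M = N and the illustrative choice $\delta x\,\delta\mu = 1/N$, which makes the rows independent:

```python
import numpy as np

# Known exponent matrix A_exp[m, n] = exp(-2*pi*i * m*dx * n*dmu) for
# M = N captures; dx*dmu = 1/N is an assumed, illustrative value.
N = 8
dx_dmu = 1.0 / N
m = np.arange(N)[:, None]                 # capture index m = 0 .. N-1
n = np.arange(-N // 2, N // 2)[None, :]   # spectral order n = -N/2 .. N/2-1
A_exp = np.exp(-2j * np.pi * m * dx_dmu * n)

# Square with independent rows, hence invertible; its inverse maps the
# measured spectra R_m to the products a_n * S_n, as in Eq. (10).
A_inv = np.linalg.inv(A_exp)
```

With this particular choice of $\delta x\,\delta\mu$ the matrix is a (permuted) DFT matrix, so the inversion is well conditioned.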

The reconstruction algorithm, illustrated in Fig. 2, is as follows. Let us look at row number $i$: it represents the spectrum that was shifted by $\left(-\frac{N}{2}+i-1\right)\delta\mu$ and multiplied by $a_{-\frac{N}{2}+i-1}$, and of course the spectrum also has to be multiplied by the low-pass filter (LPF). Therefore, this row is $a_{-\frac{N}{2}+i-1}\,\tilde{S}\!\left(\mu - \left(-\frac{N}{2}+i-1\right)\delta\mu\right)\mathrm{rect}\!\left(\frac{\mu}{\Delta\mu}\right)$; in the same way, row $i+1$ is $a_{-\frac{N}{2}+i}\,\tilde{S}\!\left(\mu - \left(-\frac{N}{2}+i\right)\delta\mu\right)\mathrm{rect}\!\left(\frac{\mu}{\Delta\mu}\right)$. There are $\frac{\Delta\mu}{\delta\mu}$ elements that can pass the LPF, and because there is a difference of one $\delta\mu$ shift between two consecutive rows, each pair of consecutive rows in the matrix on the left side of Eq. (10) coincides, up to a constant multiplicative factor $a_i/a_{i+1}$, in $\frac{\Delta\mu}{\delta\mu}-1$ elements. We should also notice that, because of the shifting difference, the same spectrum elements appear at different places in each row. Each row has one extra, different component, and from the relation between the shared components we can obtain the relation between $a_i$ and $a_{i+1}$.


Fig. 2. Illustration of the process: (a) Object's spectrum samples and the mask's Fourier series components. (b) The convolution result between the object's spectrum and the mask's Fourier components, the low-pass filter, and image #0 i.e. without the mask shifting. (c) Illustration of the reconstruction process that can be done after Eq. (10).


One can see in Fig. 2(a) samples of the object's spectrum and the Fourier series components of the mask. Figure 2(b) shows the convolution result, i.e. Eq. (3). In order to obtain the image that our low-resolution system captures, we should add the low-pass filter, i.e. Eq. (4); in Fig. 2(c) we can see, in addition to the convolution result, the image spectrum that passed through our lens. The image is $R_0(\mu)$ because it was taken without spatial shifting of the mask. After capturing M low-resolution images, one can see in Fig. 2(c) the left-side matrix of Eq. (10). The elements in rectangles of the same color represent the same spectrum element, and the relation between $a_i$ and $a_{i+1}$ can be obtained from them.

Once the relation between $a_i$ and $a_{i+1}$ is known, one knows how to combine two rows and thus recover a wider spectrum. Let us look at the first two subplots of Fig. 2(c); they represent the first two rows of the left-side matrix of Eq. (10). The elements marked in orange represent the same spectrum element; from them we can obtain the relation between $a_{-2}$ and $a_{-1}$, and with this relation we can add the additional element of the second row to the first row. In the same way, we add the element of the third row to the second via the yellow connection, and after that add it to the first row via the orange connection, and so on.

It is important to note that each subplot in Fig. 2(c) has a different y-axis limit, depending on the corresponding Fourier component of the mask, so that it is easier to see that the elements represent the same spectrum value.

Let us look at a simple example to better understand how two consecutive rows allow us to expand the known spectrum by one element. Assume that $\frac{\Delta\mu}{\delta\mu} = 7$ and consider the row that represents the spectrum multiplied by $a_0$, i.e. the actual spectrum without shifting; for this example we will call it $S_{a_0}$. We also consider the next row, the one that represents the spectrum shifted by $\delta\mu$ and multiplied by $a_1$, which we call $S_{a_1}$, and we let $S(k)$ denote the samples of the original spectrum. Both rows have 7 elements: $S_{a_0} = a_0[S(-3), S(-2), \ldots, S(2), S(3)]$ and $S_{a_1} = a_1[S(-4), S(-3), \ldots, S(1), S(2)]$. We know, for example, that the first element of $S_{a_0}$ and the second element of $S_{a_1}$ both represent $S(-3)$, multiplied by a different coefficient in each row.
We can calculate $\frac{S_{a_0}(1)}{S_{a_1}(2)} = \frac{a_0 S(-3)}{a_1 S(-3)} = \frac{a_0}{a_1}$, which gives the relation between $a_0$ and $a_1$; in fact, $\frac{a_0}{a_1}$ can be calculated from any pair of values representing the same spectrum value. Now we can form the vector $\left[\frac{a_0}{a_1}S_{a_1}(1),\; S_{a_0}\right] = a_0[S(-4), S(-3), \ldots, S(2), S(3)] = \left[\frac{a_0}{a_1}S_{a_1},\; S_{a_0}(7)\right]$.
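
The two-row step of this example can be sketched numerically as follows (our own illustrative code; the 9-sample synthetic spectrum and the coefficient values are arbitrary, and the real $a_n$ are unknown):

```python
import numpy as np

# Two consecutive rows of the left-side matrix of Eq. (10), built from a
# synthetic 9-sample spectrum S(-4)..S(4) and arbitrary coefficients a0, a1.
rng = np.random.default_rng(0)
S = rng.normal(size=9) + 1j * rng.normal(size=9)  # samples S(-4) .. S(4)
a0, a1 = 1.0, 0.6 + 0.3j

S_a0 = a0 * S[1:8]   # unshifted row:          a0 * [S(-3) .. S(3)]
S_a1 = a1 * S[0:7]   # row shifted by delta-mu: a1 * [S(-4) .. S(2)]

# The six overlapping samples S(-3)..S(2) all give the same ratio a0/a1;
# averaging over the pairs improves robustness to noise.
ratio = np.mean(S_a0[:6] / S_a1[1:7])

# Prepend the one element unique to the a1 row, rescaled to the a0 row:
extended = np.concatenate(([ratio * S_a1[0]], S_a0))  # a0 * [S(-4) .. S(3)]
```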

We start the reconstruction process with the first and second rows. We assume that $a_{-N/2} = 1$; then we can obtain $a_{-N/2+1}$ from the relation between the vectors' elements as described above, and therefore we also obtain another component: the element that exists in the second row but not in the first. After that, we proceed with the relation between the second and third rows to obtain $a_{-N/2+2}$ and the component that exists in the third row but not in the second, and so on.

Eventually we obtain all coefficients $a_n$ (relative to $a_{-N/2}$, which was assumed to be 1) and all values of $\tilde{S}_n$.
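
The full stitching loop can be sketched under similar assumptions (our own illustration: each row holds a 7-element low-passed window of a synthetic spectrum weighted by an unknown coefficient; for simplicity, the shift direction is reversed with respect to the example above):

```python
import numpy as np

# Iteratively stitch the rows of the left side of Eq. (10), assuming the
# first coefficient equals 1. rows[i] = a[i] * S[i:i+7]: 7-element windows
# of a synthetic spectrum, consecutive rows overlapping in 6 elements.
rng = np.random.default_rng(1)
S = rng.normal(size=15) + 1j * rng.normal(size=15)  # synthetic spectrum
a = np.array([1.0, 0.8 - 0.2j, 1.3 + 0.1j, 0.5j])   # "unknown" coefficients
rows = [a[i] * S[i:i + 7] for i in range(4)]

spectrum = rows[0].copy()   # known part, scaled by a[0] (= 1 here)
scale = 1.0                 # running ratio a[0] / a[i]
for prev, cur in zip(rows[:-1], rows[1:]):
    # Overlapping samples give a[i] / a[i+1]; average for robustness.
    ratio = np.mean(prev[1:] / cur[:-1])
    scale *= ratio
    # Append the one new element of the current row, rescaled to a[0].
    spectrum = np.append(spectrum, scale * cur[-1])
```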

The effect of the unknown illuminating pattern on the method is that the minimum element size that can be seen in the reconstructed object is of the same order as the minimum element size of the mask. If, for example, the unknown pattern is speckle [24], we cannot know the exact pattern of the speckles that illuminate the object, but we can know its properties, i.e. the speckle spot size; then the minimum element size that can be seen in the object will be the size of the speckle spots.

Another point that we should address is the reconstruction time and the accuracy of the method. The most significant calculation that the method contains is the inversion of the matrix. After Eq. (10), the reconstruction is computationally insignificant, as each step consists of a small number of simple mathematical operations. The matrix of exponents can be known before the calculation, since the only values that determine it are δx and δµ, which should be known in advance; therefore, the reconstruction is fast. In addition, we should remember that the relation between two consecutive values of $a_n$ is obtained from pairs of different values; by averaging over different pairs, we can increase the accuracy of the value we are trying to recover.

3. Numerical simulations

We assume that the object we would like to improve is a random, space-limited object with N/4 $= 64$ pixels, while in space we have $N = 256$ pixels, as shown in Fig. 3(a). The low-resolution object is shown in Fig. 3(b), and the super-resolved reconstruction obtained when applying the proposed concept is shown in Fig. 3(c). One can see in Fig. 3(d) a comparison of the original signal with the low-resolution and super-resolved signals in the region where the original signal exists. The graph shows the absolute values of $\frac{L - O}{O}$ and $\frac{S - O}{O}$, where O is the original signal, L is the low-resolution signal, and S is the super-resolved signal. From this comparison, we can see that the super-resolved signal is much closer to the original than the low-resolution signal. We can also see the Gibbs phenomenon, created by not taking all the coefficients of the Fourier series that builds the original object.


Fig. 3. Left: Reconstruction in the space domain. (a). The original object, (b). The low-resolution object, (c). The super-resolved object. Right: (d) Comparison of signals.


The system was simulated as an aperture that behaves in the spectral domain as a band-pass filter, allowing only 23 pixels to pass, with all pixels outside the aperture set to zero. For the sake of clarity, the object spectrum is shown in Fig. 4(a), and the low-resolution object spectrum, after being low-passed by the aperture of the imaging lens, is shown in Fig. 4(b).


Fig. 4. Left: The normalized spectral distributions of: (a). The original object, (b). The low-resolution object, (c). The super-resolved object. Right: The normalized spectral distributions of: (d). The original code, (e). The reconstructed code.


The super-resolved spectrum can be seen in Fig. 4(c). The reconstruction was obtained following the mathematics of Section 2. The encoding function was a random, space-limited function with $N = 256$ pixels, whose Fourier coefficients are shown in Fig. 4(d). The reconstruction of the encoding spectrum is depicted in Fig. 4(e). All the shown coefficients were normalized against $a_0$.

In Fig. 5 we present a numerical simulation of the proposed concept for 2D objects. A USAF resolution target was used, as shown in Fig. 5(a). The low-resolution image is seen in Fig. 5(b), and the numerical reconstruction is shown in Fig. 5(c). The technique for two dimensions is to convert the image matrix into a single row, apply the required adjustments of the encoding Fourier coefficients, and then use the 1-D method.
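
The flattening step for the 2-D case can be sketched as follows (our own illustration; the additional adjustments of the encoding Fourier coefficients mentioned above are omitted):

```python
import numpy as np

# Flatten a 2-D image into a single row so that the 1-D reconstruction
# can be applied, then restore the shape afterwards.
img = np.arange(16.0).reshape(4, 4)
flat = img.reshape(-1)              # 1-D signal fed to the 1-D method
restored = flat.reshape(img.shape)  # reshaped back after reconstruction
```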


Fig. 5. Numerical 2D simulations. (a). The original image. (b). The low-resolution image. (c). The super-resolved reconstruction.


4. Experimental results

Our experimental setup includes a Genesis MX532-3 MTM OPSL laser with a wavelength of 532 nm, coupled into a Texas Instruments E4500MKII fiber-coupled DMD projector from EKB Technologies Ltd., driven by a computer connected to the projector with an HDMI cable. We projected a random 1D pattern onto a specific area of a USAF resolution target (Thorlabs model R3L3S1N) and imaged the product of the pattern and the target onto a Basler acA1920-25um CCD camera.

Two images of the setup can be seen in Fig. 6. On the left side one can see the projector that illuminates the target. The first lens is positioned to image the pattern from the projector onto the target, and the following aperture blocks the other diffraction orders of the projector illumination. After the target, a second lens and the camera capture images of the target as it is illuminated with the pattern through the projector. The system was designed so that a shift of one projector pixel corresponds to a shift of one camera pixel.


Fig. 6. Images of the experimental setup.


An imaging system with limited spatial frequency can be modelled as a 4-f system with two ideal lenses and a spatial aperture located at the focal plane of the first lens. In order to balance the minimum element size in the object, the size of the camera's pixels, and the size of the projector's pixels, we chose a group size in the USAF target that is big enough for our lens to resolve. Therefore, in order to leave the object unresolved, we would have needed a spatial aperture in the focal plane of the 4-f system smaller than we could achieve; i.e. the lens in our system can be treated as an ideal lens that does not affect the object's spectrum.

Therefore, we chose to limit the object's spectrum with a digital low-pass filter, which simulates the action of a poor-quality lens in the imaging system, i.e. the diffraction limit. In this situation, our overall system contains an ideal lens that images the object and a digital low-pass filter that simulates the spatial aperture placed in the focal plane of the 4-f system model.

The digital low-pass filter has a rect shape and simulates placing a spatial aperture in the Fourier plane of the 4-f system, yielding an NA-limited imaging lens.

We performed a Fourier transform on all of the images taken by the camera and multiplied each one of them by the low-pass filter.
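
This filtering step can be sketched in 1-D as follows (our own minimal illustration; the 23-pixel passband follows the simulation of Section 3, while the experiment applied the filter to camera frames):

```python
import numpy as np

# Digital rect-shaped low-pass filter applied in the Fourier domain,
# keeping only the `passband` central pixels of the shifted spectrum.
def lowpass(signal, passband=23):
    spec = np.fft.fftshift(np.fft.fft(signal))
    mask = np.zeros_like(spec)
    c = len(spec) // 2
    mask[c - passband // 2 : c + passband // 2 + 1] = 1
    return np.fft.ifft(np.fft.ifftshift(spec * mask))
```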

We took images of the USAF target in the specific area containing three lines with a resolution of 4 lp/mm. One can see the object on the left side of Fig. 7(a). The low-pass filter has a rect shape, as in Eq. (4), and the maximum resolution that can pass through the digital filter is $13/(1920\cdot 2.2)\cdot 10^3 \approx 3.08$ lp/mm; therefore, our system cannot resolve the three lines at the 4 lp/mm resolution the object contains. One can see the low-resolution object in Fig. 7(b), whose spectrum is depicted in Fig. 7(e).


Fig. 7. Experimental results. Left: Spatial reconstruction. (a). The original object, (b). The low-resolution object, (c). The super-resolved object. Right: Spectral reconstruction. (d). The original object's spectrum, (e). The low-resolution spectrum, (f). The super-resolved spectrum.


By projecting a random 1D encoding pattern, spatially shifting it, and applying the method we introduced, we were able to reconstruct the three lines of the object at 4 lp/mm. One can see the three lines in the super-resolved object in Fig. 7(c). Additionally, one can see the significant weight of the 4 lp/mm frequency coefficient in the spectrum of the super-resolved object in Fig. 7(f).

5. Conclusions

In this paper we presented a new super-resolving approach, based on time multiplexing, to overcome the diffraction limit. The novelty of this paper lies in the ability to improve the spatial resolution even without knowing the mask projected onto the object. This could, for example, assist in super-resolving through scattering media (such as biological tissue), since in such situations the projected pattern is changed by the scattering medium and is thus unknown.

Funding

Ministerio de Economía, Industria y Competitividad, Gobierno de España (FIS2017-89748-P).

Disclosures

The authors declare no conflicts of interest.

References

1. I. J. Cox and C. J. R. Sheppard, “Information capacity and resolution in an optical system,” J. Opt. Soc. Am. A 3(8), 1152–1158 (1986). [CrossRef]  

2. D. Mendlovic and A. W. Lohmann, “Space–bandwidth product adaptation and its application to superresolution: fundamentals,” J. Opt. Soc. Am. A 14(3), 558–562 (1997). [CrossRef]  

3. G. Toraldo di Francia, “Degrees of Freedom of an Image,” J. Opt. Soc. Am. 59(7), 799–804 (1969). [CrossRef]  

4. Z. Zalevsky, D. Mendlovic, and A. W. Lohmann, “Optical system with improved resolving power,” Prog. Opt. 40, 271–341 (2000). [CrossRef]  

5. M. Françon, “Amélioration de résolution d’optique,” Nuovo Cimento 9(S3), 283–287 (1952). [CrossRef]  

6. W. Lukosz, “Optical Systems with Resolving Powers Exceeding the Classical Limit,” J. Opt. Soc. Am. 56(11), 1463–1471 (1966). [CrossRef]  

7. A. I. Kartashev, “Optical systems with enhanced resolving power,” Opt. Spektrosk. 9, 204–206 (1960).

8. J. García, V. Micó, D. Cojoc, and Z. Zalevsky, “Full field of view super-resolution imaging based on two static gratings and white light illumination,” Appl. Opt. 47(17), 3080–3087 (2008). [CrossRef]  

9. A. M. Weiner, J. P. Heritage, and E. M. Kirschner, “High-resolution femtosecond pulse shaping,” J. Opt. Soc. Am. B 5(8), 1563–1572 (1988). [CrossRef]  

10. J. D. Armitage, A. W. Lohmann, and D. P. Parish, “Superresolution image forming systems for objects with restricted lambda dependence,” Jpn. J. Appl. Phys. 3(S1), 273–275 (1965). [CrossRef]  

11. W. Lukosz and A. Bachl, “Experiments on superresolution imaging of a reduced object field,” J. Opt. Soc. Am. 57(2), 163–169 (1967). [CrossRef]  

12. Z. Zalevsky, D. Mendlovic, and A. W. Lohmann, “Super resolution optical systems using fixed gratings,” Opt. Commun. 163(1-3), 79–85 (1999). [CrossRef]  

13. M. A. Grimm and A. W. Lohmann, “Superresolution Image for One-Dimensional Objects,” J. Opt. Soc. Am. 56(9), 1151–1156 (1966). [CrossRef]  

14. E. Abbe, “Beiträge zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung,” Arch. Mikrosk. Anat. 9(1), 413–468 (1873). [CrossRef]  

15. H. O. Bartelt and A. W. Lohmann, “Optical processing of 1-D signals,” Opt. Commun. 42(2), 87–91 (1982). [CrossRef]  

16. Z. Zalevsky, V. Eckhouse, N. Konforti, A. Shemer, D. Mendlovic, and J. Garcia, “Super resolving optical system based on spectral dilation,” Opt. Commun. 241(1-3), 43–50 (2004). [CrossRef]  

17. T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, “Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI),” Proc. Natl. Acad. Sci. U. S. A. 106(52), 22287–22292 (2009). [CrossRef]  

18. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

19. B. Huang, H. Babcock, and X. Zhuang, “Breaking the diffraction barrier: Super-resolution imaging of cells,” Cell 143(7), 1047–1058 (2010). [CrossRef]  

20. R. Henriques, C. Griffiths, E. H. Rego, and M. M. Mhlanga, “PALM and STORM: unlocking live-cell super-resolution,” Biopolymers 95(5), 322–331 (2011). [CrossRef]  

21. C. Ventalon and J. Mertz, “Dynamic speckle illumination microscopy with translated versus randomized speckle patterns,” Opt. Express 14(16), 7198–7209 (2006). [CrossRef]  

22. J. Min, J. Jang, D. Keum, S.-W. Ryu, C. Choi, K.-H. Jeong, and J. C. Ye, “Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery,” Sci. Rep. 3(1), 2075 (2013). [CrossRef]  

23. J. García, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13(16), 6073–6078 (2005). [CrossRef]  

24. O. Wagner, A. Schwarz, A. Shemer, C. Ferreira, J. García, and Z. Zalevsky, “Super Resolved Imaging Based upon Wavelength Multiplexing of Projected Unknown Speckle Patterns,” Appl. Opt. 54(13), D51–D60 (2015). [CrossRef]  

Figures (7)

Fig. 1. Schematic sketch used for the mathematical derivation.

Fig. 2. Illustration of the process: (a) The object's spectrum samples and the mask's Fourier series components. (b) The convolution result between the object's spectrum and the mask's Fourier components, the low-pass filter, and image #0, i.e., without the mask shifting. (c) Illustration of the reconstruction process that can be done after Eq. (10).

Fig. 3. Left: reconstruction in the space domain. (a) The original object. (b) The low-resolution object. (c) The super-resolved object. Right: (d) comparison of the signals.

Fig. 4. Left: the normalized spectral distributions of (a) the original object, (b) the low-resolution object, and (c) the super-resolved object. Right: the normalized spectral distributions of (d) the original code and (e) the reconstructed code.

Fig. 5. Numerical 2-D simulations. (a) The original image. (b) The low-resolution image. (c) The super-resolved reconstruction.

Fig. 6. Images of the experimental setup.

Fig. 7. Experimental results. Left: spatial reconstruction. (a) The original object. (b) The low-resolution object. (c) The super-resolved object. Right: spectral reconstruction. (d) The original object's spectrum. (e) The low-resolution spectrum. (f) The super-resolved spectrum.

Equations (11)


$$\delta\nu = \frac{1}{\Delta x}\tag{1}$$

$$R_m(x) = S(x)\,E(x - m\,\delta x),\tag{2}$$

$$\tilde{R}_m(\mu) = \tilde{S}(\mu) \ast \left\{ \exp\left(2\pi i\, m\,\delta x\, \mu\right) \sum_n a_n\, \delta(\mu - n\,\delta\mu) \right\}\tag{3}$$

$$\tilde{R}_m(\mu) = \left( \sum_{n=-N/2}^{+N/2} a_n \exp\left(2\pi i\, m\,\delta x\, n\,\delta\mu\right) \tilde{S}(\mu - n\,\delta\mu) \right) \operatorname{rect}\left(\frac{\mu}{\Delta\mu}\right)\tag{4}$$

$$N = \eta\,\frac{\Delta\mu}{\delta\mu}\tag{5}$$

$$M = \eta\,\frac{\Delta\mu\left(\frac{1}{\delta\nu} + \frac{1}{\delta\mu}\right)}{\Delta\mu/\delta\nu} = \eta\left(1 + \frac{\delta\nu}{\delta\mu}\right)\tag{6}$$

$$\tilde{S}_n(\mu) = \tilde{S}(\mu - n\,\delta\mu)\operatorname{rect}\left(\frac{\mu}{\Delta\mu}\right),\qquad A_{m,n} = a_n \exp\left(2\pi i\, m\,\delta x\, n\,\delta\mu\right),\qquad \tilde{R}_m(\mu) = \sum_n A_{m,n}\,\tilde{S}_n(\mu)\tag{7}$$

$$\left[A_{m,n}\right]_{M\times N} \left[\tilde{S}_n\right]_{N\times 1} = \left[\tilde{R}_m\right]_{M\times 1}\tag{8}$$

$$\left[A_{m,n}\right]_{M\times N} = \left[\exp\left(2\pi i\, m\,\delta x\, n\,\delta\mu\right)\right]_{M\times N} \begin{bmatrix} a_1 & & 0 \\ & \ddots & \\ 0 & & a_N \end{bmatrix}_{N\times N}\tag{9}$$

$$\begin{bmatrix} a_{-N/2} & & 0 \\ & \ddots & \\ 0 & & a_{N/2} \end{bmatrix}_{N\times N} \left[\tilde{S}_n\right]_{N\times 1} = \left[\exp\left(2\pi i\, m\,\delta x\, n\,\delta\mu\right)\right]^{-1} \left[\tilde{R}_m\right]\tag{10}$$

$$\tilde{S}_n(k\,\delta\mu) = \tilde{S}_{n-1}\big((k-1)\,\delta\mu\big) + \tilde{S}_n\!\left(\tfrac{N}{2}\,\delta\mu\right) - \tilde{S}_{n-1}\!\left(\tfrac{N}{2}\,\delta\mu\right)\tag{11}$$
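The linear system of Eqs. (8)–(10) makes the decoding step explicit: each captured spectrum is a known linear mixture of the shifted object spectra, so the latter are recovered by inverting the mixing matrix. The following is a minimal numerical sketch of that step. The sizes, the shift product, and the mask coefficients are all illustrative assumptions; the coefficients are treated as known here purely to demonstrate the matrix inversion, whereas the method of this paper avoids exactly that requirement.

```python
# Sketch of the matrix decoding of Eqs. (8)-(10): each captured spectrum
# R_m is a linear mixture of the shifted object spectra S_n, mixed by
# A[m, n] = a_n * exp(2*pi*i * m*delta_x * n*delta_mu); the S_n are then
# recovered with a pseudo-inverse. All sizes/coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N = 5             # number of mask harmonics (spectral replicas), assumed
M = 8             # number of mask shifts (captured images), M >= N
K = 64            # samples per band-limited spectrum window
dx_dmu = 1.0 / M  # product delta_x * delta_mu, chosen so shifts span a period

n = np.arange(N) - N // 2                         # harmonic indices -N/2 .. N/2
a = rng.normal(size=N) + 1j * rng.normal(size=N)  # mask Fourier coefficients a_n

# Mixing matrix of Eq. (9): phase matrix times diag(a_n)
m = np.arange(M)[:, None]
A = a * np.exp(2j * np.pi * m * dx_dmu * n)       # shape (M, N)

# Ground-truth shifted spectra S_n(mu), one row per replica
S = rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))

R = A @ S                      # forward model of Eq. (8): R_m = sum_n A[m,n] S_n
S_hat = np.linalg.pinv(A) @ R  # least-squares decoding, as in Eq. (10)

print(np.allclose(S_hat, S))
```

With more shifts than harmonics (M > N) the pseudo-inverse solves the system in the least-squares sense, which also averages measurement noise over the redundant captures.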